What is software optimization?
Program optimization or software optimization is the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources. In general, a computer program may be optimized so that it executes more rapidly, operates with less memory or other resources, or draws less power.
To perform software optimization, we first need to identify the problem areas in the software that are not working efficiently. This means analyzing the program across various aspects, such as the memory used by particular instructions and the time taken by function calls or code segments. This analysis tells you which improvements your application actually requires.
The process that measures, for example, memory usage, usage of particular instructions, or the duration of function calls is called profiling (also program profiling or software profiling). The tool used for profiling is called a profiler.
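To make the idea concrete, here is a minimal sketch using Python's built-in cProfile module. The functions `build_list` and `total` are purely illustrative stand-ins for the "problem areas" a profiler would surface:

```python
import cProfile
import io
import pstats


def build_list(n):
    # Simulate a memory-hungry operation.
    return [i * i for i in range(n)]


def total(n):
    # Simulate a CPU-bound operation that calls build_list.
    return sum(build_list(n))


profiler = cProfile.Profile()
profiler.enable()
result = total(100_000)
profiler.disable()

# Summarize time spent per function, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

The report ranks functions by cumulative time, which is exactly the kind of data used to decide where optimization effort should go.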
The profilers available for .NET include CLR Profiler and the Perfmon.exe tool. Similarly, different profilers are available for Java, mainframes, and SQL Server.
Similarly, to profile Windows Azure Storage, we use Storage Analytics. In other words, Windows Azure Storage Analytics offers profiler-like capabilities for Windows Azure Storage.
What Storage Analytics Does –
Windows Azure Storage Analytics performs logging and provides metrics data for a storage account. This data can then be used for profiling: analyzing usage trends, tracing requests, and diagnosing issues with the storage account.
Logging is about recording each call that is made to storage.
Logging can be enabled only at the service level, not at the storage account level. For example, logging can be enabled for the Blob, Queue, or Table service individually, but not for the whole storage account at once. The log information is stored in the Blob service under a container named $logs. A storage administrator can read and delete logs but cannot create or update them. Also, the $logs container itself cannot be deleted; only its contents can be deleted.
Information captured for authenticated requests – The following types of authenticated requests are logged:
• Successful requests
• Failed requests, including timeout, throttling, network, authorization, and other errors
• Requests to analytics data
Information captured for anonymous requests – The following types of anonymous requests are logged:
• Successful requests
• Server errors
• Timeout errors for both client and server
• Failed GET requests with error code 304 (Not Modified)
All other failed requests are not logged. The Storage Analytics log format is documented here - http://msdn.microsoft.com/en-us/library/windowsazure/hh343259.aspx
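Each log entry is a single semicolon-delimited line. As a sketch, the snippet below parses only the first few fields of an entry (version, start time, operation, status, HTTP code, and latencies); the sample line is fabricated for illustration, so treat the field order as an assumption to verify against the log format documentation linked above:

```python
# Sketch: parse the leading fields of a Storage Analytics log entry.
# Entries are semicolon-delimited; only the first few fields are
# extracted here. The sample line below is fabricated for illustration.

SAMPLE_LOG_LINE = (
    "1.0;2011-08-09T18:52:40.9241789Z;GetBlob;AnonymousSuccess;200;18;10"
    ";anonymous;;myaccount;blob"
)


def parse_log_entry(line):
    fields = line.split(";")
    return {
        "version": fields[0],
        "request_start_time": fields[1],
        "operation_type": fields[2],
        "request_status": fields[3],
        "http_status_code": int(fields[4]),
        "end_to_end_latency_ms": int(fields[5]),
        "server_latency_ms": int(fields[6]),
    }


entry = parse_log_entry(SAMPLE_LOG_LINE)
print(entry["operation_type"], entry["http_status_code"])
```

A parser like this is the first step toward the trend analysis and request tracing mentioned earlier: once entries are dictionaries, they can be filtered and aggregated with ordinary code.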
What is Metric Data?
Metric data is any reading measured on at least an interval scale, as opposed to non-metric data, which is nominal or ordinal.
For example, weight, height, distance, revenue, and cost are all measured on interval scales or above, so they are metric data.
On the other hand, satisfaction ratings, Yes/No responses, Male/Female readings, etc., are non-metric data.
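The distinction matters because only metric data supports arithmetic summaries such as averages, while non-metric data supports only counts and modes. A quick illustration (the sample values are made up):

```python
from collections import Counter
from statistics import mean

# Metric data (interval scale or above): arithmetic is meaningful.
latencies_ms = [18, 10, 25, 12, 19]
average_latency = mean(latencies_ms)  # a meaningful average

# Non-metric data (nominal): only counts and modes are meaningful;
# an "average" of Yes/No responses would be nonsense.
responses = ["Yes", "No", "Yes", "Yes", "No"]
response_counts = Counter(responses)

print(average_latency)
print(response_counts.most_common(1))
```

This is why Storage Analytics can report averages and percentiles for latencies and byte counts, but only counts for categorical things like error types.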
Storage Analytics Metrics –
The metrics data can be categorized as:
Capacity: Provides information about the storage capacity consumed by the Blob service, the number of containers, and the total number of objects stored by the service. This data is updated daily, and it provides separate capacity information for data stored by the user and data stored for $logs.
Requests: Provides summary information about requests executed against the service – the total number of requests, total ingress/egress, server latency, total number of failures by category, etc., at an hourly granularity. The summary is provided at the service level, and aggregates are also provided at the API level for the APIs used during that hour. This data is available for all three services – Blobs, Tables, and Queues.
Storage Analytics Metrics stores information about Transaction Statistics and Capacity Data.
Transaction Metrics –
Transaction metrics consist of data related to transactions occurring over a storage service, including ingress and egress of data, availability, and error information.
Transaction data is recorded at two levels – the service level and the API level.
At the service level, statistics summarizing all requested API operations are written to a table entity even if no requests were made to the service. At the API level, statistics are written to an entity only if that particular operation was requested during the period. Storage Analytics Metrics creates three tables for transaction data – one per service:
• $MetricsTransactionsBlob
• $MetricsTransactionsTable
• $MetricsTransactionsQueue
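The service-level versus API-level split can be sketched by aggregating a batch of request records in memory. All field names below are hypothetical and chosen for illustration, not taken from the actual metrics schema:

```python
from collections import defaultdict

# Hypothetical request records; field names are illustrative only.
requests = [
    {"api": "GetBlob", "success": True, "egress_bytes": 1024},
    {"api": "GetBlob", "success": False, "egress_bytes": 0},
    {"api": "PutBlob", "success": True, "egress_bytes": 0},
]

# Service-level summary: one row covering ALL operations, written
# every period even if the request list were empty.
service_summary = {
    "total_requests": len(requests),
    "total_failures": sum(not r["success"] for r in requests),
    "total_egress": sum(r["egress_bytes"] for r in requests),
}

# API-level summaries: one row per API, created only for APIs that
# were actually called during the period.
api_summaries = defaultdict(lambda: {"total_requests": 0, "total_failures": 0})
for r in requests:
    entry = api_summaries[r["api"]]
    entry["total_requests"] += 1
    entry["total_failures"] += int(not r["success"])

print(service_summary)
print(dict(api_summaries))
```

Note that an API never called in the period (say, DeleteBlob) simply never gets an entry, mirroring the rule that API-level statistics are written only for requested operations.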
What is capacity data?
Capacity data shows how much storage a system is consuming and its ability to meet changing demands. In storage terminology, capacity means the amount of storage space, measured in bytes, being used.
Capacity Metrics –
Currently, capacity metrics are available only for the Blob service. Capacity data is recorded daily for a storage account’s Blob service, and two table entities are written. One entity provides statistics for user data, and the other provides statistics about the $logs blob container used by Storage Analytics. The $MetricsCapacityBlob table includes the following statistics:
· Capacity: The amount of storage used by the storage account’s Blob service, in bytes.
· ContainerCount: The number of blob containers in the storage account’s Blob service.
· ObjectCount: The number of committed and uncommitted block or page blobs in the storage account’s Blob service.
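As a sketch, one of these daily entities can be pictured as a simple record holding those three statistics. The values below are fabricated for illustration; only the three property names come from the description above:

```python
# Fabricated example of a daily capacity entity for the Blob service.
capacity_entity = {
    "Capacity": 5 * 1024 ** 3,  # bytes used by the Blob service (5 GiB here)
    "ContainerCount": 4,        # blob containers in the account
    "ObjectCount": 1250,        # committed + uncommitted blobs
}


def capacity_in_gib(entity):
    # Convert the raw byte count to GiB for human-readable reporting.
    return entity["Capacity"] / 1024 ** 3


print(capacity_in_gib(capacity_entity))
```

Since Capacity is reported in raw bytes, a small conversion like this is typically the first thing a monitoring dashboard does with the value.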
The metrics family of tables and the $logs container are simply regular tables and containers within your storage account, so you will be charged for their use.
You may find this interesting –