Thursday, September 13, 2012

IIS 6.0 Tuning for Performance by Peter A. Bromberg

In the Patterns and Practices Group's "Improving .NET Application Performance and Scalability", which is available in full text online and as a PDF download from Microsoft, as well as in softcover through MSPress and major booksellers, there are over 1,000 pages and appendixes of detailed information about how to improve .NET application performance and scalability, written by the top experts in the business. One area that is both little understood and potentially confusing is the tuning of Internet Information Services 6.0.
The skinny about all this is that the PAP group says the default settings shipped with IIS and the .NET Framework should be changed. They provide detailed information in pages 332 through 342 in Chapter 6 on ASP.NET, and they provide even more information in Chapter 17. I'll summarize some of the more important points here, since I know that, human nature being what it is, most people running IIS and reading this article probably have not waded through this lengthy but excellent publication. Once you see the quality of the information you can get from it, it may encourage you to do so; it is an investment of your time as a professional ASP.NET developer that I highly recommend. The fact that the book is made available free online by Microsoft should not in any way diminish its importance or value to developers who are interested in achieving the absolute best performance and scalability from their .NET Applications.
NOTE:  This "helper" article focuses almost totally on the IIS-related issues and settings. However, Chapter 6 and additional information in the various checklists and in Chapter 17 address many other issues that are related to, but do not specifically involve IIS 6.0 settings. Some of these can be addressed at the machine.config level, others are "best practices" coding techniques, and some can be addressed in web.config. Paragraphs marked "Discussion" are my individual comments. The rest is (mostly) untouched snippets from the PAP publication itself.
First, let's examine some of the "reduce contention" formula settings. All this information, and a lot more, is right in the book:


Formula for Reducing Contention

The formula for reducing contention can give you a good empirical start for tuning the ASP.NET thread pool. Consider using the Microsoft product group-recommended settings that are shown in Table 6.1 if the following conditions are true:
  • You have available CPU.
  • Your application performs I/O bound operations such as calling a Web method or accessing the file system.
  • The ASP.NET Applications/Requests In Application Queue performance counter indicates that you have queued requests.
Table 6.1: Recommended Threading Settings for Reducing Contention
Configuration setting        Default value (.NET 1.1)    Recommended value
maxconnection                2                           12 * #CPUs
maxIoThreads                 20                          100
maxWorkerThreads             20                          100
minFreeThreads               8                           88 * #CPUs
minLocalRequestFreeThreads   4                           76 * #CPUs
To address this issue, you need to configure the following items in the Machine.config file. Apply the recommended changes that are described in the following section, across the settings and not in isolation. For a detailed description of each of these settings, see "Thread Pool Attributes" in Chapter 17, "Tuning .NET Application Performance."
  • Set maxconnection to 12 * # of CPUs. This setting controls the maximum number of outgoing HTTP connections that you can initiate from a client. In this case, ASP.NET is the client.
  • Set maxIoThreads to 100. This setting controls the maximum number of I/O threads in the .NET thread pool. This number is automatically multiplied by the number of available CPUs.
  • Set maxWorkerThreads to 100. This setting controls the maximum number of worker threads in the thread pool. This number is then automatically multiplied by the number of available CPUs.
  • Set minFreeThreads to 88 * # of CPUs. This setting is used by the worker process to queue all the incoming requests if the number of available threads in the thread pool falls below the value for this setting. This setting effectively limits the number of requests that can run concurrently to maxWorkerThreads - minFreeThreads, which works out to 12 concurrent requests per CPU (assuming maxWorkerThreads is 100).
  • Set minLocalRequestFreeThreads to 76 * # of CPUs. This setting is used by the worker process to queue requests from localhost (where a Web application sends requests to a local Web service) if the number of available threads in the thread pool falls below this number. This setting is similar to minFreeThreads, but it only applies to localhost requests from the local computer.
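Taken together, these values land in the machine.config elements shown below. This is only a sketch for a single-CPU box (the per-CPU values are already multiplied out; attributes not relevant to this tuning are omitted):

```xml
<configuration>
  <system.net>
    <connectionManagement>
      <!-- maxconnection = 12 * #CPUs; single CPU assumed here -->
      <add address="*" maxconnection="12" />
    </connectionManagement>
  </system.net>
  <system.web>
    <!-- these two are automatically multiplied by the CPU count -->
    <processModel maxWorkerThreads="100" maxIoThreads="100" />
    <!-- minFreeThreads = 88 * #CPUs, minLocalRequestFreeThreads = 76 * #CPUs -->
    <httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />
  </system.web>
</configuration>
```

On a multi-CPU server you would scale maxconnection, minFreeThreads, and minLocalRequestFreeThreads by the CPU count yourself; the thread maximums are scaled for you.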
Discussion: The proviso above indicates that these settings should be used when your application has I/O bound operations and the ASP.NET Applications/Requests In Application Queue perfcounter indicates you have queued requests. However, I have found that settings approaching those indicated can improve performance even on ASP.NET apps that do not exhibit these conditions. I recommend using the "Homer" web stress tool from at least one remote machine (and preferably more than one machine, with the supplied ASP controller page), or Microsoft's ACT (Application Center Test) tool, to throw a good solid load at your app, and carefully measuring the performance statistics under both the default and the recommended settings. In particular, pay close attention to the requests per second and time-to-last-byte readings. This baseline testing scenario provides the basis for further tuning if it is necessary, and it doesn't take long at all. You can only improve something if you have metrics, and the way you get the metrics is to take the time to get them! With testing software such as that mentioned here, you can easily script all kinds of "user paths" through your ASP.NET application and get the important baseline metrics you need. One more thing -- rule number 1 of software testing and debugging:
"When you are going to change something, ONLY CHANGE ONE THING AT A TIME!" Test it, get the metrics, and only then proceed.

Kernel Mode Caching

If you deploy your application on Windows Server 2003, ASP.NET pages automatically benefit from the IIS 6.0 kernel cache. The kernel cache is managed by the HTTP.sys kernel-mode device driver. This driver handles all HTTP requests. Kernel mode caching may produce significant performance gains because requests for cached responses are served without switching to user mode.
The following default setting in the Machine.config file ensures that dynamically generated ASP.NET pages can use kernel mode caching, subject to the requirements listed below.
<httpRuntime enableKernelOutputCache="true" . . ./>
Dynamically generated ASP.NET pages are automatically cached subject to the following restrictions:
  • Pages must be retrieved by using HTTP GET requests. Responses to HTTP POST requests are not cached in the kernel.
  • Query strings are ignored when responses are cached. That is, once the response for http://contoso.com/myapp.aspx?id=1234 is cached in the kernel, all subsequent requests for http://contoso.com/myapp.aspx are served from the cache, regardless of the query string.
  • Pages must have an expiration policy. In other words, the pages must have an Expires header.
  • Pages must not have VaryByParams .
  • Pages must not have VaryByHeaders .
  • The page must not have security restrictions. In other words, the request must be anonymous and not require authentication. The HTTP.sys driver only caches anonymous responses.
  • There must be no filters configured for the W3wp.exe file instance that are unaware of the kernel cache.
Discussion: The enableKernelOutputCache="true" attribute IS NOT present in the default machine.config "httpRuntime" element. Since it is not present, we should be able to expect that the default value of "true" applies automatically. Personally, I feel better explicitly putting the attribute in there and setting it to "true". As an aside, I have found that it is ALWAYS a good idea to KEEP A BACKUP COPY of your machine.config stored somewhere safe.
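To satisfy the expiration-policy and VaryByParams restrictions above, the simplest route is a page-level OutputCache directive. A minimal example (the 60-second Duration is just an illustrative value):

```aspx
<%@ OutputCache Duration="60" VaryByParam="none" %>
```

Duration gives the page the Expires policy the kernel cache requires, and VaryByParam="none" keeps the response eligible; any other VaryByParam value would disqualify it from HTTP.sys caching.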

Tuning the Thread Pool for Burst Load Scenarios

If your application experiences unusually high loads of users in small bursts (for example, 1000 clients all logging in at 9 A.M.), your system may be unable to handle the burst load. Consider setting minWorkerThreads and minIOThreads as specified in Knowledge Base article 810259, "FIX: SetMinThreads and GetMinThreads API Added to Common Language Runtime ThreadPool Class," at http://support.microsoft.com/default.aspx?scid=kb;en-us;810259.
Discussion: The .NET ThreadPool is somewhat limited in its flexibility, and is specifically limited to one instance per process, since it is static. If you have ASP.NET applications that specifically need to run background thread processing, you may wish to investigate using a custom thread pool class. I have used Ami Bar's SmartThreadPool with great success, and have even modified it to provide a ThreadPriority overload. You can have more than one instance of this pool, and each can be custom configured. This type of approach provides maximum flexibility while simultaneously permitting individual thread pool tuning of critical resources.

Tuning the Thread Pool When Calling COM Objects

ASP.NET Web pages that call single-threaded apartment (STA) COM objects should use the ASPCOMPAT attribute. The use of this attribute ensures that the call is executed using a thread from the STA thread pool. However, all calls to an individual COM object must be executed on the same thread. As a result, the thread count for the process can increase during periods of high load. You can monitor the number of active threads used in the ASP.NET worker process by viewing the Process:Thread Count (aspnet_wp instance) performance counter.
The thread count value is higher for an application when you are using the ASPCOMPAT attribute compared to when you are not using it. When tuning the thread pool for scenarios where your application extensively uses STA COM components and the ASPCOMPAT attribute, you should ensure that the total thread count for the worker process does not exceed the following value.
75 + ((maxWorkerThreads + maxIoThreads) * #CPUs * 2)

Evaluating the Change

To determine whether the formula for reducing contention has worked, look for improved throughput. Specifically, look for the following improvements:
  • CPU utilization increases.
  • Throughput increases according to the ASP.NET Applications\Requests/Sec performance counter.
  • Requests in the application queue decrease according to the ASP.NET Applications\Requests In Application Queue performance counter.
If this change does not improve your scenario, you may have a CPU-bound scenario. In a CPU-bound scenario, adding more threads may increase thread context switching, further degrading performance.
When tuning the thread pool, monitor the Process\Thread Count (aspnet_wp) performance counter. This value should not be more than the following.
75 + ((maxWorkerThreads + maxIoThreads) * #CPUs)
If you are using AspCompat, then this value should not be more than the following.
75 + ((maxWorkerThreads + maxIoThreads) * #CPUs * 2)
Values beyond this maximum tend to increase processor context switching.
Discussion: There is a long list of attention items that revolve around, and are tightly woven into, the IIS tuning issue for ASP.NET application tuning and scalability. These include, but are not limited to, the following:
  • Improving page response times.
  • Designing scalable Web applications.
  • Using server controls efficiently.
  • Using efficient caching strategies.
  • Analyzing and applying appropriate state management techniques.
  • Minimizing view state impact.
  • Improving performance without impacting security.
  • Minimizing COM interop scalability issues.
  • Optimizing threading.
  • Optimizing resource management.
  • Avoiding common data binding mistakes.
  • Using security settings to reduce server load.
  • Avoiding common deployment mistakes.
You can find detailed treatment of most of these issues in Chapter 6 of the above-captioned publication.
I hope this brief synopsis of IIS tuning parameters is useful to you. Once again, I strongly recommend reading all this in the bigger context of the book, and mapping out an optimization plan that includes code review, refactoring, and optimization tuning at both the ASP.NET application and IIS webserver levels. One of the great things about the lessons learned from IIS / ASP.NET testing and tuning optimizations is that they can be carried forward to new applications and will improve your skills and value as a professional developer. I spent nearly three weeks at the Microsoft Testing Lab in Charlotte, NC under the tutelage of Dennis Bass and his fine crew, and the lessons learned there were invaluable. If this book had been available then, I might not have needed to spend so many nights in hotel rooms.

Wednesday, September 12, 2012

Daemon and rstatd daemon

In Unix and other multitasking computer operating systems, a daemon  is a computer program that runs as a background process, rather than being under the direct control of an interactive user. Typically daemon names end with the letter d: for example, syslogd is the daemon that implements the system logging facility and sshd is a daemon that services incoming SSH connections.

In a Unix environment, the parent process of a daemon is often, but not always, the init process. A daemon is usually created by a process forking a child process and then immediately exiting, thus causing init to adopt the child process. In addition, a daemon or the operating system typically must perform other operations, such as dissociating the process from any controlling terminal (tty). Such procedures are often implemented in various convenience routines such as daemon(3) in Unix.

Systems often start daemons at boot time: they often serve the function of responding to network requests, hardware activity, or other programs by performing some task. Daemons can also configure hardware (like udevd on some GNU/Linux systems), run scheduled tasks (like cron), and perform a variety of other tasks.

Daemon is sometimes said to stand for Disk And Execution MONitor, although that expansion is a backronym. A daemon is a long-running background process that answers requests for services. The term originated with Unix, but most operating systems use daemons in some form or another. In Windows NT, 2000, and XP, for example, daemons are called "services". In Unix, the names of daemons conventionally end in "d". Some examples include inetd, httpd, nfsd, sshd, named, and lpd.

rstatd Daemon

Purpose

Returns performance statistics obtained from the kernel.

Syntax

/usr/sbin/rpc.rstatd

Description

The rstatd daemon is a server that returns performance statistics obtained from the kernel. The rstatd daemon is normally started by the inetd daemon.

Files

/etc/inetd.conf     TCP/IP configuration file that starts RPC daemons and other TCP/IP daemons.
/etc/services     Contains an entry for each server available through the Internet.

Friday, September 7, 2012

The Differences Between Thick, Thin & Smart Clients

When implementing a client/server architecture you need to determine if it will be the client or the server that handles the bulk of the workload. By client, we mean the application that runs on a PC or workstation and relies on a server to perform some operations.
 

A great starting point to discuss the nature of the underlying differences would be to start with an example of thick and thin based on an operating system and the applications. For example, a terminal or Java-based client would be considered a thin client whereas one running Microsoft Windows would be considered a thick client.

One major inconsistency when describing thick and thin is that the hardware may be thin — but the applications or software running may be thick. While that doesn't seem to make much sense, if you think of the division between thick vs. thin starting at the operating system level, rather than at the CPU, it's logical.
Thick vs. Thin Client Applications

A thin client machine is going to communicate with a central processing server, meaning there is little hardware and software installed on the user's machine. At times, thin may be defined as simply not needing the software or operating system installed on the user machine. This allows all end users' systems to be centrally managed and software deployed on a central server location as opposed to installed on each individual system.

Thin clients are really best-suited to environments in which the same information is going to be accessed by the clients, making it a better solution for public environments. For this reason, thin clients are often deployed in hotels and airports, where installing software to all systems wouldn't make sense. It would be a massive headache for IT to both deploy and maintain.

Compared to today's feature-rich desktop PCs, thin clients often tend to look a bit primitive and outdated. Since many thin clients run on very little hardware, it is impossible to incorporate rich graphical user interfaces. An input device (keyboard) and a viewing device (display) are usually the basic requirements to use the client. Some may not even require a mouse.

In contrast, a thick client will provide users with more features, graphics and choices, making the applications more customizable. Unlike thin clients, thick clients do not rely on a central processing server because the processing is done locally on the user system, and the server is accessed primarily for storage purposes. For that reason, thick clients often are not well-suited for public environments. To maintain a thick client, IT needs to maintain all systems for software deployment and upgrades, rather than just maintaining the applications on the server. Additionally, thick clients often require operating-system-specific applications, again posing more work and limitations for deployment. The trade-off is a more robust and local computing environment.
Looking Towards Smart Clients

Over the past few years, the industry has started to move towards smart clients, also called rich clients. The trend is a move from traditional client/server architecture to a Web-based model. More similar to a fat client than to a thin client, smart clients are Internet-connected devices that allow a user's local applications to interact with server-based applications through the use of Web services.

For example, a smart client running a word processing application can interface with a remote database over the Internet in order to collect data from the database to be used in the word processing document.

Smart clients support offline work. That is, they can work with data even when they are not connected to the Internet (which distinguishes them from browser-based applications, which do not work when the device is not connected). Smart client applications can be deployed and updated in real time over the network from a centralized server; they support multiple platforms and languages because they are built on Web services, and they can run on almost any device that has Internet connectivity, including desktops, workstations, notebooks, tablet PCs, PDAs, and mobile phones. Smart clients offer rich GUIs, and overall development and maintenance costs are higher than those of thin clients, for example.

On the downside, smart clients require users to install or deploy a runtime library (routines that are bound to the program during execution). For example, if the client is Windows-, Java- or Flash-based, you need to have that runtime on the user machine. Smart clients are most often contrasted with Web browser clients (or browser-based applications).
Be sure to check our previous discussion on Thick and Thin in terms of hardware.


Key Terms for Understanding Client/Server Architecture:

client
The client part of a client-server architecture. Typically, a client is an application that runs on a personal computer or workstation and relies on a server to perform some operations.

server
A computer or device on a network that manages network resources. Servers are often dedicated, meaning that they perform no other tasks besides their server tasks.

client/server architecture
A network architecture in which each computer or process on the network is either a client or a server.

The Differences Between Thick & Thin Client Hardware

In the world of client/server architecture, you need to determine if it will be the client or the server that handles the bulk of the workload. By client, we mean the application that runs on a personal computer or workstation and relies on a server to perform some operations.

Thick and thin client architectures are actually quite similar. In both cases, you can think of the client application running on a PC whose function is to send and receive data over the network to the server program. The server would normally communicate that information to the middle-tier software (the backend), which retrieves and stores that information in a database.

While they share similarities, there are many differences between thick and thin clients. Thick and thin are the terms used to refer to the hardware (e.g., how a PC communicates with the server), but the terms are also used to describe applications. While this article deals specifically with hardware issues, be sure to check back as we will continue our Thick and Thin discussion as related to applications.
Thin Clients

A thin client is designed to be especially small so that the bulk of the data processing occurs on the server. Although the term thin client often refers to software, it is increasingly used for computers, such as network computers and Net PCs, that are designed to serve as the clients of client/server architectures. A thin client is a network computer without a hard disk drive. It acts as a simple terminal to the server and requires constant communication with the server as well.

Thin clients provide a desktop experience in environments where the end user has a well-defined and regular number of tasks for which the system is used. Thin clients can be found in medical offices, airline ticketing, schools, governments, manufacturing plants and even call centers. Along with being easy to install, thin clients also offer a lower total cost of ownership over thick clients.
Thick Clients

In contrast, a thick client (also called a fat client) is one that will perform the bulk of the processing in client/server applications. With thick clients, there is no need for continuous server communication, as the client mainly communicates archival storage information to the server. As in the case of a thin client, the term is often used to refer to software, but again is also used to describe the networked computer itself. If your applications require multimedia components or are bandwidth-intensive, you'll also want to consider going with thick clients. One of the biggest advantages of thick clients rests in the nature of some operating systems and software being unable to run on thin clients; thick clients can handle these because they have their own resources.
Thick vs. Thin - A Quick Comparison
Thin Clients

- Easy to deploy as they require no extra or specialized software installation

- Need to validate with the server after data capture

- If the server goes down, data collection is halted, as the client needs constant communication with the server

- Cannot be interfaced with other equipment (in plants or factory settings, for example)

- Clients run only and exactly as specified by the server

- More downtime

- Portability, in that all applications are on the server, so any workstation can access them

- Opportunity to use older, outdated PCs as clients

- Reduced security threat

JVM

Acronym for Java Virtual Machine. An abstract computing machine, or virtual machine, JVM is a platform-independent execution environment that converts Java bytecode into machine language and executes it. Most programming languages compile source code directly into machine code that is designed to run on a specific microprocessor architecture or operating system, such as Windows or UNIX. A JVM -- a machine within a machine -- mimics a real Java processor, enabling Java bytecode to be executed as actions or operating system calls on any processor regardless of the operating system. For example, establishing a socket connection from a workstation to a remote machine involves an operating system call. Since different operating systems handle sockets in different ways, the JVM translates the programming code so that the two machines that may be on different platforms are able to connect. 

A JVM consists of the following components:

1) Byte-code verifier: verifies the bytecode, checking it for unusual or illegal code.

2) Class loader: after verification, loads the bytecode into memory for execution.

3) Execution engine, which further consists of two parts:
a) Interpreter: interprets the bytecode and runs it.
b) JIT (just-in-time) compiler.
The JVM's HotSpot logic determines when to use the interpreter and when to use the JIT.

4) Garbage collector: periodically checks the heap for objects that are no longer referenced, so it can collect that garbage from the heap.

5) Security manager: constantly monitors the code. It is the second level of security (the first level is the byte-code verifier).

Thursday, September 6, 2012

Top 10 SQL Server Counters for Monitoring SQL Server Performance

Do you have a list of SQL Server Counters you review when monitoring your SQL Server environment? Counters allow you a method to measure current performance, as well as performance over time. Identifying the metrics you like to use to measure SQL Server performance and collecting them over time gives you a quick and easy way to identify SQL Server problems, as well as graph your performance trend over time.
Below is my top 10 list of SQL Server counters in no particular order. For each counter I have described what it is, and in some cases I have described the ideal value of these counters. This list should give you a starting point for developing the metrics you want to use to measure database performance in your SQL Server environment.

1. SQLServer: Buffer Manager: Buffer cache hit ratio

The buffer cache hit ratio counter represents how often SQL Server is able to find data pages in its buffer cache when a query needs a data page. The higher this number the better, because it means SQL Server was able to get data for queries out of memory instead of reading from disk. You want this number to be as close to 100 as possible. Having this counter at 100 means that 100% of the time SQL Server has found the needed data pages in memory. A low buffer cache hit ratio could indicate a memory problem.

2. SQLServer: Buffer Manager: Page life expectancy

The page life expectancy counter measures how long pages stay in the buffer cache in seconds. The longer a page stays in memory, the more likely SQL Server will not need to read from disk to resolve a query. You should watch this counter over time to determine a baseline for what is normal in your database environment. Some say anything below 300 (or 5 minutes) means you might need additional memory.

3. SQLServer: SQL Statistics: Batch Requests/Sec

Batch Requests/Sec measures the number of batches SQL Server is receiving per second. This counter is a good indicator of how much activity is being processed by your SQL Server box. The higher the number, the more queries are being executed on your box. Like many counters, there is no single number that can be used universally to indicate your machine is too busy. Today’s machines are getting more and more powerful all the time and therefore can process more batch requests per second. You should review this counter over time to determine a baseline number for your environment.

4. SQLServer: SQL Statistics: SQL Compilations/Sec

SQL Compilations/Sec measures the number of times SQL Server compiles an execution plan per second. Compiling an execution plan is a resource-intensive operation. Compilations/Sec should be compared with the number of Batch Requests/Sec to get an indication of whether or not compilations might be hurting your performance. To do that, divide the number of batch requests by the number of compiles per second to give you a ratio of the number of batches executed per compile. Ideally you want to have one compile per every 10 batch requests.

5. SQLServer: SQL Statistics: SQL Re-Compilations/Sec

When an execution plan is invalidated due to some significant event, SQL Server will re-compile it. The Re-Compilations/Sec counter measures the number of times a re-compile event was triggered per second. Re-compiles, like compiles, are expensive operations, so you want to minimize the number of re-compiles. Ideally you want to keep this counter at less than 10% of the number of Compilations/Sec.

6. SQLServer: General Statistics: User Connections

The user connections counter identifies the number of different users that are connected to SQL Server at the time the sample was taken. You need to watch this counter over time to understand your baseline user connection numbers. Once you have some idea of your high and low water marks during normal usage of your system, you can then look for times when this counter exceeds the high and low marks. If the value of this counter goes down and the load on the system is the same, then you might have a bottleneck that is not allowing your server to handle the normal load. Keep in mind though that this counter value might go down just because less people are using your SQL Server instance.

7. SQLServer: Locks: Lock Waits / Sec: _Total

In order for SQL Server to manage concurrent users on the system, SQL Server needs to lock resources from time to time. The lock waits per second counter tracks the number of times per second that SQL Server is not able to retain a lock right away for a resource. Ideally you don't want any request to wait for a lock. Therefore you want to keep this counter at zero, or close to zero at all times.

8. SQLServer: Access Methods: Page Splits / Sec

This counter measures the number of times SQL Server had to split a page when updating or inserting data per second. Page splits are expensive, and cause your table to perform more poorly due to fragmentation. Therefore, the fewer page splits you have the better your system will perform. Ideally this counter should be less than 20% of the batch requests per second.

9. SQLServer: General Statistics: Processes Blocked

The processes blocked counter identifies the number of blocked processes. When one process is blocking another process, the blocked process cannot move forward with its execution plan until the resource that is causing it to wait is freed up. Ideally you don't want to see any blocked processes. When processes are being blocked you should investigate.

10. SQLServer: Buffer Manager: Checkpoint Pages / Sec

The checkpoint pages per second counter measures the number of pages written to disk by a checkpoint operation. You should watch this counter over time to establish a baseline for your systems. Once a baseline value has been established you can watch this value to see if it is climbing. If this counter is climbing, it might mean you are running into memory pressures that are causing dirty pages to be flushed to disk more frequently than normal.

Function to replace a string in LoadRunner

Here is a function you can use to replace a string.

As you know, C has no built-in function to replace one substring with another, so it has to be done with a hand-written function, and one is given below.

There are plenty of hand-written replace functions around, but not all of them work reliably in LoadRunner; even when they do run, they can produce irrelevant output or errors. The following code works under all the conditions I have tried in LoadRunner, and it is especially useful for those who receive responses containing special characters in binary or hexadecimal form.

For example:
If you receive an address in a response as "37\x20WALNUT\x20DRIVE\x2C\x20LARNE\x2C\x20BT40\x202WQ" but need to send it as "37 WALNUT DRIVE, LARNE, BT40 2WQ", you have to convert all the hexadecimal escapes into their normal form. If you know what the escapes stand for ("\x20" is a space and "\x2C" is a comma), you can simply replace them in the string.

Similarly, if a date comes back as "2012\x2D02\x2D09" and the format you need to pass is "2012-02-09", converting it with the usual C string-replace approaches in LoadRunner is a headache. In plain C code, if you assign a literal like "2012\x2D02\x2D09" to a variable, the compiler converts the hexadecimal escapes to "2012-02-09" for you, with no replace function needed. That does not happen with LoadRunner response data: the response is stored in a parameter as a literal string, and assigning it to another variable does not convert the escapes.

That is why the function below is very useful for anyone who needs to replace hexadecimal escapes in a response string with the characters they want.

Note:
This will not work with the plain 'C Vuser' protocol, but it does work in the 'HTTP/HTML' protocol, because we are using the web protocol function web_convert_param(). To use this code with other protocols, some slight changes are required. The function can also be optimized further depending on your needs.

Here is the Function:

char *strReplace(const char *src, const char *from, const char *to)
{
    char *value;
    char *dst;
    char *match;
    int size;
    int fromlen;
    int tolen;

    /* Start with enough room for the source string and its terminator */
    size = strlen(src) + 1;
    fromlen = strlen(from);
    tolen = strlen(to);

    value = (char *)malloc(size);
    dst = value;

    if (value != NULL)
    {
        for (;;)
        {
            /* Find the next occurrence of the search string */
            match = (char *)strstr(src, from);
            if (match != NULL)
            {
                size_t count = match - src;
                char *temp;

                /* Grow (or shrink) the buffer by the size difference */
                size += tolen - fromlen;
                temp = (char *)realloc(value, size);
                if (temp == NULL)
                {
                    free(value);
                    return NULL;
                }

                /* realloc may have moved the buffer; re-base dst */
                dst = temp + (dst - value);
                value = temp;

                /* Copy everything up to the match, then the replacement */
                memmove(dst, src, count);
                src += count;
                dst += count;

                memmove(dst, to, tolen);
                src += fromlen;
                dst += tolen;
            }
            else
            {
                /* No more matches: copy the tail and stop */
                strcpy(dst, src);
                break;
            }
        }
    }
    return value;
}

int lr_replace(const char *lrparam, char *findstr, char *replacestr)
{
    int res = 0;
    char *result_str;
    char lrp[1024];

    /* Wrap the parameter name in braces so lr_eval_string can expand it */
    sprintf(lrp, "{%s}", lrparam);

    result_str = strReplace(lr_eval_string(lrp), findstr, replacestr);
    if (result_str != NULL)
    {
        /* Store the replaced text back into the same parameter */
        lr_save_string(result_str, lrparam);
        free(result_str);
        res = 1;
    }
    return res;
}

Action()
{
    char *SetUpDt;

    SetUpDt = lr_eval_string("{ParamSetUpDt}");
    lr_output_message("The Original String = %s", SetUpDt);

    lr_save_string(SetUpDt, "MyPar");
    web_convert_param("MyPar",
                      "SourceEncoding=PLAIN",
                      "TargetEncoding=URL",
                      LAST);

    lr_output_message("The Converted String = %s", lr_eval_string("{MyPar}"));

    lr_replace("MyPar", "%5Cx2D", "-");

    lr_output_message("The Replaced String = %s", lr_eval_string("{MyPar}"));

    return 0;
}

Output:
The Original String = 2012\x2D02\x2D09
The Converted String = 2012%5Cx2D02%5Cx2D09
The Replaced String = 2012-02-09

Monday, September 3, 2012

Transaction Names as Numbers in Loadrunner Analysis

Problem statement  
While generating the analysis report, the transaction summary shows numbers instead of transaction names. This issue is seen mostly when the test is run in Performance Center.
The possible reasons for this behaviour are:
  • Version conflict between the Controller and the load injector.
  • The map file was missing on one of the load generators.
Analysis of the load test run showed that the load test had exceeded its timeslot and ended with the error "Executing run failed to stop in a timely manner". Furthermore, no load test results were collated for this load test. As a consequence, result collation was performed manually by setting the collator status to "before collating results" and then collating the results.

      Solution

Install the correct version of the load generator on all the LG machines.

In order to resolve this problem where no map file is used and all information is written to the .eve file, revert to the old way of writing the results using these steps:

a) Close all instances of the Controller and make sure no test is running.
b) On the Controller machine, search for and find the file "Wlrun7.ini".
c) Back up the file to preserve the original version.
d) In the file, go to the [GENERAL] section and add the line EveVersion=2, then save and close the file.
e) Launch the Controller and run the load test. If this is a Controller machine in a Performance Center implementation, perform this operation on every Controller machine that runs load tests.
f) Open the results with Analysis; all the transactions should be present.
This problem is fully resolved in LoadRunner 11 and Performance Center 11, as information is written to the MAP file during the load test itself and not only at the end of the test.
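After step d), the [GENERAL] section of Wlrun7.ini would look something like this (any existing keys in the section, not shown here, are left untouched):

```ini
[GENERAL]
; Revert to the old (version 2) .eve result format so that transaction
; names are written to the results instead of map-file numbers
EveVersion=2
```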

      Alternate Workaround

Try the following workaround so that Analysis will display names rather than numbers under transaction names in the Analysis report.
1. Open a new Analysis session and change the option under Tools ---> Options ---> Result Collection: choose "Generate summary data only."
The default is "Display summary while generating complete data."
2. Then go to File ---> Open and change the "Type of File" to "LoadRunner Results." The default is "Analysis Session Files."
Choose the .lrr file and click "Open".
3. The Analysis report will be generated. Save the .lra file.
Open the .lra file and check the Summary report for transaction names.