Performance testing is the process of determining the speed or effectiveness of a computer, network, software program, or device. In general, it determines how a system performs in terms of responsiveness and stability under a particular workload, and it can also investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage. The process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. Qualitative attributes such as reliability, scalability, and interoperability may also be evaluated. Performance testing is often done in conjunction with stress testing.
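To make the quantitative side of this concrete, here is a minimal sketch in Python of measuring response time for a web endpoint. The URL, the sample count, and the timeout are illustrative assumptions, not part of any particular tool; a real measurement effort would normally use a dedicated performance-testing tool rather than a hand-rolled script.

    import time
    import urllib.request

    URL = "https://example.com/"  # placeholder; point at the system under test

    def measure_response_time(url):
        """Return the wall-clock time, in seconds, for one GET request."""
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()  # read the full body so transfer time is included
        return time.perf_counter() - start

    if __name__ == "__main__":
        samples = [measure_response_time(URL) for _ in range(10)]
        print("average response time: %.3f s" % (sum(samples) / len(samples)))
        print("worst response time:   %.3f s" % max(samples))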
Performance testing can verify that a system meets the specifications claimed by its manufacturer or vendor. The process can also compare two or more devices or programs in terms of parameters such as speed, data transfer rate, bandwidth, throughput, and efficiency.
Performance testing is commonly used synonymously with load testing, although professionals' opinions differ on the subtle distinctions between the two. Performance testing is very important because there is a direct correlation between fast, stable web applications and the revenue they generate. Shoppers, researchers, and just about any other type of user today will not tolerate errors, nor will they wait for a page that takes more than five seconds to load. Performance = profit.
Scalability of the system is also a primary consideration: future performance matters for websites because the nature of web linking and the Slashdot Effect can generate an enormous volume of users in a short period of time.
Several related terms are used in performance testing, and Wikipedia explains the differences between them well. The definitions below are adapted from Wikipedia:
1. Load testing: the process of putting demand on a system or device and measuring its response (a minimal load-test sketch appears after this list). There is little agreement on what the specific goals of load testing are, and the term is often used synonymously with software performance testing, reliability testing, and volume testing.
2. Performance testing: used in the computer industry to determine the speed or effectiveness of a computer, network, software program, or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. Qualitative attributes such as reliability, scalability, and interoperability may also be evaluated. Performance testing is often done in conjunction with stress testing.
3. Web testing: software testing that focuses on web applications, and one of the fastest-growing areas of software testing. Thoroughly testing a system before it goes live is the primary way to confirm that an entire web application works properly. It can address issues such as whether your web server is ready for the traffic you expect and for a growing number of users (load testing), whether the system can survive a massive spike in user traffic, whether your server hardware is sufficient, and so on.
4. Stress testing: tests that put a greater emphasis on robustness, availability, and error handling under heavy load than on what would be considered correct behavior under normal circumstances. In particular, the goals of such tests may be to ensure the software doesn't crash under insufficient computational resources (such as memory or disk space), unusually high concurrency, or denial-of-service attacks.
5. Endurance testing: usually done to determine whether the application can sustain the expected load continuously. During endurance tests, memory utilization is monitored to detect potential leaks. Also important, but often overlooked, is performance degradation: ensuring that throughput and/or response times after a long period of sustained activity are as good as, or better than, at the beginning of the test.
6. Soak testing: testing a system with a significant load over an extended period of time to discover how it behaves under sustained use (see the soak-test sketch after this list). For example, a system may behave exactly as expected when tested for 1 hour, but when tested for 3 hours, problems such as memory leaks can cause it to fail or behave erratically.
7. Spike testing: done by suddenly spiking the number of users and observing the application's behavior, to see whether it goes down or can handle dramatic changes in load (see the spike-test sketch after this list).
8. Configuration testing: another variation on traditional performance testing. Rather than testing performance from the perspective of load, you test the effects of configuration changes in the application landscape on application performance and behaviour. A common example is experimenting with different methods of load balancing.
9. Isolation testing: repeating a test execution that resulted in an application problem, often in order to isolate and confirm the fault domain.
10. Reliability testing: often consists of conducting a test on an item (under specified load and conditions) to determine the time it takes for a failure to occur. Forcing failures also allows analysis of the mode of failure for possible corrective actions.
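As promised in item 1, here is a minimal load-test sketch in Python: it puts a fixed demand on an HTTP endpoint with a pool of concurrent workers and reports throughput, errors, and response times. The URL, the number of virtual users, and the number of requests are assumptions for illustration only; real load tests are normally run with a dedicated tool such as JMeter, Gatling, or Locust rather than a script like this.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://example.com/"   # placeholder; point at the system under test
    USERS = 20                     # concurrent virtual users (assumed value)
    REQUESTS_PER_USER = 50         # requests each user sends (assumed value)

    def one_request(_):
        """Send one GET request and return (elapsed_seconds, ok_flag)."""
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
            ok = True
        except Exception:
            ok = False
        return time.perf_counter() - start, ok

    if __name__ == "__main__":
        total = USERS * REQUESTS_PER_USER
        wall_start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=USERS) as pool:
            results = list(pool.map(one_request, range(total)))
        wall = time.perf_counter() - wall_start

        times = sorted(t for t, ok in results if ok)
        errors = sum(1 for _, ok in results if not ok)
        print("throughput: %.1f requests/s" % (len(times) / wall))
        print("errors: %d of %d" % (errors, total))
        if times:
            print("median response time: %.3f s" % times[len(times) // 2])
            print("95th percentile:      %.3f s" % times[int(len(times) * 0.95) - 1])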
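The soak-test sketch referenced in items 5 and 6 follows the same idea, but keeps a steady load running and compares successive time windows; if latency or throughput degrades from the first window to the last, something (often a leak) is eroding performance over time. The duration here is compressed to ten minutes purely for illustration; real endurance and soak tests run for hours or days, with server-side memory utilization monitored in parallel.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://example.com/"   # placeholder endpoint
    USERS = 10                     # steady concurrency (assumed value)
    DURATION_S = 10 * 60           # shortened soak length; real runs last hours or days
    WINDOW_S = 60                  # report latency per one-minute window

    def one_request(_):
        """Send one GET request; return its latency in seconds, or None on error."""
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
            return time.perf_counter() - start
        except Exception:
            return None

    if __name__ == "__main__":
        end = time.time() + DURATION_S
        window = 0
        while time.time() < end:
            window += 1
            window_end = time.time() + WINDOW_S
            latencies = []
            with ThreadPoolExecutor(max_workers=USERS) as pool:
                while time.time() < window_end:
                    batch = pool.map(one_request, range(USERS))
                    latencies.extend(t for t in batch if t is not None)
            if latencies:
                print("window %d: %d requests, mean latency %.3f s"
                      % (window, len(latencies), sum(latencies) / len(latencies)))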
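Finally, the spike-test sketch referenced in item 7: the same request function is driven with a baseline level of concurrency, then a sudden spike, then a return to baseline, and the error rate and throughput of each phase are compared. The phase names, user counts, and request counts are illustrative assumptions, not prescribed values.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://example.com/"  # placeholder endpoint
    PHASES = [("baseline", 5), ("spike", 100), ("recovery", 5)]  # (name, concurrent users)
    REQUESTS_PER_PHASE = 200      # assumed value

    def one_request(_):
        """Send one GET request and report whether it succeeded."""
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
            return True
        except Exception:
            return False

    if __name__ == "__main__":
        for name, users in PHASES:
            start = time.perf_counter()
            with ThreadPoolExecutor(max_workers=users) as pool:
                outcomes = list(pool.map(one_request, range(REQUESTS_PER_PHASE)))
            wall = time.perf_counter() - start
            errors = outcomes.count(False)
            print("%-8s users=%3d  errors=%3d/%d  throughput=%.1f req/s"
                  % (name, users, errors, REQUESTS_PER_PHASE, len(outcomes) / wall))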