Assumptions -- this is based on my experience with a web-based Java application. Stress and load testing will be covered in a separate post.
When you start off on a project, it is very easy to overlook performance testing and its significance.
I tried my best to sell the idea of performance testing to the business but failed initially. The lesson I learned was to talk in the language of the stakeholders. Performance testing is not something we do only at the end: by monitoring application performance at regular intervals, it is easier to analyse and fix the code, and it teaches us to avoid the same mistakes in the future. This way we avoid big-bang performance testing and fixing right before the release. Stakeholders need to be told about these advantages of continuous performance testing.
There are two broad types of performance testing, namely "front end" and "back end".
Front end performance deals with how different browsers respond to the scripts. It analyses the page load / wait times of the different components of the page, which translates into user behaviour: statistics suggest that on average a user is happy to wait around four seconds for a page to load, and beyond ten seconds you start losing potential customers.
Back end performance deals with how the different services and servers cope with the load. Database performance also falls under this area.
We can run performance tests at different levels, namely application, service and unit level. I treat these tests much like my functional tests, and as with automated tests, the higher the level, the harder it becomes to analyse the problem.
Different tools allow you to run these tests, and the choice may also vary based on the programming language you are using.
For example, JProfiler can be used to monitor the JVM and the application at a lower level (classes and methods), or dotTrace in the .NET world.
One level up, we can test the application at a service / API level, perhaps using tools like JMeter to generate the load (a minimal load-generation sketch follows below).
And at the top level we hit the application through the web interface, or even headless for starters. I have used BrowserMob (Neustar), or VSTS if you are in the .NET world.
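For illustration, here is a minimal load-generation sketch in plain Java, assuming a Java 11+ HttpClient and a hypothetical endpoint URL; it simply fires concurrent requests and records timings.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class SimpleLoadGenerator {

    public static void main(String[] args) throws InterruptedException {
        // Hypothetical service endpoint on the performance environment.
        URI target = URI.create("http://perf-env.example.com/api/products");
        int concurrentUsers = 20;
        int requestsPerUser = 50;

        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        AtomicLong totalMillis = new AtomicLong();
        AtomicLong completed = new AtomicLong();

        for (int u = 0; u < concurrentUsers; u++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    HttpRequest request = HttpRequest.newBuilder(target).GET().build();
                    long start = System.nanoTime();
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                        totalMillis.addAndGet((System.nanoTime() - start) / 1_000_000);
                        completed.incrementAndGet();
                    } catch (Exception e) {
                        // A failed request is also useful data during a load run.
                        System.err.println("Request failed: " + e.getMessage());
                    }
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        System.out.printf("Completed %d requests, average response time %d ms%n",
                completed.get(), totalMillis.get() / Math.max(1, completed.get()));
    }
}
```

A throwaway harness like this is only useful for a quick sanity check of a single service; proper load generation is better left to tools such as JMeter or a cloud service.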
Running the performance tests from your local machine limits you to the performance of that machine itself, so it is recommended to run them in the cloud. You can fire off the test scripts to run against different build agents or virtual machines.
We need to make sure the performance test environments are configured to be production-like. It is a common belief that a production-like environment is very expensive; an alternative is to fire up virtual machines. This worked for me, as in some cases the production environments were also hosted on virtual machines.
It is highly beneficial to have the performance test environments on independent infrastructure. If they are in a shared environment, then we need to analyse how much and how often the other systems will impact the performance of the application.
Because our application was hosted on virtual machines, we had the option to scale the machines and services up and observe the performance improvements.
We had to make sure that the performance test scripts mimicked actual user behaviour. For application-level tests we compared statistics between the beta and the legacy systems, which helped us configure our tests accordingly. We ran these tests from BrowserMob (Neustar), but monitoring was done through internally built tools using Graphite and Gdash.
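As an illustration of how a test run can feed such dashboards, here is a minimal sketch that pushes a single data point to Graphite over its plaintext (Carbon) protocol; the host, port and metric name below are assumptions for the example, not our actual setup.

```java
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.time.Instant;

public class GraphiteReporter {

    // Assumed host; 2003 is Graphite's default plaintext (Carbon) port.
    private static final String GRAPHITE_HOST = "graphite.internal.example.com";
    private static final int GRAPHITE_PORT = 2003;

    /** Sends one data point using Graphite's plaintext protocol: "path value timestamp\n". */
    public static void report(String metricPath, double value) throws IOException {
        long epochSeconds = Instant.now().getEpochSecond();
        String line = String.format("%s %f %d\n", metricPath, value, epochSeconds);
        try (Socket socket = new Socket(GRAPHITE_HOST, GRAPHITE_PORT);
             Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8)) {
            out.write(line);
            out.flush();
        }
    }

    public static void main(String[] args) throws IOException {
        // Example: record the response time of one sampled request (hypothetical metric name).
        report("perf.beta.checkout.response_time_ms", 420.0);
    }
}
```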
What and how you monitor plays a significant role.
From the front end perspective, the things you typically need to monitor are response time (how long it takes to serve a request) and throughput (how many requests can be served per second). If you have a legacy system, you can compare these statistics from production against the stats on the performance environment.
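To make those two metrics concrete, here is a small sketch that summarises a run from a list of recorded per-request timings; the percentile choices are illustrative rather than the exact statistics we tracked.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ThroughputStats {

    /**
     * Summarises a run from per-request response times (ms) and the
     * wall-clock duration of the whole run (seconds).
     */
    public static void summarise(List<Long> responseTimesMs, long runDurationSeconds) {
        if (responseTimesMs.isEmpty() || runDurationSeconds <= 0) {
            System.out.println("Nothing to report.");
            return;
        }
        List<Long> sorted = new ArrayList<>(responseTimesMs);
        Collections.sort(sorted);

        int count = sorted.size();
        double throughput = (double) count / runDurationSeconds;   // requests served per second
        long median = sorted.get(count / 2);                       // typical response time
        long p95 = sorted.get((int) Math.min(count - 1, Math.floor(count * 0.95)));

        System.out.printf("requests=%d  throughput=%.1f req/s  median=%d ms  p95=%d ms%n",
                count, throughput, median, p95);
    }
}
```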
Ideally the monitoring tools should sit as close to the application servers as possible. If the monitoring tools are too far away in space and time, you will not get accurate real-time statistics; but if the tools are in the same environment as the system under test, then they too may affect the performance of that environment.
Common issues are memory leaks and the CPU reaching its limits. So from the back end perspective, it is helpful to monitor CPU usage and memory for the various machines and services.
Ideally you should not include network lag in your metrics: there is only so much you can do about it, and it can hide the actual problems in the system under test. Hence run the tests from cloud servers located closest to your environments.
Once you have run the performance tests and captured the results, it is time to tackle the problems. Finding a bottleneck and fixing it may result in another bottleneck surfacing.
While performance testing, change one thing at a time. For example, try not to change the test script and the application version at the same time; if you then see a discrepancy, you will not know whether the application version or the script caused it. Hence it is a good idea to create a baseline for every change you make and take baby steps towards the end result.
One good practice is to run automated performance test scripts as part of your Continuous Integration pipeline. Taking this approach, we set a performance threshold beyond which the build would fail. This gives us fast feedback, just like any other functional tests.
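As a sketch of what such a gate can look like, the snippet below reads the measured numbers and exits non-zero when they fall outside agreed limits, which is enough to fail most CI builds; the class name and threshold values are hypothetical.

```java
public class PerformanceGate {

    // Hypothetical thresholds agreed with the team; tune them per application.
    private static final double MAX_P95_RESPONSE_MS = 800.0;
    private static final double MIN_THROUGHPUT_RPS = 50.0;

    /**
     * Called at the end of the CI performance stage with two arguments:
     * the measured p95 response time (ms) and the measured throughput (req/s).
     * A non-zero exit status fails the build, giving the same fast feedback
     * as a failing functional test.
     */
    public static void main(String[] args) {
        double measuredP95Ms = Double.parseDouble(args[0]);
        double measuredThroughputRps = Double.parseDouble(args[1]);

        boolean withinLimits = measuredP95Ms <= MAX_P95_RESPONSE_MS
                && measuredThroughputRps >= MIN_THROUGHPUT_RPS;

        if (!withinLimits) {
            System.err.printf("Performance regression: p95=%.0f ms (limit %.0f), throughput=%.1f req/s (minimum %.1f)%n",
                    measuredP95Ms, MAX_P95_RESPONSE_MS, measuredThroughputRps, MIN_THROUGHPUT_RPS);
            System.exit(1);   // non-zero exit code fails the CI build
        }
        System.out.println("Performance within agreed thresholds.");
    }
}
```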
Some lessons learned:
1) One thing that helped increase the performance of the system is caching. We should have some level of intelligent caching; too much caching will not help either, as users would start seeing stale information. Caching can be considered at three different levels: the Content Delivery Network (CDN), the application, or even the service / database level (a minimal cache sketch follows after this list).
While coding, we should make sure that static content is cached at some level. In our case the static content was being served from one of the services, which was eating up all of its memory.
2) Services and databases should not be doing background tasks when their primary purpose is to serve the customer. Having separate instances of these services handle the background tasks reduces the risk of performance problems.
3) Sometimes database queries can be expensive too. Monitoring these queries and trying to minimise the calls to the database is an effective way of increasing system performance.
4) There may be sections of a page that make extra calls to the back end systems to retrieve additional information, even though not all end users need it. It is better to analyse these cases and wrap those sections so that the additional calls are made only when a user genuinely wants that information.
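Coming back to the caching point in lesson 1, here is a minimal sketch of a simple time-based (TTL) cache; it is illustrative only and not the caching layer we actually used.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

/**
 * A deliberately small time-based cache: an entry is reloaded once it is
 * older than the configured TTL, so users never see data that is too stale.
 */
public class TtlCache<K, V> {

    private static final class Entry<T> {
        final T value;
        final long loadedAtMillis;

        Entry(T value, long loadedAtMillis) {
            this.value = value;
            this.loadedAtMillis = loadedAtMillis;
        }
    }

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    /** Returns the cached value, loading it via the supplier if missing or expired. */
    public V get(K key, Supplier<V> loader) {
        long now = System.currentTimeMillis();
        Entry<V> cached = entries.get(key);
        if (cached == null || now - cached.loadedAtMillis > ttlMillis) {
            V fresh = loader.get();   // e.g. read the static content from disk or a downstream service
            entries.put(key, new Entry<>(fresh, now));
            return fresh;
        }
        return cached.value;
    }
}
```

A service could wrap its static-content lookups in something like cache.get(path, () -> loadFromDisk(path)) so repeated requests stop hitting the expensive source while still refreshing after the TTL.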
Note: BrowserMob has its own API through which we can write the performance test scripts. We can write two types of scripts: one which runs the test through a browser, and the other headless. The costs differ based on which one you run.
P.S. It is tricky to write performance tests that involve sessions and logins.