
Java EE – Load & Performance – Testing and reporting using open source tools – Part 1 – Concept, examples and sample reports


The purpose of this article is to present an alternative to commercial tools used for load testing and reporting. It reflects a solution I have built using open source tools such as:

  • JMeter
  • MySQL
  • Pentaho Data Integration
  • Jasper Reports – Jasper Server – IReport
  • Shell Scripting
  • Java native tools

I will cover all topics, from generating the load, storing the test results in a readable format, transforming and importing them into the database, and generating the load and performance reports, to integrating the load and performance tests into continuous integration systems.

To make this easier to read and understand, I will start the article by looking at the final output, the reports. Let’s have a look at the possible load and performance reports we might need.

I will split the report into its parts and describe them before putting them back together and presenting the final report. Although I have tried to make the charts as readable as possible, most of them are too large to be reduced to a printable size, so the charts presented here are a “zoomed out” version of the ones the solution actually produces.

The good part is that you can always build the same report in different sizes: a smaller one for BI purposes and a larger one for debugging (where timestamps and chart information are clearly visible).

Load and performance reports with Jasper Reports

There are several types of information that we need to have in the report, so I will group them as follows (the grouping is also reflected in the graphical representation).
Test-relevant information:

Load & Performance Report - Test Information

  • the major version and the Hudson build number of the software under test
  • the unique test run id and the date and time when the test was run
  • the name of the test scenario – what component is this test focusing on

Load configuration:

  • number of virtual users involved in the test scenario
  • ramp up time for the virtual users
  • generic waiting times between transactions called by the virtual users (constant and uniform random delay, passed to JMeter as sketched below)
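
As a quick preview of how these values reach the load generator: in my setup they are not hard-coded in the test plan but passed in as JMeter properties when the test is started. The property names below (users, rampup, constant_delay, random_delay) and the file paths are only illustrative; the real names are covered in the test script section later on.

```sh
# Illustrative only: property names and paths are placeholders, not the exact ones used later.
# Inside the test plan the Thread Group and the timers read them with the __P function, e.g.
#   Number of Threads : ${__P(users,10)}
#   Ramp-Up Period    : ${__P(rampup,60)}
#   Constant Timer    : ${__P(constant_delay,1000)}   (milliseconds)
jmeter -n -t testplans/scenario_home.jmx \
       -l results/run.jtl \
       -Jusers=50 -Jrampup=300 \
       -Jconstant_delay=1000 -Jrandom_delay=500
```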

Database-specific information:

  • number of records in the database before starting the test and after running the test (records of interest to the test)

Application server configuration:

Load & Performance Report - Server configuration information

  • garbage collection strategy
  • memory settings (initial, maximum, young, old, survivor ratio)
  • number of http threads that the application server is configured to use (minimum and maximum)
  • number of jdbc connections that the application server is configured to use (minimum and maximum)
  • other information such as the JDBC statement cache size
  • detailed description of the server configuration (if needed); a sample set of JVM options is sketched below
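
To make the garbage collection and memory fields more tangible, a typical set of JVM options behind them might look like the sketch below. The values are purely illustrative, not a recommendation; the HTTP thread pool and JDBC connection pool sizes are configured in the application server itself (GlassFish in my case), not as JVM flags.

```sh
# Illustrative JVM options corresponding to the report fields above (values are examples only):
# garbage collection strategy, heap sizing, young generation sizing and survivor ratio.
JVM_OPTS="-XX:+UseParallelGC \
          -Xms2g -Xmx2g \
          -XX:NewSize=512m -XX:MaxNewSize=512m \
          -XX:SurvivorRatio=8"
```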


Response time distribution for each of the requests involved in the test scenario:

Load & Performance Report - Response times per transaction

  • name of transaction
  • total number of successful transactions
  • minimum response time
  • maximum response time
  • average response time
  • standard deviation
  • 50th to 90th percentile response times
  • average number of transactions / second

Dual axis graphical representation (transactions, response times) – see the query sketch after this list for how these series can be computed:

Load & Performance Report - Transactions vs Response Times

  • the number of transactions over time, represented as the total number of transactions per second (left y axis)
  • the response times for the transactions; each point is the average of the response times of all requests completed within one second, with one line per request involved in the test scenario (if, for example, 5 different requests from different users run at the same time, 5 different lines are shown in the chart) (right y axis)
  • the absolute maximum response time, measured per second, across all requests (if there are 5 requests in one second, the first with a 2 second response time and the last with a 1 second response time, the 2 second value is represented; this is useful for detecting spikes in response times) (outer right y axis)
  • the requests involved in the test scenario (each transaction has its own unique color and is represented as bars in the chart)
  • timestamp (x axis)
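
To give an idea of how these series are fed, here is the kind of per-second aggregation the chart datasets can be built on. The table and column names (results, label, timestamp_sec, elapsed_ms) and the test run id are placeholders, not the actual schema, which is described in the ETL part of this series.

```sh
# Hypothetical schema: results(test_run_id, label, timestamp_sec, elapsed_ms)
mysql -u perf perfdb <<'SQL'
SELECT timestamp_sec,
       label,
       COUNT(*)        AS transactions_per_second,   -- left y axis (bars)
       AVG(elapsed_ms) AS avg_response_time_ms,      -- right y axis (one line per request)
       MAX(elapsed_ms) AS max_response_time_ms       -- outer right y axis (spike detection)
FROM   results
WHERE  test_run_id = 42
GROUP  BY timestamp_sec, label
ORDER  BY timestamp_sec;
SQL
```

Templates 2 and 3 below narrow down exactly this kind of query with an additional transaction or timestamp condition.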

Application server performance:

CPU USAGE

Jasper Report - Performance Report - CPU Usage

MEMORY USAGE

Jasper Report - Performance Report - Heap Usage

HTTP THREAD POOL USAGE

Jasper Report - Performance Report - Thread Pool Usage

JDBC CONNECTION POOL USAGE

Jasper Report - Monitor JDBC Connections

HTTP THREAD USAGE

Jasper Report - Monitoring of Http Thread Pool

We are now ready to combine all the information above into a single report. Normally, the application server monitoring charts would also be part of the report, placed under the dual axis representation, but the final report used for debugging is too large to post here.

Jasper Report – Template 1 – Full Load and Performance Report

Load & Performance Report with Jasper Reports

You can download the pdf (BI version) of this report from the following location:


http://qants.files.wordpress.com/2011/05/template-1-full-report.pdf

Jasper Report – Template 2 – Performance report / request – Drill down reporting

If we wanted to see the behaviour of one request involved in the test, we would need to be able to select a single request from the main report and drill down into it, analyzing the information above only for the selected request. The resulting report looks almost identical to the main report, the only difference being that all other requests are filtered out (neither shown nor taken into consideration). This allows us to take an isolated look at the behaviour of a specific request in relation to the behaviour of the test as a whole.
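
In terms of data, the drill down is nothing more than the per-second aggregation shown earlier restricted to one transaction; in the Jasper report the transaction name would be a report parameter rather than a literal. Table and column names remain placeholders.

```sh
mysql -u perf perfdb <<'SQL'
SELECT timestamp_sec,
       COUNT(*)        AS transactions_per_second,
       AVG(elapsed_ms) AS avg_response_time_ms,
       MAX(elapsed_ms) AS max_response_time_ms
FROM   results
WHERE  test_run_id = 42
  AND  label = 'Home'        -- the transaction selected in the drill down
GROUP  BY timestamp_sec
ORDER  BY timestamp_sec;
SQL
```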

This is how the header looks for the “Home” transaction after drilling down from the report above. The drill down selects a transaction and rebuilds the report only for that transaction.

Drill down performance report

Template 2 - Filtered information

Naturally, the chart is also filtered (only a sample is shown here, since the full chart does not fit the image size) and follows exactly the same structure as Template 1, the main report.

Template 2 - Filtered chart for one transaction


Jasper Report – Template 3 – Performance report sampling – time filtering

If we wanted to see the behaviour of the application under test at a specific point in time, we would need to be able to generate the report only for a selected timeframe. If, for example, we notice a sudden drop in throughput (number of transactions / second), we need to take a closer look at the timeframe in which this happened. The timestamp filtering is reflected in all information, from the response time figures (averages, min and max times, average transactions, etc.) to the chart.
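
Again, in terms of data this is only an extra condition on the same aggregation. In the Jasper report the two boundaries would be report parameters; the timestamps below, like the table and column names, are placeholders.

```sh
mysql -u perf perfdb <<'SQL'
SELECT timestamp_sec,
       label,
       COUNT(*)        AS transactions_per_second,
       AVG(elapsed_ms) AS avg_response_time_ms
FROM   results
WHERE  test_run_id = 42
  AND  timestamp_sec BETWEEN '2011-05-10 14:32:00' AND '2011-05-10 14:33:00'
GROUP  BY timestamp_sec, label
ORDER  BY timestamp_sec;
SQL
```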

This is also very useful for seeing when the application is “on top”: all users have logged in, and the maximum number of transactions / second can be achieved at this moment. You can therefore use such a report to identify the “maximum number of transactions on top” and the corresponding response times.

This is how the chart looks after filtering for one minute (the selected timeframe) of the test. As one can see, data is shown only for the selected timeframe.

Template 3 - Time filtering load and performance report

Jasper Report – Template 4 – Performance Report Memory usage

Template 4.1 – Application specific – Application own objects

We need to be able to monitor specific objects over time, and we can have multiple reports, each representing a different memory performance area. The report below is a graphical representation of the usage of objects belonging to the application under test, over time. Since the number of objects used in an application can be very high, the chart needs to be extended horizontally; the sample below is therefore a zoomed out version of the chart. As one can see, each object is represented with a different color over time, so one can detect increases or decreases in the number of objects over time.

The right side displays the legend, showing all object names and the corresponding colors and shapes used to represent them in the chart.

Memory Usage Monitoring with JMap - Application own objects

This is a sample of the report above, demonstrating how the chart looks when zoomed in.

Memory Usage Monitoring with JMap - Application own objects

Template 4.2 – Application specific – Application own objects – Filtered object

We need to be able to drill down in the report above by selecting the object for which we want a detailed report. This is how the resulting report looks: the upper part shows the number of objects, the lower one the storage needed.

Memory usage with JMap - Focus on object

Template 4.3 – Application specific – Java Base Type objects

This report uses the same concept as the one above, but only the Java base types are displayed (chars, integers, arrays, double arrays, etc.). The upper part displays the number of objects in memory at the time of sampling, the lower one the total number of bytes occupied in memory.

As in the chart above, the legend is displayed on the right side.

Memory Usage Monitoring with JMap - Primitives


As will be explained in the details section, the memory analysis reporting can basically be extended to any pattern of objects by simply adding the pattern to the function that collects the memory usage information. In the end it is a matter of creating a separate query for each type of object we need and using it in a separate dataset or report. I have, for example, created different reports for EclipseLink objects, JavaScript objects, vectors, lists and arrays, etc. A minimal sketch of such a collection function follows.
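
The sketch below samples a jmap class histogram of the server JVM and keeps only the classes matching a given pattern, prefixing each line with a timestamp so the values can be charted over time. The process id, the patterns, the sampling interval and the file layout are assumptions made for the example.

```sh
# Minimal sketch of a memory sampling function (pid, patterns and paths are assumptions).
collect_histogram() {
    pid=$1        # JVM process id of the server under test
    pattern=$2    # e.g. 'com\.mycompany' for application objects, '\[[CIJB]$' for primitive arrays
    outfile=$3

    # jmap -histo prints:  num   #instances   #bytes   class name
    jmap -histo "$pid" | grep -E "$pattern" | \
        awk -v ts="$(date +%s)" '{print ts";"$2";"$3";"$4}' >> "$outfile"
}

# Sample application objects and primitive arrays every 30 seconds
# (in the real setup this loop is started and stopped by the test control script).
while true; do
    collect_histogram 12345 'com\.mycompany' results/memory_app.csv
    collect_histogram 12345 '\[[CIJB]$'      results/memory_primitives.csv
    sleep 30
done
```

Each object pattern simply gets its own output file and, later on, its own dataset in the report.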


So far I have discussed the outputs, the performance reports. Now I will focus on how to build the integrated load testing solution, from generating the load to presenting the results in graphical form, and on integrating the tests into continuous integration systems or automating them as individual test suites.

The remaining topics will be presented using the following structure:

CONFIGURING LOAD GENERATORS

  1. Load testing generators – Getting your application under load with distributed JMeter
    • basic concept of JMeter in distributed mode and network architecture
    • details of the concept this white paper refers to
    • details of the folder structure needed (results, temporary folders, transformation folders, classes, scripts and testplan folders, property files)
    • configuring the server and the agents (a command-line sketch follows this outline)
  2. Test plan structure
    • configuring the testplan to use dynamically generated parameters (runtime parameters and property files)
  3. Test script structure
    • name of the testplan
    • location of the testplan
    • jmeter agents to use in the load test
    • workflow control: defining which functions are called and in which order (starting the test, collecting memory information, collecting JMS information, warming up, collecting logs, data transformation, generating the performance report, etc.), thereby controlling the run of the test
    • runtime parameters (virtual users, ramp up time, pause times, number of repetitions, number of loops / repetition, application specific parameters, etc.)
  4. Function file
    • detail of the file containing all functions needed for controlling the load test, and of the functions in the file
  5. Storing and processing the test results
    • selecting what information to store
    • transforming the results file into a fully XML compliant file for further ETL processing – details of the processing functions
  6. Storing and processing additional collected data
    • use of shell functions for collecting application server information such as memory, JMS usage, EJB cache usage and so on
    • detailing of the tools used to collect application server information:
      • jmap for memory
      • imqcmd for JMS
      • vmstat for CPU
      • GlassFish REST monitoring for application server resources
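
To make this outline a bit more concrete before the detailed article, here is a rough sketch of what the controlling shell script does when it starts a distributed run. Host names, paths and property names are placeholders, and the real script contains considerably more workflow functions.

```sh
#!/bin/sh
# Rough sketch of the test control script (hosts, paths and property names are placeholders).
JMETER_HOME=/opt/jmeter
AGENTS="loadgen1,loadgen2"                  # machines already running jmeter-server
RUN_ID=$(date +%Y%m%d_%H%M%S)

# background collection of CPU data on the application server (see item 6 above)
ssh appserver "vmstat 5" > monitoring/vmstat_${RUN_ID}.log &
VMSTAT_PID=$!

# start the distributed test in non-GUI mode, passing the runtime parameters
${JMETER_HOME}/bin/jmeter -n \
    -t testplans/scenario_home.jmx \
    -l results/${RUN_ID}.jtl \
    -R "${AGENTS}" \
    -Jusers=50 -Jrampup=300 -Jloops=10

# stop the collectors; result transformation, ETL and report generation follow (items 5 and 6)
kill ${VMSTAT_PID}
```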

ETL – EXTRACT TRANSFORM AND LOAD – DATA PROCESSING AND IMPORTING INTO THE DATABASE

  1. Database structure:
    • detail of the structure needed for performance reporting
    • ER Diagram
  2. Importing test results into the database using Pentaho Data Integration Jobs (a command-line sketch follows this outline)
    • import job for response times and throughput
    • import job for memory footprinting
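
As a preview, both import jobs can be launched from the shell with Pentaho's kitchen.sh, which makes them easy to call from the test control script or from Hudson. The job file paths and the named parameter below are placeholders for this sketch.

```sh
# Run the PDI import jobs from the command line (file paths and parameter name are placeholders).
# RUN_ID comes from the controlling script shown in the previous sketch.
cd /opt/pentaho/data-integration
./kitchen.sh -file=/opt/loadtest/etl/import_response_times.kjb   -param:RUN_ID=${RUN_ID} -level=Basic
./kitchen.sh -file=/opt/loadtest/etl/import_memory_footprint.kjb -param:RUN_ID=${RUN_ID} -level=Basic
```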

REPORTING

  1. Jasper Reports – Generating and storing the performance reports
    • using Jasper Server as the reporting server
    • using IReport for designing the reports
    • detailing of reports based on real-life examples
  2. Continuous integration – Automating and integrating the tests into CI systems (a report-fetching sketch follows this outline)
    • integrating with Hudson: concept, implementation
    • JMeter plugin description and configuration
    • automatically generating reports with the use of JMeter testplans
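
As a small teaser for the CI part: once the data has been imported, the finished report can be pulled from JasperReports Server over HTTP and archived as a Hudson build artifact. The repository path, credentials and parameter name below are placeholders, and the exact URL depends on the JasperReports Server version and the REST interface it exposes.

```sh
# Fetch the finished load & performance report as a PDF from JasperReports Server
# (repository path, credentials and parameter name are placeholders).
RUN_ID=20110510_1432
curl -u jasperadmin:jasperadmin \
     -o load_performance_${RUN_ID}.pdf \
     "http://reportserver:8080/jasperserver/rest_v2/reports/loadtest/full_report.pdf?TEST_RUN_ID=${RUN_ID}"
```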

This is it for now. In the next article I will discuss the first point, configuring the load generators, with the rest of the points to follow in the near future. Please let me know if you have any questions about what has been covered so far, and whether you find this article useful.
Alex

End of part 1 – Load & Performance testing and reporting with open source tools –  Concept, examples and sample reports


