
Java EE – Load & Performance – Testing and reporting using open source tools – Part 2 – Configuring Load Generators – Generating load with JMeter, storing, filtering and preparing results for importing


CONFIGURING LOAD GENERATORS – JMETER


Load testing generators – Getting your application under load with distributed JMeter

      1. CONFIGURING JMETER DISTRIBUTED MODE
        JMeter is the number one open source tool for generating load. It simulates multiple virtual users by running the test for each virtual user in a separate thread. It is built on the Java platform, hence it is highly portable. It provides a straightforward graphical interface for designing and running tests, which turns out to be very helpful for debugging purposes. Additionally, it allows running test plans from the command line, eliminating the resource consumption overhead of the GUI.

        Configured in distributed mode, JMeter allows using multiple JMeter instances as agents (slaves), communicating over the RMI protocol on configurable ports. Thanks to this, any new JMeter agent can be "plugged in" at any time by simply starting the agent and adding it to the list of slaves. It is of course best to keep your JMeter environment in a separate subnetwork, in order to avoid outer factors such as network delays caused by traffic in the main network. I suggest using the JMeter server as a gateway to the JMeter load subnetwork, so that the JMeter server has two network interfaces:
        • One for communicating with the main network, where the database and reporting server will reside
        • One for communicating with and controlling the load on the JMeter agents residing in the JMeter load subnetwork

        A simple graphical representation of this structure looks like this:
        JMeter Distributed Testing Structure (diagram)

        Now, JMeter is a very flexible solution, and it is designed to get your tests up and running in no time. In order to run it in distributed mode, one of your servers has to be the controller. I suggest keeping this server as a controller only and not using it for generating load, but for collecting results, starting transformations, running scripts, etc. Think of it as a "Test Commander".

        There are two steps that have to be performed in order to get your distributed environment running:

          • Configure the JMeter Master Controller

        This is quite simple. You need to modify a single property file, residing in the bin folder of your JMeter installation. The file to be modified is "jmeter.properties".
        Look for the following block:

        #---------------------------------------------------------------------------
        # Remote hosts and RMI configuration
        #---------------------------------------------------------------------------
        # Remote Hosts - comma delimited
        remote_hosts=127.0.0.1
        #remote_hosts:localhost:1099,localhost:2010

        This is where you have to add your JMeter agents as remote_hosts, separated by commas, like this:

        remote_hosts=192.168.1.2,192.168.1.3,192.168.1.4

        I do suggest adding the JMeter agents as a list of IPs instead of hostnames, to avoid any DNS problems due to faulty network configurations.

          • Configure the JMeter slaves

        The only thing left to do is to start the JMeter agents in "server mode". I suggest creating a service in your startup folder (/etc/init.d or a Windows service) so that the JMeter agent starts up at every server restart (a minimal sketch of such a wrapper follows below).
        In order to start JMeter in server mode you have to run the following file, residing in the same “bin” folder of the JMeter installation folder on the JMeter agents:
        bin/jmeter-server
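        A minimal init-style wrapper for the agent could look like the sketch below. This is only a sketch: the installation path, the service account and the pid/log file locations are assumptions, so adjust them to your environment.

        #!/bin/sh
        # /etc/init.d/jmeter-agent -- minimal sketch; paths and user are assumptions
        JMETER_HOME=/opt/jmeter              # assumed JMeter installation directory
        JMETER_USER=jmeter                   # assumed account the agent runs under
        PIDFILE=/var/run/jmeter-agent.pid

        case "$1" in
          start)
            echo "Starting JMeter agent (jmeter-server)"
            # the captured PID is that of the nohup'ed jmeter-server wrapper script
            su - "$JMETER_USER" -c "nohup $JMETER_HOME/bin/jmeter-server > $JMETER_HOME/jmeter-server.log 2>&1 & echo \$!" > "$PIDFILE"
            ;;
          stop)
            echo "Stopping JMeter agent"
            [ -f "$PIDFILE" ] && kill `cat "$PIDFILE"` && rm -f "$PIDFILE"
            ;;
          *)
            echo "Usage: $0 {start|stop}"
            exit 1
            ;;
        esac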

      2. GENERATING LOAD AND STORING THE RESULTS

The solution this series of articles refers to uses XML as the test results output format. There are two reasons behind choosing XML for the output:

    • Preparing the test results for data transformation

XML is preferred because of the simple XPath extraction. It will come in very handy when we want to group and export additional information such as the test date, build number, software version, etc. (how to make sure JMeter actually writes XML results is sketched right after the screenshot below).

    • Preparing for the Hudson JMeter plugin which expects xml files for reporting

The Hudson plugin comes in very handy when analyzing trends across several runs over different software versions. It expects an XML file, which it parses to build the performance report and trend. The output looks like this:

JMeter Hudson Performance Plugin (report screenshot)
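Before relying on this, make sure the result files are actually written as XML: depending on the JMeter version, the default output format may be CSV. XML output can be forced through the standard save-service property, either permanently in bin/jmeter.properties or per run on the command line:

# bin/jmeter.properties - write the result files (.jtl/.xml) as XML
jmeter.save.saveservice.output_format=xml

# or override it for a single run on the command line:
jmeter -n -t <testplan>.jmx -Jjmeter.save.saveservice.output_format=xml -l <results>.xml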

We will use one JMeter Master Controller called "master" and three slave agents that we will call "agent1", "agent2", "agent3". The JMeter Master Controller is the server that we will use as the test controller. It is the server where we store the test plans, test scenarios, function files and test results; it holds the entire file structure needed for running a load test. It is the "Test Commander", and will only be used for controlling tests, exporting data and running additional functions. What it will definitely not do is generate load: the load will only be generated by the JMeter agents (slaves).
Besides this, the JMeter agents only need to be started in "server mode". The file structure (repository) of the JMeter Master Controller does not need to be manually synchronized to the agents. When a test is started, each agent retrieves a temporary copy of the test plan from the JMeter Master Controller, which it holds on to for the duration of the test run.

  • Starting a simple test in distributed mode and storing the results locally

With JMeter properly set up and configured, we are now ready to start a simple test in distributed mode. Let us suppose we have a test plan called "DemoDistributedTest.jmx", stored in the /tmp/jmeter/testplans folder. Since we are only interested in running the test from the command line, this is the command template needed:


jmeter -n -t /tmp/jmeter/testplans/DemoDistributedTest.jmx -R agent1,agent2,agent3 -l /tmp/jmeter/testresults/DemoDistributedTest.xml

This starts the test in non-GUI mode (-n flag), points to the location of the test plan in the repository (the folder structure from above) (-t flag), chooses the JMeter agents to run the test on (-R flag) and stores the results in the selected file (-l flag).

  • Adding a jmeter agent to the load configuration

As mentioned above, you only need to set up JMeter on an additional agent server, which we will call agent4. After adding the new agent to the list of JMeter agents in the Master configuration file, we are ready to repeat the test with 4 JMeter agents:

jmeter -n -t /tmp/jmeter/testplans/DemoDistributedTest.jmx -R agent1,agent2,agent3,agent4 -l /tmp/jmeter/testresults/DemoDistributedTest.xml

  • Preparing for automation and dynamic parameters

We have now run a test on multiple agents by defining the test plan, the agents and the output file on the command line. Going on from here, we need to provide all this information in a dynamic way; we need to be able to run the command above from some sort of controlling script. I will use pseudocode to express what I want to achieve.


PSEUDOCODE: RUN TESTPLAN testplan_name ON AGENTS agent_list STORE THE RESULTS IN result_file

This translates into the following command line:

jmeter -n -t ${TESTPLAN} -R ${AGENT_LIST} -l ${RESULT_FILE}

Additionally, we need to be able to specify not only test configuration parameters, but also application test parameters, like usernames, passwords, number of virtual users, pause times and so on:

PSEUDOCODE: RUN TESTPLAN testplan_name ON AGENTS agent_list WITH THE FOLLOWING PARAMS runtime_parameters STORE THE RESULTS IN result_file

This translates into:

jmeter -n -t ${TESTPLAN} -R ${AGENT_LIST} -l ${RESULT_FILE} -G<parameter>=<value> [-G<parameter>=<value> ...]

We are now ready to create our first test script. There are three sections that each and every test script will contain:

    • header - contains test configuration information: the location of the test plans, the agent list, as well as the file containing the functions for controlling the test. This information is stored in variables in the header.
    • body - the part where the functions controlling the run of the test reside, together with the monitoring tasks that need to run while the test is running.
    • footer - where we end the test plan and generate the reports.

Since we want the test plans and test results to reside under a logical file structure, we can now build the needed folder structure. It looks like this:
          • testplans
          • scripts
          • results
          • includes  {contains files with functions and global variables needed by every test}
          • propfiles {contains application test specific variables, will be discussed later}
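The test scripts shown below source includes/testconfiguration.properties from this structure. A hypothetical sketch of that file, mapping the folders above to the variables used throughout the functions (the transformation and server log folders, and all concrete values, are assumptions):

# includes/testconfiguration.properties -- sketch; values are assumptions
# JMETER_TEST_FOLDER is defined by the calling test script before sourcing this file
JMETER_TESTPLANS="${JMETER_TEST_FOLDER}testplans/"
JMETER_SCRIPTS="${JMETER_TEST_FOLDER}scripts/"
JMETER_RESULTS="${JMETER_TEST_FOLDER}results/"
JMETER_PROPFILES="${JMETER_TEST_FOLDER}propfiles/"
JMETER_TRANSFORMATION="${JMETER_TEST_FOLDER}transformation/"   # input folder for the ETL step (assumed)
JMETER_SERVER_LOGS="${JMETER_TEST_FOLDER}serverlogs/"          # target for collected server logs (assumed)
ID_FILE="${JMETER_TEST_FOLDER}includes/testrun_id"             # counter file used by write_testInfo (assumed)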

By now our template test script looks like this:

# META
# Created: 28 July 2009
# Owner: Alexandru Ersenie
# Description: Description of what the test does
# HEADER
JMETER_TEST_FOLDER="/home/testing/jmeter/"
. ${JMETER_TEST_FOLDER}includes/testconfiguration.properties
TESTPLAN_SUBFOLDER="homepage/login/"
TESTPLAN_FILE_NAME="Tplan_Login.jmx"
JMETER_AGENTS="agent1,agent2,agent3"
# BODY
jmeter -n -t ${JMETER_TESTPLANS}${TESTPLAN_SUBFOLDER}${TESTPLAN_FILE_NAME} -R ${JMETER_AGENTS} -l ${JMETER_RESULTS}${TESTPLAN_FILE_NAME}.xml

  • Function file and global variables

Running a test is not enough. We want to be able not only to pass runtime variables to the test, but also to collect all kinds of application server statistics, to control whether the test will store the results in the database and generate a report, whether it uses a warm-up or not, etc. We need to be able to clear the application server log files before starting the test, move them after the test run to a location where we can analyse them afterwards, and so on.
In order to achieve this, we need a set of functions controlling the set up, runtime, tear down and export phases.

It is then very easy to create a script template which calls the functions residing in this file. This is what a test script looks like in the end; the functions are explained below:


# META
# Created: 28 July 2009
# Owner: Alexandru Ersenie
# Description: Description of what the test does
# HEADER
JMETER_TEST_FOLDER="/home/testing/jmeter/"
. ${JMETER_TEST_FOLDER}includes/testconfiguration.properties
TESTPLAN_SUBFOLDER="homepage/login/"
TESTPLAN_FILE_NAME="Tplan_Login.jmx"
JMETER_AGENTS="agent1,agent2,agent3"
# BODY
log_test_config
get_db_info
empty_temp_files
configure_appserver
run_scenario_warmup
trace_jms_usage &
trace_class_loading &
trace_gf_statistics
run_scenario
wait_for_user_input
#FOOTER
collect_logs
write_testInfo
log_heap_config
check_reporting
log_test_end
exit

SET UP PHASE

The set up phase contains functions needed to get the test running. This includes: configuring the application server, removing server log files, removing temporary files, retrieving actual status of the database, etc.

  • function log_test_config: logs information regarding the test run to the console and into a log file


    function log_test_config
    {
    mkdir -p ${JMETER_RESULTS}
    echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Test Configuration: Test will be performed on deployed version: "$APPNAME" - Build Number: "$APP_VERSION" - Revision Number: "$REVISION_NUMBER"" | tee ${JMETER_RESULTS}/run.log
    echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Test Configuration: Test will be performed on server: "${HOST}" port "${PORT}" protocol "${PROTOCOL}""
    echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Test Configuration: "$VIRTUAL_USERS" virtual users starting in "$RAMP_TIME" seconds " | tee -a ${JMETER_RESULTS}/run.log
    echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Test Configuration: "$TRANSACTIONS" transactions will be generated, waiting between "$PAUSE_TIME" and "`expr $PAUSE_TIME + $PAUSE_TIME_DEV`" ms between requests" | tee -a ${JMETER_RESULTS}/run.log
    echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Test Configuration: Repeating the scenario "$REPETITIONS" times, increasing the load "$LOOPS" times with "$ADDEDVU" users / repetition" | tee -a ${JMETER_RESULTS}/run.log}

  • function get_db_info: retrieves the number of specific records in the database at the start of the test by running a JMeter test containing a JDBC Select Request. The test plan stores the total number of records in an output file, which is grepped afterwards. The best part here is that you can add as many requests to the JMeter test plan as you like, then extract the values one by one from the output file and use them further on in your report.


    function get_db_info
    {
    jmeter -n -t ${JMETER_TESTPLANS}extra/Tplan-DBInfo.jmx -JRESULT_FOLDER=${JMETER_RESULTS}
    DBINFO_RECORDSINDB=`grep -E '[0-9]' ${JMETER_RESULTS}/db_info*`
    echo "Number of records in Database:" ${DBINFO_RECORDSINDB}
    }

  • function empty_log: connects via ssh to the application server and empties the log files. It clears the application server log file, the garbage collection log and the Java virtual machine log file.


    function empty_log
    {
    echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Removing old Glassfish and JVM logs" | tee -a ${JMETER_RESULTS}/run.log
    ssh ${DOMAIN_SERVER} sudo -u glassfish /bin/bash -c "'rm ${GLASSFISH_PATH}/${DOMAIN_NAME}/logs/server.log_*'"
    ssh ${DOMAIN_SERVER} sudo -u glassfish /bin/bash -c "'cat /dev/null > ${GLASSFISH_PATH}/${DOMAIN_NAME}/logs/jvm.log'"
    ssh ${DOMAIN_SERVER} sudo -u glassfish /bin/bash -c "'cat /dev/null > ${GLASSFISH_PATH}/${DOMAIN_NAME}/logs/jgc.log'"
    }

  • function empty_temp_files: clears (and, through the redirection, creates if missing) the temporary marker files used by the monitoring and exporting functions (JMS monitoring, GlassFish statistics, etc.)


    function empty_temp_files
    {
    cat /dev/null > /tmp/running
    cat /dev/null > /tmp/jms_monitoring
    cat /dev/null > /tmp/jmap_objects.log
    cat /dev/null > /tmp/glassfish_stats
    }

  • function configure_appserver: configures the desired number of threads in the JDBC and HTTP thread pools. This is useful when you want to increase the number of threads in the thread pool while keeping the same load.


    function  configure_appserver
    {
    echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Server Configuration: Configuring server minimum and maximum number of threads" | tee -a ${JMETER_RESULTS}/run.log
    ssh ${APPSERVER} "sudo -u glassfish /bin/bash -c '/opt/glassfish/bin/asadmin --port '${DOMAIN_ADMIN_PORT}' -u admin --passwordfile /opt/glassfish/passwords set server.thread-pools.thread-pool.http-thread-pool.min-thread-pool-size=${GF_THREADS_MIN}'"
    ssh ${APPSERVER} "sudo -u glassfish /bin/bash -c '/opt/glassfish/bin/asadmin --port '${DOMAIN_ADMIN_PORT}' -u admin --passwordfile /opt/glassfish/passwords set server.thread-pools.thread-pool.http-thread-pool.max-thread-pool-size=${GF_THREADS_MAX}'"
    echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Set up: Starting the test with a threadpool of minimum ${GF_THREADS_MIN} and maximum ${GF_THREADS_MAX} threads" | tee -a ${JMETER_RESULTS}/run.log
    #echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - This test runs with ${VIRTUAL_USERS} Virtual Users / Agent, on following agents: ${JMETER_AGENTS}"
    }

RUNTIME PHASE

The runtime phase contains functions that control the run of the test: warming up, monitoring, memory tracing, etc.
    • run_scenario_warmup: the function takes a command line input parameter which controls whether the test will be run with a warm-up first or not. It starts a test with a reduced number of virtual users in order to warm up the application server. The warm-up writes its results to a simple log file (not XML), so that the warm-up results are not taken into consideration when importing the results generated by the actual load test.

      function run_scenario_warmup
      {
      if [ "$WARMUP" != "yes" ];
      then
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Set up: No server warm up. Test will be started without warming up the server" | tee -a ${JMETER_RESULTS}/run.log
      else
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Set up: Warming up server by running the scenario with "${WARMUP_THREAD_USERS}" users on one agent executing exactly 50 payment transactions" | tee -a ${JMETER_RESULTS}/run.log
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Running: Warming up started" | tee -a ${JMETER_RESULTS}/run.log
      jmeter -n -t ${JMETER_TESTPLANS}${TESTPLAN_SUBFOLDER}${TESTPLAN_FILE_NAME} -R agent1 -l ${JMETER_RESULTS}${TESTPLAN_FILE_NAME}.log -Ggroup1.hostname=${HOST} -Ggroup1.port=${PORT} -Ggroup1.protocol=${PROTOCOL} -Ggroup1.fullhost=${FULLHOST} -Ggroup2.threads=${threads_start} -Ggroup2.ramptime=${rampup_start} -Ggroup2.users=${THREAD_USERS} -Ggroup2.startid=${START_ID} -Ggroup2.synctimer=${SYNC_TIMER} -Ggroup3.pay_transactions=50 -Ggroup4.pausetimeconst=${PAUSE_TIME} -Ggroup4.pausetimerandom=${PAUSE_TIME_DEV} -G ${JMETER_PROPFILES}${TESTPLAN_SUBFOLDER}${TESTPLAN_FILE_NAME}.properties &
      wait_for_plan "${TESTPLAN_FILE_NAME}"
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Tear down: Warming up ended, sleeping for 10 seconds" | tee -a ${JMETER_RESULTS}/run.log
      sleep 10
      fi
      }
    • trace_class_loading: this function traces memory usage statistics every 10 seconds, using jmap. It connects to the application server and periodically retrieves a histogram of objects, which is stored in an export file that is processed at the end. The set of traced object types can be extended by simply adding a regular expression; currently it retrieves hashmaps, vectors, Eclipse persistence objects, JavaScript objects, etc. The function uses a temporary marker file called "running" which resides in the "/tmp" folder. While the file exists, the collection of statistics keeps going; it stops once the test stops (when the test stops, the temporary file is removed, so that on the next check the statistics function will not find the file and will exit). At the end, the file is processed and exported into a final jmap_objects file, which will be used for exporting memory information into the performance database.
      function trace_class_loading
      {
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Tracing: Starting to collect Heap Usage statistics with a refresh of 10 seconds" | tee -a ${JMETER_RESULTS}/run.log
      glassfish_pid=`ssh ${DOMAIN_SERVER} sudo -u glassfish /bin/bash -c '/usr/java/jdk1.6.0_24/bin/jps -q | grep ASMain ' | awk '{print$1}'`
      echo "glassfih pid is " $glassfish_pid
      status=`ls /tmp | grep running`
      # As long as the script is running perform Jmap histogram on the server every x seconds, where x is defined in sleep command


      while [ "$status" != "" ];
      do
      JMAP_TIMESTAMP=`date +%H-%M-%S`
      ssh ${DOMAIN_SERVER} "sudo -u glassfish /bin/bash -c '${JAVA_BIN}jmap -histo ${glassfish_pid}'" | egrep -i -e '\[[I,B,C]+' -e 'myobjects' -e 'java.util.Tree' -e 'java.util.Hash' -e 'java.util.Vector' -e 'Klass' -e 'org.mozilla' -e 'com.sun.tools.javac.zip.ZipFileIndexEntry' -e 'java.lang' -e 'java.util.concurrent' -e 'org.eclipse.persistence' | sed -e '/[0-9]/s/$/',"${JMAP_TIMESTAMP}"'/' >> /tmp/jmap_objects.log
      sleep 10
      status=`ls /tmp | grep running`
      done

      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Tracing: Collection of Heap Usage Statistics ended" | tee -a ${JMETER_RESULTS}/run.log

      # After the script has ended perform a full garbage collector. Comment this line if you do not want a FULL GC

      ssh ${DOMAIN_SERVER} "sudo -u glassfish /bin/bash -c '${JAVA_BIN}jmap -histo:live ${glassfish_pid}'" | egrep -i -e '\[[I,B,C]+' -e 'myobjects' -e 'java.util.Tree' -e 'java.util.Hash' -e 'java.util.Vector' -e 'Klass' >> /tmp/jmap_objects.log

      # Move the jmap log in the results folder for further processing

      mv /tmp/jmap_objects.log ${JMETER_RESULTS}/jmap_objects.log

      #Process jmap log for removing spaces - old and very slow

      awk '{printf("%s,%s,%s,%s,%s\n",$1,$2,$3,$4,$5);}' ${JMETER_RESULTS}/jmap_objects.log | awk '{gsub(/:/,"");print}' > ${JMETER_RESULTS}/processed_jmap_objects.log
      cp ${JMETER_RESULTS}/processed_jmap_objects.log ${JMETER_TRANSFORMATION}processed_jmap_objects.log
      }

    • trace_app_statistics: this function uses GlassFish's REST monitoring interface and curl commands to retrieve information such as JDBC connection usage, HTTP thread usage, the number of specific beans in the cache, etc. This function is very flexible, as all data is written to the database in the form: test_id, timestamp, label, value. Thanks to this, any new monitoring item is just a new label in the database and can be retrieved in a separate report by building a simple query. I will post here a sample of this function, which collects information regarding JDBC connection usage. It can easily be extended by simply adding a new monitoring item, for example HTTP_THREAD_POOL_THREAD_COUNT (a sketch of such an additional probe follows the function below).

      function trace_app_statistics
      {
      status=`ls /tmp | grep glassfish_stats`
      while [ "$status" != "" ];
      do
      MONITOR_TIMESTAMP=`date +%H-%M-%S`
      #JDBC Monitoring
      JDBC_CONN_USED=`curl -s -u user:password http://glassfish:4848/monitoring/domain/server/resources/EocPool | grep numconnused | grep -o -E '"current":[0-9]*' | sed 's/["]*[a-z]*["][:]*//'`

      echo $MONITOR_TIMESTAMP":JDBC - Connections used:"$JDBC_CONN_USED >> ${JMETER_RESULTS}/glassfish_stats.log

      sleep $CMD_PARAM_INTERVAL
      status=`ls /tmp | grep glassfish_stats`
      done
      cp ${JMETER_RESULTS}/glassfish_stats.log ${JMETER_TRANSFORMATION}glassfish_stats.log
      }
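      To illustrate how a new monitoring item is added, here is a hedged sketch of an extra probe for the HTTP thread pool, following the same pattern as the JDBC probe above. The REST path and the attribute name (currentthreadsbusy) are assumptions that depend on the GlassFish version and monitoring level, so verify them against your server's monitoring tree before using it:

      # Additional probe, to be placed inside the while loop of trace_app_statistics
      # (sketch: URL and attribute name are assumptions, check your GlassFish monitoring tree)
      HTTP_THREADS_BUSY=`curl -s -u user:password http://glassfish:4848/monitoring/domain/server/network/http-thread-pool | grep currentthreadsbusy | grep -o -E '"current":[0-9]*' | sed 's/["]*[a-z]*["][:]*//'`
      echo $MONITOR_TIMESTAMP":HTTP - Threads busy:"$HTTP_THREADS_BUSY >> ${JMETER_RESULTS}/glassfish_stats.log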
    • run_scenario: this is the main function controlling the run of the test. It takes several parameters as input, such as number of repetitions, number of virtual users, number of loops inside a repetition, number of virtual users to be increased in a loop, ramp up time, timers, etc. It sets the basis for running the test scenario in the following form:

      ./PAYMENT_SCENARIO 'users=150 ramptime=150 pausetime=2000 pausetimedev=200 pay_transactions=500 repeats=1 loopsinrepeat=1 waitforuserinput=no minthreads=150 maxthreads=200 generatereport=no warmup=no startid=100' 'hostname=myappserver port=8080 protocol=http'

      It will repeat the test the desired number of times, increasing the number of users progressively by the defined number, and so on. (How the parameter strings end up in the shell variables used by the functions is sketched after the function below.)

      function run_scenario
      {
      scenario_counter=$REPETITIONS
      cool_down=0
      while [ "$scenario_counter" != "0" ];
      do
      # Repeat test for a defined number of times (counter)
      counter_start=$LOOPS
      counter_end=$LOOPS
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Set up: Starting the load test" | tee -a ${JMETER_RESULTS}/run.log
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Set up: Starting iteration number " `expr $REPETITIONS - $scenario_counter + 1` "out of " $REPETITIONS "iterations" | tee -a ${JMETER_RESULTS}/run.log
      threads_start=${VIRTUAL_USERS}
      rampup_start=${RAMP_TIME}
      while [ "$counter_start" != "0" ];
      do
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Running: Running with a number of " ${threads_start} " virtual users / agent" | tee -a ${JMETER_RESULTS}/run.log
      counter_start=`expr $counter_start - 1`
      jmeter -n -t ${JMETER_TESTPLANS}${TESTPLAN_SUBFOLDER}${TESTPLAN_FILE_NAME} -R ${JMETER_AGENTS} -l ${JMETER_RESULTS}/${TESTPLAN_FILE_NAME}.xml -Ggroup1.hostname=${HOST} -Ggroup1.port=${PORT} -Ggroup1.protocol=${PROTOCOL} -Ggroup1.fullhost=${FULLHOST} -Ggroup2.threads=${threads_start} -Ggroup2.ramptime=${rampup_start} -Ggroup2.users=${THREAD_USERS} -Ggroup2.startid=${START_ID} -Ggroup2.synctimer=${SYNC_TIMER} -Ggroup3.pay_transactions=${TRANSACTIONS} -Ggroup4.pausetimeconst=${PAUSE_TIME} -Ggroup4.pausetimerandom=${PAUSE_TIME_DEV} -G ${JMETER_PROPFILES}${TESTPLAN_SUBFOLDER}${TESTPLAN_FILE_NAME}.properties &
      wait_for_plan "${TESTPLAN_FILE_NAME}"
      threads_start=`expr $threads_start + $ADDEDVU`
      if [ "$counter_start" != "0" ];
      then echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Running: Increasing number of virtual users by " ${ADDEDVU} "users / agent" | tee -a ${JMETER_RESULTS}/run.log
      fi
      done
      # Cool Down
      scenario_counter=`expr $scenario_counter - 1`
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Teardown: Load test ended, sleeping for 30 seconds" | tee -a ${JMETER_RESULTS}/run.log
      sleep 10
      done
      }
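      For completeness: the two quoted parameter strings passed to the scenario script (see the invocation example above) have to end up in shell variables such as VIRTUAL_USERS, RAMP_TIME or REPORTING before the functions above run. That step is not shown in this article; one possible, purely hypothetical way to do it in the header of the test script is to evaluate the key=value pairs:

      # Hypothetical parsing of the 'key=value ...' parameter strings ($1 and $2)
      # into the variables used above; the mapping covers only a few examples.
      for pair in $1 $2; do
      key=`echo $pair | cut -d= -f1`
      value=`echo $pair | cut -d= -f2`
      case $key in
      users)            VIRTUAL_USERS=$value ;;
      ramptime)         RAMP_TIME=$value ;;
      pausetime)        PAUSE_TIME=$value ;;
      pausetimedev)     PAUSE_TIME_DEV=$value ;;
      repeats)          REPETITIONS=$value ;;
      loopsinrepeat)    LOOPS=$value ;;
      generatereport)   REPORTING=$value ;;
      warmup)           WARMUP=$value ;;
      waitforuserinput) USERINPUT=$value ;;
      hostname)         HOST=$value ;;
      port)             PORT=$value ;;
      protocol)         PROTOCOL=$value ;;
      esac
      done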
    • wait_for_user_input: this function waits for the user's input before stopping the test scenario and starting the import of the results. It is useful when you want to keep collecting memory statistics until the test has ended and, for example, want to trigger a full garbage collection, so that before importing you have a list of all objects left over after the test run and after a major collection; these are good candidates for memory leaks. The function takes a command line parameter as input. Once the wait is over, all temporary marker files are erased, so every function that runs by checking the existence of such files exits, allowing the main scenario to end and the processing of test results and the import into the performance database to start.

      function wait_for_user_input
      {
      if [ "$USERINPUT" != "no" ];
      then
      while : ; do
      read -t 2 && break
      done
      echo exited while loop
      rm -r /tmp/running
      rm -r /tmp/jms_monitoring
      rm -r /tmp/glassfish_stats
      else
      rm -r /tmp/running
      rm -r /tmp/jms_monitoring
      rm -r /tmp/glassfish_stats
      fi
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Analysis Services: Sleeping for 30 seconds to allow jms and performance monitor to stop gracefully" | tee -a ${JMETER_RESULTS}/run.log
      sleep 30
      }

TEAR DOWN

The tear down phase contains functions that control the analysis part of the test: collecting logs, writing test information in a test configuration file, logging heap configuration information, logging the end of the test.
    • collect_logs: collects the application server logs (server, JVM, garbage collection, JMS) via scp, compresses them into one archive per test run and extracts the JMS usage peaks from the JMS log.

      function collect_logs
      {
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Analysis Services: Collecting and compressing application server logs (jvm,jgc,jms,server)" | tee -a ${JMETER_RESULTS}/run.log
      cd ${JMETER_RESULTS}
      # Collect Glassfish logs on Server
      scp ${DOMAIN_SERVER}:${GLASSFISH_PATH}/${DOMAIN_NAME}/logs/server.log* ${JMETER_RESULTS} || on_error "${LOG_TIMESTAMP}: ERROR - Could not perform Glassfish log collecting via SCP on server ${DOMAIN_SERVER} "
      zip -j ${JMETER_RESULTS}/${testrun_id}-server.zip ${JMETER_RESULTS}/server.log*
      rm server.log*
      # Collect JVM logs on server
      scp ${DOMAIN_SERVER}:${GLASSFISH_PATH}/${DOMAIN_NAME}/logs/jvm.log* ${JMETER_RESULTS}/${testrun_id}-jvm-original.log || on_error "${LOG_TIMESTAMP}: ERROR - Could not perform JVM log collecting via SCP on server ${DOMAIN_SERVER} "
      sed 's/\x0//g' ${JMETER_RESULTS}/${testrun_id}-jvm-original.log > ${JMETER_RESULTS}/${testrun_id}-jvm.log
      rm ${JMETER_RESULTS}/${testrun_id}-jvm-original.log
      zip -j ${JMETER_RESULTS}/${testrun_id}-server.zip ${JMETER_RESULTS}/${testrun_id}-jvm.log*
      # Collect Java Garbage collection output on server
      scp ${DOMAIN_SERVER}:${GLASSFISH_PATH}/${DOMAIN_NAME}/logs/jgc.log* ${JMETER_RESULTS}/${testrun_id}-jgc-original.log || on_error "${LOG_TIMESTAMP}: ERROR - Could not perform JGC log collecting via SCP on server ${DOMAIN_SERVER} "
      sed 's/\x0//g' ${JMETER_RESULTS}/${testrun_id}-jgc-original.log > ${JMETER_RESULTS}/${testrun_id}-jgc.log
      rm ${JMETER_RESULTS}/${testrun_id}-jgc-original.log
      zip -j ${JMETER_RESULTS}/${testrun_id}-server.zip ${JMETER_RESULTS}/${testrun_id}-jgc.log*
      # Collect JMS logs on server
      scp ${DOMAIN_SERVER}:${GLASSFISH_PATH}/${DOMAIN_NAME}/logs/jms.log* ${JMETER_RESULTS}/${testrun_id}-jms-original.log || on_error "${LOG_TIMESTAMP}: ERROR - Could not perform JMS log collecting via SCP on server ${DOMAIN_SERVER} "
      sed 's/\x0//g' ${JMETER_RESULTS}/${testrun_id}-jms-original.log > ${JMETER_RESULTS}/${testrun_id}-jms.log
      rm ${JMETER_RESULTS}/${testrun_id}-jms-original.log
      # Calculate peak number of jms messages while test was running
      TOTAL_JMS_MSG=`sort -r -t ' ' +1 -n ${JMETER_RESULTS}/${testrun_id}-jms.log | awk '{print $1}' | sed q`
      TOTAL_JMS_BYTES=`sort -r -t ' ' +3 -n ${JMETER_RESULTS}/${testrun_id}-jms.log | awk '{print $3}' | sed q`
      PEAK_JMS_MSG=`sort -r -t ' ' +6 -n ${JMETER_RESULTS}/${testrun_id}-jms.log | awk '{print $6}' | sed q`
      PEAK_JMS_TOTAL_BYTES=`sort -r -t ' ' +9 -n ${JMETER_RESULTS}/${testrun_id}-jms.log | awk '{print $9}' | sed q`
      zip -j ${JMETER_RESULTS}/${testrun_id}-server.zip ${JMETER_RESULTS}/${testrun_id}-jms.log*

      return
      }
    • write_testInfo: this function writes the necessary test configuration information to an XML file. It generates an increasing unique id for the testrun_id and writes the test information from multiple sources, provided as command line parameters or produced by other functions called in the set up or runtime phase. It writes all information needed to identify the load configuration and the server configuration at the time of the test run: number of users, ramp up, number of processing threads, garbage collection strategy, JVM heap configuration, JDBC pool configuration, etc.


      function write_testInfo
      {
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Analysis Services: Writing Test Summary for importing in Test Results" | tee -a ${JMETER_RESULTS}/run.log

      # Get unique ID for the script,in order to store in DB. ID is formed of the last digits of IP address and an id that is incremented
      # Get last digits of IP address
      typeset -i test_id
      local_ip=`/sbin/ifconfig | grep 'inet addr:' | grep -v '127.0.0.1' | grep -v '192.168' | cut -d: -f2 | cut -d. -f4 | awk '{ print $1}'`
      test_id=$(cat ${ID_FILE})
      new_id=`expr $test_id + 1`
      echo $new_id >${ID_FILE}
      testrun_id=$local_ip$new_id
      # Getting testdescription and server configuration
      echo $threadnum
      echo $threadwarm
      # Copying the glassfish server logs to a path configured to be the logging path for the reporting server
      cp ${JMETER_RESULTS}/${testrun_id}-server.zip ${JMETER_SERVER_LOGS}${testrun_id}_server.zip
      # Writing xml test information
      CODE_500=`grep -c 'rc="5' ${JMETER_RESULTS}/*.xml`
      CODE_400=`grep -c 'rc="4' ${JMETER_RESULTS}/*.xml`
      CODE_200=`grep -c 'rc="2' ${JMETER_RESULTS}/*.xml`


      echo '<?xml version="1.0" encoding="UTF-8"?><testInfo build="'$REVISION_NUMBER'" test_id="'$testrun_id'" portal_version="'$PORTAL_VERSION'" revision_number="'$REVISION_NUMBER'" scenario_id="'${SCRIPT_NAME}'" architecture_id="'$ARCHITECTURE_ID'" test_date="'$LOG_RUNDATE $LOG_TIMESTAMP'" db_recordsindb="'$DBINFO_RECORDSINDB'"><test-configuration threads="'$VIRTUAL_USERS'" rampup="'$RAMP_TIME'" payment_transactions="'$TRANSACTIONS'" waittime="'$PAUSE_TIME'" waitdeviation="'$PAUSE_TIME_DEV'"></test-configuration><glassfish-configuration server_threads_min="'$GF_THREADS_MIN'" server_threads_max="'$GF_THREADS_MAX'" jdbc_thread_pool_min="'$JDBC_THREAD_POOL_MIN'" jdbc_thread_pool_max="'$JDBC_THREAD_POOL_MAX'" jdbc_statement_cache="'$JDBC_STATEMENT_CACHE'"></glassfish-configuration><jvm-options gc_strategy_param1="'$GC_STRATEGY_PARAM1'" total_initial_heap="'$TOTAL_INITIAL_HEAP'" total_maximum_heap="'$TOTAL_MAXIMUM_HEAP'" newgen_initial_heap="'$NEWGEN_INITIAL_HEAP'" newgen_maximum_heap="'$NEWGEN_MAXIMUM_HEAP'" survivor_ratio="'$SURVIVOR_RATIO'" log_level="'$LOG_LEVEL'"></jvm-options><jms-usage>total_jms_msg="'$TOTAL_JMS_MSG'" total_jms_bytes="'$TOTAL_JMS_BYTES'" peak_jms_msg="'$PEAK_JMS_MSG'" peak_jms_total_bytes="'$PEAK_JMS_TOTAL_BYTES'"</jms-usage><statusCodes>code_500="'$CODE_500'" code_400="'$CODE_400'" code_200="'$CODE_200'" </statusCodes></testInfo>' > ${JMETER_RESULTS}/testInfo.xml
      return

      }
    • log_heap_config: this function analyses the JVM configuration on the server by using "jmap" with the "-heap" parameter.


      function log_heap_config
      {
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Analysis Services: Collecting Glassfish heap configuration" | tee -a ${JMETER_RESULTS}/run.log
      glassfish_pid=`ssh ${DOMAIN_SERVER} "sudo -u glassfish /bin/bash -c '${JAVA_BIN}jps -q | grep ASMain '" | awk '{print$1}'`
      ssh ${DOMAIN_SERVER} "sudo -u glassfish /bin/bash -c '${JAVA_BIN}jmap -heap ${glassfish_pid}'" > ${JMETER_RESULTS}/${testrun_id}-heapconfig.txt
      }
    • check_reporting: this function checks whether the import of test results is desired after the test run. This is controlled by a command line parameter called "REPORTING". If no reporting is desired, only a small processing step is performed for the Hudson analysis. Otherwise the "data_transformation" function is called, which starts the export of data into the database.

      function check_reporting
      {
      if [ "$REPORTING" == "yes" ];
      then
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Analysis Services: Sleeping for 30 seconds before starting data transformation services" | tee -a ${JMETER_RESULTS}/run.log
      data_transformation
      else
      hudson_transformation
      fi
      }
    • log_test_end: this function signals the end of the test by writing a log message to the console and to the test run log file.


      function log_test_end
      {
      echo "${PLACEHOLDER}`date +%H-%M-%S`: LOGGING - Test Finish: Test Scenario Ended" | tee -a ${JMETER_RESULTS}/run.log
      }

In this second article I discussed how to build an integrated load solution with JMeter. I covered topics such as dynamic test plans, test scripts, the test repository, shell functions, monitoring functions, logging functions, etc. I talked about the structure of a test script template and detailed the shell functions called in it. It is now time to dive into the third part, perhaps one of the most interesting in this series of articles:

ETL – EXTRACT TRANSFORM AND LOAD – DATA PROCESSING AND IMPORTING INTO THE DATABASE

This is it for now. Please let me know if you have any questions up to this point, and whether you find this article useful.
Alex

End of part 2 – Load & Performance – Testing and reporting using open source tools – Part 2 – Configuring Load Generators – Generating load with JMeter, storing, filtering and preparing results for importing
