
How to locate a performance problem with AppDynamics

Today we want to introduce you to the Architecture Practice team at Openbravo, a team newly created by the Services Department to offer sizing, architecture and performance services to Openbravo Partners. We are going to explain how a performance problem in a process can be located using the AppDynamics monitoring tool. We already presented AppDynamics in a previous post, and you can find documentation about it in our wiki, so no further introduction is needed. Let’s go into detail.

First of all, we log into AppDynamics and select the instance we are monitoring. Before doing anything else, make sure to set the time window of the data you want to analyze; this can be done at the top right-hand corner of the interface. Then we go to the Troubleshoot >> Slow Response Times section. If you already know something about the transaction you want to examine, you can use the filters to show only transactions of that type. Once filtered, you can sort by the slowest execution times, so that you analyze the worst scenario first.

Slow Response Times

Once this is done, just double-click the transaction and a pop-up will appear where you can drill down into the call stack. Here you can select the Hot Spots view, which does a kind of profiler’s job and tells you which part of the process’ execution consumed the most processing time. In this case, we find that an areBusinessDatesDifferent method takes approximately 141 seconds. To its right, we can see that a SQL query is involved, as JDBC appears in the external calls column. We can click there and another pop-up will appear showing us the query and its execution time: 141 seconds.

Hot Spots
Query

In this case the text of the query is too long for AppDynamics to show it in full. If this happens, there are three ways to get the SQL. First, you can debug your Openbravo code to obtain it. Second, you can set the log_min_duration_statement parameter in PostgreSQL so that slow queries are written to the PostgreSQL log (see the sketch below). Third, you can go to the line of code where AppDynamics says the query is executed and inspect the query, its parameters, why it was executed,… Now that we know the SQL is where most of the time goes in this slow process, we can start analyzing it in order to improve it.
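
Going back to the second option, a minimal sketch of what the PostgreSQL configuration could look like is shown below; the 1000 ms threshold is only an illustrative value, not a recommendation for your instance:

    # postgresql.conf (illustrative excerpt)
    # Statements that run longer than this many milliseconds are written to the
    # PostgreSQL log together with their duration and full SQL text.
    log_min_duration_statement = 1000

After reloading the configuration, the slow query should appear in the log with the complete text that AppDynamics had to truncate.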

As you can see, AppDynamics is a great tool for monitoring and locating performance problems in your Openbravo instance. You can very quickly discover the point in the process that takes the most time to execute. At Openbravo we use it to monitor instances and proactively detect problems. An AppDynamics license is included with the Openbravo Enterprise Edition.

2 Comments

  1. Xavier
    April 21, 2015 at 2:58 PM

    I understand that when you talk about enabling the PostgreSQL log you are referring to development or preproduction environments, since the extra I/O is very hard to accept in a production one in terms of latency.

    Very interesting report anyway.

  2. April 21, 2015 at 6:05 PM

    Hi Xavier, thanks for your comment. We can enable the log in production environments as long as we are careful. The log_min_duration_statement parameter lets us set a time threshold: only queries slower than that value are written to the log. If we choose a conservative value, only the slow queries will appear, and thus the disk should not be affected.
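
    For instance, on recent PostgreSQL versions (9.4 or later) something along these lines can be applied at runtime, without restarting the server; the 5000 ms threshold is only an illustration of a conservative value:

        -- run as a superuser; the value is illustrative
        ALTER SYSTEM SET log_min_duration_statement = 5000;
        SELECT pg_reload_conf();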

    Regards.
