
Wednesday, 3 September 2014

PeopleSoft Performance Monitor Archiving Performance Issue

***UPDATE: 4 Feb 2015.  PeopleTools 8.53 with (I think) patch 13 solves this performance problem without the need for the fix described below.


I love the PeopleSoft Performance Monitor (PPM).  

Some of the benefits of logging all production and development transactions include:


  • Diagnosing application issues for which the end user didn't provide enough information when they reported a problem.  With PPM you can see exactly what they did and when.
  • Profiling who is using your system, how much, and which parts they use.  This can help focus development effort on the parts of the system that are used most.
  • Troubleshooting performance problems.  Even in standard logging mode you can easily get reports on slow components, slow queries, etc.
  • Discovering the profile of transactions during the development phase.  The precise load on your PeopleSoft systems can be quickly determined by looking at the entire PMU tree.  For example, it helps answer the question of what really happens to your web and application servers when a particular user with 10 pagelets on their homepage logs into the system.


At Oxfam we log every transaction and retain 5 days of history before archiving.  On a busy system you can easily rack up millions of rows in PSPMTRANSHIST before archiving runs, and this can expose a severe performance problem during the archive process.

In the App Engine program PSPM_ARCHIVE, section ARCHIVE.ARCPCODE, there is a function ArchiveTransactions.  This function identifies ALL the rows to be archived using datetime-based criteria and opens a SQL object, &TransHistSQL.  Depending on the number of rows in PSPMTRANSHIST, the number of rows not being archived, and the power of your database server, this SELECT can take anywhere from a few seconds to hours!  The code then iteratively fetches rows from &TransHistSQL until either there are no rows left or the number of rows fetched equals &NumTransInBatch (which is 1,000 for Oracle and SQL Server, 500 for DB2, and 300 for Informix and Sybase).
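The shape of that loop is roughly as follows (a paraphrased sketch, not the exact delivered code; &TransHistSQL and &NumTransInBatch are the real names, but the loop body and the &RowsFetched counter are my own summary):

   /* Paraphrased sketch of ArchiveTransactions - not the exact delivered code */
   &TransHistSQL.Open("SELECT ... FROM PSPMTRANSHIST X, ...");  /* selects ALL eligible rows - this is the slow step */
   &RowsFetched = 0;
   While &TransHistSQL.Fetch(&InstanceID /*, ... remaining columns */) And
         &RowsFetched < &NumTransInBatch
      /* copy the row to the archive table and mark it for deletion */
      &RowsFetched = &RowsFetched + 1;
   End-While;

The key point is that the expensive Open happens once per batch, yet each batch consumes at most &NumTransInBatch rows of the cursor.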

If the open of &TransHistSQL takes a long time and you have millions of rows then the whole archiving process may take days to finish.  

A quick and dirty fix is to modify the opening SELECT statement so that it only returns the number of rows that will be archived in a batch.  I've used the TOP n operator in Microsoft SQL Server for this BUT THIS WON'T WORK WITH ORACLE.  I think Oracle users will need to add a new criterion, something like WHERE ROWNUM <= &NumTransInBatch.  After all, why open a cursor on millions of rows when you're only going to process 1,000 of them?

The new code for SQL Server is below; the change is the addition of the TOP clause at the start of the SELECT.  For us at Oxfam, this improved archiving performance from 50 minutes per batch to 2 seconds.

/*-- SELECT THE ROWS FROM PSPMTRANSHIST ELIGIBLE FOR ARCHIVING */
   &TransHistSQL.Open("SELECT TOP " | &NumTransInBatch | " X.PM_INSTANCE_ID, X.PM_TRANS_DEFN_SET, X.PM_TRANS_DEFN_ID, X.PM_AGENTID, X.PM_TRANS_STATUS, X.OPRID, X.PM_PERF_TRACE, X.PM_CONTEXT_VALUE1, X.PM_CONTEXT_VALUE2, X.PM_CONTEXT_VALUE3, X.PM_CONTEXTID_1, X.PM_CONTEXTID_2, X.PM_CONTEXTID_3, X.PM_PROCESS_ID, %DateTimeOut(X.PM_AGENT_STRT_DTTM), %DateTimeOut(X.PM_MON_STRT_DTTM), X.PM_TRANS_DURATION, X.PM_PARENT_INST_ID, X.PM_TOP_INST_ID, X.PM_METRIC_VALUE1, X.PM_METRIC_VALUE2, X.PM_METRIC_VALUE3, X.PM_METRIC_VALUE4, X.PM_METRIC_VALUE5, X.PM_METRIC_VALUE6, X.PM_METRIC_VALUE7, X.PM_ADDTNL_DESCR, Z.PM_ARCHIVE_MODE FROM PSPMTRANSHIST X, PSPMAGENT Y, PSPMSYSDEFN Z WHERE X.PM_AGENTID=Y.PM_AGENTID AND Y.PM_SYSTEMID=Z.PM_SYSTEMID AND (Z.PM_ARCHIVE_MODE='1' OR Z.PM_ARCHIVE_MODE='2') AND %DateTimeDiff(X.PM_MON_STRT_DTTM, %CurrentDateTimeIn) >= (PM_MAX_HIST_AGE * 24 * 60)");
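For Oracle, a sketch of the equivalent change might be to drop the TOP clause and append a ROWNUM criterion to the end of the WHERE clause instead (an untested assumption on my part, per the caveat above; the column list and the rest of the statement are unchanged from the delivered code, so only the tail of the Open string is shown):

   /* Oracle variant (sketch, untested): delivered SELECT with one extra criterion */
   &TransHistSQL.Open("SELECT X.PM_INSTANCE_ID, /* ... same column list as delivered ... */ Z.PM_ARCHIVE_MODE FROM PSPMTRANSHIST X, PSPMAGENT Y, PSPMSYSDEFN Z WHERE X.PM_AGENTID=Y.PM_AGENTID AND Y.PM_SYSTEMID=Z.PM_SYSTEMID AND (Z.PM_ARCHIVE_MODE='1' OR Z.PM_ARCHIVE_MODE='2') AND %DateTimeDiff(X.PM_MON_STRT_DTTM, %CurrentDateTimeIn) >= (PM_MAX_HIST_AGE * 24 * 60) AND ROWNUM <= " | &NumTransInBatch);

Since there is no ORDER BY in the delivered statement, ROWNUM in the WHERE clause should simply cap the cursor at the batch size, which is all the loop will consume anyway.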




