author    Denis Nikitin <denik@google.com>  2020-06-02 23:22:00 -0700
committer Denis Nikitin <denik@chromium.org>  2020-06-05 04:47:44 +0000
commit    ad18d3390cf5da60b7143122d0f3789c91929183 (patch)
tree      ddf6317841f5fab8112f02cf2a22229062383641 /crosperf/experiment_runner.py
parent    91c5578c0e8c1bf6aef5151de3b7738f46cee50b (diff)
crosperf: Include PID in topstats
Top statistics was showing commands that could combine multiple processes. To include the PID in topstats, we split the commands into separate processes; the process PID is appended to the command name. The top chrome process is the renderer process running the benchmark. The list size depends on the number of non-chrome processes, which we limit to 5. For example, with 10 chrome processes at the top followed by 10 non-chrome processes, the list will show 15 entries.

BUG=None
TEST=Tested on eve, bob and cheza.

Change-Id: Ibf1e61c8cb522aba13cd51a590bb7e24597f66a6
Reviewed-on: https://chromium-review.googlesource.com/c/chromiumos/third_party/toolchain-utils/+/2227626
Reviewed-by: George Burgess <gbiv@chromium.org>
Tested-by: Denis Nikitin <denik@chromium.org>
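The selection rule described above can be sketched as follows. This is a minimal illustration of the commit message's description, not the actual crosperf code: `select_topstats` and `NON_CHROME_LIMIT` are hypothetical names, the input is assumed to be a list of `(command, pid, cpu)` tuples already sorted by CPU usage, and the `command-PID` naming is an assumed rendering of "PID appended to the command name".

```python
# Hypothetical sketch of the topstats selection described in the commit
# message (NOT the crosperf implementation): commands are split into
# per-PID entries, chrome processes in the top are all kept, and
# non-chrome processes are capped at 5.

NON_CHROME_LIMIT = 5  # cap taken from the commit message


def select_topstats(processes):
  """Pick topstats entries from (command, pid, cpu) sorted by CPU use.

  Returns a list of (name, cpu) where the PID is appended to the
  command name, as the commit message describes.
  """
  entries = []
  non_chrome = 0
  for cmd, pid, cpu in processes:
    if cmd != 'chrome':
      if non_chrome == NON_CHROME_LIMIT:
        continue  # keep scanning in case more chrome processes follow
      non_chrome += 1
    entries.append(('%s-%d' % (cmd, pid), cpu))
  return entries
```

With 10 chrome processes followed by 10 non-chrome processes, this yields 15 entries, matching the example in the commit message.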
Diffstat (limited to 'crosperf/experiment_runner.py')
-rw-r--r--  crosperf/experiment_runner.py  6
1 file changed, 3 insertions, 3 deletions
diff --git a/crosperf/experiment_runner.py b/crosperf/experiment_runner.py
index e2bd50db..21fa3ea0 100644
--- a/crosperf/experiment_runner.py
+++ b/crosperf/experiment_runner.py
@@ -275,8 +275,8 @@ class ExperimentRunner(object):
     all_failed = True
 
     topstats_file = os.path.join(results_directory, 'topstats.log')
-    self.l.LogOutput('Storing top5 statistics of each benchmark run into %s.' %
-                     topstats_file)
+    self.l.LogOutput(
+        'Storing top statistics of each benchmark run into %s.' % topstats_file)
     with open(topstats_file, 'w') as top_fd:
       for benchmark_run in experiment.benchmark_runs:
         if benchmark_run.result:
@@ -291,7 +291,7 @@ class ExperimentRunner(object):
           # Header with benchmark run name.
           top_fd.write('%s\n' % str(benchmark_run))
           # Formatted string with top statistics.
-          top_fd.write(benchmark_run.result.FormatStringTop5())
+          top_fd.write(benchmark_run.result.FormatStringTopCommands())
           top_fd.write('\n\n')
 
     if all_failed: