author    zhizhouy <zhizhouy@google.com>        2020-03-24 17:11:45 -0700
committer Commit Bot <commit-bot@chromium.org>  2020-03-26 03:46:38 +0000
commit    7ee1e5dc3841f8e521b45607bd07567fb9f7863f (patch)
tree      bdc0f9f87e7b208634f6db9747a2aab432d2bd56 /crosperf/results_cache_unittest.py
parent    d511f2bc35d35f203ab500d93d57c328b50529e5 (diff)
download  toolchain-utils-7ee1e5dc3841f8e521b45607bd07567fb9f7863f.tar.gz
crosperf: raise error at exit when benchmarks fail to run
Crosperf always returns 0, regardless of whether benchmarks fail or the
experiment is interrupted partway through. As a result, the exit status
cannot tell us whether a run succeeded, which makes nightly test
failures hard to find.
In this patch, I changed crosperf's return behavior:
1) Crosperf will not generate any report or send email if it is
terminated or if all benchmarks fail. At the end it raises a
RuntimeError stating that all benchmarks failed.
2) Crosperf will generate a report if some (but not all) benchmarks
fail, and will raise a RuntimeError stating that benchmarks partially
failed.
3) Crosperf will also copy the results json files to the local results
directory for further information.
BUG=chromium:1063703
TEST=Passed all unittests, tested with different failure situations.
Change-Id: I998bad51cd7301b9451645d22e8734963bc01aed
Reviewed-on: https://chromium-review.googlesource.com/c/chromiumos/third_party/toolchain-utils/+/2119231
Reviewed-by: Caroline Tice <cmtice@chromium.org>
Commit-Queue: Zhizhou Yang <zhizhouy@google.com>
Tested-by: Zhizhou Yang <zhizhouy@google.com>
Auto-Submit: Zhizhou Yang <zhizhouy@google.com>
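The exit-status policy described in points 1)-3) above can be sketched as follows. This is a minimal illustration, not crosperf's actual code: the function name `finalize`, the exception subclasses, and the callback parameters are all hypothetical.

```python
# Hedged sketch of the exit-status policy (hypothetical names, not the
# real crosperf implementation).

class AllFailedError(RuntimeError):
  """Raised when every benchmark run fails; no report is generated."""

class PartiallyFailedError(RuntimeError):
  """Raised when some, but not all, benchmark runs fail."""

def finalize(num_benchmarks, num_failed, generate_report, send_email):
  """Apply the policy: report only if at least one benchmark succeeded."""
  if num_failed == num_benchmarks:
    # 1) All benchmarks failed: no report, no email, raise at exit so
    # the process returns a nonzero status.
    raise AllFailedError('All benchmarks failed to run.')
  generate_report()
  send_email()
  if num_failed > 0:
    # 2) Partial failure: the report is still generated, then an error
    # is raised so nightly tests can detect the failure.
    raise PartiallyFailedError(
        '%d of %d benchmarks failed.' % (num_failed, num_benchmarks))
```

Either way the process exits nonzero on failure, which is what the nightly test harness keys off.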
Diffstat (limited to 'crosperf/results_cache_unittest.py')
-rwxr-xr-x  crosperf/results_cache_unittest.py  12
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/crosperf/results_cache_unittest.py b/crosperf/results_cache_unittest.py
index 1e7f04a1..ed6ff95b 100755
--- a/crosperf/results_cache_unittest.py
+++ b/crosperf/results_cache_unittest.py
@@ -501,6 +501,9 @@ class ResultTest(unittest.TestCase):
 
   @mock.patch.object(Result, 'CopyFilesTo')
   def test_copy_results_to(self, mockCopyFilesTo):
+    results_file = [
+        '/tmp/result.json.0', '/tmp/result.json.1', '/tmp/result.json.2'
+    ]
     perf_data_files = [
         '/tmp/perf.data.0', '/tmp/perf.data.1', '/tmp/perf.data.2'
     ]
@@ -508,16 +511,19 @@ class ResultTest(unittest.TestCase):
         '/tmp/perf.report.0', '/tmp/perf.report.1', '/tmp/perf.report.2'
     ]
 
+    self.result.results_file = results_file
     self.result.perf_data_files = perf_data_files
     self.result.perf_report_files = perf_report_files
     self.result.CopyFilesTo = mockCopyFilesTo
     self.result.CopyResultsTo('/tmp/results/')
-    self.assertEqual(mockCopyFilesTo.call_count, 2)
-    self.assertEqual(len(mockCopyFilesTo.call_args_list), 2)
+    self.assertEqual(mockCopyFilesTo.call_count, 3)
+    self.assertEqual(len(mockCopyFilesTo.call_args_list), 3)
     self.assertEqual(mockCopyFilesTo.call_args_list[0][0],
-                     ('/tmp/results/', perf_data_files))
+                     ('/tmp/results/', results_file))
     self.assertEqual(mockCopyFilesTo.call_args_list[1][0],
+                     ('/tmp/results/', perf_data_files))
+    self.assertEqual(mockCopyFilesTo.call_args_list[2][0],
                      ('/tmp/results/', perf_report_files))
 
   def test_get_new_keyvals(self):
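The test above now expects `CopyResultsTo` to make three `CopyFilesTo` calls: results JSON files first, then perf data, then perf reports. A minimal sketch of the method shape the test implies is below; this is an illustration of the expected call pattern, not the real `Result` class from results_cache.py, and `CopyFilesTo`'s body is stubbed out.

```python
# Hedged sketch of the call pattern the test verifies (not the actual
# Result implementation in crosperf/results_cache.py).

class Result:
  def __init__(self):
    self.results_file = []
    self.perf_data_files = []
    self.perf_report_files = []

  def CopyFilesTo(self, dest, files):
    # The real method copies each file into dest; stubbed here because
    # the test replaces it with a mock anyway.
    pass

  def CopyResultsTo(self, dest):
    # One CopyFilesTo call per file group, results JSON first: this is
    # why the test asserts call_count == 3 and checks call_args_list
    # indices 0, 1, and 2 in that order.
    self.CopyFilesTo(dest, self.results_file)
    self.CopyFilesTo(dest, self.perf_data_files)
    self.CopyFilesTo(dest, self.perf_report_files)
```

In the unittest, `mock.patch.object(Result, 'CopyFilesTo')` replaces the copy helper, so `call_args_list[i][0]` exposes the positional arguments of the i-th call for the assertions.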