# ART Performance Tests



## General repository information

The top-level directory contains scripts used to build, run, and compare the
results of the Java benchmarks and the APK compilation process statistics.
Other tools live under `tools/<tool>`, for example to gather memory statistics
or profiling information. See the [Tools](#tools) section.

All scripts must include a `--help` or `-h` command-line option displaying
a useful help message.


## Running

### Running via the script helper

Statistics can be obtained with the `run.py` script on host with

    ./run.py

To obtain the results on target, `dx` and `adb` need to be available in your
`PATH`. This will be the case if you run from an Android build environment.

    ./run.py --target
    ./run.py --target=<adb target device>

`run.py` provides multiple options, for example:

    ./run.py --target --iterations=5


### Running manually

First, build the benchmarks:

    ./build.sh

On host:

    cd build/classes
    java org/linaro/bench/RunBench --help
    # Run all the benchmarks.
    java org/linaro/bench/RunBench
    # Run a specific benchmark.
    java org/linaro/bench/RunBench benchmarks/micro/Base64
    # Run a specific sub-benchmark.
    java org/linaro/bench/RunBench benchmarks/micro/Base64.Encode
    # Run the specified class directly without auto-calibration.
    java benchmarks/micro/Base64

And similarly on target:

    cd build/
    adb push bench.apk /data/local/tmp
    adb shell "cd /data/local/tmp && dalvikvm -cp /data/local/tmp/bench.apk org/linaro/bench/RunBench"
    adb shell "cd /data/local/tmp && dalvikvm -cp /data/local/tmp/bench.apk org/linaro/bench/RunBench benchmarks/micro/Base64"
    adb shell "cd /data/local/tmp && dalvikvm -cp /data/local/tmp/bench.apk org/linaro/bench/RunBench benchmarks/micro/Base64.Encode"
    adb shell "cd /data/local/tmp && dalvikvm -cp /data/local/tmp/bench.apk benchmarks/micro/Base64"


### Comparing the results

The results of `run.py` can be compared using `compare.py`.


    ./run.py --target --iterations=10 --output-pkl=/tmp/res1.pkl
    ./run.py --target --iterations=10 --output-pkl=/tmp/res2.pkl
    ./compare.py /tmp/res1.pkl /tmp/res2.pkl



## Tools

This repository includes other development tools and utilities.

### Benchmarks

The `run.py` and `compare.py` scripts in `tools/benchmarks` collect and
compare the run times of the Java benchmarks. Their options are similar to
those of the top-level scripts. See `tools/benchmarks/run.py --help` and
`tools/benchmarks/compare.py --help`.

### Compilation statistics

The `run.py` and `compare.py` scripts in `tools/compilation_statistics` collect
and compare statistics about the APK compilation process on target.
Their options are similar to those of the top-level scripts.
See `tools/compilation_statistics/run.py --help` and
`tools/compilation_statistics/compare.py --help`.

### Profiling

The `tools/perf` directory includes tools to profile the Java benchmarks on
target and generate an HTML report. See `tools/perf/PERF.README` for details.



## How to Write a Benchmark

Each set of related benchmarks is implemented as a Java class and kept in the
`benchmarks/` directory.

Before contributing, make sure that `test/test.py` passes.

### Rules

1. Test method names start with "time" -- the test launcher finds all
   `timeXXX()` methods and runs them.
2. Verification method names start with "verify" -- all boolean `verifyXXX()`
   methods are run to check that the benchmark is working correctly.
3. Leave the iteration count as a parameter -- the test launcher fills it with
   a value that makes the benchmark run for a reasonable duration.
4. Benchmarks should take between 5 and 10 seconds to run.
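
The iteration count mentioned in rule 3 is chosen by the test launcher at run
time. One plausible calibration scheme (a sketch only; the actual logic lives
in `org.linaro.bench.RunBench` and may differ) is to double the iteration
count until a single run exceeds a time target:

```java
public class CalibrationSketch {
    static volatile long sink;  // Defeat dead-code elimination of the workload.

    // Stand-in for a time* method: runs the workload iters times and
    // returns the elapsed time in nanoseconds.
    static long runOnce(int iters) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < iters; i++) {
            sum += i;
        }
        sink = sum;
        return System.nanoTime() - start;
    }

    // Double the iteration count until one run takes at least targetNs.
    static int calibrate(long targetNs) {
        int iters = 1;
        while (runOnce(iters) < targetNs && iters < (1 << 30)) {
            iters *= 2;
        }
        return iters;
    }

    public static void main(String[] args) {
        // Aim for roughly 10 ms per run.
        System.out.println("calibrated iterations: " + calibrate(10_000_000L));
    }
}
```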

### Example

    public class MyBenchmark {
           private int[] testAddResults;

           public static void main(String[] args) {
                  MyBenchmark b = new MyBenchmark();
                  long before = System.currentTimeMillis();
                  b.timeTestAdd(1000);
                  b.timeSfib(1000);
                  long after = System.currentTimeMillis();
                  System.out.println("MyBenchmark: " + (after - before));
           }

    //                  +----> test method prefix should be "time..."
    //                  |
    // ignored <---+    |              +-------> No need to set iterations. The
    //             |    |              |         test framework will pick a
    //             |    |              |         reasonable value automatically.
    //             |    |              |
           public int timeTestAdd(int iters) {
                  testAddResults = new int[iters];
                  int result = 0;
                  for (int i = 0; i < iters; i++) {
                      // Test code.
                      testAddResults[i] = i + i;
                  }
                  return result;
           }

           public boolean verifyTestAdd() {
                  // Check the contents of testAddResults[].
                  for (int i = 0; i < testAddResults.length; i++) {
                      if (testAddResults[i] != i + i) {
                          return false;
                      }
                  }
                  return true;
           }

    // To set the iteration count yourself, annotate the method like this:

    //    Don't warm up the test <-----+               +---------> Your choice
    //                                 |               |
           @IterationsAnnotation(noWarmup=true, iterations=600)
           public long timeSfib(int iters) {
                  long sum = 0;
                  for (int i = 0; i < iters; i++) {
                      sum += sfib(20);
                  }
                  return sum;
           }

           private static long sfib(int n) {
                  // Recursive Fibonacci used as the workload.
                  return n < 2 ? 1 : sfib(n - 1) + sfib(n - 2);
           }
    }

    // Please refer to existing benchmarks for further examples.
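
The `@IterationsAnnotation` used above is provided by the benchmark framework.
For reference, here is a minimal sketch of what such an annotation declaration
could look like and how a launcher could read it via reflection; this is
hypothetical, and the real definition in the framework sources may differ:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical sketch; see the framework sources for the real definition.
@Retention(RetentionPolicy.RUNTIME)  // Must be visible to the launcher at run time.
@interface IterationsAnnotation {
    boolean noWarmup() default false;
    int iterations() default 0;
}

public class AnnotationDemo {
    @IterationsAnnotation(noWarmup = true, iterations = 600)
    public long timeSfib(int iters) {
        return iters;
    }

    public static void main(String[] args) throws Exception {
        // The launcher would discover the annotation via reflection:
        IterationsAnnotation a = AnnotationDemo.class
                .getMethod("timeSfib", int.class)
                .getAnnotation(IterationsAnnotation.class);
        System.out.println(a.noWarmup() + " " + a.iterations());  // prints "true 600"
    }
}
```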


## Test Suite Details

TODO: Detail all benchmarks here, especially what they are intended to achieve.

### e.g. Raytrace

Description, License (if any), Main Focus, Secondary Focus, Additional Comments

### Control Flow Recursive

Control Flow Recursive is ported from:
https://github.com/WebKit/webkit/blob/master/PerformanceTests/SunSpider/tests/sunspider-1.0.2/controlflow-recursive.js

The license is the Revised BSD license:
http://benchmarksgame.alioth.debian.org/license.html

### HashMapBench

A hash map benchmark, converted from:
http://browserbench.org/JetStream/sources/hash-map.js

License is Apache 2.0.

### BitfieldRotate

Large portions Copyright (c) 2000-2015 The Legion of the Bouncy Castle Inc. (http://www.bouncycastle.org)

See BitfieldRotate.java header for license text.

The license is BSD-like.