Change-Id: Ia7d49acf78f4ed462f749a2cc18d939e4ccf3dfe

am: 9bc36c5df3
Change-Id: I76659a0a943486b9b81d149c7fa698ca359cf923

am: 31ccc08192 -s ours
Change-Id: I3f7d4c3f04a22b1554d05fb094d34e501021642f

Exempt-From-Owner-Approval: Changes already landed internally
Change-Id: I7e8a2abdcdc3d5950a3a6381b85c39eebb82f9b7

am: e14a5f72b3
Change-Id: Ic49d389c5bdafe8b9020521fc49896168b18d6cd

am: 0cdeec25b1
Change-Id: I2b8082869a39152f23b2803b61638930d4de98c2

am: ae63082056
Change-Id: Ib92067628dfff40745b67b63c174e1e8fa1a18e9

BUG:67772237
Change-Id: I028546d1996b3b672a35707ddb1174e42b96eec1

am: bcf415db9d
Change-Id: Ice3f6c94da7f23c6990de263276701336ca8fc3e

Change-Id: I53619bae55db81496ce22acba364c5e603944168

Change-Id: If1751d881f28c00343197ff14e3c98bdc342fe81

Change-Id: I1c291fba04d70b5a5613a495a404bc38de76e1fd

e3e1107fc6 am: 86f459097d
am: 42099c645e
Change-Id: I86d1835828a119f1cde9c37705801d22a799be89

am: 86f459097d
Change-Id: I14a32befed91b0d31dd4452c536afc0877c9dc88

am: e3e1107fc6
Change-Id: I3947109be160fdd2a0f6aeeb77d157b4d1e4f02a
parse traces from ext4 events

Adds trace parsing for the ext4_da_write_begin, ext4_da_write_end,
ext4_sync_file_enter, and ext4_sync_file_exit events.
Test: run the 4 new unit tests (nosetests tests/test_filesystem.py)
Change-Id: I9b7049d5e9879cc3dedb5fd9d3d1433fa8304618
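The kind of parsing this commit adds can be sketched as follows. The payload layout and field names below are illustrative assumptions, not trappy's actual event classes or the exact ftrace format:

```python
# Hypothetical ext4_da_write_begin payload; the 'key value' field layout
# is assumed for illustration only.
LINE = "dev 253,0 ino 1234 pos 8192 len 4096 flags 0"

def parse_ext4_payload(payload):
    """Split space-separated 'key value' pairs, converting ints where possible."""
    fields = {}
    tokens = payload.split()
    for key, value in zip(tokens[::2], tokens[1::2]):
        try:
            fields[key] = int(value)
        except ValueError:
            fields[key] = value  # e.g. 'dev 253,0' stays a string
    return fields
```

For the line above this yields a dict with `ino`, `pos`, and `len` as integers and `dev` kept as the string `"253,0"`.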
parse traces from the common clock infrastructure

Traces from clock_set_rate, clock_enable, and clock_disable need
specialized parsing; add a feature to parse these traces.
Test: run the 3 new unit tests (nosetests tests/test_common_clk.py)
Change-Id: Ib93b72697fc4d5eb30cffb914bbe0cb4c4cd872d
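A sketch of why these events need specialized handling: unlike plain key=value events, the clock events carry the clock name as a bare leading token. The payload shape below is an assumption for illustration, not the verified ftrace format:

```python
import re

# Hypothetical clock_set_rate payload: a bare clock name followed by
# key=value pairs. Layout assumed for illustration.
def parse_clock_payload(payload):
    """First token is the clock name; the rest are key=value pairs."""
    name, _, rest = payload.partition(" ")
    fields = {"clk_name": name}
    for key, value in re.findall(r"(\w+)=(\S+)", rest):
        fields[key] = int(value) if value.isdigit() else value
    return fields
```

The leading name token is what a generic key=value parser would drop, hence the dedicated parsing path.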
bare_trace: Fix get_duration() for window use

The previous implementation only looked at the maximum timestamp
of the traces. However, if trace windows are used to analyse a
subset of the full trace, get_duration() would return an
erroneous value (times are normalized):

    trace = <trace built with window=[5, 10]>
    print trace.get_duration()
    > 10.0

This commit adds a lower-bound lookup to fix this issue.
tests/test_baretrace.py has also been updated to account
for this change.
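The fix can be sketched with a toy class (illustrative only, not trappy's actual BareTrace internals): the duration must span min-to-max timestamp rather than assume the trace starts at time 0.

```python
class Trace:
    """Toy stand-in for a trace holding event timestamps."""
    def __init__(self, timestamps):
        self.timestamps = sorted(timestamps)

    def get_duration_old(self):
        # buggy: for a window of [5, 10] this reports 10.0
        return max(self.timestamps)

    def get_duration(self):
        # fixed: include a lower-bound lookup
        return max(self.timestamps) - min(self.timestamps)
```

With a windowed trace whose timestamps run from 5.0 to 10.0, the old version returns 10.0 while the fixed one returns 5.0.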
There are some issues with caching at the moment that I haven't been
able to debug but that are confirmed to affect multiple people. Until
we can find the time to fix them, let's disable caching by default.
Classes that implement the generate_data_dict method take an additional
argument in their signature. This leads to errors when the base-class
method is called with that extra argument.
This patch also adds two tests for FTrace and SysTrace parsing.
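The mismatch described above is a general Python pitfall; a minimal illustration (class and method names here mirror the description but are otherwise hypothetical):

```python
class Base:
    def generate_data_dict(self, data_str):
        return {"data": data_str}

class Sub(Base):
    # widens the signature with an extra argument
    def generate_data_dict(self, comm, data_str):
        d = super(Sub, self).generate_data_dict(data_str)
        d["comm"] = comm
        return d

def dispatch(obj, comm, data_str):
    # only works if every class accepts both arguments
    return obj.generate_data_dict(comm, data_str)
```

`dispatch(Sub(), "sh", "x=1")` works, but `dispatch(Base(), "sh", "x=1")` raises `TypeError` because the base class never grew the extra parameter.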
DataFrame.sort_values

To prevent an annoying warning.
Change-Id: I60f006a3c806864939019a90d12b236487fb51aa
Signed-off-by: Joel Fernandes <joelaf@google.com>
Change-Id: I583e7c899074be772394a977e9e6526277dd7647
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
There's a bug in trappy caching where corruption or removal of a single
CSV causes all CSVs to be removed. Fix this and add a test case to
cover it. Also test that only missing items are regenerated.
Change-Id: Iea41c8e0a005515790b580e7f78f06bdd60141a3
Signed-off-by: Joel Fernandes <joelaf@google.com>
Reviewed-by: KP Singh <kpsingh@google.com>
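The intended behaviour can be sketched as follows; the path layout and helper names are hypothetical, not trappy's actual cache implementation:

```python
import os

# Regenerate only the cache entries that are missing, instead of
# discarding the whole cache when a single CSV is gone.
def load_cache(cache_dir, events, parse_event):
    dfs = {}
    for event in events:
        path = os.path.join(cache_dir, event + ".csv")
        if os.path.exists(path):
            # cache hit: reuse the stored CSV (stand-in for read_csv)
            with open(path) as f:
                dfs[event] = f.read()
        else:
            # cache miss: re-parse just this one event and store it
            dfs[event] = parse_event(event)
            with open(path, "w") as f:
                f.write(dfs[event])
    return dfs
```

The test the commit describes then asserts that `parse_event` is invoked only for the events whose CSVs were missing.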
Change-Id: I238bb50ed1907def19b23b0610eec87234ef4d51
Signed-off-by: Joel Fernandes <joelaf@google.com>
Change-Id: I3e8ff4d9d11220cddeac955e9a949fc3464ecc36
Signed-off-by: Joel Fernandes <joelaf@google.com>
Forward propagate the secondary DF into the primary DF and return the
merged DF.
Implements: https://github.com/ARM-software/trappy/issues/250
Change-Id: I312d77302bbca8bb13bfa598785ebc0cc879fe34
Signed-off-by: Joel Fernandes <joelaf@google.com>
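Forward propagation here means each primary sample picks up the most recent secondary value at or before its timestamp. A pure-Python sketch (trappy itself operates on pandas DataFrames; lists of (time, value) pairs stand in here):

```python
import bisect

def merge_ffill(primary, secondary):
    """For each primary (time, value), attach the latest secondary
    value at or before that time (None if there is none yet)."""
    sec_times = [t for t, _ in secondary]
    merged = []
    for t, v in primary:
        i = bisect.bisect_right(sec_times, t) - 1
        sec_v = secondary[i][1] if i >= 0 else None
        merged.append((t, v, sec_v))
    return merged
```

For `primary = [(1.0, 10), (2.0, 20), (3.0, 30)]` and `secondary = [(1.5, 55)]`, the first primary row gets `None` (no secondary value yet) and the later rows get the forward-filled `55`.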
Trappy cache fixes

- json params loading represents keys as unicode; use dumps to fix it.
- finalize_objects needs to run even when some events are not cached;
  fix this by moving it up, and address bjackman's concern that we run
  it twice by skipping finalize for cached events.
- Fix the time-normalization breakage resulting from the above changes.
Change-Id: I2011de0ae8112e937ee61baee8d53a63d0bbe85a
Signed-off-by: Joel Fernandes <joelaf@google.com>
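The first bullet refers to a Python 2 behaviour: `json.loads` yields unicode keys, so a str-keyed params dict never compares equal to its cached copy. Round-tripping both sides through `json.dumps` gives a canonical string to compare instead. A minimal sketch (function name hypothetical):

```python
import json

def params_match(current_params, cached_params):
    """Compare params via their canonical JSON serialisation, sidestepping
    str-vs-unicode key mismatches on Python 2."""
    canon = lambda p: json.dumps(p, sort_keys=True)
    return canon(current_params) == canon(cached_params)
```

`sort_keys=True` makes the serialisation order-independent, so dicts built in different orders still compare equal.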
Change-Id: Ie79698d90e0406cc11c52d364144ec08c33dfac4
Signed-off-by: Joel Fernandes <joelaf@google.com>
This reverts commit 9493bfaba69108b16db1ee21c53c7f0f95eec5ba.

This reverts commit c9243e261fb37be1b1149ae6d111e18745b75959.

This reverts commit c0a49a16d85ef5b732fafe02faad73c67df7b665.

This reverts commit 2b9b25c74416328a0f68af5b60b73534e4ccbd02.
Let's move this out of the main parsing loop so that the code is easier
to read.
Suggested-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Joel Fernandes <joelaf@google.com>
Reviewed-by: KP Singh <kpsingh@google.com>
generate_data_dict uses try/except for integer conversion. This is
expensive considering the frequency of the calls. Optimize it by using
a slightly uglier but much faster version of the same check. With this
I see a ~8.5% speedup in parsing.
Change-Id: I909ad170756fd284c7d924950945e755880ceb5f
Signed-off-by: Joel Fernandes <joelaf@google.com>
Reviewed-by: KP Singh <kpsingh@google.com>
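A sketch of the trade-off (exact trappy code not shown here): most field values are not integers, so raising and catching ValueError on every field pays the exception-machinery cost in the common case. A cheap character check first avoids it.

```python
def to_int_slow(value):
    """Readable but slow: raises ValueError for every non-int value."""
    try:
        return int(value)
    except ValueError:
        return value

def to_int_fast(value):
    """Uglier, but skips exception handling in the common non-int case."""
    if value.isdigit() or (value[:1] == "-" and value[1:].isdigit()):
        return int(value)
    return value
```

Both functions agree on all string inputs; the fast variant simply replaces the exception path with two string-method checks.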
I found that trappy spends a lot of time looking for the array pattern
"{}". The vast majority of trace lines don't have it. Instead of running
the regex every time, check for the "={}" pattern using the 'in'
operator, which is much faster than the regex for the common case, and
only then do the regex sub if needed. The speedup is huge: in my test
run, the parsing stage on a 100MB trace went from 13s to 10.5s.
Change-Id: I7ae68efc99eaf8844624871be2a4d66a7820a9b0
Signed-off-by: Joel Fernandes <joelaf@google.com>
Reviewed-by: KP Singh <kpsingh@google.com>
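The fast-path idea can be sketched as below; the exact pattern and the replacement text are illustrative assumptions, not trappy's real substitution:

```python
import re

# Matches an '={...}' array value in a trace line (pattern illustrative).
ARRAY_RE = re.compile(r"=\{[^}]*\}")

def strip_arrays(line):
    if "={" not in line:        # common case: no array, skip the regex
        return line
    return ARRAY_RE.sub("=ARRAY", line)
```

The substring membership test is a single C-level scan, so the regex engine only runs on the rare lines that actually contain an array value.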
It's not necessary to check for the unique word twice; reusing the
'class detection' path for this purpose is sufficient. This improves
the performance of the parsing stage by 3-6%.
Change-Id: Iff8ebd0086d2ccc10bec14e3d40a63f3c04aaf1a
Signed-off-by: Joel Fernandes <joelaf@google.com>
Reviewed-by: KP Singh <kpsingh@google.com>