commit 36aba6488cd335f651161be607ac0af240faddc8
tree   255742e4ed87db7bbdcaffca583689f1be9342e0
parent 680e7679b657c7319b580a9b493477c11f775660
author    Mythri Alle <mythria@google.com>  2024-04-16 09:16:05 +0000
committer Mythri Alle <mythria@google.com>  2024-04-18 08:19:37 +0000
Reset the method trace buffer index when handling overflow
In non-streaming cases, we don't record any new entries once an
overflow happens, so we used to return early. This skipped resetting the
trace buffer index, which meant every subsequent method event still
called into the runtime. This is a quick fix to reset the buffer index.
In a follow-up CL we can uninstall the method listeners so we are no
longer notified of method entry / exit events, since we have stopped
recording them.
Bug: 329498538, 259258187
Test: art/test.py --64 --trace
Change-Id: I67a257637404fe75e0a0747bfae5aa10f9fbc1fb
 runtime/trace.cc | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/runtime/trace.cc b/runtime/trace.cc
index dd14e323c3..1f97f155dd 100644
--- a/runtime/trace.cc
+++ b/runtime/trace.cc
@@ -1656,11 +1656,6 @@ void Trace::LogMethodTraceEvent(Thread* thread,
   // method is only called by the sampling thread. In method tracing mode, it can be called
   // concurrently.

-  // In non-streaming modes, we stop recoding events once the buffer is full.
-  if (trace_writer_->HasOverflow()) {
-    return;
-  }
-
   uintptr_t* method_trace_buffer = thread->GetMethodTraceBuffer();
   size_t* current_index = thread->GetMethodTraceIndexPtr();
   // Initialize the buffer lazily. It's just simpler to keep the creation at one place.
@@ -1672,12 +1667,20 @@ void Trace::LogMethodTraceEvent(Thread* thread,
     trace_writer_->RecordThreadInfo(thread);
   }

+  if (trace_writer_->HasOverflow()) {
+    // In non-streaming modes, we stop recoding events once the buffer is full. Just reset the
+    // index, so we don't go to runtime for each method.
+    *current_index = kPerThreadBufSize;
+    return;
+  }
+
   size_t required_entries = GetNumEntries(clock_source_);
   if (*current_index < required_entries) {
     // This returns nullptr in non-streaming mode if there's an overflow and we cannot record any
     // more entries. In streaming mode, it returns nullptr if it fails to allocate a new buffer.
     method_trace_buffer = trace_writer_->PrepareBufferForNewEntries(thread);
     if (method_trace_buffer == nullptr) {
+      *current_index = kPerThreadBufSize;
       return;
     }
   }
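The fix is an instance of a general pattern: when events are being dropped anyway, reset the buffer index so the cheap fast-path check keeps succeeding and the slow path is entered only once per buffer's worth of events, instead of on every event. The sketch below is not ART code: `TraceState`, `LogEvent`, and `CountSlowPathCalls` are hypothetical stand-ins; only `kPerThreadBufSize` and the reset-on-overflow idea come from the diff.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

constexpr size_t kPerThreadBufSize = 8;  // slots in the thread-local buffer

struct TraceState {
  std::array<uintptr_t, kPerThreadBufSize> buffer{};
  size_t index = 0;         // free slots remaining; 0 means "no room"
  bool overflow = false;    // non-streaming buffer has filled up for good
  int slow_path_calls = 0;  // how often we had to leave the fast path
};

// Slow path, loosely analogous to LogMethodTraceEvent: on overflow, the fix
// resets the index so later events reuse the buffer as scratch space (their
// contents are discarded) rather than re-entering the slow path every time.
void LogEventSlowPath(TraceState& s, bool reset_index_on_overflow) {
  s.slow_path_calls++;
  if (s.overflow) {
    if (reset_index_on_overflow) {
      s.index = kPerThreadBufSize;  // the fix from this commit
    }
    return;  // recorded events are dropped either way after overflow
  }
  s.index = kPerThreadBufSize;  // (real code would flush or allocate here)
}

// Fast path, loosely analogous to the check done before calling the runtime.
void LogEvent(TraceState& s, uintptr_t entry, bool reset_index_on_overflow) {
  if (s.index == 0) {
    LogEventSlowPath(s, reset_index_on_overflow);
    if (s.index == 0) return;  // still no room: drop the event
  }
  s.buffer[--s.index] = entry;  // buffer fills from the top down
}

// Simulate logging `events` events after the buffer has already overflowed,
// and count how many of them took the slow path.
int CountSlowPathCalls(int events, bool reset_index_on_overflow) {
  TraceState s;
  s.overflow = true;
  s.index = 0;
  for (int i = 0; i < events; i++) {
    LogEvent(s, static_cast<uintptr_t>(i), reset_index_on_overflow);
  }
  return s.slow_path_calls;
}
```

With the reset, 20 post-overflow events enter the slow path only when the scratch buffer wraps; without it, every single event takes the slow path, which is the behavior the commit message describes fixing.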