TestRandomGraph_RandomGraphTest_LargeGraph_TENSOR_FLOAT32_Rank3_44
generates a graph with a SIN operator followed by a FLOOR operator. The
small accuracy mismatch from the SIN operator may be amplified by the
FLOOR operator, causing a test failure for an error that should be
tolerated. This CL disables this particular test.
Fixes: 192049606
Test: NNAPI CTS
Change-Id: I6863cb6f476565d73396677bbbe6982c63270d94
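The amplification is easy to see with plain floats; a toy sketch (not the actual RGG graph) in which a tiny SIN-style error crosses an integer boundary and FLOOR turns it into an off-by-one result:

```python
import math

# sin(pi/2) is exactly 1.0; simulate a tiny accuracy mismatch such as
# one produced by a driver's SIN kernel.
exact = math.sin(math.pi / 2)   # 1.0
approx = exact - 1e-7

# The SIN error itself is well within a reasonable tolerance...
assert abs(approx - exact) < 1e-6
# ...but FLOOR amplifies it across the integer boundary.
print(math.floor(exact), math.floor(approx))  # prints "1 0"
```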
Bug: 155842363
Test: CTS
Change-Id: I1bba976a778ad439c137e02193fbc27ea598ffc0
Fixes: 142102702
Test: CTS
Change-Id: I5387b7b7b77459e5debdd153f398e541f712f65a
avg_pool_v1_2 computes an average pooling with a window size of
~50*50. All input elements are 1, so each output is expected to be
~2500/2500 = 1.0f. However, in FP16, 2048 + 1 = 2048, so the actual
output is only ~2048/2500.
Bug: 145164694
Test: NNT_static
Change-Id: Iba615c7e6c169eebdb44e9fbc684388a9317116d
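The FP16 behavior can be verified with Python's half-precision struct format ('e'); a sketch of the saturating accumulation (illustrative, not the NNAPI reference implementation):

```python
import struct

def to_fp16(x: float) -> float:
    """Round a float to the nearest representable fp16 value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# fp16 has a 10-bit mantissa: above 2048 the spacing between
# representable values is 2, so 2048 + 1 rounds back to 2048.
assert to_fp16(2048.0 + 1.0) == 2048.0

# Accumulate 2500 ones in fp16, as an fp16 average pool would:
acc = 0.0
for _ in range(2500):
    acc = to_fp16(acc + 1.0)
print(acc / 2500)   # prints "0.8192", i.e. 2048/2500, not 1.0
```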
Fixes: 143412014
Test: NNT_static_fuzzing
Change-Id: Ia5bdbd49d4c7ab565fb9752289827cfacc0b9f68
- Fix CTS/VTS generated tests.
- Skip RGG tests with this operator.
Bug: 142558844
Test: NNT_static
Change-Id: I7b0423c4a3adcf4f75a8987badd1569bb8f28f17
Bug: 135624375
Bug: 140616719
Test: mma
Change-Id: I667c17a10a0c958a370c5ac0a7f8efcc78115170
The crash would happen if auxiliary inputs were provided with a shape
of fewer than 3 dimensions. This is the case when an optional input is
marked as omitted.
The tests weren't catching this behaviour because they used a three- or
two-dimensional all-zero shape instead of a plain one-dimensional zero.
This change modifies the tests to use a one-dimensional shape.
Fix: 135242001
Test: NeuralNetworksTest_static
Change-Id: I47168d6a59445d2591a3c25ce7bbd4e17e6b2547
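A minimal sketch of the shape change (the operand representation here is hypothetical, for illustration only):

```python
# Hypothetical test-operand representation, for illustration.
omitted_old = {"shape": [0, 0, 0], "data": []}  # what the tests used
omitted_new = {"shape": [0], "data": []}        # what they use now

# A [0, 0, 0] shape still reports rank 3, so code paths that handle
# fewer than 3 dimensions were never exercised by the old tests.
assert len(omitted_old["shape"]) == 3
assert len(omitted_new["shape"]) < 3
```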
The quant8 results should be clamped to [0, 255] before the static
cast to uint8_t.
Bug: 134800353
Test: NeuralNetworksTest_static_fuzzing
Change-Id: I1bc8f386782c67158edd3da2f663fee5746dda48
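A sketch of the fix (the function name is illustrative): clamp first, then convert.

```python
def saturate_cast_uint8(value: float) -> int:
    """Clamp to [0, 255] before converting, instead of a raw cast."""
    return int(min(255.0, max(0.0, round(value))))

# A raw cast of an out-of-range quantized result wraps or is undefined
# behavior; clamping pins it to the boundary instead.
assert saturate_cast_uint8(300.7) == 255
assert saturate_cast_uint8(-5.0) == 0
assert saturate_cast_uint8(128.4) == 128
```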
* changes:
Relax the tolerable range for quant and boolean values.
Add RandomOperand flags: doNotCheckAccuracy and doNotConnect.
Before this CL, the quant tolerable range was too strict for complex
operations such as HEATMAP_MAX_KEYPOINT. The absolute tolerance, bias,
and MSE criteria for quantized tensors are slightly relaxed.
Before this CL, the accuracy checker did not allow any boolean value
mismatch. However, there are cases where two floating point values are
very close to each other and the result of a comparison operation, e.g.
GREATER, ends up flipped because of accumulated error. With this CL, we
only require that the number of mismatches not exceed a certain ratio.
Bug: 134801089
Test: NNT_static_fuzzing
Change-Id: I7faabafce91b245f525b4ef39736862d82f38edc
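A sketch of the relaxed boolean check (names and the 1% default threshold are illustrative, not the actual checker's values):

```python
def check_bool_outputs(expected, actual, max_mismatch_ratio=0.01):
    """Pass if the fraction of mismatched booleans stays under a bound,
    rather than requiring an exact match."""
    mismatches = sum(e != a for e, a in zip(expected, actual))
    return mismatches / len(expected) <= max_mismatch_ratio

expected = [True] * 1000
actual = [True] * 995 + [False] * 5   # a few flips near the boundary

assert check_bool_outputs(expected, actual)            # 0.5% tolerated
assert not check_bool_outputs(expected, actual, 0.001) # stricter bound fails
```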
This CL provides the following fixes:
* Should not check accuracy of TOPK_V2 second output
We should not check the second output as the sorting is not required
to be stable.
The second output of TOPK_V2 is marked as doNotCheckAccuracy and
doNotConnect.
* Handle NaN/Infinity float values in RGG
Some operations may produce invalid floating point values, e.g. NaN
or Inf. We should not connect the output tensor of such an
operation to the input of another operation, as some floating point
operations are undefined for NaN and Inf.
A complete list of such operations:
* ANEURALNETWORKS_DIV
* ANEURALNETWORKS_LOG
* ANEURALNETWORKS_POW
* ANEURALNETWORKS_SQRT
* ANEURALNETWORKS_RSQRT
* ANEURALNETWORKS_L2_NORMALIZATION
* ANEURALNETWORKS_REDUCE_PROD
The outputs of these operations are marked as doNotConnect.
Bug: 134800514
Bug: 134753636
Test: NNT_static_fuzzing
Change-Id: Id71244c5fe3b26ba9a483c361867a3021c7de20e
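A sketch of the doNotConnect filter (names illustrative); the last check shows why the rule is needed:

```python
import math

# Operations whose outputs may contain NaN/Inf (from the list above,
# with the ANEURALNETWORKS_ prefix dropped).
PRODUCES_NON_FINITE = {"DIV", "LOG", "POW", "SQRT", "RSQRT",
                       "L2_NORMALIZATION", "REDUCE_PROD"}

def may_connect(producer_op: str) -> bool:
    """Never feed the output of a potentially non-finite op into
    another operation."""
    return producer_op not in PRODUCES_NON_FINITE

assert may_connect("ADD")
assert not may_connect("LOG")

# Why: e.g. division can produce inf, and inf - inf is NaN, which then
# poisons every downstream computation.
inf = float("inf")
assert math.isnan(inf - inf)
```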
In this test, the RGG produces a nonsensical graph with an extremely
large output gain and a highly clamped output range.
Bug: 134368996
Test: NeuralNetworksTest_static_fuzzing
Change-Id: I4295e7d64e954d560a08fb8ae943eec95aa18d28
* changes:
Use relative bias and MSE on fp values.
Use rounding in requantize.
The filter skips BATCH_TO_SPACE_ND tests with a batch dimension of 1
if the target device has a feature level below Android Q.
Bug: 132038686
Test: NeuralNetworksTest_static_fuzzing
Change-Id: I3a2aa93f8968cdda07e0e95853a52a7980a37477
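The filter logic can be sketched as follows (the feature-level constant 29 for Android Q is an assumption for illustration):

```python
ANDROID_Q_FEATURE_LEVEL = 29  # assumed numbering, for illustration

def skip_test(op_type: str, batch_dim: int, device_feature_level: int) -> bool:
    """Pre-Q drivers need not support BATCH_TO_SPACE_ND with
    batch == 1, so such tests are skipped for them."""
    return (op_type == "BATCH_TO_SPACE_ND"
            and batch_dim == 1
            and device_feature_level < ANDROID_Q_FEATURE_LEVEL)

assert skip_test("BATCH_TO_SPACE_ND", 1, 28)       # pre-Q: skipped
assert not skip_test("BATCH_TO_SPACE_ND", 1, 29)   # Q: must pass
assert not skip_test("BATCH_TO_SPACE_ND", 2, 28)   # batch != 1: kept
```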
Test: mm and vts with the sample-all driver
Bug: 131297191
Change-Id: Ifdbfe21ba7302d47e1f1da32720315337cb284c3
Fixes: 134099258
Test: NeuralNetworksTest_static_fuzzing
Change-Id: I2350d3b117836f8d6de9a3e816edbad61588107c
This avoids a systematic bias in ops that use requantize.
Fixes: 134078526
Test: NeuralNetworksTest_static
Change-Id: I3f64f5b2d5778bfe613707cfa252f67e87dce9c5
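The bias can be demonstrated with a toy requantize (scales chosen arbitrarily for illustration): truncation always errs downward for positive values, while rounding keeps the average error near zero.

```python
def requantize_trunc(q: int, in_scale: float, out_scale: float) -> int:
    return int(q * in_scale / out_scale)          # truncates toward zero

def requantize_round(q: int, in_scale: float, out_scale: float) -> int:
    return int(round(q * in_scale / out_scale))   # rounds to nearest

# Accumulate the signed error over a range of positive quantized values:
vals = range(1, 100)
bias_trunc = sum(requantize_trunc(q, 0.3, 0.7) - q * 0.3 / 0.7 for q in vals)
bias_round = sum(requantize_round(q, 0.3, 0.7) - q * 0.3 / 0.7 for q in vals)

# Truncation accumulates a large negative bias; rounding stays near zero.
assert abs(bias_round) < abs(bias_trunc)
```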
If a driver crashes, every object associated with that driver becomes
"dead", and any method invocation on such an object fails with a
transport error. In the NNAPI, this is a problem for IDevice and
IPreparedModel objects. Without some mechanism to recover from a
driver crash, all further uses of an IDevice or IPreparedModel will
fail -- e.g., it's impossible to execute an already-compiled model,
and it's impossible to create a new compiled model. The only way to
recover from this is to restart the application.
This fix addresses the first part of this problem. All references to
IDevice in the runtime go through VersionedIDevice, so it suffices to
replace the IDevice reference in a VersionedIDevice when the IDevice
dies. Therefore, it is now possible to create a new compiled model
after a driver crash (the crash will appear to be a transient error).
A previously-compiled model is still dead, and this fix does not
address that problem.
When we attempt to replace the IDevice, we use tryGetService() rather
than getService(): rather than waiting for the driver to become
available, we reconnect if it is already available and otherwise
retain the behavior prior to this change -- i.e., the attempt to use
the IDevice fails, and the runtime employs a fallback path if possible.
This way we avoid a potentially long wait for the driver to come back
up (up to 5 seconds, by default, per init start_period behavior).
As an alternative approach, it might be possible to handle recovery by
means of a death recipient, rather than during a VersionedIDevice
method call. However, that alternative approach would probably result
in more transient failures because of a crash, because the recovery
would then be asynchronous with respect to calls that are vulnerable
to a dead driver.
Bug: 118623798
Test: NeuralNetworksTest_static
Test: NeuralNetworksTest_mt_static
Test: Ran NeuralNetworksTest_static --gtest_filter=TrivialTest.AddTwo --gtest_repeat=-1
and killed driver during the running; verified that there are
no failures (we use the CPU fallback path) and that we eventually
recover from the driver death (saw in the logcat that we run on
device, then attempt recovery and fail several times and so run on
CPU, then succeed in recovery and go back to running on device).
Test: Modified VersionedIDevice::recoverable<> so that the first time we
find a dead object, we sleep 20 seconds, allowing time for another
thread to recover from the driver crash, so that the sleeper needs
to tolerate the recovery already having happened. Ran
NeuralNetworksTest_mt_static --gtest_filter=GeneratedTests.add --gtest_repeat=-1
and killed driver during the running; verified that there are no
failures (we use the CPU fallback path) and that we took the
recovery path (by observing that the sleep happened and by
inspecting the logcat).
Test: Modified NeuralNetworksTest_static TrivialTest.AddTwo to use
introspection/control interface to force a particular driver;
set debug.nn.partition to 2 to turn off CPU fallback;
ran NeuralNetworksTest_static --gtest_filter=TrivialTest.AddTwo --gtest_repeat=-1
and killed driver during the running; verified that there are
several failures (as we attempt recovery and fail several times)
but that we eventually recover from the driver death (saw in the logcat
that we went through the recovery path and that we go back to
using the driver).
Test: Modified each sample-* driver to sleep(10) when it begins its
asynchronous execution; ran NeuralNetworksTest_static
--gtest_filter=GeneratedTests.add with
useCpuOnly = 0, computeMode = ComputeMode::ASYNC, allowSyncExecHal = 0
and killed driver and confirmed (1) that the runtime was not blocked and
(2) that an appropriate log message was recorded. See http://ag/6575732.
Test: Modified each sample-* driver to do asynchronous prepareModel and to sleep(10)
when it begins its asynchronous preparation; ran NeuralNetworksTest_static
--gtest_filter=GeneratedTests.add with
useCpuOnly = 0, computeMode = ComputeMode::ASYNC, allowSyncExecHal = 0
and killed driver and confirmed (1) that the runtime was not blocked and
(2) that an appropriate log message was recorded. See http://ag/6575732.
Test: Modified each sample-* driver to return an error for launching an
asynchronous call (tested execution and prepareModel separately), but not
make the corresponding call to callback->notify; ran NeuralNetworksTest_static
--gtest_filter=GeneratedTests.add with
useCpuOnly = 0, computeMode = ComputeMode::ASYNC, allowSyncExecHal = 0
and confirmed that the execution succeeded and that appropriate
messages were logged (preparation or execution failure followed by CPU fallback).
See http://ag/7669359.
Change-Id: I55b779bc2a38243d5df122433672a9f2e073c8b4
std::ostrstream has to invoke freeze(false) after a call to str() to
prevent a memory leak.
Fixes: 133860120
Test: NeuralNetworksTest_static
Change-Id: I3570a870c25e1f60cd932cd72bb56a04c56bf8c9
This is to make sure that the copied SymmPreChannelQuantParams object
has this->params.scales pointing to this->scales.data() instead of
other.scales.data().
Fixes: 133790991
Test: NeuralNetworksTest_static
Test: NeuralNetworksTest_static_asan
Change-Id: Ic558f007110961669398dbeb161223fe99289a89
Prior to this CL, asynchronous calls were protected with the following
usage pattern:
(1) the callback object is registered for protection
(2) the asynchronous call that uses the callback object is invoked
(3) callback->wait() is called to wait for the asynchronous results
(4) the error status of launching the call is checked
(5) the callback object is unregistered for protection when leaving
scope
However, if a transport error occurred when launching the asynchronous
execution (e.g., the data being sent across HIDL exceeds a preset
limit), the code would continue waiting at (3) before it could check
the launch error in (4).
This CL fixes this by checking the launch status before waiting for the
results. Additionally, because VersionedIPreparedModel::execute takes in
a callback as an argument from the caller, extra protections are put in
place to notify the callback in the event that the asynchronous call
could not be launched because of unexpected behavior from the driver or
internal problems in the runtime.
Bug: 133325508
Bug: 118624080
Test: mma
Test: NeuralNetworksTest_static
Test: CtsNNAPITestCases
Test: ran "NeuralNetworksTest_static --gtest_filter=GeneratedTests.add",
killed the sample-minimal driver, and confirmed (1) that the runtime
was not blocked and (2) that the appropriate log message was recorded.
NOTE: this was facilitated by adding a 10 second sleep in the sample
driver for the asynchronous preparation and asynchronous execution,
enabling the service to be manually killed via
"adb shell kill -9 <pid>".
Test: ran "NeuralNetworksTest_static --gtest_filter=GeneratedTests.add",
with local modifications to the sample driver to have it return an
error message for launching an asynchronous call, but not make the
corresponding call to callback->notify. Ensured the runtime still
progressed and the appropriate messages were logged. Confirmed that
without the changes in this CL, a hang occurs.
Test: ran "NeuralNetworksTest_static --gtest_filter=GeneratedTests.add",
with local modifications to the runtime to sleep for 10 seconds before
calling an asynchronous call. In this window, the sample-minimal
driver was manually killed (via "adb shell kill -9 <pid>"), prompting
a transport failure. Ensured the runtime still progressed and the
appropriate messages were logged.
Change-Id: Ic4e8cc8399b1e30fadfaf01842ce62550ad2223f
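The fixed control flow can be sketched as follows (names and status strings are illustrative, not the actual HAL types):

```python
import threading

class ExecutionCallback:
    """Minimal callback with notify/wait semantics."""
    def __init__(self):
        self._event = threading.Event()
        self.status = None
    def notify(self, status):
        self.status = status
        self._event.set()
    def wait(self):
        self._event.wait()

def execute(launch, callback):
    launch_status = launch(callback)
    # The fix: check the launch status *before* waiting. If the launch
    # itself failed (e.g. a transport error), the driver will never call
    # notify(), and wait() would block forever.
    if launch_status != "OK":
        callback.notify(launch_status)   # unblock any other waiters
        return launch_status
    callback.wait()
    return callback.status

# A launch that fails without ever invoking the callback no longer hangs:
assert execute(lambda cb: "TRANSPORT_ERROR", ExecutionCallback()) == "TRANSPORT_ERROR"

# A successful launch that completes asynchronously still works:
def good_launch(cb):
    threading.Thread(target=cb.notify, args=("OK",)).start()
    return "OK"
assert execute(good_launch, ExecutionCallback()) == "OK"
```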
go/allowlist. The terms “allowlist” and “blocklist” describe
their purpose, while the other words use metaphors to
describe their purpose.
Test: NeuralNetworksTest_static
Bug: 132147842
Change-Id: I83e336ac822cdc412f76c46bc6913ccfadda72b6
Test: N/A
Bug: 132147842
Change-Id: I62641ebd5f9e516d1d940076c95d62a515016772
Updated for every operation that consumes and outputs quantized
operands.
Test: -
Bug: 131865857
Change-Id: Icbfb2f344b225342267503b2378645832cb905b6
This is a follow-up to change I1cd258e3a861236a0e1913076f222d7521830976.
Bug: 111381617
Test: N/A
Change-Id: Ie26c7dc98b54e24ae4ef6ff576440f70d0ee3fb8
Merged-In: Ie26c7dc98b54e24ae4ef6ff576440f70d0ee3fb8
(cherry picked from commit bf07eca4a9b5801e29ba3a716ddd60c3115b9882)
Bug: 130029167
Test: N/A
Change-Id: I2aca3a62d307c5ec3542222b6a4432e5d4a90252
Merged-In: I2aca3a62d307c5ec3542222b6a4432e5d4a90252
(cherry picked from commit 81f5aa18a6d496c97682fb7d8813377476384571)
in a memory pool.
Bug: 131331435
Test: mm
Change-Id: Ifa99830f8220cf083dd79f523b18d4fb882e2e93
Additionally, fix a LOGISTIC generated test with an invalid offset.
Bug: 132806761
Test: NeuralNetworksTest_static
Change-Id: I4e0eb1cce80f1a54413a5a307aa87c66f336351b
* Check rank 0 operand in compliantWith
* Check hardware buffer in compliantWith
* Disallow hardware buffer for pre-1.2 model in validateModel
* Add compliance tests for rank 0 tensor and hardware buffer
Bug: 131845106
Test: NeuralNetworksTest_static
Test: NeuralNetworksTest_static_fuzzing
Test: Above tests with debug.nn.strict-slicing set to 1
Change-Id: I0e2f80f93074d15ea68ac5fd162ca9e70e128835
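A sketch of the added checks (the model representation is hypothetical, for illustration only):

```python
def compliant_with_1_1(model) -> bool:
    """Rank-0 operands and hardware-buffer-backed memory pools were
    introduced in HAL 1.2, so a model using either cannot be sliced
    down to a 1.1 driver."""
    if any(len(op["dimensions"]) == 0 for op in model["operands"]):
        return False
    if any(pool["type"] == "hardware_buffer" for pool in model["pools"]):
        return False
    return True

model_ok = {"operands": [{"dimensions": [2, 2]}], "pools": []}
model_rank0 = {"operands": [{"dimensions": []}], "pools": []}

assert compliant_with_1_1(model_ok)
assert not compliant_with_1_1(model_rank0)
```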
Burst execution is asynchronous at the HAL layer: the runtime sends a
request packet across one FMQ, then waits for the response to be
received on the result FMQ. However, if the driver crashes after the
request has been made but before the result has been received, the
runtime will hang. Specifically, the call to
ResultChannelReceiver::getPacketBlocking will never return.
This CL adds a death recipient to detect when the driver has crashed and
to unblock the runtime by returning a failure. The death recipient
additionally marks the sender and receiver objects as invalid, causing
any subsequent calls to send or receive a packet to immediately return a
failure in order to avoid future hangs.
This CL additionally returns a value to the runtime to indicate whether
the burst execution should be re-run using another execution path, such
as IPreparedModel::executeSynchronously* or IPreparedModel::execute.
ExecutionBurstController will request a re-run either when (1) the
request packet failed to send across the FMQ (e.g., when the number of
elements in the packet exceeded the size of the FMQ) or (2) when the
burst object has been marked as invalid.
Test: mma
Test: ran NeuralNetworksTest_static, made the sample driver's burst
execution artificially long, killed the sample driver, and
confirmed (1) the runtime recovered and (2) the appropriate log
messages appeared in logcat
Bug: 129157135
Bug: 131086786
Change-Id: I04fcb6247dc78ea057c7596682159af1f9025235
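The unblocking mechanism can be sketched with a poisoned queue (illustrative only; the real implementation uses an FMQ and a HIDL death recipient):

```python
import queue
import threading

class ResultChannelReceiver:
    """Sketch: a blocking receiver that a death recipient can unblock."""
    _POISON = object()

    def __init__(self):
        self._queue = queue.Queue()
        self._valid = True

    def get_packet_blocking(self):
        if not self._valid:
            return None              # fail fast once the driver is dead
        packet = self._queue.get()   # would otherwise block forever
        return None if packet is self._POISON else packet

    def service_died(self):
        # Death recipient: mark invalid and wake any blocked reader.
        self._valid = False
        self._queue.put(self._POISON)

receiver = ResultChannelReceiver()
# Simulate the driver crashing while the runtime waits for results:
threading.Timer(0.05, receiver.service_died).start()
assert receiver.get_packet_blocking() is None   # unblocked with a failure
assert receiver.get_packet_blocking() is None   # later calls fail fast
```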