Age | Commit message | Author |
|
'make haiku' is failing when this fuzzer is enabled due to b/151102177.
This disables the fuzz target from running automatically for now.
This also re-arranges some library sections in the build file to make
the 'bpfmt' presubmit happy.
Bug: 151238901
Bug: 170512504
Test: 'make haiku' and verify the fuzz target isn't built
Change-Id: I2e11bd04bb1be136f7b3b0cc1ac2f80ad08a0b03
|
|
The tests are newly added in R and should not be enforced for devices
with only a system upgrade.
Bug: 162195407
Bug: 162395335
Test: CTS with a mixed build of R + P
Test: CTS with a mixed build of R + Q
Change-Id: Ia0761b0b0ed9c663c262388f4ad36bdc0c2f40d1
Merged-In: Ia0761b0b0ed9c663c262388f4ad36bdc0c2f40d1
(cherry picked from commit 34ca86b85dbe2fa954f1412b3f4f81dcf2797b5b)
|
|
Change I514b78ee was supposed to replace the *dynamic* tests with
*unknown_rank* and *unknown_dimension* tests, but for some reason the
old tests were not removed.
Bug: 132458982
Bug: 154597673
Test: m
Change-Id: I19ed85add6d0e5a57a0b6415af6dbf63e2cc17b6
|
|
|
|
into rvc-dev
|
|
This feature would likely have very limited vendor support in Android R.
It's too late to add IF or WHILE tests to the 1.3 VTS where an inner or
outer input or output operand of a control flow operation has a type
that is not fully specified. To avoid exercising untested behaviour, we
have decided to disallow this at the HAL level. This change ensures that
we do not schedule such operations for execution on any device other
than the CPU device.
See http://b/159076604#comment5 and http://b/132458982#comment63.
Bug: 159076604
Bug: 132458982
Test: NNT_static
Change-Id: Ic5f864e6129acc4208b7751b5b182ae30a39f0a4
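The check described above can be sketched as follows. This is a minimal illustration, not the real scheduler code; `OperandInfo`, `isFullySpecified`, and `canDelegateControlFlow` are hypothetical names standing in for the runtime's operand and partitioning types. The idea: a control flow operation may only be delegated off the CPU when every inner and outer input/output operand has a fully specified type.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for an operand's type information; the real
// runtime uses Operand/OperandType from the NNAPI HAL definitions.
struct OperandInfo {
    std::vector<uint32_t> dimensions;  // empty => unknown rank
};

// A type is "fully specified" when the rank is known and every
// dimension extent is non-zero.
bool isFullySpecified(const OperandInfo& operand) {
    if (operand.dimensions.empty()) return false;  // unknown rank
    for (uint32_t d : operand.dimensions) {
        if (d == 0) return false;  // unknown dimension extent
    }
    return true;
}

// Decide whether an IF/WHILE may be scheduled on a device other than
// the CPU: every boundary operand must be fully specified.
bool canDelegateControlFlow(const std::vector<OperandInfo>& boundaryOperands) {
    for (const auto& op : boundaryOperands) {
        if (!isFullySpecified(op)) return false;
    }
    return true;
}
```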
|
|
|
|
Bug: 155421116
Test: NeuralNetworksTest_static
Change-Id: I434d70e30b137a37febfab83127d79f9a6ecbae6
|
|
Bug: 132458982
Bug: 154597673
Test: NNT_static --gtest_filter='*unknown_*'
Change-Id: I514b78ee3cf2efd86a47412e413a6873341a59d0
|
|
|
|
Bug: 153876253
Test: mm
Change-Id: I4d36ed2aeaa7666fa349b50c93397792e34fad12
|
|
Fix: 159076604
Test: m
Change-Id: I44f31a004309594f03ef7eadf10c2ff51097e954
|
|
rvc-dev
|
|
This change restricts presubmit to only run a single pass of
NeuralNetworksTest_static ("pass 10"), corresponding to:
* useCpuOnly = 0
* computeMode = ComputeMode::ASYNC
* allowSyncExecHal = 1
Bug: 131770421
Test: mma
Test: atest
Change-Id: Ifd316ea87151ae80001208a38cb7af321f7cf9ec
|
|
* changes:
Support WHILE with growing output tensor in CpuExecutor
Relax control flow boundary operand dimension constraint
|
|
1. Adds a WHILE test that produces a tensor of dynamic size. The loop
body model has inputs and outputs of unknown rank.
2. Adds an IF test where the branch models have unknown input and
output ranks.
3. Modifies the partitioner to avoid trying to delegate referenced
models when an IF or WHILE operation or its associated referenced
models have unknown dimensions.
4. Adds dynamic loop output shape support to CpuExecutor by allocating
new temporaries on each iteration.
Bug: 132458982
Bug: 154597673
Test: NNT_static --gtest_filter='*_dynamic*'
Change-Id: I8da1aa92abf090adbdcbf647c113fe27f4c002d8
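Point 4 can be illustrated with a toy interpreted WHILE loop. This is a sketch of the idea only, not the real CpuExecutor API (`runGrowingWhile` and the condition/body inlined here are hypothetical): because the body's output shape may change between iterations, a fresh temporary of the new size is allocated each iteration instead of reusing one fixed-size buffer.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Toy WHILE loop whose state tensor grows each iteration. A new
// temporary is allocated per iteration, mirroring the CpuExecutor
// change described above.
std::vector<int> runGrowingWhile(std::vector<int> state, size_t maxIterations) {
    for (size_t i = 0; i < maxIterations; ++i) {
        // Condition model: keep looping while the state is shorter than 5.
        if (state.size() >= 5) break;
        // Body model: produce an output one element longer than its
        // input, in a freshly allocated temporary.
        std::vector<int> next(state.size() + 1);
        for (size_t j = 0; j < state.size(); ++j) next[j] = state[j];
        next[state.size()] = static_cast<int>(state.size());  // append index
        state = std::move(next);  // temporary becomes the next input
    }
    return state;
}
```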
|
|
Also updates the NDK spec to mention the constraints and adds some
validation tests.
Bug: 132458982
Bug: 156918813
Test: NNT_static
Change-Id: Ia112e46da065a623a52ac1c402d28dcb963e5580
|
|
LSH_PROJECTION is very sensitive to the value of the hash tensor. Prior
to this CL, AllInputsAsInternalCoverter would convert the hash tensor
to an internal operand by introducing a dummy ADD operation. Under
relaxed execution mode, the small precision loss in ADD results in a
significant difference in the final result. This CL prevents the hash
tensor from being converted to an internal operand in relaxed
precision tests.
Additionally, this CL removes a redundant variation in
lsh_projection_float16.
Fixes: 155962587
Test: NNT_static
Change-Id: Id5522b4949a4e3ab4801537e8eb747a25f0cd0e8
|
|
rvc-dev
|
|
The generated test is added only to CTS since VTS would fail on some 1.2
drivers.
Fix: 156284111
Test: NeuralNetworksTest_static
Change-Id: I4f3c6cdb9f546501e4ca375d7900431c384d6885
|
|
The scale in NNAPI is always an fp32 data type. As there are
accelerators that only support the fp16 data type, there is precision
loss when executing a casting operation that converts an fp16 buffer
to or from a quantized buffer. In the worst case, the precision loss
in the scale can be up to 0.5 ULP, which results in a
0.5 * 255 = 127.5 ULP difference in the final result. This is much
larger than the tolerance of these operations in RGG. This CL sets the
scale of such operations to be always representable in fp16 to avoid
potential precision loss.
Additionally, relax the MSE requirement on DEQUANTIZE.
Bug: 155842363
Test: NNT_static_fuzzing
Change-Id: I511ddb6753be1dabac36e1423820a03a4f28641e
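The arithmetic and the representability condition above can be sketched as follows. This is illustrative only; `worstCaseUlpError` and `isFp16RepresentableNormal` are hypothetical helpers (the check covers fp16's normal range only, ignoring subnormals and infinities), not the generator's real code.

```cpp
#include <cmath>

// Worked arithmetic from the message above: a 0.5 ULP error in the
// fp32 -> fp16 conversion of the scale is amplified by the maximum
// quantized value, e.g. 0.5 * 255 = 127.5 ULP for 8-bit data.
double worstCaseUlpError(double scaleUlpError, int maxQuantizedValue) {
    return scaleUlpError * maxQuantizedValue;
}

// Minimal check (normal fp16 range only) that a scale survives an
// fp32 -> fp16 round trip exactly: the significand must fit in fp16's
// 11 bits and the exponent in its normal range.
bool isFp16RepresentableNormal(float scale) {
    if (scale == 0.0f) return true;
    int exp = 0;
    float m = std::frexp(scale, &exp);  // scale = m * 2^exp, 0.5 <= |m| < 1
    if (exp < -13 || exp > 16) return false;  // outside fp16 normal range
    float scaled = m * 2048.0f;               // 11-bit significand
    return scaled == std::floor(scaled);
}
```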
|
|
rvc-dev
|
|
|
|
Operations fixed:
* MEAN
* ARGMIN/ARGMAX
* STRIDED_SLICE
The operations would crash when provided with inputs that resulted in an
empty output shape.
The change fixes the bug by making the operations output a tensor of
size [1] in this case.
Also, update the documentation to clarify this behaviour and move the
squeeze operation test to the appropriate spec directory.
Bug: 155508675
Bug: 155660285
Bug: 155238914
Test: NNTest_static
Change-Id: Ia865c26021dd4d781659957049dd567beeaeae99
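The fix described above amounts to a small guard on the computed output shape. This is a sketch under assumed names (`finalizeOutputShape` is hypothetical, not the real shape-preparation code): when a reduction or slice computes an empty shape, a [1] tensor is emitted instead of crashing.

```cpp
#include <cstdint>
#include <vector>

// If the computed output shape is empty (all dimensions removed),
// return the shape [1] instead, per the fix described above.
std::vector<uint32_t> finalizeOutputShape(std::vector<uint32_t> computed) {
    if (computed.empty()) {
        return {1};  // scalar-like result represented as a [1] tensor
    }
    return computed;  // non-empty shapes pass through unchanged
}
```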
|
|
|
|
This test was previously passing this comparison only because the
uninitialized value in `that.channelQuant->scales` happened to be
0xaaaa...; when `channelQuant` isn't set, the field should not be
compared at all. Now we check whether to examine `scales` at all.
Bug: http://b/156464649
Bug: http://b/156514991
Test: atest CtsNNAPITestCases:TensorRankConstraint
Change-Id: I1f4db99bdd63e5dc1c95f574e6eca3adfffd138d
(cherry picked from commit 63deac05d3a1d53430966a49c6979554f128383d)
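The corrected comparison logic can be sketched with `std::optional` standing in for the real structure (`ChannelQuant` and `sameChannelQuant` are hypothetical names): `scales` is only compared when both sides actually have `channelQuant` set, rather than reading uninitialized memory.

```cpp
#include <optional>
#include <vector>

// Stand-in for the per-channel quantization parameters.
struct ChannelQuant {
    std::vector<float> scales;
};

// Compare two optional channelQuant fields: presence must match, and
// scales are only examined when both sides have a value.
bool sameChannelQuant(const std::optional<ChannelQuant>& a,
                      const std::optional<ChannelQuant>& b) {
    if (a.has_value() != b.has_value()) return false;
    if (!a.has_value()) return true;  // neither set: nothing to compare
    return a->scales == b->scales;    // both set: compare scales
}
```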
|
|
|
|
Fix: 155923033
Test: m
Change-Id: Ia701c6097695fd452c408d4423998c55d823a52f
|
|
|
|
AHWB support is a feature introduced in HAL 1.2. Prior to this CL,
requests with an AHWB memory pool would be sent to 1.0 and 1.1
drivers. This CL modifies the compliance check so that AHWBs are not
sent to 1.0 and 1.1 drivers. Using AHWBs on compilations with 1.0 and
1.1 drivers will result in a CPU fallback.
Bug: 155686276
Test: NNTS
Change-Id: Ib00a2d0d24d4a8c385b5992a1168e20c4a6bb786
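The compliance idea above can be sketched as follows. Names here (`HalVersion`, `MemoryPool`, `isRequestCompliant`) are hypothetical, not the runtime's real compliance-check API: a request using an AHardwareBuffer-backed pool is only compliant with HAL 1.2+, so older drivers are skipped and the compilation falls back to the CPU.

```cpp
#include <vector>

enum class HalVersion { V1_0, V1_1, V1_2 };

struct MemoryPool {
    bool isAhwb;  // backed by an AHardwareBuffer
};

// A request is compliant with a driver's HAL version only if every
// AHWB-backed pool is paired with a 1.2+ driver; otherwise the caller
// falls back to the CPU.
bool isRequestCompliant(const std::vector<MemoryPool>& pools, HalVersion version) {
    for (const auto& pool : pools) {
        if (pool.isAhwb && version < HalVersion::V1_2) {
            return false;  // AHWB requires HAL 1.2 or later
        }
    }
    return true;
}
```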
|
|
Bug: 141704706
Test: NeuralNetworksTest_static on coral
Change-Id: Icd2ee06877f6790a9dd610c501908dc110959972
|
|
failing tests" into rvc-dev
|
|
When asked to reduce across all dimensions, reduce would produce a
zero-sized tensor without dimensions and cause a segmentation fault in
the implementation.
The change fixes the bug by making the op output a tensor of size [1]
in this case.
Also, updates the bug to clarify this behaviour.
Bug: 155508675
Test: NNTest_static
Change-Id: Ie98d8fa2e508255fd50f6bd8184dc323ba90fac8
|
|
Test: mm
Test: atest NeuralNetworksTest_static
Fixes: 155942515
Fixes: 155942747
Fixes: 155942575
Fixes: 155942378
Fixes: 155942301
Fixes: 155942510
Fixes: 155942697
Fixes: 155942566
Change-Id: I940dda440c958fcef527d532d57b01586e070fb7
|
|
|
|
|
|
|
|
The test does not make sense when partitioning or CPU fallback is
disabled.
Bug: 155849908
Test: adb shell setprop debug.nn.partition 0 && \
NNT_static --gtest_filter=FailingDriverTest.FailAfterInterpretedWhile
Test: adb shell setprop debug.nn.partition 2 && \
NNT_static --gtest_filter=FailingDriverTest.FailAfterInterpretedWhile
Change-Id: Ie81eef414de3f767f4dfcf282d39ba63e366057f
|
|
1. Adds bias dimension count validation.
2. Does more validation in validate().
Bug: 155575142
Bug: 155261461
Test: NNT_static --gtest_filter="*while_fib*"
Test: NNT_static --gtest_filter=ValidationTestDimensionProductOverflow2.DynamicOutputShapeOverflow
Change-Id: I7b70a29e76fdf99e656ee6e3867cfd97675cfeec
|
|
The Execute function in our tests, among other things, makes sure that
the models work with caching enabled. The caching token and directory
are constant and therefore the same for all tests. This is usually not
a problem, since each test checks only one model with a set of
different configs, but that was not the case for the
QuantizationCouplingTest suite. This suite ran signed and unsigned
models in one test, making them share the same caching token and
directory, which is an incorrect use of the caching API.
This CL solves the problem by removing the execution of the unsigned
model from the test. This is safe since the unsigned model is still
verified in the GeneratedTests suite.
Bug: 155604227
Test: NNTest_static
Change-Id: I6aa2c8cd3fa41e37a5de53259d1fcc784f327093
|
|
|
|
|
|
|
|
Bug: 147925145
Test: mma
Test: NeuralNetworksTest_static
Change-Id: Ie636c7cdc84ff7a262925f8ff8ebbd4d4f45201b
|
|
|
|
* changes:
Create tests for VersionedInterfaces errors
Simplify IDevice reboot logic
|
|
This helps cover the code paths of data copying between HIDL memory
and IBuffer, as well as data copying between IBuffers. These code
paths may not be covered by CTS because of the lack of driver support.
This CL additionally extracts the TestAshmem class from TestGenerated
into a common TestUtils.
Bug: 152209365
Test: NNT_static
Change-Id: I617bfbc391c7d0ada0c32b31ee2e6e493d5dc6a2
|
|
Fixes: 155195637
Test: NNT_static_fuzzing and inspect the dumped spec
Change-Id: I11b50433b17afeceedae954f2832c1c2193b2c7f
|
|
|
|
* changes:
Regenerate tests.
  Re-enable tests that were previously disabled.
|