'make haiku' fails when this fuzzer is enabled, due to b/151102177.
This disables the fuzz target from running automatically for now.
It also rearranges some library sections in the build file to keep
the 'bpfmt' presubmit happy.
Bug: 151238901
Bug: 170512504
Test: 'make haiku' and verify the fuzz target isn't built
Change-Id: I2e11bd04bb1be136f7b3b0cc1ac2f80ad08a0b03
|
|
The tests are newly added in R and should not be enforced for devices
with only a system upgrade.
Bug: 162195407
Bug: 162395335
Test: CTS with a mixed build of R + P
Test: CTS with a mixed build of R + Q
Change-Id: Ia0761b0b0ed9c663c262388f4ad36bdc0c2f40d1
Merged-In: Ia0761b0b0ed9c663c262388f4ad36bdc0c2f40d1
(cherry picked from commit 34ca86b85dbe2fa954f1412b3f4f81dcf2797b5b)
|
|
The description for zeroPoint appears to be empty on
https://developer.android.com/ndk/reference/struct/a-neural-networks-operand-type.html#zeropoint
Moreover, the descriptions of the two fields have become outdated, e.g.
- TENSOR_QUANT16_SYMM only uses scale,
- TENSOR_QUANT8_SYMM_PER_CHANNEL uses neither scale nor zeroPoint.
Fix: 146789186
Bug: 160406237
Test: generate_api.sh
Test: m
Change-Id: I439a70405a47576da22dacf7070de3bf65ac7caf
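For context, a hedged sketch of the affine quantization semantics these two fields encode (the helper names below are illustrative, not from the NDK; the per-type field usage matches the commit message above):

```python
# Illustrative sketch of NNAPI affine quantization:
#   real_value = scale * (quantized_value - zeroPoint)
# Assumptions beyond the commit message: TENSOR_QUANT8_ASYMM uses both
# fields; TENSOR_QUANT16_SYMM uses only scale (zeroPoint fixed at 0);
# TENSOR_QUANT8_SYMM_PER_CHANNEL carries per-channel scales in a
# separate params struct rather than these fields.

def dequantize(q, scale, zero_point=0):
    """Map a quantized integer back to a real value."""
    return scale * (q - zero_point)

def quantize(x, scale, zero_point=0, qmin=0, qmax=255):
    """Map a real value to the nearest representable quantized integer."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

# TENSOR_QUANT8_ASYMM-style round trip: both fields apply.
assert quantize(0.5, scale=0.0078125, zero_point=128) == 192
assert dequantize(192, scale=0.0078125, zero_point=128) == 0.5
```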
|
|
|
|
Change I514b78ee was supposed to replace *dynamic* tests with
*unknown_rank* and *unknown_dimension* tests, but the old tests did not
get removed for some reason.
Bug: 132458982
Bug: 154597673
Test: m
Change-Id: I19ed85add6d0e5a57a0b6415af6dbf63e2cc17b6
|
|
Bug: 156918813
Bug: 158557728
Test: generate_api.sh
Change-Id: I49f0903a4576fdc9d1a41139940d5fd31c99329f
|
|
|
|
|
|
At the NDK level, we allow IF and WHILE operations where an inner or
outer input or output operand has a type that is not fully specified.
However, this is not allowed at the HAL level. This CL adds HAL-level
validation.
See http://b/132458982#comment63
Bug: 132458982
Test: NNT_static
Change-Id: I54754d6241a1f8eb99717899ffd4f0ace4750060
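A minimal sketch of the "fully specified type" notion this validation relies on (names are illustrative; the real check lives in the C++ validation code, and in NNAPI a dimension value of 0 marks an unknown extent):

```python
# A tensor operand type is fully specified when its rank is known
# (non-empty dimensions list) and no dimension is unknown (0).

def is_fully_specified(dimensions):
    """dimensions: list of ints for a tensor operand; 0 marks an
    unknown dimension, [] marks unknown rank."""
    return len(dimensions) > 0 and all(d != 0 for d in dimensions)

def validate_control_flow_operand(dimensions):
    # HAL-level rule added by this CL (sketched): inner/outer inputs
    # and outputs of IF/WHILE must have fully specified types.
    if not is_fully_specified(dimensions):
        raise ValueError("control flow operand type is not fully specified")
```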
|
|
into rvc-dev
|
|
This feature would likely have very limited vendor support in Android R.
It's too late to add IF or WHILE tests to the 1.3 VTS where an inner or
outer input or output operand of a control flow operation has a type
that is not fully specified. To avoid exercising untested behaviour, we
have decided to disallow this at the HAL level. This change ensures that
we do not schedule such operations for execution on any device other
than the CPU device.
See http://b/159076604#comment5 and http://b/132458982#comment63.
Bug: 159076604
Bug: 132458982
Test: NNT_static
Change-Id: Ic5f864e6129acc4208b7751b5b182ae30a39f0a4
|
|
|
|
Bug: 155421116
Test: NeuralNetworksTest_static
Change-Id: I434d70e30b137a37febfab83127d79f9a6ecbae6
|
|
Bug: 132458982
Bug: 154597673
Test: NNT_static --gtest_filter='*unknown_*'
Change-Id: I514b78ee3cf2efd86a47412e413a6873341a59d0
|
|
|
|
Bug: 153876253
Test: mm
Change-Id: I4d36ed2aeaa7666fa349b50c93397792e34fad12
|
|
Fix: 159076604
Test: m
Change-Id: I44f31a004309594f03ef7eadf10c2ff51097e954
|
|
rvc-dev
|
|
This change restricts presubmit to only run a single pass of
NeuralNetworksTest_static ("pass 10"), corresponding to:
* useCpuOnly = 0
* computeMode = ComputeMode::ASYNC
* allowSyncExecHal = 1
Bug: 131770421
Test: mma
Test: atest
Change-Id: Ifd316ea87151ae80001208a38cb7af321f7cf9ec
|
|
* changes:
Support WHILE with growing output tensor in CpuExecutor
Relax control flow boundary operand dimension constraint
|
|
1. Adds a WHILE test that produces a tensor of dynamic size. The loop
body model has inputs and outputs of unknown rank.
2. Adds an IF test where the branch models have unknown input and output
ranks.
3. Modifies the partitioner to avoid trying to delegate referenced
models if an IF or WHILE or their associated referenced models have
unknown dimensions.
4. Adds dynamic loop output shape support to CpuExecutor by allocating
new temporaries on each iteration.
Bug: 132458982
Bug: 154597673
Test: NNT_static --gtest_filter='*_dynamic*'
Change-Id: I8da1aa92abf090adbdcbf647c113fe27f4c002d8
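Point 4 above can be sketched as follows, with illustrative names; CpuExecutor's real mechanism works on operand buffers in C++, not Python lists:

```python
# Sketch: a WHILE loop whose body output changes shape each iteration.
# Rather than reusing one fixed preallocated buffer, a fresh temporary
# is allocated per iteration, sized from the body's actual output.

def run_while(cond, body, state):
    """cond(state) -> bool; body(state) -> new state, possibly with a
    different shape. Each iteration allocates a new buffer."""
    while cond(state):
        out = body(state)
        state = list(out)  # fresh allocation sized to this iteration
    return state

# Example: grow a tensor by appending one element until it has 5.
result = run_while(lambda s: len(s) < 5, lambda s: s + [len(s)], [0])
```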
|
|
Also updates the NDK spec to mention the constraints and adds some
validation tests.
Bug: 132458982
Bug: 156918813
Test: NNT_static
Change-Id: Ia112e46da065a623a52ac1c402d28dcb963e5580
|
|
|
|
This change adds additional validation for non-optional tensors for the
following operations:
* EMBEDDING_LOOKUP
* HASHTABLE_LOOKUP
* LSH_PROJECTION
* BIDIRECTIONAL_SEQUENCE_LSTM
* LSTM
* RANDOM_MULTINOMIAL
* RNN
* SVDF
* SPLIT
Some operations, such as SVDF, unpack scalar values without checking
whether the value is present, leading to a failed CHECK. This CL adds
protections that use default values in these cases, and relies on the
corresponding Prepare method to make these cases fail validation.
Bug: 157516274
Test: mma
Test: CtsNNAPITestCases
Test: NeuralNetworksTest_static
Test: libneuralnetworks_fuzzer
Change-Id: I6bb804ec40205c9741b04231022894c714ad28ec
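A hedged sketch of the protection described above (function and operand names are illustrative, not from the NNAPI sources):

```python
# Instead of unconditionally unpacking an optional scalar input, which
# crashes a CHECK when the value is absent, fall back to a default and
# let the Prepare/validation step reject the model cleanly.

def get_scalar_input(inputs, index, default):
    """Return inputs[index] if present and set, else a safe default."""
    if index < len(inputs) and inputs[index] is not None:
        return inputs[index]
    return default

# SVDF-style example: a fuzzed model may omit a trailing scalar operand.
inputs = ["weights", "bias", None]  # hypothetical operand list
rank = get_scalar_input(inputs, 2, default=1)  # no crash, rank == 1
```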
|
|
LSH_PROJECTION is very sensitive to the value of the hash tensor. Prior
to this CL, AllInputsAsInternalCoverter would convert the hash tensor to
internal by introducing a dummy ADD operation. Under relaxed execution
mode, the small precision loss in ADD results in a significant
difference in the final result. This CL prevents the hash tensor from
being converted to internal in relaxed precision tests.
Additionally, this CL removes a redundant variation in
lsh_projection_float16.
Fixes: 155962587
Test: NNT_static
Change-Id: Id5522b4949a4e3ab4801537e8eb747a25f0cd0e8
|
|
|
|
rvc-dev
|
|
|
|
Bug: 157268934
Test: mma
Test: libneuralnetworks_fuzzer
Change-Id: I5d78db36b110eaec2230478c4759f94f386d59c3
|
|
* changes:
Fix FULLY_CONNECTED issue with unknown num_units.
Fix CAST issue with outputs of unknown rank.
|
|
The generated test is added only to CTS since VTS would fail on some 1.2
drivers.
Fix: 156284111
Test: NeuralNetworksTest_static
Change-Id: I4f3c6cdb9f546501e4ca375d7900431c384d6885
|
|
The scale in NNAPI is always an fp32 data type. Since there are
accelerators that only support the fp16 data type, there is precision
loss when executing a casting operation that converts an fp16 buffer
to/from a quantized buffer. In the worst case, the precision loss in
scale can be up to 0.5 ULP, resulting in a 0.5 * 255 = 127.5 ULP
difference in the final result. This is much larger than the tolerance
of these operations in RGG. This CL sets the scale of such operations to
always be representable in fp16, to avoid potential precision loss.
Additionally, relax the MSE requirement on DEQUANTIZE.
Bug: 155842363
Test: NNT_static_fuzzing
Change-Id: I511ddb6753be1dabac36e1423820a03a4f28641e
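A scale is representable in fp16 exactly when it survives an fp32-to-fp16-to-fp32 round trip; this can be sketched with Python's IEEE 754 half-precision struct code (the NNAPI fuzz test generator's actual mechanism may differ):

```python
import struct

# A value is exactly representable in fp16 iff converting it to half
# precision and back reproduces it. Python's struct module supports
# IEEE 754 half precision via the 'e' format code.

def is_fp16_representable(x):
    return struct.unpack('<e', struct.pack('<e', x))[0] == x

# 0.125 = 2**-3 fits fp16's mantissa exactly; 0.1 does not, so an
# fp16-only driver would see a slightly different scale, and that error
# is then amplified across the quantized range (up to ~0.5 ULP * 255).
assert is_fp16_representable(0.125)
assert not is_fp16_representable(0.1)
```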
|
|
Fixes: 156748888
Test: NNT_static
Test: 1.3 VTS with ag/11509996
Change-Id: Ib3b191ccefdfd0d03f8c69772976ef0f2421a9d7
|
|
Fixes: 156750075
Test: NNT_static
Test: 1.3 VTS with ag/11509996
Change-Id: I9afea6076af8153ab7572b5f0fecfec41451ec86
|
|
The segfault could happen if a model provided to the sample driver
contained BIDIRECTIONAL_SEQUENCE_LSTM or LSTM with no inputs.
This CL moves the input and output count checks to the beginning of the
validation logic for these operations.
Fix: 156306557
Test: NNTest_static + NNAPI_BSLstmFailure from ag/11514800
Change-Id: Ic7b0d8bd4ca954f03cfe2a4d2deca9b1e0022cee
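The reordering above can be sketched as follows (the structure and the count constant are illustrative; the real validation is C++):

```python
# If per-input checks index into the input list before the count is
# checked, a model with no inputs crashes; checking the count first
# fails validation gracefully instead.

def validate_lstm(inputs, expected_count=23):
    # 1. Count check FIRST (the fix): reject wrong arity up front.
    if len(inputs) != expected_count:
        return False
    # 2. Only now is it safe to index individual inputs.
    input_tensor = inputs[0]
    return input_tensor is not None

assert validate_lstm([]) is False  # no crash on an empty input list
```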
|
|
rvc-dev
|
|
|
|
Operations fixed:
* MEAN
* ARGMIN/ARGMAX
* STRIDED_SLICE
The operations would crash when provided with inputs that resulted in an
empty output shape.
The change fixes the bug by making the operations output a tensor of
size [1] in this case.
Also updates the documentation to clarify this behaviour and moves the
squeeze operation test to the appropriate spec directory.
Bug: 155508675
Bug: 155660285
Bug: 155238914
Test: NNTest_static
Change-Id: Ia865c26021dd4d781659957049dd567beeaeae99
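A hedged sketch of the fix for the MEAN case (the function name is illustrative; ARGMIN/ARGMAX and STRIDED_SLICE apply the same fallback to their own shape computations):

```python
# When the reduction parameters collapse every dimension away, emit a
# tensor of shape [1] instead of an empty shape that downstream code
# cannot allocate.

def output_shape_for_mean(input_shape, axes, keep_dims):
    axis_set = set(axes)
    if keep_dims:
        out = [1 if i in axis_set else d for i, d in enumerate(input_shape)]
    else:
        out = [d for i, d in enumerate(input_shape) if i not in axis_set]
    if not out:      # all dimensions reduced away
        out = [1]    # this change: output a tensor of size [1]
    return out
```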
|
|
|
|
This test was passing this comparison before only by accident: the
uninitialized value in `that.channelQuant->scales` happened to be
0xaaaa..., when `scales` should not have been compared at all because
`channelQuant` isn't set. Now we check whether to examine `scales` at
all.
Bug: http://b/156464649
Bug: http://b/156514991
Test: atest CtsNNAPITestCases:TensorRankConstraint
Change-Id: I1f4db99bdd63e5dc1c95f574e6eca3adfffd138d
(cherry picked from commit 63deac05d3a1d53430966a49c6979554f128383d)
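A Python analogue of the guarded comparison (the real code is C++, and the dict representation here is purely illustrative):

```python
# Compare per-channel scales only when channelQuant is actually set on
# both operand types; the old unguarded comparison depended on whatever
# happened to be in uninitialized memory.

def operand_types_equal(a, b):
    """a, b: dicts with 'type' and an optional 'channelQuant' scales list."""
    if a["type"] != b["type"]:
        return False
    a_cq, b_cq = a.get("channelQuant"), b.get("channelQuant")
    if (a_cq is None) != (b_cq is None):
        return False              # set on one side but not the other
    if a_cq is not None and a_cq != b_cq:
        return False              # compare scales only when set
    return True
```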
|
|
|
|
|
|
Bug: 153143917
Test: NNT_static 256 --gtest_filter=TrivialTest.AddTwoWithHardwareBufferInput
Change-Id: I4c851cbae28a3bba48e5bbc842b1d880b555a8ed
|
|
Fix: 155923033
Test: m
Change-Id: Ia701c6097695fd452c408d4423998c55d823a52f
|
|
|
|
Fixes: 155942301
Fixes: 155942510
Fixes: 155942378
Fixes: 155942566
Fixes: 155942747
Fixes: 155942575
Fixes: 155942515
Test: mm
Change-Id: Id13486cda1380504b3dd97a770982ff952fa88b4
|
|
|
|
AHardwareBuffer (AHWB) support is a feature introduced in HAL 1.2.
Prior to this CL, requests with an AHWB memory pool would be sent to
1.0 and 1.1 drivers. This CL modifies the compliance check to avoid
sending AHWBs to 1.0 and 1.1 drivers. Using AHWBs on compilations with
1.0 and 1.1 drivers will result in a CPU fallback.
Bug: 155686276
Test: NNTS
Change-Id: Ib00a2d0d24d4a8c385b5992a1168e20c4a6bb786
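The gating logic can be sketched as follows (names and the tuple-based version encoding are illustrative, not the runtime's actual compliance-check API):

```python
# Requests whose memory pools include an AHardwareBuffer are only
# compliant with HAL 1.2+ drivers; on 1.0/1.1 the runtime falls back
# to the CPU instead of sending the request down.

def select_execution_target(driver_version, memory_pools):
    """driver_version: (major, minor) tuple; memory_pools: pool kinds."""
    has_ahwb = any(p == "AHardwareBuffer" for p in memory_pools)
    if has_ahwb and driver_version < (1, 2):
        return "cpu_fallback"
    return "driver"
```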
|
|
Bug: 141704706
Test: NeuralNetworksTest_static on coral
Change-Id: Icd2ee06877f6790a9dd610c501908dc110959972
|
|
failing tests" into rvc-dev
|