|
Merge into rvc-dev
|
|
Before this change, the remover could confuse explicit padding
operation signatures with implicit padding ones and remove arguments
incorrectly.
Before this change, the remover could also remove just one of the last
two CONV_2D/DEPTHWISE_CONV_2D arguments, which must come together.
Fix: 153022427
Test: NNT_static_fuzzing --gtest_filter="*CONV*"
Test: NNT_static
Change-Id: I4fad8adef400070c5510fb015a383a0c894912ae
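A minimal sketch of the pairing constraint (illustrative Python, not the runtime's C++; the function and its names are hypothetical): trailing default-valued arguments may be trimmed, but the last two must be kept or dropped as a unit.

```python
def trim_trailing_defaults(args, defaults, paired_tail=2):
    """Trim trailing arguments equal to their defaults, treating the last
    `paired_tail` arguments as a unit that is dropped whole or not at all."""
    assert len(args) == len(defaults)
    args, defaults = list(args), list(defaults)
    if paired_tail:
        # The trailing pair must come together: if any of it is explicit
        # (non-default), nothing can be trimmed at all, because trimming
        # only ever happens from the end.
        if len(args) < paired_tail or args[-paired_tail:] != defaults[-paired_tail:]:
            return args
        del args[-paired_tail:]
        del defaults[-paired_tail:]
    # With the pair gone, trim any remaining trailing defaults one by one.
    while args and args[-1] == defaults[-1]:
        args.pop()
        defaults.pop()
    return args
```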
|
|
Fix a race condition between ag/10888514 and ag/10890131.
Fixes: 153109585
Test: NNT_static
Change-Id: I4ef3e737dc79e6ab4d0ae912ed791a32eb9a7916
|
|
Additionally, check for dimension overflow in CpuExecutor.
Fixes: 152382062
Test: NNT_static
Change-Id: I7439ae31799e15c11e3802ae29c8f702a2318e4a
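The dimension-overflow check can be sketched as follows (illustrative Python; the real check lives in CpuExecutor's C++, where uint32 arithmetic wraps silently):

```python
UINT32_MAX = 2**32 - 1

def checked_num_elements(dims):
    """Multiply tensor dimensions while rejecting any product that would
    overflow uint32 (Python ints are unbounded, so the bound is explicit)."""
    n = 1
    for d in dims:
        n *= d
        if n > UINT32_MAX:
            raise OverflowError("dimension product overflows uint32")
    return n
```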
|
|
The tests are taken from 1.2 test specs.
Fix: 152405977
Test: NNTest_static
Change-Id: I15ad10dd826ddd63476153e3982c5218947ae9c3
|
|
Prior to this CL, the token of a HAL model partition is a rehash of
{NDK token, the set of operation indexes in the partition}. Since we
now have multiple referenced models, the operation index is no longer
sufficient to uniquely identify an operation. This CL replaces operation
indexes with (subgraph index, operation index) pairs.
Fixes: 152877439
Test: NNT_static
Change-Id: I6789409a335b1fdd11d9d6f4965335805cbf9ef3
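One way to sketch the new token derivation (hypothetical Python; the actual hash function and byte layout in the runtime may differ):

```python
import hashlib

def partition_token(ndk_token, ops):
    """Fold (subgraph index, operation index) pairs into the partition
    token, so the same operation index appearing in different referenced
    subgraphs hashes differently."""
    h = hashlib.sha256(ndk_token)
    for subgraph_idx, op_idx in sorted(ops):
        h.update(subgraph_idx.to_bytes(4, "little"))
        h.update(op_idx.to_bytes(4, "little"))
    return h.hexdigest()
```

Sorting the pairs makes the token independent of enumeration order, and including the subgraph index disambiguates operations across referenced models.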
|
|
Change I733ed312 modified the runtime to remove optional arguments set
to default values. That made some examples comply with earlier
specifications.
Bug: 147105700
Test: NNT_static
Change-Id: Ie7468f2e0582d73c1c411b4c3bde3a14b848bb7d
|
|
Bug: 148804027
Test: NeuralNetworksTest_static
BEHAVIORAL CHANGES:
ANeuralNetworksExecution_getDuration(), when called on an Execution
that failed, used to return either ANEURALNETWORKS_NO_ERROR or
ANEURALNETWORKS_BAD_STATE, depending on which path (if any) we took
through ExecutionBuilder::computeFenced(): ANEURALNETWORKS_NO_ERROR if
we did not go through computeFenced(), and sometimes but not always
ANEURALNETWORKS_BAD_STATE if we did. Now we always return
ANEURALNETWORKS_BAD_STATE in the case of an Execution that completes
with other than ANEURALNETWORKS_NO_ERROR.
ANeuralNetworksExecution_getOutputOperand*(), when called on an
Execution that failed, used to return ANEURALNETWORKS_NO_ERROR,
ANEURALNETWORKS_OUTPUT_INSUFFICIENT_SIZE, or ANEURALNETWORKS_BAD_STATE,
depending on which path (if any) we took through
ExecutionBuilder::computeFenced() and on whether the execution
completed with ANEURALNETWORKS_OUTPUT_INSUFFICIENT_SIZE:
ANEURALNETWORKS_NO_ERROR or ANEURALNETWORKS_OUTPUT_INSUFFICIENT_SIZE
if we did not go through computeFenced(), and sometimes but not always
ANEURALNETWORKS_BAD_STATE if we did. Now we always return
ANEURALNETWORKS_BAD_STATE in the case of an Execution that completes
with other than ANEURALNETWORKS_NO_ERROR or
ANEURALNETWORKS_OUTPUT_INSUFFICIENT_SIZE.
In certain cases of fenced execution, if the driver did not provide a
fenced duration (ANEURALNETWORKS_FENCED_DURATION_*),
ANeuralNetworksExecution_getDuration() used to report it as the
corresponding unfenced duration; now it is reported as UINT64_MAX.
TESTING CHANGES:
Change tests to conform to the aforementioned behavioral changes.
Test that ANeuralNetworksExecution_getDuration() fails with
ANEURALNETWORKS_BAD_STATE if called on a running Execution.
Do some numeric comparisons among fenced durations, and between fenced
and unfenced durations.
ANeuralNetworksExecution_getDuration() tests for fenced execution now
exercise all 16 combinations (rather than a 4-combination subset) of
the four durations (for each duration, the driver either reports a
value or reports UINT64_MAX).
SPECIFICATION CHANGES:
Bring the specification for ANeuralNetworksExecution_getDuration() and
ANeuralNetworksExecution_getOutputOperand*() into greater consistency.
CODE CLEANUP:
The properties represented by certain fields of ExecutionBuilder are
NOT represented by those fields when we take certain paths through
computeFenced(). Be more explicit about that, by naming and
commenting various fields and methods more appropriately, and adding
private utility functions to encapsulate the representation
differences.
END
Change-Id: Id16542bc1221a142d56048adc5eab3bd67b8e506
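The path-independent rule for ANeuralNetworksExecution_getDuration() can be sketched like this (illustrative Python; the status strings stand in for the real C enum values):

```python
UINT64_MAX = 2**64 - 1

def get_duration(completion_status, driver_duration):
    """Sketch of the new behavior, independent of the path taken through
    computeFenced()."""
    # A failed execution now always reports BAD_STATE.
    if completion_status != "NO_ERROR":
        return "BAD_STATE", None
    # A fenced duration the driver did not report is surfaced as
    # UINT64_MAX, not substituted with the corresponding unfenced duration.
    if driver_duration is None:
        return "NO_ERROR", UINT64_MAX
    return "NO_ERROR", driver_duration
```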
|
|
Fix: 147105700
Test: NNT_static
Change-Id: I733ed312c855e48b31b5876bdbce714b3da9929a
|
|
Fix: 152757537
Test: m
Change-Id: I028f4ddef36f5af88e0d12c14aac77ed5ef06fbf
|
|
Added tests of DIV by zero. The tests exercise DIV by zero of all
supported data types in CTS/VTS and only expect runtime/drivers to not
crash.
Updated the reference implementation of INT32 DIV. Before this CL, the
reference implementation would crash with a floating-point exception.
This CL forces INT32 DIV by zero to return 0.
Fixes: 151151830
Test: NNT_static
Change-Id: I49f39c6ed41bab5d368498b37909de6a1c808d1e
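A sketch of the guarded reference behavior (illustrative Python, assuming floor-division semantics for INT32 DIV, which match Python's // operator):

```python
def int32_div(a, b):
    """Integer division with the divide-by-zero guard: return 0 instead
    of letting the process die with a floating-point exception."""
    if b == 0:
        return 0
    return a // b
```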
|
|
It also adds mutation testing for rank validation to TestValidateOperations.cpp.
Test: TestValidateOperations
Bug: 147106551
Change-Id: Ia588772047d6afd0ff51ae434f2816ffcfdeca62
|
|
* changes:
Catch integer overflow in getSizeOfData()
Disallow operand types where size overflows uint32
|
|
Merge into rvc-dev
|
|
Prior to this CL, the variance is computed as:
sigma^2 = sum(x^2) / len - mean^2
This is not numerically stable because sum(x^2) can grow very large
and lose precision.
This CL computes the variance by
sigma^2 = sum((x - mean)^2) / len
This is more stable especially when sigma^2 is much smaller than mean^2.
Additionally, this CL disallows invalid FP values for generated tests.
Fixes: 151360275
Test: NNT_static
Change-Id: I2842dd44553ef9ca4876f198763d741640e5178c
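The two formulas side by side (illustrative Python; the stable form subtracts the mean before squaring, so the summed terms stay small and no cancellation of large near-equal values occurs):

```python
def variance_naive(xs):
    # sigma^2 = sum(x^2)/len - mean^2: subtracting two large, nearly
    # equal quantities loses precision when sigma^2 << mean^2.
    n = len(xs)
    mean = sum(xs) / n
    return sum(x * x for x in xs) / n - mean * mean

def variance_stable(xs):
    # sigma^2 = sum((x - mean)^2)/len: deviations are small, so squaring
    # them does not amplify rounding error.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n
```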
|
|
Bug: 146044137
Test: NNT_static
Change-Id: Idd6a070b379c6f5b10105647ec93579320c558f3
|
|
Also fixes a bug in ValidationTestModel.SetOperandValueFromModel that
this change uncovered.
Fix: 146044137
Test: NeuralNetworksTest_utils
Test: NeuralNetworksTest_static
Change-Id: Ibb8ca42115451eafc9288c679f792a0e3fe790c7
|
|
Validate:
- operations are in execution order
- this is a requirement at the HAL level
- at the API level, this is a requirement that there IS an
execution order (i.e., the graph is acyclic and leaves
no operands unwritten)
- operands are not read before written
- SUBGRAPH_INPUT/SUBGRAPH_OUTPUT operands are in
inputIndexes/outputIndexes
Add tests:
- detect graph cycle (Cycle)
- detect operand read before written (AcyclicReadBeforeWrite)
- detect operand never written (MissingWrite, UnwrittenOperand)
- detect multiple writes to the same operand (MultipleWrite)
Also:
- improve some error messages
- clean up some comments
Bug: 66478689
Test: mma
Test: CtsNNAPITestCases
Test: NeuralNetworksTest_static
Test: VtsHalNeuralnetworksV1_*TargetTest
Change-Id: I018b0c195e59b8b89ac8b62e0d80039d673ce81e
Merged-In: I018b0c195e59b8b89ac8b62e0d80039d673ce81e
(cherry picked from commit 2da722e81bbeacb9c09770d2b4416295efca5b40)
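The execution-order and read-before-write checks can be sketched as a single pass (hypothetical Python; operand and operation representations are simplified to index lists):

```python
def validate_execution_order(operations, model_inputs):
    """Each operation is an (inputs, outputs) pair of operand indexes.
    Walking operations in listed order, every operand must be written
    (or be a model input) before it is read, and written at most once.
    A graph with a cycle admits no such order, so it is rejected too."""
    written = set(model_inputs)
    for inputs, outputs in operations:
        for operand in inputs:
            if operand not in written:
                raise ValueError(f"operand {operand} read before written")
        for operand in outputs:
            if operand in written:
                raise ValueError(f"operand {operand} written more than once")
            written.add(operand)
    return True
```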
|
|
Without this check concatenation would crash if passed 5D tensors as
inputs.
Also:
* Add a folder for CTS only tests and modify Android.bp accordingly.
* Add concatenation test with invalid rank to CTS.
* Modify the createModel and GeneratedTests::execute functions to handle
validation failures for models with the expectFailure flag.
The test is added only to CTS because we will very likely relax this
rank requirement in the future. If this test were added to VTS, future
drivers would have to contain logic to reject models with larger ranks
from clients using older versions. In order to make driver development
easier, we only introduce this test to CTS. Once we relax the rank
requirements, we will remove corresponding tests.
Fix: 139957680
Test: NNTest_static and VtsHalNeuralnetworksV1_3Target
Change-Id: I6d3b825360e055934a4e2d44e50b8fdc83b958e3
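A sketch of the added rank check (illustrative Python; the maximum rank of 4 is an assumption based on the current requirement, which may be relaxed later):

```python
MAX_SUPPORTED_RANK = 4  # assumed current limit

def validate_concatenation_inputs(input_shapes):
    """Reject inputs whose rank exceeds the supported maximum up front,
    instead of crashing later inside the kernel."""
    for shape in input_shapes:
        if len(shape) > MAX_SUPPORTED_RANK:
            raise ValueError(
                "CONCATENATION: rank %d exceeds supported maximum %d"
                % (len(shape), MAX_SUPPORTED_RANK))
    return True
```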
|
|
Fix: 152216745
Test: NNT_static
Change-Id: I2873773166d65f2c55ee701c818d40d13229dfb7
|
|
Bug: 151433109
Bug: 149199424
Fix: 116355758
Test: NNT_static
Change-Id: Ib62c8270b85b0a75e532fe15b54e7ad7ea3636ba
|
|
Also corrects some wording related to control flow.
Bug: 149693818
Test: NNT_Static
Change-Id: If16eee0613f9166f0688b4289f68a389e92e6e2f
|
|
Also adds mStarted validation.
Fix: 149819396
Test: NNT_static
Change-Id: I2f947a4ead527b7cff6bf93439c3295e59626f7a
|
|
* changes:
Document and test L2_NORMALIZATION with input of all zeros.
Add tests for corner cases of quant8 l2 norm.
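The all-zeros corner case can be sketched like this (illustrative Python; the zero-norm guard returning all zeros reflects an assumed documented behavior, not a quote of the spec):

```python
import math

def l2_normalize(values):
    """L2 normalization along a 1-D slice, guarding the case where the
    norm is zero so no division by zero occurs."""
    norm = math.sqrt(sum(v * v for v in values))
    if norm == 0.0:
        return [0.0] * len(values)
    return [v / norm for v in values]
```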
|
|
* changes:
Add SQUEEZE with omitted axis in RGG.
Fix SQUEEZE with optional squeeze dims.
Add TRANSPOSE with omitted permutation to RGG.
Support omitted operand in RGG.
Enable RGG to generate a roi tensor of lifetime SUBGRAPH_INPUT.
|
|
Fixes: 132322114
Test: NNT_static_fuzzing
Change-Id: I32d7b43516fda6756be551ef104167f1e5c938ef
|
|
Also added tests.
Fixes: 116355758
Fixes: 151775127
Test: NNT_static
Change-Id: I8904d7caf381970fc3b8869aa6aa09c03acbc6df
|
|
Merge into rvc-dev
|
|
Add the -UNDEBUG cflag to the default build target.
Add a source file that will cause the build of test targets to fail
if NDEBUG is defined.
Bug: 148456997
Test: atest NeuralNetworksTest_static
Change-Id: I1473cf99702ccf6e0b3b5f68f8fe32b9b0244d5f
|
|
Fixes: 143985908
Test: NNT_static
Change-Id: I97e662612947895e0244a8242728d362c0063f89
|
|
Bug: 143972021
Test: NNT_static
Change-Id: I8d1eccd3141e7f57c4b85b5dd06686c69e301aa7
|
|
- These tests have a bias of 0, which is prohibited by the spec
Bug: 73641582
Test: mm
Test: CTS test
Change-Id: Ifbeb9203464cdda53bde514add173b0563beea4b
|
|
Fixes: 132323720
Test: NNT_static_fuzzing
Change-Id: I681fad8ec61bace438f4dec14972dc3600236468
|
|
Bug: 132323720
Test: NNT_static_fuzzing
Change-Id: Iaf1d9b66be98f40ae04b732dc778951a236757c8
|