# DL4J and SameDiff Integration Tests
These tests are designed to check a number of aspects of DL4J and SameDiff:
- Predictions (i.e., network output)
- Training (training curves, parameters, gradient calculation)
- Evaluation (accuracy, etc.)
- Model serialization (saving + loading models)
- Overfitting sanity checks (make sure we can overfit a single example)
- Data pipelines
- Parallel Wrapper
- Validating conditions that should always hold (frozen layer params don't change, for example)
They are designed for the following purposes:
- Detecting regressions: i.e., a new commit changed or broke previously working functionality
- Detecting integration issues, i.e., issues that only show up when components are used together (but not in isolation in unit tests)
- Detecting significant differences between CPU and CUDA backends
- Validating implementation via sanity checks on training - i.e., can we overfit a single example?
- Checking networks and data pipelines on real-world scale data and nets
- Operating as fully automated pre-release checks (replacing manual sanity checks)
## Main Classes
Explanation of the main classes:
- IntegrationTestBaselineGenerator: run manually to generate and save the "expected results" used for later comparison. Output goes to the dl4j-test-resources repository, for saving/uploading.
- IntegrationTestRunner: actually runs the tests, and compares the outputs/results to those generated by the baseline generator
- TestCase: the base class that the integration tests extend
- testcases/*.java: the actual integration test definitions
- IntegrationTestsDL4J: entry point for running the DL4J integration tests
- IntegrationTestsSameDiff: entry point for running the SameDiff integration tests
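To show how these pieces fit together, below is a minimal sketch of an entry-point test delegating to the runner. It assumes JUnit 4 and a runner method of the form IntegrationTestRunner.runTest(testCase, testDir); check IntegrationTestsDL4J and IntegrationTestRunner for the actual signatures.

```java
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

// Hedged sketch: how an entry-point class hands a TestCase to the runner.
// The runTest signature and TemporaryFolder handling are assumptions - see
// IntegrationTestsDL4J / IntegrationTestRunner for the actual API.
public class IntegrationTestsSketch {

    @ClassRule
    public static TemporaryFolder testDir = new TemporaryFolder();  // scratch dir for models, results, etc.

    @Test
    public void testMLPMnist() throws Exception {
        // Build the test case definition, then let the runner execute it and compare
        // the results against the baseline saved by IntegrationTestBaselineGenerator
        IntegrationTestRunner.runTest(MLPTestCases.getMLPMnist(), testDir);
    }
}
```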
## Types of Test Components
The integration tests are set up to be able to run multiple types of tests on each network configuration.
Networks may be pretrained (from the model zoo) or randomly initialized (from a specified configuration).
Specifically, test cases can be run with any subset of the following components by setting the corresponding TestCase.XYZ boolean options to true or false (see the sketch after this list):
- testPredictions: Testing output (predictions) on some specified data vs. saved/known good arrays
- testGradients: Testing gradients on some specified data vs. saved/known good arrays
- testPretrain: Test layerwise pretraining parameters and training curves
- testTrainingCurves: Train, and check score vs. iteration
- testParamsPostTraining: validate that parameters match the saved values after training
- testEvaluation: test the evaluation performance (post training, if testTrainingCurves or testParamsPostTraining is also enabled)
- testParallelInference: validate that single net and parallel inference results match
- testOverfitting: sanity check - try to overfit a single example
See TestCase.java for more details.
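As a rough illustration of how these options select components, the sketch below assumes the flags are fields that a TestCase subclass can set in an instance initializer; the exact mechanism (and any required data/model methods) is defined in TestCase.java.

```java
// Hedged sketch: enabling a subset of test components for one test case.
// Assumes the TestCase.XYZ options are settable fields on a subclass; the real
// class may also require overriding model/data methods for the enabled components.
TestCase tc = new TestCase() {
    {
        testPredictions = true;        // compare network output against saved arrays
        testGradients = true;          // compare gradients against saved arrays
        testPretrain = false;          // no layerwise pretraining for this network
        testTrainingCurves = true;     // train and check score vs. iteration
        testParamsPostTraining = true; // check parameters after training
        testEvaluation = true;         // check evaluation metrics post training
        testParallelInference = true;  // single-net and ParallelInference outputs must match
        testOverfitting = false;       // skip the single-example overfitting sanity check
    }
    // ...plus whatever data/model methods TestCase requires for the enabled components
};
```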
## Adding a New Integration Test
The process to add a new test is simple (a sketch is given at the end of this section):
1. Add a method that creates and returns a TestCase object (example: testcases/MLPTestCases.getMLPMnist())
2. Add it as a unit test to the relevant IntegrationTests class (example: IntegrationTestsDL4J.testMLPMnist())
3. Run IntegrationTestBaselineGenerator with the new test case to generate and save the "known good" results
4. Run the new integration test to make sure it passes, on both the CPU and CUDA backends
5. Commit the generated test resources from step 3 to the dl4j-test-resources repo
Note that IntegrationTestBaselineGenerator assumes you have the dl4j-test-resources repository cloned parallel to the DL4J mono-repo.
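As a rough sketch of step 1, a new test case factory method might look like the following; MyTestCases and getMyNewTest are placeholder names, and the flags and methods to fill in depend on which components you want to cover. Step 2 is then a one-line unit test that passes this TestCase to IntegrationTestRunner.runTest, as in the entry-point sketch above.

```java
// Hypothetical example for step 1 - MyTestCases / getMyNewTest are placeholder names.
// Lives alongside the other definitions in testcases/*.java.
public class MyTestCases {

    public static TestCase getMyNewTest() {
        return new TestCase() {
            {
                // Enable only the components this test should cover (see TestCase.java)
                testPredictions = true;
                testTrainingCurves = true;
                testParamsPostTraining = true;
            }
            // ...override the model/data methods required by the enabled components (not shown)
        };
    }
}
```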