* Fixing issues from Sonar report
* Proper logging of exceptions
* Coding style fixes
* Use dup parameter
* Cleanup, minor issues
* Cuda compilation fixed and some minor fixes
* First steps for DL4J NHWC support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Conv2d NHWC forward pass works
Signed-off-by: Alex Black <blacka101@gmail.com>
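As a rough sketch of what the NHWC support above looks like from the configuration API (assuming the CNN2DFormat enum and the dataFormat(...)/InputType overloads introduced by this work; exact names may differ), a convolution layer running on [minibatch, height, width, channels] activations might be set up like this:

```java
import org.deeplearning4j.nn.conf.CNN2DFormat;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class NhwcConvSketch {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new ConvolutionLayer.Builder(3, 3)
                        .nIn(3).nOut(8)
                        .dataFormat(CNN2DFormat.NHWC)   // assumed setter added by this work
                        .activation(Activation.RELU)
                        .build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nOut(10).activation(Activation.SOFTMAX).build())
                // InputType overload carrying the data format is also assumed here
                .setInputType(InputType.convolutional(32, 32, 3, CNN2DFormat.NHWC))
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
    }
}
```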
* Conv2d NHWC backprop
Signed-off-by: Alex Black <blacka101@gmail.com>
* Conv2d backprop + fixes; subsampling fwd/bwd; improve tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Zero padding layer NHWC support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Cropping2D NHWC support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Deconv2d NHWC + clean up NHWC test framework code duplication
Signed-off-by: Alex Black <blacka101@gmail.com>
* CnnLossLayer NHWC support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Upsampling and batchnorm NHWC support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Space to depth
Signed-off-by: Alex Black <blacka101@gmail.com>
* Depthwise pt1
Signed-off-by: Alex Black <blacka101@gmail.com>
* Depthwise pt2 and LRN
Signed-off-by: Alex Black <blacka101@gmail.com>
* SpaceToBatch
Signed-off-by: Alex Black <blacka101@gmail.com>
* LocallyConnected2D
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix depthwise nhwc support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Upsampling NHWC - workaround for #8857
Signed-off-by: Alex Black <blacka101@gmail.com>
* Workaround for #8859 - SpaceToDepth
Signed-off-by: Alex Black <blacka101@gmail.com>
* Batch normalization workaround - #8860
Signed-off-by: Alex Black <blacka101@gmail.com>
* cuDNN fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Switch cuDNN conv2d to permute-based impl due to 'true' NHWC not working
Signed-off-by: Alex Black <blacka101@gmail.com>
* cuDNN subsampling helper NHWC fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Upsampling/batchnorm fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* CNN2D NHWC gradient checks (make CNNGradientCheckTest parameterized)
Signed-off-by: Alex Black <blacka101@gmail.com>
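The parameterization can be pictured as an ordinary JUnit 4 parameterized runner over both data formats; this is an illustrative sketch, not the actual CNNGradientCheckTest code:

```java
import org.deeplearning4j.nn.conf.CNN2DFormat;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

import java.util.Arrays;
import java.util.Collection;

@RunWith(Parameterized.class)
public class CnnFormatParamSketch {
    private final CNN2DFormat format;

    public CnnFormatParamSketch(CNN2DFormat format) {
        this.format = format;
    }

    @Parameterized.Parameters(name = "{0}")
    public static Collection<Object[]> params() {
        return Arrays.asList(new Object[][]{{CNN2DFormat.NCHW}, {CNN2DFormat.NHWC}});
    }

    @Test
    public void gradientCheck() {
        // Each gradient check would build its network with the injected format
        // and run the same assertions for both NCHW and NHWC.
        System.out.println("Running gradient checks for format: " + format);
    }
}
```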
* Gradient checks, SConv2d, bunch of fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Global pooling NHWC support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Also test both float and double for cuDNN NHWC tests
Signed-off-by: Alex Black <blacka101@gmail.com>
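Covering both float and double typically just means wrapping the test body in a loop that switches the default floating-point type; a minimal sketch using Nd4j.setDefaultDataTypes:

```java
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.factory.Nd4j;

public class DtypeLoopSketch {
    public static void main(String[] args) {
        for (DataType dt : new DataType[]{DataType.FLOAT, DataType.DOUBLE}) {
            // Switch the global default so every array created in the test uses this dtype
            Nd4j.setDefaultDataTypes(dt, dt);
            // ... run the cuDNN NHWC test body here and compare against the built-in helper ...
            System.out.println("Ran NHWC checks with dtype " + dt);
        }
    }
}
```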
* Javadoc
Signed-off-by: Alex Black <blacka101@gmail.com>
* Ignore failing keras import test until next PR
Signed-off-by: Alex Black <blacka101@gmail.com>
* Remove old nd4j-jackson dependencies
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix use of old/deprecated JSON serializer
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix deserialization
Signed-off-by: Alex Black <blacka101@gmail.com>
* Delete test using deleted ser/de classes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Delete another copy of old test
Signed-off-by: Alex Black <blacka101@gmail.com>
* Input format extended
* Deleted redundant code
* Added weights format to conv2d config
* Refactoring
* dl4j base test functionality
* Different tests base class per module
* Check base class for dl4j-graph subproject tests
* Check if test classes extend BaseDL4JTest
* Use nd4j-common-tests as a transitive dependency
* Enums and tests added
* Added codegenerated methods
* Use namespace methods
* Replace DifferentialFunctionFactory with codegenerated classes
* Fixed linspace
* Namespaces regenerated
* Namespaces used instead of factory
* Regenerated base classes
* Regenerated namespaces
* Generate nd4j namespaces
* Constructors accepting INDArrays
* Generated some ops
* Some fixes
* SameDiff ops regenerated
* Regenerated nd4j ops
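The regenerated namespaces replace direct DifferentialFunctionFactory calls with accessor-based namespace classes (sd.math(), sd.nn(), and their Nd4j counterparts); a small SameDiff sketch of the intended call style, with method names given only as examples:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class NamespaceSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable in = sd.var("in", Nd4j.rand(DataType.FLOAT, 2, 3));

        // Ops are created through namespace accessors rather than a central factory
        SDVariable abs = sd.math().abs(in);
        SDVariable sum = abs.sum();     // reduction helpers remain on SDVariable

        INDArray out = sum.eval();
        System.out.println(out);
    }
}
```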
* externalErrors moved
* Compilation fixes
* SquaredDifference - strict number of args
* Deprecated code cleanup. Proper base class for tests.
* Extend test classes with BaseND4JTest
* Extend test classes with BaseDL4JTest
* Legacy code
* DL4J cleanup
* Exclude test utils from base class check
* Tests fixed
* Arbiter tests fix
* Test dependency scope fix + pom.xml formatting
Signed-off-by: Alex Black <blacka101@gmail.com>
* Significant number of fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Another round of fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Another round of fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Few additional fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* DataVec missing test scope dependencies
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* Increase default timeout on Spark tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8840 disable deeplearning4j-nlp-korean module for scala 2.12
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix for change-scala-versions.sh
Signed-off-by: Alex Black <blacka101@gmail.com>
* CUDA test fixes + more timeout issues
Signed-off-by: Alex Black <blacka101@gmail.com>
* More CUDA
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small fix for cuDNN subsampling + same mode
Signed-off-by: Alex Black <blacka101@gmail.com>
* Flaky test fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Reduce memory requirements for ValidateCuDNN BN test
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix slow/inefficient ScalNet tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Increase timeouts to avoid failures if CI machines are slower than expected
Signed-off-by: Alex Black <blacka101@gmail.com>
* Ignore flaky test (issue #8849) and increase timeout for slow CI downloads
Signed-off-by: Alex Black <blacka101@gmail.com>
* Add a message to the runtime exception
Signed-off-by: Paul Dubs <paul.dubs@gmail.com>
* Output Convolutions as PNG instead of JPG
A lossless encoding is useful in this case, as it allows small details to be preserved
Signed-off-by: Paul Dubs <paul.dubs@gmail.com>
* libnd4j added optional alpha and beta support to matmul
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j typos fixes
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j add optional alpha and beta to matmul_bp
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j one more typo fix
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j added optional alpha and beta to mkl implementation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* MatMul alpha/beta on java side
Signed-off-by: raver119 <raver119@gmail.com>
* alpha/beta fix in libnd4j
Signed-off-by: raver119 <raver119@gmail.com>
* alpha/beta fix in matmul_bp
Signed-off-by: raver119 <raver119@gmail.com>
* restored view validation
Signed-off-by: raver119 <raver119@gmail.com>
* gemv/gemm now use MatMul op
Signed-off-by: raver119 <raver119@gmail.com>
* few tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* additional INDArray.mmul signature
Signed-off-by: raver119 <raver119@gmail.com>
* make C order default for INDArray.mmul, unless both A/B have F order
Signed-off-by: raver119 <raver119@gmail.com>
* Nd4j.gemm validation fix
Signed-off-by: raver119 <raver119@gmail.com>
* disable mkldnn matmul for xxf with beta != 0 case
Signed-off-by: raver119 <raver119@gmail.com>
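The alpha/beta support follows the classic GEMM contract C = alpha * op(A) * op(B) + beta * C; a minimal sketch using the Nd4j.gemm overload that takes a pre-allocated result array (assuming that overload is unchanged by this work):

```java
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class GemmAlphaBetaSketch {
    public static void main(String[] args) {
        INDArray a = Nd4j.rand(DataType.FLOAT, 4, 3);
        INDArray b = Nd4j.rand(DataType.FLOAT, 3, 5);
        // gemm has historically required an f-ordered result array
        INDArray c = Nd4j.rand(DataType.FLOAT, 4, 5).dup('f');

        // c = 2.0 * a.mmul(b) + 0.5 * c
        Nd4j.gemm(a, b, c, false, false, 2.0, 0.5);

        // Plain mmul remains the simple alpha=1, beta=0 path
        INDArray d = a.mmul(b);
        System.out.println(c);
        System.out.println(d);
    }
}
```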
* SimpleRnn workspace fix + timeouts
Signed-off-by: Alex Black <blacka101@gmail.com>
* two more tests + minor fix in matmul platform check
Signed-off-by: raver119 <raver119@gmail.com>
* Flaky test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* propagate testresources profile
Signed-off-by: raver119 <raver119@gmail.com>
* Resources fix + flaky test fix
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Oleg <oleg.semeniv@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* init in this branch
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* LeNet MNIST workflow
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* small fix for calculations
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* for Alex to check placeholder null pointer issue
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* CNN3D workflow
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* State for launching on dxg to regenerate DL4J examples
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* SD RNN test case workflow
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* small fixes
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* checkpoint at lstmBlock: "Input array 1 (x) rank must be ...; got input with rank 2" issue
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* Fix LSTMLayer inputs order
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* lstm mismatch with c++ op issue
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* LSTMLayer config draft
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* LSTMLayer config draft v2
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* Have doubts that I had to do this
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* NDRNN generated by codegen
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* LSTMLayerTestCases draft
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* minor fixes again
* added LSTMLayer test cases to nd4j-tests + set Preconditions in LSTMLayer constructors
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* added previously lost SDCNN test cases
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* overrode getNumOutputs from DynamicCustomOp in LSTMLayer and reorganized LSTMLayerOutputs to match the C++ op
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* finished with LSTMLayerOutputs
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* Fix MKLDNN platform checks (i.e., when MKLDNN can be used vs. not)
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix LSTMLayerWeights input order
Signed-off-by: Alex Black <blacka101@gmail.com>
* More fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* minor fixes
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* fixed LSTMLayer testcases
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* finished SameDiffRNNTestCase
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* finished all testcases + minor fixes
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* Multiple generation-related fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix multiple issues
Signed-off-by: Alex Black <blacka101@gmail.com>
* More fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* LSTM fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Regenerate ND4J namespaces and fix multiple issues
Signed-off-by: Alex Black <blacka101@gmail.com>
* changed SameDiffRNNTestCase
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* Small fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* added Nd4j.getRandom().setSeed(12345) where needed
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* #8828 Fix ND4J profiler NaN/Inf checks when using OpContext
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8828 Fix ND4J profiler NaN/Inf checks when using OpContext
Signed-off-by: Alex Black <blacka101@gmail.com>
* Tweak to weight init for SameDiff CNN test case
Signed-off-by: Alex Black <blacka101@gmail.com>
* Tweaks for test cases
Signed-off-by: Alex Black <blacka101@gmail.com>
* Ignore failing tests until fixed
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* Cholesky fixed
* Constructors added
* MatMul wrapper
* Constructor added
* Missing wrappers added
* Generate Linalg namespace added
* Output data types
* Unit tests
* Added mmul
* Code generation
* Code generated
* Build fixed
* Fixing signatures
* Tests fixed
* Tests fixed
* Added enum
* Fix tests
* Some fixes
* Eye test fixed
* SameDiff: small fix for renameVariable - also replace variable name in lossVariable list if necessary
Signed-off-by: Alex Black <blacka101@gmail.com>
* Some fixes
* Tests fixed
* Revert wrong fix
* Some fixes
* Some fixes
* Extending base test class
* Added pad
* Fixed for generated signatures
* Fixes due to nd4j codegen
* Backwards compatibility fixes
* Fixed errors in tests, reverted wrong changes
* Test fixed
* Added missing operations used for nd4s operators
* Compilation fixed
* Added meshgrid
* Fixed constructors
* fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix bad commit (incorrectly reverted change from master)
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fixed test
Co-authored-by: Alex Black <blacka101@gmail.com>
* #8777 MultiLayerNetwork.evaluate(MultiDataSetIterator) overload
Signed-off-by: Alex Black <blacka101@gmail.com>
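A usage sketch of the new overload, assuming it mirrors the existing evaluate(DataSetIterator) behaviour for single-input/single-output data:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.evaluation.classification.Evaluation;
import org.nd4j.linalg.dataset.api.iterator.MultiDataSetIterator;

public class EvaluateOverloadSketch {
    // net and iter are assumed to be created elsewhere
    public static void printStats(MultiLayerNetwork net, MultiDataSetIterator iter) {
        // Previously only evaluate(DataSetIterator) existed; #8777 adds the MDS variant
        Evaluation eval = net.evaluate(iter);
        System.out.println(eval.stats());
    }
}
```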
* #8768 SameDiff.equals
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8750 shade freemarker library and switch to it in DL4J UI
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8704 DL4J UI redirect
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8776 RecordReaderDataSetIterator builder collectMetaData fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8718 Fix DL4J doEvaluation metadata
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8715 ArchiveUtils - Add option to not log every extracted file
Signed-off-by: Alex Black <blacka101@gmail.com>
* No exception for evaluations that don't support metadata
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8765 CompGraph+MDS fix for SharedTrainingMaster
Signed-off-by: Alex Black <blacka101@gmail.com>
* small fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Timeout
Signed-off-by: Alex Black <blacka101@gmail.com>
* Ignore
Signed-off-by: Alex Black <blacka101@gmail.com>
* Revert freemarker shading
Signed-off-by: Alex Black <blacka101@gmail.com>
* Ignore
Signed-off-by: Alex Black <blacka101@gmail.com>
* tf op initial
* ..
* protobuf parsing working
* model build working
* test passing
* headers
* conffix
* service loader + tests
* revert cuda version
* msg
* override
* refacc
* pom
* rem bad import
* dtype fix + const cast caching
* rem unnecessary fields
* rem println
* rem dep
* refacc
* rem redundant arg
* Ignore TFOpLayer in DTypeTests
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* Revive and start updating DL4J integration tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Add SameDiff support - first pass
Signed-off-by: Alex Black <blacka101@gmail.com>
* SameDiff test case generation
Signed-off-by: Alex Black <blacka101@gmail.com>
* SameDiff integration tests polishing
Signed-off-by: Alex Black <blacka101@gmail.com>
* More SameDiff integration test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Final polish
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small test tweak
Signed-off-by: Alex Black <blacka101@gmail.com>
* Add check to ensure ALL tests extend BaseND4JTest for proper timeouts + logging
Signed-off-by: Alex Black <blacka101@gmail.com>
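Conceptually the shared base class mainly contributes a per-test timeout and some start-of-test logging; a simplified JUnit 4 sketch of that pattern (not the actual BaseND4JTest source):

```java
import org.junit.Before;
import org.junit.Rule;
import org.junit.rules.TestName;
import org.junit.rules.Timeout;

public abstract class BaseTestSketch {
    @Rule
    public Timeout timeout = Timeout.seconds(90);   // default per-test timeout, overridable by subclasses

    @Rule
    public TestName testName = new TestName();

    @Before
    public void logTestStart() {
        System.out.println("Starting test: " + getClass().getSimpleName() + "." + testName.getMethodName());
    }
}
```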
* Add 'must extend BaseDL4JTest' check for deeplearning4j-core
Signed-off-by: Alex Black <blacka101@gmail.com>
* Flush logging on workspace exit during tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8565 Normalizer toString/hashcode
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8731 ImagePreProcessingScaler labels/segmentation fix
Signed-off-by: Alex Black <blacka101@gmail.com>
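For segmentation-style data the labels are themselves images, so the scaler has to be told to normalize them as well; a sketch assuming the standard DataNormalization fitLabel/preProcess contract:

```java
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.dataset.api.preprocessor.ImagePreProcessingScaler;

public class SegmentationScalerSketch {
    public static void scale(DataSet ds) {
        ImagePreProcessingScaler scaler = new ImagePreProcessingScaler(0, 1);
        scaler.fitLabel(true);      // also scale the label/mask images, not just the features
        scaler.preProcess(ds);      // scales features (and labels, per the flag) to [0, 1]
    }
}
```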
* #8691 Fix SameDiffLayer/Vertx finetuning and parameter setting support
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8663 DL4J embedding layer weight init - don't depend on vocab size
Signed-off-by: Alex Black <blacka101@gmail.com>
* EmbeddingLayer test tweak
Signed-off-by: Alex Black <blacka101@gmail.com>
* Test spam reduction
Signed-off-by: Alex Black <blacka101@gmail.com>
* Arbiter bad import fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small spark test tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Arbiter test log spam reduction
Signed-off-by: Alex Black <blacka101@gmail.com>
* More test spam reduction
Signed-off-by: Alex Black <blacka101@gmail.com>
* Copied RegressionTest100b4.java to RegressionTest100b6.java, renaming b4 -> b6
* assertEquals -> assertTrue for half dtype
Signed-off-by: atuzhykov <andrewtuzhukov@gmail.com>
* Gradients tests added
* Fix for Standard deviation serialization + test
Signed-off-by: Alex Black <blacka101@gmail.com>
* More fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Test fixed
* Spark driver host config for CI
Signed-off-by: Alex Black <blacka101@gmail.com>
* Op validation timeout increase
Signed-off-by: Alex Black <blacka101@gmail.com>
* Gradient check - fix for low probability test failure due to randomly all 0s mask
Signed-off-by: AlexDBlack <blacka101@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* Change the regular expression for the Bert tokenizer.
The previous regular expression causes StackOverflowErrors
if given a document with a large amount of whitespace. I
believe the one I've provided is equivalent.
* Add test for new BertWordPieceTokenizer RegEx.
This test should cause a StackOverflowError with the previous version.
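The failure mode is generic to java.util.regex: an alternation repeated under a quantifier is matched recursively, so a long enough run of matching characters overflows the stack, while the equivalent character-class form is handled iteratively. An illustrative sketch with hypothetical patterns (not the actual tokenizer regex):

```java
import java.util.regex.Pattern;

public class RegexStackSketch {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100_000; i++) {
            sb.append(' ');
        }
        String longWhitespace = sb.toString();

        // Alternation under '+' is matched via recursion and can throw StackOverflowError
        Pattern fragile = Pattern.compile("(\\s|\\p{Punct})+");
        // A single character class expresses the same set without the recursion
        Pattern robust = Pattern.compile("[\\s\\p{Punct}]+");

        System.out.println(robust.matcher(longWhitespace).matches());   // true, completes quickly
        try {
            System.out.println(fragile.matcher(longWhitespace).matches());
        } catch (StackOverflowError e) {
            System.out.println("fragile pattern overflowed the stack");
        }
    }
}
```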
* Fix assert off by one.
* Test speedups / integration test run only for CUDA - NLP
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* nlp-uima CUDA slow tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Spark CUDA timeout fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fixes for global pooling + masking with different mask datatypes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Global pooling backprop dtype fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Update Japanese translation for Deeplearning4J UI (#8525)
Signed-off-by: k-tamura <ktamura.biz.80@gmail.com>
* RL4J: Remove processing done on observations in Policy & Async (#8471)
* Removed processing from Policy.play() and fixed missing resets
Signed-off-by: unknown <aboulang2002@yahoo.com>
* Adjusted unit test to check if DQNs have been reset
Signed-off-by: unknown <aboulang2002@yahoo.com>
* Fixed a couple of problems, added and updated unit tests
Signed-off-by: unknown <aboulang2002@yahoo.com>
* Removed processing from AsyncThreadDiscrete
Signed-off-by: unknown <aboulang2002@yahoo.com>
* Fixed a few problems
Signed-off-by: unknown <aboulang2002@yahoo.com>
* python version bump
* increase
* RL4J: Replace gym-java-client with JavaCPP (#8595)
* RL4J: Replace gym-java-client with JavaCPP
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
Co-authored-by: Kohei Tamura <ktamura.biz.80@gmail.com>
Co-authored-by: Alexandre Boulanger <44292157+aboulang2002@users.noreply.github.com>
Co-authored-by: Max Pumperla <max.pumperla@googlemail.com>
Co-authored-by: Samuel Audet <samuel.audet@gmail.com>
* Cleanup modules
* Moving subprojects to nd4j-api
* Project cleanup
* Dropped AWS sub-project
* dl4j-util moved to core
* dl4j-perf moved to core
* Test coverage
* Revert "Moving subprojects to nd4j-api"
This reverts commit bc6eb573c6b60c407ade47172c5d204725077e6b.
* Moved nd4j-buffer and nd4j-context to nd4j-api
* Rolled back change
* Revert "Project cleanup"
This reverts commit 64ac7f369b2d968f7be437718034f093fc886ffc.
* Datavec cleaned up
* Revert "Moved nd4j-buffer and nd4j-context to nd4j-api"
This reverts commit 75f4e8da80d2551e44e1251dd6c5923289fff8e1.
# Conflicts:
# nd4j/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/autodiff/opvalidation/ReductionBpOpValidation.java
* Resolve conflict
* Compilation fixed.
* nd4j-context and nd4j-buffer moved to nd4j-api
* Fixed TF mapping for mmul
* Fix for dl4j-cuda tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Move last few tests from deeplearning4j-nn to -core
Signed-off-by: Alex Black <blacka101@gmail.com>
* Remove incorrect TF import mapping for TensorMmul op
Signed-off-by: Alex Black <blacka101@gmail.com>
* Cleaned TF mapping
* Fix path for test results on windows
* Remove old dependency
Signed-off-by: Alex Black <blacka101@gmail.com>
* One more attempt to fix path for test results on windows
* fixup! One more attempt to fix path for test results on windows
* fixup! One more attempt to fix path for test results on windows
Co-authored-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Serhii Shepel <9946053+sshepel@users.noreply.github.com>
Co-authored-by: raver119 <raver119@gmail.com>
* Add maven profile + base tests methods for integration tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Switch from system property to environment variable; seems more reliable in intellij
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add nd4j-common-tests module, and common base test; cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Ensure all ND4J tests extend BaseND4JTest
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Test spam reduction, import fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add test logging to nd4j-aeron
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix unintended change
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Reduce Spark test log spam
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More test spam cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Significantly speed up TSNE tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* W2V iterator test unit/integration split
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More NLP test speedups
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Avoid debug/verbose mode leaking between tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* test tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Arbiter extends base DL4J test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Arbiter test speedup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* nlp-uima test speedup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More test speedups
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix ND4J base test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Few small ND4J test speed improvements
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J tests speedup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More tweaks
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Even more test speedups
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More tweaks
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Various test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* More test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Add ability to specify number of threads for C++ ops in BaseDL4JTest and BaseND4JTest
Signed-off-by: Alex Black <blacka101@gmail.com>
* nd4j-aeron test profile fix for CUDA
Signed-off-by: Alex Black <blacka101@gmail.com>
* Allow scalar op result array auto allocation
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Don't swallow underlying exception for calculateOutputShape execution failures
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Ignore for known keras failure
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Timeouts added
* Added some ops
* Ops added
* Fixed tests
* Minor fix
* Some fixes
* Digamma added
* Small fixes
* Timeouts added
* Added some ops
* Ops added
* Fixed tests
* Minor fix
* Some fixes
* Digamma added
* Small fixes
* Fused batch norm fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tests switched off.
* Added test for resize_bicubic.
* Eliminated waste in the bicubic resize test.
* Switched off multithreading explicitly.
* HsvToRgb and RgbToHsv added
* Eliminated wasted comments and conformed to proper float constants.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed multithreading with resize_bicubic helper for cpu platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
* ResizeBicubic was fixed.
* Some fixes
* Fix op name
* Validation fixed.
* Clarifications for tests
* Wrappers and small fixes for new ops.
* cleaned up bert iterator tests (#110)
Signed-off-by: eraly <susan.eraly@gmail.com>
* Various pre-release fixes (#111)
* Various fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix default dtypes for MaxPoolWithArgmax
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small pre-release tweak (#112)
* Log UI address on launch as in previous Play-based UI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Logging level tweak for UI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* http not https
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* datavec python ensure host (#113)
* ensure host
* one more host ensure
* info->debug
* [WIP] reverse improvements (#115)
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* reverse draft
Signed-off-by: raver119 <raver119@gmail.com>
* reverse kernel
Signed-off-by: raver119 <raver119@gmail.com>
* reverse kernel
Signed-off-by: raver119 <raver119@gmail.com>
* 2 micro fixes
Signed-off-by: raver119 <raver119@gmail.com>
* Shugeo resize fix5 (#102)
* Refactored image resize ops to use TF-like bool args as input.
* Refactored helpers for cpu implementation of resize_bilinear and resize_nearest_neighbor ops.
* Refactored cuda implementation for image.resize_bilinear and image.resize_nearest_neighbor ops helpers.
* Refactored nearest_neighbor resize op.
* Added a pair of tests for special case of resize_bilinear algorithm.
* Fixed issue with resize_bilinear op.
* Refactored cpu implementation for helpers with resize_nearest_neighbor op.
* Final fixes for resize ops to conform to TF v1.5
* Refactored cuda helpers for resize_nearest_neighbor op.
* Fixed resize_bilinear to accept proper data.
* Fixed issue with non-float input for resize_bilinear op.
* Refactored cuda helper for resize_bilinear to properly process non-float inputs.
* Added tests for resize_bilinear with int inputs.
* Fixed ResizeBilinear wrapper
* Tests fixed
* Fixed float and bool constants to avoid overflow on some compilers.
* Corrected float constants with float data type.
* Added f suffix for float constants.
* Corrected float constant to avoid overflow with initializing lists.
* Corrected float initializing list with float input.
* Corrected bool constant with initializing list.
* Corrected float and bool values with initializing lists.
* Fixed wrong constant.
* Fixed issue with 1x1 input picture for resize.
* ResizeBilinear default values on import fix
Signed-off-by: raver119 <raver119@gmail.com>
* Keras causal conv1d support first steps
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Causal conv mode
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Gradient check and fixes for causal conv1d
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix Conv1D import and testing
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small keras test fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Don't allow setting causal convolution mode to conv2d/3d layers
Signed-off-by: Alex Black <blacka101@gmail.com>
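On the DL4J side, causal padding is expressed as a convolution mode that is only legal for 1D convolution layers; a configuration sketch assuming the ConvolutionMode.Causal value added by this work:

```java
import org.deeplearning4j.nn.conf.ConvolutionMode;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.Convolution1DLayer;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class CausalConv1dSketch {
    public static MultiLayerConfiguration build() {
        return new NeuralNetConfiguration.Builder()
                .list()
                .layer(new Convolution1DLayer.Builder(3)          // kernel size 3
                        .nIn(8).nOut(16)
                        .convolutionMode(ConvolutionMode.Causal)  // assumed enum value; only valid for 1D layers
                        .activation(Activation.RELU)
                        .build())
                .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nOut(5).activation(Activation.SOFTMAX).build())
                .setInputType(InputType.recurrent(8))
                .build();
    }
}
```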
* More robustly infer nIn for recurrent layers for ambiguous NCW and NWC cases
Signed-off-by: Alex Black <blacka101@gmail.com>
* Polish and cleanup
Signed-off-by: Alex Black <blacka101@gmail.com>
* Update shaded Jackson version to 2.10.1
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove no longer needed scala compiler plugin from UI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix op name for BitwiseAnd op
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* TimeDistributedLayer mask array fix + test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Switch Nearest neighbors server implementation from Play to Vertx
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* No more scala version suffix for nearest neighbor server
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* logback.xml fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Header tweaks
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #6377 Keras sparse cross entropy loss import support
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix small bug in reshape preprocessor
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Minor optimization.
* Reduce number of objects.
* Extend arrays when limit reached
* Test
* Some fixes.
* Small fix
* Wrong condition fixed.
* Fixes of reallocation.
* Small fix.
* Tests
* Clean up
* Test added.
* Tests and some fixes.
* Test
* Test fixed.
* Conflict fixed.
* UX improved
* Fix repo links and clean up old github templates
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More link updates
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small base spark test fix; ROC toString for empty ROC
Signed-off-by: Alex Black <blacka101@gmail.com>
* More fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* don't allocate so many float arrays, use INDArrays instead
Signed-off-by: Ryan Nett <rnett@skymind.io>
* re-add pre-processing, better names
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use float[][] pool to avoid extra ndarray creation
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add fallback for Conv layer activation
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add fallback and config option for LSTM layers
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add fallback option and setting for dropout
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix comments and error messages
Signed-off-by: Ryan Nett <rnett@skymind.io>
* move helper fail count to layer instance
Signed-off-by: Ryan Nett <rnett@skymind.io>
* ignore helperCountFail for equals and json
Signed-off-by: Ryan Nett <rnett@skymind.io>
* typo fix (MLK -> MKL)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add MKLDNN to error messages
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add helperAllowFallback to builders, deprecate cudnnAllowFallback
Signed-off-by: Ryan Nett <rnett@skymind.io>
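The new builder option generalizes the old cuDNN-only flag to any backend helper (cuDNN or MKL-DNN); a usage sketch assuming the helperAllowFallback(boolean) setter added here:

```java
import org.deeplearning4j.nn.conf.layers.BatchNormalization;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;

public class HelperFallbackSketch {
    public static void main(String[] args) {
        // If the cuDNN/MKL-DNN helper fails, fall back to the built-in implementation
        ConvolutionLayer conv = new ConvolutionLayer.Builder(3, 3)
                .nIn(3).nOut(8)
                .helperAllowFallback(true)      // assumed setter; replaces the deprecated cudnnAllowFallback(...)
                .build();

        BatchNormalization bn = new BatchNormalization.Builder()
                .helperAllowFallback(false)     // fail loudly instead of silently using the slower path
                .build();

        System.out.println(conv + " " + bn);
    }
}
```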
* Test updated to reflect changes in random generation
* Invalid resource removed and fixed tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Small batch norm fix (cuda/no-mkldnn)
Signed-off-by: Alex Black <blacka101@gmail.com>
* Dropout fix for RnnOutputLayer
Signed-off-by: Alex Black <blacka101@gmail.com>
* Allow block size < 2 in batch_to_space_nd and space_to_batch_nd for import, in spite of what TF docs say
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Nd4j pad update
Signed-off-by: Ryan Nett <rnett@skymind.io>
* switched from guava Immutables to Collections.unmodifiableList/Map
Signed-off-by: Ryan Nett <rnett@skymind.io>
* javadoc
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use new pad
Signed-off-by: Ryan Nett <rnett@skymind.io>
* conv tests use OpValidation
Signed-off-by: Ryan Nett <rnett@skymind.io>
* deconv3d overrides
Signed-off-by: Ryan Nett <rnett@skymind.io>
* test fix for the new pad method
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more test fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more test fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* rename SameDiff function methods to op (except for the actual SameDiff function ones)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more pad overloads, test fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* test updates
Signed-off-by: Ryan Nett <rnett@skymind.io>
* conv1d test
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove Conv1D tf import (there isn't a TF conv1d op)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove numThreads from Nd4j
Signed-off-by: Ryan Nett <rnett@skymind.io>
* replace Old ops with their newer versions, deprecate ones that haven't already been deprecated
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove use of setNumThreads
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix for Reverse and ATan2
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix test for wrong equals type
Signed-off-by: Ryan Nett <rnett@skymind.io>
* well it works now
Signed-off-by: Ryan Nett <rnett@skymind.io>
* better javadocs
Signed-off-by: Ryan Nett <rnett@skymind.io>
* NonNulls
Signed-off-by: Ryan Nett <rnett@skymind.io>
* better array literal
Signed-off-by: Ryan Nett <rnett@skymind.io>
* re-add tf import stuff (will remove later)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* conv1d config load fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* partial config usage changes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove Old op classes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* config property fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* removed one too many ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* Small build fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix RL4J
Signed-off-by: Alex Black <blacka101@gmail.com>
* Test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Another fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* First pass on SameDiff op exec debug listener
Signed-off-by: Alex Black <blacka101@gmail.com>
* #7555 DL4J helpers - don't fall back on builtin for op profiler exceptions
Signed-off-by: Alex Black <blacka101@gmail.com>
* Exec debugging listener + fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix import counts for TF ops in OpValidationSuite
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix bad DL4J test configuration
Signed-off-by: Alex Black <blacka101@gmail.com>
* Exec debugging listener polish
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Another fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* wip
* update interface, add null implementations.
* Breaking one test in a weird way.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* createUninitializedDetached refactored.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Keras model import - updater lr fix
Signed-off-by: eraly <susan.eraly@gmail.com>
* Keras model import - updater lr fix, cleanup
Signed-off-by: eraly <susan.eraly@gmail.com>
When doing classification we need to know `numPossibleLabels`. If it's set to -1, we get obscure and confusing NullPointerExceptions when accessing labels during `ComputationGraph.fit` on the iterator. This PR blocks the user from shooting themselves in the foot.
* System info export for debugging and bug reporting
Signed-off-by: Ryan Nett <rnett@skymind.io>
* class name fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add version information, pointer memory info
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add nvidia-smi and nvcc info
Signed-off-by: Ryan Nett <rnett@skymind.io>
* line cleanup
Signed-off-by: Ryan Nett <rnett@skymind.io>
* nvidia-smi run works
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add oshi dependency
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use OS info, add workspaces info
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use ServiceLoader to load GPU information
Signed-off-by: Ryan Nett <rnett@skymind.io>
* register service
Signed-off-by: Ryan Nett <rnett@skymind.io>
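A generic sketch of the ServiceLoader pattern used here; the GpuInfoProvider interface name is hypothetical and stands in for whatever service interface the system-info code actually defines:

```java
import java.util.ServiceLoader;

public class GpuInfoLoaderSketch {

    /** Hypothetical service interface; a GPU backend would ship an implementation. */
    public interface GpuInfoProvider {
        String describeGpus();
    }

    public static void main(String[] args) {
        // Implementations are discovered from a
        // META-INF/services/<fully qualified interface name> file on the classpath
        ServiceLoader<GpuInfoProvider> loader = ServiceLoader.load(GpuInfoProvider.class);
        for (GpuInfoProvider provider : loader) {
            System.out.println(provider.describeGpus());
        }
        // If no backend registers a provider (e.g. CPU-only), the loop simply does nothing
    }
}
```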
* moved service out of NativeOpsHolder (private constructor)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* added newline
Signed-off-by: Ryan Nett <rnett@skymind.io>
* added license
Signed-off-by: Ryan Nett <rnett@skymind.io>
* and one more
Signed-off-by: Ryan Nett <rnett@skymind.io>
* copyright update
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* removed unused imports
Signed-off-by: Ryan Nett <rnett@skymind.io>
* removed more unused imports
Signed-off-by: Ryan Nett <rnett@skymind.io>
* close streams
Signed-off-by: Ryan Nett <rnett@skymind.io>
* and another one
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use method
Signed-off-by: Ryan Nett <rnett@skymind.io>
* one more copyright
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove double license
Signed-off-by: Ryan Nett <rnett@skymind.io>
* moved test to correct package
Signed-off-by: Ryan Nett <rnett@skymind.io>
* classpath update
Signed-off-by: Ryan Nett <rnett@skymind.io>
* classpath for java >8 fix
Signed-off-by: Ryan Nett <rnett@skymind.io>