* RL4J: Add generic update rule (#502)
Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>
* Shyrma reduce (#481)
* - start working on improving cpu legacy code for reduce ops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further work on improving legacy loops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - still working on improving reduce ops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further work on improving reduce ops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - testing speed run of new reduce op
Signed-off-by: Yurii <iuriish@yahoo.com>
* - working on improvement of default loop for reduce op
Signed-off-by: Yurii <iuriish@yahoo.com>
* - update signatures of stuff which calls reduce ops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - make corrections in cuda reduce kernels
Signed-off-by: Yurii <iuriish@yahoo.com>
* - change loop for default case in broadcast legacy ops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - comment some shape stuff
Signed-off-by: Yurii <iuriish@yahoo.com>
* - comment unnecessary prints in RNGTests
Signed-off-by: Yurii <iuriish@yahoo.com>
* - finish resolving conflicts after merging master
Signed-off-by: Yurii <iuriish@yahoo.com>
* - get rid of some compilation errors in cuda code
Signed-off-by: Yurii <iuriish@yahoo.com>
* - minor changes
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further search for bug causing crash on java test
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add scalar case in reduce_ ... exec stuff
Signed-off-by: Yurii <iuriish@yahoo.com>
* - minor corrections in NativeOps.cu
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add switch to scalar case in execReduceXD functions
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add support for vector old shapes in ConstantShapeHelper::createShapeInfoWithNoUnitiesForReduce
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correct cuda mirrorPad
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add support for vector old shapes in cuda createShapeInfoWithNoUnitiesForReduce
Signed-off-by: Yurii <iuriish@yahoo.com>
Co-authored-by: raver119 <raver119@gmail.com>
* Add support for CUDA 11.0 (#492)
* Add support for CUDA 11.0
* libnd4j tweaks for CUDA 11
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* bindings update, again?
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* * Update versions of JavaCPP Presets for FFmpeg, OpenBLAS, and NumPy
* update API to match CUDA 8
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* * Update version of JavaCPP Presets for CPython
* C++ updated for cuDNN 8.0
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* one more test
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* one more test
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* one more test
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* 128-bit alignment for workspaces
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* change seed in 1 test
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Fix dependency duplication in python4j-parent pom
* Fix group id in python4j-numpy
* few tests tweaked
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Remove macosx-x86_64-gpu from nd4j-tests-tensorflow
* few minor tweaks for IndexReduce
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* one test removed
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
Co-authored-by: raver119@gmail.com <raver119@gmail.com>
Co-authored-by: Serhii Shepel <9946053+sshepel@users.noreply.github.com>
* RL4J: Add SyncTrainer and AgentLearnerBuilder for a few algorithms (#504)
Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>
Co-authored-by: Alexandre Boulanger <44292157+aboulang2002@users.noreply.github.com>
Co-authored-by: Yurii Shyrma <iuriish@yahoo.com>
Co-authored-by: raver119 <raver119@gmail.com>
Co-authored-by: Serhii Shepel <9946053+sshepel@users.noreply.github.com>
* error code check in CudaMemoryManager
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* clear
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* clear model before exiting
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* MultiLayerNetwork/ComputationGraph.close() [WIP] (#460)
* MultiLayerNetwork/ComputationGraph.close()
Signed-off-by: Alex Black <blacka101@gmail.com>
* Copyright header
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* - fix for handling release of nested DataBuffers
- couple of additional tests for released DataBuffers
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* PW test: increase number of epochs slightly
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
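A minimal sketch of the close() usage this PR enables, assuming a trivially small network; once close() returns, the model's off-heap buffers are released and the instance must not be reused:

```java
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class CloseExample {
    public static void main(String[] args) {
        MultiLayerNetwork net = new MultiLayerNetwork(new NeuralNetConfiguration.Builder()
                .list()
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                        .nIn(4).nOut(2).activation(Activation.IDENTITY).build())
                .build());
        net.init();
        net.output(Nd4j.rand(3, 4)); // use the network as usual
        net.close();                 // then release parameters and working memory eagerly
    }
}
```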
* Remove debug line from KerasConvolution2D
Signed-off-by: Alex Black <blacka101@gmail.com>
* Remove more debug lines
Signed-off-by: Alex Black <blacka101@gmail.com>
* Update docs links to new website URLs
Signed-off-by: Alex Black <blacka101@gmail.com>
* One more link
Signed-off-by: Alex Black <blacka101@gmail.com>
* Refactor nd4j-common: org.nd4j.* -> org.nd4j.common.*
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix CUDA (missed nd4j-common package refactoring changes)
Signed-off-by: Alex Black <blacka101@gmail.com>
* nd4j-kryo: org.nd4j -> org.nd4j.kryo
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix nd4j-common for deeplearning4j-cuda
Signed-off-by: Alex Black <blacka101@gmail.com>
* nd4j-grpc-client: org.nd4j.graph -> org.nd4j.remote.grpc
Signed-off-by: Alex Black <blacka101@gmail.com>
* deeplearning4j-common: org.deeplearning4j.* -> org.deeplearning4j.common.*
Signed-off-by: Alex Black <blacka101@gmail.com>
* deeplearning4j-core: org.deeplearning4j.* -> org.deeplearning4j.core.*
Signed-off-by: Alex Black <blacka101@gmail.com>
* deeplearning4j-cuda: org.deeplearning4j.nn.layers.* -> org.deeplearning4j.cuda.*
Signed-off-by: Alex Black <blacka101@gmail.com>
* Import fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* deeplearning4j-nlp-*: org.deeplearning4j.text.* -> org.deeplearning4j.nlp.(language).*
Signed-off-by: Alex Black <blacka101@gmail.com>
* deeplearning4j-ui-model: org.deeplearning4j.ui -> org.deeplearning4j.ui.model
Signed-off-by: Alex Black <blacka101@gmail.com>
* datavec-spark-inference-{server/model/client}: org.datavec.spark.transform -> org.datavec.spark.inference.{server/model/client}
Signed-off-by: Alex Black <blacka101@gmail.com>
* datavec-jdbc: org.datavec.api -> org.datavec.jdbc
Signed-off-by: Alex Black <blacka101@gmail.com>
* Delete org.deeplearning4j.datasets.iterator.impl.MultiDataSetIteratorAdapter in favor of (essentially identical) org.nd4j.linalg.dataset.adapter.MultiDataSetIteratorAdapter
Signed-off-by: Alex Black <blacka101@gmail.com>
* ND4S fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* nd4j-common-tests: org.nd4j.* -> org.nd4j.common.tests
Signed-off-by: Alex Black <blacka101@gmail.com>
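For downstream code, these renames are import-level changes. A small migration sketch for nd4j-common, using Preconditions as an example (moved from org.nd4j.base to org.nd4j.common.base):

```java
// Before the refactor this import was: import org.nd4j.base.Preconditions;
import org.nd4j.common.base.Preconditions;

public class MigrationExample {
    public static void main(String[] args) {
        Preconditions.checkState(args != null, "args must not be null");
    }
}
```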
* Trigger CI
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8878 Ignore CUDA tests on modules with 'nd4j-native under cuda' issue
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix bad imports in tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Add ignore on test (already failing) due to #8882
Signed-off-by: Alex Black <blacka101@gmail.com>
* Import fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Additional import fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* merge conf
* merge conf
* tfkeras tests
* parameterized tests
* rename
* cuda versions
* jccp versions
* 'updates'
* updates
* rnn+mlp passing
* repeat
* updates
* tests
* Update pom.xml
* Update pom.xml
* rem print
* cnn1d model conversion fixed
* cnn1d activate fixed
* cnn1d output shape fix
* cnn1d bprop fix
* cnn1d stack fix
* KerasModelEndToEndTest - Remove permutes for NWC and NHWC format tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fixes and update test - input shapes (NCHW -> NHWC input)
Signed-off-by: Alex Black <blacka101@gmail.com>
* Ignore for known bad tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Multiple fixes - MergeVertex, CNN1D layers, etc
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix issue with RNN/FF preprocessors, time distributed etc with NWC format
Signed-off-by: Alex Black <blacka101@gmail.com>
* LSTM NWC dropout fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Add sequence embedding layer NWC support (configurable output format)
Signed-off-by: Alex Black <blacka101@gmail.com>
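A sketch of how the configurable output format might look on the sequence embedding layer; the outputDataFormat(RNNFormat.NWC) builder call is an assumption inferred from the commit message, not a confirmed signature:

```java
import org.deeplearning4j.nn.conf.RNNFormat;
import org.deeplearning4j.nn.conf.layers.EmbeddingSequenceLayer;

public class NwcEmbeddingExample {
    public static void main(String[] args) {
        // outputDataFormat is an assumed method name; check the release javadoc
        EmbeddingSequenceLayer embedding = new EmbeddingSequenceLayer.Builder()
                .nIn(1000)                       // vocabulary size
                .nOut(64)                        // embedding dimension
                .outputDataFormat(RNNFormat.NWC) // emit [minibatch, seqLength, channels]
                .build();
        System.out.println(embedding);
    }
}
```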
* Fix expected shape in a couple of tests - NWC expected
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix EmbeddingSequenceLayer backprop for NWC output case + add gradient checks
Signed-off-by: Alex Black <blacka101@gmail.com>
* CnnToFeedForwardPreprocessor: align with Keras/TF; fix Keras reshape/flatten
Signed-off-by: Alex Black <blacka101@gmail.com>
* Update ConvDataFormatTests to match new reshape behaviour
Signed-off-by: Alex Black <blacka101@gmail.com>
* Switch hard-coded path to ResourceUtils.listClassPathFiles for TestTFKerasModelImport
Signed-off-by: Alex Black <blacka101@gmail.com>
* TestUtils fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix JSON serde issue with data formats
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix for input dtype inference; fix 2 tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8891 Ignore for TestVertxUIMultiSession until fixed
Signed-off-by: Alex Black <blacka101@gmail.com>
* Restore but deprecate TensorFlowCnnToFeedForwardPreProcessor for older zoo models
Signed-off-by: Alex Black <blacka101@gmail.com>
* Ignore for deprecated preprocessor in DTypeTests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Remove debug printlns
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* Fixing issues from Sonar report
* Proper logging of exceptions
* Coding style fixes
* Use dup parameter
* Cleanup, minor issues
* Fixed CUDA compilation, plus some minor fixes
* First steps for DL4J NHWC support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Conv2d NHWC forward pass works
Signed-off-by: Alex Black <blacka101@gmail.com>
* Conv2d NHWC backprop
Signed-off-by: Alex Black <blacka101@gmail.com>
* Conv2d backprop + fixes; subsampling fwd/bwd; improve tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Zero padding layer NHWC support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Cropping2D NHWC support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Deconv2d NHWC + clean up NHWC test framework code duplication
Signed-off-by: Alex Black <blacka101@gmail.com>
* CnnLossLayer NHWC support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Upsampling and batchnorm NHWC support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Space to depth
Signed-off-by: Alex Black <blacka101@gmail.com>
* Depthwise pt1
Signed-off-by: Alex Black <blacka101@gmail.com>
* Depthwise pt2 and LRN
Signed-off-by: Alex Black <blacka101@gmail.com>
* SpaceToBatch
Signed-off-by: Alex Black <blacka101@gmail.com>
* LocallyConnected2D
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix depthwise nhwc support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Upsampling NHWC - workaround for #8857
Signed-off-by: Alex Black <blacka101@gmail.com>
* Workaround for #8859 - SpaceToDepth
Signed-off-by: Alex Black <blacka101@gmail.com>
* Batch normalization workaround - #8860
Signed-off-by: Alex Black <blacka101@gmail.com>
* cuDNN fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Switch cudnn conv2d to permute based impl due to 'true' NHWC not working
Signed-off-by: Alex Black <blacka101@gmail.com>
* cuDNN subsampling helper NHWC fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Upsampling/batchnorm fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* CNN2D NHWC gradient checks (make CNNGradientCheckTest parameterized)
Signed-off-by: Alex Black <blacka101@gmail.com>
* Gradient checks, SConv2d, bunch of fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Global pooling NHWC support
Signed-off-by: Alex Black <blacka101@gmail.com>
* Also test both float and double for cuDNN NHWC tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Javadoc
Signed-off-by: Alex Black <blacka101@gmail.com>
* Ignore failing keras import test until next PR
Signed-off-by: Alex Black <blacka101@gmail.com>
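Taken together, the NHWC commits above let 2D CNN layers consume channels-last activations directly. A minimal configuration sketch, assuming the CNN2DFormat enum and dataFormat builder option these changes introduce:

```java
import org.deeplearning4j.nn.conf.CNN2DFormat;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;

public class NhwcConvExample {
    public static void main(String[] args) {
        ConvolutionLayer conv = new ConvolutionLayer.Builder()
                .kernelSize(3, 3)
                .nIn(3).nOut(16)
                .dataFormat(CNN2DFormat.NHWC) // accept [minibatch, height, width, channels]
                .build();
        System.out.println(conv);
    }
}
```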
* Increase default timeout on Spark tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8840 disable deeplearning4j-nlp-korean module for scala 2.12
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix for change-scala-versions.sh
Signed-off-by: Alex Black <blacka101@gmail.com>
* CUDA test fixes + more timeout issues
Signed-off-by: Alex Black <blacka101@gmail.com>
* More CUDA
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small fix for cuDNN subsampling + same mode
Signed-off-by: Alex Black <blacka101@gmail.com>
* Flaky test fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Reduce memory requirements for ValidateCuDNN BN test
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix slow/inefficient ScalNet tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Increase timeouts to avoid failures if CI machines are slower than expected
Signed-off-by: Alex Black <blacka101@gmail.com>
* Ignore flaky test (issue #8849) and increase timeout for slow CI downloads
Signed-off-by: Alex Black <blacka101@gmail.com>
* libnd4j added optional alpha and beta support to matmul
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j typos fixes
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j add optional alpha and beta to matmul_bp
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j one more typo fix
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j added optional alpha and beta to mkl implementation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* MatMul alpha/beta on java side
Signed-off-by: raver119 <raver119@gmail.com>
* alpha/beta fix in libnd4j
Signed-off-by: raver119 <raver119@gmail.com>
* alpha/beta fix in matmul_bp
Signed-off-by: raver119 <raver119@gmail.com>
* restored view validation
Signed-off-by: raver119 <raver119@gmail.com>
* gemv/gemm now use MatMul op
Signed-off-by: raver119 <raver119@gmail.com>
* few tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* additional INDArray.mmul signature
Signed-off-by: raver119 <raver119@gmail.com>
* make C order default for INDArray.mmul, unless both A/B have F order
Signed-off-by: raver119 <raver119@gmail.com>
* Nd4j.gemm validation fix
Signed-off-by: raver119 <raver119@gmail.com>
* disable mkldnn matmul for xxf with beta != 0 case
Signed-off-by: raver119 <raver119@gmail.com>
* SimpleRnn workspace fix + timeouts
Signed-off-by: Alex Black <blacka101@gmail.com>
* two more tests + minor fix in matmul platform check
Signed-off-by: raver119 <raver119@gmail.com>
* Flaky test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* propagate testresources profile
Signed-off-by: raver119 <raver119@gmail.com>
* Resources fix + flaky test fix
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Oleg <oleg.semeniv@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* tf op initial
* ..
* protobuf parsing working
* model build working
* test passing
* headers
* conf fix
* service loader + tests
* revert cuda version
* msg
* override
* refacc
* pom
* rem bad import
* dtype fix + const cast caching
* rem unnecessary fields
* rem println
* rem dep
* refacc
* rem redundant arg
* Ignore TFOpLayer in DTypeTests
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* Allow scalar op result array auto allocation
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Don't swallow underlying exception for calculateOutputShape execution failures
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Ignore for known keras failure
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Timeouts added
* Added some ops
* Ops added
* Fixed tests
* Minor fix
* Some fixes
* Digamma added
* Small fixes
* Timeouts added
* Added some ops
* Ops added
* Fixed tests
* Minor fix
* Some fixes
* Digamma added
* Small fixes
* Fused batch norm fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tests switched off.
* Added test for resize_bicubic.
* Eliminated waste in the bicubic resize test.
* Switched off multithreading explicitly.
* HsvToRgb and RgbToHsv added
* Eliminated wasted comments and conformed to proper float constants.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed multithreading with resize_bicubic helper for cpu platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
* ResizeBicubic was fixed.
* Some fixes
* Fix op name
* Validation fixed.
* Clarifications for tests
* Wrappers and small fixes for new ops.
* Keras causal conv1d support first steps
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Causal conv mode
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Gradient check and fixes for causal conv1d
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix Conv1D import and testing
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small keras test fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Don't allow setting causal convolution mode to conv2d/3d layers
Signed-off-by: Alex Black <blacka101@gmail.com>
* More robustly infer nIn for recurrent layers for ambiguous NCW and NWC cases
Signed-off-by: Alex Black <blacka101@gmail.com>
* Polish and cleanup
Signed-off-by: Alex Black <blacka101@gmail.com>
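A configuration sketch for the new causal mode, assuming it is exposed through ConvolutionMode.Causal on the 1D convolution builder as the commits above suggest; causal padding left-pads the sequence so the output at step t never depends on inputs after t:

```java
import org.deeplearning4j.nn.conf.ConvolutionMode;
import org.deeplearning4j.nn.conf.layers.Convolution1DLayer;

public class CausalConvExample {
    public static void main(String[] args) {
        Convolution1DLayer causal = new Convolution1DLayer.Builder()
                .kernelSize(3)
                .nIn(8).nOut(16)
                .convolutionMode(ConvolutionMode.Causal) // left-pad; no lookahead
                .build();
        System.out.println(causal);
    }
}
```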
* #6377 Keras sparse cross entropy loss import support
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix small bug in reshape preprocessor
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Keras model import - updater lr fix
Signed-off-by: eraly <susan.eraly@gmail.com>
* Keras model import - updater lr fix, cleanup
Signed-off-by: eraly <susan.eraly@gmail.com>
* Shugeo strided slice zeros (#14)
* Modified strided_slice op to properly work with empty-like shapes.
* Fixed test for reduce_mean with empty-like input.
* [WIP] Last merge (#15)
* correct logsoftmax loss (#2)
* Small SameDiff listener fix (#4)
* Various fixes (#6)
* #7839 Fix for asXMatrix and tests
* #7866 EmbeddingSequenceLayer dtype fix + test
* #7856 SameDiff save/load stream methods
* #7859 RegressionEvaluation rank 4 fix + tests + axis configuration
* EvaluationBinary 3d/4d
* More evaluation 3d/4d tests
* #7847 Evaluation empty checks
* Small test fix
* #7848 Fix median edge case
* Improve DL4J samediff layer tests
* [WIP] FastText wrapper implemented (#8)
* FastText implemented
* Some fixes
* Fix shapes for wordsNearest
* Validation of input vectors
* Fixes
* Fixed test
* Thread tagged
* Some tweaks
* setContextClassLoader for DeallocatorServiceThread
* Numpy format tests (#1)
* Various fixes (#11)
* #7852 SameDiff gather fix
* #7892 SameDiff placeholder to constant conversion
* #7890 validate input rank for MLN/CG init methods
* Fix broken permute shape calculation
* Permute and gather fixes
* Tests
* #7850 LogSumExp fix + test
* Handful of test fixes
* Empty arrays with non-scalar shapes (#10)
* minor rearrangements for lambdas
* empty tensors with non-scalar shapes
* numpy empty tensors with non-scalar shapes
* few more empty tweaks
* Small fixes
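For context on the "empty tensors with non-scalar shapes" work: an array with any dimension of length 0 holds zero elements but still carries a full shape descriptor. An illustrative sketch:

```java
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class EmptyShapeExample {
    public static void main(String[] args) {
        // Shape [3, 0, 2]: zero elements, but rank 3 and a concrete shape
        INDArray empty = Nd4j.create(DataType.FLOAT, 3, 0, 2);
        System.out.println(empty.isEmpty()); // true
        System.out.println(empty.length());  // 0
    }
}
```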
* conv3d signature update
* micro fix in batchnorm mkldnn
* Import fixes
* Fix
* MKL-DNN update
* Small fill fix
* fill with empty input + test
* Fixes
* Small error improvement
* Fix
* one special test
* couple of fixes for lstm
* Rewrite TFGraphMapper.getNDArrayFromTensor to be maintainable and less error prone
* Fixes
* FP16
* Unsigned
* BFloat16
* Fill op - empty tweaks
* - couple of fixes for empty arrays construction
- stack updated
* strided slice fix
* one transform test
* provide method for reducing shapeInfo in case the input array is empty
* Fixed reduceAlongDimensions to use empty input properly.
* couple of broadcast tests
* couple more broadcast tests + tweak to make them pass
* add check of non-empty to methods producing sub-arrays
* Fixed reshapeC with zeros in shape.
* complete empty check in reduce_... legacy ops
* Concat and cumsum/prod
* Tweak to empty shape inference on import
* add empty check to the rest of reduce legacy ops
* one more test
* correct typo in evalReduceShapeInfoEmpty
* Added tests for reduce_* ops with zero shapes.
* few more tests for empty reductions
* Fixed strided_slice op with empty case and tests.
* one more empty reduction test
* Fixed strided_slice test.
* add empty check to NDArray::reshapei
* infOrMax
* empty min/max with infinity tests
* made unstack work correctly with empty arrays
* few IndexReduce tests + tweaks for empty shapes
* add test for empty concat
* few tests fixed
* Validation fix for reductions on empty shapes
* Reverse fix
* Reduction shape calc fixes
* SameDiff.generateOutputVariable: don't use shape function to determine number of outputs
* Range fix
* - NDArray constructor updated for scalars/empty arrays
- few tests fixed
* More fixes
* Empty creator fixes
* concat fix
* concat fix
* TF import tests: allow 'both all NaN' and 'both all inf' to pass
* Slice, zero fraction, and reshape fixes
* transpose, gather
* Zero fraction
* scalar cast fix
* Empty reduction axis support
* few more tests fixed
* Fixed input checks for concat op to conform with TF, plus tests.
* few tests fixed
* matmul scalar shape fix
* Fixed check for data type and scalarity with concat to allow non-empty scalars with vector concats.
* broadcast bool fix
* few more tests
* few more tests
* correct evalReduceShapeInfoEmpty
* argmax/argmin + tests
* one more empty edge case + one more test
* argmax/argmin/realdiv_bp tweaks
* empty reshape test + fix
* Helper fixes
* Small fixes
* Gather test fix
* Gather test fix
* Small fixes
* reduce scalar zero values
* scalar mean workaround
* Remove debug code
* along dim mean workaround
* one more test
* - equalsTo() tweak for empty arrays
- one more test
* broadcast tweaks
* [WIP] Fixing outstanding issues for NLP (#9)
* Avoid using uninitialized objects
* Test fixed.
* Redundant method avoided for models like FastText
* KMeans++ implementation
* KMeans++ implementation
* Disable parallel execution
* KMeans++
* Tests
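Background for the KMeans++ commits: the seeding rule draws each new centre with probability proportional to its squared distance from the nearest centre already chosen. An illustrative plain-Java sketch of that rule (not the library's implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class KMeansPlusPlusSeed {
    /** Pick k initial centres from points using the k-means++ rule. */
    static List<double[]> seed(double[][] points, int k, Random rng) {
        List<double[]> centres = new ArrayList<>();
        centres.add(points[rng.nextInt(points.length)]); // first centre: uniform
        while (centres.size() < k) {
            double[] d2 = new double[points.length];
            double sum = 0;
            for (int i = 0; i < points.length; i++) {
                double best = Double.MAX_VALUE;
                for (double[] c : centres) { // squared distance to nearest centre
                    double d = 0;
                    for (int j = 0; j < c.length; j++) {
                        double diff = points[i][j] - c[j];
                        d += diff * diff;
                    }
                    best = Math.min(best, d);
                }
                d2[i] = best;
                sum += best;
            }
            double r = rng.nextDouble() * sum; // sample index proportional to d2
            int idx = 0;
            double acc = d2[0];
            while (acc < r && idx < points.length - 1) {
                acc += d2[++idx];
            }
            centres.add(points[idx]);
        }
        return centres;
    }
}
```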
* Dev branch merge (#16)
* SameDiff: convertDataType and gradient check util improvements (#12)
* GradCheck util improvements
* StopGradient constructor + test
* SameDiff: Add datatype conversion
* Javadoc and add DataType.isNumerical()
* Small fix
* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)
* TFGraphTestAllHelper: check intermediates in execution order
* Add missing debug listener
* [WIP] lstmBlock fix + other changes (#13)
- fixes lstmBlock issue
- changes NDArray methods reshape(), permute(), transpose() to return an instance instead of a pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite
* Small test fix
* CheckNumerics op wrapper
* Fix some issues on master (#17)
* Fix DataVec test issue
* Fix issue with dl4j SameDiff output layer
* Dtype fix for lambda layers
* #7912 BertIterator dtype fix (use float32 not global default)
* [WIP] Next set of CUDA stuff (#7)
New CUDA implementations and improvements
* bad file
* Dev branch master merge (#23)
* SameDiff: convertDataType and gradient check util improvements (#12)
* GradCheck util improvements
* StopGradient constructor + test
* SameDiff: Add datatype conversion
* Javadoc and add DataType.isNumerical()
* Small fix
* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)
* TFGraphTestAllHelper: check intermediates in execution order
* Add missing debug listener
* [WIP] lstmBlock fix + other changes (#13)
- fixes lstmBlock issue
- changes NDArray methods reshape(), permute(), transpose() to return an instance instead of a pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite
* Small test fix
* CheckNumerics op wrapper
* Compatibility of deserialization (#18)
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* SameDiff: add activation gradient checking support for debugging (#19)
* SameDiff gradient checker: first pass on activation gradient checks
* Fixes + tests for activation gradient checking
* Javadoc
* [WIP] Some nd4j data type corrections (#20)
* Adjust data type
* Set correct Data type.
* Size of proper data type.
* fix averaged cpu load (#22)
* SameDiff ops, TF import and fixes (#24)
* CheckNumerics tests + fixes + misc fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fake quant
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* FakeQuantWithMinMaxArgs
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* CheckNumerics fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix libnd4j ALL_INTS and ALL_FLOATS declaration (uint and bfloat types)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Javadoc
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Exception tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix for out of scope stack allocated var use
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Ignores
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Ignore for known failing test (already logged issue)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Merge upstream to fork (#25)
* Add thousand-separator commas to TotalParams (#7915)
* Add thousand-separator commas to TotalParams
The number of parameters can be quite large; thousand-separator commas in the TotalParams column and the bottom totals make the summary printout much easier to read.
* Add thousand-separator commas to MultiLayerNetwork
Corresponding change to MultiLayerNetwork
Signed-off-by: Jxtps Jxtps <jxtps435@gmail.com>
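The mechanism behind this change is the standard %,d format conversion, which inserts locale-aware grouping separators. A tiny illustrative example (the parameter count is made up):

```java
public class ParamFormat {
    public static void main(String[] args) {
        long totalParams = 62_378_344L; // illustrative value
        // "%,d" groups digits: prints 62,378,344 under an English locale
        System.out.println(String.format("%,d", totalParams));
    }
}
```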
* Update contributing and issue/PR templates (#7934)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix link to AdaDelta paper (#7942)
Fix link to AdaDelta paper hosted on matthewzeiler.com
Signed-off-by: Jxtps
* Fixes, and ignores for known/logged failing issues (#7943)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* SameDiff + DL4J/SameDiff: Multiple fixes (#28)
* #7919 HDF5 attribute buffer length fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7909 Arbiter constructor exception UX improvements
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7925 RNN output layer length checks
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7939 Add listener for validating inputs are not incorrectly modified
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7939 Integrate NonInplaceValidationListener into tests
* #7844 DL4J SameDiff fixes for variable minibatch size
* DL4J SameDiff fixes - ensure gradient for input placeholder is available
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tweaks to ExternalErrorsFunction - use placeholders, make more robust
* Another fix
* More fixes
* More SameDiff/DL4J fixes
* Scope out scalar array creation in BaseScalarOp
* Remove debug code
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* [WIP] Final dev branch merge (#29)
* SameDiff: convertDataType and gradient check util improvements (#12)
* GradCheck util improvements
* StopGradient constructor + test
* SameDiff: Add datatype conversion
* Javadoc and add DataType.isNumerical()
* Small fix
* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)
* TFGraphTestAllHelper: check intermediates in execution order
* Add missing debug listener
* [WIP] lstmBlock fix + other changes (#13)
- fixes lstmBlock issue
- changes NDArray methods reshape(), permute(), transpose() to return an instance instead of a pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite
* Small test fix
* CheckNumerics op wrapper
* Compatibility of deserialization (#18)
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* SameDiff: add activation gradient checking support for debugging (#19)
* SameDiff gradient checker: first pass on activation gradient checks
* Fixes + tests for activation gradient checking
* Javadoc
* [WIP] Some nd4j data type corrections (#20)
* Adjust data type
* Set correct Data type.
* Size of proper data type.
* fix averaged cpu load (#22)
* [WIP] Multiple dataset iterators (#27)
* Splitting dataset into an arbitrary number of parts
* Fixes
* Multiple split of iterator
* Test
* Test
* Some fixes
* signature change
* one more tweak
Signed-off-by: raver119 <raver119@gmail.com>
* one more test for sequential use of DataSetIteratorSplitter
Signed-off-by: raver119 <raver119@gmail.com>
* Fixes
* Fixes
* one more test for Alexander
Signed-off-by: raver119 <raver119@gmail.com>
* Some fixes
* Some fixes
* one more test for Alexander
Signed-off-by: raver119 <raver119@gmail.com>
* minor test fix
Signed-off-by: raver119 <raver119@gmail.com>
* Some fixes
* Some fixes
* couple of assertions tweaked
Signed-off-by: raver119 <raver119@gmail.com>
* MDS splitter test :/
Signed-off-by: raver119 <raver119@gmail.com>
* Minor refactoring
* Multi dataset
* Some fixes
* More tests
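A usage sketch for the iterator splitting introduced above, assuming the DataSetIteratorSplitter(iterator, totalBatches, ratio) constructor and the train/test accessors shown; treat the exact signatures as unverified:

```java
import java.util.ArrayList;
import java.util.List;
import org.deeplearning4j.datasets.iterator.DataSetIteratorSplitter;
import org.deeplearning4j.datasets.iterator.impl.ListDataSetIterator;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
import org.nd4j.linalg.factory.Nd4j;

public class SplitterExample {
    public static void main(String[] args) {
        List<DataSet> data = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            data.add(new DataSet(Nd4j.rand(1, 4), Nd4j.rand(1, 2)));
        }
        DataSetIterator base = new ListDataSetIterator<>(data, 1); // 100 one-example batches
        DataSetIteratorSplitter splitter = new DataSetIteratorSplitter(base, 100, 0.8);
        DataSetIterator train = splitter.getTrainIterator(); // ~first 80 batches
        DataSetIterator test = splitter.getTestIterator();   // ~last 20 batches
    }
}
```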
* Small number of test fixes/improvements (failures on CI) (#31)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* [WIP] More CUDA stuff (#26)
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* LRN BP CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* less memory
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed bug with crop_and_resize op helper.
* get rid of unnecessary index-calculation function
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed sort with nth_element cuda-based helper.
* Refactored nth_element.
* Refactored nth_element op and tests.
* Modified usage of dim array with sortTad routine.
* Refactored main routine of helper for non_max_image_suppression op.
* non_max_image_suppression op helper with cuda kernel implementation. Initial revision.
* fix vol2col cuda kernel
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* topK concept
Signed-off-by: raver119 <raver119@gmail.com>
* unsorted topK with scanWidth of 1
Signed-off-by: raver119 <raver119@gmail.com>
* correct vol2col tests
* sorted/unsorted topK
Signed-off-by: raver119 <raver119@gmail.com>
* implement and fix col2im/col2vol
* Corrected input/output usage flags for reverse op.
* dup is const now
Signed-off-by: raver119 <raver119@gmail.com>
* percentile op
Signed-off-by: raver119 <raver119@gmail.com>
* group tests for maxpool2d
Signed-off-by: Yurii <yurii@skymind.io>
* special test for george
Signed-off-by: raver119 <raver119@gmail.com>
* less threads for sortTad
Signed-off-by: raver119 <raver119@gmail.com>
* provide conv2d for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* remove author in sort tad kernel code
Signed-off-by: Yurii <yurii@skymind.io>
* provide depthwise_conv2d for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* - max_pooling_with_argmax
- null check for special use
Signed-off-by: raver119 <raver119@gmail.com>
* dts cuda
Signed-off-by: raver119 <raver119@gmail.com>
* provide sconv2d for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* std cuda
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored non_max_suppression op to conform to the TF implementation.
* Improved suppression helper.
* provide pooling3d for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* minor lstm rearrangements
Signed-off-by: raver119 <raver119@gmail.com>
* more of minor lstm rearrangements
Signed-off-by: raver119 <raver119@gmail.com>
* (bi)dynamic_rnn
Signed-off-by: raver119 <raver119@gmail.com>
* templates init order
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored non_max_suppression op.
* Added cuda kernel for non_max_suppression.
* CPU sort by key/value
Signed-off-by: raver119 <raver119@gmail.com>
* CPU sort TAD by key/value
Signed-off-by: raver119 <raver119@gmail.com>
* CPU sort TAD by key/value tests
Signed-off-by: raver119 <raver119@gmail.com>
* Eliminate compiler error with cuda implementation.
* - repaired gradCheck in cuda
- provide conv2d_bp for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* missed signature
Signed-off-by: raver119 <raver119@gmail.com>
* provide depthwise_conv2d_bp for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of lup helper with cuda kernel. Initial commit.
* further work on backprops for convolutions
Signed-off-by: Yurii <yurii@skymind.io>
* CUDA linear sort by key/val
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA tad sort by key/val
Signed-off-by: raver119 <raver119@gmail.com>
* start providing backprop for pooling2d/3d
Signed-off-by: Yurii <yurii@skymind.io>
* Added atomicAdd for bool datatype.
* dynamic partition concept
Signed-off-by: raver119 <raver119@gmail.com>
* dynamic partition concept
Signed-off-by: raver119 <raver119@gmail.com>
* dynamic partition scalar CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* important comment
Signed-off-by: raver119 <raver119@gmail.com>
* fix pooling2d/3d backprop helpers
Signed-off-by: Yurii <yurii@skymind.io>
* Added non-linear test with dynamic_partition.
* Improved test for dynamic_partition.
* dynamic_partition TAD concept
Signed-off-by: raver119 <raver119@gmail.com>
* - dynamic_partition TAD CUDA impl
- dynamic_partition TAD CPU fix
Signed-off-by: raver119 <raver119@gmail.com>
* - rewrite cpu code for upsampling2d/3d
- write cuda code for upsampling2d/3d
Signed-off-by: Yurii <yurii@skymind.io>
* dynamic_stitch CUDA vector case
Signed-off-by: raver119 <raver119@gmail.com>
* dynamic_stitch CUDA TAD case concept
Signed-off-by: raver119 <raver119@gmail.com>
* dynamic_stitch CUDA TAD case impl
Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for dynamic_stitch 3D-4D cases.
* minor tests tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed type check for dynamic stitch.
* min/max bp
Signed-off-by: raver119 <raver119@gmail.com>
* rewrite code for upsampling2d/3d cpu
Signed-off-by: Yurii <yurii@skymind.io>
* reduce min/max/norm_max bp
Signed-off-by: raver119 <raver119@gmail.com>
* lup implementation. Additional enhancements.
* provide code for upsampling2d/3d backprop
Signed-off-by: Yurii <yurii@skymind.io>
* weightedCrossEntropyWithLogits
Signed-off-by: raver119 <raver119@gmail.com>
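For reference, the loss this op computes (following the TensorFlow op of the same name) is (1 - z) * x + l * log(1 + exp(-x)) with l = 1 + (w - 1) * z, for targets z, logits x, and positive-class weight w. A numerically stable scalar sketch:

```java
public class WeightedCrossEntropy {
    // log(1 + exp(-x)) rewritten as log1p(exp(-|x|)) + max(-x, 0) to avoid overflow
    static double loss(double z, double x, double w) {
        double l = 1 + (w - 1) * z;
        return (1 - z) * x + l * (Math.log1p(Math.exp(-Math.abs(x))) + Math.max(-x, 0));
    }

    public static void main(String[] args) {
        System.out.println(loss(1.0, 2.5, 3.0)); // positive target, logit 2.5, pos weight 3
    }
}
```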
* Fixed template math atomicMul for 64bit ints.
* Refactored dynamic_partition_bp op.
* inverseBroadcast fix
Signed-off-by: raver119 <raver119@gmail.com>
* DynamicPartitionBP test datatype fixed.
* - nd4j_atomicMul Windows fix
- cpu/NDArrayLambda.hpp excluded from CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* CapsNet test runtime improvements
* Slow test speedups
* Next round of test speed improvements
* More test improvements
* Improve test speed
* Next round of test speedups
* Another round
* More test speedups
* Another round
* Another round of test speedups
* Another round of speedups...
* CuDNN test speedups + more tests extending BaseDL4JTest
* Minor fix + more BaseDL4JTest use in other modules