Commit Graph

133 Commits (99c727f15b2c592cf9ab7fc3755c1a760c0ba318)

Author SHA1 Message Date
Alex Black 99c727f15b
Add javacpp classifier dependency to nd4j-native and nd4j-cuda (#366)
Signed-off-by: Alex Black <blacka101@gmail.com>
2020-04-09 10:00:27 +10:00
Andrii T d86dd5b131
DL4J and SameDiff integration tests + LSTMLayer java op class (#353)
* init in this branch

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* LeNet MNIST workflow

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* small fix for calculations

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* for Alex to check placeholder null pointer issue

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* CNN3D workflow

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* state for launching on dxg to regenerate dl4j examples

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* SD RNN test case workflow

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* small fixes

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* checkpoint at lstmBlock: Input array 1 (x) rank must be got input with rank 2 issue

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Fix LSTMLayer inputs order

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* lstm mismatch with c++ op issue

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* LSTMLayer config draft

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* LSTMLayer config draft v2

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* have doubts I had to do this

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* NDRNN generated by codegen

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* LSTMLayerTestCases draft

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* minor fixes again

* added LSTMLayer testcases to nd4j-tests + set Preconditions in LSTMLayer constructors

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* added lost SDCNN test cases

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* overrode getNumOutputs from DynamicCustomOp in LSTMLayer and reorganized LSTMLayerOutputs according to the cpp op

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* finished with LSTMLayerOutputs

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Fix MKLDNN platform checks (i.e., when MKLDNN can be used vs. not)

Signed-off-by: Alex Black <blacka101@gmail.com>

* Fix LSTMLayerWeights input order

Signed-off-by: Alex Black <blacka101@gmail.com>

* More fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* minor fixes

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* fixed LSTMLayer testcases

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* finished SameDiffRNNTestCase

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* finished all testcases + minor fixes

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Multiple generation-related fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Fix multiple issues

Signed-off-by: Alex Black <blacka101@gmail.com>

* More fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* LSTM fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Regenerate ND4J namespaces and fix multiple issues

Signed-off-by: Alex Black <blacka101@gmail.com>

* changed SameDiffRNNTestCase

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Small fix

Signed-off-by: Alex Black <blacka101@gmail.com>

* added Nd4j.getRandom().setSeed(12345) where needed (see the sketch after this entry)

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* #8828 Fix ND4J profiler NaN/Inf checks when using OpContext

Signed-off-by: Alex Black <blacka101@gmail.com>

* #8828 Fix ND4J profiler NaN/Inf checks when using OpContext

Signed-off-by: Alex Black <blacka101@gmail.com>

* Tweak to weight init for SameDiff CNN test case

Signed-off-by: Alex Black <blacka101@gmail.com>

* Tweaks for test cases

Signed-off-by: Alex Black <blacka101@gmail.com>

* Ignore failing tests until fixed

Signed-off-by: Alex Black <blacka101@gmail.com>

* Fix

Signed-off-by: Alex Black <blacka101@gmail.com>

Co-authored-by: Alex Black <blacka101@gmail.com>
2020-04-09 00:20:48 +10:00
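
The entry above adds Nd4j.getRandom().setSeed(12345) calls to make randomly generated test data deterministic. A minimal sketch of that pattern (class name and shapes are illustrative, not taken from the commit):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class SeedSketch {
    public static void main(String[] args) {
        // Fixing the ND4J RNG seed makes randomly generated arrays reproducible across runs
        Nd4j.getRandom().setSeed(12345);
        INDArray a = Nd4j.rand(3, 4);

        // Re-seeding with the same value reproduces the same values
        Nd4j.getRandom().setSeed(12345);
        INDArray b = Nd4j.rand(3, 4);

        System.out.println(a.equals(b)); // prints true
    }
}
```
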
raver119 04b2b4f9b6
Few fixes (#361)
* INDArray.close() fix for CPU

Signed-off-by: raver119 <raver119@gmail.com>

* - BroadcastableBoolOp introduced
- ConfusionMatrix now supports explicit DataType argument

Signed-off-by: raver119 <raver119@gmail.com>

* confusion_matrix: dtype is still optional

Signed-off-by: raver119 <raver119@gmail.com>

* disable bert tests in debug builds

Signed-off-by: raver119 <raver119@gmail.com>

* Affinity fix

Signed-off-by: raver119 <raver119@gmail.com>

* minor workspace tweak to allow close() on scoped out borrowed workspace

Signed-off-by: raver119 <raver119@gmail.com>
2020-04-06 21:01:59 +03:00
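
The INDArray.close() fix mentioned in the entry above concerns ND4J's closeable arrays, which release their off-heap buffers eagerly instead of waiting for the garbage collector. A hedged usage sketch, not the code touched by the commit:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class CloseSketch {
    public static void main(String[] args) {
        INDArray temp = Nd4j.rand(1000, 1000);
        double sum = temp.sumNumber().doubleValue();

        // Eagerly release the array's native (off-heap) buffers once it is no longer needed;
        // using the array after close() is invalid.
        temp.close();

        System.out.println(sum);
    }
}
```
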
mheck136 364f9a660c
Set source and target version of nd4j-native to java 7 (#8808)
Signed-off-by: Martin Heck <mheck136@gmail.com>
2020-03-26 11:33:56 +11:00
Serhii Shepel 3cbba49518
Bugfix failing builds (#341)
* Fix interpreter for libnd4j tests and drop test script

* Remove mingw when specifying javacpp.platform, add new profile that triggers when javacpp.platform is windows-x86_64

* Update android 32 bit toolchain for x86

* Try triple instead of -target

* Change to -target

* Update 32 bit arm

* Change android bin path

* Update arm 32 bit build again

Co-authored-by: Adam Gibson <1144306+agibsonccc@users.noreply.github.com>
2020-03-24 12:55:47 +11:00
Oleh 69c92ca5ae
Learning updaters for gradient (#335)
* libnd4j raw implementation of sgd updater (see the sketch after this entry)

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some corrections and simple test added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some corrections after discussion

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j integrate applyScalar

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j raw implementation of rmsPropUpdater on cpu

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fix operations declaration

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j rmsPropUpdater added, test cases for sgd, etc

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed several typos

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some fixes and improvements for rmsPropUpdater based on Java tests

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed cuda implementation, updated tests and corrected behavior according to java tests

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j adaGrad updater added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j one minor fix for ada grad

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j several more fixes for ada_grad

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j nesterovs updater added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed nesterovs updater behavior, several typos and rename file

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j one minor typo

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j ada max updater added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed several typos in adaMax updater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed several typos in adaMaxUpdater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j several fixes for adaMax, added Adam Updater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j adaDeltaUpdater added, minor fixes for adamUpdater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j several fixes for adaDeltaUpdater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j nadamUpdater added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j one more correction for nadam updater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j several fixes for nadam updater and added amsGradUpdater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j several typos fixed in amsGradUpdater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some corrections and added f order support for rmsProp updater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added support of f order for all updaters and modify tests for testing in place

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed issues for updates when in-place mode is not used, added tests for f order

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added input shape checks

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some corrections for different cases handling

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some code clean up and optimize per request

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j updaters refactoring after review

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* SgdUpdater wrapper

Signed-off-by: raver119 <raver119@gmail.com>

* first test

Signed-off-by: raver119 <raver119@gmail.com>

* RmsPropUpdater added

Signed-off-by: raver119 <raver119@gmail.com>

* NadamUpdater + NesterovsUpdater

Signed-off-by: raver119 <raver119@gmail.com>

* AmsGradUpdater

Signed-off-by: raver119 <raver119@gmail.com>

* AdamUpdater added

Signed-off-by: raver119 <raver119@gmail.com>

* AdaGradUpdater + AdaDeltaUpdater + AdaMaxUpdater

Signed-off-by: raver119 <raver119@gmail.com>

* AdaGradUpdater test added

Signed-off-by: raver119 <raver119@gmail.com>

* libnd4j removed input parameter parsing through NDArray, split the helper implementations into separate files, some renames, etc

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j next step to split operations implementation into separate files

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j merge master and minor corrections

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j revert some changes of split implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j forgot to add header file

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* public default constructors

Signed-off-by: raver119 <raver119@gmail.com>

* ImportClassMapping updated

Signed-off-by: raver119 <raver119@gmail.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-03-23 07:28:31 +03:00
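
The updater ops listed above (sgd, rmsProp, adaGrad, nesterovs, adam, adaMax, adaDelta, nadam, amsGrad) all apply an element-wise parameter update. As a reference point, a plain SGD step is sketched below in Java; the method name is illustrative and this is not the libnd4j implementation, which additionally supports in-place updates and f-order arrays as described in the entry:

```java
import org.nd4j.linalg.api.ndarray.INDArray;

public class SgdSketch {
    // theta <- theta - learningRate * gradient, applied element-wise
    static INDArray sgdStep(INDArray params, INDArray gradient, double learningRate, boolean inPlace) {
        INDArray update = gradient.mul(learningRate);
        return inPlace
                ? params.subi(update)  // modify the parameter array in place
                : params.sub(update);  // or return a new array, leaving params untouched
    }
}
```
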
Adam Gibson 015147b713
Fix openblas linking issues (#340)
* Fix cmake detection in msys

* Revert windows change

* Update to unix line endings

* Fix linking issues
2020-03-21 17:30:26 +09:00
Alex Black f79207033b
SameDiff multi-threaded inference (#263)
* #8682 Don't log openmp BLAS threads for CUDA

Signed-off-by: Alex Black <blacka101@gmail.com>

* #8654 Add SameDiff multi-threaded tests

Signed-off-by: Alex Black <blacka101@gmail.com>

* Switching to op context for SameDiff exec

Signed-off-by: Alex Black <blacka101@gmail.com>

* Next steps

Signed-off-by: Alex Black <blacka101@gmail.com>

* Most tests back to passing

Signed-off-by: Alex Black <blacka101@gmail.com>

* Fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Better tests, test refactoring

Signed-off-by: Alex Black <blacka101@gmail.com>

* Small tweak

Signed-off-by: Alex Black <blacka101@gmail.com>

* Code duplication reduction

Signed-off-by: Alex Black <blacka101@gmail.com>

* More code deduplication

Signed-off-by: Alex Black <blacka101@gmail.com>

* CUDA fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* More CUDA fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* More fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Small fix

Signed-off-by: Alex Black <blacka101@gmail.com>

* ND4S small fixes

Signed-off-by: Alex Black <blacka101@gmail.com>
2020-03-20 21:24:39 +11:00
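
The entry above is about running inference on a single SameDiff instance from several threads at once (per-call OpContext state rather than shared op state). A rough usage sketch under that reading; the method names reflect the public SameDiff API of recent releases and are written from memory, so treat them as approximate:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

import java.util.Arrays;
import java.util.Collections;
import java.util.Map;

public class MultiThreadedInferenceSketch {
    public static void main(String[] args) throws InterruptedException {
        // Build a tiny graph once...
        SameDiff sd = SameDiff.create();
        SDVariable in = sd.placeHolder("in", DataType.FLOAT, -1, 4);
        SDVariable w = sd.var("w", Nd4j.rand(DataType.FLOAT, 4, 3));
        SDVariable out = in.mmul(w);

        // ...then run inference on the same instance from two threads concurrently
        Runnable inference = () -> {
            INDArray features = Nd4j.rand(DataType.FLOAT, 8, 4);
            Map<String, INDArray> result =
                    sd.output(Collections.singletonMap("in", features), out.name());
            System.out.println(Arrays.toString(result.get(out.name()).shape()));
        };

        Thread t1 = new Thread(inference);
        Thread t2 = new Thread(inference);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```
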
raver119 7a2ac800dd
Nullify (#304)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* hamming distance nullification

Signed-off-by: raver119 <raver119@gmail.com>

* Add output array value assignment for testing/debugging

Signed-off-by: Alex Black <blacka101@gmail.com>

* don't assign empty arrays

Signed-off-by: raver119 <raver119@gmail.com>

* conv2d/conv3d/depthwise2d nullified

Signed-off-by: raver119 <raver119@gmail.com>

* conv2d/conv3d/depthwise2d nullified

Signed-off-by: raver119 <raver119@gmail.com>

* conv2d/conv3d/depthwise2d nullified

Signed-off-by: raver119 <raver119@gmail.com>

* few more fixes

Signed-off-by: raver119 <raver119@gmail.com>

* im2col

Signed-off-by: raver119 <raver119@gmail.com>

* pooling?

Signed-off-by: raver119 <raver119@gmail.com>

* more nullified

Signed-off-by: raver119 <raver119@gmail.com>

* ismax nullified

Signed-off-by: raver119 <raver119@gmail.com>

* rollback ismax nullification

Signed-off-by: raver119 <raver119@gmail.com>

* synchronized cublas handle use on per-device basis

Signed-off-by: raver119 <raver119@gmail.com>

* hiding method from jcpp

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of test assigns in DeclarableOp

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of assigns

Signed-off-by: raver119 <raver119@gmail.com>

* proper deviceId is back

Signed-off-by: raver119 <raver119@gmail.com>

* include fixed

Signed-off-by: raver119 <raver119@gmail.com>

Co-authored-by: Alex Black <blacka101@gmail.com>
2020-03-20 08:49:28 +03:00
Adam Gibson 30a28fae45
Windows fix (#333)
* Fix cmake detection in msys

* Revert windows change

* Update to unix line endings
2020-03-20 12:14:03 +09:00
Adam Gibson 0cf4a45573
Fixes #8763 (#310)
* Fix cmake detection in msys

* Fix toolchain file on windows

* Make android 64 bit work

* Fix libnd4j build script on msys

* Update build script for windows/linux

* Encoding issue for ci

* Update pom.xml

* Update pom.xml

* Update pom.xml

* Remove mingw

* Ensure android x86 builds are in line with arm builds

* Update toolchains and env variables for x86

* Move profile for build program up to parent

* Fix blas vendor and add comment

* Update cuda presets version

* Set default value and move properties back to child pom

* Change program from hard coded to use the script as the program

* Update pom.xml

* Update pom.xml

* Static lib fix

* Update static lib output

* Get rid of old comments

* Update static for building
2020-03-19 14:53:21 +09:00
Shams Ul Azeem 9c77bfa85f
Support for more numpy datatypes (#241)
* Adding more datatypes support in datavec-python

* Using numpy C API for creating numpy arrays

* Adding parameterized tests

* Adding support for BFLOAT16 (by converting it to FLOAT)

* Cleanup

* Using casting instead of creating an array

* Giving out a warning while casting array from BFLOAT16 to FLOAT

* Add syncToPrimary and syncToSpecial methods to BaseDataBuffer

Signed-off-by: Alex Black <blacka101@gmail.com>

* Python exec: sync to host before passing pointers

Signed-off-by: Alex Black <blacka101@gmail.com>

* Added copyright header

* use np api (#267)

* python exec / numpy - check object type before cast (#268)

* use np api

* verify object before cast

* fix cong

* cuda fix

* inplace test + tiny fix

* more test

* fix double alloc

* rem tags

* fix cuda check

* Fix implicit CUDA dependency in datavec-python tests; remove new method, add test

Signed-off-by: Alex Black <blacka101@gmail.com>

Co-authored-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Fariz Rahman <farizrahman4u@gmail.com>
2020-03-19 00:48:37 +11:00
raver119 bac130bd78
CUDA host pointer fix (#322)
* CUDA fix: host pointer propagation

Signed-off-by: raver119 <raver119@gmail.com>

* disable logging

Signed-off-by: raver119 <raver119@gmail.com>
2020-03-16 12:07:34 +03:00
Yurii Shyrma e42b4e96c3
correct output empty shapes deducing in split op (#311)
* - correct output empty shapes deducing in split op

Signed-off-by: Yurii <iuriish@yahoo.com>

* java test fixed

Signed-off-by: raver119 <raver119@gmail.com>

* - split broadcast::exec function into individual functions corresponding to switch arg

Signed-off-by: Yurii <iuriish@yahoo.com>

* - split broadcast::exec _int and _bool functions into individual functions corresponding to switch arg

Signed-off-by: Yurii <iuriish@yahoo.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-03-12 18:25:54 +03:00
Alex Black a80fb99a5f
DL4J integrations tests updates + add SameDiff support (#298)
* Revive and start updating DL4J integration tests

Signed-off-by: Alex Black <blacka101@gmail.com>

* Add SameDiff support - first pass

Signed-off-by: Alex Black <blacka101@gmail.com>

* SameDiff test case generation

Signed-off-by: Alex Black <blacka101@gmail.com>

* SameDiff integration tests polishing

Signed-off-by: Alex Black <blacka101@gmail.com>

* More SameDiff integration test fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Final polish

Signed-off-by: Alex Black <blacka101@gmail.com>

* Small test tweak

Signed-off-by: Alex Black <blacka101@gmail.com>
2020-03-07 22:44:41 +11:00
Samuel Audet 1c89512ec0
Add Maven profiles for ARM builds to pom.xml files (#265)
* Add Maven profiles for ARM builds to pom.xml files

Signed-off-by: Samuel Audet <samuel.audet@gmail.com>

* Remove mkl from dependencies when running on non intel/amd platforms

* Downgrade openblas for now

* Change back to 0.3.8

Co-authored-by: Adam Gibson <1144306+agibsonccc@users.noreply.github.com>
2020-03-04 11:11:01 +03:00
raver119 63fa3c2ef3
libnd4j polishing (#273)
* initial set of include changes

Signed-off-by: raver119 <raver119@gmail.com>

* one more tweak

Signed-off-by: raver119 <raver119@gmail.com>

* few more rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* few more rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* few more rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* cuda includes rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* java update

Signed-off-by: raver119 <raver119@gmail.com>

* - namespace changed to sd
- few CMake variables renamed with SD_ prefix

Signed-off-by: raver119 <raver119@gmail.com>

* java update

Signed-off-by: raver119 <raver119@gmail.com>

* LoopKind minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* few more changes

Signed-off-by: raver119 <raver119@gmail.com>

* few more changes

Signed-off-by: raver119 <raver119@gmail.com>

* few more changes

Signed-off-by: raver119 <raver119@gmail.com>

* sanitizer is optional now

Signed-off-by: raver119 <raver119@gmail.com>

* dev tests updated

Signed-off-by: raver119 <raver119@gmail.com>

* few more changes

Signed-off-by: raver119 <raver119@gmail.com>

* last update

Signed-off-by: raver119 <raver119@gmail.com>

* java update

Signed-off-by: raver119 <raver119@gmail.com>
2020-03-02 12:49:41 +03:00
Yurii Shyrma f7a9190407
profiling of concat op (both cuda and cpu) (#151)
* - profiling of concat op (both cuda and cpu)

Signed-off-by: Yurii <iuriish@yahoo.com>

* better comparison for large concat

Signed-off-by: raver119 <raver119@gmail.com>

* - further improving of concat op

Signed-off-by: Yurii <iuriish@yahoo.com>

* some logging

Signed-off-by: raver119 <raver119@gmail.com>

* - add possibility to verify presence of trailing unities in shape and set strides/ews correspondingly
- restrict second simple case in concat op to c order only

Signed-off-by: Yurii <iuriish@yahoo.com>

* - move concat op to specials_single.cpp file

Signed-off-by: Yurii <iuriish@yahoo.com>

* - get rid of second concat op declaration in transforms.cpp file

Signed-off-by: Yurii <iuriish@yahoo.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-02-20 21:19:01 +03:00
raver119 9e3c1b02b1
Perf improvements (#242)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* better ExpandDims impl

Signed-off-by: raver119 <raver119@gmail.com>

* better Squeeze impl

Signed-off-by: raver119 <raver119@gmail.com>

* better Softmax impl

Signed-off-by: raver119 <raver119@gmail.com>

* one test disabled

Signed-off-by: raver119 <raver119@gmail.com>

* more accurate impl

Signed-off-by: raver119 <raver119@gmail.com>

* - GraphProfiler now prints full shapeInfo instead of shape
- softmax typo fix

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-14 16:20:31 +03:00
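
One of the bullets above refers to a more accurate softmax implementation. For context, the numerically stable formulation subtracts the per-row maximum before exponentiating, which leaves the result unchanged while avoiding overflow. A generic sketch of that formulation, not the libnd4j code:

```java
public class SoftmaxSketch {
    // softmax(x)_i = exp(x_i - max(x)) / sum_j exp(x_j - max(x))
    static double[] softmax(double[] logits) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : logits) max = Math.max(max, v);

        double[] out = new double[logits.length];
        double sum = 0.0;
        for (int i = 0; i < logits.length; i++) {
            out[i] = Math.exp(logits[i] - max); // shifted exponent never overflows
            sum += out[i];
        }
        for (int i = 0; i < out.length; i++) out[i] /= sum;
        return out;
    }
}
```
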
raver119 1dfac9a736
DataBuffer.write() tweak (#221)
* special workaround methods for DataBuffer.write

Signed-off-by: raver119 <raver119@gmail.com>

* one test removed

Signed-off-by: raver119 <raver119@gmail.com>

* more of unsynced

Signed-off-by: raver119 <raver119@gmail.com>

* missing asLong for BaseCudaDataBuffer

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-07 18:16:11 +03:00
raver119 a0da5a9e47
Events removed from Java (#219)
* replace mutex with lock_guards

Signed-off-by: raver119 <raver119@gmail.com>

* Events ditched from Java CUDA logic

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-07 12:34:55 +03:00
Alex Black 569a46f87d
Fixes (#213)
* Increase timeouts for 2 tests occasionally failing on CI

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Explicitly set character encoding via argline for maven surefire tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* CUDA gradient check timeout fix + simple rnn masking fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2020-02-05 17:07:36 +11:00
raver119 5d28e6143d
OpContext handling (#214)
* nano tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* OpContext tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* OpContext deallocators

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of few mkldnn safety checks

Signed-off-by: raver119 <raver119@gmail.com>

* databuffer setSpecial fix

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-05 07:27:24 +03:00
raver119 81efa5c3b6
[WIP] one small fix (#207)
* one small fix

Signed-off-by: raver119 <raver119@gmail.com>

* assert added

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-02 19:17:26 +03:00
Alex Black 0756e3fe70
Small fixes. (#206)
* Logging format tweaks for file logging

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Min abs error tweak for Util layer gradient checks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8648 Fix SameDiff NPE instead of error for missing placeholders

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Test runtime reduction

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2020-02-01 18:19:36 +11:00
raver119 5d98cfcf47
Configurable DataType for ops (#201)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* - one more test for OneHot with dtype
- one more signature in Nd4j

Signed-off-by: raver119 <raver119@gmail.com>

* ones_as/zeros_as now accept dtype

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* - more updates for configurable data types
- ones_as/zeros_as java side + tests

Signed-off-by: raver119 <raver119@gmail.com>

* few c++ tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* few more changes around DArgs

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-30 18:46:12 +03:00
raver119 ba961c7601
DataTypes & FlatBuffers (#197)
* flatbuffers version upgrade

Signed-off-by: raver119 <raver119@gmail.com>

* flatbuffers version upgrade java side

Signed-off-by: raver119 <raver119@gmail.com>

* flatbuffers dependency version upgrade java side

Signed-off-by: raver119 <raver119@gmail.com>

* MKLDNN version upgrade

Signed-off-by: raver119 <raver119@gmail.com>

* DArgs first pass

Signed-off-by: raver119 <raver119@gmail.com>

* signatures first pass

Signed-off-by: raver119 <raver119@gmail.com>

* signatures second pass

Signed-off-by: raver119 <raver119@gmail.com>

* signatures third pass

Signed-off-by: raver119 <raver119@gmail.com>

* signatures third pass

Signed-off-by: raver119 <raver119@gmail.com>

* signatures fourth pass

Signed-off-by: raver119 <raver119@gmail.com>

* signatures fifth pass

Signed-off-by: raver119 <raver119@gmail.com>

* flatbuffers UI version upgrade java side

Signed-off-by: raver119 <raver119@gmail.com>

* flatbuffers ui update

Signed-off-by: raver119 <raver119@gmail.com>

* flatbuffers downgrade

Signed-off-by: raver119 <raver119@gmail.com>

* flatbuffers downgrade java side

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-30 10:07:24 +03:00
raver119 9f719488b9
CUDA sync tweaks (#194)
* ThreadLocal cache for CudaContext

Signed-off-by: raver119 <raver119@gmail.com>

* temp commit

Signed-off-by: raver119 <raver119@gmail.com>

* remove unwanted synchronization

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-28 10:55:06 +03:00
raver119 7ef0ef907e
Packages fix (#193)
* packages fix

Signed-off-by: raver119 <raver119@gmail.com>

* few imports fixed

Signed-off-by: raver119 <raver119@gmail.com>

* few imports fixed

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-27 23:04:21 +03:00
raver119 531a72fabd
execution mode (#183)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* execution mode java side

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* move exec mode to ContextPrototype

Signed-off-by: raver119 <raver119@gmail.com>

* copyrights

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-27 10:00:07 +03:00
Alexander Stoyakin 4db28a9300 Cleanup of multiple projects (#175)
* Cleanup modules

* Moving subprojects to nd4j-api

* Project cleanup

* Dropped AWS sub-project

* dl4j-util moved to core

* dl4j-perf moved to core

* Tests coverage

* Revert "Moving subprojects to nd4j-api"

This reverts commit bc6eb573c6b60c407ade47172c5d204725077e6b.

* Moved nd4j-buffer and nd4j-context to nd4j-api

* Rolled back change

* Revert "Project cleanup"

This reverts commit 64ac7f369b2d968f7be437718034f093fc886ffc.

* Datavec cleaned up

* Revert "Moved nd4j-buffer and nd4j-context to nd4j-api"

This reverts commit 75f4e8da80d2551e44e1251dd6c5923289fff8e1.

# Conflicts:
#	nd4j/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/autodiff/opvalidation/ReductionBpOpValidation.java

* Resolve conflict

* Compilation fixed.

* nd4j-context and nd4j-buffer moved to nd4j-api

* Fixed TF mapping for mmul

* Fix for dl4j-cuda tests

Signed-off-by: Alex Black <blacka101@gmail.com>

* Move last few tests from deeplearning4j-nn to -core

Signed-off-by: Alex Black <blacka101@gmail.com>

* Remove incorrect TF import mapping for TensorMmul op

Signed-off-by: Alex Black <blacka101@gmail.com>

* Cleaned TF mapping

* Fix path for test results on windows

* Remove old dependency

Signed-off-by: Alex Black <blacka101@gmail.com>

* One more attempt to fix path for test results on windows

* fixup! One more attempt to fix path for test results on windows

* fixup! One more attempt to fix path for test results on windows

Co-authored-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Serhii Shepel <9946053+sshepel@users.noreply.github.com>
Co-authored-by: raver119 <raver119@gmail.com>
2020-01-24 22:35:00 +03:00
raver119 5d69069177
[WIP] Memory limits (#167)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* one more initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* additional initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* subsequent initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit testing

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit per device

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit per group

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit for cuda

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit for cuda + few missed lines

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit for cuda + missed includes

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit for cuda + one more missed include

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit shouldn't count host mem as dev0 in cuda

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit that tracks HOST group limits for CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit with some Environment changes

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit with more Environment changes

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit with maxMasterThreads fix

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit with maxMasterThreads fix

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit without maxMasterThreads exception

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit without Nd4jULong in Environment

Signed-off-by: raver119 <raver119@gmail.com>

* add sleep and more iterations for OOM cases

Signed-off-by: raver119 <raver119@gmail.com>

* limits propagation from java side

Signed-off-by: raver119 <raver119@gmail.com>

* - consume ErrorCode every time
- one test for memory limits

Signed-off-by: raver119 <raver119@gmail.com>

* unordered_map

Signed-off-by: raver119 <raver119@gmail.com>

* unordered_map

Signed-off-by: raver119 <raver119@gmail.com>

* unordered_map

Signed-off-by: raver119 <raver119@gmail.com>

* RSub op mapping fixed

Signed-off-by: raver119 <raver119@gmail.com>

* typo fixed

Signed-off-by: raver119 <raver119@gmail.com>

* one bad test fixed

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-24 10:11:09 +03:00
raver119 256c9d20b0
alloc check for RNG (#179)
* missing alloc validation in RandomGenerator for CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* set error message if rng alloc failed

Signed-off-by: raver119 <raver119@gmail.com>

* check for error code during RNG creation in java

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-23 09:51:02 +03:00
raver119 25db3a44f1
[WIP] few fixes for tests (#177)
* nd4j-aeron profiles

Signed-off-by: raver119 <raver119@gmail.com>

* nd4j-aeron profiles

Signed-off-by: raver119 <raver119@gmail.com>

* skip one long test

Signed-off-by: raver119 <raver119@gmail.com>

* skip one long test

Signed-off-by: raver119 <raver119@gmail.com>

* kryo profile

Signed-off-by: raver119 <raver119@gmail.com>

* few more profiles

Signed-off-by: raver119 <raver119@gmail.com>

* few more profiles

Signed-off-by: raver119 <raver119@gmail.com>

* few more profiles

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-22 16:12:30 +03:00
Alex Black a25bb6a11c
Unit/integration test split + test speedup (#166)
* Add maven profile + base tests methods for integration tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Switch from system property to environment variable; seems more reliable in intellij

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add nd4j-common-tests module, and common base test; cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Ensure all ND4J tests extend BaseND4JTest

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Test spam reduction, import fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add test logging to nd4j-aeron

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix unintended change

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Reduce sprint test log spam

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More test spam cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Significantly speed up TSNE tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* W2V iterator test unit/integration split

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More NLP test speedups

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Avoid debug/verbose mode leaking between tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* test tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Arbiter extends base DL4J test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Arbiter test speedup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* nlp-uima test speedup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More test speedups

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix ND4J base test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Few small ND4J test speed improvements

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J tests speedup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More tweaks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Even more test speedups

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More tweaks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Various test fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* More test fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Add ability to specify number of threads for C++ ops in BaseDL4JTest and BaseND4JTest

Signed-off-by: Alex Black <blacka101@gmail.com>

* nd4j-aeron test profile fix for CUDA

Signed-off-by: Alex Black <blacka101@gmail.com>
2020-01-22 22:27:01 +11:00
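
The split described above gates long-running integration tests so the default build runs only unit tests, with the switch read from an environment variable rather than a system property. A minimal JUnit 4 sketch of that idea; the variable and class names here are illustrative, not the actual DL4J/ND4J test infrastructure:

```java
import org.junit.Assume;
import org.junit.Before;

public abstract class IntegrationTestGateSketch {
    @Before
    public void skipUnlessIntegrationTestsEnabled() {
        // Run subclass tests only when e.g. INTEGRATION_TESTS=true is set in the environment;
        // otherwise JUnit marks them as skipped rather than failed.
        Assume.assumeTrue("Integration tests disabled",
                Boolean.parseBoolean(System.getenv("INTEGRATION_TESTS")));
    }
}
```
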
raver119 7783012f39
cuDNN integration (#150)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* one file

Signed-off-by: raver119 <raver119@gmail.com>

* few more includes

Signed-off-by: raver119 <raver119@gmail.com>

* m?

Signed-off-by: raver119 <raver119@gmail.com>

* const

Signed-off-by: raver119 <raver119@gmail.com>

* cudnn linkage in tests

Signed-off-by: raver119 <raver119@gmail.com>

* culibos

Signed-off-by: raver119 <raver119@gmail.com>

* static reminder

Signed-off-by: raver119 <raver119@gmail.com>

* platform engine tag

Signed-off-by: raver119 <raver119@gmail.com>

* HAVE_CUDNN moved to config.h.in

Signed-off-by: raver119 <raver119@gmail.com>

* include

Signed-off-by: raver119 <raver119@gmail.com>

* include

Signed-off-by: raver119 <raver119@gmail.com>

* skip cudnn handle creation if there's no cudnn

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* target device in context

Signed-off-by: raver119 <raver119@gmail.com>

* platform engines

Signed-off-by: raver119 <raver119@gmail.com>

* platform engines

Signed-off-by: raver119 <raver119@gmail.com>

* allow multiple -h args

Signed-off-by: raver119 <raver119@gmail.com>

* allow multiple -h args

Signed-off-by: raver119 <raver119@gmail.com>

* move mkldnn out of CPU block

Signed-off-by: raver119 <raver119@gmail.com>

* link to mkldnn on cuda

Signed-off-by: raver119 <raver119@gmail.com>

* less prints

Signed-off-by: raver119 <raver119@gmail.com>

* minor tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* next step

Signed-off-by: raver119 <raver119@gmail.com>

* conv2d NCHW draft

Signed-off-by: raver119 <raver119@gmail.com>

* conv2d biasAdd

Signed-off-by: raver119 <raver119@gmail.com>

* test for MKL/CUDNN combined use

Signed-off-by: raver119 <raver119@gmail.com>

* - provide additional code for conv2d ff based on cudnn api, not tested yet

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further work on conv2d helper based on using cudnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - fixing several cuda bugs which appeared after cudnn lib had been started to use

Signed-off-by: Yurii <iuriish@yahoo.com>

* - implementation of conv2d backprop op based on cudnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - implementation of conv3d and conv3d_bp ops based on cudnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - bugs fixing in conv3d/conv3d_bp ops (cudnn in use)

Signed-off-by: Yurii <iuriish@yahoo.com>

* - implementation of depthwiseConv2d (ff/bp) op based on cudnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - implementation of batchnorm ff op based on cudnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - disable cudnn batchnorm temporary

Signed-off-by: Yurii <iuriish@yahoo.com>

* - add minor change in cmake

Signed-off-by: Yurii <iuriish@yahoo.com>

* engine for depthwise mkldnn

Signed-off-by: raver119 <raver119@gmail.com>

* couple of includes

Signed-off-by: raver119 <raver119@gmail.com>

* - provide permutation to cudnn batchnorm ff when format is NHWC

Signed-off-by: Yurii <iuriish@yahoo.com>

* lgamma fix

Signed-off-by: raver119 <raver119@gmail.com>

* - eliminate memory leak in two tests

Signed-off-by: Yurii <iuriish@yahoo.com>

Co-authored-by: Yurii Shyrma <iuriish@yahoo.com>
2020-01-20 21:32:46 +03:00
raver119 29e8e09db6
String changes (#3)
* initial commit

* additional data types & tensor type

Signed-off-by: raver119 <raver119@gmail.com>

* next step

Signed-off-by: raver119 <raver119@gmail.com>

* missing include

* sparse_to_dense

Signed-off-by: raver119 <raver119@gmail.com>

* few more tests files

Signed-off-by: raver119 <raver119@gmail.com>

* draft

Signed-off-by: raver119 <raver119@gmail.com>

* numeric sparse_to_dense

Signed-off-by: raver119 <raver119@gmail.com>

* comment

Signed-off-by: raver119 <raver119@gmail.com>

* string sparse_to_dense version

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA DataBuffer expand

Signed-off-by: raver119 <raver119@gmail.com>

* few tweaks for CUDA build

Signed-off-by: raver119 <raver119@gmail.com>

* shape fn for string_split

Signed-off-by: raver119 <raver119@gmail.com>

* one more comment

Signed-off-by: raver119 <raver119@gmail.com>

* string_split indices

Signed-off-by: raver119 <raver119@gmail.com>

* next step

Signed-off-by: raver119 <raver119@gmail.com>

* test passes

Signed-off-by: raver119 <raver119@gmail.com>

* few rearrangements for databuffer implementations

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer: move inline methods to common implementations

Signed-off-by: raver119 <raver119@gmail.com>

* add native DataBuffer to Nd4j presets

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer creation

Signed-off-by: raver119 <raver119@gmail.com>

* use DataBuffer for allocation

Signed-off-by: raver119 <raver119@gmail.com>

* cpu databuffer as deallocatable

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer setters for bufers

Signed-off-by: raver119 <raver119@gmail.com>

* couple of wrappers

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffers being passed around

Signed-off-by: raver119 <raver119@gmail.com>

* Bunch of ByteBuffer-related signatures gone

Signed-off-by: raver119 <raver119@gmail.com>

* - few more Nd4j signatures removed
- minor fix for bfloat16

Signed-off-by: raver119 <raver119@gmail.com>

* nullptr pointer is still a pointer, but 0 as address :)

Signed-off-by: raver119 <raver119@gmail.com>

* one special test

Signed-off-by: raver119 <raver119@gmail.com>

* empty string array init

Signed-off-by: raver119 <raver119@gmail.com>

* one more test in cpp

Signed-off-by: raver119 <raver119@gmail.com>

* memcpy instead of databuffer swap

Signed-off-by: raver119 <raver119@gmail.com>

* special InteropDataBuffer for front-end languages

Signed-off-by: raver119 <raver119@gmail.com>

* few tweaks for java

Signed-off-by: raver119 <raver119@gmail.com>

* pointer/indexer actualization

Signed-off-by: raver119 <raver119@gmail.com>

* CustomOp returns list for inputArguments and outputArguments instead of array

Signed-off-by: raver119 <raver119@gmail.com>

* redundant call

Signed-off-by: raver119 <raver119@gmail.com>

* print_variable op

Signed-off-by: raver119 <raver119@gmail.com>

* - view handling (but wrong one)
- print_variable java wrapper

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* - empty arrays handling

Signed-off-by: raver119 <raver119@gmail.com>

* - deserialization works now

Signed-off-by: raver119 <raver119@gmail.com>

* minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* one more fix

Signed-off-by: raver119 <raver119@gmail.com>

* initial cuda commit

Signed-off-by: raver119 <raver119@gmail.com>

* print_variable message validation

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA views

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA special buffer size

Signed-off-by: raver119 <raver119@gmail.com>

* minor update to match master changes

Signed-off-by: raver119 <raver119@gmail.com>

* - consider arrays always actual on device for CUDA
- additional PrintVariable constructor
- CudaUtf8Buffer now allocates host buffer by default

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* - print_variable now allows print from device

Signed-off-by: raver119 <raver119@gmail.com>

* InteropDataBuffer data type fix

Signed-off-by: raver119 <raver119@gmail.com>

* ...

Signed-off-by: raver119 <raver119@gmail.com>

* disable some debug messages

Signed-off-by: raver119 <raver119@gmail.com>

* master pulled in

Signed-off-by: raver119 <raver119@gmail.com>

* couple of new methods for DataBuffer interop

Signed-off-by: raver119 <raver119@gmail.com>

* java side

Signed-off-by: raver119 <raver119@gmail.com>

* offsetted constructor

Signed-off-by: raver119 <raver119@gmail.com>

* new CUDA deallocator

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA backend torn apart

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA backend torn apart 2

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA backend torn apart 3

Signed-off-by: raver119 <raver119@gmail.com>

* - few new tests
- few new methods for DataBuffer management

Signed-off-by: raver119 <raver119@gmail.com>

* few more tests + few more tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* two failing tests

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* two failing tests pass

Signed-off-by: raver119 <raver119@gmail.com>

* now we pass DataBuffer to legacy ops too

Signed-off-by: raver119 <raver119@gmail.com>

* Native DataBuffer for legacy ops, Java side

Signed-off-by: raver119 <raver119@gmail.com>

* CPU java side update

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA java side update

Signed-off-by: raver119 <raver119@gmail.com>

* no more prepare/register action on java side

Signed-off-by: raver119 <raver119@gmail.com>

* NDArray::prepare/register use now accepts vectors

Signed-off-by: raver119 <raver119@gmail.com>

* InteropDataBuffer now has few more convenience methods

Signed-off-by: raver119 <raver119@gmail.com>

* java bindings update

Signed-off-by: raver119 <raver119@gmail.com>

* tick device in NativeOps

Signed-off-by: raver119 <raver119@gmail.com>

* Corrected usage of OpaqueBuffer for tests.

* Corrected usage of OpaqueBuffer for java tests.

* NativeOpsTests fixes.

* print_variable now returns scalar

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* compat_string_split fix for CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* - CUDA execScalar fix
- CUDA lazyAllocateHostPointer now checks java indexer/pointer instead of native pointer

Signed-off-by: raver119 <raver119@gmail.com>

* legacy ops DataBuffer migration prototype

Signed-off-by: raver119 <raver119@gmail.com>

* ignore device shapeinfo coming from java

Signed-off-by: raver119 <raver119@gmail.com>

* minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* minor transformAny fix

Signed-off-by: raver119 <raver119@gmail.com>

* minor tweak for lazy host allocation

Signed-off-by: raver119 <raver119@gmail.com>

* - DataBuffer::memcpy method
- bitcast now uses memcpy

Signed-off-by: raver119 <raver119@gmail.com>

* - IndexReduce CUDA dimension buffer fix

Signed-off-by: raver119 <raver119@gmail.com>

* views for CPU and CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* less spam

Signed-off-by: raver119 <raver119@gmail.com>

* optional memory init

Signed-off-by: raver119 <raver119@gmail.com>

* async memset

Signed-off-by: raver119 <raver119@gmail.com>

* - SummaryStats CUDA fix
- DataBuffer.sameUnderlyingData() impl
- execBroadcast fix

Signed-off-by: raver119 <raver119@gmail.com>

* - reduce3All fix
- switch to CUDA 10 temporarily

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA version

Signed-off-by: raver119 <raver119@gmail.com>

* proper memory deallocator registration

Signed-off-by: raver119 <raver119@gmail.com>

* HOST_ONLY workspace allocation

Signed-off-by: raver119 <raver119@gmail.com>

* temp commit

Signed-off-by: raver119 <raver119@gmail.com>

* few conflicts resolved

Signed-off-by: raver119 <raver119@gmail.com>

* few minor fixes

Signed-off-by: raver119 <raver119@gmail.com>

* one more minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* NDArray permute should operate on JVM primitives

Signed-off-by: raver119 <raver119@gmail.com>

* - create InteropDataBuffer for shapes as well
- update pointers after view creation in Java

Signed-off-by: raver119 <raver119@gmail.com>

* - addressPointer temporary moved to C++

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA: don't account offset twice

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA: DataBuffer pointer constructor updated

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA NDArray.unsafeDuplication() simplified

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA minor workspace-related fixes

Signed-off-by: raver119 <raver119@gmail.com>

* CPU DataBuffer.reallocate()

Signed-off-by: raver119 <raver119@gmail.com>

* print_affinity op

Signed-off-by: raver119 <raver119@gmail.com>

* print_affinity java side

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA more tweaks for data locality

Signed-off-by: raver119 <raver119@gmail.com>

* - compat_string_split tweak
- CudaUtf8Buffer update

Signed-off-by: raver119 <raver119@gmail.com>

* INDArray.close() mechanic restored

Signed-off-by: raver119 <raver119@gmail.com>

* one more test fixed

Signed-off-by: raver119 <raver119@gmail.com>

* - CUDA DataBuffer.reallocate() updated
- cudaMemcpy (synchronous) restored

Signed-off-by: raver119 <raver119@gmail.com>

* one last fix

Signed-off-by: raver119 <raver119@gmail.com>

* bad import removed

Signed-off-by: raver119 <raver119@gmail.com>

* another small fix

Signed-off-by: raver119 <raver119@gmail.com>

* one special test

Signed-off-by: raver119 <raver119@gmail.com>

* fix bad databuffer size

Signed-off-by: raver119 <raver119@gmail.com>

* release primaryBuffer on replace

Signed-off-by: raver119 <raver119@gmail.com>

* higher timeout

Signed-off-by: raver119 <raver119@gmail.com>

* disable timeouts

Signed-off-by: raver119 <raver119@gmail.com>

* dbCreateView now validates offset and length of a view

Signed-off-by: raver119 <raver119@gmail.com>

* additional validation for dbExpand

Signed-off-by: raver119 <raver119@gmail.com>

* restore timeout back again

Signed-off-by: raver119 <raver119@gmail.com>

* smaller distribution for rng test to prevent timeouts

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA DataBuffer::memcpy now copies to device all the time

Signed-off-by: raver119 <raver119@gmail.com>

* OpaqueDataBuffer now contains all required methods for interop

Signed-off-by: raver119 <raver119@gmail.com>

* some javadoc

Signed-off-by: raver119 <raver119@gmail.com>

* GC on failed allocations

Signed-off-by: raver119 <raver119@gmail.com>

* minor memcpy tweak

Signed-off-by: raver119 <raver119@gmail.com>

* one more bitcast test

Signed-off-by: raver119 <raver119@gmail.com>

* - NDArray::deviceId() propagation
- special multi-threaded test for data locality checks

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer additional syncStream

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer additional syncStream

Signed-off-by: raver119 <raver119@gmail.com>

* one ignored test

Signed-off-by: raver119 <raver119@gmail.com>

* skip host alloc for empty arrays

Signed-off-by: raver119 <raver119@gmail.com>

* ByteBuffer support is back

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer::memcpy minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* few minor prelu/bp tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* nullify-related fixes

Signed-off-by: raver119 <raver119@gmail.com>

* PReLU fixes (#157)

Signed-off-by: Alex Black <blacka101@gmail.com>

* Build fixed

* Fix tests

* one more ByteBuffer signature restored

Signed-off-by: raver119 <raver119@gmail.com>

* nd4j-jdbc-hsql profiles fix

Signed-off-by: raver119 <raver119@gmail.com>

* nd4j-jdbc-hsql profiles fix

Signed-off-by: raver119 <raver119@gmail.com>

* PReLU weight init fix

Signed-off-by: Alex Black <blacka101@gmail.com>

* Small PReLU fix

Signed-off-by: Alex Black <blacka101@gmail.com>

* - INDArray.migrate() reactivated
- DataBuffer::setDeviceId(...) added
- InteropDataBuffer Z syncToDevice added for views

Signed-off-by: raver119 <raver119@gmail.com>

* missed file

Signed-off-by: raver119 <raver119@gmail.com>

* Small tweak

Signed-off-by: Alex Black <blacka101@gmail.com>

* cuda 10.2

Signed-off-by: raver119 <raver119@gmail.com>

* minor fix

Signed-off-by: raver119 <raver119@gmail.com>

Co-authored-by: shugeo <sgazeos@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-01-04 13:27:50 +03:00
raver119 451d9d57fd
shape function override (#161)
Signed-off-by: raver119 <raver119@gmail.com>
2020-01-04 09:06:44 +03:00
Alex Black 29104083cc
Various fixes (#143)
* #8568 ArrayUtil optimization

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #6171 Keras ReLU and ELU support

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Keras softmax layer import

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8549 Webjars dependency management

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for TF import names ':0' suffix issue / NPE

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* BiasAdd: fix default data format for TF import

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Update zoo test ignores

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8509 SameDiff Listener API - provide frame + iteration

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8520 ND4J Environment

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Deconv3d

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Deconv3d fixes + gradient check

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Conv3d fixes + deconv3d DType test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix issue with deconv3d gradient check weight init

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8579 Fix BaseCudaDataBuffer constructor fix for UINT16

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DataType.isNumerical() returns false for BOOL type

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8504 Reduce Spark log spam for tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Clean up DL4J gradient check test spam

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More Gradient check spam reduction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff test spam reduction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fixes for FlatBuffers mapping

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff log spam cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tests should extend BaseNd4jTest

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove debug line in c++ op

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ND4J test spam cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J test spam reduction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More Dl4J and datavec test spam cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for bad conv3d test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Additional test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Embedding layers: don't inherit global default activation function

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Trigger CI

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Consolidate all BaseDL4JTest classes to single class used everywhere; make timeout configurable per class

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Test fixes and timeout increases

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Timeouts and PReLU fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Restore libnd4j build threads arg for CUDA build

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Increase timeouts on a few tests to avoid spurious failures on some CI machines

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More timeout fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More test timeout fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tweak timeout for one more test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Final tweaks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* One more ignore

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2020-01-04 13:45:07 +11:00
Alex Black ce02b6fae7
Small fixes (#140)
* Allow scalar op result array auto allocation

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Don't swallow underlying exception for calculateOutputShape execution failures

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Ignore for known keras failure

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-12-21 17:00:46 +11:00
raver119 a5f5ac72b1
reduce bool changes (#118)
* reduce bool changes

Signed-off-by: raver119 <raver119@gmail.com>

* reduce bool tweaks

Signed-off-by: raver119 <raver119@gmail.com>
2019-12-09 20:08:59 +03:00
shugeo e09a785232 Shugeo resize fix5 (#102)
* Refactored resize images ops to use TF-like bool args as input.

* Refactored helpers for cpu implementation of resize_bilinear and resize_nearest_neighbor ops.

* Refactored cuda implementation for image.resize_bilinear and image.resize_nearest_neighbor ops helpers.

* Refactored nearest_neighbor resize op.

* Added a pair of tests for special case of resize_bilinear algorithm.

* Fixed issue with resize_bilinear op.

* Refactored cpu implementation for helpers with resize_nearest_neighbor op.

* Final fixes for resize ops to conform to TF v1.5

* Refactored cuda helpers for resize_nearest_neighbor op.

* Fixed resize_bilinear to accept proper data.

* Fixed issue with non-float input for resize_bilinear op.

* Refactored cuda helper for resize_bilinear to properly process non-float inputs.

* Added tests for resize_bilinear to int inputs.

* Fixed ResizeBilinear wrapper

* Tests fixed

* Fixed float and bool constant to avoid overflow for some kind of compilers.

* Corrected float constants with float data type.

* Added f suffix for float constants.

* Corrected float constant to avoid overflow with initializing lists.

* Corrected float initializing list with float input.

* Corrected bool constant with initializing list.

* Corrected float and bool values with initializing lists.

* Fixed wrong constant.

* Fixed issue with 1x1 input picture for resize.

* ResizeBilinear default values on import fix

Signed-off-by: raver119 <raver119@gmail.com>
2019-12-05 22:05:33 +03:00
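
For context on the resize_bilinear work above: bilinear resize computes each output pixel by interpolating between the four nearest input pixels, first along x and then along y. A generic sketch of that interpolation, not the libnd4j helper code:

```java
public class BilinearSketch {
    // q00..q11 are the four neighbouring pixel values; fx, fy in [0, 1] are the
    // fractional offsets of the sample point within that 2x2 cell.
    static double bilinear(double q00, double q01, double q10, double q11, double fx, double fy) {
        double top = q00 * (1 - fx) + q01 * fx;     // interpolate along x on the top row
        double bottom = q10 * (1 - fx) + q11 * fx;  // interpolate along x on the bottom row
        return top * (1 - fy) + bottom * fy;        // interpolate the two results along y
    }
}
```
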
Fariz Rahman 0d14032d26 TF Updates (#87)
* tf updates

* pom

* copyright

* graphrunner tests

* gpu test

* getSessionOptionsConfigProto

* dtype fix

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* cast graphs

* savemodel test fix

* testresource instead of local

* Logging level

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* gson dependency issue fix; fix GraphRunnerTest for no session options config case

Signed-off-by: Alex Black <blacka101@gmail.com>

* Final tweaks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* few minor fixes

Signed-off-by: raver119 <raver119@gmail.com>

* one more fix

Signed-off-by: raver119 <raver119@gmail.com>

* Tweak configuration for GraphRunnerTest

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* nd4j align config

* tf warmup
2019-12-04 17:11:03 +11:00
Yurii Shyrma d19eeaec52 Shyrma causal conv1d (#90)
* - add causal mode of padding to convolutions

Signed-off-by: Yurii <iuriish@yahoo.com>

* - add additional tests for causal conv1d

Signed-off-by: Yurii <iuriish@yahoo.com>

* - add causal mode for cuda conv kernels

Signed-off-by: Yurii <iuriish@yahoo.com>

* Java side of Conv1D changes

Signed-off-by: raver119 <raver119@gmail.com>

* Add Conv1DDerivative op

Signed-off-by: Alex Black <blacka101@gmail.com>

* Causal Conv1D gradient checks

Signed-off-by: Alex Black <blacka101@gmail.com>

* Tweaks

Signed-off-by: Alex Black <blacka101@gmail.com>

* - add causal padding mode to conv2d_bp

Signed-off-by: Yurii <iuriish@yahoo.com>

* More thorough causal conv1d tests

Signed-off-by: Alex Black <blacka101@gmail.com>
2019-11-29 14:14:30 +03:00
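
The causal padding mode added above pads a conv1d input only before the start of the sequence, so the output at time step t never depends on inputs after t. The amount of left padding follows directly from the kernel size and dilation; a small sketch of that arithmetic (the helper name is illustrative):

```java
public class CausalPaddingSketch {
    // With causal padding, all of the padding is placed before the sequence start,
    // so (for stride 1) the output length equals the input length and no future values leak in.
    static int causalLeftPadding(int kernelSize, int dilation) {
        return (kernelSize - 1) * dilation;
    }
}
```
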
Samuel Audet 5e07998e59 Add support for CUDA 10.2 (#89) 2019-11-29 16:31:03 +11:00
shugeo 009007120b Shugeo_release_fixes3 (#81)
* Implementation for non_max_suppression_v3 was added. Initial version

* Added check for overcome threshold.

* Added definition for V3 method.

* java remapping for NonMaxSuppressionV3

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed proper processing of an empty output and test.

* Refactored op to less threshold data to float.

* Implemented cuda-based helper for non_max_suppression_v3 op.

* Fixed fake_quant_with_min_max_vars op.

* Fixed tests with float numbers.

* - assert now stops execution
- sortByKey/sortByValue now have input validation

Signed-off-by: raver119 <raver119@gmail.com>

* missing var

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed proper processing for zero max_size inputs.

* Refactored kernel callers.

* Fixed return statement for logdet op helper.

* Refactored unsorted segment SqrtN op.

* get back 8 tail bytes on CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* Refactored segment prod ops and helpers for cuda and tests.

* Additional test.

* CudaWorkspace tests updated for 8 tail bytes

Signed-off-by: raver119 <raver119@gmail.com>

* special atomic test

Signed-off-by: raver119 <raver119@gmail.com>

* atomicMul/atomicDiv fix for 16bit values

Signed-off-by: raver119 <raver119@gmail.com>

* Eliminated waste prints.
2019-11-28 21:08:51 +03:00
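
The non_max_suppression_v3 helper above keeps the highest-scoring boxes and discards any box whose overlap with an already-kept box exceeds a threshold (v3 also filters by a score threshold). The overlap measure is intersection-over-union (IoU); a generic sketch of that computation, not the libnd4j helper:

```java
public class IouSketch {
    // Boxes are given as {x1, y1, x2, y2} with x1 < x2 and y1 < y2.
    static double iou(double[] a, double[] b) {
        double ix = Math.max(0.0, Math.min(a[2], b[2]) - Math.max(a[0], b[0]));
        double iy = Math.max(0.0, Math.min(a[3], b[3]) - Math.max(a[1], b[1]));
        double intersection = ix * iy;
        double areaA = (a[2] - a[0]) * (a[3] - a[1]);
        double areaB = (b[2] - b[0]) * (b[3] - b[1]);
        return intersection / (areaA + areaB - intersection);
    }
}
```
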
Alex Black e910ce75ec
Various Fixes (#75)
* #8431 Cast loss function weights array automatically

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add 'regex verbose mode' printing (ExecDebugListener) for TFGraphTestAllSameDiff'

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Class import mapping fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Reshape fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Don't swallow first exception in NativeOpExecutioner.exec(CustomOp)

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-23 20:06:12 +11:00
shugeo 4187190609 Shugeo release fix2 (#70)
* Corrected input checking and tests for bitcast op.

* Fixed an issue with non_max_suppression form generation and processing with score threshold given.

* Fixed bilinear resize kernel and tests.

* push for Serhii

Signed-off-by: raver119 <raver119@gmail.com>

* Added test for nearest_neighbor resize with int input.

* Added data type check for input/output match.

* Eliminate error in macros.

* Improved output message for type checking.

* Fixed input/output types for op.

* Eliminated waste logging.

* Refactored resize_bilinear helper for multithreading for cpu platform.

* Cosmetic changes only.

* Fixed error for string substitution.

* Skip test for cbow_batch with cuda.

* fix for resizeNearestNeighbor output dtype

Signed-off-by: raver119 <raver119@gmail.com>

* Refactored non_max_suppression helper.

* Refactored shape generation and input handling.

* Added additional test.
2019-11-22 22:42:44 +03:00
Samuel Audet ff73e6da3f ND4J: Fix OpenBLAS loading for nd4j-native (#64)
* ND4J: Fix OpenBLAS loading for nd4j-native and remove bundling of OpenMP

Signed-off-by: Samuel Audet <samuel.audet@gmail.com>

* Bundle back libgomp.so.1 for Linux

Signed-off-by: Samuel Audet <samuel.audet@gmail.com>

* Readd preload directories for ARM

Signed-off-by: Samuel Audet <samuel.audet@gmail.com>

* Add back preloads for GCC on Windows

Signed-off-by: Samuel Audet <samuel.audet@gmail.com>

* Add explicit preloadpaths for ARM and POWER to bundle correct library

Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
2019-11-21 15:54:41 +03:00
raver119 064a56ccf1
Few fixes (#66)
* skip legacy transforms execution in case of empty input arrays

Signed-off-by: raver119 <raver119@gmail.com>

* - BroadcastBool ops now accept extraParams to make MatchCondition possible
- TrueBroadcastHelper now uses samediff::threads

Signed-off-by: raver119 <raver119@gmail.com>

* java side

Signed-off-by: raver119 <raver119@gmail.com>

* trigger jenkins

Signed-off-by: raver119 <raver119@gmail.com>

* update LessThanOrEqual opNum mapping

Signed-off-by: raver119 <raver119@gmail.com>

* update LessThanOrEqual opNum mapping

Signed-off-by: raver119 <raver119@gmail.com>
2019-11-21 15:43:03 +03:00