shugeo
4187190609
Shugeo release fix2 ( #70 )
...
* Corrected input checking and tests for bitcast op.
* Fixed an issue with non_max_suppression shape generation and processing when a score threshold is given (see the sketch after this entry).
* Fixed bilinear resize kernel and tests.
* push for Serhii
Signed-off-by: raver119 <raver119@gmail.com>
* Added test for nearest_neighbor resize with int input.
* Added data type check for input/output match.
* Eliminate error in macros.
* Improved output message for type checking.
* Fixed input/output types for op.
* Eliminated unnecessary logging.
* Refactored the resize_bilinear helper for multithreading on the cpu platform.
* Cosmetic changes only.
* Fixed error for string substitution.
* Skip test for cbow_batch with cuda.
* fix for resizeNearestNeighbor output dtype
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored non_max_suppression helper.
* Refactored shape generation and input handling.
* Added additional test.
2019-11-22 22:42:44 +03:00
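For reference, a minimal Java sketch of invoking the non_max_suppression op touched by the entry above through ND4J's DynamicCustomOp builder. The argument layout used here (max output size as an integer arg, IoU and score thresholds as floating-point args) and the shapes are assumptions for illustration, not the op's confirmed signature.

    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.api.ops.DynamicCustomOp;
    import org.nd4j.linalg.factory.Nd4j;

    public class NmsSketch {
        public static void main(String[] args) {
            INDArray boxes  = Nd4j.rand(4, 4);                                   // [numBoxes, 4] box coordinates
            INDArray scores = Nd4j.create(new float[]{0.9f, 0.8f, 0.3f, 0.05f}); // per-box scores

            DynamicCustomOp op = DynamicCustomOp.builder("non_max_suppression")
                    .addInputs(boxes, scores)
                    .addIntegerArguments(3)              // assumed: max number of boxes to keep
                    .addFloatingPointArguments(0.5, 0.1) // assumed: IoU threshold, score threshold
                    .build();

            INDArray[] selected = Nd4j.exec(op);         // indices of the boxes kept
            System.out.println(selected[0]);
        }
    }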
Samuel Audet
ff73e6da3f
ND4J: Fix OpenBLAS loading for nd4j-native ( #64 )
...
* ND4J: Fix OpenBLAS loading for nd4j-native and remove bundling of OpenMP
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
* Bundle back libgomp.so.1 for Linux
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
* Readd preload directories for ARM
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
* Add back preloads for GCC on Windows
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
* Add explicit preloadpaths for ARM and POWER to bundle correct library
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
2019-11-21 15:54:41 +03:00
raver119
064a56ccf1
Few fixes ( #66 )
...
* skip legacy transforms execution in case of empty input arrays
Signed-off-by: raver119 <raver119@gmail.com>
* - BroadcastBool ops now accept extraParams to make MatchCondition possible
- TrueBroadcastHelper now uses samediff::threads
Signed-off-by: raver119 <raver119@gmail.com>
* java side
Signed-off-by: raver119 <raver119@gmail.com>
* trigger jenkins
Signed-off-by: raver119 <raver119@gmail.com>
* update LessThanOrEqual opNum mapping
Signed-off-by: raver119 <raver119@gmail.com>
* update LessThanOrEqual opNum mapping
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-21 15:43:03 +03:00
raver119
83cb0d9329
[WIP] Create and small fix ( #67 )
...
* - create op
- skip exec for empty inputs for non_max_suppression
- EmptyHandling idea
Signed-off-by: raver119 <raver119@gmail.com>
* Create op and mapping for it
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-21 13:31:20 +03:00
Yurii Shyrma
66b84b38cf
Shyrma mmul ( #58 )
...
* - get rid of some copy procedures in mmulHelper ops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further work on embedding cuda api for batched gemm (cublasGemmBatchedEx) in our mmulHelper class
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further work on cuda batched gemm api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - write own cuda kernel performing batched gemm
Signed-off-by: Yurii <iuriish@yahoo.com>
* missing include in MmulHelper
Signed-off-by: raver119 <raver119@gmail.com>
* - keep the previous correct kernels for mmulNxN in code, since the new one may fail for some reason in the future
Signed-off-by: Yurii <iuriish@yahoo.com>
* disable old tensordot
Signed-off-by: raver119 <raver119@gmail.com>
* - rewrite cuda kernels for usualGemm and usualGemv
Signed-off-by: Yurii <iuriish@yahoo.com>
* - profiling mmul helpers
Signed-off-by: Yurii <iuriish@yahoo.com>
* - prints to check shapes were added
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correct type of output array C in mmulNxN
Signed-off-by: Yurii <iuriish@yahoo.com>
* - take into account possible nans in C array
Signed-off-by: Yurii <iuriish@yahoo.com>
* slightly change numThreads message
Signed-off-by: raver119 <raver119@gmail.com>
* - make corrections in accordance to given notes in pr review
Signed-off-by: Yurii <iuriish@yahoo.com>
2019-11-19 15:39:36 +02:00
raver119
1780dcc883
[WIP] Small fixes here and there ( #50 )
...
* one range test
Signed-off-by: raver119 <raver119@gmail.com>
* few Context convenience signatures
Signed-off-by: raver119 <raver119@gmail.com>
* one more range test
Signed-off-by: raver119 <raver119@gmail.com>
* "range" "fix"
Signed-off-by: raver119 <raver119@gmail.com>
* adjust_contrast_v2 now allows the scale factor to be provided via an input variable (see the sketch after this entry)
Signed-off-by: raver119 <raver119@gmail.com>
* adjust_contrast now allows scale factor as variable too
Signed-off-by: raver119 <raver119@gmail.com>
* bitcast shape tests
Signed-off-by: raver119 <raver119@gmail.com>
* BitCast import dtype added
Signed-off-by: raver119 <raver119@gmail.com>
* few more BitCast signatures
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-15 17:04:29 +03:00
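A minimal Java sketch of the adjust_contrast_v2 change noted above: the contrast scale factor supplied as an input array rather than as a fixed argument. The input order and shapes are assumptions for illustration only.

    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.api.ops.DynamicCustomOp;
    import org.nd4j.linalg.factory.Nd4j;

    public class AdjustContrastSketch {
        public static void main(String[] args) {
            INDArray image  = Nd4j.rand(new int[]{8, 8, 3}); // assumed HWC image
            INDArray factor = Nd4j.scalar(2.0f);             // scale factor passed as a variable input

            DynamicCustomOp op = DynamicCustomOp.builder("adjust_contrast_v2")
                    .addInputs(image, factor)
                    .build();

            INDArray adjusted = Nd4j.exec(op)[0];
            System.out.println(adjusted.shapeInfoToString());
        }
    }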
raver119
1eb3de90d7
[WIP] Platform helpers switches ( #44 )
...
* - platform helpers can be disabled on per-op basis now via Context::allowHelpers
- java has access to it as well
Signed-off-by: raver119 <raver119@gmail.com>
* global platform-helpers trigger
Signed-off-by: raver119 <raver119@gmail.com>
* few signatures renamed
Signed-off-by: raver119 <raver119@gmail.com>
* - few new env variables to follow
- maxThreads/masterThreads differentiation
Signed-off-by: raver119 <raver119@gmail.com>
* Javadoc update
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-14 14:35:02 +03:00
raver119
48df1acdfb
[WIP] ThreadPool ( #8 )
...
This PR removes OpenMP use in 95% of cases
2019-11-13 17:04:59 +03:00
Samuel Audet
73b5a508fc
Update dependencies to just released JavaCPP and JavaCV 1.5.2
...
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
2019-11-07 17:57:34 +09:00
Alex Black
d82877b18b
Various SameDiff fixes ( #21 )
...
* MKLDNN LSTM forward implementation (disabled pending #8331 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8318 add SameDiff.calculateGradientsAndOutputs
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Disable mkldnn backprop for now - pending fix, issue #8335
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8337 Fix CudaExecutioner unnecessary result array allocation/replacement
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small FlatBuffers serde fix, UInt8
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8135 ImagePreProcessingScaler - add segmentation support
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8319 Ensure listeners are called when they are supposed to be called
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8214 UNet (non-pretrained) last conv layer kernel size fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-02 11:25:53 +11:00
Alexander Stoyakin
b816845797
Fixing nd4j-cuda build ( #20 )
...
* Roll back recent fix to restore build.
* Fix compilation.
* presets updated
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-01 15:59:29 +02:00
Alexander Stoyakin
45a40c8a89
DL4J/ND4J: Do pass on integer casts ( #15 )
...
* Int cast fixes.
* Revert "Int cast fixes."
This reverts commit aa36e8ca
* Int casts
* Int cast
* Int casts
* Get rid of int casts. Dropping deprecated aggregate ops.
* java scatterUpdate changes
Signed-off-by: raver119 <raver119@gmail.com>
* c++ scatterUpdate changes
Signed-off-by: raver119 <raver119@gmail.com>
* Remove aggregated ops.
* Restored test
* Tests restored.
* Minor fixes
2019-10-31 11:23:09 +02:00
Alex Black
d333d29099
SameDiff cleanup and fixes ( #12 )
...
* #8160 Remove resolvePrepertiesFromSameDiffBeforeExecution
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* SameDiff API cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More SameDiff cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8248 Switch SameDiff variable init from lazy to creation time for more predictable behaviour
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8252 TanhDerivative javadoc
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8225 Deconvolution2D input validation
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8265 Switch SameDiff.outputs() to user settable, instead of unreliable 'best guess'
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8224 SameDiff.zero and .one create constants, not variables (see the sketch after this entry)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More cleanup and fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small test fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J SameDiff fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Re-add hack for Deconvolution2DLayer until #8315 is resolved
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8270 Move CUDA device/version logging to Java; can be disabled via existing org.nd4j.log.initialization system property
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* All ND4J init logging checks system property
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove redundant device logging
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* One more fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* UX improvements
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Deconv fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add deconv tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove debug code
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-10-26 12:38:08 +11:00
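Two of the user-visible changes above in a minimal Java sketch: SameDiff.zero/.one now create constants, and ND4J initialization logging can be silenced via the org.nd4j.log.initialization system property mentioned in the #8270 item. The zero/one overloads taking a name and shape, and checking the result through getVariableType(), are assumptions about how to observe the change.

    import org.nd4j.autodiff.samediff.SDVariable;
    import org.nd4j.autodiff.samediff.SameDiff;
    import org.nd4j.autodiff.samediff.VariableType;

    public class SameDiffConstantsSketch {
        public static void main(String[] args) {
            // Disable ND4J initialization logging via the existing system property
            System.setProperty("org.nd4j.log.initialization", "false");

            SameDiff sd = SameDiff.create();
            SDVariable z = sd.zero("z", 2, 3);   // created as a constant, not a variable
            SDVariable o = sd.one("o", 2, 3);

            System.out.println(z.getVariableType() == VariableType.CONSTANT);   // expected: true
            System.out.println(o.getVariableType() == VariableType.CONSTANT);   // expected: true
        }
    }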
Alexander Stoyakin
f31661e13b
Merge pull request #7 from KonduitAI/asto_nd4s_10172019
...
KDTree optimization
2019-10-23 12:11:25 +03:00
Robert Altena
83d958d536
Sparse matrix refactoring. ( #8238 )
...
* remove sparse method from INDArray.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove gemm
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove usage of Nd4j.sparseFactory
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Nd4j.sparseFactory removed.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* sparseNDArray deleted.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove more sparse calls and constants.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove SparseBlasWrapper.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* delete BaseSparseBlasWrapper.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove 3 sparse factory classes.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* delete SparseCPULevel.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* deletes JcusparseLevel, CUDASparselevel.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* delete nativeCPU sparse classes.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* removes sparse methods from NDArrayFactory.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* more deletes.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* delete (ignored) tests. BaseSparseNDArray.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* deletes ISparseNDArray.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove sparse methods from INDArray.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* deletes sparse classes.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-17 22:56:29 +03:00
AlexDBlack
a66e03355e
Merge remote-tracking branch 'fork/master'
2019-09-12 12:20:57 +10:00
raver119
98e2814879
Platform helpers ( #8216 )
...
* platform helpers draft
Signed-off-by: raver119 <raver119@gmail.com>
* typo
Signed-off-by: raver119 <raver119@gmail.com>
* disable platform cmake
Signed-off-by: raver119 <raver119@gmail.com>
* another draft
Signed-off-by: raver119 <raver119@gmail.com>
* mkldnn convolution refactored
Signed-off-by: raver119 <raver119@gmail.com>
* minor tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* one more safety check
Signed-off-by: raver119 <raver119@gmail.com>
* prototype works
Signed-off-by: raver119 <raver119@gmail.com>
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* force static library mode for mkldnn
Signed-off-by: raver119 <raver119@gmail.com>
* - ismax fix
- experimental arg fix
- don't enforce openblas on Apple hardware
Signed-off-by: raver119 <raver119@gmail.com>
* bunch of small fixes
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* declare concurrent
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* - MKLDNN version upgrade to 1.0.2
- avgpool2d/maxpool2d APIs update
Signed-off-by: raver119 <raver119@gmail.com>
* - avgpool2d_bp/maxpool2d_bp APIs update
Signed-off-by: raver119 <raver119@gmail.com>
* - conv2d/batchnorm APIs update
Signed-off-by: raver119 <raver119@gmail.com>
* - lrn/conv2d_bp/conv3d/conv3d_bp APIs update
Signed-off-by: raver119 <raver119@gmail.com>
* all ops converted to MKLDNN 1.x
Signed-off-by: raver119 <raver119@gmail.com>
* bunch of tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* namespace for platform helpers
Signed-off-by: raver119 <raver119@gmail.com>
* make sure platform helpers aren't optimized out
Signed-off-by: raver119 <raver119@gmail.com>
* build cpu_features on x86 systems
Signed-off-by: raver119 <raver119@gmail.com>
* build cpu_features on x86 systems
Signed-off-by: raver119 <raver119@gmail.com>
* more of cpu_features
Signed-off-by: raver119 <raver119@gmail.com>
* - mkldnn removed from java
- cpu_features checks in CpuNDArrayFactory
Signed-off-by: raver119 <raver119@gmail.com>
* F16C definition renamed
Signed-off-by: raver119 <raver119@gmail.com>
* some mkldnn rearrangements
Signed-off-by: raver119 <raver119@gmail.com>
* check supported instructions before doing anything
Signed-off-by: raver119 <raver119@gmail.com>
* typo
Signed-off-by: raver119 <raver119@gmail.com>
* missing impl
Signed-off-by: raver119 <raver119@gmail.com>
* BUILD_PIC option
Signed-off-by: raver119 <raver119@gmail.com>
* conv2d fix
Signed-off-by: raver119 <raver119@gmail.com>
* avgpool3d fix
Signed-off-by: raver119 <raver119@gmail.com>
* avgpool3d_bp fix
Signed-off-by: raver119 <raver119@gmail.com>
* avgpool2d_bp leak fix
Signed-off-by: raver119 <raver119@gmail.com>
* avgpool3d_bp leak fix
Signed-off-by: raver119 <raver119@gmail.com>
* maxpool bp leaks fixed
Signed-off-by: raver119 <raver119@gmail.com>
* printf removed
Signed-off-by: raver119 <raver119@gmail.com>
* batchnorm fix
Signed-off-by: raver119 <raver119@gmail.com>
* AVX warning/error polishing
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More polish
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Polish
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* remove previous MKL-DNN support layer
Signed-off-by: raver119 <raver119@gmail.com>
* avx2 tweak
Signed-off-by: raver119 <raver119@gmail.com>
* allow static for apple
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* exclude mkldnn in one more place
Signed-off-by: raver119 <raver119@gmail.com>
* exclude mkldnn in one more place
Signed-off-by: raver119 <raver119@gmail.com>
* restore OPENBLAS_PATH use
Signed-off-by: raver119 <raver119@gmail.com>
* add runtime check for avx/avx2 support
Signed-off-by: raver119 <raver119@gmail.com>
* convolution_auto
Signed-off-by: raver119 <raver119@gmail.com>
* Add logic for helper argument
* minor test fix
Signed-off-by: raver119 <raver119@gmail.com>
* few tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* few tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* skip OpTracker props for non-x86 builds
Signed-off-by: raver119 <raver119@gmail.com>
* linux arm isn't x86 :)
Signed-off-by: raver119 <raver119@gmail.com>
* avx-512
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA presets fix
Signed-off-by: raver119 <raver119@gmail.com>
* BUILD_PIC
Signed-off-by: raver119 <raver119@gmail.com>
* prefetchw for avx2
Signed-off-by: raver119 <raver119@gmail.com>
* BUILD_PIC again
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-11 21:50:28 +03:00
Robert Altena
c99f980513
INDArray javadoc ( #246 )
...
* javadoc
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* javadoc
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* javadoc
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* review fixes.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-09 13:09:31 +10:00
AlexDBlack
a76a44e198
Merge remote-tracking branch 'fork/master'
2019-09-05 22:04:25 +10:00
Robert Altena
f25e3e71e5
remove lengthLong ( #236 )
...
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-05 11:19:38 +10:00
AlexDBlack
b7226bdd7a
Merge
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-05 00:54:11 +10:00
raver119
a90c7dd995
[WIP] Last set of changes ( #234 )
...
* mmul op instead of cublasSgemm
Signed-off-by: raver119 <raver119@gmail.com>
* transB
Signed-off-by: raver119 <raver119@gmail.com>
* jcpp handles
Signed-off-by: raver119 <raver119@gmail.com>
* bitwise and/or/xor
Signed-off-by: raver119 <raver119@gmail.com>
* bitwise and/or/xor mapping
Signed-off-by: raver119 <raver119@gmail.com>
* cuda/cublas version check
Signed-off-by: raver119 <raver119@gmail.com>
* add expected version
Signed-off-by: raver119 <raver119@gmail.com>
* cuda/cublas version check in java
Signed-off-by: raver119 <raver119@gmail.com>
* one more error check
Signed-off-by: raver119 <raver119@gmail.com>
* build fix
Signed-off-by: raver119 <raver119@gmail.com>
* build fix
Signed-off-by: raver119 <raver119@gmail.com>
* build fix
Signed-off-by: raver119 <raver119@gmail.com>
* one more fix
Signed-off-by: raver119 <raver119@gmail.com>
* skip CUDA version check for now
Signed-off-by: raver119 <raver119@gmail.com>
* better wording
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-04 14:41:08 +03:00
raver119
7abc574eeb
Snapshot update ( #8194 )
...
* fix double consumption of rng on cpu
Signed-off-by: raver119 <raver119@gmail.com>
* Shyrma docs (#222 )
* - documenting and profiling matrix_set_diag cuda kernel
Signed-off-by: Yurii <yurii@skymind.io>
* - correct formula of pnorm pooling in cuda 2d/3d kernels
- remove helper matrix_diag which duplicates work of helper matrix_set_diag
Signed-off-by: Yurii <yurii@skymind.io>
* cublasHandle sharing + lock
Signed-off-by: raver119 <raver119@gmail.com>
* cublasHandle sharing + lock
Signed-off-by: raver119 <raver119@gmail.com>
* Documentation from serialization/deserialization in NLP (#221 )
* refactoring
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Javadocs
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Javadoc fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Cleanup
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* dedicated lock for getCudaCublasHandle
Signed-off-by: raver119 <raver119@gmail.com>
* Small fixes (#223 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ELU DL4J fixes (#224 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* javadoc (#225 )
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Small test compilation fix (#226 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8182 remove spark version suffix (#227 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* [WIP] Thread safety (#229 )
* sync after cublas*gemm
Signed-off-by: raver119 <raver119@gmail.com>
* mutex for CublasHelper
Signed-off-by: raver119 <raver119@gmail.com>
* don't store cublasHandle in LaunchContext, it's per-device anyway
Signed-off-by: raver119 <raver119@gmail.com>
* some printout
Signed-off-by: raver119 <raver119@gmail.com>
* check for field instead
Signed-off-by: raver119 <raver119@gmail.com>
* pew-pew
Signed-off-by: raver119 <raver119@gmail.com>
* don't release ContextBuffers until device changed
Signed-off-by: raver119 <raver119@gmail.com>
* small tweak
Signed-off-by: raver119 <raver119@gmail.com>
* some logging in sgemm
Signed-off-by: raver119 <raver119@gmail.com>
* stream sync
Signed-off-by: raver119 <raver119@gmail.com>
* some more logging
Signed-off-by: raver119 <raver119@gmail.com>
* some more error checks
Signed-off-by: raver119 <raver119@gmail.com>
* one fancy test
Signed-off-by: raver119 <raver119@gmail.com>
* one fancy test
Signed-off-by: raver119 <raver119@gmail.com>
* minor AffinityManager fix
Signed-off-by: raver119 <raver119@gmail.com>
* cudaEvent error logging improvement
Signed-off-by: raver119 <raver119@gmail.com>
* ConstantHelper thread safety
Signed-off-by: raver119 <raver119@gmail.com>
* - minor corrections in ConstantTadHelper
Signed-off-by: Yurii <yurii@skymind.io>
* ConstantShapeHelper thread safety
Signed-off-by: raver119 <raver119@gmail.com>
* ConstantTadHelper.cu updated
Signed-off-by: raver119 <raver119@gmail.com>
* logging off
Signed-off-by: raver119 <raver119@gmail.com>
* logging off
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-03 22:02:02 +03:00
raver119
dddc8a1143
[WIP] Thread safety ( #229 )
...
* sync after cublas*gemm
Signed-off-by: raver119 <raver119@gmail.com>
* mutex for CublasHelper
Signed-off-by: raver119 <raver119@gmail.com>
* don't store cublasHandle in LaunchContext, it's per-device anyway
Signed-off-by: raver119 <raver119@gmail.com>
* some printout
Signed-off-by: raver119 <raver119@gmail.com>
* check for field instead
Signed-off-by: raver119 <raver119@gmail.com>
* pew-pew
Signed-off-by: raver119 <raver119@gmail.com>
* don't release ContextBuffers until device changed
Signed-off-by: raver119 <raver119@gmail.com>
* small tweak
Signed-off-by: raver119 <raver119@gmail.com>
* some logging in sgemm
Signed-off-by: raver119 <raver119@gmail.com>
* stream sync
Signed-off-by: raver119 <raver119@gmail.com>
* some more logging
Signed-off-by: raver119 <raver119@gmail.com>
* some more error checks
Signed-off-by: raver119 <raver119@gmail.com>
* one fancy test
Signed-off-by: raver119 <raver119@gmail.com>
* one fancy test
Signed-off-by: raver119 <raver119@gmail.com>
* minor AffinityManager fix
Signed-off-by: raver119 <raver119@gmail.com>
* cudaEvent error logging improvement
Signed-off-by: raver119 <raver119@gmail.com>
* ConstantHelper thread safety
Signed-off-by: raver119 <raver119@gmail.com>
* - minor corrections in ConstantTadHelper
Signed-off-by: Yurii <yurii@skymind.io>
* ConstantShapeHelper thread safety
Signed-off-by: raver119 <raver119@gmail.com>
* ConstantTadHelper.cu updated
Signed-off-by: raver119 <raver119@gmail.com>
* logging off
Signed-off-by: raver119 <raver119@gmail.com>
* logging off
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-03 22:00:38 +03:00
raver119
d3253aff3f
dedicated lock for getCudaCublasHandle
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-02 20:01:13 +03:00
raver119
2129d5bcac
cublasHandle sharing + lock
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-02 16:52:28 +03:00
raver119
18828f9725
cublasHandle sharing + lock
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-02 16:52:10 +03:00
AlexDBlack
7ded4416cb
Merge remote-tracking branch 'fork/master'
2019-09-02 18:52:12 +10:00
Robert Altena
6d04d30c94
INDArray.java javadoc ( #215 )
...
* javadoc
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* javadoc
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-02 16:06:20 +10:00
Robert Altena
ef1de6a4aa
rcorbish #8617 ( #8188 )
...
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-01 07:17:36 +03:00
raver119
1003428a18
[WIP] Int broadcastables ( #195 )
...
* Removed invalid resource and fixed tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* legacy scalar/pairwise/broadcast int ops
Signed-off-by: raver119 <raver119@gmail.com>
* NDArray int broadcastables
Signed-off-by: raver119 <raver119@gmail.com>
* few more bitwise tests
Signed-off-by: raver119 <raver119@gmail.com>
* java side update
Signed-off-by: raver119 <raver119@gmail.com>
* Argument type changed for shift ops
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* legacy scalar/pairwise/broadcast int ops
Signed-off-by: raver119 <raver119@gmail.com>
* NDArray int broadcastables
Signed-off-by: raver119 <raver119@gmail.com>
* few more bitwise tests
Signed-off-by: raver119 <raver119@gmail.com>
* java side update
Signed-off-by: raver119 <raver119@gmail.com>
* Argument type changed for shift ops
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2019-08-30 10:12:40 +03:00
Alex Black
dcc2baa676
Version upgrades ( #199 )
...
* DataVec fixes for Jackson version upgrade
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J jackson updates + databind version 2.9.9.3
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Shade snakeyaml along with jackson
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Version fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Switch DataVec legacy JSON format handling to mixins
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Next set of fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup for legacy JSON mapping
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Upgrade commons compress to 1.18; small test fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* New Jackson backward compatibility for DL4J - Round 1
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* New Jackson backward compatibility for DL4J - Round 2
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More fixes, all but legacy custom passing
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Provide an upgrade path for custom layers for models in pre-1.0.0-beta JSON format
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Legacy deserialization cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small amount of polish - legacy JSON
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Upgrade guava version
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* IEvaluation legacy format deserialization fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Upgrade play version to 2.7.3
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Update nd4j-parameter-server-status to new Play API
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Update DL4J UI for new play version
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More play framework updates
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove Spark 1/2 adapter code from DataVec
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* datavec-spark dependency cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J spark updates, pt 1
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J spark updates, pt 2
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J spark updates, pt 3
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J spark updates, pt 4
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Test fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Another fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Breeze upgrade, dependency cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add Scala 2.12 version to pom.xml
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* change-scala-versions.sh - add scala 2.12, remove 2.10
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Move Spark version properties to parent pom (now that only one spark version is supported)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DataVec Play fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* datavec play dependency fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Clean up old spark/jackson stuff
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup jackson unused dependencies
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add shaded guava
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Dropping redundant dependency
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Removed scalaxy dependency
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Ensure not possible to import pre-shaded classes, and remove direct guava dependencies in favor of shaded
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ND4J Shaded guava import fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DataVec and DL4J guava shading
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Arbiter, RL4J fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Build fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Fix dependency
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Fix bad merge
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Jackson shading fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Set play secret, datavec-spark-inference-server
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix for datavec-spark-inference-server
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Arbiter fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Arbiter fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small test fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-30 14:35:27 +10:00
raver119
b472d7d8c8
[WIP] few more fixes ( #182 )
...
* one noop test
Signed-off-by: raver119 <raver119@gmail.com>
* skip input validation for no-input ops
Signed-off-by: raver119 <raver119@gmail.com>
* - one more noop empty test
- one more validation before sync
Signed-off-by: raver119 <raver119@gmail.com>
* typo
Signed-off-by: raver119 <raver119@gmail.com>
* one more validation fix
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA empty reductions java side
Signed-off-by: raver119 <raver119@gmail.com>
* one svd test
Signed-off-by: raver119 <raver119@gmail.com>
* Corrected segment_mean helpers and added another test.
* Refactored segment_mean kernels to avoid a race condition.
2019-08-27 21:00:38 +03:00
raver119
a49f7c908b
[WIP] More fixes ( #178 )
...
* skip string arrays for device validation
Signed-off-by: raver119 <raver119@gmail.com>
* histogram_fixed_width now really supports indexing types
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 13:21:01 +03:00
raver119
05d45ec050
IndexReduce along dim CUDA fix
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 11:31:59 +03:00
raver119
df84bc7255
[WIP] More tweaks ( #173 )
...
* CUDA empty reduction
Signed-off-by: raver119 <raver119@gmail.com>
* - listdiff synchronization fix for CUDA
- listdiff test
Signed-off-by: raver119 <raver119@gmail.com>
* - IndexReduce ops now allow INDEXING_TYPES output
- topK op accepts only INDEXING_TYPES as output
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 10:37:10 +03:00
raver119
25e5c23eae
[WIP] Error handling ( #169 )
...
* CUDA reverse rewrite + couple of tests
Signed-off-by: raver119 <raver119@gmail.com>
* don't throw exception on invalid pointer
Signed-off-by: raver119 <raver119@gmail.com>
* data types validation for fastpath exec mode + 2 tests
Signed-off-by: raver119 <raver119@gmail.com>
* data types validation for fastpath exec mode + 2 tests
Signed-off-by: raver119 <raver119@gmail.com>
* ismax allowed dtypes tweak
Signed-off-by: raver119 <raver119@gmail.com>
* lastErrorCode + lastErrorMessage for native exceptions handling
Signed-off-by: raver119 <raver119@gmail.com>
* exportable ErrorReference
Signed-off-by: raver119 <raver119@gmail.com>
* check error codes in java
Signed-off-by: raver119 <raver119@gmail.com>
* - consume lastErrorCode
- fast_in dtype validation fix
Signed-off-by: raver119 <raver119@gmail.com>
* - sg/cb allowed output type change
- minor logging fix for data type validation
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-26 19:57:51 +03:00
raver119
f03b0ee78f
[WIP] more fixes ( #159 )
...
* Added test for MatrixInverse with double input. Fixed matrixDeterminantKernel.
* Fixed kernels to avoid wasteful templating.
* Fixed logDeterminant kernel.
* Refactored type check for lup.
* - decrease blockDim value for zeta op
Signed-off-by: Yurii <yurii@skymind.io>
* Added print for compound matrix with CUDA.
* Refactored upper matrix inversion kernels.
* - provide move constructor and move assignment operator for OpArgsHolder class
Signed-off-by: Yurii <yurii@skymind.io>
* Refactored usage of launch context.
* - add test for mergemax
Signed-off-by: Yurii <yurii@skymind.io>
* get rid of AveragingArrayProxy
Signed-off-by: raver119 <raver119@gmail.com>
* Refactoring of LUP inversion.
* Added prints for inversion.
* - add OpArgsHolder copy constructor and assignment operator
Signed-off-by: Yurii <yurii@skymind.io>
* Added test for lower inversion
* - fix bug in upsampling2d/3d_bp op
Signed-off-by: Yurii <yurii@skymind.io>
* Added expensive printfs to kernel.
* Refactored expensive kernel prints.
* Refactored expensive printfs
* - remove nullify
Signed-off-by: Yurii <yurii@skymind.io>
* Eliminated wasteful prints in tests.
* upsampling2d_bp test
Signed-off-by: raver119 <raver119@gmail.com>
* test updated
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 19:20:50 +03:00
raver119
729dc5e879
[WIP] size etc ( #155 )
...
* one test for size
Signed-off-by: raver119 <raver119@gmail.com>
* - few tests for size op
- size/rank/size_at ops now use p instead of assign
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 12:31:12 +03:00
raver119
dbea687903
better handling of INDArray.close() ( #154 )
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 10:24:56 +03:00
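A minimal Java sketch of the INDArray.close() usage the entry above hardens: releasing an array's off-heap buffers eagerly instead of waiting for the garbage collector. The closeable() guard is an assumption about how to skip views/attached arrays that cannot be closed.

    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    public class CloseSketch {
        public static void main(String[] args) {
            INDArray arr = Nd4j.create(3, 3);   // allocate a small array
            arr.addi(1.0);                      // ... use it ...

            if (arr.closeable()) {              // assumed guard: views/attached arrays are not closeable
                arr.close();                    // release native buffers immediately
            }
        }
    }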
raver119
243bf866c4
[WIP] Few fixes ( #153 )
...
* throw exception if op execution failed
Signed-off-by: raver119 <raver119@gmail.com>
* expected for test
Signed-off-by: raver119 <raver119@gmail.com>
* one more ismax test
Signed-off-by: raver119 <raver119@gmail.com>
* ismax view fix
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 09:00:10 +03:00
raver119
930b49e87f
[WIP] DeviceLocalNDArray updates ( #149 )
...
* ContextBuffers are released upon device change
Signed-off-by: raver119 <raver119@gmail.com>
* DeviceLocalNDArray updates + tests
Signed-off-by: raver119 <raver119@gmail.com>
* special array for delayed mode
Signed-off-by: raver119 <raver119@gmail.com>
* additional detach()
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-22 20:01:29 +03:00
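A minimal, hypothetical Java sketch of the DeviceLocalNDArray usage the entry above updates: one logical array replicated per CUDA device, with get() returning the replica for the calling thread's device. The package path, constructor, and get() call used here are assumptions for illustration.

    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;
    import org.nd4j.linalg.util.DeviceLocalNDArray;   // assumed location of the class

    public class DeviceLocalSketch {
        public static void main(String[] args) {
            INDArray source = Nd4j.create(3, 3).addi(1.0);

            DeviceLocalNDArray deviceLocal = new DeviceLocalNDArray(source);  // per-device replicas

            INDArray local = deviceLocal.get();   // replica for the current device
            System.out.println(local);
        }
    }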
raver119
3f4379927a
scalar constructor fix
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-21 08:50:59 +03:00
raver119
4310e87860
include path fix for java
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-21 07:32:21 +03:00
raver119
269d508ba5
[WIP] cross-device migrations ( #134 )
...
* two more tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA device affinity tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* minor tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* prepareAction/registerAction for CustomOps
Signed-off-by: raver119 <raver119@gmail.com>
* lazily allocate host buffer before relocation
Signed-off-by: raver119 <raver119@gmail.com>
* one special test for migration in cpp
Signed-off-by: raver119 <raver119@gmail.com>
* tests update for msvc
Signed-off-by: raver119 <raver119@gmail.com>
* logging
Signed-off-by: raver119 <raver119@gmail.com>
* stick to old col2im impl
Signed-off-by: raver119 <raver119@gmail.com>
* cudaStreams reorganization
Signed-off-by: raver119 <raver119@gmail.com>
* buffer size fix
Signed-off-by: raver119 <raver119@gmail.com>
* c++ data migration
Signed-off-by: raver119 <raver119@gmail.com>
* fix CropAndResize test
Signed-off-by: raver119 <raver119@gmail.com>
* - minor improvement
Signed-off-by: Yurii <yurii@skymind.io>
2019-08-20 18:52:41 +03:00
raver119
aceb915557
[WIP] tests fixes ( #130 )
...
* no openmp for ClipByGlobalNorm
Signed-off-by: raver119 <raver119@gmail.com>
* one more bfloat16 rng test
Signed-off-by: raver119 <raver119@gmail.com>
* assertion fix
Signed-off-by: raver119 <raver119@gmail.com>
* - legacy IsMax gone
- linear IsMax gets shapeInfo argument
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of legacy IsMax tests
Signed-off-by: raver119 <raver119@gmail.com>
* IsMax is custom op now
Signed-off-by: raver119 <raver119@gmail.com>
* more blocks for ismax
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* - sqrt test
- some legacy code removed from CudaExecutioner
- Transforms.asin tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* - TransformFloat fix
Signed-off-by: raver119 <raver119@gmail.com>
* - ismax fix
- SpaceToBatchND/BatchToSpaceND wrappers
- couple of legacy tests removed
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-19 11:33:15 +03:00
raver119
2f3d7330ce
[WIP] build fix ( #124 )
...
* AffinityManager changes
Signed-off-by: raver119 <raver119@gmail.com>
* build fixes
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-16 08:14:18 +03:00
raver119
53ca9a76e8
[WIP] multi-device support ( #80 )
...
* fix pad javadoc and @see links. (#72 )
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* [WIP] More fixes (#73 )
* special tests for ConstantTadHelper/ConstantShapeHelper
Signed-off-by: raver119 <raver119@gmail.com>
* release methods for data buffers
Signed-off-by: raver119 <raver119@gmail.com>
* delete temporary buffer Java side
Signed-off-by: raver119 <raver119@gmail.com>
* delete temporary buffer Java side
Signed-off-by: raver119 <raver119@gmail.com>
* delete temporary TadPack C++/Java side (#74 )
Signed-off-by: raver119 <raver119@gmail.com>
* Zoo model TF import test updates (#75 )
* argLine fix, update compression_gru comment
* updated comment for xception
* undid but commented argLine change
* updated xlnet comment
* copyright headers
* - new NDArray methods like()/ulike() (#77 ) (see the sketch after this entry)
- fix for depthwise_conv2d_bp + special test
Signed-off-by: raver119 <raver119@gmail.com>
* upsampling2d fix CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* DL4J trace logging (#79 )
* MLN/CG trace logging for debugging
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tiny tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* strided_slice_bp shape fn leak fix
Signed-off-by: raver119 <raver119@gmail.com>
* SameDiff fixes and naming (#78 )
* remove SDVariable inplace methods
* import methods
* npe fix in OpVal
* removed SameDiff inplace ops from tests
* Naming updates, moved to centralized methods in SameDiff, should use op_#:# for everything
* quick fixes
* javadoc
* SDVariable eval with placeholders
* use regex match
* better matching
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* fix javadoc. (#76 )
* fix javadoc.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* replace most @see with @link s.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* 4 additional tests
Signed-off-by: raver119 <raver119@gmail.com>
* launch context reorganization
Signed-off-by: raver119 <raver119@gmail.com>
* LaunchContext reorganization
Signed-off-by: raver119 <raver119@gmail.com>
* per-device LaunchContext
Signed-off-by: raver119 <raver119@gmail.com>
* Various DL4J/ND4J fixes (#81 )
* #7954 Force refresh of UI when switching tabs on overview page
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8017 Concurrent modification exception (synchronize) fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8033 Don't initialize updater in middle of writing memory crash dump
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8208 Fix shape checks for ND4J int[] creator methods
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #6385 #7992 Keras import naming fixes + cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8016 Upsampling3D - add NDHWC format support
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ContextBuffers as separate entity
Signed-off-by: raver119 <raver119@gmail.com>
* Refactor NativeOps.h to export C functions
* Actually export functions from NativeOps.h
* Adapt the Java wrappers in ND4J generated with JavaCPP
* Create C wrappers for some of the C++ classes currently used by ND4J
* ContextBuffers as separate entity
Signed-off-by: raver119 <raver119@gmail.com>
* remove duplicate code in createBufferDetached. (#83 )
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Keras model import - updater lr fix (#84 )
* Keras model import - updater lr fix
Signed-off-by: eraly <susan.eraly@gmail.com>
* Keras model import - updater lr fix, cleanup
Signed-off-by: eraly <susan.eraly@gmail.com>
* ContextBuffers as separate entity
Signed-off-by: raver119 <raver119@gmail.com>
* ContextBuffers as separate entity
Signed-off-by: raver119 <raver119@gmail.com>
* Fix functions of OpaqueVariablesSet
* thread-local buffers/affinity
Signed-off-by: raver119 <raver119@gmail.com>
* thread safety for LaunchContext
Signed-off-by: raver119 <raver119@gmail.com>
* more of thread safety
Signed-off-by: raver119 <raver119@gmail.com>
* one more multi threaded test
Signed-off-by: raver119 <raver119@gmail.com>
* SameDiff Convolution Config validation, better output methods (#82 )
* Conv Config validation & tests
Signed-off-by: Ryan Nett <rnett@skymind.io>
* stackOutputs utility method
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use constructor for validation, support negative kernel sizes (inferred from weights)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* better output methods
Signed-off-by: Ryan Nett <rnett@skymind.io>
* move output to be with fit and evaluate
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* refactor duplicate code from pad methods. (#86 )
* refactor duplicate code from pad methods.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* replace switch with if.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Various ND4J/DL4J fixes and improvements (#87 )
* Reshape and reallocate - small fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Reshape and reallocate - small fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #6488 ElementWiseVertex broadcast support
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Constructors and broadcast supported it Transforms.max/min
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8054 ElementWiseVertex now supports broadcast inputs
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8057 Nd4j.create overload dtype fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7551 ND4J Shape validation fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* [WIP] Numpy boolean import (#91 )
* numpy bool type
Signed-off-by: raver119 <raver119@gmail.com>
* numpy bool java side
Signed-off-by: raver119 <raver119@gmail.com>
* remove create method with unused parameter. (#89 )
* remove create method with unused parameter.
* removed more unused methods.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* removing more unused code.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* last removal of unused code.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove createSparse methods. (#92 )
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Various ND4J/DL4J fixes (#90 )
* Deprecate Old*Op instances
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8063 #8054 Broadcast exceptions + cleanup inplace ops
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove bad test condition
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7993 Fix shape function issue in crop_and_resize op
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J SameDiff lambda layer fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8029 Fix for pnorm backprop math
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8038 Fix Op profiler NaN/Inf triggering + add tests (#93 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* createUninitializedDetached refactoring. (#94 )
* wip
* update interface, add null implementations.
* Breaking one test in a weird way.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* createUninitializedDetached refactored.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* cuda build fix for issues introduced by recent refactoring
Signed-off-by: raver119 <raver119@gmail.com>
* [WIP] More of CUDA (#95 )
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* Implementation of hashcode cuda helper. Working edition.
* Fixed parallel test input arrangements.
* Fixed tests for hashcode op.
* Fixed shape calculation for image:crop_and_resize op and test.
* NativeOps tests. Initial test suite.
* Added tests for indexReduce methods.
* Added test on execBroadcast with NDArray as dimensions.
* Added test on execBroadcastBool with NDArray as dimensions.
* Added tests on execPairwiseTransform and execPairwiseTransformBool.
* Added tests for execReduce with scalar results.
* Added reduce tests for non-empty dims array.
* Added tests for reduce3.
* Added tests for execScalar.
* Added tests for execSummaryStats.
* - provide cpu/cuda code for batch_to_space
- testing it
Signed-off-by: Yurii <yurii@skymind.io>
* - remove old test for batch_to_space (had wrong format and numbers were not checked)
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed compilation errors with test.
* Added test for execTransformFloat.
* Added test for execTransformSame.
* Added test for execTransformBool.
* Added test for execTransformStrict.
* Added tests for execScalar/execScalarBool with TADs.
* Added test for flatten.
* - provide cpu/cuda code for space_to_batch operation
Signed-off-by: Yurii <yurii@skymind.io>
* Added test for concat.
* comment unnecessary stuff in s_t_b
Signed-off-by: Yurii <yurii@skymind.io>
* Added test for specialConcat.
* Added tests for memcpy/set routines.
* Fixed pullRow cuda test.
* Added pullRow test.
* Added average test.
* - correct typo in NDArray::applyPairwiseTransform(nd4j::pairwise::BoolOps op...)
Signed-off-by: Yurii <yurii@skymind.io>
* - debugging and fixing cuda tests in JavaInteropTests file
Signed-off-by: Yurii <yurii@skymind.io>
* - correct some tests
Signed-off-by: Yurii <yurii@skymind.io>
* Added test for shuffle.
* Fixed ops declarations.
* Restored omp and added shuffle test.
* Added convertTypes test.
* Added tests for execRandom. Eliminated usage of RandomBuffer with NativeOps.
* Added sort tests.
* Added tests for execCustomOp.
* - further debuging and fixing tests terminated with crash
Signed-off-by: Yurii <yurii@skymind.io>
* Added tests for calculateOutputShapes.
* Added Benchmarks test.
* Commented benchmark tests.
* change assertion
Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for apply_sgd op. Added cpu helper for that op.
* Implement cuda helper for apply_sgd op. Fixed tests for NativeOps.
* Added test for assign broadcastable.
* Added tests for assign_bp op.
* Added tests for axpy op.
* - assign/execScalar/execTransformAny signature change
- minor test fix
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed axpy op.
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* - fix tests for nativeOps::concat
Signed-off-by: Yurii <yurii@skymind.io>
* sequential transform/scalar
Signed-off-by: raver119 <raver119@gmail.com>
* allow nested parallelism
Signed-off-by: raver119 <raver119@gmail.com>
* assign_bp leak fix
Signed-off-by: raver119 <raver119@gmail.com>
* block setRNG fix
Signed-off-by: raver119 <raver119@gmail.com>
* enable parallelism by default
Signed-off-by: raver119 <raver119@gmail.com>
* enable nested parallelism by default
Signed-off-by: raver119 <raver119@gmail.com>
* Added cuda implementation for row_count helper.
* Added implementation for tsne gains op helper.
* - take into account possible situations when input arrays are empty in reduce_ cuda stuff
Signed-off-by: Yurii <yurii@skymind.io>
* Implemented tsne/edge_forces op cuda-based helper. Parallelized cpu-based helper for edge_forces.
* Added kernel for tsne/symmetrized op helper.
* Implementation of tsne/symmetrized op cuda helper. Working edition.
* Eliminated wasteful printfs.
* Added test for broadcastgradientargs op.
* host-only fallback for empty reduce float
Signed-off-by: raver119 <raver119@gmail.com>
* - some tests fixes
Signed-off-by: Yurii <yurii@skymind.io>
* - correct the rest of reduce_ stuff
Signed-off-by: Yurii <yurii@skymind.io>
* - further correction of reduce_ stuff
Signed-off-by: Yurii <yurii@skymind.io>
* Added test for Cbow op. Also added cuda implementation for cbow helpers.
* - improve code of stack operation for scalar case
Signed-off-by: Yurii <yurii@skymind.io>
* - provide cuda kernel for gatherND operation
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of cbow helpers with cuda kernels.
* minor tests tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* minor tests tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* - further correction of cuda stuff
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of cbow op helper with cuda kernels. Working edition.
* Skip random testing for cudablas case.
* lstmBlockCell context fix
Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for ELU and ELU_BP ops.
* Added tests for eq_scalar, gt_scalar, gte_scalar and lte_scalar ops.
* Added tests for neq_scalar.
* Added test for noop.
* - further work on clipbynorm_bp
Signed-off-by: Yurii <yurii@skymind.io>
* - get rid of concat op call, use instead direct concat helper call
Signed-off-by: Yurii <yurii@skymind.io>
* lstmBlockCell context fix
Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for lrelu and lrelu_bp.
* Added tests for selu and selu_bp.
* Fixed lrelu derivative helpers.
* - some corrections in lstm
Signed-off-by: Yurii <yurii@skymind.io>
* operator * result shape fix
Signed-off-by: raver119 <raver119@gmail.com>
* - correct typo in lstmCell
Signed-off-by: Yurii <yurii@skymind.io>
* few tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA inverse broadcast bool fix
Signed-off-by: raver119 <raver119@gmail.com>
* disable MMAP test for CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* BooleanOp syncToDevice
Signed-off-by: raver119 <raver119@gmail.com>
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* additional data types for im2col/col2im
Signed-off-by: raver119 <raver119@gmail.com>
* Added test for firas_sparse op.
* one more RandomBuffer test excluded
Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for flatten op.
* Added test for Floor op.
* bunch of tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* mmulDot tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* more tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* Implemented floordiv_bp op and tests.
* Fixed scalar case with cuda implementation for bds.
* - work on cuda kernel for clip_by_norm backprop op is completed
Signed-off-by: Yurii <yurii@skymind.io>
* Eliminate cbow crash.
* more tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* more tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* Eliminated abort with batched nlp test.
* more tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed shared flag initializing.
* disabled bunch of cpu workspaces tests
Signed-off-by: raver119 <raver119@gmail.com>
* scalar operators fix: missing registerSpecialUse call
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed logdet for cuda and tests.
* - correct clipBynorm_bp
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed crop_and_resize shape datatype.
* - correct some mmul tests
Signed-off-by: Yurii <yurii@skymind.io>
* build fix
Signed-off-by: raver119 <raver119@gmail.com>
* exclude two methods for JNI
Signed-off-by: raver119 <raver119@gmail.com>
* exclude two methods for JNI
Signed-off-by: raver119 <raver119@gmail.com>
* exclude two methods for JNI (#97 )
Signed-off-by: raver119 <raver119@gmail.com>
* temporary stack fix
Signed-off-by: raver119 <raver119@gmail.com>
* round robin affinity test
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of legacy CudaContext methods
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of legacy ContextPool classes/methods
Signed-off-by: raver119 <raver119@gmail.com>
* one legacy test removed
Signed-off-by: raver119 <raver119@gmail.com>
* few more fields rearranged
Signed-off-by: raver119 <raver119@gmail.com>
* OpaqueLaunchContext
Signed-off-by: raver119 <raver119@gmail.com>
* OpaqueLaunchContext++
Signed-off-by: raver119 <raver119@gmail.com>
* more of OpaqueLaunchContext methods
Signed-off-by: raver119 <raver119@gmail.com>
* LaunchContext -> CudaContext
Signed-off-by: raver119 <raver119@gmail.com>
* AffinityManager changes
Signed-off-by: raver119 <raver119@gmail.com>
* AffinityManager changes
Signed-off-by: raver119 <raver119@gmail.com>
* cusolver handles
Signed-off-by: raver119 <raver119@gmail.com>
* typo
Signed-off-by: raver119 <raver119@gmail.com>
* cusolver method
Signed-off-by: raver119 <raver119@gmail.com>
* cusolver handle propagated
Signed-off-by: raver119 <raver119@gmail.com>
* blas/solver handles
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* legacy concat implementations replaced with new CustomOp
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* concat now uses way more blocks
Signed-off-by: raver119 <raver119@gmail.com>
* print
Signed-off-by: raver119 <raver119@gmail.com>
* no more triple template mmul
Signed-off-by: raver119 <raver119@gmail.com>
* bunch of kernels have dtypes reconsidered
Signed-off-by: raver119 <raver119@gmail.com>
* bunch of kernels have dtypes reconsidered
Signed-off-by: raver119 <raver119@gmail.com>
* bitonic sort reorganized
Signed-off-by: raver119 <raver119@gmail.com>
* bunch of cpu stuff removed from cuda scope
Signed-off-by: raver119 <raver119@gmail.com>
* bunch of cpu stuff removed from cuda scope
Signed-off-by: raver119 <raver119@gmail.com>
* type conversions moved to generic impl
Signed-off-by: raver119 <raver119@gmail.com>
* cpu data types pass
Signed-off-by: raver119 <raver119@gmail.com>
* non_max_suppression
Signed-off-by: raver119 <raver119@gmail.com>
* sortByValue fix
Signed-off-by: raver119 <raver119@gmail.com>
* ignore all mixed datatype tests for mmul
Signed-off-by: raver119 <raver119@gmail.com>
* special handling of OpProfiler exceptions
Signed-off-by: raver119 <raver119@gmail.com>
* - one failing concat test in cpp
- Nd4j.tile now uses op internally
Signed-off-by: raver119 <raver119@gmail.com>
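(For reference, a minimal sketch of the Nd4j.tile API the item above refers to; shapes and values are illustrative only, not taken from this commit.)

    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    public class TileSketch {
        public static void main(String[] args) {
            // 2x2 input
            INDArray x = Nd4j.create(new double[][]{{1, 2}, {3, 4}});
            // repeat twice along rows and three times along columns -> 4x6 result,
            // now routed through the tile op internally per the commit above
            INDArray tiled = Nd4j.tile(x, 2, 3);
            System.out.println(tiled);
        }
    }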
* get back dtype exception for legacy arrays deserialization
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-14 16:52:34 +03:00
raver119
ec847e034b
[WIP] Remote inference ( #96 )
...
* fix pad javadoc and @see links. (#72 )
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* [WIP] More fixes (#73 )
* special tests for ConstantTadHelper/ConstantShapeHelper
Signed-off-by: raver119 <raver119@gmail.com>
* release methods for data buffers
Signed-off-by: raver119 <raver119@gmail.com>
* delete temporary buffer Java side
Signed-off-by: raver119 <raver119@gmail.com>
* delete temporary buffer Java side
Signed-off-by: raver119 <raver119@gmail.com>
* delete temporary TadPack C++/Java side (#74 )
Signed-off-by: raver119 <raver119@gmail.com>
* Zoo model TF import test updates (#75 )
* argLine fix, update compression_gru comment
* updated comment for xception
* reverted argLine change but left it commented
* updated xlnet comment
* copyright headers
* - new NDArray methods like()/ulike() (#77 )
- fix for depthwise_conv2d_bp + special test
Signed-off-by: raver119 <raver119@gmail.com>
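(A tiny illustrative sketch of the like()/ulike() methods mentioned above; the initialization semantics noted in the comments are assumptions, not confirmed by this commit.)

    import org.nd4j.linalg.api.buffer.DataType;
    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    public class LikeUlikeSketch {
        public static void main(String[] args) {
            INDArray x = Nd4j.create(DataType.FLOAT, 2, 3);
            // assumed: like() returns a fresh array with the same shape/dtype/ordering as x
            INDArray y = x.like();
            // assumed: ulike() is the uninitialized variant, handy as a scratch output buffer
            INDArray z = x.ulike();
            System.out.println(y.shapeInfoToString());
            System.out.println(z.shapeInfoToString());
        }
    }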
* upsampling2d fix CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* DL4J trace logging (#79 )
* MLN/CG trace logging for debugging
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tiny tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* strided_slice_bp shape fn leak fix
Signed-off-by: raver119 <raver119@gmail.com>
* SameDiff fixes and naming (#78 )
* remove SDVariable inplace methods
* import methods
* npe fix in OpVal
* removed SameDiff inplace ops from tests
* Naming updates, moved to centralized methods in SameDiff, should use op_#:# for everything
* quick fixes
* javadoc
* SDVariable eval with placeholders
* use regex match
* better matching
* fix javadoc. (#76 )
* fix javadoc.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* replace most @see with @link s.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* 4 additional tests
Signed-off-by: raver119 <raver119@gmail.com>
* Various DL4J/ND4J fixes (#81 )
* #7954 Force refresh of UI when switching tabs on overview page
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8017 Concurrent modification exception (synchronize) fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8033 Don't initialize updater in middle of writing memory crash dump
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8208 Fix shape checks for ND4J int[] creator methods
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #6385 #7992 Keras import naming fixes + cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8016 Upsampling3D - add NDHWC format support
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Refactor NativeOps.h to export C functions
* Actually export functions from NativeOps.h
* Adapt the Java wrappers in ND4J generated with JavaCPP
* Create C wrappers for some of the C++ classes currently used by ND4J
* remove duplicate code in createBufferDetached. (#83 )
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Keras model import - updater lr fix (#84 )
* Keras model import - updater lr fix
Signed-off-by: eraly <susan.eraly@gmail.com>
* Keras model import - updater lr fix, cleanup
Signed-off-by: eraly <susan.eraly@gmail.com>
* Fix functions of OpaqueVariablesSet
* SameDiff Convolution Config validation, better output methods (#82 )
* Conv Config validation & tests
Signed-off-by: Ryan Nett <rnett@skymind.io>
* stackOutputs utility method
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use constructor for validation, support negative kernel sizes (inferred from weights)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* better output methods
Signed-off-by: Ryan Nett <rnett@skymind.io>
* move output to be with fit and evaluate
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* refactor duplicate code from pad methods. (#86 )
* refactor duplicate code from pad methods.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* replace switch with if.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Various ND4J/DL4J fixes and improvements (#87 )
* Reshape and reallocate - small fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Reshape and reallocate - small fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #6488 ElementWiseVertex broadcast support
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Constructors and broadcast supported in Transforms.max/min
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8054 ElementWiseVertex now supports broadcast inputs
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8057 Nd4j.create overload dtype fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7551 ND4J Shape validation fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* [WIP] Numpy boolean import (#91 )
* numpy bool type
Signed-off-by: raver119 <raver119@gmail.com>
* numpy bool java side
Signed-off-by: raver119 <raver119@gmail.com>
* remove create method with unused parameter. (#89 )
* remove create method with unused parameter.
* removed more unused methods.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* removing more unused code.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* last removal of unused code.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove createSparse methods. (#92 )
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Various ND4J/DL4J fixes (#90 )
* Deprecate Old*Op instances
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8063 #8054 Broadcast exceptions + cleanup inplace ops
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove bad test condition
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7993 Fix shape function issue in crop_and_resize op
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J SameDiff lambda layer fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8029 Fix for pnorm backprop math
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8038 Fix Op profiler NaN/Inf triggering + add tests (#93 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* createUninitializedDetached refactoring. (#94 )
* wip
* update interface, add null implementations.
* Breaking one test in a weird way.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* createUninitializedDetached refactored.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* cuda build fix for issues introduced by recent refactoring
Signed-off-by: raver119 <raver119@gmail.com>
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* deps tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* initial prototype
Signed-off-by: raver119 <raver119@gmail.com>
* modules reorganized
Signed-off-by: raver119 <raver119@gmail.com>
* grpc module moved to nd4j-remote as well
Signed-off-by: raver119 <raver119@gmail.com>
* grpc module moved to nd4j-remote as well
Signed-off-by: raver119 <raver119@gmail.com>
* serving prototype
Signed-off-by: raver119 <raver119@gmail.com>
* serving prototype
Signed-off-by: raver119 <raver119@gmail.com>
* serving prototype
Signed-off-by: raver119 <raver119@gmail.com>
* serving prototype
Signed-off-by: raver119 <raver119@gmail.com>
* [WIP] More of CUDA (#95 )
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* Implementation of hashcode cuda helper. Working edition.
* Fixed parallel test input arrangements.
* Fixed tests for hashcode op.
* Fixed shape calculation for image:crop_and_resize op and test.
* NativeOps tests. Initial test suite.
* Added tests for indexReduce methods.
* Added test on execBroadcast with NDArray as dimensions.
* Added test on execBroadcastBool with NDArray as dimensions.
* Added tests on execPairwiseTransform and execPairwiseTransformBool.
* Added tests for execReduce with scalar results.
* Added reduce tests for non-empty dims array.
* Added tests for reduce3.
* Added tests for execScalar.
* Added tests for execSummaryStats.
* - provide cpu/cuda code for batch_to_space
- testing it
Signed-off-by: Yurii <yurii@skymind.io>
* - remove old test for batch_to_space (had wrong format and numbers were not checked)
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed compilation errors with test.
* Added test for execTransformFloat.
* Added test for execTransformSame.
* Added test for execTransformBool.
* Added test for execTransformStrict.
* Added tests for execScalar/execScalarBool with TADs.
* Added test for flatten.
* - provide cpu/cuda code for space_to_batch operation
Signed-off-by: Yurii <yurii@skymind.io>
* Added test for concat.
* comment unnecessary stuff in s_t_b
Signed-off-by: Yurii <yurii@skymind.io>
* Added test for specialConcat.
* Added tests for memcpy/set routines.
* Fixed pullRow cuda test.
* Added pullRow test.
* Added average test.
* - correct typo in NDArray::applyPairwiseTransform(nd4j::pairwise::BoolOps op...)
Signed-off-by: Yurii <yurii@skymind.io>
* - debugging and fixing cuda tests in JavaInteropTests file
Signed-off-by: Yurii <yurii@skymind.io>
* - correct some tests
Signed-off-by: Yurii <yurii@skymind.io>
* Added test for shuffle.
* Fixed ops declarations.
* Restored omp and added shuffle test.
* Added convertTypes test.
* Added tests for execRandom. Eliminated usage of RandomBuffer with NativeOps.
* Added sort tests.
* Added tests for execCustomOp.
* - further debugging and fixing of tests that terminated with a crash
Signed-off-by: Yurii <yurii@skymind.io>
* Added tests for calculateOutputShapes.
* Added Benchmarks test.
* Commented benchmark tests.
* change assertion
Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for apply_sgd op. Added cpu helper for that op.
* Implemented cuda helper for apply_sgd op. Fixed tests for NativeOps.
* Added test for assign broadcastable.
* Added tests for assign_bp op.
* Added tests for axpy op.
* - assign/execScalar/execTransformAny signature change
- minor test fix
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed axpy op.
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* - fix tests for nativeOps::concat
Signed-off-by: Yurii <yurii@skymind.io>
* sequential transform/scalar
Signed-off-by: raver119 <raver119@gmail.com>
* allow nested parallelism
Signed-off-by: raver119 <raver119@gmail.com>
* assign_bp leak fix
Signed-off-by: raver119 <raver119@gmail.com>
* block setRNG fix
Signed-off-by: raver119 <raver119@gmail.com>
* enable parallelism by default
Signed-off-by: raver119 <raver119@gmail.com>
* enable nested parallelism by default
Signed-off-by: raver119 <raver119@gmail.com>
* Added cuda implementation for row_count helper.
* Added implementation for tsne gains op helper.
* - take into account possible situations when input arrays are empty in reduce_ cuda stuff
Signed-off-by: Yurii <yurii@skymind.io>
* Implemented tsne/edge_forces op cuda-based helper. Parallelized cpu-based helper for edge_forces.
* Added kernel for tsne/symmetrized op helper.
* Implementation of tsne/symmetrized op cuda helper. Working edition.
* Eliminated waste printfs.
* Added test for broadcastgradientargs op.
* host-only fallback for empty reduce float
Signed-off-by: raver119 <raver119@gmail.com>
* - some tests fixes
Signed-off-by: Yurii <yurii@skymind.io>
* - correct the rest of reduce_ stuff
Signed-off-by: Yurii <yurii@skymind.io>
* - further correction of reduce_ stuff
Signed-off-by: Yurii <yurii@skymind.io>
* Added test for Cbow op. Also added cuda implementation for cbow helpers.
* - improve code of stack operation for scalar case
Signed-off-by: Yurii <yurii@skymind.io>
* - provide cuda kernel for gatherND operation
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of cbow helpers with cuda kernels.
* minor tests tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* minor tests tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* - further correction of cuda stuff
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of cbow op helper with cuda kernels. Working edition.
* Skip random testing for cudablas case.
* lstmBlockCell context fix
Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for ELU and ELU_BP ops.
* Added tests for eq_scalar, gt_scalar, gte_scalar and lte_scalar ops.
* Added tests for neq_scalar.
* Added test for noop.
* - further work on clipbynorm_bp
Signed-off-by: Yurii <yurii@skymind.io>
* - get rid of concat op call, use direct concat helper call instead
Signed-off-by: Yurii <yurii@skymind.io>
* lstmBlockCell context fix
Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for lrelu and lrelu_bp.
* Added tests for selu and selu_bp.
* Fixed lrelu derivative helpers.
* - some corrections in lstm
Signed-off-by: Yurii <yurii@skymind.io>
* operator * result shape fix
Signed-off-by: raver119 <raver119@gmail.com>
* - correct typo in lstmCell
Signed-off-by: Yurii <yurii@skymind.io>
* few tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA inverse broadcast bool fix
Signed-off-by: raver119 <raver119@gmail.com>
* disable MMAP test for CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* BooleanOp syncToDevice
Signed-off-by: raver119 <raver119@gmail.com>
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* additional data types for im2col/col2im
Signed-off-by: raver119 <raver119@gmail.com>
* Added test for firas_sparse op.
* one more RandomBuffer test excluded
Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for flatten op.
* Added test for Floor op.
* bunch of tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* mmulDot tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* more tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* Implemented floordiv_bp op and tests.
* Fixed scalar case with cuda implementation for bds.
* - work on cuda kernel for clip_by_norm backprop op is completed
Signed-off-by: Yurii <yurii@skymind.io>
* Eliminate cbow crash.
* more tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* more tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* Eliminated abort in batched NLP test.
* more tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed shared flag initialization.
* disabled bunch of cpu workspaces tests
Signed-off-by: raver119 <raver119@gmail.com>
* scalar operators fix: missing registerSpecialUse call
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed logdet for cuda and tests.
* - correct clipBynorm_bp
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed crop_and_resize shape datatype.
* - correct some mmul tests
Signed-off-by: Yurii <yurii@skymind.io>
* build fix
Signed-off-by: raver119 <raver119@gmail.com>
* exclude two methods for JNI
Signed-off-by: raver119 <raver119@gmail.com>
* exclude two methods for JNI
Signed-off-by: raver119 <raver119@gmail.com>
* exclude two methods for JNI (#97 )
Signed-off-by: raver119 <raver119@gmail.com>
* temporary stack fix
Signed-off-by: raver119 <raver119@gmail.com>
* downgrade jetty to latest stable version
Signed-off-by: raver119 <raver119@gmail.com>
* test and profiles
Signed-off-by: raver119 <raver119@gmail.com>
* Servlet skeleton
* one test case
Signed-off-by: raver119 <raver119@gmail.com>
* one test case
Signed-off-by: raver119 <raver119@gmail.com>
* compilation fix
Signed-off-by: raver119 <raver119@gmail.com>
* draft improvements
Signed-off-by: raver119 <raver119@gmail.com>
* draft improvements
Signed-off-by: raver119 <raver119@gmail.com>
* proof of concept works
Signed-off-by: raver119 <raver119@gmail.com>
* proof of concept works
Signed-off-by: raver119 <raver119@gmail.com>
* Servlet
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* logging + simple timing
Signed-off-by: raver119 <raver119@gmail.com>
* Content type fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Profile required
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Servlet tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Post test
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Tests added:
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Minor tweaks
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Constants used
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Check content type
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Some tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Errors checking
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Constraints and tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Minor tweaks
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Dl4j servlet skeleton
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Moving class to dl4j
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Builder extended
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* initial dl4j commit
Signed-off-by: raver119 <raver119@gmail.com>
* unirest version change
Signed-off-by: raver119 <raver119@gmail.com>
* temp fallback
Signed-off-by: raver119 <raver119@gmail.com>
* Reverted unirest version
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Reverted unirest version
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* revert back unirest version change
Signed-off-by: raver119 <raver119@gmail.com>
* revert unirest change
Signed-off-by: raver119 <raver119@gmail.com>
* some additional checks in builder
Signed-off-by: raver119 <raver119@gmail.com>
* few more fields
Signed-off-by: raver119 <raver119@gmail.com>
* Test added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* lombok
Signed-off-by: raver119 <raver119@gmail.com>
* Tests added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* deps
Signed-off-by: raver119 <raver119@gmail.com>
* profiles re-introduced
Signed-off-by: raver119 <raver119@gmail.com>
* Added tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Model servlet
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* builders
Signed-off-by: raver119 <raver119@gmail.com>
* builders
Signed-off-by: raver119 <raver119@gmail.com>
* Servlet skeleton
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Servlet tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* builders
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of old class
Signed-off-by: raver119 <raver119@gmail.com>
* use PI for inference
Signed-off-by: raver119 <raver119@gmail.com>
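(A hedged sketch of what "use PI for inference" typically looks like with ParallelInference; the builder options and the model path below are placeholders written from memory, not code from this PR.)

    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.deeplearning4j.parallelism.ParallelInference;
    import org.deeplearning4j.parallelism.inference.InferenceMode;
    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    import java.io.File;

    public class PiSketch {
        public static void main(String[] args) throws Exception {
            // placeholder model file; any trained MultiLayerNetwork would do
            MultiLayerNetwork net = MultiLayerNetwork.load(new File("model.zip"), false);

            ParallelInference pi = new ParallelInference.Builder(net)
                    .inferenceMode(InferenceMode.BATCHED) // batch concurrent requests together
                    .workers(2)                           // number of model replicas serving requests
                    .build();

            INDArray out = pi.output(Nd4j.create(1, 784)); // input shape is illustrative
            System.out.println(out);
        }
    }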
* superbuilder
Signed-off-by: raver119 <raver119@gmail.com>
* get back builder
Signed-off-by: raver119 <raver119@gmail.com>
* Servlet builder
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* PI setup
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of superbuilder
Signed-off-by: raver119 <raver119@gmail.com>
* SameDiffServlet inheritance constructor
Signed-off-by: raver119 <raver119@gmail.com>
* dl4jservlet attached to samediffservlet
Signed-off-by: raver119 <raver119@gmail.com>
* builder types fix
Signed-off-by: raver119 <raver119@gmail.com>
* dummy model
Signed-off-by: raver119 <raver119@gmail.com>
* single out
Signed-off-by: raver119 <raver119@gmail.com>
* loss
Signed-off-by: raver119 <raver119@gmail.com>
* Tests added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* missed builder type
Signed-off-by: raver119 <raver119@gmail.com>
* working serving example
Signed-off-by: raver119 <raver119@gmail.com>
* sd model fix
Signed-off-by: raver119 <raver119@gmail.com>
* fix unirest version
Signed-off-by: raver119 <raver119@gmail.com>
* More tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tests added:
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Minor tests fixes
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Tests fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Build fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Test added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Tests fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Ser/deser added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* one more unirest fix
Signed-off-by: raver119 <raver119@gmail.com>
* Custom serializers
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Tests disabled
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* revert back unirest version change
Signed-off-by: raver119 <raver119@gmail.com>
* update
Signed-off-by: raver119 <raver119@gmail.com>
* some default fields values
Signed-off-by: raver119 <raver119@gmail.com>
* some comments/javadoc
Signed-off-by: raver119 <raver119@gmail.com>
* - move serde impls to client module
- get rid of INDArray serde for now
Signed-off-by: raver119 <raver119@gmail.com>
* jackson-based serde for float[], double[] and String
Signed-off-by: raver119 <raver119@gmail.com>
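(The jackson-based serde above amounts to plain ObjectMapper round-trips for the supported payload types; a minimal standalone illustration, not the project's actual serializer classes.)

    import com.fasterxml.jackson.databind.ObjectMapper;

    public class JacksonSerdeSketch {
        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();

            float[] in = {1.0f, 2.5f, -3.0f};
            String json = mapper.writeValueAsString(in);          // -> "[1.0,2.5,-3.0]"
            float[] back = mapper.readValue(json, float[].class); // round-trip back to float[]

            System.out.println(json + " -> " + back.length + " floats");
        }
    }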
* more of basic ser/de + tests
Signed-off-by: raver119 <raver119@gmail.com>
* minor api changes
Signed-off-by: raver119 <raver119@gmail.com>
* change imports/signatures
Signed-off-by: raver119 <raver119@gmail.com>
* Optional parallel inference
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Insert pause between tests as workaround for unavailable port issue
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* few unused imports removed
Signed-off-by: raver119 <raver119@gmail.com>
* Models usage
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Models usage
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* - InputAdapter + OutputAdapter = InferenceAdapter
- JsonModelServer now allows separate configuration of InputAdapter and OutputAdapter
Signed-off-by: raver119 <raver119@gmail.com>
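(To illustrate the InputAdapter/OutputAdapter split described above: one hook turns a request payload into INDArrays, the other turns the model's INDArray output back into a response. The interface shapes below are hypothetical and written only to show the idea; the real ND4J interfaces may differ.)

    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    import java.util.Arrays;

    public class AdapterSketch {
        // hypothetical: request payload -> model input
        interface InputAdapter<I> {
            INDArray adapt(I request);
        }

        // hypothetical: model output -> response payload
        interface OutputAdapter<O> {
            O adapt(INDArray output);
        }

        public static void main(String[] args) {
            InputAdapter<float[]> in = req -> Nd4j.createFromArray(req).reshape(1, req.length);
            OutputAdapter<Integer> out = scores -> scores.argMax(1).getInt(0);

            INDArray modelInput = in.adapt(new float[]{0.2f, 0.8f});
            // the real server would run the model here; fake its output instead:
            INDArray fakeScores = Nd4j.createFromArray(0.1f, 0.7f, 0.2f).reshape(1, 3);

            System.out.println("input shape: " + Arrays.toString(modelInput.shape()));
            System.out.println("predicted class: " + out.adapt(fakeScores));
        }
    }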
* unused import
Signed-off-by: raver119 <raver119@gmail.com>
* input adapter..
Signed-off-by: raver119 <raver119@gmail.com>
* minor signature change
Signed-off-by: raver119 <raver119@gmail.com>
* few more signatures updated
Signed-off-by: raver119 <raver119@gmail.com>
* input/output adapter
Signed-off-by: raver119 <raver119@gmail.com>
* Tests added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* javadocs added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Test fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* minor polishing
Signed-off-by: raver119 <raver119@gmail.com>
* more of javadoc
Signed-off-by: raver119 <raver119@gmail.com>
* signature change
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-14 12:11:09 +03:00
raver119
62a025439b
java cuda compilation fix
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-07 21:36:27 +03:00
raver119
aefdbc17d6
exclude two methods for JNI
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-05 11:27:05 +10:00
raver119
e565788329
cuda build fix for issues introduced by recent refactoring
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-05 11:27:05 +10:00
Samuel Audet
8d1fe8b1b3
Fix functions of OpaqueVariablesSet
2019-08-05 11:24:20 +10:00
Samuel Audet
526b782e51
Create C wrappers for some of the C++ classes currently used by ND4J
2019-08-05 11:22:59 +10:00
Samuel Audet
8881bfe7aa
Adapt the Java wrappers in ND4J generated with JavaCPP
2019-08-05 11:22:59 +10:00
raver119
b9708be5db
delete temporary TadPack C++/Java side ( #74 )
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-05 11:11:22 +10:00
raver119
59a006ce29
[WIP] More fixes ( #73 )
...
* special tests for ConstantTadHelper/ConstantShapeHelper
Signed-off-by: raver119 <raver119@gmail.com>
* release methods for data buffers
Signed-off-by: raver119 <raver119@gmail.com>
* delete temporary buffer Java side
Signed-off-by: raver119 <raver119@gmail.com>
* delete temporary buffer Java side
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-05 11:11:22 +10:00
raver119
c499dc962f
- numpy import fix for CUDA ( #64 )
...
- skip tagLocation for empty arrays
Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:16:19 +10:00
raver119
fd6c0df024
[WIP] More CUDA fixes/updates ( #62 )
...
* CUDA reallocation update
Signed-off-by: raver119 <raver119@gmail.com>
* Legacy SoftMax/LogSoftMax/SoftMaxDerivative removed from cpp
Signed-off-by: raver119 <raver119@gmail.com>
* SoftMaxDerivative op removed
Signed-off-by: raver119 <raver119@gmail.com>
* few tests updates
Signed-off-by: raver119 <raver119@gmail.com>
* RNG fixes
Signed-off-by: raver119 <raver119@gmail.com>
* few more tests updates
Signed-off-by: raver119 <raver119@gmail.com>
* legacy Histogram/Pooling2D removed
Signed-off-by: raver119 <raver119@gmail.com>
* legacy Histogram removed
Signed-off-by: raver119 <raver119@gmail.com>
* histogram moved
Signed-off-by: raver119 <raver119@gmail.com>
* histogram moved cuda
Signed-off-by: raver119 <raver119@gmail.com>
* Histogram custom op
Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:07:42 +10:00
raver119
9cf28ea6c9
[WIP] CUDA tweaks ( #60 )
...
* special cpu concat
Signed-off-by: raver119 <raver119@gmail.com>
* special concat fix
Signed-off-by: raver119 <raver119@gmail.com>
* OpProfiler tweak for absent host pointers
Signed-off-by: raver119 <raver119@gmail.com>
* minor test tweak to see orders
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA broadcasting diff orders fix
Signed-off-by: raver119 <raver119@gmail.com>
* faster iterations
Signed-off-by: raver119 <raver119@gmail.com>
* OldSoftMax/OldLogSoftMax gone
Signed-off-by: raver119 <raver119@gmail.com>
* RandomLauncher tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* additional check in random tests
Signed-off-by: raver119 <raver119@gmail.com>
* skip prepare/register action for empty arrays
Signed-off-by: raver119 <raver119@gmail.com>
* npz float16 fix
Signed-off-by: raver119 <raver119@gmail.com>
* empty reduction cuda fixes
Signed-off-by: raver119 <raver119@gmail.com>
* ShapeBufferTests tweaks
Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:06:48 +10:00
raver119
6ce458e949
[WIP] CUDA Java side ( #58 )
...
* one crashing test
Signed-off-by: raver119 <raver119@gmail.com>
* stupid issue fixed
Signed-off-by: raver119 <raver119@gmail.com>
* one fix
Signed-off-by: raver119 <raver119@gmail.com>
* dont ensure location for empty arrays
Signed-off-by: raver119 <raver119@gmail.com>
* few more signatures fixed
Signed-off-by: raver119 <raver119@gmail.com>
* few tweaks for DataBuffer creation from java primitives
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of legacy im2col/col2im intercept
Signed-off-by: raver119 <raver119@gmail.com>
* rsubi scalar array fix
Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:06:25 +10:00
raver119
c969b724bb
[WIP] more CUDA stuff ( #57 )
...
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* Added gradcheck test for dynamic_partition_bp op.
* - implementation of dilation op (cpu and cuda)
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed broadcast_dynamic_shape 1D case and tests.
* Fixed usage of default integer arguments.
* Fixed dynamic_partition_bp op and tests.
* Eliminated test with grad check for dynamic_partition_bp op.
* start working on cuda svd - porting available corresponding api from cuSOLVER library
Signed-off-by: Yurii <yurii@skymind.io>
* provide prelu_bp
Signed-off-by: Yurii <yurii@skymind.io>
* - provide gruCell_bp (old version ??)
Signed-off-by: Yurii <yurii@skymind.io>
* - polishing cumsum_bp and cumprod_bp tests
Signed-off-by: Yurii <yurii@skymind.io>
* provide sparseSoftmaxCrossEntropyWithLogits and sparseSoftmaxCrossEntropyWithLogits_grad
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed atomicMul with float input/output
* implementation of cuda kernel for triu_bp operation
Signed-off-by: Yurii <yurii@skymind.io>
* Refactored lup helper to add parallel computing.
* cusolver libraries
Signed-off-by: raver119 <raver119@gmail.com>
* uncomment cuSolver APIs in svd.cu
Signed-off-by: Yurii <yurii@skymind.io>
* cusolver var
Signed-off-by: raver119 <raver119@gmail.com>
* - further work on cuSolver svd
Signed-off-by: Yurii <yurii@skymind.io>
* Implement usage of cuda solver for LUP decomposition.
* - correct names in lup functions
Signed-off-by: Yurii <yurii@skymind.io>
* correct svdQR cuda
Signed-off-by: Yurii <yurii@skymind.io>
* - provide transpositions of input matrices in case of c order in svdCudaQR
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed implementation issues with LUP using cuda solver.
* Implementation of matrix_determinant helper with cuda kernels. Working revision.
* Implemented log_matrix_determinant helper with cuda kernels.
* - implementation of batched cuda svd
Signed-off-by: Yurii <yurii@skymind.io>
* Refactored cholesky helper and implementation of cuda solver cholesky batch.
* - implementation of cuda kernel for tile bp
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of cholesky and logdet with cuda kernels.
* - implementation of cuda kernel for sru_bidirectional
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed cholesky helper.
* Cholesky op helper implementation. Working double-based cublas implementation.
* bad import excluded
Signed-off-by: raver119 <raver119@gmail.com>
* Finished with cuda implementation of cholesky helper and tests.
* - implementation of cuda kernel for sru_bidirectional_backprop operation
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of matrix_inverse op helper with cuda kernels. The first revision.
* - start working on gruCell_bp
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of matrix_inverse helper.
* - further work on new gruCell_bp
Signed-off-by: Yurii <yurii@skymind.io>
* cuBLAS related fixes
Signed-off-by: raver119 <raver119@gmail.com>
* calculateOutputShapes() now passes device buffers as well
Signed-off-by: raver119 <raver119@gmail.com>
* special concat/average/accumulate init host pointers now
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* additional CudaDataBufferFactory signatures for certain data types
Signed-off-by: raver119 <raver119@gmail.com>
* cuSolver host buffer
Signed-off-by: raver119 <raver119@gmail.com>
* buffer to buffer memcpy host ptr allocation
Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:05:21 +10:00
Alex Black
cb6654bebb
Add libnd4j benchmarks ( #3 )
...
This PR adds 2 libnd4j benchmarking suits
2019-07-20 22:54:44 +10:00
raver119
15e7984392
Small fixes ( #46 )
...
* - fix for eclipse/#7959
* createFromNpy scalar fix
Signed-off-by: raver119 <raver119@gmail.com>
* remove spam
Signed-off-by: raver119 <raver119@gmail.com>
* rng custom ops tests
Signed-off-by: raver119 <raver119@gmail.com>
* Numpy headers validation + tests
Signed-off-by: raver119 <raver119@gmail.com>
* fix for scalar string flat serde
Signed-off-by: raver119 <raver119@gmail.com>
* Where empty shape test
Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 22:20:14 +10:00
Samuel Audet
deba1e0df9
Update dependencies to just released JavaCPP and JavaCV 1.5.1 ( #8004 )
...
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
2019-07-14 21:07:33 +03:00
Alex Black
1170827c18
Merge master to upstream ( #7945 )
...
* Shugeo strided slice zeros (#14 )
* Modified strided_slice op to properly work with empty-like shapes.
* Fixed test for reduce_mean with empty-like input.
* [WIP] Last merge (#15 )
* correct logsoftmax looss (#2 )
* Small SameDiff listener fix (#4 )
* Various fixes (#6 )
* #7839 Fix for asXMatrix and tests
* #7866 EmbeddingSequenceLayer dtype fix + test
* #7856 SameDiff save/load stream methods
* #7859 RegressionEvaluation rank 4 fix + tests + axis configuration
* EvaluationBinary 3d/4d
* More evaluation 3d/4d tests
* #7847 Evaluation empty checks
* Small test fix
* #7848 Fix median edge case
* Improve DL4J samediff layer tests
* [WIP] FastText wrapper implemented (#8 )
* FastText implemented
* Some fixes
* Fix shapes for wordsNearest
* Validation of input vectors
* Fixes
* Fixed test
* Thread tagged
* Some tweaks
* setContextClassLoader for DeallocatorServiceThread
* Numpy format tests (#1 )
* Various fixes (#11 )
* #7852 SameDiff gather fix
* #7892 SameDiff placeholder to constant conversion
* #7890 validate input rank for MLN/CG init methods
* Fix broken permute shape calculation
* Permute and gather fixes
* Tests
* #7850 LogSumExp fix + test
* Handful of test fixes
* Empty arrays with non-scalar shapes (#10 )
* minor rearrangements for lambdas
* empty tensors with non-scalar shapes
* numpy empty tensors with non-scalar shapes
* few more empty tweaks
* Small fixes
* conv3d signature update
* micro fix in batchnorm mkldnn
* Import fixes
* Fix
* MKL-DNN update
* Small fill fix
* fill with empty input + test
* Fixes
* Small error improvement
* Fix
* one special test
* couple of fixes for lstm
* Rewrite TFGraphMapper.getNDArrayFromTensor to be maintainable and less error prone
* Fixes
* FP16
* Unsigned
* BFloat16
* Fill op - empty tweaks
* - couple of fixes for empty arrays construction
- stack updated
* strided slice fix
* one transform test
* provide method for reducing shapeInfo in case the input array is empty
* Fixed reduceAlongDimensions to use empty input properly.
* couple of broadcast tests
* couple of tests broadcast tests + tweak to make them pass
* add check of non-empty to methods producing sub-arrays
* Fixed reshapeC with zeros in shape.
* complete empty check in reduce_... legacy ops
* Concat and cumsum/prod
* Tweak to empty shape inference on import
* add empty check to the rest of reduce legacy ops
* one more test
* correct typo in evalReduceShapeInfoEmpty
* Added tests for reduce_* ops to tests with zero shapes.
* few more tests for empty reductions
* Fixed strided_slice op with empty case and tests.
* one more empty reduction test
* Fixed strided_slice test.
* add empty check to NDArray::reshapei
* infOrMax
* empty min/max with infinity tests
* made unstack work correctly with empty arrays
* few IndexReduce tests + tweaks for empty shapes
* add test for empty concat
* few tests fixed
* Validation fix for reductions on empty shapes
* Reverse fix
* Reduction shape calc fixes
* SameDiff.generateOutputVariable: don't use shape function to determine number of outputs
* Range fix
* - NDArray constructor updated for scalars/empty arrays
- few tests fixed
* More fixes
* Empty creator fixes
* concat fix
* concat fix
* TF import tests: allow 'both all NaN' and 'both all inf' to pass
* Slice, zero fraction, and reshape fixes
* transpose, gather
* Zero fraction
* scalar cast fix
* Empty reduction axis support
* few more tests fixed
* Fixed input checks conforming with TF for concat op and tests.
* few tests fixed
* matmul scalar shape fix
* Fixed check for data type and scalarity in concat to allow non-empty scalars with vector concats.
* broadcast bool fix
* few more tests
* few more tests
* correct evalReduceShapeInfoEmpty
* argmax/argmin + tests
* one more empty edge case + one more test
* argmax/argmin/realdiv_bp tweaks
* empty reshape test + fix
* Helper fixes
* Small fixes
* Gather test fix
* Gather test fix
* Small fixes
* reduce scalar zero values
* scalar mean workaround
* Remove debug code
* along dim mean workaround
* one more test
* - equalsTo() tweak for empty arrays
- one more test
* broadcast tweaks
* [WIP] Fixing outstanding issues for NLP (#9 )
* Avoid using uninitialized objects
* Test fixed.
* Redundant method avoided for models like FastText
* KMeans++ implementation
* KMeans++ implementation
* Disable parallel execution
* KMeans++
* Tests
* Dev branch merge (#16 )
* SameDiff: convertDataType and gradient check util improvements (#12 )
* GradCheck util improvements
* StopGradient constructor + test
* SameDiff: Add datatype conversion
* Javadoc and add DataType.isNumerical()
* Small fix
* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)
* TFGraphTestAllHelper: check intermediates in execution order
* Add missing debug listener
* [WIP] lstmBlock fix + other changes (#13 )
- fixes lstmBlock issue
- changes NDArray method reshape(), permute(), transpose() by making them return instance instead of pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite
* Small test fix
* CheckNumerics op wrapper
* Fix some issues on master (#17 )
* Fix DataVec test issue
* Fix issue with dl4j SameDiff output layer
* Dtype fix for lambda layers
* #7912 BertIterator dtype fix (use float32 not global default)
* [WIP] Next set of CUDA stuff (#7 )
New CUDA implementations and improvements
* bad file
* Dev branch master merge (#23 )
* SameDiff: convertDataType and gradient check util improvements (#12 )
* GradCheck util improvements
* StopGradient constructor + test
* SameDiff: Add datatype conversion
* Javadoc and add DataType.isNumerical()
* Small fix
* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)
* TFGraphTestAllHelper: check intermediates in execution order
* Add missing debug listener
* [WIP] lstmBlock fix + other changes (#13 )
- fixes lstmBlock issue
- changes NDArray method reshape(), permute(), transpose() by making them return instance instead of pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite
* Small test fix
* CheckNumerics op wrapper
* Compatibility of deserialization (#18 )
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* SameDiff: add activation gradient checking support for debugging (#19 )
* SameDiff gradient checker: first pass on activation gradient checks
* Fixes + tests for activation gradient checking
* Javadoc
* [WIP] Some nd4j data type corrections (#20 )
* Adjust data type
* Set correct Data type.
* Size of proper data type.
* fix averaged cpu load (#22 )
* SameDiff ops, TF import and fixes (#24 )
* CheckNumerics tests + fixes + misc fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fake quant
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* FakeQuantWithMinMaxArgs
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* CheckNumerics fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix libnd4j ALL_INTS and ALL_FLOATS declaration (uint and bfloat types)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Javadoc
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Exception tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix for out of scope stack allocated var use
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Ignores
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Ignore for known failing test (already logged issue)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Merge upstream to fork (#25 )
* Add thousand-separator commas to TotalParams (#7915 )
* Add thousand-separator commas to TotalParams
The number of parameters can be quite large; adding thousand-separator commas to the TotalParams column and the totals at the bottom makes the summary printout easier to read.
* Add thousand-separator commas to MultiLayerNetwork
Corresponding change to MultiLayerNetwork
Signed-off-by: Jxtps Jxtps <jxtps435@gmail.com>
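(The change above boils down to a locale-aware grouping format for the parameter counts; a one-line illustration with a placeholder value, not DL4J's actual summary code.)

    public class ParamCountFormat {
        public static void main(String[] args) {
            long totalParams = 138_357_544L; // illustrative value
            // "%,d" inserts grouping separators (commas in English locales)
            System.out.println(String.format("%,d", totalParams)); // prints 138,357,544
        }
    }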
* Update contributing and issue/PR templates (#7934 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix link to AdaDelta paper (#7942 )
Fix link to AdaDelta paper hosted on matthewzeiler.com
Signed-off-by: Jxtps
* Fixes, and ignores for known/logged failing issues (#7943 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* SameDiff + DL4J/SameDiff: Multiple fixes (#28 )
* #7919 HDF5 attribute buffer length fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7909 Arbiter constructor exception ux improvements
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7925 RNN output layer length checks
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7939 Add listener for validating inputs are not incorrectly modified
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7939 Integrate NonInplaceValidationListener into tests
* #7844 DL4J SameDiff fixes for variable minibatch size
* DL4J SameDiff fixes - ensure gradient for input placeholder is available
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tweaks to ExternalErrorsFunction - use placeholders, make more robust
* Another fix
* More fixes
* More SameDiff/DL4J fixes
* Scope out scalar array creation in BaseScalarOp
* Remove debug code
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* [WIP] Final dev branch merge (#29 )
* SameDiff: convertDataType and gradient check util improvements (#12 )
* GradCheck util improvements
* StopGradient constructor + test
* SameDiff: Add datatype conversion
* Javadoc and add DataType.isNumerical()
* Small fix
* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)
* TFGraphTestAllHelper: check intermediates in execution order
* Add missing debug listener
* [WIP] lstmBlock fix + other changes (#13 )
- fixes lstmBlock issue
- changes NDArray method reshape(), permute(), transpose() by making them return instance instead of pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite
* Small test fix
* CheckNumerics op wrapper
* Compatibility of deserialization (#18 )
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* SameDiff: add activation gradient checking support for debugging (#19 )
* SameDiff gradient checker: first pass on activation gradient checks
* Fixes + tests for activation gradient checking
* Javadoc
* [WIP] Some nd4j data type corrections (#20 )
* Adjust data type
* Set correct Data type.
* Size of proper data type.
* fix averaged cpu load (#22 )
* [WIP] Multiple dataset iterators (#27 )
* Splitting dataset into arbitrary number
* Fixes
* Multiple split of iterator
* Test
* Test
* Some fixes
* signature change
* one more tweak
Signed-off-by: raver119 <raver119@gmail.com>
* one more test for sequential use of DataSetIteratorSplitter
Signed-off-by: raver119 <raver119@gmail.com>
* Fixes
* Fixes
* one more test for Alexander
Signed-off-by: raver119 <raver119@gmail.com>
* Some fixes
* Some fixes
* one more test for Alexander
Signed-off-by: raver119 <raver119@gmail.com>
* minor test fix
Signed-off-by: raver119 <raver119@gmail.com>
* Some fixes
* Some fixes
* couple of assertions tweaked
Signed-off-by: raver119 <raver119@gmail.com>
* MDS splitter test :/
Signed-off-by: raver119 <raver119@gmail.com>
* Minor refactoring
* Multi dataset
* Some fixes
* More tests
* Small number of test fixes/improvements (failures on CI) (#31 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* [WIP] More CUDA stuff (#26 )
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* LRN BP CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* less memory
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed bug with crop_and_resize op helper.
* get rid of unnecessary index-calculation function
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed sort with nth_element cuda-based helper.
* Refactored nth_element.
* Refactored nth_element op and tests.
* Modified usage of dim array with sortTad routine.
* Refactored main routine of helper for non_max_image_suppression op.
* non_max_image_suppression op helper with cuda kernel implementation. Initial revision.
* fix vol2col cuda kernel
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* topK concept
Signed-off-by: raver119 <raver119@gmail.com>
* unsorted topK with scanWidth of 1
Signed-off-by: raver119 <raver119@gmail.com>
* correct vol2col tests
* sorted/unsorted topK
Signed-off-by: raver119 <raver119@gmail.com>
* implementation and fixing col2im/col2vol
* Corrected usage flags for input/output with reverse op.
* dup is const now
Signed-off-by: raver119 <raver119@gmail.com>
* percentile op
Signed-off-by: raver119 <raver119@gmail.com>
* group tests for maxpool2d
Signed-off-by: Yurii <yurii@skymind.io>
* special test for george
Signed-off-by: raver119 <raver119@gmail.com>
* less threads for sortTad
Signed-off-by: raver119 <raver119@gmail.com>
* provide conv2d for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* remove author comment in sort tad kernel code
Signed-off-by: Yurii <yurii@skymind.io>
* provide depthwise_conv2d for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* - max_pooling_with_argmax
- null check for special use
Signed-off-by: raver119 <raver119@gmail.com>
* dts cuda
Signed-off-by: raver119 <raver119@gmail.com>
* provide sconv2d for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* std cuda
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored non_max_suppression op to conform to the TF implementation.
* Improved suppression helper.
* provide pooling3d for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* minor lstm rearrangements
Signed-off-by: raver119 <raver119@gmail.com>
* more of minor lstm rearrangements
Signed-off-by: raver119 <raver119@gmail.com>
* (bi)dynamic_rnn
Signed-off-by: raver119 <raver119@gmail.com>
* templates init order
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored non_max_suppression op.
* Added cuda kernel for non_max_suppression.
* CPU sort by key/value
Signed-off-by: raver119 <raver119@gmail.com>
* CPU sort TAD by key/value
Signed-off-by: raver119 <raver119@gmail.com>
* CPU sort TAD by key/value tests
Signed-off-by: raver119 <raver119@gmail.com>
* Eliminate compiler error with cuda implementation.
* - repaired gradCheck in cuda
- provide conv2d_bp for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* missed signature
Signed-off-by: raver119 <raver119@gmail.com>
* provide depthwise_conv2d_bp for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of lup helper with cuda kernel. Initial commit.
* further work on backprops for convolutions
Signed-off-by: Yurii <yurii@skymind.io>
* CUDA linear sort by key/val
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA tad sort by key/val
Signed-off-by: raver119 <raver119@gmail.com>
* start providing of backprop for pooling2d/3d
Signed-off-by: Yurii <yurii@skymind.io>
* Added atomicAdd for bool datatype.
* dynamic partition concept
Signed-off-by: raver119 <raver119@gmail.com>
* dynamic partition concept
Signed-off-by: raver119 <raver119@gmail.com>
* dynamic partition scalar CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* important comment
Signed-off-by: raver119 <raver119@gmail.com>
* fix pooling2d/3d backprop helpers
Signed-off-by: Yurii <yurii@skymind.io>
* Added non-linear test with dynamic_partition.
* Improved test for dynamic_partition.
* dynamic_partition TAD concept
Signed-off-by: raver119 <raver119@gmail.com>
* - dynamic_partition TAD CUDA impl
- dynamic_partition TAD CPU fix
Signed-off-by: raver119 <raver119@gmail.com>
* - rewrite cpu code for upsampling2d/3d
- write cuda code for upsampling2d/3d
Signed-off-by: Yurii <yurii@skymind.io>
* dynamic_stitch CUDA vector case
Signed-off-by: raver119 <raver119@gmail.com>
* dynamic_stitch CUDA TAD case concept
Signed-off-by: raver119 <raver119@gmail.com>
* dynamic_stitch CUDA TAD case impl
Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for dynamic_stitch 3D-4D cases.
* minor tests tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed type check for dynamic stitch.
* min/max bp
Signed-off-by: raver119 <raver119@gmail.com>
* rewrite code for upsampling2d/3d cpu
Signed-off-by: Yurii <yurii@skymind.io>
* reduce min/max/norm_max bp
Signed-off-by: raver119 <raver119@gmail.com>
* lup implementation. Additional enhancements.
* provide code for upsampling2d/3d backprop
Signed-off-by: Yurii <yurii@skymind.io>
* weightedCrossEntropyWithLogits
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed template math atomicMul for 64bit ints.
* Refactored dynamic_partition_bp op.
* inverseBroadcast fix
Signed-off-by: raver119 <raver119@gmail.com>
* DynamicPartitionBP test datatype fixed.
* - nd4j_atomicMul Windows fix
- cpu/NDArrayLambda.hpp excluded from CUDA
Signed-off-by: raver119 <raver119@gmail.com>
2019-06-27 18:37:04 +03:00
skymindops
b5f0ec072f
Eclipse Migration Initial Commit
2019-06-06 15:21:15 +03:00