shugeo
02d8616692
Implementation of cuda kernel for fake_quant_with_min_max_vars_per_channel op.
2019-10-10 16:40:56 +03:00
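For context on the fake_quant_with_min_max_vars_per_channel work above: the op snaps values onto a uniform quantization grid defined by a per-channel [min, max] range. A minimal C++ sketch of that idea follows; it is not the libnd4j helper itself, the function names are hypothetical, and the real op additionally nudges the range so that zero is exactly representable.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical sketch of per-channel fake quantization: each value is clamped
// into its channel's [min, max] range and rounded to the nearest of 2^numBits
// uniformly spaced levels (assumes max > min).
static float fakeQuantValue(float x, float min, float max, int numBits = 8) {
    const float quantMax = static_cast<float>((1 << numBits) - 1); // e.g. 255 levels above zero
    const float scale = (max - min) / quantMax;                    // step between quantized levels
    const float clamped = std::min(std::max(x, min), max);         // keep the value inside the range
    const float level = std::round((clamped - min) / scale);       // nearest level index
    return level * scale + min;                                    // back to the float domain
}

// Per-channel variant: input is [rows, channels] with one (min, max) pair per
// channel along the last dimension, which is what the op expects.
void fakeQuantPerChannel(const std::vector<float>& in, int rows, int channels,
                         const std::vector<float>& mins, const std::vector<float>& maxs,
                         std::vector<float>& out) {
    out.resize(in.size());
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < channels; ++c)
            out[r * channels + c] = fakeQuantValue(in[r * channels + c], mins[c], maxs[c]);
}
```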
shugeo
3504b0cda9
Implemented fake_quant_with_min_max_vars_per_channel op cuda helper. The first working revision.
2019-10-10 15:44:50 +03:00
shugeo
753565145c
Refactored fake_quant_with_min_max_vars op cuda implementation.
2019-10-10 14:00:49 +03:00
shugeo
c13e945a96
Fixed fake_quant_with_min_max_vars op and tests.
2019-10-10 13:23:11 +03:00
shugeo
3c0c59ab88
Refactored fake_quant_with_min_max_vars op.
2019-10-09 22:09:33 +03:00
shugeo
352f1eee80
Implemented fake_quant_with_min_max_per_channel helper for cpu platform. The first approach.
2019-10-09 21:39:59 +03:00
shugeo
d0cbd33b0e
Added input checks for op.
2019-10-09 15:52:13 +03:00
shugeo
cb56b0b06a
The first approach for fake_quant_with_min_max_vars_per_channel op implementation.
2019-10-08 19:00:41 +03:00
shugeo
8fe5a1fa96
The working implementation of draw_bounding_boxes op.
2019-10-08 15:42:27 +03:00
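The draw_bounding_boxes commits above implement drawing of box outlines onto image tensors. A rough C++ sketch of the core idea, assuming HWC layout and normalized [yMin, xMin, yMax, xMax] coordinates; the names are illustrative, not the actual helper API.

```cpp
#include <vector>

// Normalized box corners, as draw_bounding_boxes-style ops usually take them.
struct Box { float yMin, xMin, yMax, xMax; };

// Paints only the four edges of one box onto an HWC image buffer, leaving the
// interior untouched; coordinates are assumed to be in [0, 1].
void drawBoxEdges(std::vector<float>& image, int height, int width, int channels,
                  const Box& box, const std::vector<float>& color) {
    const int y0 = static_cast<int>(box.yMin * (height - 1));
    const int y1 = static_cast<int>(box.yMax * (height - 1));
    const int x0 = static_cast<int>(box.xMin * (width - 1));
    const int x1 = static_cast<int>(box.xMax * (width - 1));

    auto paint = [&](int y, int x) {
        for (int c = 0; c < channels; ++c)
            image[(y * width + x) * channels + c] = color[c];
    };

    for (int x = x0; x <= x1; ++x) { paint(y0, x); paint(y1, x); } // top and bottom edges
    for (int y = y0; y <= y1; ++y) { paint(y, x0); paint(y, x1); } // left and right edges
}
```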
shugeo
30a8af566c
The first working implementation of cuda kernel for draw_bounding_boxes op helper.
2019-10-08 13:45:18 +03:00
shugeo
ae09cfee32
Next approach to cuda implementation for draw_bounding_boxes op helper.
2019-10-08 00:09:46 +03:00
shugeo
6cf3a8fa9c
Refactored cpu implementation and added cuda approach.
2019-10-07 17:51:07 +03:00
shugeo
78443ffebf
Working implementation of draw_bounding_boxes op for cpu.
2019-10-07 15:04:44 +03:00
shugeo
16a66a65e3
Added helper declaration for draw_bounding_boxes op.
2019-10-04 21:16:34 +03:00
shugeo
53a2ebddbe
Added tests and helpers for draw_bounding_boxes op, both cpu and cuda.
2019-10-04 20:46:26 +03:00
shugeo
8f70b4441f
draw_bounding_boxes op implementation. Initial revision.
2019-10-04 18:32:21 +03:00
shugeo
908e4c4912
Added implementation for divide_no_nan op and tests.
2019-10-04 10:29:15 +03:00
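divide_no_nan performs elementwise division but returns 0 wherever the divisor is 0, so no NaN/Inf reaches the output. A minimal sketch of those semantics (illustrative function name, plain vectors instead of NDArray):

```cpp
#include <cstddef>
#include <vector>

// Elementwise x / y, except the result is forced to 0 wherever y is 0.
std::vector<double> divideNoNan(const std::vector<double>& x, const std::vector<double>& y) {
    std::vector<double> out(x.size());
    for (size_t i = 0; i < x.size(); ++i)
        out[i] = (y[i] == 0.0) ? 0.0 : x[i] / y[i];
    return out;
}
// e.g. divideNoNan({4.0, 1.0}, {2.0, 0.0}) yields {2.0, 0.0} rather than {2.0, inf}.
```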
raver119
cff26f13c5
Revert "Implement divide_no_nan op."
2019-10-03 20:25:52 +03:00
shugeo
6eaca179d6
Implement divide_no_nan op.
2019-10-03 18:22:17 +03:00
shugeo
130ee25682
Implemented compare_and_bitpack op.
2019-10-03 10:57:48 +03:00
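compare_and_bitpack compares each input element against a threshold and packs every group of 8 boolean results into one byte. A hedged sketch of that behavior, assuming the input length is a multiple of 8 and MSB-first packing:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Compares each element to the threshold and packs 8 results per output byte,
// with the first element of each group landing in the most significant bit.
std::vector<uint8_t> compareAndBitpack(const std::vector<float>& in, float threshold) {
    std::vector<uint8_t> out(in.size() / 8, 0);
    for (size_t i = 0; i < out.size(); ++i) {
        uint8_t packed = 0;
        for (int bit = 0; bit < 8; ++bit)
            packed |= static_cast<uint8_t>(in[i * 8 + bit] > threshold) << (7 - bit);
        out[i] = packed;
    }
    return out;
}
```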
shugeo
f3e42173ef
Refactored buffer copying to avoid wrong usage of buffers.
2019-10-02 16:51:09 +03:00
shugeo
1c6173d218
Added implementation of bitcast op.
2019-10-02 15:04:59 +03:00
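bitcast reinterprets a buffer as another data type without numeric conversion. An illustrative sketch for the float-to-int32 case (same element width, so the shape is unchanged); the function name is hypothetical:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Reinterprets the raw bits of each float as an int32 of the same width;
// no numeric conversion happens, which is the essence of a bitcast.
std::vector<int32_t> bitcastFloatToInt32(const std::vector<float>& in) {
    static_assert(sizeof(float) == sizeof(int32_t), "element widths must match for a 1:1 bitcast");
    std::vector<int32_t> out(in.size());
    std::memcpy(out.data(), in.data(), in.size() * sizeof(float)); // copy bits verbatim
    return out;
}
// e.g. bitcastFloatToInt32({1.0f}) yields {0x3F800000}.
```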
shugeo
a27e61553a
Added tests and fixed op name.
2019-10-02 15:04:28 +03:00
shugeo
863ff76878
Added declaration for bitcast op.
2019-10-02 12:17:00 +03:00
shugeo
afeb524238
Refactored implementation for adjust_contrast ops.
2019-10-01 14:13:09 +03:00
shugeo
1575c704ae
Added implementation for adjust_contrast_v2 op and tests.
2019-10-01 11:44:27 +03:00
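adjust_contrast_v2 scales each pixel's distance from the per-channel mean by a contrast factor. A small C++ sketch of that formula, assuming HWC layout; not the actual helper implementation:

```cpp
#include <vector>

// Computes the mean of each channel over the whole image, then scales every
// pixel's distance from that mean by the contrast factor (HWC layout assumed).
void adjustContrastV2(std::vector<float>& image, int height, int width, int channels, float factor) {
    std::vector<double> mean(channels, 0.0);
    const int pixels = height * width;
    for (int p = 0; p < pixels; ++p)
        for (int c = 0; c < channels; ++c)
            mean[c] += image[p * channels + c];
    for (int c = 0; c < channels; ++c)
        mean[c] /= pixels;
    for (int p = 0; p < pixels; ++p)
        for (int c = 0; c < channels; ++c) {
            float& v = image[p * channels + c];
            v = static_cast<float>((v - mean[c]) * factor + mean[c]);
        }
}
```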
raver119
44a8d19ac6
[WIP] Broadcast changes ( #8257 )
...
* - provide correct call NDArray::applyBroadcast inside of NDArray::applyTrueBroadcast
Signed-off-by: Yurii <yurii@skymind.io>
* - provide new trueBroadcast helper
Signed-off-by: Yurii <yurii@skymind.io>
* example for yurii
Signed-off-by: raver119 <raver119@gmail.com>
* - provide new trueBroadcast helper for cpu
Signed-off-by: Yurii <yurii@skymind.io>
* - start working on new trueBroadcast helper for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* - further work on trueBroadcast for cuda
Signed-off-by: Yurii <yurii@skymind.io>
* - fix bugs in cuda helper trueBroadcast
Signed-off-by: Yurii <yurii@skymind.io>
2019-10-01 09:10:19 +03:00
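The trueBroadcast work in the PR above revolves around the standard broadcasting rule: shapes are aligned from the trailing dimension, and two dimensions are compatible when they are equal or one of them is 1. A background sketch of that shape resolution (not the libnd4j helper itself):

```cpp
#include <algorithm>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Resolves the broadcast shape of two input shapes by walking them from the
// trailing dimension; missing leading dimensions are treated as 1.
std::vector<long long> broadcastShape(const std::vector<long long>& a, const std::vector<long long>& b) {
    const size_t rank = std::max(a.size(), b.size());
    std::vector<long long> out(rank);
    for (size_t i = 0; i < rank; ++i) {
        const long long da = i < a.size() ? a[a.size() - 1 - i] : 1;
        const long long db = i < b.size() ? b[b.size() - 1 - i] : 1;
        if (da != db && da != 1 && db != 1)
            throw std::invalid_argument("shapes are not broadcastable");
        out[rank - 1 - i] = std::max(da, db);
    }
    return out;
}
// e.g. broadcastShape({3, 1, 5}, {4, 5}) yields {3, 4, 5}.
```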
shugeo
e06dfb5dcc
Implementation of adjust_contrast op.
2019-09-30 18:24:12 +03:00
AlexDBlack
a66e03355e
Merge remote-tracking branch 'fork/master'
2019-09-12 12:20:57 +10:00
raver119
98e2814879
Platform helpers ( #8216 )
...
* platform helpers draft
Signed-off-by: raver119 <raver119@gmail.com>
* typo
Signed-off-by: raver119 <raver119@gmail.com>
* disable platform cmake
Signed-off-by: raver119 <raver119@gmail.com>
* another draft
Signed-off-by: raver119 <raver119@gmail.com>
* mkldnn convolution refactored
Signed-off-by: raver119 <raver119@gmail.com>
* minor tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* one more safety check
Signed-off-by: raver119 <raver119@gmail.com>
* prototype works
Signed-off-by: raver119 <raver119@gmail.com>
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* force static library mode for mkldnn
Signed-off-by: raver119 <raver119@gmail.com>
* - ismax fix
- experimental arg fix
- don't enforce openblas on Apple hardware
Signed-off-by: raver119 <raver119@gmail.com>
* bunch of small fixes
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* declare concurrent
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* - MKLDNN version upgrade to 1.0.2
- avgpool2d/maxpool2d APIs update
Signed-off-by: raver119 <raver119@gmail.com>
* - avgpool2d_bp/maxpool2d_bp APIs update
Signed-off-by: raver119 <raver119@gmail.com>
* - conv2d/batchnorm APIs update
Signed-off-by: raver119 <raver119@gmail.com>
* - lrn/conv2d_bp/conv3d/conv3d_bp APIs update
Signed-off-by: raver119 <raver119@gmail.com>
* all ops converted to MKLDNN 1.x
Signed-off-by: raver119 <raver119@gmail.com>
* bunch of tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* namespace for platform helpers
Signed-off-by: raver119 <raver119@gmail.com>
* make sure platform helpers aren't optimized out
Signed-off-by: raver119 <raver119@gmail.com>
* build cpu_features on x86 systems
Signed-off-by: raver119 <raver119@gmail.com>
* build cpu_features on x86 systems
Signed-off-by: raver119 <raver119@gmail.com>
* more of cpu_features
Signed-off-by: raver119 <raver119@gmail.com>
* - mkldnn removed from java
- cpu_features checks in CpuNDArrayFactory
Signed-off-by: raver119 <raver119@gmail.com>
* F16C definition renamed
Signed-off-by: raver119 <raver119@gmail.com>
* some mkldnn rearrangements
Signed-off-by: raver119 <raver119@gmail.com>
* check supported instructions before doing anything
Signed-off-by: raver119 <raver119@gmail.com>
* typo
Signed-off-by: raver119 <raver119@gmail.com>
* missed impl
Signed-off-by: raver119 <raver119@gmail.com>
* BUILD_PIC option
Signed-off-by: raver119 <raver119@gmail.com>
* conv2d fix
Signed-off-by: raver119 <raver119@gmail.com>
* avgpool3d fix
Signed-off-by: raver119 <raver119@gmail.com>
* avgpool3d_bp fix
Signed-off-by: raver119 <raver119@gmail.com>
* avgpool2d_bp leak fix
Signed-off-by: raver119 <raver119@gmail.com>
* avgpool3d_bp leak fix
Signed-off-by: raver119 <raver119@gmail.com>
* maxpool bp leaks fixed
Signed-off-by: raver119 <raver119@gmail.com>
* printf removed
Signed-off-by: raver119 <raver119@gmail.com>
* batchnorm fix
Signed-off-by: raver119 <raver119@gmail.com>
* AVX warning/error polishing
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More polish
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Polish
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* remove previous MKL-DNN support layer
Signed-off-by: raver119 <raver119@gmail.com>
* avx2 tweak
Signed-off-by: raver119 <raver119@gmail.com>
* allow static for apple
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* exclude mkldnn in one more place
Signed-off-by: raver119 <raver119@gmail.com>
* exclude mkldnn in one more place
Signed-off-by: raver119 <raver119@gmail.com>
* restore OPENBLAS_PATH use
Signed-off-by: raver119 <raver119@gmail.com>
* add runtime check for avx/avx2 support
Signed-off-by: raver119 <raver119@gmail.com>
* convolution_auto
Signed-off-by: raver119 <raver119@gmail.com>
* Add logic for helper argument
* minor test fix
Signed-off-by: raver119 <raver119@gmail.com>
* few tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* few tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* skip OpTracker props for non-x86 builds
Signed-off-by: raver119 <raver119@gmail.com>
* linux arm isn't x86 :)
Signed-off-by: raver119 <raver119@gmail.com>
* avx-512
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA presets fix
Signed-off-by: raver119 <raver119@gmail.com>
* BUILD_PIC
Signed-off-by: raver119 <raver119@gmail.com>
* prefetchw for avx2
Signed-off-by: raver119 <raver119@gmail.com>
* BUILD_PIC again
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-11 21:50:28 +03:00
shugeo
e1a7460f8e
Shugeo cuda doc2 ( #255 )
...
* Added comments to tileKernel routine.
* Refactored kernel and added doc to it.
* Refactored setDiagonal kernel and added doc for it.
* Added doc for tsne cuda helpers.
* Added doc for diag kernels.
* Added doc for kernel.
* Refactored code with fake quantization.
* Added docs for image resize and crop kernels.
* Added docs for image suppression helpers.
* Added docs to matrix_band helpers.
* Added docs for matrix_diag_part and nth_element helpers.
* Fixed syntax error and refactored getIndexOffset usage.
2019-09-11 21:04:43 +03:00
raver119
589401477d
[WIP] bunch of improvements ( #257 )
...
* - profiling bias_add op
- add some documentation
Signed-off-by: Yurii <yurii@skymind.io>
* - minor change
Signed-off-by: Yurii <yurii@skymind.io>
* - provide addBias cuda kernel
Signed-off-by: Yurii <yurii@skymind.io>
* - improve shape::getIndexOffset and change its signature
Signed-off-by: Yurii <yurii@skymind.io>
* - same as previous
Signed-off-by: Yurii <yurii@skymind.io>
* - improve and change signature in some shape:: stuff which has to do with calculation of offsets for array elements
Signed-off-by: Yurii <yurii@skymind.io>
* - minor changes in flatten
Signed-off-by: Yurii <shyrma@skymind.io>
* - add function shape::getIndexOffsetOrdered
Signed-off-by: Yurii <shyrma@skymind.io>
* - correct shape::getIndexOffsetOrdered()
Signed-off-by: Yurii <shyrma@skymind.io>
* - move getIndexOffsetOrdered to flatten.h header in order to isolate this function
Signed-off-by: Yurii <shyrma@skymind.io>
2019-09-11 20:12:09 +03:00
raver119
ffae024cda
disable threadlocal for apple
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-10 07:46:35 +03:00
shugeo
c9f8a904ad
Shugeo cuda docs1 ( #249 )
...
* Comments for axis shifts.
* Fixed LUP solver usage. Added helpers doc.
* Switch off OMP for roll and lup. Fixed omp usage for ClipByGlobalNorm.
* Switch off omp for ClipByGlobalNorm to reduce omp ambiguity.
2019-09-09 16:27:45 +03:00
raver119
1de9fb218e
- bits_hamming_distance dtype fix ( #8208 )
...
- DataTypeUtils::asString fix + new dtypes added
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-06 08:59:05 +03:00
raver119
46f8c58502
- bits_hamming_distance dtype fix
...
- DataTypeUtils::asString fix + new dtypes added
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-06 08:57:53 +03:00
AlexDBlack
a76a44e198
Merge remote-tracking branch 'fork/master'
2019-09-05 22:04:25 +10:00
raver119
fc222d3325
logging off
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-05 14:45:46 +03:00
Ryan Nett
79867f5c5a
cleanup SDRNN and rnn ops ( #238 )
...
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-05 12:25:03 +10:00
AlexDBlack
b7226bdd7a
Merge
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-05 00:54:11 +10:00
shugeo
548044a1e2
Shugeo doc ( #235 )
...
* Updated doc for tsne ops.
* Added comments for dynamic_stitch op.
* Added comments to dynamic_stitch op implementation.
* Modified comment for unstack_list op.
* Added doc for space_to_depth and depth_to_space ops.
* Added doc for space_to_batch op.
* Enlarge test type for adjustSaturation.
* Added doc for runner.
2019-09-04 14:57:59 +03:00
raver119
a90c7dd995
[WIP] Last set of changes ( #234 )
...
* mmul op instead of cublasSgemm
Signed-off-by: raver119 <raver119@gmail.com>
* transB
Signed-off-by: raver119 <raver119@gmail.com>
* jcpp handles
Signed-off-by: raver119 <raver119@gmail.com>
* bitwise and/or/xor
Signed-off-by: raver119 <raver119@gmail.com>
* bitwise and/or/xor mapping
Signed-off-by: raver119 <raver119@gmail.com>
* cuda/cublas version check
Signed-off-by: raver119 <raver119@gmail.com>
* add expected version
Signed-off-by: raver119 <raver119@gmail.com>
* cuda/cublas version check in java
Signed-off-by: raver119 <raver119@gmail.com>
* one more error check
Signed-off-by: raver119 <raver119@gmail.com>
* build fix
Signed-off-by: raver119 <raver119@gmail.com>
* build fix
Signed-off-by: raver119 <raver119@gmail.com>
* build fix
Signed-off-by: raver119 <raver119@gmail.com>
* one more fix
Signed-off-by: raver119 <raver119@gmail.com>
* skip CUDA version check for now
Signed-off-by: raver119 <raver119@gmail.com>
* better wording
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-04 14:41:08 +03:00
Alex Black
6cc887bee9
Rename flatbuffers DataType to DType ( #228 )
...
* Rename flatbuffers DataType enum to DType
Signed-off-by: Alex Black <blacka101@gmail.com>
* Rename flatbuffers DataType enum to DType
Signed-off-by: Alex Black <blacka101@gmail.com>
* Updates for flatbuffers datatype enum renaming
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-09-04 16:36:11 +10:00
raver119
64eaafb4cd
remove unwanted noexcept
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-04 08:38:22 +03:00
raver119
7abc574eeb
Snapshot update ( #8194 )
...
* fix double consumption of rng on cpu
Signed-off-by: raver119 <raver119@gmail.com>
* Shyrma docs (#222 )
* - documenting and profiling matrix_set_diag cuda kernel
Signed-off-by: Yurii <yurii@skymind.io>
* - correct formula of pnorm pooling in cuda 2d/3d kernels
- remove helper matrix_diag which duplicates work of helper matrix_set_diag
Signed-off-by: Yurii <yurii@skymind.io>
* cublasHandle sharing + lock
Signed-off-by: raver119 <raver119@gmail.com>
* cublasHandle sharing + lock
Signed-off-by: raver119 <raver119@gmail.com>
* Documentation for serialization/deserialization in NLP (#221 )
* refactoring
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Javadocs
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Javadoc fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Cleanup
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* dedicated lock for getCudaCublasHandle
Signed-off-by: raver119 <raver119@gmail.com>
* Small fixes (#223 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ELU DL4J fixes (#224 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* javadoc (#225 )
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Small test compilation fix (#226 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8182 remove spark version suffix (#227 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* [WIP] Thread safety (#229 )
* sync after cublas*gemm
Signed-off-by: raver119 <raver119@gmail.com>
* mutex for CublasHelper
Signed-off-by: raver119 <raver119@gmail.com>
* don't store cublasHandle in LaunchContext, it's per-device anyway
Signed-off-by: raver119 <raver119@gmail.com>
* some printout
Signed-off-by: raver119 <raver119@gmail.com>
* check for field instead
Signed-off-by: raver119 <raver119@gmail.com>
* pew-pew
Signed-off-by: raver119 <raver119@gmail.com>
* don't release ContextBuffers until device changed
Signed-off-by: raver119 <raver119@gmail.com>
* small tweak
Signed-off-by: raver119 <raver119@gmail.com>
* some logging in sgemm
Signed-off-by: raver119 <raver119@gmail.com>
* stream sync
Signed-off-by: raver119 <raver119@gmail.com>
* some more logging
Signed-off-by: raver119 <raver119@gmail.com>
* some more error checks
Signed-off-by: raver119 <raver119@gmail.com>
* one fancy test
Signed-off-by: raver119 <raver119@gmail.com>
* one fancy test
Signed-off-by: raver119 <raver119@gmail.com>
* minor AffinityManager fix
Signed-off-by: raver119 <raver119@gmail.com>
* cudaEvent error logging improvement
Signed-off-by: raver119 <raver119@gmail.com>
* ConstantHelper thread safety
Signed-off-by: raver119 <raver119@gmail.com>
* - minor corrections in ConstantTadHelper
Signed-off-by: Yurii <yurii@skymind.io>
* ConstantShapeHelper thread safety
Signed-off-by: raver119 <raver119@gmail.com>
* ConstantTadHelper.cu updated
Signed-off-by: raver119 <raver119@gmail.com>
* logging off
Signed-off-by: raver119 <raver119@gmail.com>
* logging off
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-03 22:02:02 +03:00
raver119
dddc8a1143
[WIP] Thread safety ( #229 )
...
* sync after cublas*gemm
Signed-off-by: raver119 <raver119@gmail.com>
* mutex for CublasHelper
Signed-off-by: raver119 <raver119@gmail.com>
* don't store cublasHandle in LaunchContext, it's per-device anyway
Signed-off-by: raver119 <raver119@gmail.com>
* some printout
Signed-off-by: raver119 <raver119@gmail.com>
* check for field instead
Signed-off-by: raver119 <raver119@gmail.com>
* pew-pew
Signed-off-by: raver119 <raver119@gmail.com>
* don't release ContextBuffers until device changed
Signed-off-by: raver119 <raver119@gmail.com>
* small tweak
Signed-off-by: raver119 <raver119@gmail.com>
* some logging in sgemm
Signed-off-by: raver119 <raver119@gmail.com>
* stream sync
Signed-off-by: raver119 <raver119@gmail.com>
* some more logging
Signed-off-by: raver119 <raver119@gmail.com>
* some more error checks
Signed-off-by: raver119 <raver119@gmail.com>
* one fancy test
Signed-off-by: raver119 <raver119@gmail.com>
* one fancy test
Signed-off-by: raver119 <raver119@gmail.com>
* minor AffinityManager fix
Signed-off-by: raver119 <raver119@gmail.com>
* cudaEvent error logging improvement
Signed-off-by: raver119 <raver119@gmail.com>
* ConstantHelper thread safety
Signed-off-by: raver119 <raver119@gmail.com>
* - minor corrections in ConstantTadHelper
Signed-off-by: Yurii <yurii@skymind.io>
* ConstantShapeHelper thread safety
Signed-off-by: raver119 <raver119@gmail.com>
* ConstantTadHelper.cu updated
Signed-off-by: raver119 <raver119@gmail.com>
* logging off
Signed-off-by: raver119 <raver119@gmail.com>
* logging off
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-03 22:00:38 +03:00
raver119
9d03bb9425
allow atomicAdd for CUDA 10 only
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-03 13:30:16 +03:00
Yurii Shyrma
cb4c9377b1
Shyrma docs ( #222 )
...
* - documenting and profiling matrix_set_diag cuda kernel
Signed-off-by: Yurii <yurii@skymind.io>
* - correct formula of pnorm pooling in cuda 2d/3d kernels
- remove helper matrix_diag which duplicates work of helper matrix_set_diag
Signed-off-by: Yurii <yurii@skymind.io>
2019-09-02 16:25:58 +03:00
raver119
106524663b
fix double consumption of rng on cpu
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-02 15:24:51 +03:00
AlexDBlack
7ded4416cb
Merge remote-tracking branch 'fork/master'
2019-09-02 18:52:12 +10:00
raver119
5b8ea3e830
one more tiny cuda fix
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-02 11:49:13 +03:00
raver119
e42c34ca55
[WIP] minor ( #218 )
...
* - initial docs commit
- merge* cuda fix
Signed-off-by: raver119 <raver119@gmail.com>
* one more fix
Signed-off-by: raver119 <raver119@gmail.com>
* one more fix
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-02 11:25:48 +03:00
raver119
c34826da4d
fixed args
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-01 22:06:01 +03:00
raver119
00cf28f477
get rid of builtin_popcount to please ppc
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-01 19:57:39 +03:00
raver119
3679e55c49
fix bits_hamming_distance for ppc
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-01 19:33:23 +03:00
Yurii Shyrma
a35926c6e9
- add parameter alpha to elu and lrelu_bp ( #213 )
...
* - add parameter alpha to elu and lrelu_bp
Signed-off-by: Yurii <yurii@skymind.io>
* - forgot to correct header activations.h
Signed-off-by: Yurii <yurii@skymind.io>
2019-08-31 20:57:39 +03:00
raver119
b71c993ded
[WIP] maxpool_bp cuda fix ( #212 )
...
* one test for alex
Signed-off-by: raver119 <raver119@gmail.com>
* fix
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of safety offset in cpp
Signed-off-by: raver119 <raver119@gmail.com>
* bfloat16
Signed-off-by: raver119 <raver119@gmail.com>
* minor test rearrangement to fastpath launch
Signed-off-by: raver119 <raver119@gmail.com>
* - atomicAdd/Mul/Div fix for float16/bfloat16 misalignment
- one special test for maxpoolbp java
- safety offset of 8 bytes is back to libnd4j legacy
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-31 20:57:05 +03:00
Yurii Shyrma
00fd50cee2
Shyrma softmax ( #209 )
...
* - provide new cuda kernel for softmax
Signed-off-by: Yurii <yurii@skymind.io>
* - further work on cuda kernel for softmax
Signed-off-by: Yurii <yurii@skymind.io>
* - correction cuda kernel for softmax
Signed-off-by: Yurii <yurii@skymind.io>
2019-08-30 20:31:05 +03:00
raver119
bdc3eacafd
one small playground test
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-30 20:13:01 +03:00
raver119
70a9ae5068
[WIP] few tweaks ( #206 )
...
* scatter empty check
Signed-off-by: raver119 <raver119@gmail.com>
* scatter empty test
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* two tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* dup tweak
Signed-off-by: raver119 <raver119@gmail.com>
* - put empty check of indices array immediately prior to helper run
Signed-off-by: Yurii <yurii@skymind.io>
* minor tests fix
Signed-off-by: raver119 <raver119@gmail.com>
* minor tests fix
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-30 16:32:01 +03:00
raver119
1003428a18
[WIP] Int broadcastables ( #195 )
...
* Removed invalid resource and fixed tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* legacy scalar/pairwise/broadcast int ops
Signed-off-by: raver119 <raver119@gmail.com>
* NDArray int broadcastables
Signed-off-by: raver119 <raver119@gmail.com>
* few more bitwise tests
Signed-off-by: raver119 <raver119@gmail.com>
* java side update
Signed-off-by: raver119 <raver119@gmail.com>
* Argument type changed for shift ops
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* legacy scalar/pairwise/broadcast int ops
Signed-off-by: raver119 <raver119@gmail.com>
* NDArray int broadcastables
Signed-off-by: raver119 <raver119@gmail.com>
* few more bitwise tests
Signed-off-by: raver119 <raver119@gmail.com>
* java side update
Signed-off-by: raver119 <raver119@gmail.com>
* Argument type changed for shift ops
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2019-08-30 10:12:40 +03:00
Yurii Shyrma
5395d4fbe5
- rewrite broadcast_dynamic_shape and delete corresponding helpers ( #194 )
...
Signed-off-by: Yurii <yurii@skymind.io>
2019-08-29 20:38:02 +03:00
Yurii Shyrma
70af8c2afc
Shyrma svd ( #191 )
...
* - add one additional test for svd
* - provide float argument in eye op to set the type of the output array
Signed-off-by: Yurii <yurii@skymind.io>
* - add cuda capability check to mmulHelper
Signed-off-by: Yurii <yurii@skymind.io>
* - make use of another method for device id evaluation
Signed-off-by: Yurii <yurii@skymind.io>
* Eye data type as T argument
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-28 18:27:08 +03:00
raver119
dec296da17
[WIP] bits_hamming_distance ( #192 )
...
* bits_hamming_distance op
Signed-off-by: raver119 <raver119@gmail.com>
* bits_hamming_distance cuda
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-28 18:20:44 +03:00
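bits_hamming_distance XORs the two bit patterns and counts the differing bits. A rough sketch of that idea, using std::bitset::count in place of a popcount builtin (the same builtin the "get rid of builtin_popcount to please ppc" commit above avoids on ppc):

```cpp
#include <bitset>
#include <cstddef>
#include <cstdint>
#include <vector>

// XORs the two bit patterns word by word and counts the set bits of each XOR,
// using std::bitset::count instead of a compiler popcount builtin.
uint64_t bitsHammingDistance(const std::vector<uint64_t>& x, const std::vector<uint64_t>& y) {
    uint64_t distance = 0;
    for (size_t i = 0; i < x.size(); ++i)
        distance += std::bitset<64>(x[i] ^ y[i]).count(); // number of differing bits in this word
    return distance;
}
```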
raver119
f4860574d7
[WIP] More fixes ( #190 )
...
* Refactored kernels for segment_max/min/sum ops.
* Refactored segment_prod kernels.
* Refactored segment_prod kernels.
* DynamicPartition test
Signed-off-by: raver119 <raver119@gmail.com>
* Added linear test for dynamic_partition op.
* Refactored test with int datatype.
* some logging
Signed-off-by: raver119 <raver119@gmail.com>
* some logging
Signed-off-by: raver119 <raver119@gmail.com>
* some logging
Signed-off-by: raver119 <raver119@gmail.com>
* dynamicPartition fix
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of some logging
Signed-off-by: raver119 <raver119@gmail.com>
* one more test for dynamic_stitch
Signed-off-by: raver119 <raver119@gmail.com>
* one more test for dynamic_stitch
Signed-off-by: raver119 <raver119@gmail.com>
* empty check for stitch
Signed-off-by: raver119 <raver119@gmail.com>
* minor print changes
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-28 15:38:57 +03:00
raver119
3157ec110c
[WIP] reverse_sequence ( #188 )
...
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* one more print
Signed-off-by: raver119 <raver119@gmail.com>
* minor fix
Signed-off-by: raver119 <raver119@gmail.com>
* reverse_sequence fix
Signed-off-by: raver119 <raver119@gmail.com>
* confusion_matrix test updated
Signed-off-by: raver119 <raver119@gmail.com>
* minor tweak
Signed-off-by: raver119 <raver119@gmail.com>
* minor tweak
Signed-off-by: raver119 <raver119@gmail.com>
* one more reverse_sequence test
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-28 11:14:22 +03:00
raver119
b472d7d8c8
[WIP] few more fixes ( #182 )
...
* one noop test
Signed-off-by: raver119 <raver119@gmail.com>
* skip input validation for no-input ops
Signed-off-by: raver119 <raver119@gmail.com>
* - one more noop empty test
- one more validation before sync
Signed-off-by: raver119 <raver119@gmail.com>
* typo
Signed-off-by: raver119 <raver119@gmail.com>
* one more validation fix
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA empty reductions java side
Signed-off-by: raver119 <raver119@gmail.com>
* one svd test
Signed-off-by: raver119 <raver119@gmail.com>
* Corrected segment_mean helpers and added another test.
* Refactored segment_mean kernels to avoid race_condition.
2019-08-27 21:00:38 +03:00
Yurii Shyrma
2144941313
Shyrma fix2 ( #186 )
...
* - further work on layer_norm
Signed-off-by: Yurii <yurii@skymind.io>
* - further work on layer_norm 2
Signed-off-by: Yurii <yurii@skymind.io>
* - correct helpers for svd cuda
Signed-off-by: Yurii <yurii@skymind.io>
2019-08-27 19:57:59 +03:00
shugeo
0849b3c1a4
Shugeo segment fix2 ( #185 )
...
* Added test for segment_mean.
* Added another test for segment_mean.
* Fixed segment_* ops helpers for cuda to properly use external data.
2019-08-27 18:25:39 +03:00
raver119
7f0c660d8b
[WIP] HGemm ( #181 )
...
* skip string arrays for device validation
Signed-off-by: raver119 <raver119@gmail.com>
* confusion_matrix fix
Signed-off-by: raver119 <raver119@gmail.com>
* exclude cublasHGemm from archs < 530
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 15:05:43 +03:00
raver119
0e523490e9
[WIP] confusion ( #180 )
...
* skip string arrays for device validation
Signed-off-by: raver119 <raver119@gmail.com>
* confusion_matrix fix
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 14:30:37 +03:00
raver119
a49f7c908b
[WIP] More fixes ( #178 )
...
* skip string arrays for device validation
Signed-off-by: raver119 <raver119@gmail.com>
* histogram_fixed_width now really supports indexing types
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 13:21:01 +03:00
raver119
efbfafe3f7
[WIP] gatherND fix ( #176 )
...
* one test for gather_nd
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of old concat tests
Signed-off-by: raver119 <raver119@gmail.com>
* one printf
Signed-off-by: raver119 <raver119@gmail.com>
* one more legacy test removed
Signed-off-by: raver119 <raver119@gmail.com>
* gatherNd launch params fix
Signed-off-by: raver119 <raver119@gmail.com>
* gatherNd launch params fix
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 12:35:14 +03:00
raver119
df84bc7255
[WIP] More tweaks ( #173 )
...
* CUDA empty reduction
Signed-off-by: raver119 <raver119@gmail.com>
* - listdiff synchronization fix for CUDA
- listdiff test
Signed-off-by: raver119 <raver119@gmail.com>
* - IndexReduce ops now allow INDEXING_TYPES output
- topK op accepts only INDEXING_TYPES as output
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 10:37:10 +03:00
raver119
25e5c23eae
[WIP] Error handling ( #169 )
...
* CUDA reverse rewrite + couple of tests
Signed-off-by: raver119 <raver119@gmail.com>
* don't throw exception on invalid pointer
Signed-off-by: raver119 <raver119@gmail.com>
* data types validation for fastpath exec mode + 2 tests
Signed-off-by: raver119 <raver119@gmail.com>
* data types validation for fastpath exec mode + 2 tests
Signed-off-by: raver119 <raver119@gmail.com>
* ismax allowed dtypes tweak
Signed-off-by: raver119 <raver119@gmail.com>
* lastErrorCode + lastErrorMessage for native exceptions handling
Signed-off-by: raver119 <raver119@gmail.com>
* exportable ErrorReference
Signed-off-by: raver119 <raver119@gmail.com>
* check error codes in java
Signed-off-by: raver119 <raver119@gmail.com>
* - consume lastErrorCode
- fast_in dtype validation fix
Signed-off-by: raver119 <raver119@gmail.com>
* - sg/cb allowed output type change
- minor logging fix for data type validation
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-26 19:57:51 +03:00
raver119
bb5fc36e5e
[WIP] ops fixes ( #168 )
...
* - correct layer_norm
Signed-off-by: Yurii <yurii@skymind.io>
* - further fix of layer norm
Signed-off-by: Yurii <yurii@skymind.io>
* - correct scatter_upd op
Signed-off-by: Yurii <yurii@skymind.io>
* - correct cuda kernel for histogram_fixed_width op
Signed-off-by: Yurii <yurii@skymind.io>
* - delete comments
Signed-off-by: Yurii <yurii@skymind.io>
* enabled one ignored test
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-26 19:37:05 +03:00
Alex Black
b417ca21bf
Fix for concat op shape function (empty shapes) ( #167 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-26 23:10:28 +10:00
raver119
ece6a17b11
lup context fix ( #164 )
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-24 16:57:48 +03:00
raver119
841eeb56c5
get rid of context variable
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-24 16:18:38 +03:00
raver119
b091e972ef
- string NDArray flat serde impl + tests ( #163 )
...
- string NDArray equalsTo impl
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-24 14:16:34 +03:00
raver119
f8364997c0
[WIP] maxpool2d_bp fix ( #160 )
...
* one test for maxpool2d_bp
Signed-off-by: raver119 <raver119@gmail.com>
* - maxpool2d_bp cuda fix for NaNs
- streamSync after each custom op execution
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-24 09:20:57 +03:00
raver119
f03b0ee78f
[WIP] more fixes ( #159 )
...
* Added test for MatrixInverse with double input. Fixed matrixDeterminantKernel.
* Fixed kernels to avoid wasteful templating.
* Fixed logDeterminant kernel.
* Refactored type check for lup.
* - decrease blockDim value for zeta op
Signed-off-by: Yurii <yurii@skymind.io>
* Added print for compound matrix with CUDA.
* Refactored upper matrix inversion kernels.
* - provide move constructor and move assignment operator for OpArgsHolder class
Signed-off-by: Yurii <yurii@skymind.io>
* Refactored usage of launch context.
* - add test for mergemax
Signed-off-by: Yurii <yurii@skymind.io>
* get rid of AveragingArrayProxy
Signed-off-by: raver119 <raver119@gmail.com>
* Refactoring of LUP inversion.
* Added prints for inversion.
* - add OpArgsHolder copy constructor and assignment operator
Signed-off-by: Yurii <yurii@skymind.io>
* Added test for lower inversion
* - fix bug in upsampling2d/3d_bp op
Signed-off-by: Yurii <yurii@skymind.io>
* Added expensive printfs to kernel.
* Refactored expensive kernel prints.
* Refactored expensive printfs
* - remove nullify
Signed-off-by: Yurii <yurii@skymind.io>
* Eliminated wasteful prints in tests.
* upsampling2d_bp test
Signed-off-by: raver119 <raver119@gmail.com>
* test updated
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 19:20:50 +03:00
raver119
99cdf6d42b
- cpu isMax fix for multidim case + test
...
- INDArray.wasClosed() fix for empty array edge case
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 18:44:37 +03:00
raver119
fb8de5006f
- concat empty scalar fix
...
- couple of tests for empty scalar concat
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 13:16:50 +03:00
raver119
729dc5e879
[WIP] size etc ( #155 )
...
* one test for size
Signed-off-by: raver119 <raver119@gmail.com>
* - few tests for size op
- size/rank/size_at ops now use p instead of assign
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 12:31:12 +03:00
raver119
243bf866c4
[WIP] Few fixes ( #153 )
...
* throw exception if op execution failed
Signed-off-by: raver119 <raver119@gmail.com>
* expected for test
Signed-off-by: raver119 <raver119@gmail.com>
* one more ismax test
Signed-off-by: raver119 <raver119@gmail.com>
* ismax view fix
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 09:00:10 +03:00
raver119
930b49e87f
[WIP] DeviceLocalNDArray updates ( #149 )
...
* ContextBuffers are released upon device change
Signed-off-by: raver119 <raver119@gmail.com>
* DeviceLocalNDArray updates + tests
Signed-off-by: raver119 <raver119@gmail.com>
* special array for delayed mode
Signed-off-by: raver119 <raver119@gmail.com>
* additional detach()
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-22 20:01:29 +03:00
Alex Black
e855e47f73
More fixes ( #148 )
...
* Small batch norm fix (cuda/no-mkldnn)
Signed-off-by: Alex Black <blacka101@gmail.com>
* Dropout fix for RnnOutputLayer
Signed-off-by: Alex Black <blacka101@gmail.com>
* Allow block size < 2 in batch_to_space_nd and space_to_batch_nd for import, in spite of what TF docs say
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-22 19:55:27 +10:00
raver119
eea3062ccf
[WIP] stb/bts nd ( #144 )
...
* - start working on space_to_batch_nd
Signed-off-by: Yurii <yurii@skymind.io>
* - provide cpu helper for space_to_batch_nd op
Signed-off-by: Yurii <yurii@skymind.io>
* few typos fixed
Signed-off-by: raver119 <raver119@gmail.com>
* - add tests for space_to_batch and correct bugs
Signed-off-by: Yurii <yurii@skymind.io>
* - write cuda kernel for space_to_batch op
Signed-off-by: Yurii <yurii@skymind.io>
* - add order argument to shape::index2coords method in convolution cuda ops
Signed-off-by: Yurii <yurii@skymind.io>
* - restore some previous code
Signed-off-by: Yurii <yurii@skymind.io>
* old col2im kernel activated
Signed-off-by: raver119 <raver119@gmail.com>
* - change coords calculation in col2im kernel
Signed-off-by: Yurii <yurii@skymind.io>
* - restore old col2im kernel
Signed-off-by: Yurii <yurii@skymind.io>
* - add custom op for batch_to_space
Signed-off-by: Yurii <yurii@skymind.io>
* - provide cpu version for batch_to_space_nd op
Signed-off-by: Yurii <yurii@skymind.io>
* - provide cuda kernel for batch_to_space_nd op
Signed-off-by: Yurii <yurii@skymind.io>
2019-08-21 21:11:46 +03:00
raver119
e604ffe0d2
[WIP] repeat op ( #143 )
...
* - write new repeat helper (cpu)
Signed-off-by: Yurii <yurii@skymind.io>
* - update NDArray::cpu
Signed-off-by: Yurii <yurii@skymind.io>
* - update NDArray::repeat cuda
Signed-off-by: Yurii <yurii@skymind.io>
2019-08-21 21:10:29 +03:00
raver119
3cf72e5e30
[WIP] More fixes ( #142 )
...
* atomicAdd cc 70+
Signed-off-by: raver119 <raver119@gmail.com>
* additional 8 bytes allocation
Signed-off-by: raver119 <raver119@gmail.com>
* missed include 2019
Signed-off-by: raver119 <raver119@gmail.com>
* less spam
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-21 20:18:29 +03:00
raver119
d9ab299759
[WIP] Minor fixes ( #140 )
...
* - Tile java shape fn removed
- Tile 0 validation added
- scatter_upd test
Signed-off-by: raver119 <raver119@gmail.com>
* additional tile validation
Signed-off-by: raver119 <raver119@gmail.com>
* - provide vector case in cuda scatter op
Signed-off-by: Yurii <yurii@skymind.io>
* cpu ismax view fix
Signed-off-by: raver119 <raver119@gmail.com>
* exp
Signed-off-by: raver119 <raver119@gmail.com>
* cuda ismax fix
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-21 15:05:47 +03:00
raver119
77805cb7fa
[WIP] cpu ismax fix ( #137 )
...
* cpu ismax fix
Signed-off-by: raver119 <raver119@gmail.com>
* bunch of smaller scalar tests
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-21 10:12:11 +03:00
raver119
269d508ba5
[WIP] cross-device migrations ( #134 )
...
* two more tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA device affinity tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* minor tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* prepareAction/registerAction for CustomOps
Signed-off-by: raver119 <raver119@gmail.com>
* lazy allocate host buffer before relocation
Signed-off-by: raver119 <raver119@gmail.com>
* one special test for migration in cpp
Signed-off-by: raver119 <raver119@gmail.com>
* tests update for msvc
Signed-off-by: raver119 <raver119@gmail.com>
* logging
Signed-off-by: raver119 <raver119@gmail.com>
* stick to old col2im impl
Signed-off-by: raver119 <raver119@gmail.com>
* cudaStreams reorganization
Signed-off-by: raver119 <raver119@gmail.com>
* buffer size fix
Signed-off-by: raver119 <raver119@gmail.com>
* c++ data migration
Signed-off-by: raver119 <raver119@gmail.com>
* fix CropAndResize test
Signed-off-by: raver119 <raver119@gmail.com>
* - minor improvement
Signed-off-by: Yurii <yurii@skymind.io>
2019-08-20 18:52:41 +03:00
raver119
23c8738d4a
syncthreads ( #136 )
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-20 18:28:43 +03:00
AlexDBlack
01cb57041a
Merge
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-19 18:46:47 +10:00
raver119
aceb915557
[WIP] tests fixes ( #130 )
...
* no openmp for ClipByGlobalNorm
Signed-off-by: raver119 <raver119@gmail.com>
* one more bfloat16 rng test
Signed-off-by: raver119 <raver119@gmail.com>
* assertion fix
Signed-off-by: raver119 <raver119@gmail.com>
* - legacy IsMax gone
- linear IsMax gets shapeInfo argument
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of legacy IsMax tests
Signed-off-by: raver119 <raver119@gmail.com>
* IsMax is custom op now
Signed-off-by: raver119 <raver119@gmail.com>
* more blocks for ismax
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* - sqrt test
- some legacy code removed from CudaExecutioner
- Transforms.asin tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* - TransformFloat fix
Signed-off-by: raver119 <raver119@gmail.com>
* - ismax fix
- SpaceToBatchND/BatchToSpaceND wrappers
- couple of legacy tests removed
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-19 11:33:15 +03:00
raver119
bb80fe4f94
Merge remote-tracking branch 'origin/master'
2019-08-17 14:52:13 +03:00
raver119
8944e5f67f
no thread_local for cpu
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-17 14:51:54 +03:00
raver119
56910ddee7
no thread_local for cpu
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-17 14:51:39 +03:00