Commit Graph

352 Commits (b3805a132f8de6d6db80607ba46e7f8b8847cca6)

Author SHA1 Message Date
raver119 320924278d
Legacy API changes (#441)
* initial commit

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* another initial commit

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* another initial commit

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* one more initial commit

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* next step

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* next step

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* next step

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* next step

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Refactored buffer() and shapeInfo() methods usage with NDArray class.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt Graph class methods to use const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt choose op to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt where op shape method to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt lstsq op to use constant empty shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt matrix_diag_part op shape routine to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt determinant ops to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt mean_pairwssqerr_loss ops to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt ops shape methods.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt shape methods for loss ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt log_loss op shape method.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt shape methods for ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt dilation2d ops shape methods.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted deconv2d ops shape methods.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted dynamicRNN op shape method.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted shape methods for ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted shape methods for lstm layer ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* few updates

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* first cuda tweak

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Adopt constant shapes for sconv2d ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt constant shapes for gru ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt constant shapes with shape methods for segment ops and so on.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted constant shapes with unsorted_segment_* ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted constant shapes with gamma op shape method.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted shape methods of reduce_stddev ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted shape methods for reduce_* ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt shape method for squeeze op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt strided_slice shape method.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored concat op shape method to adopt constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted shape method for mirror_pad op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted split op shape method.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted tile ops shape methods.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added const cast for mkldnn routines handles.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored logSoftMaxForVector_ routine to conform with proper data and shape pointer casts.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Cosmetic changes to proper usage of constant pointers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored a couple shape comparators for strides and addBias helpers to proper use data pointers with inplace option.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored depthToSpace helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored histogram helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored im2col helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored gather and gatherND helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage on percentile helper.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed gather shape with helpers and range buffer usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with space to depth helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage and constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with LUP decomposition.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored onehot_ helper.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored pad and prefix to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored softmax helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed space to batch helpers to use buffers properly.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed stack and split helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with sparse to dense helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with mindistance_ helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with tile helper.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed constant shape usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed constant shape usage with legacy pairwise bool ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored a couple of methods to adopt constant shape usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed broadcasting with constant shape.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed const usage with inplace reverse and constant shapes with legacy reduction.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored legacy ops with const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored sort to adopt constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected sort for constant shape usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed constant shape usage with special methods.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored Context to conform with constant shape usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* CUDA broadcasting headers

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* pairwise/indexreduce/random headers

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Refactored native ops to adopt constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* legacy reduce3/scalar headers

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Corrected pullRow signature and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected routines to proper use of constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored tests to use constant shapes properly.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored legacy ops tests to use constant shapes properly.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored buffer usage with NDArray tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed native ops tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed special concat routine.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with test.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with a test.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored TAD.h and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored calcStrides* routines to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed miscellaneous errors with constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* NativeOps const changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Corrected definitions for declared functions.

Signed-off-by: shugeo <sgazeos@gmail.com>

* NativeOps const changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* few more const changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Fixed const shapes with shape routines.

Signed-off-by: shugeo <sgazeos@gmail.com>

* few more const changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Fixed shape method for broadcastable case.

Signed-off-by: shugeo <sgazeos@gmail.com>

* few more const changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* xw_plus_b BP shape fn restored

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Fixed signatures with broadcasting.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Repaired backprops shape methods for a set of operations.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored broadcast bool for cuda.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored methods for 3 args with const qualifier.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed a couple of kernel signatures for broadcasting.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed kernels signatures for const buffers and shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored pairwise methods to persistent buffers and shapes usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt const to buffers and shapes with kernels.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt const to buffers and shapes with scalar kernels.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored indexreduce kernels signatures to use const buffers and shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored pairwise kernels to adopt const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored pairwise bool kernels to adopt const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored random special ops to conform with const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored native ops to conform with const shapes and buffers under cuda platform.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Cosmetic changes only.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed const shapes and buffers error.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected start pos routine.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored methods to conform with const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored helpers to use proper methods instead.

Signed-off-by: shugeo <sgazeos@gmail.com>

* bunch of changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* next bunch of changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* next bunch of changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Fixed execScalar declaration.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed execScalar declaration.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected const shape cases with sort and so on.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed const shapes for sort.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored kernel declarations to adopt const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed kernels declarations to adopt const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected kernel declarations to adopt const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed kernels declarations to adopt const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed segment helpers kernels declarations and so on to adopt const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed const shape usage with segment and solve helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed kernel declaration with adjustWeight helper.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed cuda implementations for constant shape helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted const shape usage with kernels.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted top_k kernels to use const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected kernels declarations to adopt const shapes with helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored NDArray definitions to adopt const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed const shapes with image suppression helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Slight improvement with buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored buffer usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored buffer usage with tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed const shape usage with definitions.

Signed-off-by: shugeo <sgazeos@gmail.com>

* minor updates on cpu side

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Refactored const shape usage with ConstantDescriptor and native ops with cuda platform.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored tear and tile kernels to adopt with const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* softmax_loop fix

Signed-off-by: raver119 <raver119@gmail.com>

* update missing signature

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* softmax again

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* few more missing consts

Signed-off-by: raver119 <raver119@gmail.com>

* new methods updated

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

Co-authored-by: shugeo <sgazeos@gmail.com>
2020-05-09 08:06:14 +03:00
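
Most of the entries above are a mechanical const-correctness pass over libnd4j: routines that only read a data buffer or a shape descriptor now take it through a const pointer, so constant (cached) shape descriptors and read-only views can be passed without casts. A minimal standalone sketch of the before/after pattern, using hypothetical names rather than the real NDArray/NativeOps signatures:

    #include <cstdint>
    #include <cstdio>

    using LongType = int64_t;  // stand-in for the library's shape-info element type

    // Before: void execSomething(LongType* shapeInfo, void* buffer);
    // After: read-only arguments are const, so cached shape descriptors and
    // immutable buffers can be handed in directly, without const_cast.
    void execSomething(const LongType* shapeInfo, const void* buffer) {
        // The sketch only reads the descriptor (first entry taken as the rank here).
        std::printf("rank = %lld\n", static_cast<long long>(shapeInfo[0]));
        (void)buffer;
    }

    int main() {
        const LongType shapeInfo[] = {2, 3, 4, 4, 1, 0, 1, 99};  // a rank-2, 3x4 descriptor
        const float data[12] = {};
        execSomething(shapeInfo, data);
        return 0;
    }
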
raver119 0613485654
compression ops (#436)
* Added declarations for decode/encode_bitmap ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added implementation for bitmap encoding/decoding ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added helpers for encode/decode bitmap ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored encodingBitmap helper.

Signed-off-by: shugeo <sgazeos@gmail.com>

* threshold encode/decode skeleton

* helper skeleton

* minor import fix

* encoder shape fn & op impl

* thresholdEncode cpu impl

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* thresholdDecode cpu impl

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Only cosmetic changes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* placeholder

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Added cuda implementation for bitmap decode helper.

Signed-off-by: shugeo <sgazeos@gmail.com>

* cuda thresholdEstimate

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* cuda thresholdDecode

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* next step

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* - nano cmakelist update (get rid of CLion section)
- fixed forgotten throw in AtomicTests

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* thresholdEncode cuda impl

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Added tests for bitmap encoding/decoding ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed tests for encode/decode bitmaps.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored decode/encode helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed crashes with bitmap decode/encode helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* bitmap encode/decode CPU

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* bitmap encode/decode CUDA

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* C API removed for threshold/bitmap encode

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* EncodeBitmap/DecodeBitmap Java side

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* EncodeThreshold/DecodeThreshold Java side

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* EncodeThreshold/DecodeThreshold Java side

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* few more tests for threshold encoding

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* minor test tweak

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* two special tests

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* encodeBitmap CPU fix

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* parallel_long/parallel_double proper spans fix

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* encodeThreshold CUDA fix

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* nano fix

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* grid tweaks

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* RTX adaptation for thresholdEncode

Signed-off-by: raver119 <raver119@gmail.com>

* don't allow threshold encoding for length < 2

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* get rid of NDArrayCompressor in EncodingHandler

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* one more minor update of EncodingHandler

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* one more minor tweak of EncodingHandler

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* - matmul allows integer data types use
- EncodingHandler boundary default value
- few tests for integer matmul

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* minor fix of CUDA bitmap encode

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* boundary changed to integer everywhere

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* boundary changed to integer everywhere

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* re-enable CUDA deallocator

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* threshold encoder fix for systems without omp

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* - encode_threshold now requires non-negative boundary
- minor tweak in EncodingHandler

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* restore parallelism in decode_bitmap

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* fall back to omp for encode_bitmap cpu

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* single time casts

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* - additional test for encode_threshold
- sync buffers to device before calling for shape function

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

Co-authored-by: shugeo <sgazeos@gmail.com>
2020-05-08 20:59:39 +03:00
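
The threshold codec above sparsifies a gradient array: only elements whose magnitude reaches the (non-negative) boundary are encoded, and the boundary is subtracted from them so the residual can be sent in a later round. The sketch below illustrates that idea only; it is not the actual encode_threshold/decode_threshold storage format or API.

    #include <cstdio>
    #include <vector>

    // Encode: record signed 1-based indices of elements with |value| >= threshold
    // and subtract the threshold, leaving the residual in place (illustrative format).
    std::vector<int> thresholdEncode(std::vector<float>& grad, float threshold) {
        std::vector<int> encoded;
        for (int i = 0; i < static_cast<int>(grad.size()); i++) {
            if (grad[i] >= threshold) {
                encoded.push_back(i + 1);
                grad[i] -= threshold;
            } else if (grad[i] <= -threshold) {
                encoded.push_back(-(i + 1));
                grad[i] += threshold;
            }
        }
        return encoded;
    }

    // Decode: add +/- threshold back into a dense target array.
    void thresholdDecode(const std::vector<int>& encoded, float threshold,
                         std::vector<float>& target) {
        for (int idx : encoded) {
            if (idx > 0) target[idx - 1] += threshold;
            else         target[-idx - 1] -= threshold;
        }
    }

    int main() {
        std::vector<float> grad = {0.02f, -0.7f, 1.3f, 0.1f};
        std::vector<float> dense(grad.size(), 0.0f);
        const auto enc = thresholdEncode(grad, 0.5f);
        thresholdDecode(enc, 0.5f, dense);
        for (float v : dense) std::printf("%.2f ", v);  // 0.00 -0.50 0.50 0.00
        std::printf("\n");
        return 0;
    }
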
raver119 c9d1454743
MKLDNN tweaks (#415)
* one simple test

Signed-off-by: raver119 <raver119@gmail.com>

* fix

Signed-off-by: raver119 <raver119@gmail.com>

* hmmmm...

Signed-off-by: raver119 <raver119@gmail.com>

* mkl matmul skip tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* minor fix for MemoryTracker

* long shapes in matmul

* - 2 new tests for mkldnn tanh
- mkldnn isn't used for scalar tanh
2020-04-27 17:37:53 +03:00
Andrii T 58b11bfecc
Map C++ ops to Java (#392)
* MergeMaxIndex, ReverseBp, Tri, Triu and TriuBp added

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Upsampling3d draft

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* minor fix (upsampling3dBp inputDatatype.size=2)

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* polished testcases

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* matching of Upsampling3d input format according to cpp iArg

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* ops generated from codegen

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* requested changes

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* added super() for Triu

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* everything passes except TriuOp

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Tri op dtype arg (output datatype config support) + default float32

Signed-off-by: Alex Black <blacka101@gmail.com>

* Small fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* temporary commit with manually edited sd/nd ops

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Cannot use 'val' here because initializer expression does not have a representable type: Type cannot be resolved

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* all tests passed

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* few requested changes

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Small fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Ignore reverse_bp test due to logged issue

Signed-off-by: Alex Black <blacka101@gmail.com>

* Fix reverse op

Signed-off-by: Alex Black <blacka101@gmail.com>

* Fix MergeMaxIndex dtype -> iarg

Signed-off-by: Alex Black <blacka101@gmail.com>

Co-authored-by: Alex Black <blacka101@gmail.com>
2020-04-25 23:56:02 +10:00
shugeo fc3f5d4ffb
Shugeo exponential distribution infinities fix (#403)
* Fixed bound problem with Exponential distribution implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added test for Exponential distribution to avoid infinities.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added a test for exponential distribution with 1M elements.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Cosmetic changes only and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Modified test and implementation for exponential_distribution op.

Signed-off-by: shugeo <sgazeos@gmail.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-04-22 12:12:00 +03:00
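
The infinities fixed above are the usual inverse-transform pitfall: sampling x = -ln(u)/lambda returns +inf when the uniform draw u is exactly 0 (and -ln(1 - u)/lambda does the same when u is exactly 1). A hedged sketch of the guarded form, not the libnd4j RNG code:

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Inverse-transform sampling of Exp(lambda). The uniform draw lies in [0, 1),
    // so 1 - u is in (0, 1] and log(1 - u) stays finite.
    double sampleExponential(std::mt19937_64& rng, double lambda) {
        std::uniform_real_distribution<double> uniform(0.0, 1.0);  // half-open [0, 1)
        const double u = uniform(rng);
        return -std::log(1.0 - u) / lambda;
    }

    int main() {
        std::mt19937_64 rng(12345);
        for (int i = 0; i < 5; i++)
            std::printf("%f\n", sampleExponential(rng, 0.25));
        return 0;
    }
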
raver119 a10fd4524a
Fix for certain non-ews cases (#402)
* BtS/StB/StD/DtS dup for views

Signed-off-by: raver119 <raver119@gmail.com>

* batchnorm_bp dup for views

Signed-off-by: raver119 <raver119@gmail.com>

* two java tests for bad strides

Signed-off-by: raver119 <raver119@gmail.com>
2020-04-21 12:41:30 +03:00
shugeo a5db0e33be
Shugeo segment fix4 (#385)
* Added test and fixed error message for unsorted_segment_sqrt_n op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed error message for unsorted_segment_* ops when 1 segment is given.

Signed-off-by: shugeo <sgazeos@gmail.com>
2020-04-20 09:04:35 +03:00
shugeo 0eca33ad94
Shugeo cuda solver fix (#383)
* Refactored cuSolver handle usage to handle LaunchContext instance properly.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored svd solver usage with LaunchContext instance singleton.

Signed-off-by: shugeo <sgazeos@gmail.com>

* add device locks for cuSolver uses

Signed-off-by: raver119 <raver119@gmail.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-04-17 16:52:08 +03:00
Andrii T 5fbb04531d
At cpp ops (#378)
* crelu op added

* crelu op added

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* minor fixes

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* crelu(bp)+transformOpValidation op

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* added ClipByAvgNorm and DepthwiseConv2DBp

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* ClipByAvgNorm passes forward check

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* EmbeddingLookup draft

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* DepthwiseConv2DB gradient check

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* EmbeddingLookup and DepthwiseConv2dBp finished + tests added

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* ImageResize draft

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* DepthwiseConv2DB gradient check

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* ImageResize passed tests except helper::resizeFunctor: not implemented

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* replaced ImageResizeMethods enum by codegen

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* minor fixes

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* polished checkpoint (OPValidationSuite passed and mvn install build successful after codegen)

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* manually merged LSTMLayerTestCases from master

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* MaximumBp added and tested

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* MergeAddBp draft

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* MergeMaxBp and MergeAvgBP added and tests passed

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* minor fix

* draft LSTMLayerBp (big relative layer in gradient check)

* LSTMLayerBp check

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* LSTMLayerBp check v2

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* requested changes (test passes)

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* LSTMLayer testcases passed gradientcheck

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* small LSTMLayer testcase1 improvement (cLast, yLast)

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Warnings issue solved

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Fixes for MKLDNN LSTM layer helper

Signed-off-by: Alex Black <blacka101@gmail.com>

* stable version

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

Co-authored-by: raver119 <raver119@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
2020-04-17 15:16:14 +10:00
Oleh 3d15706ffa
Lin_space operation improve (#373)
* libnd4j update linspace op

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j #8513 update lin_space op, tests added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* - minor linspace tweaks (num_elements now iArg)
- java linspace updates
- couple of additional tests for linspace

Signed-off-by: raver119 <raver119@gmail.com>

* roll back timeout change

Signed-off-by: raver119 <raver119@gmail.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-04-16 14:53:56 +03:00
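
lin_space fills an array with num evenly spaced values from start to stop, endpoints included; the PR above moves the element count into an integer argument and adds tests. A reference of the semantics only (not the op implementation):

    #include <cstdio>
    #include <vector>

    // Evenly spaced values over [start, stop], both endpoints included.
    std::vector<double> linspace(double start, double stop, int num) {
        std::vector<double> out(num);
        if (num == 1) { out[0] = start; return out; }
        const double step = (stop - start) / (num - 1);
        for (int i = 0; i < num; i++)
            out[i] = start + step * i;
        return out;
    }

    int main() {
        for (double v : linspace(0.0, 1.0, 5))
            std::printf("%.2f ", v);  // 0.00 0.25 0.50 0.75 1.00
        std::printf("\n");
        return 0;
    }
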
Yurii Shyrma 4247718f61
Shyrma gru bp (#377)
* - update gru ff op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - implementation and testing gru_bp op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - neglect dependencies between dLdh/dLdhLast/dLdcLast in lstmLayer backprop

Signed-off-by: Yurii <iuriish@yahoo.com>
2020-04-16 08:09:04 +03:00
Yurii Shyrma 23e4aa99ad
Shyrma lstm layer bp (#370)
* - start working on bp for lstm

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further working on bp for lstmLayer

Signed-off-by: Yurii <iuriish@yahoo.com>

* - minor change

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further work on bp for lstmLayer 2

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further work on bp for lstmLayer 3

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further work on bp for lstmLayer 4

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further work on bp for lstmLayer 5

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further work on bp for lstmLayer 6

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further work on bp for lstmLayer 7

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further work on bp for lstmLayer 8

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further work on bp for lstmLayer 9

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide lstmLayerCell and lstmLayerCellBp as separate CUSTOM_OPs

Signed-off-by: Yurii <iuriish@yahoo.com>

* - testing and fixing lstmLayerCellBp helper

Signed-off-by: Yurii <iuriish@yahoo.com>

* - implement lstmLayerCellBp as separate op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - implement lstmLayerBp as separate op (not tested)

Signed-off-by: Yurii <iuriish@yahoo.com>

* - fixing calculations of dLdWp and dLdb in lstmLayerCellBp

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further work on bp for lstmLayer 10

Signed-off-by: Yurii <iuriish@yahoo.com>

* - fixing typo in lstmLayerTimeLoop

Signed-off-by: Yurii <iuriish@yahoo.com>

* - forgot to perform clipping of c array and calculate corresponding derivative in lstmLayerCellBp

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further work on bp for lstmLayer 10

Signed-off-by: Yurii <iuriish@yahoo.com>

* - testing and fixing bugs in lstmLayer_bp op 1

Signed-off-by: Yurii <iuriish@yahoo.com>

* - testing and fixing bugs in lstmLayer_bp op 2

Signed-off-by: Yurii <iuriish@yahoo.com>

* - turn off heavy tests for cuda for lstmLayer_bp op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - forgot to nullify gradients at eliminated time steps (when sequence length array is present)

Signed-off-by: Yurii <iuriish@yahoo.com>
2020-04-13 13:21:51 +03:00
raver119 3e2dbc65dd
MatMul for gemm/gemv calls (#365)
* libnd4j added optional alpha and beta support to matmul

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j typos fixes

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j add optional alpha and beta to matmul_bp

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j one more typo fix

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added optional alpha and beta to mkl implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* MatMul alpha/beta on java side

Signed-off-by: raver119 <raver119@gmail.com>

* alpha/beta fix in libnd4j

Signed-off-by: raver119 <raver119@gmail.com>

* alpha/beta fix in matmul_bp

Signed-off-by: raver119 <raver119@gmail.com>

* restored view validation

Signed-off-by: raver119 <raver119@gmail.com>

* gemv/gemm now use MatMul op

Signed-off-by: raver119 <raver119@gmail.com>

* few tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* additional INDArray.mmul signature

Signed-off-by: raver119 <raver119@gmail.com>

* make C order default for INDArray.mmul, unless both A/B have F order

Signed-off-by: raver119 <raver119@gmail.com>

* Nd4j.gemm validation fix

Signed-off-by: raver119 <raver119@gmail.com>

* disable mkldnn matmul for xxf with beta != 0 case

Signed-off-by: raver119 <raver119@gmail.com>

* SimpleRnn workspace fix + timeouts

Signed-off-by: Alex Black <blacka101@gmail.com>

* two more tests + minor fix in matmul platform check

Signed-off-by: raver119 <raver119@gmail.com>

* Flaky test fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* propagate testresources profile

Signed-off-by: raver119 <raver119@gmail.com>

* Resources fix + flaky test fix

Signed-off-by: Alex Black <blacka101@gmail.com>

Co-authored-by: Oleg <oleg.semeniv@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
2020-04-10 17:57:02 +03:00
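
With optional alpha and beta, gemm and gemv calls reduce to one matmul contract: C = alpha * A x B + beta * C (gemv being the N = 1 case). A naive row-major reference of that contract, not the optimized MKL/cuBLAS path used by the library:

    #include <cstdio>
    #include <vector>

    // Naive row-major GEMM contract: C(MxN) = alpha * A(MxK) * B(KxN) + beta * C(MxN).
    void gemm(int M, int N, int K, double alpha, const std::vector<double>& A,
              const std::vector<double>& B, double beta, std::vector<double>& C) {
        for (int i = 0; i < M; i++)
            for (int j = 0; j < N; j++) {
                double acc = 0.0;
                for (int k = 0; k < K; k++)
                    acc += A[i * K + k] * B[k * N + j];
                C[i * N + j] = alpha * acc + beta * C[i * N + j];
            }
    }

    int main() {
        std::vector<double> A = {1, 2, 3, 4};  // 2x2
        std::vector<double> B = {5, 6, 7, 8};  // 2x2
        std::vector<double> C = {1, 1, 1, 1};  // 2x2, scaled by beta and accumulated into
        gemm(2, 2, 2, /*alpha=*/2.0, A, B, /*beta=*/0.5, C);
        std::printf("%.1f %.1f\n%.1f %.1f\n", C[0], C[1], C[2], C[3]);  // 38.5 44.5 / 86.5 100.5
        return 0;
    }
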
Andrii T d86dd5b131
DL4J and SameDiff integration tests + LSTMLayer java op class (#353)
* init in this branch

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* LeNet MNIST workflow

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* small fix for calculations

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* for Alex to check placeholder null pointer issue

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* CNN3D workflow

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* state for launching on dxg to regenerate dl4j examples

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* SD RNN test case workflow

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* small fixes

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* checkpoint at lstmBlock: Input array 1 (x) rank must be got input with rank 2 issue

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Fix LSTMLayer inputs order

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* lstm mismatch with c++ op issue

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* LSTMLayer config draft

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* LSTMLayer config draft v2

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* have doubt I had to do this

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* NDRNN generated by codegen

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* LSTMLayerTestCases draft

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* minor fixes again

* added LSTMLayer testcases to nd4j-tests + set Preconditions in LSTMLayer constructors

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* added lost SDCNNtestcases

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* overrode getNumOutputs from DynamicCustomOp in LSTMLayer and reorganized LSTMLayerOutputs according to cpp op

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* finished with LSTMLayerOutputs

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Fix MKLDNN platform checks (i.e., when MKLDNN can be used vs. not)

Signed-off-by: Alex Black <blacka101@gmail.com>

* Fix LSTMLayerWeights input order

Signed-off-by: Alex Black <blacka101@gmail.com>

* More fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* minor fixes

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* fixed LSTMLayer testcases

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* finished SameDiffRNNTestCase

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* finished all testcases + minor fixes

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Multiple generation-related fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Fix multiple issues

Signed-off-by: Alex Black <blacka101@gmail.com>

* More fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* LSTM fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Regenerate ND4J namespaces and fix multiple issues

Signed-off-by: Alex Black <blacka101@gmail.com>

* changed SameDiffRNNTestCase

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* Small fix

Signed-off-by: Alex Black <blacka101@gmail.com>

* added  Nd4j.getRandom().setSeed(12345) where needed

Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>

* #8828 Fix ND4J profiler NaN/Inf checks when using OpContext

Signed-off-by: Alex Black <blacka101@gmail.com>

* #8828 Fix ND4J profiler NaN/Inf checks when using OpContext

Signed-off-by: Alex Black <blacka101@gmail.com>

* Tweak to weight init for SameDiff CNN test case

Signed-off-by: Alex Black <blacka101@gmail.com>

* Tweaks for test cases

Signed-off-by: Alex Black <blacka101@gmail.com>

* Ignore failing tests until fixed

Signed-off-by: Alex Black <blacka101@gmail.com>

* Fix

Signed-off-by: Alex Black <blacka101@gmail.com>

Co-authored-by: Alex Black <blacka101@gmail.com>
2020-04-09 00:20:48 +10:00
raver119 04b2b4f9b6
Few fixes (#361)
* INDArray.close() fix for CPU

Signed-off-by: raver119 <raver119@gmail.com>

* - BroadcastableBoolOp introduced
- ConfusionMatrix now supports explicit DataType argument

Signed-off-by: raver119 <raver119@gmail.com>

* confusion_matrix: dtype is still optional

Signed-off-by: raver119 <raver119@gmail.com>

* disable bert tests in debug builds

Signed-off-by: raver119 <raver119@gmail.com>

* Affinity fix

Signed-off-by: raver119 <raver119@gmail.com>

* minor workspace tweak to allow close() on scoped out borrowed workspace

Signed-off-by: raver119 <raver119@gmail.com>
2020-04-06 21:01:59 +03:00
Yurii Shyrma 48102c61d0
- correct reshape op for empty shapes (#354)
* - correct reshape op for empty shape in case of -1 at the end

Signed-off-by: Yurii <iuriish@yahoo.com>

* Fix test + new reshape op constructor

Signed-off-by: Alex Black <blacka101@gmail.com>

Co-authored-by: Alex Black <blacka101@gmail.com>
2020-04-01 15:13:34 +11:00
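
The fix above concerns a trailing -1 in the target shape when the array being reshaped is empty (zero elements). One plausible reading of the rule, assumed here purely for illustration, is that the wildcard resolves to 0 instead of dividing by the product of the other dimensions:

    #include <cstdio>
    #include <vector>

    // Resolve a single -1 entry in a reshape target. For an empty input
    // (totalElements == 0) the wildcard is taken to be 0 (assumed rule).
    std::vector<long long> resolveShape(long long totalElements,
                                        std::vector<long long> target) {
        long long known = 1;
        int wildcard = -1;
        for (int i = 0; i < static_cast<int>(target.size()); i++) {
            if (target[i] == -1) wildcard = i;
            else known *= target[i];
        }
        if (wildcard >= 0)
            target[wildcard] = (totalElements == 0) ? 0 : totalElements / known;
        return target;
    }

    int main() {
        const auto s = resolveShape(0, {2, 0, -1});  // empty array: -1 resolves to 0
        std::printf("[%lld, %lld, %lld]\n", s[0], s[1], s[2]);
        return 0;
    }
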
Oleh 1d004b542a
xw_plus_b mkldnn implementation (#247)
* libnd4j first step of mkldnn for xw_plus_b and test of aurora crash in imageHelper

* libnd4j sync folders with master

* libnd4j merge master, raw implementation of xw_plus_b on mkldnn, clean up, need testing and adding checks for corresponded input shapes

* libnd4j corrections and checks added to xw_plus_b mkl

* libnd4j corrected dataType description based on mkl operation description, need more investigation

* libnd4j fixed xw_plus_b mkl implementation, need testing

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j two unit tests added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed check input dimensions bug

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j one more test added to cover different order handling

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added optional int arg support to define weights format, if arg == 1, mkldnn (do not need transpose in mkldnn implementation), else mmul weights format, corrected check points, added unit test

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j merge master

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some improvements to avoid NDArray transpose in xw_plus_b operation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed issues connected with weights rank, also added support of one case based on tf (for mkldnn, cpu, cuda), test case added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added proper handling of empty inputs (all implementations)

* libnd4j fixed compilation error

* libnd4j several more corrections after conflict solve and fixed typos

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j removed unsupported data types

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j merge master and fixed issues

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added propagation implementation for xw_plus_b, fixed issue connected with mkl weights data format, avoided data copy in transpose mode, test cases added, manually tested with gradCheck

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j one minor fix of double operation declaration

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j code clean up

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j minor tests fixes

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed build problem, integrate helpers changes

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-03-31 13:03:10 +03:00
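
xw_plus_b is the fully connected primitive out = x * W + b; the new integer argument selects whether W is stored in the regular mmul layout or already transposed so the MKL-DNN path needs no transpose. A naive row-major reference of the math only (the flag and layouts below are illustrative assumptions, not the op's argument contract):

    #include <cstdio>
    #include <vector>

    // out(MxN) = x(MxK) * W + b(N). If wTransposed, W is stored as (NxK) and is
    // read transposed; otherwise W is stored as (KxN).
    void xwPlusB(int M, int K, int N, const std::vector<float>& x,
                 const std::vector<float>& W, bool wTransposed,
                 const std::vector<float>& b, std::vector<float>& out) {
        for (int i = 0; i < M; i++)
            for (int j = 0; j < N; j++) {
                float acc = b[j];
                for (int k = 0; k < K; k++)
                    acc += x[i * K + k] * (wTransposed ? W[j * K + k] : W[k * N + j]);
                out[i * N + j] = acc;
            }
    }

    int main() {
        std::vector<float> x = {1, 2};        // 1x2
        std::vector<float> W = {1, 0, 0, 1};  // 2x2 identity
        std::vector<float> b = {10, 20};
        std::vector<float> out(2);
        xwPlusB(1, 2, 2, x, W, /*wTransposed=*/false, b, out);
        std::printf("%.0f %.0f\n", out[0], out[1]);  // 11 22
        return 0;
    }
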
Yurii Shyrma 29e61579c1
Shyrma reshape empty (#338)
* - start working on reshape op which operates with empty shapes

Signed-off-by: Yurii <iuriish@yahoo.com>

* - correct reshaping for empty arrays

Signed-off-by: Yurii <iuriish@yahoo.com>

* - remove unnecessary check in Loopkind

Signed-off-by: Yurii <iuriish@yahoo.com>
2020-03-31 07:41:16 +03:00
Oleh bf0ddbc06c
libnd4j fixes for context sync in operation execution (#350)
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
2020-03-30 16:33:51 +03:00
Oleh e8cbf5255a
Backpropagation implementation of mergemax, mergeadd and mergeavg ops (#343)
* libnd4j: first step of merge_max implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed typos

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some corrections for mergeMaxBp

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some minor corrections

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j test added for mergemax_bp

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed several problems tests added, check with gradCheck

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j remove duplicated tests

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j split implementation of transforms ops into separate file implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j code clean up, added mergeavg_bp and mergeadd_bp, need testing

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j merge master, fixed typos and added tests

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some minor fixes

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added helper for mergeAddBp operation, this permits to skip nullify

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j file renaming changes and cuda some corrections, need some additional corrections

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some additional corrections for merge ops

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j more corrections per request for cuda more proper usage

Signed-off-by: Oleg <oleg.semeniv@gmail.com>
2020-03-25 08:40:30 +03:00
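
The backprop rules added above follow from the forward ops: mergeadd_bp hands the incoming gradient to every input, mergeavg_bp scales it by 1/n, and mergemax_bp routes each element's gradient only to the input that held the maximum. An element-wise sketch of the mergemax case (illustrative, not the helper code):

    #include <cstdio>
    #include <vector>

    // Element-wise mergemax backprop over n inputs: the gradient flows to the
    // input that produced the maximum, the others get zero.
    // (mergeavg would instead give every input gradOut[e] / n, mergeadd gradOut[e].)
    void mergeMaxBp(const std::vector<std::vector<float>>& inputs,
                    const std::vector<float>& gradOut,
                    std::vector<std::vector<float>>& gradIns) {
        const size_t n = inputs.size(), len = gradOut.size();
        for (size_t e = 0; e < len; e++) {
            size_t argMax = 0;
            for (size_t i = 1; i < n; i++)
                if (inputs[i][e] > inputs[argMax][e]) argMax = i;
            for (size_t i = 0; i < n; i++)
                gradIns[i][e] = (i == argMax) ? gradOut[e] : 0.0f;
        }
    }

    int main() {
        std::vector<std::vector<float>> in = {{1, 5}, {3, 2}};
        std::vector<float> gradOut = {10, 10};
        std::vector<std::vector<float>> gradIn = {{0, 0}, {0, 0}};
        mergeMaxBp(in, gradOut, gradIn);
        std::printf("%.0f %.0f | %.0f %.0f\n",
                    gradIn[0][0], gradIn[0][1], gradIn[1][0], gradIn[1][1]);  // 0 10 | 10 0
        return 0;
    }
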
raver119 226f0672bc size op fixed
Signed-off-by: raver119 <raver119@gmail.com>
2020-03-23 17:02:03 +03:00
Oleh 55ec207eb8
Split convolutions implementations for compilation speed up (#339)
* libnd4j first step of convolutions implementation split

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j convolutions cuda implementation split

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j code clean up

Signed-off-by: Oleg <oleg.semeniv@gmail.com>
2020-03-23 07:30:26 +03:00
raver119 1f3e4c18e1
some structure for ops (#337)
Signed-off-by: raver119 <raver119@gmail.com>
2020-03-23 07:28:54 +03:00
Oleh 69c92ca5ae
Learning updaters for gradient (#335)
* libnd4j raw implementation of sgd updater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some corrections and simple test added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some corrections after discussion

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j integrate applyScalar

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j raw implementation of rmsPropUpdater on cpu

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fix operations declaration

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j rmsPropUpdater added, test cases for sgd, etc

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed several typos

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some fixes and improvements for rmsPropUpdater based on Java tests

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed cuda implementation, update tests and corrected behavior according java tests

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j adaGrad updater added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j one minor fix for ada grad

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j several more fixes for ada_grad

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j nesterovs updater added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed nesterovs updater behavior, several typos and rename file

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j one minor typo

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j ada max updater added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed several typos in adaMax updater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed several typos in adaMaxUpdater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j several fixes for adaMax, added Adam Updater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j adaDeltaUpdater added, minor fixes for adamUpdater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j several fixes for adaDeltaUpdater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j nadamUpdater added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j one more correction for nadam updater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j several fixes for nadam updater and added amsGradUpdater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j several typos fixed in amsGradUpdater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some corrections and added f order support rmsProp updater

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added support of f order for all updaters and modify tests for testing in place

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed issues for updates when not in place mode used, added tests for f order

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added input shape checks

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some corrections for different cases handling

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some code clean up and optimize per request

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j updaters refactoring after review

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* SgdUpdater wrapper

Signed-off-by: raver119 <raver119@gmail.com>

* first test

Signed-off-by: raver119 <raver119@gmail.com>

* RmsPropUpdater added

Signed-off-by: raver119 <raver119@gmail.com>

* NadamUpdater + NesterovsUpdater

Signed-off-by: raver119 <raver119@gmail.com>

* AmsGradUpdater

Signed-off-by: raver119 <raver119@gmail.com>

* AdamUpdater added

Signed-off-by: raver119 <raver119@gmail.com>

* AdaGradUpdater + AdaDeltaUpdater + AdaMaxUpdater

Signed-off-by: raver119 <raver119@gmail.com>

* AdaGradUpdater test added

Signed-off-by: raver119 <raver119@gmail.com>

* libnd4j remove input parameters parsing through NDArray, split implementation of helpers to separate files, added some rename, etc

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j next step to split operations implementation into separate files

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j merge master and minor corrections

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j revert some changes of split implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j forgot to add header file

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* public default constructors

Signed-off-by: raver119 <raver119@gmail.com>

* ImportClassMapping updated

Signed-off-by: raver119 <raver119@gmail.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-03-23 07:28:31 +03:00
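
The updaters above are the standard per-element rules (SGD, RMSProp, AdaGrad, AdaDelta, AdaMax, Adam, Nadam, Nesterov, AMSGrad), each also handling C/F order and in-place use. As a reminder of what the two simplest ones compute, an element-wise sketch of the formulas only, not the libnd4j helper signatures:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Plain SGD: w -= lr * g
    void sgdUpdate(std::vector<double>& w, const std::vector<double>& g, double lr) {
        for (size_t i = 0; i < w.size(); i++) w[i] -= lr * g[i];
    }

    // RMSProp: keep a decaying mean of squared gradients and scale the step by
    // its square root; `state` persists between calls.
    void rmsPropUpdate(std::vector<double>& w, const std::vector<double>& g,
                       std::vector<double>& state, double lr, double decay, double eps) {
        for (size_t i = 0; i < w.size(); i++) {
            state[i] = decay * state[i] + (1.0 - decay) * g[i] * g[i];
            w[i] -= lr * g[i] / (std::sqrt(state[i]) + eps);
        }
    }

    int main() {
        std::vector<double> w = {1.0, 1.0}, g = {0.5, -0.5}, state = {0.0, 0.0};
        sgdUpdate(w, g, 0.1);
        rmsPropUpdate(w, g, state, 0.1, 0.95, 1e-8);
        std::printf("%f %f\n", w[0], w[1]);
        return 0;
    }
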
Alex Black 5a34ccf3d4
Remove printf in ones_as c++ op (#336)
Signed-off-by: Alex Black <blacka101@gmail.com>
2020-03-20 23:50:17 +11:00
Yurii Shyrma e700b59f80
Shyrma weights format (#329)
* - start to introduce additional weights formats into conv2d ops

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide weights format variety in backprop conv2d and deconv2d ops, testing and fixing bugs

Signed-off-by: Yurii <iuriish@yahoo.com>

* - forgot to recover kernels sizes in deconv2d_bp test

Signed-off-by: Yurii <iuriish@yahoo.com>

* - built in weights format in depthwise conv 2d op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide new weights formats in mkl dnn conv ops

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide new weights formats in cuda conv helpers

Signed-off-by: Yurii <iuriish@yahoo.com>

* - working with new weights format in cudnn conv api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - take into account order of arrays in cudnn tensor descriptions

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide new weights formats in cpu conv3d (ff/bp)

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide new weights formats in cpu deconv3d (ff/bp)

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide new weights formats in conv3d ops (ff/bp) based on mkl api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide new weights formats in conv3d ops (ff/bp) based on cudnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - resolve conflicts 2

Signed-off-by: Yurii <iuriish@yahoo.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-03-20 12:11:27 +03:00
shugeo 5dae4069cf
Shugeo random expo fix2 (#295)
* Refactored exponential distribution implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored exponential distribution and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored test to new result sets.

Signed-off-by: shugeo <sgazeos@gmail.com>
2020-03-20 11:33:20 +03:00
raver119 7a2ac800dd
Nullify (#304)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* hamming distance nullification

Signed-off-by: raver119 <raver119@gmail.com>

* Add output array value assignment for testing/debugging

Signed-off-by: Alex Black <blacka101@gmail.com>

* don't assign empty arrays

Signed-off-by: raver119 <raver119@gmail.com>

* conv2d/conv3d/depthwise2d nullified

Signed-off-by: raver119 <raver119@gmail.com>

* conv2d/conv3d/depthwise2d nullified

Signed-off-by: raver119 <raver119@gmail.com>

* conv2d/conv3d/depthwise2d nullified

Signed-off-by: raver119 <raver119@gmail.com>

* few more fixes

Signed-off-by: raver119 <raver119@gmail.com>

* im2col

Signed-off-by: raver119 <raver119@gmail.com>

* pooling?

Signed-off-by: raver119 <raver119@gmail.com>

* more nullified

Signed-off-by: raver119 <raver119@gmail.com>

* ismax nullified

Signed-off-by: raver119 <raver119@gmail.com>

* rollback ismax nullification

Signed-off-by: raver119 <raver119@gmail.com>

* synchronized cublas handle use on per-device basis

Signed-off-by: raver119 <raver119@gmail.com>

* hiding method from jcpp

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of test assigns in DeclarableOp

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of assigns

Signed-off-by: raver119 <raver119@gmail.com>

* proper deviceId is back

Signed-off-by: raver119 <raver119@gmail.com>

* include fixed

Signed-off-by: raver119 <raver119@gmail.com>

Co-authored-by: Alex Black <blacka101@gmail.com>
2020-03-20 08:49:28 +03:00
raver119 77244f5496
avg/max pooling3d bp fixed (#323)
Signed-off-by: raver119 <raver119@gmail.com>
2020-03-16 18:17:42 +03:00
raver119 4cf2afad2b
benchmarks fixes (#321)
* bunch of small fixes

Signed-off-by: raver119 <raver119@gmail.com>

* validation for legacy random op

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of test

Signed-off-by: raver119 <raver119@gmail.com>
2020-03-16 10:31:06 +03:00
Oleh e7a995e959
Tanh backpropagation mkldnn implementation (#308)
* libnd4j first step of tanh_bp operation implementation on mkldnn

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j optimize several places and added test case for tanh_bp

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j minor corrections and renaming, added one more test case

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j missed mkldnn data format definition

Signed-off-by: Oleg <oleg.semeniv@gmail.com>
2020-03-13 19:01:00 +03:00
Yurii Shyrma e42b4e96c3
correct output empty shapes deducing in split op (#311)
* - correct output empty shapes deducing in split op

Signed-off-by: Yurii <iuriish@yahoo.com>

* java test fixed

Signed-off-by: raver119 <raver119@gmail.com>

* - split broadcast::exec function on individual functions corresponding to switch arg

Signed-off-by: Yurii <iuriish@yahoo.com>

* - split broadcast::exec _int and _bool function on individual functions corresponding to switch arg

Signed-off-by: Yurii <iuriish@yahoo.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-03-12 18:25:54 +03:00
Oleh 41bde8f885
Softmax BP mkldnn implementation (#301)
* libnd4j mkldnn softmax_bp operation implementation and integration, 2 tests added, need some refactoring and code clean up and more testing with different input shapes

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j softmax_bp update, code refactoring, etc

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j merge master, fixed typos, minor tweaks, code clean up

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j  integrate mkldnnUtils helpers in other mkldnn operations

Signed-off-by: Oleg <oleg.semeniv@gmail.com>
2020-03-12 18:25:29 +03:00
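
Whichever backend runs it, softmax_bp has to compute dL/dx_i = y_i * (dL/dy_i - sum_j dL/dy_j * y_j) with y = softmax(x) along the chosen axis. A one-dimensional reference of that formula (not the MKL-DNN path):

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Reference softmax over a 1-D vector, max-shifted for numerical stability.
    std::vector<double> softmax(const std::vector<double>& x) {
        double mx = x[0];
        for (double v : x) mx = std::max(mx, v);
        std::vector<double> y(x.size());
        double sum = 0.0;
        for (size_t i = 0; i < x.size(); i++) { y[i] = std::exp(x[i] - mx); sum += y[i]; }
        for (double& v : y) v /= sum;
        return y;
    }

    // Backprop: dL/dx_i = y_i * (dL/dy_i - sum_j dL/dy_j * y_j).
    std::vector<double> softmaxBp(const std::vector<double>& x,
                                  const std::vector<double>& gradOut) {
        const std::vector<double> y = softmax(x);
        double dot = 0.0;
        for (size_t i = 0; i < y.size(); i++) dot += gradOut[i] * y[i];
        std::vector<double> gradIn(y.size());
        for (size_t i = 0; i < y.size(); i++) gradIn[i] = y[i] * (gradOut[i] - dot);
        return gradIn;
    }

    int main() {
        const std::vector<double> x = {1.0, 2.0, 3.0}, go = {0.0, 0.0, 1.0};
        for (double v : softmaxBp(x, go)) std::printf("%f ", v);
        std::printf("\n");
        return 0;
    }
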
Yurii Shyrma 58550b7c98
[WIP] Shyrma coords (#305)
* - provide faster index2coords function for cpu

Signed-off-by: Yurii <iuriish@yahoo.com>

* - new faster index2coords function is introduced into cpu code

Signed-off-by: Yurii <iuriish@yahoo.com>

* - replace long long coordinates with int coordinates

Signed-off-by: Yurii <iuriish@yahoo.com>

* - add missed reload of coords2index function

Signed-off-by: Yurii <iuriish@yahoo.com>

* - restart jenkins

Signed-off-by: Yurii <iuriish@yahoo.com>

* - rollback changes in convolutions.cu and addBias.cu

Signed-off-by: Yurii <iuriish@yahoo.com>
2020-03-11 16:21:59 +03:00
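
index2coords converts a flat element index into per-dimension coordinates for a given shape; the PR above speeds this up and narrows the coordinates to int. A straightforward C-order (row-major) reference, not the shape.h routine:

    #include <cstdio>
    #include <vector>

    // C-order flat index -> coordinates: peel off the innermost dimension first,
    // dividing the remaining index as we go.
    std::vector<int> index2coords(long long index, const std::vector<int>& shape) {
        std::vector<int> coords(shape.size());
        for (int i = static_cast<int>(shape.size()) - 1; i >= 0; i--) {
            coords[i] = static_cast<int>(index % shape[i]);
            index /= shape[i];
        }
        return coords;
    }

    int main() {
        const auto c = index2coords(7, {2, 3, 4});  // 7 -> (0, 1, 3)
        std::printf("(%d, %d, %d)\n", c[0], c[1], c[2]);
        return 0;
    }
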
Oleh c3223dbc7a
Improve ResultSet usage in libnd4j (#281)
* libnd4j profiling DeclarableOp and Tests by replacing return ResultSet pointer by instance

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j profiling semantic change in tests cases

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some corrections to make new ResultSet semantic works, fixed one test

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j more tests fixes

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* - correct copy and move assignment operators of ResultSet class

Signed-off-by: Yurii <iuriish@yahoo.com>

Co-authored-by: Yurii <iuriish@yahoo.com>
Co-authored-by: raver119 <raver119@gmail.com>
2020-03-10 07:42:50 +03:00
raver119 57210b936c
Revert "OpenMP Threads execution (#297)" (#299)
This reverts commit dd2043ef48.
2020-03-09 08:22:49 +03:00
raver119 dd2043ef48
OpenMP Threads execution (#297)
* omp threads backported

Signed-off-by: raver119 <raver119@gmail.com>

* omp scalar reduce

Signed-off-by: raver119 <raver119@gmail.com>

* timing

Signed-off-by: raver119 <raver119@gmail.com>

* timing

Signed-off-by: raver119 <raver119@gmail.com>

* minor tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* minor tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* namespace change

Signed-off-by: raver119 <raver119@gmail.com>

* num_threads

Signed-off-by: raver119 <raver119@gmail.com>

* one minor fix

Signed-off-by: raver119 <raver119@gmail.com>
2020-03-09 08:21:44 +03:00
Oleh ead5162c97
Tanh mkldnn implementation (#296)
* libnd4j first step of softmax mkldnn implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j raw implementation of mkldnn softmax

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j merge master and added softmax to MklDnnTests

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some corrections for softmax mkldnn

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j merge branch, fixed problem with negative axis, fixed dnnl::memory::format_tag selection, test cases added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j minor corrections to avoid risk connected with negative axis usage

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed windows builds, added switcher to use mkldnn softmax version only for 3D, 4D, 5D, 6D arrays

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed dataType selection per request

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fix for mac and windows builds

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j builds fix

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j first step of elementwise tanh implementation on mkldnn

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed typo in error message for softmax MKLDNN, test case added, implementation of tanh on MKLDNN, need supported DataType testing

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j several fixes for tanh and temporary performance test added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed mkldnn platform loader for tanh

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j MklDnn tanh removed unsupported data types, removed performance test case, added more appropriate equivalence test case, code clean up

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed problem with empty input case for MklDnn tanh and softmax

Signed-off-by: Oleg <oleg.semeniv@gmail.com>
2020-03-06 17:11:22 +03:00
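
tanh_bp is element-wise: with y = tanh(x), dL/dx = dL/dy * (1 - y^2). A minimal reference of the formula (not the MKL-DNN kernel):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Element-wise tanh backprop: gradIn = gradOut * (1 - tanh(x)^2).
    void tanhBp(const std::vector<double>& x, const std::vector<double>& gradOut,
                std::vector<double>& gradIn) {
        for (size_t i = 0; i < x.size(); i++) {
            const double y = std::tanh(x[i]);
            gradIn[i] = gradOut[i] * (1.0 - y * y);
        }
    }

    int main() {
        const std::vector<double> x = {-1.0, 0.0, 1.0}, go = {1.0, 1.0, 1.0};
        std::vector<double> gi(3);
        tanhBp(x, go, gi);
        std::printf("%f %f %f\n", gi[0], gi[1], gi[2]);
        return 0;
    }
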
raver119 784a2d13f8
separate omp impl for softmax (#289)
Signed-off-by: raver119 <raver119@gmail.com>
2020-03-05 11:14:22 +03:00
raver119 3bb22a6ff8
strided_slice without view (#288)
Signed-off-by: raver119 <raver119@gmail.com>
2020-03-05 09:56:52 +03:00
raver119 ca96a13ed0 softmax as standalone compilation unit
Signed-off-by: raver119 <raver119@gmail.com>
2020-03-05 08:45:10 +03:00
Oleh 4d81af9fe9
Softmax operation implementation for mkldnn (#286)
* libnd4j first step of softmax mkldnn implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j raw implementation of mkldnn softmax

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j merge master and added softmax to MklDnnTests

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some corrections for softmax mkldnn

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j merge branch, fixed problem with negative axis, fixed dnnl::memory::format_tag selection, test cases added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j minor corrections to avoid risk connected with negative axis usage

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed windows builds, added switcher to use mkldnn softmax version only for 3D, 4D, 5D, 6D arrays

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed dataType selection per request

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fix for mac and windows builds

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j builds fix

Signed-off-by: Oleg <oleg.semeniv@gmail.com>
2020-03-04 19:36:42 +03:00
raver119 f990b2486d
simplified addBias2D for CUDA (#285)
Signed-off-by: raver119 <raver119@gmail.com>
2020-03-04 09:50:55 +03:00
Yurii Shyrma 78934c17ad
profiling of stack and unstack ops (#261)
* - profiling of stack and unstack ops

Signed-off-by: Yurii <iuriish@yahoo.com>

* - fix bug in cpu concat op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - correction of cuda stack and unstack

Signed-off-by: Yurii <iuriish@yahoo.com>

* - change shape.h method which operates with unity dimensions strides

Signed-off-by: Yurii <iuriish@yahoo.com>

* - rearrange stack tests

Signed-off-by: Yurii <iuriish@yahoo.com>

* - correct evaluation of smallest stride for moving through contiguous axis

Signed-off-by: Yurii <iuriish@yahoo.com>

* - forgot to update signature of function strideOverContigAxis in cuda concat and split ops

Signed-off-by: Yurii <iuriish@yahoo.com>

* - remove ShapeUtils::shapeAsString method applied before input arrays validations

Signed-off-by: Yurii <iuriish@yahoo.com>

* -  further removing of ShapeUtils::shapeAsString

Signed-off-by: Yurii <iuriish@yahoo.com>

* - take sub-array shapeInfo/offset calculation out of NDArray class
- add possibility of contiguous memory copy in execTransformAny op if opNum == assign

Signed-off-by: Yurii <iuriish@yahoo.com>

* - correct test_empty_scatter_2 in EmptyTests.cpp

Signed-off-by: Yurii <iuriish@yahoo.com>

* - profiling of slice op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - get rid of contiguous memcpy for some cases in concat and split ops

Signed-off-by: Yurii <iuriish@yahoo.com>

* - forgot to declare void nd4j::SpecialMethods<T>::splitCpuGeneric

Signed-off-by: Yurii <iuriish@yahoo.com>

* - correct typo in calculation of threads in cuda split op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - forgot to correct another set of threads variables in split cuda ops

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further conflicts resolving

Signed-off-by: Yurii <iuriish@yahoo.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-03-03 07:32:37 +03:00
raver119 63fa3c2ef3
libnd4j polishing (#273)
* initial set of include changes

Signed-off-by: raver119 <raver119@gmail.com>

* one more tweak

Signed-off-by: raver119 <raver119@gmail.com>

* few more rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* few more rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* few more rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* cuda includes rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* java update

Signed-off-by: raver119 <raver119@gmail.com>

* = namespace changed to sd
- few CMake variables renamed with SD_ prefix

Signed-off-by: raver119 <raver119@gmail.com>

* java update

Signed-off-by: raver119 <raver119@gmail.com>

* LoopKind minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* few more changes

Signed-off-by: raver119 <raver119@gmail.com>

* few more changes

Signed-off-by: raver119 <raver119@gmail.com>

* few more changes

Signed-off-by: raver119 <raver119@gmail.com>

* sanitizer is optional now

Signed-off-by: raver119 <raver119@gmail.com>

* dev tests updated

Signed-off-by: raver119 <raver119@gmail.com>

* few more changes

Signed-off-by: raver119 <raver119@gmail.com>

* last update

Signed-off-by: raver119 <raver119@gmail.com>

* java update

Signed-off-by: raver119 <raver119@gmail.com>
2020-03-02 12:49:41 +03:00
Oleh f116f53d61
Loops auto-vectorization problem fix (#277)
* libnd4j cast loop types

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j more type casting added to loops

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j sync casting types of iterated variable in loops

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j more loops reviewed for vectorization problem fix

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed several typos

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j several more files reviewed to fix auto-vectorization problem in loops

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j merge master and reviewed more files to fix auto-vectorization problem in loops

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added several type casts in broadcasting that were missed, fixed mac builds

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j double check all files and fix several more places in loops

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed builds

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j revert changes for lup.cpp

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j more files reviewed for auto-vectorization problem fix

Signed-off-by: Oleg <oleg.semeniv@gmail.com>
2020-02-28 17:04:45 +03:00
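A minimal sketch of the loop-index casting idea these commits apply: when the loop bound and the iterated variable have different integer types, the compiler may refuse to auto-vectorize, so both are cast to one consistent type. The function below is a generic illustration, not code from the reviewed files.

```cpp
#include <cstdint>

// Illustrative only: a 32-bit index compared against a 64-bit bound
// (e.g. `for (int i = 0; i < length; ++i)` with an int64_t length) can block
// auto-vectorization; casting once and iterating with the same type avoids it.
void scale(float* x, float alpha, int64_t length) {
    const auto len = static_cast<uint64_t>(length);
    for (uint64_t i = 0; i < len; ++i)   // bound and index share one type
        x[i] *= alpha;
}
```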
raver119 5332ace32b
better inplace exec with FastPath (#280)
Signed-off-by: raver119 <raver119@gmail.com>
2020-02-28 12:06:30 +03:00
shugeo 330a69d4e2
Shugeo solve ls (#203)
* lstsq op. Initial commit.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Least squares linear problem solve op (lstsq). Cpu draft implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed shape routine and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added test for lstsq op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Rectification for lstsq op implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected test to avoid numerical inconsistency.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added prints for checking computations.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected tests to use evaluate facility instead.

Signed-off-by: shugeo <sgazeos@gmail.com>

* CPU implementation of MatrixSolveLs op and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added cuda implementation for helpers with lstsq op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored tests for lstsq op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added processing for empty inputs.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Merged tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored lstsq op for fast case.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed test.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored lstsq op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed some issues with solve.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed lstsq op to avoid errors.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added kernel for diagonal factor

Signed-off-by: shugeo <sgazeos@gmail.com>

* lstsq wrapper and triangular_solve fixed

* Added proper processing empty inputs and test.

Signed-off-by: shugeo <sgazeos@gmail.com>

* SequenceMask test

* Build fixed

* Added proper processing of empty inputs with solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Mapping added

* Added check of input shapes with solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added a couple of tests for lstsq op and minor changes with cuda helper for one.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Tests on

* Refactored test for lstsq op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed test

* Added another approach for lstsq op aka solve_ls.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Finished cpu part for solve_ls op helpers.

* Added helper for low triangular matrix inversion.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored alternate solve_ls cpu implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Removed alternate approach for solve_ls op. Added multithreading with matrix inversion.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Assert fixed

* Refactored multithreading for inverse matrices.

Signed-off-by: shugeo <sgazeos@gmail.com>

Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-02-28 11:37:26 +03:00
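For background, the problem the lstsq op addresses is the standard linear least-squares formulation; the normal-equation form below is one common reduction (solvable through triangular systems), shown here only as context rather than as the helper's internals:

```latex
x^{*} = \arg\min_{x}\, \lVert A x - b \rVert_2^2,
\qquad
A^{\top} A \, x^{*} = A^{\top} b .
```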
raver119 358c650b62 one micro fix
Signed-off-by: raver119 <raver119@gmail.com>
2020-02-27 19:28:26 +03:00
raver119 31e3a2f7a5
transparent conversion to FastPath execution within Graph (#278)
Signed-off-by: raver119 <raver119@gmail.com>
2020-02-27 16:10:38 +03:00
Oleh b4575d11e9
Loops auto-vectorization problem fix (#274)
* libnd4j cast loop types

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j more type casting added to loops

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j sync casting types of iterated variable in loops

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j more loops reviewed for vectorization problem fix

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed several typos

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j several more files reviewed to fix auto-vectorization problem in loops

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j merge master and reviewed more files to fix auto-vectorization problem in loops

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added several type casts in broadcasting that were missed, fixed mac builds

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j double check all files and fix several more places in loops

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed builds

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j revert changes for lup.cpp

Signed-off-by: Oleg <oleg.semeniv@gmail.com>
2020-02-26 21:12:19 +03:00
raver119 5c806d2fb5
reshape tweak (#275)
* - expand dims tweak
- reshape memcpy

Signed-off-by: raver119 <raver119@gmail.com>

* validation fix

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-26 14:05:32 +03:00
Oleh b686368b82
Refactoring split operation (#266)
* libnd4j moved split operation implementation to helpers before special case adding

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j minor fixes for general split operation move, merge master

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j split cpu implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* - provide cuda helper for split op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - minor correction

Signed-off-by: Yurii <iuriish@yahoo.com>

* - minor correction 2

Signed-off-by: Yurii <iuriish@yahoo.com>

* libnd4j moved split implementation from specials to split.cpp

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j update loopkind selections for 3D, 4D and 5D cases

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j removed unnecessary BUILD_SINGLE_TEMPLATE

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

Co-authored-by: Yurii Shyrma <iuriish@yahoo.com>
2020-02-26 10:20:39 +03:00
raver119 cf67c7165a nano fix
Signed-off-by: raver119 <raver119@gmail.com>
2020-02-25 15:20:51 +03:00
raver119 f6442b6724
few minor tweaks (#272)
Signed-off-by: raver119 <raver119@gmail.com>
2020-02-25 11:13:23 +03:00
Oleh f0706b21aa
Split operation improvement (#262)
* libnd4j moved split operation implementation to helpers before special case adding

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j minor fixes for general split operation move, merge master

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j split cpu implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* - provide cuda helper for split op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - minor correction

Signed-off-by: Yurii <iuriish@yahoo.com>

* - minor correction 2

Signed-off-by: Yurii <iuriish@yahoo.com>

Co-authored-by: Yurii Shyrma <iuriish@yahoo.com>
Co-authored-by: raver119 <raver119@gmail.com>
2020-02-24 08:22:41 +03:00
shugeo 1bb3ae4b03
Shugeo unordered map (#256)
* Refactored usage of std::map to std::unordered_map instead.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Eliminated crash with wrong ShapeDescriptor hash.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Eliminated crash with TadDescriptor hash.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored Stash hash.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored hashes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored TadDescriptor hash and top_k mapping.

* Refactored hashes for ShapeDescriptor and TadDescriptor classes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored hash for ConstantDescriptor and ShapeDescriptor classes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed map usage with cuda platform.

Signed-off-by: shugeo <sgazeos@gmail.com>

* - few rearrangements for hash functions
- shared openblas allowed

Signed-off-by: raver119 <raver119@gmail.com>

* exports

Signed-off-by: raver119 <raver119@gmail.com>

* exports

Signed-off-by: raver119 <raver119@gmail.com>

* Stash reverted to std::map

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Added additional test.

Signed-off-by: shugeo <sgazeos@gmail.com>

* different maps for different compilers

Signed-off-by: raver119 <raver119@gmail.com>

* missing include

Signed-off-by: raver119 <raver119@gmail.com>

* fix the leak

Signed-off-by: raver119 <raver119@gmail.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-02-24 07:51:01 +03:00
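A minimal sketch of the std::map to std::unordered_map change described above: an unordered_map keyed on a descriptor type needs an equality operator and a hash functor, and a weak hash is exactly the kind of thing behind the collision crashes mentioned in the commits. ShapeKey and ShapeKeyHash are hypothetical stand-ins, not the actual ShapeDescriptor/TadDescriptor code.

```cpp
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for a shape descriptor used as an unordered_map key.
struct ShapeKey {
    std::vector<long long> shape;
    char order;
    bool operator==(const ShapeKey& o) const { return order == o.order && shape == o.shape; }
};

struct ShapeKeyHash {
    std::size_t operator()(const ShapeKey& k) const {
        // Combine the order flag and every dimension into one hash value.
        std::size_t h = std::hash<char>()(k.order);
        for (auto d : k.shape)
            h ^= std::hash<long long>()(d) + 0x9e3779b97f4a7c15ULL + (h << 6) + (h >> 2);
        return h;
    }
};

// Cached entries keyed by shape, looked up in O(1) on average.
using ShapeCache = std::unordered_map<ShapeKey, int, ShapeKeyHash>;
```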
raver119 e78be14cc1
arm fix (#260)
* range check for scalar_int

Signed-off-by: raver119 <raver119@gmail.com>

* no simd

Signed-off-by: raver119 <raver119@gmail.com>

* no ops

Signed-off-by: raver119 <raver119@gmail.com>

* cyclic shift?

Signed-off-by: raver119 <raver119@gmail.com>

* left split

Signed-off-by: raver119 <raver119@gmail.com>

* left split

Signed-off-by: raver119 <raver119@gmail.com>

* rot ops unrolled templates

Signed-off-by: raver119 <raver119@gmail.com>

* no rotl/rotr for uint64

Signed-off-by: raver119 <raver119@gmail.com>

* no rotl/rotr for uint64 2

Signed-off-by: raver119 <raver119@gmail.com>

* no rotl/rotr for uint64 3

Signed-off-by: raver119 <raver119@gmail.com>

* ARM_BUILD declared

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-21 14:31:00 +03:00
Yurii Shyrma f7a9190407
profiling of concat op (both cuda and cpu) (#151)
* - profiling of concat op (both cuda and cpu)

Signed-off-by: Yurii <iuriish@yahoo.com>

* better comparison for large concat

Signed-off-by: raver119 <raver119@gmail.com>

* - further improving of concat op

Signed-off-by: Yurii <iuriish@yahoo.com>

* some logging

Signed-off-by: raver119 <raver119@gmail.com>

* - add possibility to verify presence of trailing unities in shape and set strides/ews correspondingly
- restrict second simple case in concat op to c order only

Signed-off-by: Yurii <iuriish@yahoo.com>

* - move concat op to specials_single.cpp file

Signed-off-by: Yurii <iuriish@yahoo.com>

* - get rid of second concat op declaration in transforms.cpp file

Signed-off-by: Yurii <iuriish@yahoo.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-02-20 21:19:01 +03:00
raver119 215641ea9e
Minor improvements (#255)
* static increments in loops

Signed-off-by: raver119 <raver119@gmail.com>

* specials and concat split into separate units

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-20 11:43:26 +03:00
Yurii Shyrma c5193ecb81
Shyrma gather (#254)
* - profiling gather op for aurora

Signed-off-by: Yurii <iuriish@yahoo.com>

* - include contiguous memcpy in gather op

Signed-off-by: Yurii <iuriish@yahoo.com>
2020-02-19 09:35:52 +03:00
Yurii Shyrma 22c7aa9acf
Shyrma mkl matmul (#250)
* - provide matmul code based on mkl api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - correct typo in mkl matmul op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - take into account empty arrays in mkl matmul op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - fix bug in mkl matmul and group all matmul tests in one file

Signed-off-by: Yurii <iuriish@yahoo.com>
2020-02-18 08:58:01 +03:00
raver119 f9d51b7278
More compilation units (#246)
* weird edge case

Signed-off-by: raver119 <raver119@gmail.com>

* weird edge case

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of it

Signed-off-by: raver119 <raver119@gmail.com>

* crop and resize reorganized

Signed-off-by: raver119 <raver119@gmail.com>

* restore test

Signed-off-by: raver119 <raver119@gmail.com>

* remove unwanted unit refs in cmake

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-17 10:23:05 +03:00
Yurii Shyrma 011c272fde
Shyrma transpose (#244)
* - provide contiguous strides for output in transpose op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide contiguous strides for output in permute op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - take into account empty shapes properly in transpose/permute op

Signed-off-by: Yurii <iuriish@yahoo.com>
2020-02-17 08:04:28 +03:00
raver119 9e3c1b02b1
Perf improvements (#242)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* better ExpandDims impl

Signed-off-by: raver119 <raver119@gmail.com>

* better Squeeze impl

Signed-off-by: raver119 <raver119@gmail.com>

* better Softmax impl

Signed-off-by: raver119 <raver119@gmail.com>

* one test disabled

Signed-off-by: raver119 <raver119@gmail.com>

* more accurate impl

Signed-off-by: raver119 <raver119@gmail.com>

* - GraphProfiler now prints full shapeInfo instead of shape
- softmax typo fix

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-14 16:20:31 +03:00
raver119 3de3cd8277
R119 tests (#238)
* one small test

Signed-off-by: raver119 <raver119@gmail.com>

* one small test

Signed-off-by: raver119 <raver119@gmail.com>

* bert test

Signed-off-by: raver119 <raver119@gmail.com>

* Graph FlowPath fix

Signed-off-by: raver119 <raver119@gmail.com>

* - GraphProfiler tweaks
- NodeProfile now includes shapes

Signed-off-by: raver119 <raver119@gmail.com>

* RELU_layer inplace tweak

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* identity tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* bert result validation

Signed-off-by: raver119 <raver119@gmail.com>

* - bunch of Shape ops have inplace exec forbidden now
- Legacy ops have inplace exec disabled by default now

Signed-off-by: raver119 <raver119@gmail.com>

* ffast-math enabled

Signed-off-by: raver119 <raver119@gmail.com>

* ffast-math enabled

Signed-off-by: raver119 <raver119@gmail.com>

* allow some legacy ops to be inplace

Signed-off-by: raver119 <raver119@gmail.com>

* disable -fast_math

Signed-off-by: raver119 <raver119@gmail.com>

* disable expensive test for cuda

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-13 20:59:35 +03:00
Yurii Shyrma fe47f52896
Oleh tenzor mmul (#231)
* Libnd4j: TensorMMul backprop op #8174, raw implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: TensorMMul backprop op #8174 merge master and some corrections

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: TensorMMul backprop op #8174 algorithm update, need testing, sync with master

* Libnd4j: TensorMMul backprop op #8174 fixed incorrect B axes calculation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: TensorMMul backprop op #8174 optimize axes identification and fix bug of overlapping indices, added first test, need testing with different shapes

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: TensorMMul backprop op #8174 some fixes and improvements need more testing

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: TensorMMul backprop op #8174 fixed order of matrix multiply

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: TensorMMul backprop op #8174 fixed issue of incorrect axes definition, add tests based on TF, need additional testing for case dLdC not equal 1

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: TensorMMul backprop op #8174 fixed scalar case add test

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: TensorMMul backprop op #8174 fixed bp algorithm and axes definition, need some more testing with different order combinations f,c; c,f; f,f and add some checks for inputs

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: TensorMMul backprop op #8174 added some checks, corrections and tests; a problem remains with support for different input orders (A-f B-c and A-f B-f)

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: TensorMMul backprop op #8174 sync master

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* - correct bug in MmulHelper::tensorDot(a, b, c, axes_a, axes_b, permutForC)

Signed-off-by: Yurii <iuriish@yahoo.com>

* Libnd4j: TensorMMul backprop op #8174 code clean up and refactoring

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* - add check for linspace ordered permutations in ShapeUtils::evalShapeForTensorDot

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide additional code in shape::reshape stuff in order to reduce amount of allocation/copy operations during reshaping procedure

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further work on problem of wrong shape evaluation during permute/reshape procedures

Signed-off-by: Yurii <iuriish@yahoo.com>

* - still looking for bug reason in reshape/permute stuff

Signed-off-by: Yurii <iuriish@yahoo.com>

* - correct bug in transform cuda native ops

Signed-off-by: Yurii <iuriish@yahoo.com>

* - correct bug in NDArray::assign

Signed-off-by: Yurii <iuriish@yahoo.com>

* - remove old shape::reshape stuff

Signed-off-by: Yurii <iuriish@yahoo.com>

* - add possibility to disable copy of old buffer to new buffer during reshape operation in NDArray class

Signed-off-by: Yurii <iuriish@yahoo.com>

* - correct bug in tensorDot which had to do with wrong pointer assignments

Signed-off-by: Yurii <iuriish@yahoo.com>

Co-authored-by: Oleh <oleg.semeniv@gmail.com>
2020-02-13 20:33:54 +03:00
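For context on the TensorMMul backprop work above, the rank-2 special case reduces to the textbook matrix-multiply gradients; the general tensordot case applies the same idea after permuting and grouping the contracted and free axes:

```latex
C = A B \;\Rightarrow\;
\frac{\partial L}{\partial A} = \frac{\partial L}{\partial C}\, B^{\top},
\qquad
\frac{\partial L}{\partial B} = A^{\top}\, \frac{\partial L}{\partial C}.
```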
shugeo f0c684020f
Shugeo resize area fix4 (#229)
* Fixed a couple of issues with resize_area op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added additional test for alternate params for resize_area testing.

Signed-off-by: shugeo <sgazeos@gmail.com>
2020-02-12 19:02:42 +03:00
raver119 237c137166 few more smaller compilation units (#226)
Signed-off-by: raver119 <raver119@gmail.com>
2020-02-10 10:57:18 +03:00
raver119 8a0d5e3b97
Compilation units (#224)
* - TrueBroadcastHelper split into multiple compilation units
- legacy gemm.cpp disabled

Signed-off-by: raver119 <raver119@gmail.com>

* - IndexReduce int32/int64 split into multiple compilation units

Signed-off-by: raver119 <raver119@gmail.com>

* - Reduce3 ops split into multiple compilation units

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-09 19:48:32 +03:00
Abdelrauf bead656feb
Initial performance improvement for Bias Add and etc #8556 (#217)
* Initial performance improvement for Bias Add, loop coords helpers and increment aligned parallel threading

Signed-off-by: AbdelRauf <rauf@konduit.ai>

* One more test for Rauf

Signed-off-by: raver119 <raver119@gmail.com>

* disable couple of perf tests

Signed-off-by: raver119 <raver119@gmail.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-02-08 15:31:30 +03:00
Yurii Shyrma 948646b32d
Shyrma mkl test (#211)
* - provide nhwc format in mkl conv ops

Signed-off-by: Yurii <iuriish@yahoo.com>

* - corrections in mkl conv3d

Signed-off-by: Yurii <iuriish@yahoo.com>

* - corrections in mkl batchnorm

Signed-off-by: Yurii <iuriish@yahoo.com>

* - corrections in mkl maxpooling2d

Signed-off-by: Yurii <iuriish@yahoo.com>

* - add format format_tag::any to outputs in mkl conv ops

Signed-off-by: Yurii <iuriish@yahoo.com>

* - complete corrections in mkl conv ops

Signed-off-by: Yurii <iuriish@yahoo.com>

* - add test for comparison of execution speeds of mkl conv2d op with different weights format

Signed-off-by: Yurii <iuriish@yahoo.com>

* - take into account order f in mkl conv ops

Signed-off-by: Yurii <iuriish@yahoo.com>
2020-02-06 21:12:54 +03:00
shugeo 5ae40f6e38
Shugeo sequence mask fix2 (#216)
* Fixed sequence_mask op and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Cuda fix for sequence_mask op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed sequence_mask op for both platforms and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed solve and triangular_solve for more than 2D for adjoint cases.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added adjoint solve test again.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added a set of tests for triangular_solve and generic solve ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added a pair tests for triangular_solve

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added tests for triangular_solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>
2020-02-06 21:06:50 +03:00
raver119 5d28e6143d
OpContext handling (#214)
* nano tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* OpContext tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* OpContext deallocators

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of few mkldnn safety checks

Signed-off-by: raver119 <raver119@gmail.com>

* databuffer setSpecial fix

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-05 07:27:24 +03:00
shugeo 41ff907bc6
Shugeo solve linear (#191)
* linear equations systems solve op. Initial commit.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed compiling issues.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Linear equations systems solve. The next stage commit.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added test for linear equations systems solve operation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added additional test and fixed lower matrix retrieval.

* Implementation of solve for systems of linear equations.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored permutation generation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added restore for permutations batched with cuda helper for solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Finished cuda implementation for solve op helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored cpu helpers for solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fix gtest output on Windows

* Fixed issue with permutation matrix for cuda implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed issue with permutation matrix for cpu implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Eliminated unnecessary comments.

Signed-off-by: shugeo <sgazeos@gmail.com>

* LinearSolve added

* Mapping added

* Javadoc added

* Refactored implementation of triangular_solve helpers and tests for solving matrix equations generally.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added a test for solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Solve test added

* Fix for TF import

Co-authored-by: Serhii Shepel <9946053+sshepel@users.noreply.github.com>
Co-authored-by: raver119 <raver119@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-02-04 08:59:11 +03:00
raver119 9bb5798cac
Null arrays fix (#208)
* don't skip null arrays

Signed-off-by: raver119 <raver119@gmail.com>

* one test tweak

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-02 23:14:00 +03:00
Oleh d52e67209e
Oleh convert (#200)
* StringUtils for utf convertor raw implementation of all possible combinations, need to add a counter of bytes per symbol for any type and an api to call convertors and store data

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor more corrections to support convertors

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor some corrections and bug fixes, need review to discuss how to add multi-threading

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 some corrections to move to multi-threading, added one test; need to discuss the presentation of data inputs/outputs arrays and the way of multi-threading

* StringUtils for utf convertor #8613 tests added some corrections to optimize build

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 some corrections and code clean up

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 code clean up and usage optimization, need to update ndarray factory before replacing std usage

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 some stuff to integrate converters into NDArrayFactory, update tests and add some functionality

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 minor corrections and bug fix before discussion

* StringUtils for utf convertor #8613 some fixes and tests

* StringUtils for utf convertor #8613 some more stuff to support different unicode

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 fix linking bug

* StringUtils for utf convertor #8613 corrected several tests as defaults for string ndarray changed

* StringUtils for utf convertor #8613 replace some incorrect implementation, revert some test changes, need sync before testing

* StringUtils for utf convertor #8613 fixed several things that were badly implemented yesterday, need optimization and testing (before testing, support for u32 and u16 buffer visualization has to be added)

* StringUtils for utf convertor #8613 fixed to support u16 and u32, and convertor in ndarray, fix buffer print, etc

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 merge master and sync with server

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 some corrections for string cast, need print check (only ascii support)

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 merge master, remove copies and add cast, need test, refactoring according review and clean up

* StringUtils for utf convertor #8613 fixed cast and copy issues

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 fixed cuda and update tests

* StringUtils for utf convertor #8613 integration into NdArray, fix several tests for build pass, refactoring, etc

* - avoid ambiguity of NDArray ctrs overloading in some tests

Signed-off-by: Yurii <iuriish@yahoo.com>

* StringUtils for utf convertor #8613 NDArray string constructors added, updated NDArrayFactory, refactoring unicode and tests, etc

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 fixed cuda build and test, refactoring and void* added to some functions

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613  void* integration, removed copy operation, refactoring, added tests for NDArray string constructors, etc

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 several more fixes, improvements and updates

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 master merge, code clean up and optimization before review

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 minor fixes string element size define

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 revert last changes as mistake

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 fixed NDArray constructor build problem, remove order from string factory, fixed order use for factory via project, added catch of incorrect sync in cast of arrays to data types, fixed e method for strings, etc

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 added javacpp hack, added multi-threading, minor corrections in license agreement

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* StringUtils for utf convertor #8613 windows builds fix, as "string" is not treated as utf8

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

Co-authored-by: Yurii Shyrma <iuriish@yahoo.com>
2020-01-31 16:30:49 +03:00
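A small sketch of the "counter of bytes per symbol" step the convertor commits mention: before allocating the target buffer, the number of UTF-8 bytes per code point has to be known. The rule below is the standard UTF-8 encoding rule; utf8ByteLength is a hypothetical helper name, not the StringUtils API.

```cpp
// Number of bytes a Unicode code point needs when encoded as UTF-8.
// Standard encoding rule, shown only to illustrate per-symbol byte counting.
static int utf8ByteLength(char32_t cp) {
    if (cp < 0x80)    return 1;   // ASCII
    if (cp < 0x800)   return 2;
    if (cp < 0x10000) return 3;
    return 4;                     // up to U+10FFFF
}
```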
raver119 1ab86d1306
Range op data type (#204)
* - range op now accepts dargs
- dargs now can be in signature

Signed-off-by: raver119 <raver119@gmail.com>

* range dtype java side

Signed-off-by: raver119 <raver119@gmail.com>

* linspace fix

Signed-off-by: raver119 <raver119@gmail.com>

* lin_space fix for scalar outputs

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-31 10:45:40 +03:00
raver119 5d98cfcf47
Configurable DataType for ops (#201)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* - one more test for OneHot with dtype
- one more signature in Nd4j

Signed-off-by: raver119 <raver119@gmail.com>

* ones_as/zeros_as now accept dtype

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* - more updates for configurable data types
- ones_as/zeros_as java side + tests

Signed-off-by: raver119 <raver119@gmail.com>

* few c++ tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* few more changes around DArgs

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-30 18:46:12 +03:00
raver119 ba961c7601
DataTypes & FlatBuffers (#197)
* flatbuffers version upgrade

Signed-off-by: raver119 <raver119@gmail.com>

* flatbuffers version upgrade java side

Signed-off-by: raver119 <raver119@gmail.com>

* flatbuffers dependency version upgrade java side

Signed-off-by: raver119 <raver119@gmail.com>

* MKLDNN version upgrade

Signed-off-by: raver119 <raver119@gmail.com>

* DArgs first pass

Signed-off-by: raver119 <raver119@gmail.com>

* signatures first pass

Signed-off-by: raver119 <raver119@gmail.com>

* signatures second pass

Signed-off-by: raver119 <raver119@gmail.com>

* signatures third pass

Signed-off-by: raver119 <raver119@gmail.com>

* signatures third pass

Signed-off-by: raver119 <raver119@gmail.com>

* signatures fourth pass

Signed-off-by: raver119 <raver119@gmail.com>

* signatures fifth pass

Signed-off-by: raver119 <raver119@gmail.com>

* flatbuffers UI version upgrade java side

Signed-off-by: raver119 <raver119@gmail.com>

* flatbuffers ui update

Signed-off-by: raver119 <raver119@gmail.com>

* flatbuffers downgrade

Signed-off-by: raver119 <raver119@gmail.com>

* flatbuffers downgrade java side

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-30 10:07:24 +03:00
Yurii Shyrma 7a7ee4b021 Shyrma cudnn (#192)
* - implementation of cudnn batchnorm_bp op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - testing and fixing bugs in batchnorm_bp based on cudnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - move pooling mkl code and delete some unnecessary files

Signed-off-by: Yurii <iuriish@yahoo.com>

* - implementation and testing cudnn pooling2d ops (avg/max, ff/bp)

Signed-off-by: Yurii <iuriish@yahoo.com>

* - implementation and testing cudnn pooling 3d (ff/bp) ops

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide ff step in case of cudnn maxpool3d_bp op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - remove half type from set of supported types in mkl depthwise conv op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - bring back cudaStreamSynchronize in batchnorm and pooling cudnn ops

Signed-off-by: Yurii <iuriish@yahoo.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-01-28 18:23:07 +03:00
shugeo 99a54829c2 Shugeo resize area fix2 (#181)
* Added test for issue with resize_area op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added a pair of tests for resize_area op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored resize_area kernel to avoid shared memory overflow.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Eliminated prints with tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* ignore bad test

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed test with resize_area.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed test for float constants.

Signed-off-by: shugeo <sgazeos@gmail.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-01-24 20:55:25 +03:00
raver119 5d69069177
[WIP] Memory limits (#167)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* one more initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* additional initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* subsequent initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit testing

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit per device

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit per group

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit for cuda

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit for cuda + few missed lines

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit for cuda + missed includes

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit for cuda + one more missed include

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit shouldn't count host mem as dev0 in cuda

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit that tracks HOST group limits for CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit with some Environment changes

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit with more Environment changes

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit with maxMasterThreads fix

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit with maxMasterThreads fix

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit without maxMasterThreads exception

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit without Nd4jULong in Environment

Signed-off-by: raver119 <raver119@gmail.com>

* add sleep and more iterations for OOM cases

Signed-off-by: raver119 <raver119@gmail.com>

* limits propagation from java side

Signed-off-by: raver119 <raver119@gmail.com>

* - consume ErrorCode every time
- one test for memory limits

Signed-off-by: raver119 <raver119@gmail.com>

* unordered_map

Signed-off-by: raver119 <raver119@gmail.com>

* unordered_map

Signed-off-by: raver119 <raver119@gmail.com>

* unordered_map

Signed-off-by: raver119 <raver119@gmail.com>

* RSub op mapping fixed

Signed-off-by: raver119 <raver119@gmail.com>

* typo fixed

Signed-off-by: raver119 <raver119@gmail.com>

* one bad test fixed

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-24 10:11:09 +03:00
shugeo 2717b25931 Shugeo qr (#153)
* Added qr op implementation. Initial version.

* Fixed doc for qr op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Implementation of QR decomposition. CPU platform version.

* Added a pair of tests for qr op testing.

Signed-off-by: shugeo <sgazeos@gmail.com>

* QR implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected norm usage.

* Properly calculated intermediate results with QR decomposition.

* Another step to implement QR algorithm by householder.

* Cpu implementation for QR decomposition. The first working edition.

* Corrected test to QR decomposition.

* Added tad multithreading with QR implementation.

* Finished cpu implementation for QR decomposition helpers.

* Refactored tests and improved multithreading.

* Refactored QR cpu implementation and update cuda implementation helpers.

* Cuda QR helper implementation. The first working edition.

* Eliminated unnecessary prints.

* Restore multithreading with cuda implementation.

* Ops names corrected

* Refactored qr op helpers to optimize.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Eliminated unnecessary manual ticking.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored memory allocation to avoid wasteful memory usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored matrixMinor method both for cuda and cpu platforms.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored method of vmul to use raw buffers instead of type conversion.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored temporary array of matrices.

Signed-off-by: shugeo <sgazeos@gmail.com>

Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
Co-authored-by: raver119 <raver119@gmail.com>
2020-01-22 13:59:36 +03:00
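As background for the Householder-based QR mentioned in the commits above, the decomposition is built from reflectors of the standard form; this states the textbook construction, not the helper's exact code path:

```latex
H = I - 2\,\frac{v v^{\top}}{v^{\top} v},
\qquad
A = Q R, \quad Q = H_1 H_2 \cdots H_k .
```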
shugeo 815a2908af Shugeo solve triangular (#173)
* Added implementation of the triangular_solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed compilation issues.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added verification of input data and helper facilities for triangular_solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added cpu implementation for triangular_solve helpers.

* Added tests and implementation for upper triangular equations.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added a pair of cases to tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added multithreading with cpu helpers for triangular_solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added cuda implementation of triangular_solve op helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Finished cuda implementation of triangular_solve helpers and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed copyright marks.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected grammar errors with doc and error messages.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored matrices processing with triangular_solve cuda helper implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added triangular_solve wrapper

* Fixed mapping

* Added processing for adjoint with cpu helpers of triangular_solve op implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added implementation for adjoint routine with cuda platform.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added multithreading with adjoint routine for cpu platform.

Signed-off-by: shugeo <sgazeos@gmail.com>

Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-01-22 10:48:03 +03:00
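For reference, the lower-triangular case handled by triangular_solve is plain forward substitution. The sketch below solves a single dense system under the assumption of a non-zero diagonal; it is a generic illustration, not the batched, multithreaded helper itself.

```cpp
#include <vector>

// Solve L x = b for a lower-triangular, row-major n x n matrix L by
// forward substitution; assumes a non-zero diagonal.
std::vector<double> forwardSubstitute(const std::vector<double>& L,
                                      const std::vector<double>& b, int n) {
    std::vector<double> x(n);
    for (int i = 0; i < n; ++i) {
        double sum = b[i];
        for (int j = 0; j < i; ++j)
            sum -= L[i * n + j] * x[j];   // subtract already-solved components
        x[i] = sum / L[i * n + i];
    }
    return x;
}
```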
shugeo e50b285c2c Shugeo resize area (#162)
* Added implementation for resize_area op. Initial commit.

* Added implementation of resize_area op. Initial revision.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected resizeArea functor call.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Implementation of resize_area. Cpu platform helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Implementation for resize_area helpers. The first part revision.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added a set of tests for resize_area op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Cuda implementation for resize_area. Initial approach.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adding multithreading for resize_area algorithm.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Cuda implementation of resize_area helpers. Shared memory approach.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored resizeAreaKernel with cuda implementation.

* Eliminated compilation errors.

* ResizeArea helpers for cuda platform. The first working revision.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added test for batched resize_area op testing.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Implementation of resize_area for cuda platform and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed multithreading with resize_area op helper.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected copyright marks with sources.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected copyright mark for resize_area op implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected copyright mark for parity ops header.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected typo in strings and so on with image resize ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored resize_area helpers and multithreading.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added ResizeArea wrapper

* Added test with align_corners and fixed shape processing with only int args given for output size.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added test

* TF mapping for ResizeArea

* Fixed implementation issues with resize_area op for both platforms.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored image resizer struct to use flexible types for ints and floats.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Improved multithreading with resizeAreaKernel launch.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Use asynchronous memory copying with cuda platform image resize allocations.

Signed-off-by: shugeo <sgazeos@gmail.com>

Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-01-22 10:46:33 +03:00
raver119 7783012f39
cuDNN integration (#150)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* one file

Signed-off-by: raver119 <raver119@gmail.com>

* few more includes

Signed-off-by: raver119 <raver119@gmail.com>

* m?

Signed-off-by: raver119 <raver119@gmail.com>

* const

Signed-off-by: raver119 <raver119@gmail.com>

* cudnn linkage in tests

Signed-off-by: raver119 <raver119@gmail.com>

* culibos

Signed-off-by: raver119 <raver119@gmail.com>

* static reminder

Signed-off-by: raver119 <raver119@gmail.com>

* platform engine tag

Signed-off-by: raver119 <raver119@gmail.com>

* HAVE_CUDNN moved to config.h.in

Signed-off-by: raver119 <raver119@gmail.com>

* include

Signed-off-by: raver119 <raver119@gmail.com>

* include

Signed-off-by: raver119 <raver119@gmail.com>

* skip cudnn handle creation if there's no cudnn

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* target device in context

Signed-off-by: raver119 <raver119@gmail.com>

* platform engines

Signed-off-by: raver119 <raver119@gmail.com>

* platform engines

Signed-off-by: raver119 <raver119@gmail.com>

* allow multiple -h args

Signed-off-by: raver119 <raver119@gmail.com>

* allow multiple -h args

Signed-off-by: raver119 <raver119@gmail.com>

* move mkldnn out of CPU block

Signed-off-by: raver119 <raver119@gmail.com>

* link to mkldnn on cuda

Signed-off-by: raver119 <raver119@gmail.com>

* less prints

Signed-off-by: raver119 <raver119@gmail.com>

* minor tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* next step

Signed-off-by: raver119 <raver119@gmail.com>

* conv2d NCHW draft

Signed-off-by: raver119 <raver119@gmail.com>

* conv2d biasAdd

Signed-off-by: raver119 <raver119@gmail.com>

* test for MKL/CUDNN combined use

Signed-off-by: raver119 <raver119@gmail.com>

* - provide additional code for conv2d ff based on cudnn api, not tested yet

Signed-off-by: Yurii <iuriish@yahoo.com>

* - further work on conv2d helper based on using cudnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - fixing several cuda bugs which appeared after cudnn lib had been started to use

Signed-off-by: Yurii <iuriish@yahoo.com>

* - implementation of conv2d backprop op based on cudnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - implementation of conv3d and conv3d_bp ops based on cudnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - bugs fixing in conv3d/conv3d_bp ops (cudnn in use)

Signed-off-by: Yurii <iuriish@yahoo.com>

* - implementation of depthwiseConv2d (ff/bp) op based on cudnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - implementation of batchnorm ff op based on cudnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - disable cudnn batchnorm temporary

Signed-off-by: Yurii <iuriish@yahoo.com>

* - add minor change in cmake

Signed-off-by: Yurii <iuriish@yahoo.com>

* engine for depthwise mkldnn

Signed-off-by: raver119 <raver119@gmail.com>

* couple of includes

Signed-off-by: raver119 <raver119@gmail.com>

* - provide permutation to cudnn batchnorm ff when format is NHWC

Signed-off-by: Yurii <iuriish@yahoo.com>

* lgamma fix

Signed-off-by: raver119 <raver119@gmail.com>

* - eliminate memory leak in two tests

Signed-off-by: Yurii <iuriish@yahoo.com>

Co-authored-by: Yurii Shyrma <iuriish@yahoo.com>
2020-01-20 21:32:46 +03:00
Oleh 8fc0e63ce7 Oleh powderev (#171)
* Libnd4j: Add broadcastable elementwise power derivative #7461 first step of Pow_bp operation implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 some corrections of calculation steps

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 some bug fixes, the PowDerivative op made broadcastable, add the raw tests for op, need refactoring to use broadcast ops

* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed several bugs, added broadcast support and tests, need to fix scalar+array and array+scalar

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed bugs for scalar inputs, fixed multinomial tests, added tests

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed bugs for different shape support, tests updated

* Libnd4j: Add broadcastable elementwise power derivative #7461 applied all possible variants via tiled arrays, added broadcast support for Pow and PowDerivative ops, covered by tests; before review the tiled implementation has to be replaced by applyTrueBroadcast

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 replaced tile by broadcast implementation, fixed issue with negative x input, corrected tests, need additional testing

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 added and corrected test cases, corrected implementation need review

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 code clean up

* Libnd4j: Add broadcastable elementwise power derivative #7461 code clean up, removed some tests, add tests with scalar

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 code improvement and clean up, split tests

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 some code clean up

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative replace __isnanf by internal realization

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* pow_bp wrapper

* Fixed PowBp wrapper

* Tests added

* Test fixed

* Fix return type

* Disable powBp usage

* Pow backprop changed

Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-01-20 12:59:12 +03:00
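The element-wise derivatives behind the broadcastable PowDerivative op above are the textbook ones; broadcasting then only requires reducing the gradients back to the original input shapes:

```latex
z = x^{y}: \qquad
\frac{\partial z}{\partial x} = y\, x^{\,y-1},
\qquad
\frac{\partial z}{\partial y} = x^{y} \ln x .
```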
shugeo 6943a5f57a Shugeo lgamma (#170)
* lgamma op. Initial version.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored lgamma op and test.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Lgamma wrapper

* Added TF mapping

Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-01-20 12:29:36 +03:00
Yurii Shyrma cae5ef4180 Shyrma depthconv (#156)
* - implementation of depthwise_conv2d (both ff/bp) based on mkl dnn api

* - minor corrections in deconv3d

Signed-off-by: Yurii <iuriish@yahoo.com>

* - remove unnecessary time test

Signed-off-by: Yurii <iuriish@yahoo.com>

* - update mkl dnn version in cmake

Signed-off-by: Yurii <iuriish@yahoo.com>

* - take into account several notes given by pr reviewer

Signed-off-by: Yurii <iuriish@yahoo.com>

* - fix bug in depthwise conv2d op based on mkl

Signed-off-by: Yurii <iuriish@yahoo.com>
2020-01-11 07:36:40 +03:00
Yurii Shyrma bbf88b53dd - fix wrong calculation of elements offsets in batchnorm op when input arrays have unusual (#169)
Signed-off-by: Yurii <iuriish@yahoo.com>
2020-01-11 00:14:20 +03:00
Oleh 2404be5fe0 Oleh multinomial (#163)
* libnd4j: Multinomial op #8570 first raw step of multinomial random data generator implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: Multinomial op #8570 next step of multinomial random categories generator implementation on both cpu and cuda, need corrections and code clean up before review and testing

* libnd4j: Multinomial op #8570 code clean up and fixed issues with data selecting, moved from coords to tads

* libnd4j: Multinomial op #8570 fixed cuda build, added reference for math materials that were used for implementation

* libnd4j: Multinomial op #8570 fixed several bugs, added several tests and improved cuda version. current implementation works, need testing of reproduction with the same seed

* libnd4j: Multinomial op #8570 fixes and optimization after discussion in both cuda and cpu

* libnd4j: Multinomial op #8570 add corrections after review, removed tads, replace 2D parallel loop by 3D

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: Multinomial op fixed declaration and added tests, needs discussion

* libnd4j: Multinomial op fix in test

* libnd4j: Multinomial op corrected behavior to get reproducible results, fixed issue in uniform value getting, tests added, need cuda review and cuda testing

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: Multinomial op fixed indexing on uniform calculation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: Multinomial op some corrections in max min declaration

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: Multinomial op fixed index calculation, added rewind, corrected input declaration, added stats tests, both cuda and cpu. cuda need testing

* libnd4j: Multinomial op fixed bugs on cuda and cpu. need review

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: Multinomial op corrected tests to handle different orders

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: Multinomial op some improvements after code review

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: Multinomial op more corrections after review

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: Multinomial op fixed seed usage, update tests, fixed cuda based on comments, fixed bug of rewind, removed one behavior, minor corrections.

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: Multinomial op minor corrections

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: Multinomial op raised the bound of fluctuation for random cases

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: Multinomial op modified operation inputs and update implementation and tests on both cpu and cuda

* libnd4j: Multinomial op corrected data types according to ops.proto

Co-authored-by: raver119 <raver119@gmail.com>
2020-01-06 22:35:05 +03:00
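A minimal sketch of drawing one categorical sample from unnormalized weights, the core step a multinomial generator repeats per output element; this is a generic illustration (the op itself works over batches with libnd4j's own RNG so that a fixed seed reproduces results).

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Draw a single class index with probability proportional to weights[i].
int sampleCategorical(const std::vector<double>& weights, std::mt19937_64& rng) {
    double total = 0.0;
    for (double w : weights) total += w;
    std::uniform_real_distribution<double> uni(0.0, total);
    const double u = uni(rng);
    double cum = 0.0;
    for (std::size_t i = 0; i < weights.size(); ++i) {
        cum += weights[i];
        if (u <= cum) return static_cast<int>(i);
    }
    return static_cast<int>(weights.size()) - 1;   // guard against rounding at the end
}
```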
raver119 d9ef5e2467
Minor fixes (#165)
* ios-arm excluded

Signed-off-by: raver119 <raver119@gmail.com>

* histogram single threaded

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-04 15:27:16 +03:00
raver119 29e8e09db6
String changes (#3)
* initial commit

* additional data types & tensor type

Signed-off-by: raver119 <raver119@gmail.com>

* next step

Signed-off-by: raver119 <raver119@gmail.com>

* missing include

* sparse_to_dense

Signed-off-by: raver119 <raver119@gmail.com>

* few more tests files

Signed-off-by: raver119 <raver119@gmail.com>

* draft

Signed-off-by: raver119 <raver119@gmail.com>

* numeric sparse_to_dense

Signed-off-by: raver119 <raver119@gmail.com>

* comment

Signed-off-by: raver119 <raver119@gmail.com>

* string sparse_to_dense version

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA DataBuffer expand

Signed-off-by: raver119 <raver119@gmail.com>

* few tweaks for CUDA build

Signed-off-by: raver119 <raver119@gmail.com>

* shape fn for string_split

Signed-off-by: raver119 <raver119@gmail.com>

* one more comment

Signed-off-by: raver119 <raver119@gmail.com>

* string_split indices

Signed-off-by: raver119 <raver119@gmail.com>

* next step

Signed-off-by: raver119 <raver119@gmail.com>

* test passes

Signed-off-by: raver119 <raver119@gmail.com>

* few rearrangements for databuffer implementations

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer: move inline methods to common implementations

Signed-off-by: raver119 <raver119@gmail.com>

* add native DataBuffer to Nd4j presets

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer creation

Signed-off-by: raver119 <raver119@gmail.com>

* use DataBuffer for allocation

Signed-off-by: raver119 <raver119@gmail.com>

* cpu databuffer as deallocatable

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer setters for bufers

Signed-off-by: raver119 <raver119@gmail.com>

* couple of wrappers

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffers being passed around

Signed-off-by: raver119 <raver119@gmail.com>

* Bunch of ByteBuffer-related signatures gone

Signed-off-by: raver119 <raver119@gmail.com>

* - few more Nd4j signatures removed
- minor fix for bfloat16

Signed-off-by: raver119 <raver119@gmail.com>

* nullptr pointer is still a pointer, but 0 as address :)

Signed-off-by: raver119 <raver119@gmail.com>

* one special test

Signed-off-by: raver119 <raver119@gmail.com>

* empty string array init

Signed-off-by: raver119 <raver119@gmail.com>

* one more test in cpp

Signed-off-by: raver119 <raver119@gmail.com>

* memcpy instead of databuffer swap

Signed-off-by: raver119 <raver119@gmail.com>

* special InteropDataBuffer for front-end languages

Signed-off-by: raver119 <raver119@gmail.com>

* few tweaks for java

Signed-off-by: raver119 <raver119@gmail.com>

* pointer/indexer actualization

Signed-off-by: raver119 <raver119@gmail.com>

* CustomOp returns list for inputArguments and outputArguments instead of array

Signed-off-by: raver119 <raver119@gmail.com>

* redundant call

Signed-off-by: raver119 <raver119@gmail.com>

* print_variable op

Signed-off-by: raver119 <raver119@gmail.com>

* - view handling (but wrong one)
- print_variable java wrapper

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* - empty arrays handling

Signed-off-by: raver119 <raver119@gmail.com>

* - deserialization works now

Signed-off-by: raver119 <raver119@gmail.com>

* minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* one more fix

Signed-off-by: raver119 <raver119@gmail.com>

* initial cuda commit

Signed-off-by: raver119 <raver119@gmail.com>

* print_variable message validation

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA views

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA special buffer size

Signed-off-by: raver119 <raver119@gmail.com>

* minor update to match master changes

Signed-off-by: raver119 <raver119@gmail.com>

* - treat arrays as always up to date on device for CUDA
- additional PrintVariable constructor
- CudaUtf8Buffer now allocates host buffer by default

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* - print_variable now allows printing from device

Signed-off-by: raver119 <raver119@gmail.com>

* InteropDataBuffer data type fix

Signed-off-by: raver119 <raver119@gmail.com>

* ...

Signed-off-by: raver119 <raver119@gmail.com>

* disable some debug messages

Signed-off-by: raver119 <raver119@gmail.com>

* master pulled in

Signed-off-by: raver119 <raver119@gmail.com>

* couple of new methods for DataBuffer interop

Signed-off-by: raver119 <raver119@gmail.com>

* java side

Signed-off-by: raver119 <raver119@gmail.com>

* constructor with offset

Signed-off-by: raver119 <raver119@gmail.com>

* new CUDA deallocator

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA backend torn apart

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA backend torn apart 2

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA backend torn apart 3

Signed-off-by: raver119 <raver119@gmail.com>

* - few new tests
- few new methods for DataBuffer management

Signed-off-by: raver119 <raver119@gmail.com>

* few more tests + few more tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* two failing tests

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* two failing tests pass

Signed-off-by: raver119 <raver119@gmail.com>

* now we pass DataBuffer to legacy ops too

Signed-off-by: raver119 <raver119@gmail.com>

* Native DataBuffer for legacy ops, Java side

Signed-off-by: raver119 <raver119@gmail.com>

* CPU java side update

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA java side update

Signed-off-by: raver119 <raver119@gmail.com>

* no more prepare/register action on java side

Signed-off-by: raver119 <raver119@gmail.com>

* NDArray::prepare/register now accepts vectors

Signed-off-by: raver119 <raver119@gmail.com>

* InteropDataBuffer now has few more convenience methods

Signed-off-by: raver119 <raver119@gmail.com>

* java bindings update

Signed-off-by: raver119 <raver119@gmail.com>

* tick device in NativeOps

Signed-off-by: raver119 <raver119@gmail.com>

* Corrected usage of OpaqueBuffer for tests.

* Corrected usage of OpaqueBuffer for java tests.

* NativeOpsTests fixes.

* print_variable now returns scalar

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* compat_string_split fix for CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* - CUDA execScalar fix
- CUDA lazyAllocateHostPointer now checks java indexer/pointer instead of native pointer

Signed-off-by: raver119 <raver119@gmail.com>

* legacy ops DataBuffer migration prototype

Signed-off-by: raver119 <raver119@gmail.com>

* ignore device shapeinfo coming from java

Signed-off-by: raver119 <raver119@gmail.com>

* minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* minor transformAny fix

Signed-off-by: raver119 <raver119@gmail.com>

* minor tweak for lazy host allocation

Signed-off-by: raver119 <raver119@gmail.com>

* - DataBuffer::memcpy method
- bitcast now uses memcpy

Signed-off-by: raver119 <raver119@gmail.com>

* - IndexReduce CUDA dimension buffer fix

Signed-off-by: raver119 <raver119@gmail.com>

* views for CPU and CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* less spam

Signed-off-by: raver119 <raver119@gmail.com>

* optional memory init

Signed-off-by: raver119 <raver119@gmail.com>

* async memset

Signed-off-by: raver119 <raver119@gmail.com>

* - SummaryStats CUDA fix
- DataBuffer.sameUnderlyingData() impl
- execBroadcast fix

Signed-off-by: raver119 <raver119@gmail.com>

* - reduce3All fix
- switch to CUDA 10 temporarily

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA version

Signed-off-by: raver119 <raver119@gmail.com>

* proper memory deallocator registration

Signed-off-by: raver119 <raver119@gmail.com>

* HOST_ONLY workspace allocation

Signed-off-by: raver119 <raver119@gmail.com>

* temp commit

Signed-off-by: raver119 <raver119@gmail.com>

* few conflicts resolved

Signed-off-by: raver119 <raver119@gmail.com>

* few minor fixes

Signed-off-by: raver119 <raver119@gmail.com>

* one more minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* NDArray permute should operate on JVM primitives

Signed-off-by: raver119 <raver119@gmail.com>

* - create InteropDataBuffer for shapes as well
- update pointers after view creation in Java

Signed-off-by: raver119 <raver119@gmail.com>

* - addressPointer temporary moved to C++

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA: don't apply the offset twice

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA: DataBuffer pointer constructor updated

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA NDArray.unsafeDuplication() simplified

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA minor workspace-related fixes

Signed-off-by: raver119 <raver119@gmail.com>

* CPU DataBuffer.reallocate()

Signed-off-by: raver119 <raver119@gmail.com>

* print_affinity op

Signed-off-by: raver119 <raver119@gmail.com>

* print_affinity java side

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA more tweaks for data locality

Signed-off-by: raver119 <raver119@gmail.com>

* - compat_string_split tweak
- CudaUtf8Buffer update

Signed-off-by: raver119 <raver119@gmail.com>

* INDArray.close() mechanic restored

Signed-off-by: raver119 <raver119@gmail.com>

* one more test fixed

Signed-off-by: raver119 <raver119@gmail.com>

* - CUDA DataBuffer.reallocate() updated
- cudaMemcpy (synchronous) restored

Signed-off-by: raver119 <raver119@gmail.com>

* one last fix

Signed-off-by: raver119 <raver119@gmail.com>

* bad import removed

Signed-off-by: raver119 <raver119@gmail.com>

* another small fix

Signed-off-by: raver119 <raver119@gmail.com>

* one special test

Signed-off-by: raver119 <raver119@gmail.com>

* fix bad databuffer size

Signed-off-by: raver119 <raver119@gmail.com>

* release primaryBuffer on replace

Signed-off-by: raver119 <raver119@gmail.com>

* higher timeout

Signed-off-by: raver119 <raver119@gmail.com>

* disable timeouts

Signed-off-by: raver119 <raver119@gmail.com>

* dbCreateView now validates offset and length of a view

Signed-off-by: raver119 <raver119@gmail.com>

* additional validation for dbExpand

Signed-off-by: raver119 <raver119@gmail.com>

* restore timeout back again

Signed-off-by: raver119 <raver119@gmail.com>

* smaller distribution for rng test to prevent timeouts

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA DataBuffer::memcpy now always copies to device

Signed-off-by: raver119 <raver119@gmail.com>

* OpaqueDataBuffer now contains all required methods for interop

Signed-off-by: raver119 <raver119@gmail.com>

* some javadoc

Signed-off-by: raver119 <raver119@gmail.com>

* GC on failed allocations

Signed-off-by: raver119 <raver119@gmail.com>

* minor memcpy tweak

Signed-off-by: raver119 <raver119@gmail.com>

* one more bitcast test

Signed-off-by: raver119 <raver119@gmail.com>

* - NDArray::deviceId() propagation
- special multi-threaded test for data locality checks

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer additional syncStream

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer additional syncStream

Signed-off-by: raver119 <raver119@gmail.com>

* one ignored test

Signed-off-by: raver119 <raver119@gmail.com>

* skip host alloc for empty arrays

Signed-off-by: raver119 <raver119@gmail.com>

* ByteBuffer support is back

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer::memcpy minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* few minor prelu/bp tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* nullify-related fixes

Signed-off-by: raver119 <raver119@gmail.com>

* PReLU fixes (#157)

Signed-off-by: Alex Black <blacka101@gmail.com>

* Build fixed

* Fix tests

* one more ByteBuffer signature restored

Signed-off-by: raver119 <raver119@gmail.com>

* nd4j-jdbc-hsql profiles fix

Signed-off-by: raver119 <raver119@gmail.com>

* nd4j-jdbc-hsql profiles fix

Signed-off-by: raver119 <raver119@gmail.com>

* PReLU weight init fix

Signed-off-by: Alex Black <blacka101@gmail.com>

* Small PReLU fix

Signed-off-by: Alex Black <blacka101@gmail.com>

* - INDArray.migrate() reactivated
- DataBuffer::setDeviceId(...) added
- InteropDataBuffer Z syncToDevice added for views

Signed-off-by: raver119 <raver119@gmail.com>

* missed file

Signed-off-by: raver119 <raver119@gmail.com>

* Small tweak

Signed-off-by: Alex Black <blacka101@gmail.com>

* cuda 10.2

Signed-off-by: raver119 <raver119@gmail.com>

* minor fix

Signed-off-by: raver119 <raver119@gmail.com>

Co-authored-by: shugeo <sgazeos@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-01-04 13:27:50 +03:00
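The commit above mentions that "dbCreateView now validates offset and length of a view" before handing DataBuffer views across the interop boundary. Below is a hedged sketch of that kind of bounds check; the struct and function names are illustrative and are not the actual OpaqueDataBuffer / InteropDataBuffer API.

```cpp
#include <cstdint>
#include <stdexcept>

// Illustrative parent-buffer descriptor, not a libnd4j type.
struct SimpleBuffer {
    int64_t lengthInBytes;   // total size of the parent allocation
};

// Before exposing a view over a parent buffer, verify that the requested
// window fits entirely inside the parent's allocation.
void validateView(const SimpleBuffer& parent, int64_t offsetBytes, int64_t lengthBytes) {
    if (offsetBytes < 0 || lengthBytes < 0)
        throw std::invalid_argument("view offset/length must be non-negative");

    // the view must end within the parent buffer
    if (offsetBytes + lengthBytes > parent.lengthInBytes)
        throw std::out_of_range("view exceeds parent buffer boundaries");
}
```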
raver119 451d9d57fd
shape function override (#161)
Signed-off-by: raver119 <raver119@gmail.com>
2020-01-04 09:06:44 +03:00
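For the "shape function override" commit above: the idea behind a per-op shape function is to compute the output shape from the input shapes without executing the op. The sketch below only mimics that contract for a hypothetical element-wise op; it does not reproduce the library's shape-function registration macros.

```cpp
#include <vector>
#include <cstdint>

using Shape = std::vector<int64_t>;

// For an element-wise op the output shape simply mirrors the first input's
// shape; more complex ops would derive it from several inputs and arguments.
Shape elementWiseShapeFn(const std::vector<Shape>& inputShapes) {
    return inputShapes.empty() ? Shape{} : inputShapes.front();
}
```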
Alex Black 29104083cc
Various fixes (#143)
* #8568 ArrayUtil optimization

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #6171 Keras ReLU and ELU support

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Keras softmax layer import

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8549 Webjars dependency management

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for TF import names ':0' suffix issue / NPE

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* BiasAdd: fix default data format for TF import

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Update zoo test ignores

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8509 SameDiff Listener API - provide frame + iteration

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8520 ND4J Environment

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Deconv3d

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Deconv3d fixes + gradient check

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Conv3d fixes + deconv3d DType test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix issue with deconv3d gradient check weight init

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8579 Fix BaseCudaDataBuffer constructor fix for UINT16

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DataType.isNumerical() returns false for BOOL type

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8504 Reduce Spark log spam for tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Clean up DL4J gradient check test spam

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More Gradient check spam reduction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff test spam reduction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fixes for FlatBuffers mapping

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff log spam cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tests should extend BaseNd4jTest

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove debug line in c++ op

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ND4J test spam cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J test spam reduction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More Dl4J and datavec test spam cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for bad conv3d test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Additional test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Embedding layers: don't inherit global default activation function

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Trigger CI

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Consolidate all BaseDL4JTest classes to single class used everywhere; make timeout configurable per class

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Test fixes and timeout increases

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Timeouts and PReLU fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Restore libnd4j build threads arg for CUDA build

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Increase timeouts on a few tests to avoid spurious failures on some CI machines

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More timeout fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More test timeout fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tweak timeout for one more test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Final tweaks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* One more ignore

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2020-01-04 13:45:07 +11:00
shugeo fbf7c9d38b
Fixed lu for cuda platform and tests. (#158)
Signed-off-by: shugeo <sgazeos@gmail.com>
2020-01-02 23:25:41 +03:00
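As a reminder of what the lu op computes, here is a minimal CPU sketch of a Doolittle LU factorization without pivoting for a row-major n x n matrix. It is purely illustrative; the platform implementation fixed above (including the CUDA path) uses pivoting and a different layout.

```cpp
#include <vector>
#include <cstddef>

// Factor a into l (unit lower triangular) and u (upper triangular), both
// stored row-major. No pivoting, so a zero pivot would break this sketch.
void luDecompose(const std::vector<double>& a, std::vector<double>& l,
                 std::vector<double>& u, size_t n) {
    l.assign(n * n, 0.0);
    u.assign(n * n, 0.0);
    for (size_t i = 0; i < n; i++) {
        // upper triangular row i
        for (size_t k = i; k < n; k++) {
            double sum = 0.0;
            for (size_t j = 0; j < i; j++) sum += l[i * n + j] * u[j * n + k];
            u[i * n + k] = a[i * n + k] - sum;
        }
        // lower triangular column i (unit diagonal)
        for (size_t k = i; k < n; k++) {
            if (i == k) { l[i * n + i] = 1.0; continue; }
            double sum = 0.0;
            for (size_t j = 0; j < i; j++) sum += l[k * n + j] * u[j * n + i];
            l[k * n + i] = (a[k * n + i] - sum) / u[i * n + i];
        }
    }
}
```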
raver119 9b329d2601
[WIP] bias_add NHWC loop (#149)
* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* bias_add nhwc 4D

Signed-off-by: raver119 <raver119@gmail.com>

* bias_add nhwc 4D

Signed-off-by: raver119 <raver119@gmail.com>

* bias_add nhwc 4D

Signed-off-by: raver119 <raver119@gmail.com>

* bias_add nhwc 4D

Signed-off-by: raver119 <raver119@gmail.com>

* disable test

Signed-off-by: raver119 <raver119@gmail.com>
2019-12-24 20:56:49 +03:00
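For the "bias_add nhwc 4D" items in the commit above: with an NHWC layout [batch, height, width, channels], the bias vector of length channels is added along the innermost, contiguous dimension. The scalar loop below only shows that access pattern; the real kernel is templated over data types and parallelized.

```cpp
#include <cstdint>

// Illustrative NHWC bias_add: every spatial position owns a contiguous run
// of C channel values, so the bias can be added with a simple inner loop.
void biasAddNHWC(float* out, const float* in, const float* bias,
                 int64_t bS, int64_t H, int64_t W, int64_t C) {
    const int64_t positions = bS * H * W;
    for (int64_t i = 0; i < positions; i++) {
        const float* x = in  + i * C;
        float*       z = out + i * C;
        for (int64_t c = 0; c < C; c++)
            z[c] = x[c] + bias[c];  // channels are contiguous in NHWC
    }
}
```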
raver119 fc760de348
RgbToYuv & YuvToRgb skip empty arrays
Signed-off-by: raver119 <raver119@gmail.com>
2019-12-24 18:45:54 +03:00
Oleh 75123b0a4c
[WIP] Oleh rgb yuv (#147)
* libnd4j: RgbToYuv and YuvToRgb, implementations for both CPU and CUDA. Still needs tests and review

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: RgbToYuv and YuvToRgb, replaced the coords approach with TAD in both CPU and CUDA, added tests, fixed bugs

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: RgbToYuv and YuvToRgb minor corrections

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j: RgbToYuv and YuvToRgb corrections to use operations in-place
2019-12-24 18:30:54 +03:00
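For the RgbToYuv / YuvToRgb commits above, here is a per-pixel sketch of the RGB to YUV transform using the common BT.601 analog coefficients. The exact constants and data layout used by libnd4j's helpers may differ, so treat both as assumptions.

```cpp
// Simple per-pixel structs for illustration; the ops operate on NDArrays
// with a channel dimension rather than on individual structs.
struct Rgb { float r, g, b; };
struct Yuv { float y, u, v; };

Yuv rgbToYuv(const Rgb& p) {
    Yuv out;
    out.y =  0.299f * p.r + 0.587f * p.g + 0.114f * p.b;  // luma
    out.u = -0.147f * p.r - 0.289f * p.g + 0.436f * p.b;  // blue-difference chroma
    out.v =  0.615f * p.r - 0.515f * p.g - 0.100f * p.b;  // red-difference chroma
    return out;
}
```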