Commit Graph

79 Commits (c94013f0a162419d7634ed58bbc870f60c443554)

Author SHA1 Message Date
shugeo 95f7ad7b94
Shugeo suppression overlaps (#9)
* Added non_max_suppression_overlaps op and tests.

* Refactored implementation of non_max_suppression_overlaps.

* Refactoring of implementation of non_max_suppression_overlaps op.

* Refactoring of implementation of non_max_suppression op.

* Fixed portion error.

* Added cuda frontends for image suppression ops.

* Eliminated crash with cuda arch on image.non_max_suppression_overlaps op.

* Improved implementation of image_suppression helper for cpu platform.

* Generic approach for the non_max_suppression_overlaps op helper on the cuda platform.

* Working cuda implementation of helper non_max_suppression_overlaps op.

* Eliminated unneeded comments.

* Improved implementations for both platforms

* Refactored cuda implementation of image.non_max_suppression_overlaps op helper.

* Improved cuda implementation of non_max_suppression op helper.

* Refactored cuda implementation of image.non_max_suppression_overlaps op helper.

* Improved cuda implementation of image.non_max_suppression_overlaps op helper.

* Added modifications into cuda implementation for image suppression overlaps op.

* Corrected queue emulation in the cuda implementation of the non_max_suppression_overlaps op.

* Pre-final stage of the cuda implementation of non_max_suppression_overlaps.

* Working cuda implementation of the non_max_suppression_overlaps helper.

* Fixed return to proper thread.

* Improvements for cuda implementation of image.non_max_suppression_overlaps op helper.

* Fixed implementation issues with non_max_suppression_overlaps on cuda platform.

* Fixed skip for non_max_suppression_overlaps on cuda platform.

* Finalize implementation of image_suppression helper and tests.

* Cosmetic changes only.
2019-10-30 13:43:45 +02:00
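
The commit above introduces the non_max_suppression_overlaps op. As a rough, hedged sketch of the semantics it targets (an overlaps matrix plus scores, a maximum output size, and overlap/score thresholds, as in the TensorFlow counterpart), a minimal CPU-only version might look like this; function and parameter names are illustrative, not the libnd4j helper's API:

```cpp
// Hedged sketch: greedy non-max suppression driven by a precomputed
// overlaps matrix (overlaps[i][j] = pairwise overlap of boxes i and j).
// Illustrative only; not the actual libnd4j helper.
#include <algorithm>
#include <numeric>
#include <vector>

std::vector<int> nonMaxSuppressionOverlaps(const std::vector<std::vector<float>>& overlaps,
                                           const std::vector<float>& scores,
                                           int maxOutputSize,
                                           float overlapThreshold,
                                           float scoreThreshold) {
    // Sort candidate indices by descending score.
    std::vector<int> order(scores.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return scores[a] > scores[b]; });

    std::vector<int> selected;
    for (int idx : order) {
        if ((int)selected.size() >= maxOutputSize) break;
        if (scores[idx] <= scoreThreshold) break;    // remaining scores are lower
        bool keep = true;
        for (int s : selected)                       // reject if it overlaps a kept box too much
            if (overlaps[s][idx] > overlapThreshold) { keep = false; break; }
        if (keep) selected.push_back(idx);
    }
    return selected;                                 // indices of the kept boxes
}
```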
Yurii Shyrma 029a69a835
Shyrma bn mkl bp (#14)
* - write code for new batchnorm backprop

Signed-off-by: Yurii <iuriish@yahoo.com>

* - testing batchnorm backprop

Signed-off-by: Yurii <iuriish@yahoo.com>

* - write code for batchnorm backprop based on mkl dnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - testing and fixing bugs in batchnorm_bp mkl dnn

Signed-off-by: Yurii <iuriish@yahoo.com>

* - made corrections required by reviewer

Signed-off-by: Yurii <iuriish@yahoo.com>

* - change name in java wrapper for batchnorm op

Signed-off-by: Yurii <iuriish@yahoo.com>
2019-10-26 14:14:21 +03:00
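
For context on what the MKL-DNN-backed batchnorm backprop above has to compute, here is a minimal sketch of the standard per-feature batch-norm gradient formulas (training mode); it is illustrative math only, not the mkl dnn code path from the commit:

```cpp
// Hedged sketch of standard batch-norm backprop for a single feature over a
// batch of size N: y = gamma * (x - mean) / sqrt(var + eps) + beta.
// Illustrative only; not the MKL-DNN path added in the commit above.
#include <cmath>
#include <vector>

void batchnormBackprop(const std::vector<double>& x, const std::vector<double>& dy,
                       double gamma, double eps,
                       std::vector<double>& dx, double& dgamma, double& dbeta) {
    const size_t N = x.size();
    double mean = 0.0, var = 0.0;
    for (double v : x) mean += v;
    mean /= N;
    for (double v : x) var += (v - mean) * (v - mean);
    var /= N;
    const double invStd = 1.0 / std::sqrt(var + eps);

    dgamma = dbeta = 0.0;
    double dvar = 0.0, dmean = 0.0;
    for (size_t i = 0; i < N; ++i) {
        const double xhat = (x[i] - mean) * invStd;
        dbeta  += dy[i];
        dgamma += dy[i] * xhat;
        const double dxhat = dy[i] * gamma;
        dvar  += dxhat * (x[i] - mean) * (-0.5) * invStd * invStd * invStd;
        dmean += -dxhat * invStd;          // the dvar contribution to dmean sums to zero
    }
    dx.assign(N, 0.0);
    for (size_t i = 0; i < N; ++i)
        dx[i] = dy[i] * gamma * invStd
              + dvar * 2.0 * (x[i] - mean) / N
              + dmean / N;
}
```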
Alex Black d333d29099
SameDiff cleanup and fixes (#12)
* #8160 Remove resolvePropertiesFromSameDiffBeforeExecution

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff API cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More SameDiff cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8248 Switch SameDiff variable init from lazy to creation time for more predictable behaviour

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8252 TanhDerivative javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8225 Deconvolution2D input validation

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8265 Switch SameDiff.outputs() to user settable, instead of unreliable 'best guess'

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8224 SameDiff.zero and .one create constants, not variables

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More cleanup and fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small test fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J SameDiff fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Re-add hack for Deconvolution2DLayer until #8315 is resolved

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8270 Move CUDA device/version logging to Java; can be disabled via existing org.nd4j.log.initialization system property

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* All ND4J init logging checks system property

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove redundant device logging

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* One more fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* UX improvements

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Deconv fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add deconv tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove debug code

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-10-26 12:38:08 +11:00
Alexander Stoyakin f31661e13b
Merge pull request #7 from KonduitAI/asto_nd4s_10172019
KDTree optimization
2019-10-23 12:11:25 +03:00
Yurii 70bd925abd - write 2 versions of new lstmLayer: one is based on own code, second uses mkl dnn api 2019-10-17 20:44:52 +03:00
Alexander Stoyakin 630bb3c9b6
Merge pull request #2 from KonduitAI/asto_ops_wrapper
[WIP] New ops wrapper
2019-10-16 20:21:50 +03:00
Alexander Stoyakin 96a9a1a733 Fixed output from operation. 2019-10-16 18:07:52 +03:00
shugeo 478a0c1f97 Added igamma and igammac broadcastable ops implementations and tests. 2019-10-16 14:02:53 +03:00
shugeo 92636b0b86 Eliminated unneeded operator. 2019-10-10 17:08:59 +03:00
shugeo 02d8616692 Implementation of cuda kernel for fake_quant_with_min_max_vars_per_channels op. 2019-10-10 16:40:56 +03:00
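A hedged, simplified view of what fake_quant_with_min_max_vars_per_channel does per element: clamp into the channel's [min, max], snap onto a 2^num_bits grid, and map back. The sketch below assumes channels on the last dimension and omits the zero-point nudging of the full TF-compatible op; names are illustrative:

```cpp
// Hedged, simplified sketch of per-channel fake quantization:
// clamp into [min, max], snap to a 2^numBits-level grid, de-quantize back.
// Omits the zero-point nudging of the full TF-compatible op.
#include <cmath>
#include <vector>

void fakeQuantPerChannel(std::vector<float>& values, int channels, int numBits,
                         const std::vector<float>& minPerCh,
                         const std::vector<float>& maxPerCh) {
    const float levels = static_cast<float>((1 << numBits) - 1);
    for (size_t i = 0; i < values.size(); ++i) {
        const int c = static_cast<int>(i % channels);        // last-dim channel index
        const float lo = minPerCh[c], hi = maxPerCh[c];
        const float scale = (hi - lo) / levels;
        float v = std::fmin(std::fmax(values[i], lo), hi);   // clamp
        v = std::round((v - lo) / scale) * scale + lo;       // quantize + dequantize
        values[i] = v;
    }
}
```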
shugeo d0cbd33b0e Added input checks for op. 2019-10-09 15:52:13 +03:00
shugeo cb56b0b06a The first approach for fake_quant_with_min_max_vars_per_channel op implementation. 2019-10-08 19:00:41 +03:00
shugeo 53a2ebddbe Added test and helpers for draw_bounding_boxes op both cpu and cuda related. 2019-10-04 20:46:26 +03:00
shugeo 8f70b4441f draw_bounding_boxes op implementation. Initial revision. 2019-10-04 18:32:21 +03:00
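A rough, hedged sketch of the draw_bounding_boxes semantics for a single-channel image and one box in normalized coordinates (the real op handles batches, multiple boxes, and per-box colors); names are illustrative:

```cpp
// Hedged, simplified sketch of draw_bounding_boxes on a single-channel image:
// paint the outline of a box given in normalized [yMin, xMin, yMax, xMax] coords.
#include <algorithm>
#include <vector>

void drawBox(std::vector<float>& image, int height, int width,
             float yMin, float xMin, float yMax, float xMax, float color) {
    int r0 = std::max(0, (int)(yMin * (height - 1)));
    int r1 = std::min(height - 1, (int)(yMax * (height - 1)));
    int c0 = std::max(0, (int)(xMin * (width - 1)));
    int c1 = std::min(width - 1, (int)(xMax * (width - 1)));
    for (int c = c0; c <= c1; ++c) {                 // top and bottom edges
        image[r0 * width + c] = color;
        image[r1 * width + c] = color;
    }
    for (int r = r0; r <= r1; ++r) {                 // left and right edges
        image[r * width + c0] = color;
        image[r * width + c1] = color;
    }
}
```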
shugeo 908e4c4912 Added implementation for divide_no_nan op and tests. 2019-10-04 10:29:15 +03:00
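divide_no_nan conventionally returns 0 wherever the divisor is 0 instead of inf/nan; a minimal element-wise sketch (illustrative, not the op's actual signature):

```cpp
// Hedged sketch of divide_no_nan element semantics: x / y, but 0 where y == 0.
inline double divideNoNan(double x, double y) {
    return y == 0.0 ? 0.0 : x / y;
}
```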
shugeo 130ee25682 Implemented compare_and_bitpack op. 2019-10-03 10:57:48 +03:00
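compare_and_bitpack compares each element to a threshold and packs every 8 boolean results into one byte; a minimal sketch assuming most-significant-bit-first packing and a length divisible by 8:

```cpp
// Hedged sketch of compare_and_bitpack: compare each value to a threshold and
// pack every 8 boolean results into one byte (most significant bit first).
#include <cstdint>
#include <vector>

std::vector<uint8_t> compareAndBitpack(const std::vector<float>& in, float threshold) {
    std::vector<uint8_t> out(in.size() / 8, 0);     // input length assumed divisible by 8
    for (size_t i = 0; i < out.size(); ++i) {
        uint8_t byte = 0;
        for (int bit = 0; bit < 8; ++bit)
            byte |= static_cast<uint8_t>(in[i * 8 + bit] > threshold) << (7 - bit);
        out[i] = byte;
    }
    return out;
}
```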
shugeo f3e42173ef Refactored buffer copying to avoid wrong usage of buffers. 2019-10-02 16:51:09 +03:00
shugeo 1c6173d218 Added implementation of bitcast op. 2019-10-02 15:04:59 +03:00
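bitcast reinterprets the raw bytes of an array as another data type of the same width, with no numeric conversion; a minimal standalone illustration:

```cpp
// Hedged sketch of bitcast semantics: reinterpret the raw bytes of a value as
// another type of the same width, with no numeric conversion.
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    float f = 1.0f;
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);   // bitcast<float -> uint32>: same bytes, new view
    std::printf("1.0f bitcast to uint32 = 0x%08x\n", (unsigned)bits);  // prints 0x3f800000
    return 0;
}
```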
shugeo afeb524238 Refactored implementation for adjust_contrast ops. 2019-10-01 14:13:09 +03:00
shugeo 1575c704ae Added implementation for adjust_contrast_v2 op and tests. 2019-10-01 11:44:27 +03:00
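adjust_contrast(_v2) scales each pixel's distance from the per-channel mean by a contrast factor; a minimal single-channel sketch, with the mean taken over that channel:

```cpp
// Hedged sketch of adjust_contrast(_v2) semantics for one channel of an image:
// scale each pixel's distance from the channel mean by the contrast factor.
#include <vector>

void adjustContrastChannel(std::vector<float>& channel, float factor) {
    double mean = 0.0;
    for (float v : channel) mean += v;
    mean /= channel.size();
    for (float& v : channel)
        v = static_cast<float>((v - mean) * factor + mean);
}
```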
shugeo e06dfb5dcc Implementation of adjust_contrast op. 2019-09-30 18:24:12 +03:00
AlexDBlack a66e03355e Merge remote-tracking branch 'fork/master' 2019-09-12 12:20:57 +10:00
raver119 98e2814879
Platform helpers (#8216)
* platform helpers draft

Signed-off-by: raver119 <raver119@gmail.com>

* typo

Signed-off-by: raver119 <raver119@gmail.com>

* disable platform cmake

Signed-off-by: raver119 <raver119@gmail.com>

* another draft

Signed-off-by: raver119 <raver119@gmail.com>

* mkldnn convolution refactored

Signed-off-by: raver119 <raver119@gmail.com>

* minor tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* one more safety check

Signed-off-by: raver119 <raver119@gmail.com>

* prototype works

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* force static library mode for mkldnn

Signed-off-by: raver119 <raver119@gmail.com>

* - ismax fix
- experimental arg fix
- don't enforce openblas on Apple hardware

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of small fixes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* declare concurrent

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* - MKLDNN version upgrade to 1.0.2
- avgpool2d/maxpool2d APIs update

Signed-off-by: raver119 <raver119@gmail.com>

* - avgpool2d_bp/maxpool2d_bp APIs update

Signed-off-by: raver119 <raver119@gmail.com>

* - conv2d/batchnorm APIs update

Signed-off-by: raver119 <raver119@gmail.com>

* - lrn/conv2d_bp/conv3d/conv3d_bp APIs update

Signed-off-by: raver119 <raver119@gmail.com>

* all ops converted to MKLDNN 1.x

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* namespace for platform helpers

Signed-off-by: raver119 <raver119@gmail.com>

* make sure platform helpers aren't optimized out

Signed-off-by: raver119 <raver119@gmail.com>

* build cpu_features on x86 systems

Signed-off-by: raver119 <raver119@gmail.com>

* build cpu_features on x86 systems

Signed-off-by: raver119 <raver119@gmail.com>

* more of cpu_features

Signed-off-by: raver119 <raver119@gmail.com>

* - mkldnn removed from java
- cpu_features checks in CpuNDArrayFactory

Signed-off-by: raver119 <raver119@gmail.com>

* F16C definition renamed

Signed-off-by: raver119 <raver119@gmail.com>

* some mkldnn rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* check supported instructions before doing anything

Signed-off-by: raver119 <raver119@gmail.com>

* typo

Signed-off-by: raver119 <raver119@gmail.com>

* missed impl

Signed-off-by: raver119 <raver119@gmail.com>

* BUILD_PIC option

Signed-off-by: raver119 <raver119@gmail.com>

* conv2d fix

Signed-off-by: raver119 <raver119@gmail.com>

* avgpool3d fix

Signed-off-by: raver119 <raver119@gmail.com>

* avgpool3d_bp fix

Signed-off-by: raver119 <raver119@gmail.com>

* avgpool2d_bp leak fix

Signed-off-by: raver119 <raver119@gmail.com>

* avgpool3d_bp leak fix

Signed-off-by: raver119 <raver119@gmail.com>

* maxpool bp leaks fixed

Signed-off-by: raver119 <raver119@gmail.com>

* printf removed

Signed-off-by: raver119 <raver119@gmail.com>

* batchnorm fix

Signed-off-by: raver119 <raver119@gmail.com>

* AVX warning/error polishing

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More polish

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Polish

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* remove previous MKL-DNN support layer

Signed-off-by: raver119 <raver119@gmail.com>

* avx2 tweak

Signed-off-by: raver119 <raver119@gmail.com>

* allow static for apple

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* exclude mkldnn in one more place

Signed-off-by: raver119 <raver119@gmail.com>

* exclude mkldnn in one more place

Signed-off-by: raver119 <raver119@gmail.com>

* restore OPENBLAS_PATH use

Signed-off-by: raver119 <raver119@gmail.com>

* add runtime check for avx/avx2 support

Signed-off-by: raver119 <raver119@gmail.com>

* convolution_auto

Signed-off-by: raver119 <raver119@gmail.com>

* Add logic for helper argument

* minor test fix

Signed-off-by: raver119 <raver119@gmail.com>

* few tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* few tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* skip OpTracker props for non-x86 builds

Signed-off-by: raver119 <raver119@gmail.com>

* linux arm isn't x86 :)

Signed-off-by: raver119 <raver119@gmail.com>

* avx-512

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA presets fix

Signed-off-by: raver119 <raver119@gmail.com>

* BUILD_PIC

Signed-off-by: raver119 <raver119@gmail.com>

* prefetchw for avx2

Signed-off-by: raver119 <raver119@gmail.com>

* BUILD_PIC again

Signed-off-by: raver119 <raver119@gmail.com>
2019-09-11 21:50:28 +03:00
raver119 589401477d
[WIP] bunch of improvements (#257)
* - profiling bias_add op
- add some documentation

Signed-off-by: Yurii <yurii@skymind.io>

* - minor change

Signed-off-by: Yurii <yurii@skymind.io>

* - provide addBias cuda kernel

Signed-off-by: Yurii <yurii@skymind.io>

* - improve shape::getIndexOffset and change its signature

Signed-off-by: Yurii <yurii@skymind.io>

* - same as previous

Signed-off-by: Yurii <yurii@skymind.io>

* - improve and change signature in some shape:: stuff which has to do with calculation of offsets for array elements

Signed-off-by: Yurii <yurii@skymind.io>

* - minor changes in flatten

Signed-off-by: Yurii <shyrma@skymind.io>

* - add function shape::getIndexOffsetOrdered

Signed-off-by: Yurii <shyrma@skymind.io>

* - correct shape::getIndexOffsetOrdered()

Signed-off-by: Yurii <shyrma@skymind.io>

* - move getIndexOffsetOrdered to flatten.h header in order to isolate this function

Signed-off-by: Yurii <shyrma@skymind.io>
2019-09-11 20:12:09 +03:00
raver119 1de9fb218e
- bits_hamming_distance dtype fix (#8208)
- DataTypeUtils::asString fix + new dtypes added

Signed-off-by: raver119 <raver119@gmail.com>
2019-09-06 08:59:05 +03:00
raver119 46f8c58502 - bits_hamming_distance dtype fix
- DataTypeUtils::asString fix + new dtypes added

Signed-off-by: raver119 <raver119@gmail.com>
2019-09-06 08:57:53 +03:00
Ryan Nett 79867f5c5a cleanup SDRNN and rnn ops (#238)
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-05 12:25:03 +10:00
shugeo 548044a1e2 Shugeo doc (#235)
* Updated docs for tsne ops.

* Added comments for dynamic_stitch op.

* Added comments to dynamic_stitch op implementation.

* Modified comment for unstack_list op.

* Added doc for space_to_depth and depth_to_space ops.

* Added doc for space_to_batch op.

* Enlarge test type for adjustSaturation.

* Added doc for runner.
2019-09-04 14:57:59 +03:00
raver119 a90c7dd995
[WIP] Last set of changes (#234)
* mmul op instead of cublasSgemm

Signed-off-by: raver119 <raver119@gmail.com>

* transB

Signed-off-by: raver119 <raver119@gmail.com>

* jcpp handles

Signed-off-by: raver119 <raver119@gmail.com>

* bitwise and/or/xor

Signed-off-by: raver119 <raver119@gmail.com>

* bitwise and/or/xor mapping

Signed-off-by: raver119 <raver119@gmail.com>

* cuda/cublas version check

Signed-off-by: raver119 <raver119@gmail.com>

* add expected version

Signed-off-by: raver119 <raver119@gmail.com>

* cuda/cublas version check in java

Signed-off-by: raver119 <raver119@gmail.com>

* one more error check

Signed-off-by: raver119 <raver119@gmail.com>

* build fix

Signed-off-by: raver119 <raver119@gmail.com>

* build fix

Signed-off-by: raver119 <raver119@gmail.com>

* build fix

Signed-off-by: raver119 <raver119@gmail.com>

* one more fix

Signed-off-by: raver119 <raver119@gmail.com>

* skip CUDA version check for now

Signed-off-by: raver119 <raver119@gmail.com>

* better wording

Signed-off-by: raver119 <raver119@gmail.com>

* few more tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* few more tweaks

Signed-off-by: raver119 <raver119@gmail.com>
2019-09-04 14:41:08 +03:00
Yurii Shyrma cb4c9377b1 Shyrma docs (#222)
* - documenting and profiling matrix_set_diag cuda kernel

Signed-off-by: Yurii <yurii@skymind.io>

* - correct formula of pnorm pooling in cuda 2d/3d kernels
- remove helper matrix_diag which duplicates work of helper matrix_set_diag

Signed-off-by: Yurii <yurii@skymind.io>
2019-09-02 16:25:58 +03:00
Yurii Shyrma a35926c6e9 - add parameter alpha to elu and lrelu_bp (#213)
* - add parameter alpha to elu and lrelu_bp

Signed-off-by: Yurii <yurii@skymind.io>

* - forgot to correct header activations.h

Signed-off-by: Yurii <yurii@skymind.io>
2019-08-31 20:57:39 +03:00
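
For reference, elu with an explicit alpha and the derivative used by its backprop (lrelu_bp analogously uses slope alpha for negative inputs); a minimal sketch with illustrative names:

```cpp
// Hedged sketch of elu with the new alpha parameter, and the corresponding
// input derivative an elu_bp-style backprop would use.
#include <cmath>

inline double elu(double x, double alpha) {
    return x > 0.0 ? x : alpha * (std::exp(x) - 1.0);
}

inline double eluDerivative(double x, double alpha) {
    return x > 0.0 ? 1.0 : alpha * std::exp(x);      // d/dx of the branch above
}
```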
raver119 70a9ae5068
[WIP] few tweaks (#206)
* scatter empty check

Signed-off-by: raver119 <raver119@gmail.com>

* scatter empty test

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* two tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* dup tweak

Signed-off-by: raver119 <raver119@gmail.com>

* - put empty check of indices array immediately prior to the helper run

Signed-off-by: Yurii <yurii@skymind.io>

* minor tests fix

Signed-off-by: raver119 <raver119@gmail.com>

* minor tests fix

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-30 16:32:01 +03:00
raver119 1003428a18
[WIP] Int broadcastables (#195)
* Removed invalid resource and fixed tests

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* legacy scalar/pairwise/broadcast int ops

Signed-off-by: raver119 <raver119@gmail.com>

* NDArray int broadcastables

Signed-off-by: raver119 <raver119@gmail.com>

* few more bitwise tests

Signed-off-by: raver119 <raver119@gmail.com>

* java side update

Signed-off-by: raver119 <raver119@gmail.com>

* Argument type changed for shift ops

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* legacy scalar/pairwise/broadcast int ops

Signed-off-by: raver119 <raver119@gmail.com>

* NDArray int broadcastables

Signed-off-by: raver119 <raver119@gmail.com>

* few more bitwise tests

Signed-off-by: raver119 <raver119@gmail.com>

* java side update

Signed-off-by: raver119 <raver119@gmail.com>

* Argument type changed for shift ops

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2019-08-30 10:12:40 +03:00
Yurii Shyrma 5395d4fbe5 - rewrite broadcast_dynamic_shape and delete corresponding helpers (#194)
Signed-off-by: Yurii <yurii@skymind.io>
2019-08-29 20:38:02 +03:00
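
broadcast_dynamic_shape combines two shape vectors under numpy-style broadcasting rules; a minimal sketch of that rule (illustrative names, error handling simplified):

```cpp
// Hedged sketch of broadcast_dynamic_shape: combine two shape vectors using
// numpy-style broadcasting (align from the trailing dimension; a dim of 1
// stretches to match; otherwise dims must be equal).
#include <algorithm>
#include <stdexcept>
#include <vector>

std::vector<long long> broadcastShape(const std::vector<long long>& a,
                                      const std::vector<long long>& b) {
    const size_t rank = std::max(a.size(), b.size());
    std::vector<long long> out(rank, 1);
    for (size_t i = 0; i < rank; ++i) {
        long long da = i < a.size() ? a[a.size() - 1 - i] : 1;   // missing dims act as 1
        long long db = i < b.size() ? b[b.size() - 1 - i] : 1;
        if (da != db && da != 1 && db != 1)
            throw std::invalid_argument("shapes are not broadcastable");
        out[rank - 1 - i] = std::max(da, db);
    }
    return out;
}
```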
Yurii Shyrma 70af8c2afc Shyrma svd (#191)
* - add one additional test for svd

* - provide float argument in eye op to be the type of the output array

Signed-off-by: Yurii <yurii@skymind.io>

* - add cuda capability check to mmulHelper

Signed-off-by: Yurii <yurii@skymind.io>

* - make use of another method for device id evaluation

Signed-off-by: Yurii <yurii@skymind.io>

* Eye data type as T argument

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-28 18:27:08 +03:00
raver119 dec296da17
[WIP] bits_hamming_distance (#192)
* bits_hamming_distance op

Signed-off-by: raver119 <raver119@gmail.com>

* bits_hamming_distance cuda

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-28 18:20:44 +03:00
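
bits_hamming_distance reduces two equally-sized integer arrays to the total number of differing bits, i.e. the sum of popcount(x XOR y); a minimal sketch:

```cpp
// Hedged sketch of bits_hamming_distance: total number of differing bits
// between two equally-sized integer arrays, i.e. sum of popcount(x XOR y).
#include <bitset>
#include <cstdint>
#include <vector>

long long bitsHammingDistance(const std::vector<uint64_t>& a,
                              const std::vector<uint64_t>& b) {
    long long distance = 0;
    for (size_t i = 0; i < a.size(); ++i)
        distance += std::bitset<64>(a[i] ^ b[i]).count();   // differing bits per word
    return distance;
}
```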
Yurii Shyrma 2144941313 Shyrma fix2 (#186)
* - further work on layer_norm

Signed-off-by: Yurii <yurii@skymind.io>

* - further work on layer_norm 2

Signed-off-by: Yurii <yurii@skymind.io>

* - correct helpers for svd cuda

Signed-off-by: Yurii <yurii@skymind.io>
2019-08-27 19:57:59 +03:00
raver119 a49f7c908b
[WIP] More fixes (#178)
* skip string arrays for device validation

Signed-off-by: raver119 <raver119@gmail.com>

* histogram_fixed_width now really supports indexing types

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 13:21:01 +03:00
raver119 df84bc7255
[WIP] More tweaks (#173)
* CUDA empty reduction

Signed-off-by: raver119 <raver119@gmail.com>

* - listdiff synchronization fix for CUDA
- listdiff test

Signed-off-by: raver119 <raver119@gmail.com>

* - IndexReduce ops now allow INDEXING_TYPES output
- topK op accepts only INDEXING_TYPES as output

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 10:37:10 +03:00
raver119 25e5c23eae
[WIP] Error handling (#169)
* CUDA reverse rewrite + couple of tests

Signed-off-by: raver119 <raver119@gmail.com>

* don't throw exception on invalid pointer

Signed-off-by: raver119 <raver119@gmail.com>

* data types validation for fastpath exec mode + 2 tests

Signed-off-by: raver119 <raver119@gmail.com>

* data types validation for fastpath exec mode + 2 tests

Signed-off-by: raver119 <raver119@gmail.com>

* ismax allowed dtypes tweak

Signed-off-by: raver119 <raver119@gmail.com>

* lastErrorCode + lastErrorMessage for native exceptions handling

Signed-off-by: raver119 <raver119@gmail.com>

* exportable ErrorReference

Signed-off-by: raver119 <raver119@gmail.com>

* check error codes in java

Signed-off-by: raver119 <raver119@gmail.com>

* - consume lastErrorCode
- fast_in dtype validation fix

Signed-off-by: raver119 <raver119@gmail.com>

* - sg/cb allowed output type change
- minor logging fix for data type validation

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-26 19:57:51 +03:00
raver119 bb5fc36e5e
[WIP] ops fixes (#168)
* - correct layer_norm

Signed-off-by: Yurii <yurii@skymind.io>

* - further fix of layer norm

Signed-off-by: Yurii <yurii@skymind.io>

* - correct scatter_upd op

Signed-off-by: Yurii <yurii@skymind.io>

* - correct cuda kernel for histogram_fixed_width op

Signed-off-by: Yurii <yurii@skymind.io>

* - delete comments

Signed-off-by: Yurii <yurii@skymind.io>

* enabled one ignored test

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-26 19:37:05 +03:00
Alex Black b417ca21bf
Fix for concat op shape function (empty shapes) (#167)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-26 23:10:28 +10:00
raver119 f03b0ee78f
[WIP] more fixes (#159)
* Added test for MatrixInverse with double input. Fixed matrixDeterminantKernel.

* Fixed kernels to avoid wasteful templating.

* Fixed logDeterminant kernel.

* Refactored type check for lup.

* - decrease blockDim value for zeta op

Signed-off-by: Yurii <yurii@skymind.io>

* Added print for compound matrix with CUDA.

* Refactored upper matrix inversion kernels.

* - provide move constructor and move assignment operator for OpArgsHolder class

Signed-off-by: Yurii <yurii@skymind.io>

* Refactored usage of launch context.

* - add test for mergemax

Signed-off-by: Yurii <yurii@skymind.io>

* get rid of AveragingArrayProxy

Signed-off-by: raver119 <raver119@gmail.com>

* Refactoring of LUP inversion.

* Added prints for inversion.

* - add OpArgsHolder copy constructor and assignment operator

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for lower inversion

* - fix bug in upsampling2d/3d_bp op

Signed-off-by: Yurii <yurii@skymind.io>

* Added expensive printfs to kernel.

* Refactored expensive kernel prints.

* Refactored expensive printfs

* - remove nullify

Signed-off-by: Yurii <yurii@skymind.io>

* Eliminated unneeded prints with tests.

* upsampling2d_bp test

Signed-off-by: raver119 <raver119@gmail.com>

* test updated

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 19:20:50 +03:00
raver119 fb8de5006f - concat empty scalar fix
- couple of tests for empty scalar concat

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 13:16:50 +03:00
raver119 729dc5e879
[WIP] size etc (#155)
* one test for size

Signed-off-by: raver119 <raver119@gmail.com>

* - few tests for size op
- size/rank/size_at ops now use p instead of assign

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 12:31:12 +03:00
Alex Black e855e47f73
More fixes (#148)
* Small batch norm fix (cuda/no-mkldnn)

Signed-off-by: Alex Black <blacka101@gmail.com>

* Dropout fix for RnnOutputLayer

Signed-off-by: Alex Black <blacka101@gmail.com>

* Allow block size < 2 in batch_to_space_nd and space_to_batch_nd for import, in spite of what TF docs say

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-22 19:55:27 +10:00
raver119 eea3062ccf
[WIP] stb/bts nd (#144)
* - start working on space_to_batch_nd

Signed-off-by: Yurii <yurii@skymind.io>

* - provide cpu helper for space_to_batch_nd op

Signed-off-by: Yurii <yurii@skymind.io>

* few typos fixed

Signed-off-by: raver119 <raver119@gmail.com>

* - add tests for space_to_batch and correct bugs

Signed-off-by: Yurii <yurii@skymind.io>

* - write cuda kernel for space_to_batch op

Signed-off-by: Yurii <yurii@skymind.io>

* - add order argument to shape::index2coords method in convolution cuda ops

Signed-off-by: Yurii <yurii@skymind.io>

* - restore some previous code

Signed-off-by: Yurii <yurii@skymind.io>

* old col2im kernel activated

Signed-off-by: raver119 <raver119@gmail.com>

* - change coords calculation in col2im kernel

Signed-off-by: Yurii <yurii@skymind.io>

* - restore old col2im kernel

Signed-off-by: Yurii <yurii@skymind.io>

* - add custom op for batch_to_space

Signed-off-by: Yurii <yurii@skymind.io>

* - provide cpu version for batch_to_space_nd op

Signed-off-by: Yurii <yurii@skymind.io>

* - provide cuda kernel for batch_to_space_nd op

Signed-off-by: Yurii <yurii@skymind.io>
2019-08-21 21:11:46 +03:00
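
A hedged sketch of the space_to_batch transform for a single spatial dimension: pad the spatial axis, then interleave spatial positions into the batch axis (batch_to_space is the inverse). This simplified version assumes a [batch, length] layout; names are illustrative:

```cpp
// Hedged sketch of space_to_batch with one spatial dimension (block size B):
// pad the spatial axis, then interleave spatial positions into the batch axis,
// so the output batch is inputBatch * B and the spatial length shrinks by B.
#include <vector>

std::vector<std::vector<float>> spaceToBatch1D(const std::vector<std::vector<float>>& in,
                                               int B, int padLeft, int padRight) {
    const int batch = (int)in.size();
    const int L = (int)in[0].size();
    const int Lp = L + padLeft + padRight;           // padded length, assumed divisible by B
    const int Lout = Lp / B;
    std::vector<std::vector<float>> out(batch * B, std::vector<float>(Lout, 0.0f));
    for (int bIdx = 0; bIdx < B; ++bIdx)
        for (int n = 0; n < batch; ++n)
            for (int i = 0; i < Lout; ++i) {
                const int src = i * B + bIdx - padLeft;   // position in the unpadded input
                out[bIdx * batch + n][i] = (src >= 0 && src < L) ? in[n][src] : 0.0f;
            }
    return out;
}
```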
raver119 e604ffe0d2
[WIP] repeat op (#143)
* - write new repeat helper (cpu)

Signed-off-by: Yurii <yurii@skymind.io>

* - update NDArray::repeat cpu

Signed-off-by: Yurii <yurii@skymind.io>

* - update NDArray::repeat cuda

Signed-off-by: Yurii <yurii@skymind.io>
2019-08-21 21:10:29 +03:00
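
The repeat helper emits each element a given number of times along an axis; a minimal 1-D sketch with a per-element repeats vector (illustrative names, not NDArray::repeat's actual signature):

```cpp
// Hedged sketch of repeat semantics along a single axis of a 1-D array:
// each element i is emitted repeats[i] times.
#include <vector>

std::vector<float> repeat1D(const std::vector<float>& in, const std::vector<int>& repeats) {
    std::vector<float> out;
    for (size_t i = 0; i < in.size(); ++i)
        out.insert(out.end(), repeats[i], in[i]);    // emit in[i] repeats[i] times
    return out;
}
```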
raver119 d9ab299759
[WIP] Minor fixes (#140)
* - Tile java shape fn removed
- Tile 0 validation added
- scatter_upd test

Signed-off-by: raver119 <raver119@gmail.com>

* additional tile validation

Signed-off-by: raver119 <raver119@gmail.com>

* - provide vector case in cuda scatter op

Signed-off-by: Yurii <yurii@skymind.io>

* cpu ismax view fix

Signed-off-by: raver119 <raver119@gmail.com>

* exp

Signed-off-by: raver119 <raver119@gmail.com>

* cuda ismax fix

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-21 15:05:47 +03:00
raver119 aceb915557
[WIP] tests fixes (#130)
* no openmp for ClipByGlobalNorm

Signed-off-by: raver119 <raver119@gmail.com>

* one more bfloat16 rng test

Signed-off-by: raver119 <raver119@gmail.com>

* assertion fix

Signed-off-by: raver119 <raver119@gmail.com>

* - legacy IsMax gone
- linear IsMax gets shapeInfo argument

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of legacy IsMax tests

Signed-off-by: raver119 <raver119@gmail.com>

* IsMax is custom op now

Signed-off-by: raver119 <raver119@gmail.com>

* more blocks for ismax

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

*  - sqrt test
 - some legacy code removed from CudaExecutioner
 - Transforms.asin tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* - TransformFloat fix

Signed-off-by: raver119 <raver119@gmail.com>

* - ismax fix
- SpaceToBatchND/BatchToSpaceND wrappers
- couple of legacy tests removed

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-19 11:33:15 +03:00
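
IsMax, now a custom op per the commit above, writes 1 at the position of the maximum and 0 elsewhere; a minimal flat-array sketch (ties resolved to the first maximum here):

```cpp
// Hedged sketch of IsMax over a flat array: 1 at the position of the maximum,
// 0 everywhere else.
#include <algorithm>
#include <iterator>
#include <vector>

std::vector<float> isMax(const std::vector<float>& in) {
    std::vector<float> out(in.size(), 0.0f);
    const auto it = std::max_element(in.begin(), in.end());
    if (it != in.end())
        out[std::distance(in.begin(), it)] = 1.0f;
    return out;
}
```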