* - range op now accepts dargs
- dargs can now be in the signature
Signed-off-by: raver119 <raver119@gmail.com>
* range dtype java side
Signed-off-by: raver119 <raver119@gmail.com>
* linspace fix
Signed-off-by: raver119 <raver119@gmail.com>
* lin_space fix for scalar outputs
Signed-off-by: raver119 <raver119@gmail.com>
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* - one more test for OneHot with dtype
- one more signature in Nd4j
Signed-off-by: raver119 <raver119@gmail.com>
* ones_as/zeros_as now accept dtype
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* - more updates for configurable data types
- ones_as/zeros_as java side + tests
Signed-off-by: raver119 <raver119@gmail.com>
* few c++ tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* few more changes around DArgs
Signed-off-by: raver119 <raver119@gmail.com>
* - implementation of cudnn batchnorm_bp op
Signed-off-by: Yurii <iuriish@yahoo.com>
* - testing and fixing bugs in batchnorm_bp based on cudnn api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - move pooling mkl code and delete some unnecessary files
Signed-off-by: Yurii <iuriish@yahoo.com>
* - implementation and testing cudnn pooling2d ops (avg/max, ff/bp)
Signed-off-by: Yurii <iuriish@yahoo.com>
* - implementation and testing cudnn pooling 3d (ff/bp) ops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - provide ff step in case of cudnn maxpool3d_bp op
Signed-off-by: Yurii <iuriish@yahoo.com>
* - remove half type from set of supported types in mkl depthwise conv op
Signed-off-by: Yurii <iuriish@yahoo.com>
* - bring back cudaStreamSynchronize in batchnorm and pooling cudnn ops
Signed-off-by: Yurii <iuriish@yahoo.com>
Co-authored-by: raver119 <raver119@gmail.com>
* Cleanup modules
* Moving subprojects to nd4j-api
* Project cleanup
* Dropped AWS sub-project
* dl4j-util moved to core
* dl4j-perf moved to core
* Tests coverage
* Revert "Moving subprojects to nd4j-api"
This reverts commit bc6eb573c6b60c407ade47172c5d204725077e6b.
* Moved nd4j-buffer and nd4j-context to nd4j-api
* Rolled back change
* Revert "Project cleanup"
This reverts commit 64ac7f369b2d968f7be437718034f093fc886ffc.
* Datavec cleaned up
* Revert "Moved nd4j-buffer and nd4j-context to nd4j-api"
This reverts commit 75f4e8da80d2551e44e1251dd6c5923289fff8e1.
# Conflicts:
# nd4j/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/autodiff/opvalidation/ReductionBpOpValidation.java
* Resolve conflict
* Compilation fixed.
* nd4j-context and nd4j-buffer moved to nd4j-api
* Fixed TF mapping for mmul
* Fix for dl4j-cuda tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Move last few tests from deeplearning4j-nn to -core
Signed-off-by: Alex Black <blacka101@gmail.com>
* Remove incorrect TF import mapping for TensorMmul op
Signed-off-by: Alex Black <blacka101@gmail.com>
* Cleaned TF mapping
* Fix path for test results on windows
* Remove old dependency
Signed-off-by: Alex Black <blacka101@gmail.com>
* One more attempt to fix path for test results on windows
* fixup! One more attempt to fix path for test results on windows
* fixup! One more attempt to fix path for test results on windows
Co-authored-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Serhii Shepel <9946053+sshepel@users.noreply.github.com>
Co-authored-by: raver119 <raver119@gmail.com>
* Added test for issue with resize_area op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added a pair of tests for resize_area op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored resize_area kernel to avoid shared memory overflow.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Eliminated prints with tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* ignore bad test
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed test with resize_area.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed test for float constants.
Signed-off-by: shugeo <sgazeos@gmail.com>
Co-authored-by: raver119 <raver119@gmail.com>
* missing alloc validation in RandomGenerator for CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* set error message if rng alloc failed
Signed-off-by: raver119 <raver119@gmail.com>
* check for error code during RNG creation in java
Signed-off-by: raver119 <raver119@gmail.com>
* Added qr op implementation. Initial version.
* Fixed doc for qr op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Implementation of QR decomposition. CPU platform version.
* Added a pair of tests for qr op testing.
Signed-off-by: shugeo <sgazeos@gmail.com>
* QR implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected norm usage.
* Properly calculated intermediate results with QR decomposition.
* Another step to implement QR algorithm by householder.
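For reference, the Householder step named above is the standard reflector construction: given a column x, take

    v = x + \operatorname{sign}(x_1)\,\lVert x\rVert\,e_1, \qquad H = I - 2\,\frac{v v^{T}}{v^{T} v},

so that H x has zeros below its first entry; Q is accumulated as the product of such reflectors and R is the resulting upper-triangular remainder. (Standard linear algebra background only; the actual kernels are in the commits themselves.)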
* CPU implementation of QR decomposition. The first working edition.
* Corrected test to QR decomposition.
* Added tad multithreading with QR implementation.
* Finished cpu implementation for QR decomposition helpers.
* Refactored tests and improved multithreading.
* Refactored QR cpu implementation and update cuda implementation helpers.
* Cuda QR helper implementation. The first working edition.
* Eliminated unneeded prints.
* Restore multithreading with cuda implementation.
* Ops names corrected
* Refactored qr op helpers to optimize.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Eliminated unneeded manual ticking.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored memory allocation to avoid wasted memory usage.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored matrixMinor method both for cuda and cpu platforms.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored vmul method to use raw buffers instead of type conversion.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored temporary array of matrices.
Signed-off-by: shugeo <sgazeos@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
Co-authored-by: raver119 <raver119@gmail.com>
* Added implementation of the triangular_solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
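As background for the triangular_solve commits, the core of the lower-triangular case is plain forward substitution. The sketch below is a generic, self-contained C++ illustration of that step (solveLowerTriangular is a hypothetical name, not the libnd4j helper):

    // Forward substitution for a lower-triangular system L*x = b.
    #include <cstdio>
    #include <vector>

    std::vector<double> solveLowerTriangular(const std::vector<double>& L,  // n x n, row-major
                                             const std::vector<double>& b, int n) {
        std::vector<double> x(n);
        for (int i = 0; i < n; ++i) {
            double sum = b[i];
            for (int j = 0; j < i; ++j)
                sum -= L[i * n + j] * x[j];   // subtract already-solved components
            x[i] = sum / L[i * n + i];        // assumes a non-zero diagonal
        }
        return x;
    }

    int main() {
        // L = [[2,0],[3,4]], b = [2,11]  ->  x = [1,2]
        std::vector<double> L = {2, 0, 3, 4}, b = {2, 11};
        auto x = solveLowerTriangular(L, b, 2);
        std::printf("x = [%.1f, %.1f]\n", x[0], x[1]);
    }

Upper-triangular systems are handled the same way by backward substitution, and the adjoint handling added in later commits solves against the transposed matrix instead.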
* Fixed compilation issues.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added verification of input data and helper facilities for triangular_solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added cpu implementation for triangular_solve helpers.
* Added tests and implementation for upper triangular equations.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added a pair of cases to tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added multithreading with cpu helpers for triangular_solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added cuda implementation of triangular_solve op helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Finished cuda implementation of triangular_solve helpers and tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed copyright marks.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected grammar errors with doc and error messages.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored matrices processing in triangular_solve cuda helper implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added triangular_solve wrapper
* Fixed mapping
* Added processing for adjoint with cpu helpers of triangular_solve op implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added implementation for adjoint routine with cuda platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added multithreading with adjoint routine for cpu platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Added implementation for resize_area op. Initial commit.
* Added implementation of resize_area op. Initial revision.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected resizeArea functor call.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Implementation of resize_area. Cpu platform helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Implementation for resize_area helpers. The first part revision.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added a set of tests for resize_area op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Cuda implementation for resize_area. Initial approach.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adding multithreading for resize_area algorithm.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Cuda implementation of resize_area helpers. Shared memory approach.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored resizeAreaKernel with cuda implementation.
* Eliminated compilation errors.
* ResizeArea helpers for cuda platform. The first working revision.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added test for batched resize_area op testing.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Implementation of resize_area for cuda platform and tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed multithreading with resize_area op helper.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected copyright marks with sources.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected copyright mark for resize_area op implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected copyright mark for parity ops header.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected typo in strings and so on with image resize ops.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored resize_area helpers and multithreading.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added ResizeArea wrapper
* Added test with align_corners and fixed shape processing with only int args given for output size.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added test
* TF mapping for ResizeArea
* Fixed implementation issues with resize_area op for both platforms.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored image resizer struct to use flexible types for ints and floats.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Improved multithreading with resizeAreaKernel launch.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Use asynchronous memory copying for cuda platform image resize allocations.
Signed-off-by: shugeo <sgazeos@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* one file
Signed-off-by: raver119 <raver119@gmail.com>
* few more includes
Signed-off-by: raver119 <raver119@gmail.com>
* m?
Signed-off-by: raver119 <raver119@gmail.com>
* const
Signed-off-by: raver119 <raver119@gmail.com>
* cudnn linkage in tests
Signed-off-by: raver119 <raver119@gmail.com>
* culibos
Signed-off-by: raver119 <raver119@gmail.com>
* static reminder
Signed-off-by: raver119 <raver119@gmail.com>
* platform engine tag
Signed-off-by: raver119 <raver119@gmail.com>
* HAVE_CUDNN moved to config.h.in
Signed-off-by: raver119 <raver119@gmail.com>
* include
Signed-off-by: raver119 <raver119@gmail.com>
* include
Signed-off-by: raver119 <raver119@gmail.com>
* skip cudnn handle creation if there's no cudnn
Signed-off-by: raver119 <raver119@gmail.com>
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* target device in context
Signed-off-by: raver119 <raver119@gmail.com>
* platform engines
Signed-off-by: raver119 <raver119@gmail.com>
* platform engines
Signed-off-by: raver119 <raver119@gmail.com>
* allow multiple -h args
Signed-off-by: raver119 <raver119@gmail.com>
* allow multiple -h args
Signed-off-by: raver119 <raver119@gmail.com>
* move mkldnn out of CPU block
Signed-off-by: raver119 <raver119@gmail.com>
* link to mkldnn on cuda
Signed-off-by: raver119 <raver119@gmail.com>
* less prints
Signed-off-by: raver119 <raver119@gmail.com>
* minor tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* next step
Signed-off-by: raver119 <raver119@gmail.com>
* conv2d NCHW draft
Signed-off-by: raver119 <raver119@gmail.com>
* conv2d biasAdd
Signed-off-by: raver119 <raver119@gmail.com>
* test for MKL/CUDNN combined use
Signed-off-by: raver119 <raver119@gmail.com>
* - provide additional code for conv2d ff based on cudnn api, not tested yet
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further work on conv2d helper based on using cudnn api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - fixing several cuda bugs which appeared after cudnn lib had been started to use
Signed-off-by: Yurii <iuriish@yahoo.com>
* - implementation of conv2d backprop op based on cudnn api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - implementation of conv3d and conv3d_bp ops based on cudnn api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - bugs fixing in conv3d/conv3d_bp ops (cudnn in use)
Signed-off-by: Yurii <iuriish@yahoo.com>
* - implementation of depthwiseConv2d (ff/bp) op based on cudnn api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - implementation of batchnorm ff op based on cudnn api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - disable cudnn batchnorm temporarily
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add minor change in cmake
Signed-off-by: Yurii <iuriish@yahoo.com>
* engine for depthwise mkldnn
Signed-off-by: raver119 <raver119@gmail.com>
* couple of includes
Signed-off-by: raver119 <raver119@gmail.com>
* - provide permutation to cudnn batchnorm ff when format is NHWC
Signed-off-by: Yurii <iuriish@yahoo.com>
* lgamma fix
Signed-off-by: raver119 <raver119@gmail.com>
* - eliminate memory leak in two tests
Signed-off-by: Yurii <iuriish@yahoo.com>
Co-authored-by: Yurii Shyrma <iuriish@yahoo.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 first step of Pow_bp operation implementation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
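For reference, the backprop implemented in these commits is for z = x^y, whose standard partial derivatives are

    \frac{\partial z}{\partial x} = y\,x^{\,y-1}, \qquad \frac{\partial z}{\partial y} = x^{y}\,\ln x;

in the broadcastable case the incoming gradient is multiplied by these and then summed back down to the original shapes of x and y.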
* Libnd4j: Add broadcastable elementwise power derivative #7461 some corrections of calculation steps
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 some bug fixes; the PowDerivative op made broadcastable, raw tests added for the op, needs refactoring to use broadcast ops
* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed several bugs add broadcast support and tests, need to fix scalar+array and array+scalar
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed bugs for scalar inputs, fixed multinomial tests, added tests
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed bugs for different shape support, tests updated
* Libnd4j: Add broadcastable elementwise power derivative #7461 applied all possible variants via tiled arrays, added broadcast support for Pow and PowDerivative ops, covered by tests; the tiled implementation has to be replaced by applyTrueBroadcast before review
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 replaced tile by broadcast implementation, fixed issue with negative x input, corrected tests, need additional testing
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 added and corrected test cases, corrected implementation need review
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 code clean up
* Libnd4j: Add broadcastable elementwise power derivative #7461 code clean up, removed some tests, add tests with scalar
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 code improvement and clean up, split tests
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 some code clean up
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative replaced __isnanf with an internal implementation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* pow_bp wrapper
* Fixed PowBp wrapper
* Tests added
* Test fixed
* Fix return type
* Disable powBp usage
* Pow backprop changed
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* - implementation of depthwise_conv2d (both ff/bp) based on mkl dnn api
* - minor corrections in deconv3d
Signed-off-by: Yurii <iuriish@yahoo.com>
* - remove unnecessary time test
Signed-off-by: Yurii <iuriish@yahoo.com>
* - update mkl dnn version in cmake
Signed-off-by: Yurii <iuriish@yahoo.com>
* - take into account several notes given by pr reviewer
Signed-off-by: Yurii <iuriish@yahoo.com>
* - fix bug in depthwise conv2d op based on mkl
Signed-off-by: Yurii <iuriish@yahoo.com>
* libnd4j: Multinomial op #8570 first raw step of multinomial random data generator implementation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
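One common way to implement such a categorical (multinomial) sampler is inverse-CDF sampling over the class weights. The sketch below is a generic, self-contained C++ illustration of that idea only; sampleCategory is a hypothetical name and this is not the libnd4j implementation:

    #include <cstdio>
    #include <random>
    #include <vector>

    // Draw one category index proportionally to the given (unnormalized) weights.
    int sampleCategory(const std::vector<double>& weights, std::mt19937_64& rng) {
        double total = 0.0;
        for (double w : weights) total += w;
        std::uniform_real_distribution<double> uniform(0.0, total);
        double u = uniform(rng);
        double cumulative = 0.0;
        for (std::size_t k = 0; k < weights.size(); ++k) {
            cumulative += weights[k];
            if (u < cumulative) return static_cast<int>(k);   // first bucket that exceeds u
        }
        return static_cast<int>(weights.size()) - 1;           // numerical safety net
    }

    int main() {
        std::mt19937_64 rng(119);                               // fixed seed -> reproducible draws
        std::vector<double> weights = {0.1, 0.2, 0.7};
        for (int i = 0; i < 5; ++i) std::printf("%d ", sampleCategory(weights, rng));
        std::printf("\n");
    }

A fixed seed giving reproducible draws is exactly the property the later "reproducible results" commits are after.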
* libnd4j: Multinomial op #8570 next step of multinomial random categories generator implementation on both cpu and cuda, need corrections and code clean up before review and testing
* libnd4j: Multinomial op #8570 code clean up and fixed issues with data selection, moved from coords to tads
* libnd4j: Multinomial op #8570 fixed cuda build, added reference to the math materials used for the implementation
* libnd4j: Multinomial op #8570 fixed several bugs, added several tests and improved cuda version. current implementation works, need testing of reproduction with the same seed
* libnd4j: Multinomial op #8570 fixes and optimization after discussion in both cuda and cpu
* libnd4j: Multinomial op #8570 add corrections after review, removed tads, replace 2D parallel loop by 3D
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: Multinomial op fixed declaration and added tests, needs discussion
* libnd4j: Multinomial op fix in test
* libnd4j: Multinomial op corrected behavior to get reproducible results, fixed issue in uniform value retrieval, tests added, needs cuda review and cuda testing
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: Multinomial op fixed indexing on uniform calculation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: Multinomial op some corrections in max min declaration
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: Multinomial op fixed index calculation, added rewind, corrected input declaration, added stats tests for both cuda and cpu; cuda needs testing
* libnd4j: Multinomial op fixed bugs on cuda and cpu, needs review
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: Multinomial op corrected tests to handle different orders
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: Multinomial op some improvements after code review
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: Multinomial op more corrections after review
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: Multinomial op fixed seed usage, update tests, fixed cuda based on comments, fixed bug of rewind, removed one behavior, minor corrections.
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: Multinomial op minor corrections
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: Multinomial op raised the fluctuation bound for random cases
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: Multinomial op modified operation inputs and update implementation and tests on both cpu and cuda
* libnd4j: Multinomial op corrected data types according to ops.proto
Co-authored-by: raver119 <raver119@gmail.com>
* libnd4j: RgbToYuv and YuvToRgb, both implementations for both cpu and cuda. Need adding tests and review
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
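For background, the commonly used BT.601-style conversion behind rgb_to_yuv / yuv_to_rgb is

    Y = 0.299R + 0.587G + 0.114B, \qquad U = 0.492\,(B - Y), \qquad V = 0.877\,(R - Y),

with the inverse obtained by inverting this linear map; the exact constants used by libnd4j are not stated in these commits, so treat the numbers above as illustrative.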
* libnd4j: RgbToYuv and YuvToRgb, replace coords method on Tad in both cpu and cuda, add tests, fixed bugs
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: RgbToYuv and YuvToRgb minor corrections
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: RgbToYuv and YuvToRgb corrections to use operations in-place
* one special test
Signed-off-by: raver119 <raver119@gmail.com>
* one special test
Signed-off-by: raver119 <raver119@gmail.com>
* local memory for concat
Signed-off-by: raver119 <raver119@gmail.com>
* fixed grid size for concat
Signed-off-by: raver119 <raver119@gmail.com>
* fixed grid size for concat
Signed-off-by: raver119 <raver119@gmail.com>
* test commented out
Signed-off-by: raver119 <raver119@gmail.com>
* - specifying template instantiation for certain types in float16 and bfloat16
Signed-off-by: Yurii <iuriish@yahoo.com>
* - polishing bfloat16 and float16 member functions template specialization
Signed-off-by: Yurii <iuriish@yahoo.com>
* - rewrite and overload array +-*/ scalar and scalar +-*/ arr in NDArray class
Signed-off-by: Yurii <iuriish@yahoo.com>
* - make corrections related to rvalue/lvalue conversions
Signed-off-by: Yurii <iuriish@yahoo.com>
* - provide move semantic in NDArray operators array +-/* array
Signed-off-by: Yurii <iuriish@yahoo.com>
* float16/bfloat16 tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* one more tweak
Signed-off-by: raver119 <raver119@gmail.com>
* - make float16 and bfloat16 to compile successfully on cuda
Signed-off-by: Yurii <iuriish@yahoo.com>
* - do not use resources of view-like arrays when move semantics is applied
Signed-off-by: Yurii <iuriish@yahoo.com>
* - get rid of pointers in signatures NDArray methods 1
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correction of signature of NDArray::dup method
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correction of signature of NDArray::reduceAlongDimension method
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::applyIndexReduce and applyTrueBroadcast methods
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::applyReduce3 and varianceAlongDimension methods
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::tensorsAlongDimension and diagonal methods
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::allTensorsAlongDimension
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::reduceAlongDimension 2
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::applyTransform 2
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::applyPairwiseTransform 2
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::applyBroadcast 2
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::applyTrueBroadcast 2
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::applyScalar and applyScalarArr
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::lambda methods
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::reduce3 methods 2
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of following NDArray methods: add/sub/mul/div row/column and fillAsTriangular
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::tileToShape methods
Signed-off-by: Yurii <iuriish@yahoo.com>
* - signature correction of NDArray::isShapeSameStrict method
Signed-off-by: Yurii <iuriish@yahoo.com>
* minor corrections in tests
Signed-off-by: Yurii <iuriish@yahoo.com>
* - replace reduce op in batchnorm mkldnn
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add explicit template instantiations for operator+(NDArray&&, const scalar)
Signed-off-by: Yurii <iuriish@yahoo.com>
* - corrections of casts in float16/bfloat16
Signed-off-by: Yurii <iuriish@yahoo.com>
* - provide move semantics in following NDArray methods: transform, applyTrueBroadcast, transpose, reshape, permute
Signed-off-by: Yurii <iuriish@yahoo.com>
* - get rid of input array A duplicate in svd cuda op
Signed-off-by: Yurii <iuriish@yahoo.com>
* - work around a known bug in the svd cuda API
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add temporary global memory buffer in svd cuda when calcUV = false and m != n
Signed-off-by: Yurii <iuriish@yahoo.com>
* - remove test with bfloat16 type for betaInc
Signed-off-by: Yurii <iuriish@yahoo.com>
* - resolve conflicts after master has been merged in
Signed-off-by: Yurii <iuriish@yahoo.com>
* - changed type of affected input array in fused_batch_norm
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add several explicit type castings
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add ND4J_EXPORT to operators
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add explicit template types in instantiations of template arithm operators of NDArray class
Signed-off-by: Yurii <iuriish@yahoo.com>
* - one more test fix
Signed-off-by: Yurii <iuriish@yahoo.com>
Co-authored-by: raver119 <raver119@gmail.com>
* libnd4j: RgbToGrayscale op #8536 - raw implementation in user branch, need checks for integration and adding other orders
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: RgbToGrayscale op #8536 next step of merging images
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: RgbToGrayscale op #8536, reverted merge of hsv_to_rgb and rgb_to_hsv since it caused naming conflicts and needs refactoring before merge; implementation of rgb_to_grs added
* libnd4j: RgbToGrayscale op #8536 implementation and conflict resolution
* libnd4j: RgbToGrayscale op #8536 merged operations with images into image, renamed methods and files
* libnd4j: RgbToGrayscale op #8536 added test for rgbToGrayScale (needs clarification) and fixed test case run
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j: RgbToGrayscale op #8536 bug fixing and need review
* libnd4j: RgbToGrayscale op #8536 some additional corrections after review
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* - minor corrections in rgbToGrs test1
Signed-off-by: Yurii <iuriish@yahoo.com>
* libnd4j: RgbToGrayscale op #8536, corrected tests and rgb_to_grs, fixed problems, refactoring, needs review
* libnd4j: RgbToGrayscale op #8536 fix for 'f' order in rgbToGrs
* libnd4j: RgbToGrayscale op #8536 fixed several bugs with dimC, test case refactoring and improvement
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* - add cuda kernel for rgbToGrs op
Signed-off-by: Yurii <iuriish@yahoo.com>
* - fix linkage errors
Signed-off-by: Yurii <iuriish@yahoo.com>
Co-authored-by: Yurii Shyrma <iuriish@yahoo.com>
* Added infrastructure for the implementation of the lu op for both cuda and cpu platforms.
* Added implementation of helpers with lu op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored LU decomposition to use vector of permutations instead.
* Refactored helpers for lu op.
* Fixed crash with determinant op.
* Refactored cpu LU op helper.
* Added implementation for lu op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed issue with argmax on column.
* Added multithreaded behaviour for lu op helper.
* Fixed multithreaded cpu implementation helpers for lu op.
* Added cuda implementation for lu op helper.
* Finished lu helper implementation for cuda platform.
* Eliminated waste prints and comments.
* Fixed race condition and multithreading issues.
* Fixed memory leak with shape construction.
* Corrected test for lu op to avoid near-zero elements on the main diagonal.
Signed-off-by: shugeo <sgazeos@gmail.com>
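As background for the lu op commits, LU decomposition with partial pivoting (the "vector of permutations" mentioned above) can be sketched in plain C++ as follows; this is a generic Doolittle-style illustration with a hypothetical luDecompose name, not the libnd4j helper:

    #include <cmath>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // Decompose a dense n x n row-major matrix A in place into L and U
    // (unit lower-triangular L below the diagonal, U on and above it),
    // recording the row permutation in 'perm'. Returns false if singular.
    bool luDecompose(std::vector<double>& A, int n, std::vector<int>& perm) {
        perm.resize(n);
        for (int i = 0; i < n; ++i) perm[i] = i;
        for (int k = 0; k < n; ++k) {
            int pivot = k;                                    // partial pivoting: largest |A(i,k)|
            for (int i = k + 1; i < n; ++i)
                if (std::fabs(A[i * n + k]) > std::fabs(A[pivot * n + k])) pivot = i;
            if (A[pivot * n + k] == 0.0) return false;
            if (pivot != k) {
                for (int j = 0; j < n; ++j) std::swap(A[k * n + j], A[pivot * n + j]);
                std::swap(perm[k], perm[pivot]);
            }
            for (int i = k + 1; i < n; ++i) {                 // eliminate below the pivot
                A[i * n + k] /= A[k * n + k];                 // store multiplier in the L part
                for (int j = k + 1; j < n; ++j)
                    A[i * n + j] -= A[i * n + k] * A[k * n + j];
            }
        }
        return true;
    }

    int main() {
        std::vector<double> A = {4, 3, 0,  6, 3, 1,  0, 2, 5};  // 3x3 example
        std::vector<int> perm;
        if (luDecompose(A, 3, perm))
            std::printf("U(0,0) = %.3f, perm[0] = %d\n", A[0], perm[0]);
    }

The determinant mentioned in the related commit is then the product of U's diagonal entries times the sign of the permutation.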
* Improved test for adjust_contrast op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed issues with cuda implementation of resize_bicubic helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Expanded allowed paddings type to 64-bit ints as well.
* Extended paddings data types to int64 for mirror_pad op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Timeouts added
* Added some ops
* Ops added
* Fixed tests
* Minor fix
* Some fixes
* Digamma added
* Small fixes
* Fused batch norm fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tests switched off.
* Added test for resize_bicubic.
* Eliminated waste in the bicubic resize test.
* Switched off multithreading explicitly.
* HsvToRgb and RgbToHsv added
* Eliminated unneeded comments and conformed to proper float constants.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed multithreading with resize_bicubic helper for cpu platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
* ResizeBicubic was fixed.
* Some fixes
* Fix op name
* Validation fixed.
* Clarifications for tests
* Wrappers and small fixes for new ops.
* CUDA implementation for hsv_to_rgb and rgb_to_hsv
Signed-off-by: raver119 <raver119@gmail.com>
* hsv_to_rgb and rgb_to_hsv operations
Test coverage: c order 1d, 2d, 3d array
Signed-off-by: Abdelrauf <rauf@konduit.ai>
* Index check
Signed-off-by: Abdelrauf <rauf@konduit.ai>
* Suppress MSVC floating point errors
Signed-off-by: Abdelrauf <rauf@konduit.ai>
* Added Index Check for adjust_saturation and adjust_hue
Signed-off-by: Abdelrauf <rauf@konduit.ai>
* minor fix
Signed-off-by: raver119 <raver119@gmail.com>
* Fix missed MSVC floating-point narrowing errors
Signed-off-by: Abdelrauf <rauf@konduit.ai>
* IndexReduce and Reduce3 split into few units
Signed-off-by: raver119 <raver119@gmail.com>
* IndexReductionLoops split as well
Signed-off-by: raver119 <raver119@gmail.com>
* reduce_float split as well
Signed-off-by: raver119 <raver119@gmail.com>
* working prototype of new CUDA build with cmake
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of older stuff
Signed-off-by: raver119 <raver119@gmail.com>
* remove legacy CUDA debug section
Signed-off-by: raver119 <raver119@gmail.com>
* fPIC for GCC
Signed-off-by: raver119 <raver119@gmail.com>
* - switch to /MD
- make MSVC runtime lib configurable from 1 place
Signed-off-by: raver119 <raver119@gmail.com>
* few last tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* make static library optional
Signed-off-by: raver119 <raver119@gmail.com>
* typo fixed
Signed-off-by: raver119 <raver119@gmail.com>
* resize_bicubic: allow more dtypes
Signed-off-by: raver119 <raver119@gmail.com>
* resize_bicubic: allow less dtypes
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored resize_bicubic op to fully conform with TF 1.5, plus tests.
* Corrected test to proper data type output.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected double input test to float constant outputs.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Finished correcting tests for the expected bicubic interpolated resizes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed adjust_contrast ops to allow non-RGB inputs.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored adjust_contrast_v2 to conform with the TF one.
Signed-off-by: shugeo <sgazeos@gmail.com>
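For reference, TF defines adjust_contrast_v2 as a per-channel stretch about the channel mean,

    \text{out}_{c}(i,j) = \bigl(x_{c}(i,j) - \mu_{c}\bigr)\,f + \mu_{c},

where \mu_{c} is the spatial mean of channel c and f is the contrast factor; the commit above brings the op in line with that behavior.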
* AdjustContrast tests activated
* two typos fixed
Signed-off-by: raver119 <raver119@gmail.com>
* cleaned up bert iterator tests (#110)
Signed-off-by: eraly <susan.eraly@gmail.com>
* Various pre-release fixes (#111)
* Various fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix default dtypes for MaxPoolWithArgmax
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small pre-release tweak (#112)
* Log UI address on launch as in previous Play-based UI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Logging level tweak for UI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* http not https
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* datavec python ensure host (#113)
* ensure host
* one more host ensure
* info->debug
* [WIP] reverse improvements (#115)
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* reverse draft
Signed-off-by: raver119 <raver119@gmail.com>
* reverse kernel
Signed-off-by: raver119 <raver119@gmail.com>
* reverse kernel
Signed-off-by: raver119 <raver119@gmail.com>
* 2 micro fixes
Signed-off-by: raver119 <raver119@gmail.com>
* Shugeo resize fix5 (#102)
* Refactored resize images ops to use TF-like bool args as input.
* Refactored helpers for cpu implementation of resize_bilinear and resize_nearest_neighbor ops.
* Refactored cuda implementation for image.resize_bilinear and image.resize_nearest_neighbor ops helpers.
* Refactored nearest_neighbor resize op.
* Added a pair of tests for special case of resize_bilinear algorithm.
* Fixed issue with resize_bilinear op.
* Refactored cpu implementation for helpers with resize_nearest_neighbor op.
* Final fixes for resize ops to conform to TF v1.5
* Refactored cuda helpers for resize_nearest_neighbor op.
* Fixed resize_bilinear to accept proper data.
* Fixed issue with non-float input for resize_bilinear op.
* Refactored cuda helper for resize_bilinear to properly process non-float inputs.
* Added tests for resize_bilinear to int inputs.
* Fixed ResizeBilinear wrapper
* Tests fixed
* Fixed float and bool constant to avoid overflow for some kinds of compilers.
* Corrected float constants with float data type.
* Added f suffix for float constants.
* Corrected float constant to avoid overflow with initializing lists.
* Corrected float initializing list with float input.
* Corrected bool constant with initializing list.
* Corrected float and bool values with initializing lists.
* Fixed wrong constant.
* Fixed issue with 1x1 input picture for resize.
* ResizeBilinear default values on import fix
Signed-off-by: raver119 <raver119@gmail.com>
* - add padding calculation in same mode in causal conv1d op for right mkl paddings
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correct causal condition in mkldnnUtils.cpp
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correct some code which caused additional rounding errors in betainc op
Signed-off-by: Yurii <iuriish@yahoo.com>
* - put float in place of template parameter in nan assign in betainc op
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add possibility of passing scalar-array as input parameter for scale factor in adjust hue/contrast/saturation ops
- correct typo in function which calculates regularized incomplete beta integral
Signed-off-by: Yurii <iuriish@yahoo.com>
* - fix bug in betainc cuda kernel
Signed-off-by: Yurii <iuriish@yahoo.com>
* - start working on implementation of digamma function
Signed-off-by: Yurii <iuriish@yahoo.com>
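For reference, the digamma function implemented here is the logarithmic derivative of the gamma function,

    \psi(x) = \frac{d}{dx}\ln\Gamma(x) = \frac{\Gamma'(x)}{\Gamma(x)},

and the polyGamma kernel corrected in a later commit generalizes it as \psi^{(n)}(x) = \frac{d^{\,n+1}}{dx^{\,n+1}}\ln\Gamma(x).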
* - further work on digamma function (cpu)
Signed-off-by: Yurii <iuriish@yahoo.com>
* - testing and fixing bugs in digamma op
Signed-off-by: Yurii <iuriish@yahoo.com>
* - make correction in cuda kernel for polyGamma
Signed-off-by: Yurii <iuriish@yahoo.com>
* - remove unnecessary stuff from betaInc cuda kernel
Signed-off-by: Yurii <iuriish@yahoo.com>
* - resolve conflicts in DeclarableOpsTests3.cpp after master branch has been merged
Signed-off-by: Yurii <iuriish@yahoo.com>
* - restore id number of Not operation in legacy_ops.h
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correct padding calculation in mkl dnn conv1d causal
Signed-off-by: Yurii <iuriish@yahoo.com>
* restore empty check in adjust_contrast_v2
Signed-off-by: raver119 <raver119@gmail.com>
* fix narrowing down cast
Signed-off-by: raver119 <raver119@gmail.com>
* trigger jenkins
Signed-off-by: raver119 <raver119@gmail.com>
* few more fixes for MSVC and Windows
Signed-off-by: raver119 <raver119@gmail.com>
* few more fixes for MSVC and Windows
Signed-off-by: raver119 <raver119@gmail.com>
* few more fixes for MSVC and Windows
Signed-off-by: raver119 <raver119@gmail.com>
* few more fixes for MSVC and Windows
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* - few more tweaks
- tensormmul dtype validation
Signed-off-by: raver119 <raver119@gmail.com>
* - few more tweaks
- batched gemm dtype validation
Signed-off-by: raver119 <raver119@gmail.com>
* - few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* - few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* - few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* - few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* - add causal mode of padding to convolutions
Signed-off-by: Yurii <iuriish@yahoo.com>
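For reference, "causal" padding for a 1D convolution with kernel size k and dilation d pads only on the left,

    p_{\text{left}} = (k - 1)\,d, \qquad p_{\text{right}} = 0,

so each output step depends only on current and earlier inputs; that is the behavior the conv changes above and below add.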
* - add additional tests for causal conv1d
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add causal mode for cuda conv kernels
Signed-off-by: Yurii <iuriish@yahoo.com>
* Java side of Conv1D changes
Signed-off-by: raver119 <raver119@gmail.com>
* Add Conv1DDerivative op
Signed-off-by: Alex Black <blacka101@gmail.com>
* Causal Conv1D gradient checks
Signed-off-by: Alex Black <blacka101@gmail.com>
* Tweaks
Signed-off-by: Alex Black <blacka101@gmail.com>
* - add causal padding mode to conv2d_bp
Signed-off-by: Yurii <iuriish@yahoo.com>
* More thorough causal conv1d tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Implementation for non_max_suppression_v3 was added. Initial version
* Added check for exceeding the threshold.
* Added definition for V3 method.
* java remapping for NonMaxSuppressionV3
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed proper processing of an empty output and test.
* Refactored op to reduce threshold data to float.
* Implemented cuda-based helper for non_max_suppression_v3 op.
* Fixed fake_quant_with_min_max_vars op.
* Fixed tests with float numbers.
* - assert now stops execution
- sortByKey/sortByValue now have input validation
Signed-off-by: raver119 <raver119@gmail.com>
* missing var
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed proper processing for zero max_size inputs.
* Refactored kernel callers.
* Fixed return statement for logdet op helper.
* Refactored unsorted segment SqrtN op.
* get back 8 tail bytes on CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored segment prod ops and helpers for cuda and tests.
* Additional test.
* CudaWorkspace tests updated for 8 tail bytes
Signed-off-by: raver119 <raver119@gmail.com>
* special atomic test
Signed-off-by: raver119 <raver119@gmail.com>
* atomicMul/atomicDiv fix for 16bit values
Signed-off-by: raver119 <raver119@gmail.com>
* Eliminated unneeded prints.
* - improve performance of scatter (no lock) ops for 1D case
Signed-off-by: Yurii <iuriish@yahoo.com>
* - improve scatter lock op performance for 1D case
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add kernel for verification of input indices-array elements in scatter and scatter_nd ops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - provide fast indices checking on cpu side for scatter and gather ops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - apply corrections requested by pr reviewer
Signed-off-by: Yurii <iuriish@yahoo.com>
* Corrected input checking and tests for bitcast op.
* Fixed an issue with non_max_suppression shape generation and processing when a score threshold is given.
* Fixed bilinear resize kernel and tests.
* push for Serhii
Signed-off-by: raver119 <raver119@gmail.com>
* Added test for nearest_neighbor resize with int input.
* Added data type check for input/output match.
* Eliminate error in macros.
* Improved output message for type checking.
* Fixed input/output types for op.
* Eliminated unneeded logging.
* Refactored resize_bilinear helper for multithreading for cpu platform.
* Cosmetic changes only.
* Fixed error for string substitution.
* Skip test for cbow_batch with cuda.
* fix for resizeNearestNeighbor output dtype
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored non_max_suppression helper.
* Refactored shape generation and input handling.
* Added additional test.
* - profiling cuda kernels for vol2col and im2col
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correct addBias helper
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correct mkl dilation formula and switch off mkl api for dilated deconvolutions
Signed-off-by: Yurii <iuriish@yahoo.com>
* - create op
- skip exec for empty inputs for non_max_suppression
- EmptyHandling idea
Signed-off-by: raver119 <raver119@gmail.com>
* Create op and mapping for it
Signed-off-by: raver119 <raver119@gmail.com>
* Added implementation files for image_resize and resize_bicubic ops.
* Image resize and image.resize_bicubic ops implementation. Initial revision.
* Finished infrastructure development for image.resize_bilinear op and image_resize op implementation.
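For background, resize_bilinear interpolates each output pixel from its four nearest input pixels: with integer corners (y_0, x_0) through (y_1, x_1) and fractional offsets dy, dx,

    \text{out} = (1-dy)(1-dx)\,I(y_0,x_0) + (1-dy)\,dx\,I(y_0,x_1) + dy\,(1-dx)\,I(y_1,x_0) + dy\,dx\,I(y_1,x_1);

the bicubic variant developed in the commits below replaces this 2x2 neighborhood with a 4x4 one and cubic weights.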
* Refactored resize methods.
* Added processing for Mitchellcubic algorithm.
* Added check for input/output sizes.
* Added int and float types for crop_and_resize op.
* Refactored crop_and_resize output type check.
* Added helper for bicubic interpolation as TF v.1 does.
* Added TF v.1 bicubic helper for cuda platform.
* Added cached class for bicubic algorithm.
* Refactored cuda implementation for crop_and_resize helper to use proper output type.
* Added facilities for bicubic interpolation.
* Ported bicubic interpolation from TF.
* Added tests for resize_bilinear testing.
* Working implementation of bicubic interpolation and tests.
* Refactored routines with image_resize bicubic op helper.
* Refactored code with coding standards.
* Refactored cpu helpers for resize_bicubic op.
* Refactored bicubic helpers.
* Added bicubic resize facilities.
* Implementing cuda kernels for bicubic interpolation. Implementation step.
* Cuda implementation of resize_bicubic op helper.
* Refactor image.resize_bicubic op helpers.
* Refactored helpers for resize_bicubic. Added error checking with cuda implementation.
* Refactored cuda implementation of resize_bicubic op helper. The first working revision.
* Cuda arch implementation for resize_bicubic op helper. Full working single-threaded revision.
* Intermediate bicubic interpolation helper for cuda.
* Refactored cpu helper for resize_bicubic.
* Multithreaded cuda implementation for resize_bicubic.
* Fixed merge issues.
* Refactored nlp helpers.
* Replicated resize_bicubic for 3D also.
* Eliminated unneeded comments around unused code.
* Eliminated unneeded comments along with unused code.
* Eliminated unneeded template definitions.
* Eliminated unneeded debug code.
* Eliminated unneeded comments.
* Fixed multithreading with helpers.
* Fixed test suites for float and double in float point input lists.
* Fixed usage of reshape with 3D/4D on resizes.
* Final fixes.
* Fixed resize_neighbor op problem.
* Added a pair of tests for failed ops.
* Fixed cpu helper for draw_bounding_boxes op.
* Refactored implementation of draw_bounding_boxes op to full conform with TF.
* Improved multithreading with draw_bounding_boxes op cuda helper.
* Eliminated log messages.
* Changed logging with draw_bounding_boxes op helper and tests.
* Resize_bilinear with 3D input allowed.
* Refactored 3D input acceptance for resize_bilinear op.
* And another improvement.
* Refactored reshape of input/output for resize_bilinear.
* Improvements final.
* Finished with 3D replication for image.resize_bilinear/_nearest_neighbor.
* Added copyrights for TF code.
* Using new form of multithreading for cpu implementation.
* Fixed shape error.
* Added multithreading over batches in crop_and_resize functor.
* Refactored multithreading with crop_and_resize and draw_bounding_boxes.
* - get rid of some copy procedures in mmulHelper ops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further work on embedding cuda api for batched gemm (cublasGemmBatchedEx) in our mmulHelper class
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further work on cuda batched gemm api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - write own cuda kernel performing batched gemm
Signed-off-by: Yurii <iuriish@yahoo.com>
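For reference, a batched GEMM (what cublasGemmBatchedEx and the custom kernel above compute) evaluates, for every batch index i,

    C_i \leftarrow \alpha\,\mathrm{op}(A_i)\,\mathrm{op}(B_i) + \beta\,C_i,

i.e. one independent GEMM per matrix pair in the batch.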
* missing include in MmulHelper
Signed-off-by: raver119 <raver119@gmail.com>
* - forgot to keep previous correct kernels for mmulNxN in the code, since the new one may fail for some reason in the future
Signed-off-by: Yurii <iuriish@yahoo.com>
* disable old tensordot
Signed-off-by: raver119 <raver119@gmail.com>
* - rewrite cuda kernels for usualGemm and usualGemv
Signed-off-by: Yurii <iuriish@yahoo.com>
* - profiling mmul helpers
Signed-off-by: Yurii <iuriish@yahoo.com>
* - prints to check shapes were added
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correct type of output array C in mmulNxN
Signed-off-by: Yurii <iuriish@yahoo.com>
* - take into account possible nans in C array
Signed-off-by: Yurii <iuriish@yahoo.com>
* slightly change numThreads message
Signed-off-by: raver119 <raver119@gmail.com>
* - make corrections in accordance to given notes in pr review
Signed-off-by: Yurii <iuriish@yahoo.com>
* Added implementation files for image_resize and resize_bicubic ops.
* Image resize and image.resize_bicubic ops implementation. Initial revision.
* Minor fix
* Some TF imports disabled.
* Finished infrastructure development for image.resize_bilinear op and image_resize op implementation.
* Refactored resize methods.
* Added processing for Mitchellcubic algorithm.
* adjust_contrast
* Small fix for TF import expected value loading when variable name starts with the test name
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tests
* Tests added.
* Removed tf names absent in mapping.
* Some fixes.
* Small fixes
* Minor change
* Some failing tests.
* Disable failed test
* Ignore some tests
* Fix import class mapping
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix float property mapping (flatbuffers)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Override equality function for model 'dropout'
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fail tests
* Failed tests ignored temporarily.
* Minor fixes
* Small fix
* Conflict resolved
* Default implementations of tensorflowName and onnxName
* one range test
Signed-off-by: raver119 <raver119@gmail.com>
* few Context convenience signatures
Signed-off-by: raver119 <raver119@gmail.com>
* one more range test
Signed-off-by: raver119 <raver119@gmail.com>
* "range" "fix"
Signed-off-by: raver119 <raver119@gmail.com>
* adjust_contrast_v2 now allows scale factor to be provided via input_variable
Signed-off-by: raver119 <raver119@gmail.com>
* adjust_contrast now allows scale factor as variable too
Signed-off-by: raver119 <raver119@gmail.com>
* bitcast shape tests
Signed-off-by: raver119 <raver119@gmail.com>
* BitCast import dtype added
Signed-off-by: raver119 <raver119@gmail.com>
* few more BitCast signatures
Signed-off-by: raver119 <raver119@gmail.com>
* - platform helpers can be disabled on per-op basis now via Context::allowHelpers
- java has access to it as well
Signed-off-by: raver119 <raver119@gmail.com>
* global platform-helpers trigger
Signed-off-by: raver119 <raver119@gmail.com>
* few signatures renamed
Signed-off-by: raver119 <raver119@gmail.com>
* - few new env variables to follow
- maxThreads/masterThreads differentiation
Signed-off-by: raver119 <raver119@gmail.com>
* Javadoc update
Signed-off-by: raver119 <raver119@gmail.com>
* Corrected randomuniform declaration.
* Refactored uniform distribution for both cuda and cpu platforms.
* Refactored uniform distribution and tests.
* Fixed type usage with indices.
* Refactored uniform distribution implementation and tests to full conform with TF implementation.
* Refactored gamma function to use type util method.
* Copyright changes and fixes with ConstantHelper.
* Added error checking on allocate cuda device memory and operations.
* Added implementation for random_gamma op.
* Added implementation for random_poisson op and support classes.
* Added helpers for random_poisson and random_gamma ops.
* Implementation of random_poisson. The first working edition.
* Implementation of random_poisson. Parallelized working edition.
* Implementation of random_gamma. Parallelized working edition with alpha only.
* Added cuda implementation for helper of poisson distribution.
* Corrected shape calculation with random_gamma and tests.
* Finished cpu implementation for gamma distribution.
* Finished cuda implementation for random_gamma op.
* Refactored cpu helpers for random_gamma and random_poisson ops.
* Refactored cuda helpers for gamma and poisson distribution.
* Refactored cuda helper for gamma distribution.
* Refactored cpu helper for random_poisson op.
* Refactored cpu helper for random_gamma op.
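For reference, the distributions behind random_gamma and random_poisson are

    f(x;\alpha,\beta) = \frac{\beta^{\alpha}\,x^{\alpha-1}\,e^{-\beta x}}{\Gamma(\alpha)} \ (x > 0), \qquad P(X = k;\lambda) = \frac{\lambda^{k} e^{-\lambda}}{k!},

with shape \alpha, rate \beta and expected count \lambda; the "alpha only" edition mentioned above corresponds to the default rate \beta = 1.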
* #8280 biasadd_bp nchw arg fixes (java side) + test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8285 Concat op Java side fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Concat op cpp fix - allow dynamic axis to be negative, same as static axis
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ignores for deconv3d import tests until deconv3d_tf op is implemented
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* - provide possibility to pass axis as last input array in concat op
- correct summation in bias_add_bp op for NHWC case
Signed-off-by: Yurii <iuriish@yahoo.com>
* - write code for deconv2d op based on mkl dnn api
* no unsafe math
Signed-off-by: raver119 <raver119@gmail.com>
* no unsafe math
Signed-off-by: raver119 <raver119@gmail.com>
* - get rid of e<> and p<> methods in svd helper
Signed-off-by: Yurii <iuriish@yahoo.com>
* - provide mkl api support for deconvolution 3d
Signed-off-by: Yurii <iuriish@yahoo.com>
* - write deconv2d_bp based on mkl api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - write deconv3d_bp based on mkl api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - testing and fixing deconv based on mkl api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - remove dilation from conv2d/3d mkl
Signed-off-by: Yurii <iuriish@yahoo.com>
* - minor changes
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further corrections of deconv ops based on mkl dnn api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - provide deconv2d_tf based on mkl dnn api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add minor corrections required by reviewer
Signed-off-by: Yurii <iuriish@yahoo.com>
* Added non_max_suppression_overlaps op and tests.
* Refactored implementation of non_max_suppression_overlaps.
* Refactoring of implementation of non_max_suppression_overlaps op.
* Refactoring of implementation of non_max_suppression op.
* Fixed porting error.
* Added cuda frontends for image suppression ops.
* Eliminated crash with cuda arch on image.non_max_suppression_overlaps op.
* Improved implementation of image_suppression helper for cpu platform.
* The generic approach of non_max_suppression_overlaps op helper with cuda platform.
* Working cuda implementation of helper non_max_suppression_overlaps op.
* Eliminated waste comments.
* Improved implementations for both platforms
* Refactored cuda implementation of image.non_max_suppression_overlaps op helper.
* Improved cuda implementation of non_max_suppression op helper.
* Refactored cuda implementation of image.non_max_suppression_overlaps op helper.
* Improved cuda implementation of image.non_max_suppression_overlaps op helper.
* Added modifications into cuda implementation for image suppression overlaps op.
* Correct queue emulation with cuda implementation of non_max_suppression_overlaps op.
* Prefinal stage of cuda implementation of non_max_suppression_overlaps.
* Working cuda implementation of non_max_suppression_overlaps helper.
* Fixed return to proper thread.
* Improvements for cuda implementation of image.non_max_suppression_overlaps op helper.
* Fixed implementation issues with non_max_suppression_overlaps on cuda platform.
* Fixed skip for non_max_suppression_overlaps on cuda platform.
* Finalize implementation of image_suppression helper and tests.
* Cosmetic changes only.
* - write code for new batchnorm backprop
Signed-off-by: Yurii <iuriish@yahoo.com>
* - testing batchnorm backprop
Signed-off-by: Yurii <iuriish@yahoo.com>
* - write code for batchnorm backprop based on mkl dnn api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - testing and fixing bugs in batchnorm_bp mkl dnn
Signed-off-by: Yurii <iuriish@yahoo.com>
* - made corrections required by reviewer
Signed-off-by: Yurii <iuriish@yahoo.com>
* - change name in java wrapper for batchnorm op
Signed-off-by: Yurii <iuriish@yahoo.com>