* - profiling of concat op (both cuda and cpu)
Signed-off-by: Yurii <iuriish@yahoo.com>
* better comparison for large concat
Signed-off-by: raver119 <raver119@gmail.com>
* - further improving of concat op
Signed-off-by: Yurii <iuriish@yahoo.com>
* some logging
Signed-off-by: raver119 <raver119@gmail.com>
* - add possibility to verify presence of trailing unities in shape and set strides/ews correspondingly
- restrict second simple case in concat op to c order only
Signed-off-by: Yurii <iuriish@yahoo.com>
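As a rough illustration of the trailing-unities idea (plain Java, not the libnd4j shape API; the method name is made up for this sketch): for a c-order array, trailing dimensions of size 1 impose no constraint on the strides, so they can be skipped when deciding whether the buffer can be walked with element-wise stride 1.

```java
public class TrailingUnities {
    // Returns true if a c-order array with the given shape/strides can be
    // traversed with element-wise stride 1, ignoring trailing size-1 dims.
    static boolean hasEws1IgnoringTrailingUnities(long[] shape, long[] strides) {
        int last = shape.length - 1;
        while (last >= 0 && shape[last] == 1)   // skip trailing unities
            last--;
        long expected = 1;                      // expected c-order stride
        for (int i = last; i >= 0; i--) {
            if (shape[i] == 1) continue;        // size-1 dims impose no constraint
            if (strides[i] != expected) return false;
            expected *= shape[i];
        }
        return true;
    }

    public static void main(String[] args) {
        // shape [2,3,1,1] with c-order strides [3,1,1,1]: contiguous, ews == 1
        System.out.println(hasEws1IgnoringTrailingUnities(
                new long[]{2, 3, 1, 1}, new long[]{3, 1, 1, 1}));   // true
    }
}
```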
* - move concat op to specials_single.cpp file
Signed-off-by: Yurii <iuriish@yahoo.com>
* - get rid of second concat op declaration in transforms.cpp file
Signed-off-by: Yurii <iuriish@yahoo.com>
Co-authored-by: raver119 <raver119@gmail.com>
* Libnd4j: TensorMMul backprop op #8174, raw implementation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: TensorMMul backprop op #8174 merge master and some corrections
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: TensorMMul backprop op #8174 algorithm update, need testing, sync with master
* Libnd4j: TensorMMul backprop op #8174 fixed incorrect B axes calculation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: TensorMMul backprop op #8174 optimized axes identification and fixed a bug with overlapping indices; added first test, needs testing with different shapes
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: TensorMMul backprop op #8174 some fixes and improvements; needs more testing
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: TensorMMul backprop op #8174 fixed order of matrix multiply
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: TensorMMul backprop op #8174 fixed issue of incorrect axes definition, added tests based on TF; needs additional testing for the case where dLdC is not equal to 1
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: TensorMMul backprop op #8174 fixed scalar case, added test
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: TensorMMul backprop op #8174 fixed bp algorithm and axes definition; needs some more testing with different order combinations (f,c; c,f; f,f) and some input checks still have to be added
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: TensorMMul backprop op #8174 added some checks, corrections and tests; a problem remains with support for different input orders (A: f, B: c and A: f, B: f)
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: TensorMMul backprop op #8174 sync master
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* - correct bug in MmulHelper::tensorDot(a, b, c, axes_a, axes_b, permutForC)
Signed-off-by: Yurii <iuriish@yahoo.com>
* Libnd4j: TensorMMul backprop op #8174 code clean up and refactoring
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
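For intuition, in the rank-2 special case tensorDot reduces to a plain matrix product C = A·B, and the backprop reduces to dLdA = dLdC·Bᵀ and dLdB = Aᵀ·dLdC; the general case also contracts dLdC against the other input over the non-contracted axes and permutes the result back to the original axis order. A minimal plain-Java sketch of the rank-2 identity (illustrative only, not the libnd4j implementation):

```java
public class MatMulBackprop {
    // C = A (m x k) * B (k x n)
    static double[][] matmul(double[][] a, double[][] b) {
        int m = a.length, k = b.length, n = b[0].length;
        double[][] c = new double[m][n];
        for (int i = 0; i < m; i++)
            for (int p = 0; p < k; p++)
                for (int j = 0; j < n; j++)
                    c[i][j] += a[i][p] * b[p][j];
        return c;
    }

    static double[][] transpose(double[][] a) {
        double[][] t = new double[a[0].length][a.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                t[j][i] = a[i][j];
        return t;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2, 3}, {4, 5, 6}};          // 2 x 3
        double[][] b = {{1, 0}, {0, 1}, {1, 1}};        // 3 x 2
        double[][] dLdC = {{1, 1}, {1, 1}};             // upstream gradient, 2 x 2

        double[][] dLdA = matmul(dLdC, transpose(b));   // [[1, 1, 2], [1, 1, 2]]
        double[][] dLdB = matmul(transpose(a), dLdC);   // [[5, 5], [7, 7], [9, 9]]
        System.out.println(java.util.Arrays.deepToString(dLdA));
        System.out.println(java.util.Arrays.deepToString(dLdB));
    }
}
```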
* - add check for linspace-ordered permutations in ShapeUtils::evalShapeForTensorDot
Signed-off-by: Yurii <iuriish@yahoo.com>
* - provide additional code in shape::reshape in order to reduce the amount of allocation/copy operations during the reshaping procedure
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further work on the problem of wrong shape evaluation during permute/reshape procedures
Signed-off-by: Yurii <iuriish@yahoo.com>
* - still looking for the cause of the bug in the reshape/permute code
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correct bug in transform cuda native ops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correct bug in NDArray::assign
Signed-off-by: Yurii <iuriish@yahoo.com>
* - remove old shape::reshape stuff
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add possibility to disable copy of old buffer to new buffer during reshape operation in NDArray class
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correct bug in tensorDot related to wrong pointer assignments
Signed-off-by: Yurii <iuriish@yahoo.com>
Co-authored-by: Oleh <oleg.semeniv@gmail.com>
* Gradients tests added
* Fix for Standard deviation serialization + test
Signed-off-by: Alex Black <blacka101@gmail.com>
* More fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Test fixed
* Spark config driver host config for CI
Signed-off-by: Alex Black <blacka101@gmail.com>
* Op validation timeout increase
Signed-off-by: Alex Black <blacka101@gmail.com>
* Gradient check - fix for low probability test failure due to randomly all 0s mask
Signed-off-by: AlexDBlack <blacka101@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* special workaround methods for DataBuffer.write
Signed-off-by: raver119 <raver119@gmail.com>
* one test removed
Signed-off-by: raver119 <raver119@gmail.com>
* more of unsynced
Signed-off-by: raver119 <raver119@gmail.com>
* missing asLong for BaseCudaDataBuffer
Signed-off-by: raver119 <raver119@gmail.com>
* Solve op for systems of linear equations. Initial commit.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed compiling issues.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Solve for systems of linear equations. Next stage commit.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added test for the solve operation for systems of linear equations.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added additional test and fixed lower matrix retrieval.
* Implementation of solve for systems of linear equations.
Signed-off-by: shugeo <sgazeos@gmail.com>
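The solve op returns x with A·x = b. As a hedged reference for the semantics only (the libnd4j helper goes through LU decomposition and triangular solves rather than this naive routine), a plain-Java Gaussian elimination with partial pivoting:

```java
public class LinearSolve {
    // Solves A * x = b for square A using Gaussian elimination with partial pivoting.
    static double[] solve(double[][] a, double[] b) {
        int n = b.length;
        for (int col = 0; col < n; col++) {
            // pick the pivot row with the largest absolute value in this column
            int pivot = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(a[r][col]) > Math.abs(a[pivot][col])) pivot = r;
            double[] tmpRow = a[col]; a[col] = a[pivot]; a[pivot] = tmpRow;
            double tmp = b[col]; b[col] = b[pivot]; b[pivot] = tmp;
            // eliminate entries below the pivot
            for (int r = col + 1; r < n; r++) {
                double f = a[r][col] / a[col][col];
                for (int c = col; c < n; c++) a[r][c] -= f * a[col][c];
                b[r] -= f * b[col];
            }
        }
        // back substitution on the resulting upper-triangular system
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--) {
            double s = b[i];
            for (int j = i + 1; j < n; j++) s -= a[i][j] * x[j];
            x[i] = s / a[i][i];
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] a = {{4, 1}, {2, 3}};
        double[] b = {9, 8};
        System.out.println(java.util.Arrays.toString(solve(a, b))); // ~[1.9, 1.4]
    }
}
```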
* Refactored permutation generation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added batched permutation restore to the cuda helper for the solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Finished cuda implementation for solve op helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored cpu helpers for solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fix gtest output on Windows
* Fixed issue with permutation matrix for cuda implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed issue with permutation matrix for cpu implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Eliminated unneeded comments.
Signed-off-by: shugeo <sgazeos@gmail.com>
* LinearSolve added
* Mapping added
* Javadoc added
* Refactored implementation of triangular_solve helpers and tests for solve matrix equations generally.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added a test for solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Solve test added
* Fix for TF import
Co-authored-by: Serhii Shepel <9946053+sshepel@users.noreply.github.com>
Co-authored-by: raver119 <raver119@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* - range op now accepts dargs
- dargs now can be in signature
Signed-off-by: raver119 <raver119@gmail.com>
* range dtype java side
Signed-off-by: raver119 <raver119@gmail.com>
* linspace fix
Signed-off-by: raver119 <raver119@gmail.com>
* lin_space fix for scalar outputs
Signed-off-by: raver119 <raver119@gmail.com>
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* - one more test for OneHot with dtype
- one more signature in Nd4j
Signed-off-by: raver119 <raver119@gmail.com>
* ones_as/zeros_as now accept dtype
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* - more updates for configurable data types
- ones_as/zeros_as java side + tests
Signed-off-by: raver119 <raver119@gmail.com>
* few c++ tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* few more changes around DArgs
Signed-off-by: raver119 <raver119@gmail.com>
* Cleanup modules
* Moving subprojects to nd4j-api
* Project cleanup
* Dropped AWS sub-project
* dl4j-util moved to core
* dl4j-perf moved to core
* Tests coverage
* Revert "Moving subprojects to nd4j-api"
This reverts commit bc6eb573c6b60c407ade47172c5d204725077e6b.
* Moved nd4j-buffer and nd4j-context to nd4j-api
* Rolled back change
* Revert "Project cleanup"
This reverts commit 64ac7f369b2d968f7be437718034f093fc886ffc.
* Datavec cleaned up
* Revert "Moved nd4j-buffer and nd4j-context to nd4j-api"
This reverts commit 75f4e8da80d2551e44e1251dd6c5923289fff8e1.
# Conflicts:
# nd4j/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/autodiff/opvalidation/ReductionBpOpValidation.java
* Resolve conflict
* Compilation fixed.
* nd4j-context and nd4j-buffer moved to nd4j-api
* Fixed TF mapping for mmul
* Fix for dl4j-cuda tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Move last few tests from deeplearning4j-nn to -core
Signed-off-by: Alex Black <blacka101@gmail.com>
* Remove incorrect TF import mapping for TensorMmul op
Signed-off-by: Alex Black <blacka101@gmail.com>
* Cleaned TF mapping
* Fix path for test results on windows
* Remove old dependency
Signed-off-by: Alex Black <blacka101@gmail.com>
* One more attempt to fix path for test results on windows
* fixup! One more attempt to fix path for test results on windows
* fixup! One more attempt to fix path for test results on windows
Co-authored-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Serhii Shepel <9946053+sshepel@users.noreply.github.com>
Co-authored-by: raver119 <raver119@gmail.com>
* Add maven profile + base tests methods for integration tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Switch from system property to environment variable; seems more reliable in intellij
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add nd4j-common-tests module, and common base test; cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Ensure all ND4J tests extend BaseND4JTest
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Test spam reduction, import fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add test logging to nd4j-aeron
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix unintended change
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Reduce Spark test log spam
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More test spam cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Significantly speed up TSNE tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* W2V iterator test unit/integration split
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More NLP test speedups
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Avoid debug/verbose mode leaking between tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* test tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Arbiter extends base DL4J test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Arbiter test speedup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* nlp-uima test speedup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More test speedups
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix ND4J base test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Few small ND4J test speed improvements
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J tests speedup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More tweaks
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Even more test speedups
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More tweaks
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Various test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* More test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Add ability to specify number of threads for C++ ops in BaseDL4JTest and BaseND4JTest
Signed-off-by: Alex Black <blacka101@gmail.com>
* nd4j-aeron test profile fix for CUDA
Signed-off-by: Alex Black <blacka101@gmail.com>
* Added implementation of the triangular_solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed compilation issues.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added verification of input data and helper facilities for the triangular_solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added cpu implementation for triangular_solve helpers.
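For reference, the core of a lower-triangular solve is forward substitution; the actual helpers additionally handle the upper/adjoint flags and batched inputs. A minimal plain-Java sketch:

```java
public class TriangularSolve {
    // Solves L * x = b where L is lower-triangular with a non-zero diagonal.
    static double[] solveLower(double[][] l, double[] b) {
        int n = b.length;
        double[] x = new double[n];
        for (int i = 0; i < n; i++) {
            double s = b[i];
            for (int j = 0; j < i; j++) s -= l[i][j] * x[j];  // subtract already-solved terms
            x[i] = s / l[i][i];
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] l = {{2, 0}, {3, 4}};
        double[] b = {4, 10};
        // x0 = 4 / 2 = 2, x1 = (10 - 3 * 2) / 4 = 1
        System.out.println(java.util.Arrays.toString(solveLower(l, b)));
    }
}
```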
* Added tests and implementation for upper triangular equations.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added a pair of cases to tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added multithreading with cpu helpers for triangular_solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added cuda implementation of triangular_solve op helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Finished cuda implementation of triangular_solve helpers and tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed copyright marks.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected grammar errors with doc and error messages.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored matrices processing in the triangular_solve cuda helper implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added triangular_solve wrapper
* Fixed mapping
* Added adjoint processing to the cpu helpers of the triangular_solve op implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added implementation for adjoint routine with cuda platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added multithreading with adjoint routine for cpu platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Added implementation for resize_area op. Initial commit.
* Added implementation of resize_area op. Initial revision.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected resizeArea functor call.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Implementation of resize_area. Cpu platform helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Implementation for resize_area helpers. The first part revision.
Signed-off-by: shugeo <sgazeos@gmail.com>
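Area resizing averages source pixels weighted by how much of each overlaps the destination cell; when downscaling by an integer factor this reduces to a plain block average. A hedged single-channel sketch of that special case in plain Java (not the general helper):

```java
public class ResizeAreaBlockAverage {
    // Downscales a single-channel image by an integer factor using block averaging,
    // which is what area interpolation reduces to when the factor divides the size.
    static float[][] downscale(float[][] src, int factor) {
        int outH = src.length / factor, outW = src[0].length / factor;
        float[][] dst = new float[outH][outW];
        for (int y = 0; y < outH; y++)
            for (int x = 0; x < outW; x++) {
                float sum = 0;
                for (int dy = 0; dy < factor; dy++)
                    for (int dx = 0; dx < factor; dx++)
                        sum += src[y * factor + dy][x * factor + dx];
                dst[y][x] = sum / (factor * factor);
            }
        return dst;
    }

    public static void main(String[] args) {
        float[][] src = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}, {13, 14, 15, 16}};
        float[][] dst = downscale(src, 2);   // 2x2 result: {{3.5, 5.5}, {11.5, 13.5}}
        System.out.println(java.util.Arrays.deepToString(dst));
    }
}
```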
* Added a set of tests for resize_area op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Cuda implementation for resize_area. Initial approach.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adding multithreading for resize_area algorithm.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Cuda implementation of resize_area helpers. Shared memory approach.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored resizeAreaKernel with cuda implementation.
* Eliminated compilation errors.
* ResizeArea helpers for cuda platform. The first working revision.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added test for batched resize_area op testing.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Implementation of resize_area for the cuda platform, and tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed multithreading with resize_area op helper.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected copyright marks with sources.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected copyright mark for resize_area op implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected copyright mark for parity ops header.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected typos in strings and related text in the image resize ops.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored resize_area helpers and multithreading.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added ResizeArea wrapper
* Added test with align_corners and fixed shape processing with only int args given for output size.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added test
* TF mapping for ResizeArea
* Fixed implementation issues with resize_area op for both platforms.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored image resizer struct to use flexible types for ints and floats.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Improved multithreading with resizeAreaKernel launch.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Use asynchronous memory copying for cuda platform image resize allocations.
Signed-off-by: shugeo <sgazeos@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 first step of Pow_bp operation implementation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 some corrections of calculation steps
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 some bug fixes; made the PowDerivative op broadcastable and added raw tests for the op; needs refactoring to use broadcast ops
* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed several bugs, added broadcast support and tests; scalar+array and array+scalar still need fixing
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed bugs for scalar inputs, fixed multinomial tests, added tests
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed bugs in support for different shapes, tests updated
* Libnd4j: Add broadcastable elementwise power derivative #7461 covered all possible variants via tiled arrays, added broadcast support for the Pow and PowDerivative ops (covered by tests); before review the tiled implementation has to be replaced by applyTrueBroadcast
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 replaced the tiled implementation with broadcast, fixed issue with negative x input, corrected tests; needs additional testing
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 added and corrected test cases, corrected implementation; needs review
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 code clean up
* Libnd4j: Add broadcastable elementwise power derivative #7461 code clean up, removed some tests, added tests with scalars
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 code improvement and clean up, split tests
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 some code clean up
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative replaced __isnanf with an internal implementation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
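For z = x^y the elementwise gradients the op produces are dLdx = dLdz · y · x^(y-1) and dLdy = dLdz · x^y · ln(x); when the inputs were broadcast, each gradient is additionally reduced (summed) back over the broadcast dimensions. A same-shape plain-Java sketch of the local derivatives (illustrative, not the libnd4j kernel):

```java
public class PowBackprop {
    // Local gradients for z = pow(x, y), same-shape case (no broadcast reduction).
    static double[] dLdX(double[] x, double[] y, double[] dLdZ) {
        double[] g = new double[x.length];
        for (int i = 0; i < x.length; i++)
            g[i] = dLdZ[i] * y[i] * Math.pow(x[i], y[i] - 1);
        return g;
    }

    static double[] dLdY(double[] x, double[] y, double[] dLdZ) {
        double[] g = new double[x.length];
        for (int i = 0; i < x.length; i++)
            g[i] = dLdZ[i] * Math.pow(x[i], y[i]) * Math.log(x[i]);
        return g;
    }

    public static void main(String[] args) {
        double[] x = {2, 3}, y = {3, 2}, up = {1, 1};
        System.out.println(java.util.Arrays.toString(dLdX(x, y, up))); // [12.0, 6.0]
        System.out.println(java.util.Arrays.toString(dLdY(x, y, up))); // [8*ln 2, 9*ln 3]
    }
}
```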
* pow_bp wrapper
* Fixed PowBp wrapper
* Tests added
* Test fixed
* Fix return type
* Disable powBp usage
* Pow backprop changed
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* SameDiff exec: Fix for switch op when predicate is constant, and op is inside loop
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Update ignores for failing zoo models
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Allow scalar op result array auto allocation
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Don't swallow underlying exception for calculateOutputShape execution failures
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Ignore for known keras failure
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Profiler
Signed-off-by: Alex Black <blacka101@gmail.com>
* Next steps, polishing, and loading SD/TF format JSON
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Next steps
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Profile comparison method
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Make profiling result writing async to reduce main thread overhead
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Profiling polishing
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Profile analyzer fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Polish
Signed-off-by: Alex Black <blacka101@gmail.com>
* Cleanup
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small formatting improvement
Signed-off-by: Alex Black <blacka101@gmail.com>
* Formatting tweak
Signed-off-by: Alex Black <blacka101@gmail.com>
* License headers
Signed-off-by: Alex Black <blacka101@gmail.com>
* Timeouts added
* Added some ops
* Ops added
* Fixed tests
* Minor fix
* Some fixes
* Digamma added
* Small fixes
* Fused batch norm fixes-
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tests switched off.
* Added test for resize_bicubic.
* Eliminated wasted code in the bicubic resize test.
* Switched off multithreading explicitly.
* HsvToRgb and RgbToHsv added
* Eliminated unneeded comments and used proper float constants.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed multithreading with resize_bicubic helper for cpu platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
* ResizeBicubic was fixed.
* Some fixes
* Fix op name
* Validation fixed.
* Clarifications for tests
* Wrappers and small fixes for new ops.
* resize_bicubic: allow more dtypes
Signed-off-by: raver119 <raver119@gmail.com>
* resize_bicubic: allow less dtypes
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored resize_bicubic op and tests to fully conform with TF 1.5.
* Corrected test to proper data type output.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected double input test to float constant outputs.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Finished correcting the expected values in tests for bicubic-interpolated resizes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed adjust_contrast ops to allow non-RGB inputs.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored adjust_contrast_v2 to conform with the TF version.
Signed-off-by: shugeo <sgazeos@gmail.com>
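adjust_contrast scales each channel around its own mean, out = (in - mean) · factor + mean, with the mean taken per channel over the spatial dimensions. A single-channel plain-Java sketch, purely illustrative:

```java
public class AdjustContrast {
    // out = (in - mean) * factor + mean, mean computed over the given channel.
    static float[] adjustContrast(float[] channel, float factor) {
        float mean = 0;
        for (float v : channel) mean += v;
        mean /= channel.length;
        float[] out = new float[channel.length];
        for (int i = 0; i < channel.length; i++)
            out[i] = (channel[i] - mean) * factor + mean;
        return out;
    }

    public static void main(String[] args) {
        float[] c = {0f, 2f, 4f};                       // mean = 2
        System.out.println(java.util.Arrays.toString(adjustContrast(c, 2f))); // [-2, 2, 6]
    }
}
```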
* AdjustContrast tests activated
* two typos fixed
Signed-off-by: raver119 <raver119@gmail.com>
* cleaned up bert iterator tests (#110)
Signed-off-by: eraly <susan.eraly@gmail.com>
* Various pre-release fixes (#111)
* Various fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix default dtypes for MaxPoolWithArgmax
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small pre-release tweak (#112)
* Log UI address on launch as in previous Play-based UI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Logging level tweak for UI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* http not https
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* datavec python ensure host (#113)
* ensure host
* one more host ensure
* info->debug
* [WIP] reverse improvements (#115)
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* reverse draft
Signed-off-by: raver119 <raver119@gmail.com>
* reverse kernel
Signed-off-by: raver119 <raver119@gmail.com>
* reverse kernel
Signed-off-by: raver119 <raver119@gmail.com>
* 2 micro fixes
Signed-off-by: raver119 <raver119@gmail.com>
* Shugeo resize fix5 (#102)
* Refactored resize images ops to use TF-like bool args as input.
* Refactored helpers for cpu implementation of resize_bilinear and resize_nearest_neighbor ops.
* Refactored cuda implementation for image.resize_bilinear and image.resize_nearest_neighbor ops helpers.
* Refactored nearest_neighbor resize op.
* Added a pair of tests for a special case of the resize_bilinear algorithm (see the interpolation sketch after this list).
* Fixed issue with resize_bilinear op.
* Refactored cpu implementation for helpers with resize_nearest_neighbor op.
* Final fixes for resize ops to conform to TF v1.5
* Refactored cuda helpers for resize_nearest_neighbor op.
* Fixed resize_bilinear to accept proper data.
* Fixed issue with non-float input for resize_bilinear op.
* Refactored cuda helper for resize_bilinear to properly process non-float inputs.
* Added tests for resize_bilinear with int inputs.
* Fixed ResizeBilinear wrapper
* Tests fixed
* Fixed float and bool constants to avoid overflow on some kinds of compilers.
* Corrected float constants with float data type.
* Added f suffix for float constants.
* Corrected float constant to avoid overflow with initializer lists.
* Corrected float initializer list with float input.
* Corrected bool constant with initializer list.
* Corrected float and bool values with initializer lists.
* Fixed wrong constant.
* Fixed issue with 1x1 input picture for resize.
* ResizeBilinear default values on import fix
Signed-off-by: raver119 <raver119@gmail.com>
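Regarding the TF-like bool args above: align_corners changes how destination indices map to source coordinates. With align_corners the scale is (inSize - 1) / (outSize - 1) so corner pixels map exactly onto corners; otherwise it is inSize / outSize (the half_pixel_centers variant is omitted here for brevity). A 1D linear-interpolation sketch in plain Java showing the two mappings (illustrative only):

```java
public class LinearResize1D {
    // 1D linear interpolation, the building block of resize_bilinear.
    // alignCorners: true  -> scale = (in-1)/(out-1), corners map to corners;
    //               false -> scale = in/out (simplified, half-pixel mode not shown).
    static float[] resize(float[] in, int outSize, boolean alignCorners) {
        float[] out = new float[outSize];
        float scale = alignCorners && outSize > 1
                ? (in.length - 1) / (float) (outSize - 1)
                : in.length / (float) outSize;
        for (int i = 0; i < outSize; i++) {
            float srcPos = i * scale;
            int lo = Math.min((int) srcPos, in.length - 1);
            int hi = Math.min(lo + 1, in.length - 1);
            float frac = srcPos - lo;
            out[i] = in[lo] * (1 - frac) + in[hi] * frac;   // linear blend of neighbours
        }
        return out;
    }

    public static void main(String[] args) {
        float[] in = {0f, 10f, 20f};
        System.out.println(java.util.Arrays.toString(resize(in, 5, true)));
        // align_corners: [0.0, 5.0, 10.0, 15.0, 20.0]
    }
}
```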
* - add causal mode of padding to convolutions
Signed-off-by: Yurii <iuriish@yahoo.com>
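Causal padding pads only on the left, by (kernelSize - 1) · dilation, so output step t never sees inputs later than t, and with stride 1 the output length equals the input length. A tiny plain-Java check of that arithmetic (illustrative):

```java
public class CausalPadding {
    // Left padding for causal convolution and resulting output length (stride 1 keeps length).
    static int causalPad(int kernelSize, int dilation) {
        return (kernelSize - 1) * dilation;
    }

    static int outputLength(int inputLength, int kernelSize, int dilation, int stride) {
        int padded = inputLength + causalPad(kernelSize, dilation);
        int effectiveKernel = (kernelSize - 1) * dilation + 1;
        return (padded - effectiveKernel) / stride + 1;
    }

    public static void main(String[] args) {
        // kernel 3, dilation 2 -> pad 4 on the left; with stride 1 the length is preserved
        System.out.println(causalPad(3, 2));              // 4
        System.out.println(outputLength(10, 3, 2, 1));    // 10
    }
}
```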
* - add additional tests for causal conv1d
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add causal mode for cuda conv kernels
Signed-off-by: Yurii <iuriish@yahoo.com>
* Java side of Conv1D changes
Signed-off-by: raver119 <raver119@gmail.com>
* Add Conv1DDerivative op
Signed-off-by: Alex Black <blacka101@gmail.com>
* Causal Conv1D gradient checks
Signed-off-by: Alex Black <blacka101@gmail.com>
* Tweaks
Signed-off-by: Alex Black <blacka101@gmail.com>
* - add causal padding mode to conv2d_bp
Signed-off-by: Yurii <iuriish@yahoo.com>
* More thorough causal conv1d tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* - create op
- skip exec for empty inputs for non_max_suppression
- EmptyHandling idea
Signed-off-by: raver119 <raver119@gmail.com>
* Create op and mapping for it
Signed-off-by: raver119 <raver119@gmail.com>
* Added implementation files for image_resize and resize_bicubic ops.
* Image resize and image.resize_bicubic ops implementation. Initial revision.
* Minor fix
* Some TF imports disabled.
* Finished infrastructure development for the image.resize_bilinear op and the image_resize op implementation.
* Refactored resize methods.
* Added processing for the Mitchellcubic algorithm.
* adjust_contrast
* Small fix for TF import expected value loading when variable name starts with the test name
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tests
* Tests added.
* Removed tf names absent in mapping.
* Some fixes.
* Small fixes
* Minor change
* Some failing tests.
* Disable failed test
* Ignore some tests
* Fix import class mapping
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix float property mapping (flatbuffers)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Override equality function for model 'dropout'
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fail tests
* Failed tests ignored temporarily.
* Minor fixes
* Small fix
* Conflict resolved
* Default implementations of tensorflowName and onnxName
* one range test
Signed-off-by: raver119 <raver119@gmail.com>
* few Context convenience signatures
Signed-off-by: raver119 <raver119@gmail.com>
* one more range test
Signed-off-by: raver119 <raver119@gmail.com>
* "range" "fix"
Signed-off-by: raver119 <raver119@gmail.com>
* adjust_contrast_v2 now allows scale factor to be provided via input_variable
Signed-off-by: raver119 <raver119@gmail.com>
* adjust_contrast now allows scale factor as variable too
Signed-off-by: raver119 <raver119@gmail.com>
* bitcast shape tests
Signed-off-by: raver119 <raver119@gmail.com>
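bitcast reinterprets the raw bytes of the input as another dtype, so the shape changes along the last dimension when the element sizes differ (e.g. each double becomes two 32-bit ints). A plain-Java sketch with ByteBuffer, purely to illustrate the shape rule being tested:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BitCastDemo {
    // Reinterprets a double[] (8 bytes/element) as an int[] (4 bytes/element):
    // the last dimension doubles, the bits are untouched.
    static int[] bitcastDoubleToInt(double[] in) {
        ByteBuffer buf = ByteBuffer.allocate(in.length * Double.BYTES)
                                   .order(ByteOrder.LITTLE_ENDIAN);
        for (double d : in) buf.putDouble(d);
        buf.flip();
        int[] out = new int[in.length * 2];
        for (int i = 0; i < out.length; i++) out[i] = buf.getInt();
        return out;
    }

    public static void main(String[] args) {
        int[] out = bitcastDoubleToInt(new double[]{1.0, 2.0});
        System.out.println(out.length);                       // 4: shape [2] -> [2, 2]
        System.out.println(java.util.Arrays.toString(out));
    }
}
```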
* BitCast import dtype added
Signed-off-by: raver119 <raver119@gmail.com>
* few more BitCast signatures
Signed-off-by: raver119 <raver119@gmail.com>
* #8280 biasadd_bp nchw arg fixes (java side) + test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8285 Concat op Java side fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Concat op cpp fix - allow dynamic axis to be negative, same as static axis
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ignores for deconv3d import tests until deconv3d_tf op is implemented
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* update javadocs and a few method signatures
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add PRelu op
Signed-off-by: Ryan Nett <rnett@skymind.io>
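PRelu is elementwise: out = x where x >= 0, and alpha · x otherwise, with alpha learned per channel and broadcast over the remaining dimensions. A plain-Java sketch of the forward pass (the channel layout here is an assumption for illustration):

```java
public class PRelu {
    // out[i] = x[i] if x[i] >= 0, else alpha[channel] * x[i]
    // Assumes the channel index cycles along the flattened input (illustrative layout).
    static float[] prelu(float[] x, float[] alpha, int channels) {
        float[] out = new float[x.length];
        for (int i = 0; i < x.length; i++) {
            float a = alpha[i % channels];      // per-channel slope, broadcast over x
            out[i] = x[i] >= 0 ? x[i] : a * x[i];
        }
        return out;
    }

    public static void main(String[] args) {
        float[] x = {-2f, -1f, 3f, -4f};
        float[] alpha = {0.1f, 0.5f};           // 2 channels
        System.out.println(java.util.Arrays.toString(prelu(x, alpha, 2)));
        // [-0.2, -0.5, 3.0, -2.0]
    }
}
```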
* test and fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add PRelu op
Signed-off-by: Ryan Nett <rnett@skymind.io>
* test and fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* slightly better test
Signed-off-by: Ryan Nett <rnett@skymind.io>
* Fixed signatures. SameDiff tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Tests fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Test fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Small fix
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Fixed test
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* fix execBackwards training issue
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix validation not specifying outputs
Signed-off-by: Ryan Nett <rnett@skymind.io>
* another fix for validation listeners and history
Signed-off-by: Ryan Nett <rnett@skymind.io>
* tests
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add single batch dataset output methods
Signed-off-by: Ryan Nett <rnett@skymind.io>
* SDCNN cleanup
Signed-off-by: Ryan Nett <rnett@skymind.io>
* NonNull annotations
Signed-off-by: Ryan Nett <rnett@skymind.io>
* better javadoc, NonNull fix for sconv
Signed-off-by: Ryan Nett <rnett@skymind.io>
* update builders to fix names
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* even more fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix for null bias
Signed-off-by: Ryan Nett <rnett@skymind.io>
* one test for alex
Signed-off-by: raver119 <raver119@gmail.com>
* fix
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of safety offset in cpp
Signed-off-by: raver119 <raver119@gmail.com>
* bfloat16
Signed-off-by: raver119 <raver119@gmail.com>
* minor test rearrangement to fastpath launch
Signed-off-by: raver119 <raver119@gmail.com>
* - atomicAdd/Mul/Div fix for float16/bfloat16 misalignment
- one special test for maxpoolbp java
- safety offset of 8 bytes is back to libnd4j legacy
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored kernels for segment_max/min/sum ops.
* Refactored segment_prod kernels.
* Refactored segment_prod kernels.
* DynamicPartition test
Signed-off-by: raver119 <raver119@gmail.com>
* Added linear test for dynamic_partition op.
* Refactored test with int datatype.
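dynamic_partition routes each input element into one of numPartitions outputs according to a same-shaped partition-index array (dynamic_stitch is its inverse). A linear (1D) plain-Java sketch mirroring the linear test above:

```java
import java.util.ArrayList;
import java.util.List;

public class DynamicPartitionDemo {
    // Splits data into numPartitions lists according to partitions[i].
    static List<List<Integer>> dynamicPartition(int[] data, int[] partitions, int numPartitions) {
        List<List<Integer>> out = new ArrayList<>();
        for (int p = 0; p < numPartitions; p++) out.add(new ArrayList<>());
        for (int i = 0; i < data.length; i++)
            out.get(partitions[i]).add(data[i]);    // element i goes to partition partitions[i]
        return out;
    }

    public static void main(String[] args) {
        int[] data = {10, 20, 30, 40, 50};
        int[] parts = {0, 1, 0, 1, 0};
        System.out.println(dynamicPartition(data, parts, 2)); // [[10, 30, 50], [20, 40]]
    }
}
```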
* some logging
Signed-off-by: raver119 <raver119@gmail.com>
* some logging
Signed-off-by: raver119 <raver119@gmail.com>
* some logging
Signed-off-by: raver119 <raver119@gmail.com>
* dynamicPartition fix
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of some logging
Signed-off-by: raver119 <raver119@gmail.com>
* one more test for dynamic_stitch
Signed-off-by: raver119 <raver119@gmail.com>
* one more test for dynamic_stitch
Signed-off-by: raver119 <raver119@gmail.com>
* empty check for stitch
Signed-off-by: raver119 <raver119@gmail.com>
* minor print changes
Signed-off-by: raver119 <raver119@gmail.com>
* remove some unneeded java-side output shape calculations
Signed-off-by: Ryan Nett <rnett@skymind.io>
* delete Broadcast
Signed-off-by: Ryan Nett <rnett@skymind.io>
* delete Linear and Module
Signed-off-by: Ryan Nett <rnett@skymind.io>
* update Identity, HashCode, and NoOp
Signed-off-by: Ryan Nett <rnett@skymind.io>
* removed Cast java-side shape function, added tests and SDVariable.isEmpty
Signed-off-by: Ryan Nett <rnett@skymind.io>
* ignoring test w/ issues on master
Signed-off-by: Ryan Nett <rnett@skymind.io>
* noop needs more work; fixed BaseArithmeticBackprop and BaseDynamicTransform ops
merged in master for the c++ build fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix EqualTo
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix other cond ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* "fake" ops calculateOutputShape() throws exception
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use c++ shape calc for Linspace
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix exception message, move most to BaseCompatOp
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove SDVariable.isEmpty
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove commented out code
Signed-off-by: Ryan Nett <rnett@skymind.io>
* one noop test
Signed-off-by: raver119 <raver119@gmail.com>
* skip input validation for no-input ops
Signed-off-by: raver119 <raver119@gmail.com>
* - one more noop empty test
- one more validation before sync
Signed-off-by: raver119 <raver119@gmail.com>
* typo
Signed-off-by: raver119 <raver119@gmail.com>
* one more validation fix
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA empty reductions java side
Signed-off-by: raver119 <raver119@gmail.com>
* one svd test
Signed-off-by: raver119 <raver119@gmail.com>
* Corrected segment_mean helpers and added another test.
* Refactored segment_mean kernels to avoid a race condition.
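segment_mean averages the input values that share a segment id; the race the refactor avoids comes from many threads accumulating into the same segment. A sequential plain-Java sketch of the semantics only:

```java
public class SegmentMeanDemo {
    // Mean of data[i] grouped by segmentIds[i]; ids are assumed non-negative.
    static double[] segmentMean(double[] data, int[] segmentIds, int numSegments) {
        double[] sums = new double[numSegments];
        int[] counts = new int[numSegments];
        for (int i = 0; i < data.length; i++) {
            sums[segmentIds[i]] += data[i];      // on GPU this accumulation is the racy part
            counts[segmentIds[i]]++;
        }
        double[] means = new double[numSegments];
        for (int s = 0; s < numSegments; s++)
            means[s] = counts[s] == 0 ? 0 : sums[s] / counts[s];
        return means;
    }

    public static void main(String[] args) {
        double[] data = {1, 2, 3, 4, 5};
        int[] ids = {0, 0, 1, 1, 1};
        System.out.println(java.util.Arrays.toString(segmentMean(data, ids, 2))); // [1.5, 4.0]
    }
}
```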
* CUDA empty reduction
Signed-off-by: raver119 <raver119@gmail.com>
* - listdiff synchronization fix for CUDA
- listdiff test
Signed-off-by: raver119 <raver119@gmail.com>
* - IndexReduce ops now allow INDEXING_TYPES output
- topK op accepts only INDEXING_TYPES as output
Signed-off-by: raver119 <raver119@gmail.com>
* one test for maxpool2d_bp
Signed-off-by: raver119 <raver119@gmail.com>
* - maxpool2d_bp cuda fix for NaNs
- streamSync after each custom op execution
Signed-off-by: raver119 <raver119@gmail.com>
* one test for size
Signed-off-by: raver119 <raver119@gmail.com>
* - few tests for size op
- size/rank/size_at ops now use p instead of assign
Signed-off-by: raver119 <raver119@gmail.com>
* throw exception if op execution failed
Signed-off-by: raver119 <raver119@gmail.com>
* expected for test
Signed-off-by: raver119 <raver119@gmail.com>
* one more ismax test
Signed-off-by: raver119 <raver119@gmail.com>
* ismax view fix
Signed-off-by: raver119 <raver119@gmail.com>
* Nd4j pad update
Signed-off-by: Ryan Nett <rnett@skymind.io>
* switched from guava Immutables to Collections.unmodifiableList/Map
Signed-off-by: Ryan Nett <rnett@skymind.io>
* javadoc
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use new pad
Signed-off-by: Ryan Nett <rnett@skymind.io>
* conv tests use OpValidation
Signed-off-by: Ryan Nett <rnett@skymind.io>
* deconv3d overrides
Signed-off-by: Ryan Nett <rnett@skymind.io>
* test fix for the new pad method
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more test fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more test fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* rename SameDiff function methods to op (except for the actual SameDiff function ones)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more pad overloads, test fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* test updates
Signed-off-by: Ryan Nett <rnett@skymind.io>
* conv1d test
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove Conv1D tf import (there isn't a TF conv1d op)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove numThreads from Nd4j
Signed-off-by: Ryan Nett <rnett@skymind.io>
* replace Old ops with their newer versions, deprecate ones that haven't already been deprecated
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove use of setNumThreads
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix for Reverse and ATan2
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix test for wrong equals type
Signed-off-by: Ryan Nett <rnett@skymind.io>
* well it works now
Signed-off-by: Ryan Nett <rnett@skymind.io>
* better javadocs
Signed-off-by: Ryan Nett <rnett@skymind.io>
* NonNulls
Signed-off-by: Ryan Nett <rnett@skymind.io>
* better array literal
Signed-off-by: Ryan Nett <rnett@skymind.io>
* re-add tf import stuff (will remove later)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* conv1d config load fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* partial config usage changes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove Old op classes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* config property fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* removed one too many ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* refactoring
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
* fix: make test public.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* make test public.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* fixes read refactoring.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* First pass on SameDiff op exec debug listener
Signed-off-by: Alex Black <blacka101@gmail.com>
* #7555 DL4J helpers - don't fall back on builtin for op profiler exceptions
Signed-off-by: Alex Black <blacka101@gmail.com>
* Exec debugging listener + fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix import counts for TF ops in OpValidationSuite
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix bad DL4J test configuration
Signed-off-by: Alex Black <blacka101@gmail.com>
* Exec debugging listener polish
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Another fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* wip
* update interface, add null implementations.
* Breaking one test in a weird way.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* createUninitializedDetached refactored.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove create method with unused parameter.
* removed more unused methods.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* removing more unused code.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* last removal of unused code.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Conv Config validation & tests
Signed-off-by: Ryan Nett <rnett@skymind.io>
* stackOutputs utility method
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use constructor for validation, support negative kernel sizes (inferred from weights)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* better output methods
Signed-off-by: Ryan Nett <rnett@skymind.io>
* move output to be with fit and evaluate
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove SDVariable inplace methods
* import methods
* npe fix in OpVal
* removed SameDiff inplace ops from tests
* Naming updates, moved to centralized methods in SameDiff, should use op_#:# for everything
* quick fixes
* javadoc
* SDVariable eval with placeholders
* use regex match
* better matching
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* Added gradcheck test for dynamic_partition_bp op.
* - implementation of dilation op (cpu and cuda)
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed broadcast_dynamic_shape 1D case and tests.
* Fixed usage of default integer arguments.
* Fixed dynamic_partition_bp op and tests.
* Eliminated test with grad check for dynamic_partition_bp op.
* start working on cuda svd - porting available corresponding api from cuSOLVER library
Signed-off-by: Yurii <yurii@skymind.io>
* provide prelu_bp
Signed-off-by: Yurii <yurii@skymind.io>
* - provide gruCell_bp (old version ??)
Signed-off-by: Yurii <yurii@skymind.io>
* - polishing cumsum_bp and cumprod_bp tests
Signed-off-by: Yurii <yurii@skymind.io>
* provide sparseSoftmaxCrossEntropyWithLogits and sparseSoftmaxCrossEntropyWithLogits_grad
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed atomicMul with float input/output
* implementation of cuda kernel for triu_bp operation
Signed-off-by: Yurii <yurii@skymind.io>
* Refactored lup helper to add parallel computing.
* cusolver libraries
Signed-off-by: raver119 <raver119@gmail.com>
* uncomment cuSolver APIs in svd.cu
Signed-off-by: Yurii <yurii@skymind.io>
* cusolver var
Signed-off-by: raver119 <raver119@gmail.com>
* - further work on cuSolver svd
Signed-off-by: Yurii <yurii@skymind.io>
* Implement usage of cuda solver for LUP decomposition.
* - correct names in lup functions
Signed-off-by: Yurii <yurii@skymind.io>
* correct svdQR cuda
Signed-off-by: Yurii <yurii@skymind.io>
* - provide transpositions of input matrices in case of c order in svdCudaQR
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed implementation issues with LUP using cuda solver.
* Implementation of matrix_determinant helper with cuda kernels. Working revision.
* Implemented log_matrix_determinant helper with cuda kernels.
* - implementation of batched cuda svd
Signed-off-by: Yurii <yurii@skymind.io>
* Refactored cholesky helper and implementation of cuda solver cholesky batch.
* - implementation of cuda kernel for tile bp
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of cholesky and logdet with cuda kernels.
* - implementation of cuda kernel for sru_bidirectional
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed cholesky helper.
* Cholesky op helper implementation. Working double-based cublas implementation.
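Cholesky factors a symmetric positive-definite A as L·Lᵀ with L lower-triangular; once L is known, log|det A| = 2·Σ log L[i][i], which is what the logdet path relies on. A plain-Java reference of the textbook algorithm (not the cuBLAS-based helper):

```java
public class CholeskyDemo {
    // Returns lower-triangular L with A = L * L^T, for symmetric positive-definite A.
    static double[][] cholesky(double[][] a) {
        int n = a.length;
        double[][] l = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j <= i; j++) {
                double s = a[i][j];
                for (int k = 0; k < j; k++) s -= l[i][k] * l[j][k];
                l[i][j] = (i == j) ? Math.sqrt(s) : s / l[j][j];
            }
        }
        return l;
    }

    public static void main(String[] args) {
        double[][] a = {{4, 2}, {2, 3}};
        double[][] l = cholesky(a);             // [[2, 0], [1, sqrt(2)]]
        double logDet = 0;
        for (int i = 0; i < l.length; i++) logDet += 2 * Math.log(l[i][i]);
        System.out.println(java.util.Arrays.deepToString(l));
        System.out.println(logDet);             // log(det A) = log(8) ≈ 2.079
    }
}
```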
* bad import excluded
Signed-off-by: raver119 <raver119@gmail.com>
* Finished with cuda implementation of cholesky helper and tests.
* - implementation of cuda kernel for sru_bidirectional_backprop operation
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of matrix_inverse op helper with cuda kernels. The first revision.
* - start working on gruCell_bp
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of matrix_inverse helper.
* - further work on new gruCell_bp
Signed-off-by: Yurii <yurii@skymind.io>
* cuBLAS related fixes
Signed-off-by: raver119 <raver119@gmail.com>
* calculateOutputShapes() now passes device buffers as well
Signed-off-by: raver119 <raver119@gmail.com>
* special concat/average/accumulate ops now init host pointers
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* additional CudaDataBufferFactory signatures for certain data types
Signed-off-by: raver119 <raver119@gmail.com>
* cuSolver host buffer
Signed-off-by: raver119 <raver119@gmail.com>
* buffer to buffer memcpy host ptr allocation
Signed-off-by: raver119 <raver119@gmail.com>