Commit Graph

263 Commits (c505a11ed613147b2f2757402e2d6677d68a088d)

Author SHA1 Message Date
raver119 1dfac9a736
DataBuffer.write() tweak (#221)
* special workaround methods for DataBuffer.write

Signed-off-by: raver119 <raver119@gmail.com>

* one test removed

Signed-off-by: raver119 <raver119@gmail.com>

* more of unsynced

Signed-off-by: raver119 <raver119@gmail.com>

* missing asLong for BaseCudaDataBuffer

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-07 18:16:11 +03:00
Alex Black 569a46f87d
Fixes (#213)
* Increase timeouts for 2 tests occasionally failing on CI

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Explicitly set character encoding via argline for maven surefire tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* CUDA gradient check timeout fix + simple rnn masking fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2020-02-05 17:07:36 +11:00
raver119 5d28e6143d
OpContext handling (#214)
* nano tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* OpContext tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* OpContext deallocators

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of few mkldnn safety checks

Signed-off-by: raver119 <raver119@gmail.com>

* databuffer setSpecial fix

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-05 07:27:24 +03:00
shugeo 41ff907bc6
Shugeo solve linear (#191)
* linear equations systems solve op. Initial commit.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed compiling issues.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Linear equations systems solve. The next stage commit.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added test for linear equations systems solve operation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added an additional test and fixed lower matrix retrieval.

* Implementation of solve for systems of linear equations.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored permutation generation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added restore for permutations batched with cuda helper for solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Finished cuda implementation for solve op helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored cpu helpers for solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fix gtest output on Windows

* Fixed issue with permutation matrix for cuda implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed issue with permutation matrix for cpu implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Eliminated wasted comments.

Signed-off-by: shugeo <sgazeos@gmail.com>

* LinearSolve added

* Mapping added

* Javadoc added

* Refactored implementation of triangular_solve helpers and tests for solve matrix equations generally.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added a test for solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Solve test added

* Fix for TF import

Co-authored-by: Serhii Shepel <9946053+sshepel@users.noreply.github.com>
Co-authored-by: raver119 <raver119@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-02-04 08:59:11 +03:00
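A conceptual sketch of what the solve op introduced in #191 above computes (x for A x = b), using plain-Java Gaussian elimination with partial pivoting. Illustrative only and not the libnd4j/cuda helper code; class and method names are made up for the example.

```java
// Illustrative only: what a "solve" op computes conceptually (A x = b),
// via Gaussian elimination with partial pivoting. Not the libnd4j implementation.
public final class LinearSolveSketch {
    public static double[] solve(double[][] a, double[] b) {
        int n = b.length;
        double[][] m = new double[n][];
        for (int i = 0; i < n; i++) m[i] = a[i].clone();
        double[] x = b.clone();
        for (int col = 0; col < n; col++) {
            // partial pivoting: pick the row with the largest magnitude in this column
            int pivot = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(m[r][col]) > Math.abs(m[pivot][col])) pivot = r;
            double[] tmpRow = m[col]; m[col] = m[pivot]; m[pivot] = tmpRow;
            double tmpVal = x[col]; x[col] = x[pivot]; x[pivot] = tmpVal;
            // eliminate entries below the pivot
            for (int r = col + 1; r < n; r++) {
                double f = m[r][col] / m[col][col];
                for (int c = col; c < n; c++) m[r][c] -= f * m[col][c];
                x[r] -= f * x[col];
            }
        }
        // back substitution
        for (int i = n - 1; i >= 0; i--) {
            for (int c = i + 1; c < n; c++) x[i] -= m[i][c] * x[c];
            x[i] /= m[i][i];
        }
        return x;
    }

    public static void main(String[] args) {
        // 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
        double[] r = solve(new double[][]{{2, 1}, {1, 3}}, new double[]{5, 10});
        System.out.println(r[0] + ", " + r[1]);
    }
}
```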
Alex Black ddf70ac450
Avoid double printing of start/stop test in a few cases (#210)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2020-02-03 22:18:01 +11:00
raver119 9bb5798cac
Null arrays fix (#208)
* don't skip null arrays

Signed-off-by: raver119 <raver119@gmail.com>

* one test tweak

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-02 23:14:00 +03:00
raver119 81efa5c3b6
[WIP] one small fix (#207)
* one small fix

Signed-off-by: raver119 <raver119@gmail.com>

* assert added

Signed-off-by: raver119 <raver119@gmail.com>
2020-02-02 19:17:26 +03:00
Alex Black 0756e3fe70
Small fixes. (#206)
* Logging format tweaks for file logging

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Min abs error tweak for Util layer gradient checks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8648 Fix SameDiff NPE instead of error for missing placeholders

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Test runtime reduction

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2020-02-01 18:19:36 +11:00
raver119 1ab86d1306
Range op data type (#204)
* - range op now accepts dargs
- dargs now can be in signature

Signed-off-by: raver119 <raver119@gmail.com>

* range dtype java side

Signed-off-by: raver119 <raver119@gmail.com>

* linspace fix

Signed-off-by: raver119 <raver119@gmail.com>

* lin_space fix for scalar outputs

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-31 10:45:40 +03:00
raver119 5d98cfcf47
Configurable DataType for ops (#201)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* - one more test for OneHot with dtype
- one more signature in Nd4j

Signed-off-by: raver119 <raver119@gmail.com>

* ones_as/zeros_as now accept dtype

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* - more updates for configurable data types
- ones_as/zeros_as java side + tests

Signed-off-by: raver119 <raver119@gmail.com>

* few c++ tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* few more changes around DArgs

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-30 18:46:12 +03:00
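A minimal sketch of the configurable-data-type idea from #201 above: ops such as ones_as/zeros_as keep the input's shape but let a d-arg pick the output element type. All names here (DType, onesAs) are hypothetical and only illustrate the concept; they are not the ND4J or libnd4j API.

```java
// Conceptual sketch of "ones_as" with a configurable output data type:
// the result copies the input's shape, but its element type is chosen by a d-arg.
// Hypothetical names; not the ND4J/libnd4j API.
import java.util.Arrays;

enum DType { FLOAT, DOUBLE, INT }

final class FilledLike {
    // same element count as 'shape', filled with ones, stored at the requested type's width
    static Object onesAs(long[] shape, DType outType) {
        long length = Arrays.stream(shape).reduce(1, (a, b) -> a * b);
        switch (outType) {
            case FLOAT:  { float[]  out = new float[(int) length];  Arrays.fill(out, 1.0f); return out; }
            case DOUBLE: { double[] out = new double[(int) length]; Arrays.fill(out, 1.0);  return out; }
            default:     { int[]    out = new int[(int) length];    Arrays.fill(out, 1);    return out; }
        }
    }

    public static void main(String[] args) {
        float[] f = (float[]) onesAs(new long[]{2, 3}, DType.FLOAT); // 6 ones as float32
        System.out.println(f.length + " " + f[0]);
    }
}
```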
raver119 9f719488b9
CUDA sync tweaks (#194)
* ThreadLocal cache for CudaContext

Signed-off-by: raver119 <raver119@gmail.com>

* temp commit

Signed-off-by: raver119 <raver119@gmail.com>

* remove unwanted synchronization

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-28 10:55:06 +03:00
raver119 7ef0ef907e
Packages fix (#193)
* packages fix

Signed-off-by: raver119 <raver119@gmail.com>

* few imports fixed

Signed-off-by: raver119 <raver119@gmail.com>

* few imports fixed

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-27 23:04:21 +03:00
Alex Black 458d141d8e
Fix SDLoss null weights array issue (#185)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2020-01-25 20:13:23 +11:00
Alexander Stoyakin 4db28a9300 Cleanup of multiple projects (#175)
* Cleanup modules

* Moving subprojects to nd4j-api

* Project cleanup

* Dropped AWS sub-project

* dl4j-util moved to core

* dl4j-perf moved to core

* Tests coverage

* Revert "Moving subprojects to nd4j-api"

This reverts commit bc6eb573c6b60c407ade47172c5d204725077e6b.

* Moved nd4j-buffer and nd4j-context to nd4j-api

* Rolled back change

* Revert "Project cleanup"

This reverts commit 64ac7f369b2d968f7be437718034f093fc886ffc.

* Datavec cleaned up

* Revert "Moved nd4j-buffer and nd4j-context to nd4j-api"

This reverts commit 75f4e8da80d2551e44e1251dd6c5923289fff8e1.

# Conflicts:
#	nd4j/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/autodiff/opvalidation/ReductionBpOpValidation.java

* Resolve conflict

* Compilation fixed.

* nd4j-context and nd4j-buffer moved to nd4j-api

* Fixed TF mapping for mmul

* Fix for dl4j-cuda tests

Signed-off-by: Alex Black <blacka101@gmail.com>

* Move last few tests from deeplearning4j-nn to -core

Signed-off-by: Alex Black <blacka101@gmail.com>

* Remove incorrect TF import mapping for TensorMmul op

Signed-off-by: Alex Black <blacka101@gmail.com>

* Cleaned TF mapping

* Fix path for test results on windows

* Remove old dependency

Signed-off-by: Alex Black <blacka101@gmail.com>

* One more attempt to fix path for test results on windows

* fixup! One more attempt to fix path for test results on windows

* fixup! One more attempt to fix path for test results on windows

Co-authored-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Serhii Shepel <9946053+sshepel@users.noreply.github.com>
Co-authored-by: raver119 <raver119@gmail.com>
2020-01-24 22:35:00 +03:00
raver119 5d69069177
[WIP] Memory limits (#167)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* one more initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* additional initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* subsequent initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit testing

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit per device

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit per group

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit for cuda

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit for cuda + few missed lines

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit for cuda + missed includes

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit for cuda + one more missed include

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit shouldn't count host mem as dev0 in cuda

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit that tracks HOST group limits for CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit with some Environment changes

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit with more Environment changes

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit with maxMasterThreads fix

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit with maxMasterThreads fix

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit without maxMasterThreads exception

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit without Nd4jULong in Environment

Signed-off-by: raver119 <raver119@gmail.com>

* add sleep and more iterations for OOM cases

Signed-off-by: raver119 <raver119@gmail.com>

* limits propagation from java side

Signed-off-by: raver119 <raver119@gmail.com>

* - consume ErrorCode every time
- one test for memory limits

Signed-off-by: raver119 <raver119@gmail.com>

* unordered_map

Signed-off-by: raver119 <raver119@gmail.com>

* unordered_map

Signed-off-by: raver119 <raver119@gmail.com>

* unordered_map

Signed-off-by: raver119 <raver119@gmail.com>

* RSub op mapping fixed

Signed-off-by: raver119 <raver119@gmail.com>

* typo fixed

Signed-off-by: raver119 <raver119@gmail.com>

* one bad test fixed

Signed-off-by: raver119 <raver119@gmail.com>
2020-01-24 10:11:09 +03:00
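A minimal sketch of the per-device memory-limit idea from #167 above: each allocation is counted against a configurable cap and rejected once the cap would be exceeded. The class below is hypothetical; the real limits live in libnd4j's Environment and are propagated from the Java side.

```java
// Minimal sketch of a per-device memory limit: allocations are tracked and an
// allocation that would exceed the cap fails. Hypothetical simplification.
import java.util.concurrent.atomic.AtomicLong;

final class DeviceMemoryLimiter {
    private final long maxBytes;
    private final AtomicLong used = new AtomicLong();

    DeviceMemoryLimiter(long maxBytes) { this.maxBytes = maxBytes; }

    /** Reserve 'bytes' for this device, or throw if the device limit would be exceeded. */
    void allocate(long bytes) {
        long newUsed = used.addAndGet(bytes);
        if (newUsed > maxBytes) {
            used.addAndGet(-bytes);                 // roll back the reservation
            throw new IllegalStateException("Device memory limit exceeded: "
                    + newUsed + " > " + maxBytes);
        }
    }

    void release(long bytes) { used.addAndGet(-bytes); }

    public static void main(String[] args) {
        DeviceMemoryLimiter dev0 = new DeviceMemoryLimiter(1024);
        dev0.allocate(512);
        dev0.allocate(256);
        dev0.release(256);
        System.out.println("ok");   // a further allocate(1024) here would throw
    }
}
```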
Alex Black a25bb6a11c
Unit/integration test split + test speedup (#166)
* Add maven profile + base tests methods for integration tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Switch from system property to environment variable; seems more reliable in intellij

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add nd4j-common-tests module, and common base test; cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Ensure all ND4J tests extend BaseND4JTest

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Test spam reduction, import fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add test logging to nd4j-aeron

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix unintended change

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Reduce sprint test log spam

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More test spam cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Significantly speed up TSNE tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* W2V iterator test unit/integration split

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More NLP test speedups

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Avoid debug/verbose mode leaking between tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* test tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Arbiter extends base DL4J test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Arbiter test speedup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* nlp-uima test speedup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More test speedups

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix ND4J base test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Few small ND4J test speed improvements

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J tests speedup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More tweaks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Even more test speedups

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More tweaks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Various test fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* More test fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Add ability to specify number of threads for C++ ops in BaseDL4JTest and BaseND4JTest

Signed-off-by: Alex Black <blacka101@gmail.com>

* nd4j-aeron test profile fix for CUDA

Signed-off-by: Alex Black <blacka101@gmail.com>
2020-01-22 22:27:01 +11:00
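A sketch of the unit/integration split described in #166 above: integration tests extend a base class that skips them unless a flag is present, switched here via an environment variable as the commit notes. The variable name and class name are hypothetical; the real build wires the switch through a Maven profile and the shared base test classes.

```java
// Sketch: skip integration tests unless an environment variable enables them.
// Variable and class names are hypothetical.
import org.junit.Assume;
import org.junit.Before;
import org.junit.Test;

public class BaseIntegrationTestSketch {

    @Before
    public void skipUnlessIntegrationEnabled() {
        // Assume.assumeTrue skips (rather than fails) the test when the condition is false
        Assume.assumeTrue("Integration tests disabled",
                "true".equalsIgnoreCase(System.getenv("RUN_INTEGRATION_TESTS")));
    }

    @Test
    public void slowEndToEndTest() {
        // only runs when RUN_INTEGRATION_TESTS=true is exported (e.g. by a CI profile)
    }
}
```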
shugeo 815a2908af Shugeo solve triangular (#173)
* Added implementation of the triangular_solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed compilation issues.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added verification of input data and helper facilities for triangular_solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added cpu implementation for triangular_solve helpers.

* Added tests and implementation for upper triangular equations.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added a pair of cases to tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added multithreading with cpu helpers for triangular_solve op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added cuda implementation of triangular_solve op helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Finished cuda implementation of triangular_solve helpers and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed copyright marks.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected grammar errors with doc and error messages.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored matrix processing in the triangular_solve cuda helper implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added triangular_solve wrapper

* Fixed mapping

* Added processing for adjoint with cpu helpers of triangular_solve op implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added implementation for adjoint routine with cuda platform.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added multithreading with adjoint routine for cpu platform.

Signed-off-by: shugeo <sgazeos@gmail.com>

Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-01-22 10:48:03 +03:00
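The core recurrence behind the triangular_solve op added in #173 above is simple forward substitution for a lower-triangular system L x = b (the op additionally handles the upper, adjoint, and batched cases). A plain-Java illustration, not the libnd4j helper:

```java
// Forward substitution for L x = b with L lower-triangular.
public final class TriangularSolveSketch {
    static double[] solveLower(double[][] L, double[] b) {
        int n = b.length;
        double[] x = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = b[i];
            for (int j = 0; j < i; j++)
                sum -= L[i][j] * x[j];   // subtract already-solved components
            x[i] = sum / L[i][i];
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] L = {{2, 0}, {3, 4}};
        double[] x = solveLower(L, new double[]{4, 14});  // -> x = [2, 2]
        System.out.println(x[0] + ", " + x[1]);
    }
}
```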
shugeo e50b285c2c Shugeo resize area (#162)
* Added implementation for resize_area op. Initial commit.

* Added implementation of resize_area op. Initial revision.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected resizeArea functor call.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Implementation of resize_area. Cpu platform helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Implementation for resize_area helpers. The first part revision.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added a set of tests for resize_area op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Cuda implementation for resize_area. Initial approach.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adding multithreading for resize_area algorithm.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Cuda implementation of resize_area helpers. Shared memory approach.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored resizeAreaKernel with cuda implementation.

* Eliminated compilation errors.

* ResizeArea helpers for cuda platform. The first working revision.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added test for batched resize_area op testing.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Implementation of resize_area for cuda platform and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed multithreading with resize_area op helper.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected copyright marks with sources.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected copyright mark for resize_area op implementation.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected copyright mark for parity ops header.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected typos in strings and related text for the image resize ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored resize_area helpers and multithreading.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added ResizeArea wrapper

* Added test with align_corners and fixed shape processing with only int args given for output size.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added test

* TF mapping for ResizeArea

* Fixed implementation issues with resize_area op for both platforms.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored image resizer struct to use flexible types for ints and floats.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Improved multithreading with resizeAreaKernel launch.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Use asynchronous memory copying for cuda platform image resize allocations.

Signed-off-by: shugeo <sgazeos@gmail.com>

Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-01-22 10:46:33 +03:00
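A simplified sketch of what the resize_area op from #162 above does: each output pixel is the average of the source pixels its cell covers. Shown only for integer downscale factors on a single-channel image; the real op also handles fractional scales, partial pixel overlap, and align_corners.

```java
// Area (box-average) downscale by an integer factor, single channel. Illustrative only.
public final class ResizeAreaSketch {
    static double[][] downscaleByArea(double[][] src, int factor) {
        int outH = src.length / factor, outW = src[0].length / factor;
        double[][] out = new double[outH][outW];
        for (int y = 0; y < outH; y++) {
            for (int x = 0; x < outW; x++) {
                double sum = 0;
                for (int dy = 0; dy < factor; dy++)
                    for (int dx = 0; dx < factor; dx++)
                        sum += src[y * factor + dy][x * factor + dx];
                out[y][x] = sum / (factor * factor);  // area average of the covered block
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] img = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}, {13, 14, 15, 16}};
        double[][] small = downscaleByArea(img, 2);   // 4x4 -> 2x2: [[3.5, 5.5], [11.5, 13.5]]
        System.out.println(small[0][0] + " " + small[1][1]);
    }
}
```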
Oleh 8fc0e63ce7 Oleh powderev (#171)
* Libnd4j: Add broadcastable elementwise power derivative #7461 first step of Pow_bp operation implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 some corrections of calculation steps

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 some bug fixes; the PowDerivative op made broadcastable; added raw tests for the op; still needs refactoring to use broadcast ops

* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed several bugs add broadcast support and tests, need to fix scalar+array and array+scalar

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed bugs for scalar inputs, fixed multinomial tests, added tests

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed bugs for different shape support, tests updated

* Libnd4j: Add broadcastable elementwise power derivative #7461 applied all possible variants via tiled arrays, add support of broadcast for Pow and PowDerivative ops, covered by tests, before review have to be replaced tiled implementation by applyTrueBroadcast

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 replaced tile by broadcast implementation, fixed issue with negative x input, corrected tests, need additional testing

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 added and corrected test cases, corrected implementation need review

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 code clean up

* Libnd4j: Add broadcastable elementwise power derivative #7461 code clean up, removed some tests, add tests with scalar

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 code improvement and clean up, split tests

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative #7461 some code clean up

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* Libnd4j: Add broadcastable elementwise power derivative replace __isnanf by internal realization

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* pow_bp wrapper

* Fixed PowBp wrapper

* Tests added

* Test fixed

* Fix return type

* Disable powBp usage

* Pow backprop changed

Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-01-20 12:59:12 +03:00
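The math behind the pow_bp backprop in #171 above: for z = x^y, the incoming gradient dL/dz is routed as dL/dx = dL/dz * y * x^(y-1) and dL/dy = dL/dz * x^y * ln(x). An elementwise, same-shape sketch in plain Java; the real op also broadcasts and handles scalar/array mixes.

```java
// Elementwise power backprop: returns {dL/dx, dL/dy}. Illustrative only.
public final class PowBpSketch {
    static double[][] powBp(double[] x, double[] y, double[] dLdz) {
        double[] dLdx = new double[x.length];
        double[] dLdy = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            dLdx[i] = dLdz[i] * y[i] * Math.pow(x[i], y[i] - 1);
            dLdy[i] = dLdz[i] * Math.pow(x[i], y[i]) * Math.log(x[i]); // requires x > 0
        }
        return new double[][]{dLdx, dLdy};
    }

    public static void main(String[] args) {
        double[][] g = powBp(new double[]{2}, new double[]{3}, new double[]{1});
        System.out.println(g[0][0] + ", " + g[1][0]);  // 12.0 and 8*ln(2) ≈ 5.545
    }
}
```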
shugeo 6943a5f57a Shugeo lgamma (#170)
* lgamma op. Initial version.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored lgamma op and test.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Lgamma wrapper

* Added TF mapping

Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-01-20 12:29:36 +03:00
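The lgamma op from #170 above returns log(Gamma(x)). One common way to approximate it is the Lanczos series (g = 7, 9 coefficients), sketched below for real x; this is a generic approximation, and the libnd4j helper may use a different method internally.

```java
// Lanczos approximation of log(Gamma(x)). Illustrative, not the libnd4j code.
public final class LgammaSketch {
    private static final double[] C = {
            0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7};

    static double lgamma(double x) {
        if (x < 0.5)   // reflection formula: Gamma(x) * Gamma(1-x) = pi / sin(pi x)
            return Math.log(Math.PI / Math.abs(Math.sin(Math.PI * x))) - lgamma(1.0 - x);
        x -= 1.0;
        double a = C[0];
        double t = x + 7.5;                       // x + g + 0.5
        for (int i = 1; i < C.length; i++)
            a += C[i] / (x + i);
        return 0.5 * Math.log(2 * Math.PI) + (x + 0.5) * Math.log(t) - t + Math.log(a);
    }

    public static void main(String[] args) {
        System.out.println(lgamma(5.0));          // ln(4!) = ln(24) ≈ 3.1781
        System.out.println(lgamma(0.5));          // ln(sqrt(pi)) ≈ 0.5724
    }
}
```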
Alex Black c84307a6fe
Small SameDiff execution fix (#168)
* SameDiff exec: Fix for switch op when predicate is constant, and op is inside loop

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Update ignores for failing zoo models

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2020-01-08 23:57:23 +11:00
raver119 29e8e09db6
String changes (#3)
* initial commit

* additional data types & tensor type

Signed-off-by: raver119 <raver119@gmail.com>

* next step

Signed-off-by: raver119 <raver119@gmail.com>

* missing include

* sparse_to_dense

Signed-off-by: raver119 <raver119@gmail.com>

* few more tests files

Signed-off-by: raver119 <raver119@gmail.com>

* draft

Signed-off-by: raver119 <raver119@gmail.com>

* numeric sparse_to_dense

Signed-off-by: raver119 <raver119@gmail.com>

* comment

Signed-off-by: raver119 <raver119@gmail.com>

* string sparse_to_dense version

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA DataBuffer expand

Signed-off-by: raver119 <raver119@gmail.com>

* few tweaks for CUDA build

Signed-off-by: raver119 <raver119@gmail.com>

* shape fn for string_split

Signed-off-by: raver119 <raver119@gmail.com>

* one more comment

Signed-off-by: raver119 <raver119@gmail.com>

* string_split indices

Signed-off-by: raver119 <raver119@gmail.com>

* next step

Signed-off-by: raver119 <raver119@gmail.com>

* test passes

Signed-off-by: raver119 <raver119@gmail.com>

* few rearrangements for databuffer implementations

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer: move inline methods to common implementations

Signed-off-by: raver119 <raver119@gmail.com>

* add native DataBuffer to Nd4j presets

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer creation

Signed-off-by: raver119 <raver119@gmail.com>

* use DataBuffer for allocation

Signed-off-by: raver119 <raver119@gmail.com>

* cpu databuffer as deallocatable

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer setters for bufers

Signed-off-by: raver119 <raver119@gmail.com>

* couple of wrappers

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffers being passed around

Signed-off-by: raver119 <raver119@gmail.com>

* Bunch of ByteBuffer-related signatures gone

Signed-off-by: raver119 <raver119@gmail.com>

* - few more Nd4j signatures removed
- minor fix for bfloat16

Signed-off-by: raver119 <raver119@gmail.com>

* nullptr pointer is still a pointer, but 0 as address :)

Signed-off-by: raver119 <raver119@gmail.com>

* one special test

Signed-off-by: raver119 <raver119@gmail.com>

* empty string array init

Signed-off-by: raver119 <raver119@gmail.com>

* one more test in cpp

Signed-off-by: raver119 <raver119@gmail.com>

* memcpy instead of databuffer swap

Signed-off-by: raver119 <raver119@gmail.com>

* special InteropDataBuffer for front-end languages

Signed-off-by: raver119 <raver119@gmail.com>

* few tweaks for java

Signed-off-by: raver119 <raver119@gmail.com>

* pointer/indexer actualization

Signed-off-by: raver119 <raver119@gmail.com>

* CustomOp returns list for inputArguments and outputArguments instead of array

Signed-off-by: raver119 <raver119@gmail.com>

* redundant call

Signed-off-by: raver119 <raver119@gmail.com>

* print_variable op

Signed-off-by: raver119 <raver119@gmail.com>

* - view handling (but wrong one)
- print_variable java wrapper

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* - empty arrays handling

Signed-off-by: raver119 <raver119@gmail.com>

* - deserialization works now

Signed-off-by: raver119 <raver119@gmail.com>

* minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* one more fix

Signed-off-by: raver119 <raver119@gmail.com>

* initial cuda commit

Signed-off-by: raver119 <raver119@gmail.com>

* print_variable message validation

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA views

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA special buffer size

Signed-off-by: raver119 <raver119@gmail.com>

* minor update to match master changes

Signed-off-by: raver119 <raver119@gmail.com>

* - consider arrays always actual on device for CUDA
- additional PrintVariable constructor
- CudaUtf8Buffer now allocates host buffer by default

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* - print_variable now allows print from device

Signed-off-by: raver119 <raver119@gmail.com>

* InteropDataBuffer data type fix

Signed-off-by: raver119 <raver119@gmail.com>

* ...

Signed-off-by: raver119 <raver119@gmail.com>

* disable some debug messages

Signed-off-by: raver119 <raver119@gmail.com>

* master pulled in

Signed-off-by: raver119 <raver119@gmail.com>

* couple of new methods for DataBuffer interop

Signed-off-by: raver119 <raver119@gmail.com>

* java side

Signed-off-by: raver119 <raver119@gmail.com>

* offsetted constructor

Signed-off-by: raver119 <raver119@gmail.com>

* new CUDA deallocator

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA backend torn apart

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA backend torn apart 2

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA backend torn apart 3

Signed-off-by: raver119 <raver119@gmail.com>

* - few new tests
- few new methods for DataBuffer management

Signed-off-by: raver119 <raver119@gmail.com>

* few more tests + few more tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* two failing tests

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* two failing tests pass

Signed-off-by: raver119 <raver119@gmail.com>

* now we pass DataBuffer to legacy ops too

Signed-off-by: raver119 <raver119@gmail.com>

* Native DataBuffer for legacy ops, Java side

Signed-off-by: raver119 <raver119@gmail.com>

* CPU java side update

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA java side update

Signed-off-by: raver119 <raver119@gmail.com>

* no more prepare/register action on java side

Signed-off-by: raver119 <raver119@gmail.com>

* NDArray::prepare/register use now accepts vectors

Signed-off-by: raver119 <raver119@gmail.com>

* InteropDataBuffer now has few more convenience methods

Signed-off-by: raver119 <raver119@gmail.com>

* java bindings update

Signed-off-by: raver119 <raver119@gmail.com>

* tick device in NativeOps

Signed-off-by: raver119 <raver119@gmail.com>

* Corrected usage of OpaqueBuffer for tests.

* Corrected usage of OpaqueBuffer for java tests.

* NativeOpsTests fixes.

* print_variable now returns scalar

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* compat_string_split fix for CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* - CUDA execScalar fix
- CUDA lazyAllocateHostPointer now checks java indexer/pointer instead of native pointer

Signed-off-by: raver119 <raver119@gmail.com>

* legacy ops DataBuffer migration prototype

Signed-off-by: raver119 <raver119@gmail.com>

* ignore device shapeinfo coming from java

Signed-off-by: raver119 <raver119@gmail.com>

* minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* minor transformAny fix

Signed-off-by: raver119 <raver119@gmail.com>

* minor tweak for lazy host allocation

Signed-off-by: raver119 <raver119@gmail.com>

* - DataBuffer::memcpy method
- bitcast now uses memcpy

Signed-off-by: raver119 <raver119@gmail.com>

* - IndexReduce CUDA dimension buffer fix

Signed-off-by: raver119 <raver119@gmail.com>

* views for CPU and CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* less spam

Signed-off-by: raver119 <raver119@gmail.com>

* optional memory init

Signed-off-by: raver119 <raver119@gmail.com>

* async memset

Signed-off-by: raver119 <raver119@gmail.com>

* - SummaryStats CUDA fix
- DataBuffer.sameUnderlyingData() impl
- execBroadcast fix

Signed-off-by: raver119 <raver119@gmail.com>

* - reduce3All fix
switch to CUDA 10 temporarily

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA version

Signed-off-by: raver119 <raver119@gmail.com>

* proper memory deallocator registration

Signed-off-by: raver119 <raver119@gmail.com>

* HOST_ONLY workspace allocation

Signed-off-by: raver119 <raver119@gmail.com>

* temp commit

Signed-off-by: raver119 <raver119@gmail.com>

* few conflicts resolved

Signed-off-by: raver119 <raver119@gmail.com>

* few minor fixes

Signed-off-by: raver119 <raver119@gmail.com>

* one more minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* NDArray permute should operate on JVM primitives

Signed-off-by: raver119 <raver119@gmail.com>

* - create InteropDataBuffer for shapes as well
- update pointers after view creation in Java

Signed-off-by: raver119 <raver119@gmail.com>

* - addressPointer temporarily moved to C++

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA: don't account offset twice

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA: DataBuffer pointer constructor updated

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA NDArray.unsafeDuplication() simplified

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA minor workspace-related fixes

Signed-off-by: raver119 <raver119@gmail.com>

* CPU DataBuffer.reallocate()

Signed-off-by: raver119 <raver119@gmail.com>

* print_affinity op

Signed-off-by: raver119 <raver119@gmail.com>

* print_affinity java side

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA more tweaks for data locality

Signed-off-by: raver119 <raver119@gmail.com>

* - compat_string_split tweak
- CudaUtf8Buffer update

Signed-off-by: raver119 <raver119@gmail.com>

* INDArray.close() mechanic restored

Signed-off-by: raver119 <raver119@gmail.com>

* one more test fixed

Signed-off-by: raver119 <raver119@gmail.com>

* - CUDA DataBuffer.reallocate() updated
- cudaMemcpy (synchronous) restored

Signed-off-by: raver119 <raver119@gmail.com>

* one last fix

Signed-off-by: raver119 <raver119@gmail.com>

* bad import removed

Signed-off-by: raver119 <raver119@gmail.com>

* another small fix

Signed-off-by: raver119 <raver119@gmail.com>

* one special test

Signed-off-by: raver119 <raver119@gmail.com>

* fix bad databuffer size

Signed-off-by: raver119 <raver119@gmail.com>

* release primaryBuffer on replace

Signed-off-by: raver119 <raver119@gmail.com>

* higher timeout

Signed-off-by: raver119 <raver119@gmail.com>

* disable timeouts

Signed-off-by: raver119 <raver119@gmail.com>

* dbCreateView now validates offset and length of a view

Signed-off-by: raver119 <raver119@gmail.com>

* additional validation for dbExpand

Signed-off-by: raver119 <raver119@gmail.com>

* restore timeout back again

Signed-off-by: raver119 <raver119@gmail.com>

* smaller distribution for rng test to prevent timeouts

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA DataBuffer::memcpy now copies to device all the time

Signed-off-by: raver119 <raver119@gmail.com>

* OpaqueDataBuffer now contains all required methods for interop

Signed-off-by: raver119 <raver119@gmail.com>

* some javadoc

Signed-off-by: raver119 <raver119@gmail.com>

* GC on failed allocations

Signed-off-by: raver119 <raver119@gmail.com>

* minor memcpy tweak

Signed-off-by: raver119 <raver119@gmail.com>

* one more bitcast test

Signed-off-by: raver119 <raver119@gmail.com>

* - NDArray::deviceId() propagation
- special multi-threaded test for data locality checks

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer additional syncStream

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer additional syncStream

Signed-off-by: raver119 <raver119@gmail.com>

* one ignored test

Signed-off-by: raver119 <raver119@gmail.com>

* skip host alloc for empty arrays

Signed-off-by: raver119 <raver119@gmail.com>

* ByteBuffer support is back

Signed-off-by: raver119 <raver119@gmail.com>

* DataBuffer::memcpy minor fix

Signed-off-by: raver119 <raver119@gmail.com>

* few minor prelu/bp tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* nullify-related fixes

Signed-off-by: raver119 <raver119@gmail.com>

* PReLU fixes (#157)

Signed-off-by: Alex Black <blacka101@gmail.com>

* Build fixed

* Fix tests

* one more ByteBuffer signature restored

Signed-off-by: raver119 <raver119@gmail.com>

* nd4j-jdbc-hsql profiles fix

Signed-off-by: raver119 <raver119@gmail.com>

* nd4j-jdbc-hsql profiles fix

Signed-off-by: raver119 <raver119@gmail.com>

* PReLU weight init fix

Signed-off-by: Alex Black <blacka101@gmail.com>

* Small PReLU fix

Signed-off-by: Alex Black <blacka101@gmail.com>

* - INDArray.migrate() reactivated
- DataBuffer::setDeviceId(...) added
- InteropDataBuffer Z syncToDevice added for views

Signed-off-by: raver119 <raver119@gmail.com>

* missed file

Signed-off-by: raver119 <raver119@gmail.com>

* Small tweak

Signed-off-by: Alex Black <blacka101@gmail.com>

* cuda 10.2

Signed-off-by: raver119 <raver119@gmail.com>

* minor fix

Signed-off-by: raver119 <raver119@gmail.com>

Co-authored-by: shugeo <sgazeos@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-01-04 13:27:50 +03:00
Robert Altena 53d3bd1269 shallow delete of assign from SDBase. (#164)
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2020-01-04 15:26:39 +11:00
Alex Black 29104083cc
Various fixes (#143)
* #8568 ArrayUtil optimization

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #6171 Keras ReLU and ELU support

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Keras softmax layer import

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8549 Webjars dependency management

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for TF import names ':0' suffix issue / NPE

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* BiasAdd: fix default data format for TF import

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Update zoo test ignores

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8509 SameDiff Listener API - provide frame + iteration

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8520 ND4J Environment

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Deconv3d

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Deconv3d fixes + gradient check

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Conv3d fixes + deconv3d DType test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix issue with deconv3d gradient check weight init

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8579 Fix BaseCudaDataBuffer constructor fix for UINT16

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DataType.isNumerical() returns false for BOOL type

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8504 Reduce Spark log spam for tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Clean up DL4J gradient check test spam

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More Gradient check spam reduction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff test spam reduction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fixes for FlatBuffers mapping

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff log spam cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tests should extend BaseNd4jTest

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove debug line in c++ op

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ND4J test spam cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J test spam reduction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More Dl4J and datavec test spam cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for bad conv3d test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Additional test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Embedding layers: don't inherit global default activation function

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Trigger CI

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Consolidate all BaseDL4JTest classes to single class used everywhere; make timeout configurable per class

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Test fixes and timeout increases

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Timeouts and PReLU fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Restore libnd4j build threads arg for CUDA build

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Increase timeouts on a few tests to avoid spurious failures on some CI machines

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More timeout fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More test timeout fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tweak timeout for one more test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Final tweaks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* One more ignore

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2020-01-04 13:45:07 +11:00
Alexander Stoyakin 010744ef9c Lu wrapper and tests fixes (#144)
* Tests fixed

* Lu added

* Test fixed

* Default timeout

* Tests timeouts fixed.

* TF import fix

* Timeouts added

* Timeout fixed.

* Test corrected

* rgb and yiq conversion ops added

* Converter ops added

* Header

* Yuv converters

* API added

* Empty test for matmul

* Explanation

* skip gemm/gemv on empty inputs

Signed-off-by: raver119 <raver119@gmail.com>

* Test added

* Correct test

* one more empty pass-through for mmul

Signed-off-by: raver119 <raver119@gmail.com>

* Cleanup

* Test added

* Test fixed

* Added missing mapping

* Added missing mapping

Co-authored-by: raver119 <raver119@gmail.com>
2019-12-30 15:06:12 +03:00
Alex Black ce02b6fae7
Small fixes (#140)
* Allow scalar op result array auto allocation

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Don't swallow underlying exception for calculateOutputShape execution failures

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Ignore for known keras failure

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-12-21 17:00:46 +11:00
Alexander Stoyakin 6d8a063c9b nd4j-tests cleanup (#137)
* Fixed tests

* Invalid test removed
2019-12-20 16:38:33 +03:00
Alex Black 3d8f6d50a1
SameDiff profiler / tracing and profile analysis/comparison (#133)
* Profiler

Signed-off-by: Alex Black <blacka101@gmail.com>

* Next steps, polishing, and loading SD/TF format JSON

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Profile comparison method

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Make profiling result writing async to reduce main thread overhead

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Profiling polishing

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Profile analyzer fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Polish

Signed-off-by: Alex Black <blacka101@gmail.com>

* Cleanup

Signed-off-by: Alex Black <blacka101@gmail.com>

* Small formatting improvement

Signed-off-by: Alex Black <blacka101@gmail.com>

* Formatting tweak

Signed-off-by: Alex Black <blacka101@gmail.com>

* License headers

Signed-off-by: Alex Black <blacka101@gmail.com>
2019-12-19 23:43:58 +11:00
Alexander Stoyakin f5068f3980 Added missing Java ops wrappers (#122)
* Timeouts added

* Added some ops

* Ops added

* Fixed tests

* Minor fix

* Some fixes

* Digamma added

* Small fixes

* Timeouts added

* Added some ops

* Ops added

* Fixed tests

* Minor fix

* Some fixes

* Digamma added

* Small fixes

* Fused batch norm fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tests switched off.

* Added test for resize_bicubic.

* Eliminated waste in test of bicubic resize.

* Switched off multithreading explicitly.

* HsvToRgb and RgbToHsv added

* Eliminated wasted comments and conformed to proper float constants.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed multithreading with resize_bicubic helper for cpu platform.

Signed-off-by: shugeo <sgazeos@gmail.com>

* ResizeBicubic was fixed.

* Some fixes

* Fix op name

* Validation fixed.

* Clarifications for tests

* Wrappers and small fixes for new ops.
2019-12-19 20:15:48 +11:00
AlexDBlack 0df1b46c8c Merge 2019-12-10 15:08:50 +11:00
raver119 a5f5ac72b1
reduce bool changes (#118)
* reduce bool changes

Signed-off-by: raver119 <raver119@gmail.com>

* reduce bool tweaks

Signed-off-by: raver119 <raver119@gmail.com>
2019-12-09 20:08:59 +03:00
Alexander Stoyakin 927d591421 ResizeBicubic added (#117)
* ResizeBicubic added
Some fixes.

* Test fixed

* Narrowed argument type changed to boolean

* Clean up
2019-12-09 18:25:39 +11:00
raver119 b32dd1bf92
[WIP] resize_bicubic types (#116)
* resize_bicubic: allow more dtypes

Signed-off-by: raver119 <raver119@gmail.com>

* resize_bicubic: allow less dtypes

Signed-off-by: raver119 <raver119@gmail.com>

* Refactored resize_bicubic op to fully conform with TF 1.5, plus tests.

* Corrected test to proper data type output.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected double input test to float constant outputs.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Finished with correction of tests for bicubic interpolated resizes expected.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed adjust_contrast ops to allow non-RGB inputs.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored adjust_contrast_v2 to conform with TF one.

Signed-off-by: shugeo <sgazeos@gmail.com>

* AdjustContrast tests activated

* two typos fixed

Signed-off-by: raver119 <raver119@gmail.com>
2019-12-06 18:58:37 +03:00
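For context on the resize_bicubic work in #116 above: bicubic resize builds each output pixel from a 4x4 neighborhood weighted by the cubic convolution (Keys) kernel, with a free parameter a (a = -0.5 gives Catmull-Rom; implementations differ on the exact value). The sketch below only illustrates the weighting the op is based on, not the libnd4j helper.

```java
// Keys cubic convolution kernel used by bicubic resampling. Illustrative only.
public final class BicubicKernelSketch {
    static double cubicWeight(double x, double a) {
        x = Math.abs(x);
        if (x <= 1.0)
            return (a + 2.0) * x * x * x - (a + 3.0) * x * x + 1.0;
        if (x < 2.0)
            return a * x * x * x - 5.0 * a * x * x + 8.0 * a * x - 4.0 * a;
        return 0.0;   // samples more than 2 pixels away contribute nothing
    }

    public static void main(String[] args) {
        double a = -0.5;
        // weights of the 4 nearest samples for a point 0.3 px right of a sample
        double[] w = {cubicWeight(1.3, a), cubicWeight(0.3, a), cubicWeight(0.7, a), cubicWeight(1.7, a)};
        System.out.println(w[0] + w[1] + w[2] + w[3]);   // weights sum to 1 for any offset
    }
}
```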
raver119 972fae60dc
Update master (#8511)
* cleaned up bert iterator tests (#110)

Signed-off-by: eraly <susan.eraly@gmail.com>

* Various pre-release fixes (#111)

* Various fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix default dtypes for MaxPoolWithArgmax

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small pre-release tweak (#112)

* Log UI address on launch as in previous Play-based UI

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Logging level tweak for UI

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* http not https

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* datavec python ensure host (#113)

* ensure host

* one more host ensure

* info->debug

* [WIP] reverse improvements (#115)

* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* reverse draft

Signed-off-by: raver119 <raver119@gmail.com>

* reverse kernel

Signed-off-by: raver119 <raver119@gmail.com>

* reverse kernel

Signed-off-by: raver119 <raver119@gmail.com>

* 2 micro fixes

Signed-off-by: raver119 <raver119@gmail.com>

* Shugeo resize fix5 (#102)

* Refactored resize images ops to use TF-like bool args as input.

* Refactored helpers for cpu implementation of resize_bilinear and resize_nearest_neighbor ops.

* Refactored cuda implementation for image.resize_bilinear and image.resize_nearest_neighbor ops helpers.

* Refactored nearest_neighbor resize op.

* Added a pair of tests for special case of resize_bilinear algorithm.

* Fixed issue with resize_bilinear op.

* Refactored cpu implementation for helpers with resize_nearest_neighbor op.

* Final fixes for resize ops to conform to TF v1.5

* Refactored cuda helpers for resize_nearest_neighbor op.

* Fixed resize_bilinear to accept proper data.

* Fixed issue with non-float input for resize_bilinear op.

* Refactored cuda helper for resize_bilinear to proper process non-float inputs.

* Added tests for resize_bilinear to int inputs.

* Fixed ResizeBilinear wrapper

* Tests fixed

* Fixed float and bool constant to avoid overflow for some kind of compilers.

* Corrected float constants with float data type.

* Added f suffix for float constants.

* Corrected float constant to avoid overflow with initializing lists.

* Corrected float initializing list with float input.

* Corrected bool constant with initializing list.

* Corrected float and bool values with initializing lists.

* Fixed wrong constant.

* Fixed issue with 1x1 input picture for resize.

* ResizeBilinear default values on import fix

Signed-off-by: raver119 <raver119@gmail.com>
2019-12-06 11:10:44 +03:00
Robert Altena e7730eded4 delete unused and refactor. (#8262)
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-12-05 22:25:41 -05:00
shugeo e09a785232 Shugeo resize fix5 (#102)
* Refactored resize images ops to use TF-like bool args as input.

* Refactored helpers for cpu implementation of resize_bilinear and resize_nearest_neighbor ops.

* Refactored cuda implementation for image.resize_bilinear and image.resize_nearest_neighbor ops helpers.

* Refactored nearest_neighbor resize op.

* Added a pair of tests for special case of resize_bilinear algorithm.

* Fixed issue with resize_bilinear op.

* Refactored cpu implementation for helpers with resize_nearest_neighbor op.

* Final fixes for resize ops to conform to TF v1.5

* Refactored cuda helpers for resize_nearest_neighbor op.

* Fixed resize_bilinear to accept proper data.

* Fixed issue with non-float input for resize_bilinear op.

* Refactored cuda helper for resize_bilinear to proper process non-float inputs.

* Added tests for resize_bilinear to int inputs.

* Fixed ResizeBilinear wrapper

* Tests fixed

* Fixed float and bool constant to avoid overflow for some kind of compilers.

* Corrected float constants with float data type.

* Added f suffix for float constants.

* Corrected float constant to avoid overflow with initializing lists.

* Corrected float initializing list with float input.

* Corrected bool constant with initializing list.

* Corrected float and bool values with initializing lists.

* Fixed wrong constant.

* Fixed issue with 1x1 input picture for resize.

* ResizeBilinear default values on import fix

Signed-off-by: raver119 <raver119@gmail.com>
2019-12-05 22:05:33 +03:00
Alex Black 2052ce7026
Various pre-release fixes (#111)
* Various fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix default dtypes for MaxPoolWithArgmax

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-12-05 14:20:03 +11:00
raver119 25b3cd9b80
[WIP] CUDA tests (#95)
* one more CI test

Signed-off-by: raver119 <raver119@gmail.com>

* export additional symbols

Signed-off-by: raver119 <raver119@gmail.com>

* few more tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* one more tweak for linux

Signed-off-by: raver119 <raver119@gmail.com>

* fix dtype in few tests

Signed-off-by: raver119 <raver119@gmail.com>

* missing sync and memset in couple of tests

Signed-off-by: raver119 <raver119@gmail.com>

* copy step for libnd4j cuda

Signed-off-by: raver119 <raver119@gmail.com>

* no-op on empty for adjust hue/contrast/saturation

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA_VERBOSE Off

Signed-off-by: raver119 <raver119@gmail.com>

* BroadcastBool fix + few tests

Signed-off-by: raver119 <raver119@gmail.com>

* trigger jenkins

Signed-off-by: raver119 <raver119@gmail.com>

* trigger jenkins

Signed-off-by: raver119 <raver119@gmail.com>

* - ignore couple of warnings
- remove redundant compiler options

Signed-off-by: raver119 <raver119@gmail.com>
2019-12-02 21:37:21 +03:00
Alexander Stoyakin 5e152c0d9a TF import tests - adding missing operations (#65)
* Add and fix mappings.

* Intermediate

* Added and fixed some mappings

* Added op

* Missing constructors added.

* Added new mappings

* SDImage wrappers and minor tweaks.

* Added missing constructor

* Some corrections

* Cleanup

* Small fixes

* Ops wrappers

* Minor fixes.

* Max Pooling

* MaxPoolWithArgmax

* Some fixes

* Ignores for failures

* Some ops fixed.

* Some fixes

* Missing package added

* Some fixes

* Ignored tests fixed.

* Some fixes

* Merge master

* bitcast fix

Signed-off-by: raver119 <raver119@gmail.com>

* Bitcast fixed
2019-12-02 21:23:06 +11:00
Alex Black 2be47082c9
#8470 TrainingConfig json fix for Evaluation instances (#93)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-30 20:08:30 +11:00
Alex Black 35ab4a72ba
TF import test resources loading precision fixes (#92)
* Fix precision issues when loading from CSV

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-30 18:58:37 +11:00
Alex Black 4fb9fa7748
Add ND4J namespaces (#83)
* Add NDValidation

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add bitwise namespace

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Math namespace op constructor fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Constructor fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add Math namespace

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Update NDBitwise

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add random namespaces

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Update

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* NN namespace

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-30 18:39:32 +11:00
Yurii Shyrma d19eeaec52 Shyrma causal conv1d (#90)
* - add causal mode of padding to convolutions

Signed-off-by: Yurii <iuriish@yahoo.com>

* - add additional tests for causal conv1d

Signed-off-by: Yurii <iuriish@yahoo.com>

* - add causal mode for cuda conv kernels

Signed-off-by: Yurii <iuriish@yahoo.com>

* Java side of Conv1D changes

Signed-off-by: raver119 <raver119@gmail.com>

* Add Conv1DDerivative op

Signed-off-by: Alex Black <blacka101@gmail.com>

* Causal Conv1D gradient checks

Signed-off-by: Alex Black <blacka101@gmail.com>

* Tweaks

Signed-off-by: Alex Black <blacka101@gmail.com>

* - add causal padding mode to conv2d_bp

Signed-off-by: Yurii <iuriish@yahoo.com>

* More thorough causal conv1d tests

Signed-off-by: Alex Black <blacka101@gmail.com>
2019-11-29 14:14:30 +03:00
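The causal padding mode added in #90 above pads the input only on the left, by (kernel - 1) * dilation, so output step t never sees inputs later than t and the sequence length is preserved for stride 1. A plain-Java illustration of the idea, not the libnd4j conv kernel:

```java
// Causal (left-only padded) 1D convolution, stride 1. Illustrative only.
public final class CausalConv1dSketch {
    static double[] causalConv1d(double[] in, double[] kernel, int dilation) {
        int pad = (kernel.length - 1) * dilation;       // left-only ("causal") padding
        double[] out = new double[in.length];
        for (int t = 0; t < in.length; t++) {
            double sum = 0;
            for (int k = 0; k < kernel.length; k++) {
                int idx = t - pad + k * dilation;       // only indices <= t are touched
                if (idx >= 0) sum += kernel[k] * in[idx];
            }
            out[t] = sum;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] y = causalConv1d(new double[]{1, 2, 3, 4}, new double[]{0.5, 0.5}, 1);
        // y[0] = 0.5*1 (left pad counted as 0), y[1] = 0.5*(1+2) = 1.5, y[3] = 3.5
        System.out.println(y[0] + ", " + y[1] + ", " + y[3]);
    }
}
```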
Samuel Audet 5e07998e59 Add support for CUDA 10.2 (#89) 2019-11-29 16:31:03 +11:00
Alex Black abd2017a0a
Add ignore for known issue with non_max_suppression_v2/float16 test (#85)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-27 16:35:05 +11:00
raver119 aa44fd6850 one more BitCast test
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-25 08:52:11 +03:00
Alex Black e910ce75ec
Various Fixes (#75)
* #8431 Cast loss function weights array automatically

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add 'regex verbose mode' printing (ExecDebugListener) for TFGraphTestAllSameDiff

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Class import mapping fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Reshape fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Don't swallow first exception in NativeOpExecutioner.exec(CustomOp)

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-23 20:06:12 +11:00
Alex Black 4a2fedf3e7
DL4J: Add Sparse multi-class cross entropy loss function (#72)
* #8432 Add sparse mcxent loss

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fixes for LossSparseMCXENT

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* add simple debugging listener for SameDiff exec debugging

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Extra gradient check + header polishing

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-22 18:54:31 +11:00
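The "sparse" multi-class cross entropy added in #72 above takes integer class labels instead of one-hot vectors: per example, loss = -log(softmax(logits)[label]). A minimal sketch with the usual max-subtraction for numerical stability; this is the formula only, not the DL4J LossSparseMCXENT code.

```java
// Per-example sparse multi-class cross entropy from raw logits. Illustrative only.
public final class SparseMcxentSketch {
    static double sparseCrossEntropy(double[] logits, int label) {
        double max = Double.NEGATIVE_INFINITY;
        for (double l : logits) max = Math.max(max, l);
        double sumExp = 0;
        for (double l : logits) sumExp += Math.exp(l - max);   // shifted exps stay <= 1
        double logSoftmaxAtLabel = (logits[label] - max) - Math.log(sumExp);
        return -logSoftmaxAtLabel;
    }

    public static void main(String[] args) {
        // three classes, true class index 2
        System.out.println(sparseCrossEntropy(new double[]{1.0, 2.0, 3.0}, 2)); // ≈ 0.4076
    }
}
```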
raver119 064a56ccf1
Few fixes (#66)
* skip legacy transforms execution in case of empty input arrays

Signed-off-by: raver119 <raver119@gmail.com>

* - BroadcastBool ops now accept extraParams to make MatchCondition possible
- TrueBroadcastHelper now uses samediff::threads

Signed-off-by: raver119 <raver119@gmail.com>

* java side

Signed-off-by: raver119 <raver119@gmail.com>

* trigger jenkins

Signed-off-by: raver119 <raver119@gmail.com>

* update LessThanOrEqual opNum mapping

Signed-off-by: raver119 <raver119@gmail.com>

* update LessThanOrEqual opNum mapping

Signed-off-by: raver119 <raver119@gmail.com>
2019-11-21 15:43:03 +03:00
raver119 83cb0d9329
[WIP] Create and small fix (#67)
* - create op
- skip exec for empty inputs for non_max_suppression
- EmptyHandling idea

Signed-off-by: raver119 <raver119@gmail.com>

* Create op and mapping for it

Signed-off-by: raver119 <raver119@gmail.com>
2019-11-21 13:31:20 +03:00
Alex Black da1944e8e1
SameDiff TF import (#49)
* Added implementation files for image_resize and resize_bicubic ops.

* Image resize and image.resize_bicubic ops implementation. Initial revision.

* Minor fix

* Some TF imports disabled.

* Finished infrastructure development for image.resize_bilinear op and image_resize op implementation.

* Refactored resize methods.

* Added processing for Mitchelcubic algorithm.

* adjust_contrast

* Small fix for TF import expected value loading when variable name starts with the test name

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tests

* Tests added.

* Removed tf names absent in mapping.

* Some fixes.

* Small fixes

* Minor change

* Some failing tests.

* Disable failed test

* Ignore some tests

* Fix import class mapping

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix float property mapping (flatbuffers)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Override equality function for model 'dropout'

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fail tests

* Failed tests ignored temporarily.

* Minor fixes

* Small fix

* Conflict resolved

* Default implementations of tensorflowName and onnxName
2019-11-19 22:44:29 +11:00
raver119 1780dcc883
[WIP] Small fixes here and there (#50)
* one range test

Signed-off-by: raver119 <raver119@gmail.com>

* few Context convenience singatures

Signed-off-by: raver119 <raver119@gmail.com>

* one more range test

Signed-off-by: raver119 <raver119@gmail.com>

* "range" "fix"

Signed-off-by: raver119 <raver119@gmail.com>

* adjust_contrast_v2 now allows scale factor to be provided via input_variable

Signed-off-by: raver119 <raver119@gmail.com>

* adjust_contrast now allows scale factor as variable too

Signed-off-by: raver119 <raver119@gmail.com>

* bitcast shape tests

Signed-off-by: raver119 <raver119@gmail.com>

* BitCast import dtype added

Signed-off-by: raver119 <raver119@gmail.com>

* few more BitCast signatures

Signed-off-by: raver119 <raver119@gmail.com>
2019-11-15 17:04:29 +03:00
Alex Black 47d19908f4
Various fixes (#43)
* #8172 Enable DL4J MKLDNN batch norm backward pass

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8382 INDArray.toString() rank 1 brackets / ambiguity fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8308 Fix handful of broken links (inc. some in errors)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Unused dependencies, round 1

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Unused dependencies, round 2

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Unused dependencies, round 3

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Uniform distribution TF import fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-14 19:38:20 +11:00
Alex Black 18c01f5bdc
Add SameDiff memory reuse memory manager (array cache) (#39)
* Attention op comments

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ArrayCacheMemoryMgr - first pass

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tweak array cache for use with SameDiff identity arrays

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ArrayCacheMemoryMgr javadoc and properly get max memory

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* LRU cache policy + add tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Resize arrays internally if required for ArrayCacheMemoryMgr

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Test improvement

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small polish

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-12 21:15:44 +11:00
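A sketch of the array-cache idea behind the ArrayCacheMemoryMgr work in #39 above: released buffers are kept in buckets keyed by length and handed back to later requests of the same size instead of being re-allocated (the access-ordered map makes an LRU eviction policy easy to add). This is a hypothetical simplification; the real manager also enforces a maximum cache size, resizes arrays when needed, and tracks data types.

```java
// Simplified array cache: reuse released buffers of matching length. Illustrative only.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.Map;

final class ArrayCacheSketch {
    // access-ordered map gives LRU iteration order over the length buckets
    private final Map<Integer, Deque<float[]>> cache = new LinkedHashMap<>(16, 0.75f, true);

    float[] allocate(int length) {
        Deque<float[]> bucket = cache.get(length);
        if (bucket != null && !bucket.isEmpty())
            return bucket.pollFirst();           // reuse a previously released buffer
        return new float[length];                // cache miss: allocate fresh
    }

    void release(float[] array) {
        cache.computeIfAbsent(array.length, k -> new ArrayDeque<>()).addFirst(array);
    }

    public static void main(String[] args) {
        ArrayCacheSketch mgr = new ArrayCacheSketch();
        float[] a = mgr.allocate(1024);
        mgr.release(a);
        float[] b = mgr.allocate(1024);
        System.out.println(a == b);              // true: the buffer was reused
    }
}
```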
AlexDBlack 0107fb10ab Merge remote-tracking branch 'konduit/master' 2019-11-08 18:11:45 +11:00
Alex Black 2f84ea666d
Uniform distribution op tweaks + 'specified output dtype' constructor (#38)
* Uniform distribution op tweaks + 'specified output dtype' constructor

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Validation tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-08 18:08:25 +11:00
Alex Black 24980efde3
Fix LogSumExp along dimension (#35)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-07 23:36:15 +11:00
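For reference on the LogSumExp fix in #35 above, the reduction along a dimension is usually computed with the max-shift trick so large values don't overflow: logsumexp(x) = max(x) + log(sum(exp(x - max(x)))). A plain-Java sketch over the rows of a 2D array; not the ND4J implementation.

```java
// Numerically stable log-sum-exp reduced along each row. Illustrative only.
public final class LogSumExpSketch {
    static double[] logSumExpAlongRows(double[][] x) {
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double max = Double.NEGATIVE_INFINITY;
            for (double v : x[i]) max = Math.max(max, v);
            double sum = 0;
            for (double v : x[i]) sum += Math.exp(v - max);   // shifted exps stay <= 1
            out[i] = max + Math.log(sum);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] r = logSumExpAlongRows(new double[][]{{1000, 1000}, {0, Math.log(2)}});
        System.out.println(r[0] + ", " + r[1]);   // 1000 + ln(2) ≈ 1000.693, ln(3) ≈ 1.0986
    }
}
```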
longzhendong 52c9918c6f Testing slice and concat (#8362) 2019-11-07 14:47:37 +11:00
Alex Black 948ebef41c
Op Fixes (#28)
* #8280 biasadd_bp nchw arg fixes (java side) + test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8285 Concat op Java side fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Concat op cpp fix - allow dynamic axis to be negative, same as static axis

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ignores for deconv3d import tests until deconv3d_tf op is implemented

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-05 00:05:04 +11:00
AlexDBlack 2844f8b69a Merge remote-tracking branch 'konduit/master' 2019-11-02 19:00:47 +11:00
Alex Black 9efd811508
Use DL4J workspaces for SameDiff layers in MLN/CG (#23)
* #8329 DL4J workspace integration for SameDiff layers

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix bug for Nd4j.createUninitializedDetached for scalars (length 0 shape array)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff output layer, graph vertex, various fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-02 17:42:01 +11:00
Alex Black d82877b18b
Various SameDiff fixes (#21)
* MKLDNN LSTM forward implementation (disabled pending #8331)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8318 add SameDiff.calculateGradientsAndOutputs

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Disable mkldnn backprop for now - pending fix, issue #8335

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8337 Fix CudaExecutioner unnecessary result array allocation/replacement

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small FlatBuffers serde fix, UInt8

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8135 ImagePreProcessingScaler - add segmentation support

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8319 Ensure listeners are called when they are supposed to be called

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8214 UNet (non-pretrained) last conv layer kernel size fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-02 11:25:53 +11:00
Alexander Stoyakin 45a40c8a89
DL4J/ND4J: Do pass on integer casts (#15)
* Int cast fixes.

* Revert "Int cast fixes."

This reverts commit aa36e8ca

* Int casts

* Int cast

* Int casts

* Get rid of int casts. Dropping deprecated aggregate ops.

* java scatterUpdate changes

Signed-off-by: raver119 <raver119@gmail.com>

* c++ scatterUpdate changes

Signed-off-by: raver119 <raver119@gmail.com>

* Remove aggregated ops.

* Restored test

* Tests restored.

* Minor fixes
2019-10-31 11:23:09 +02:00
raver119 5a4d2e8b31
[WIP] SVD (#16)
* - new SVD constructor
- OrthogonalDistribution now uses SVD custom op

Signed-off-by: raver119 <raver119@gmail.com>

* shapes fixed

Signed-off-by: raver119 <raver119@gmail.com>
2019-10-28 12:31:01 +03:00
Alex Black d333d29099
SameDiff cleanup and fixes (#12)
* #8160 Remove resolvePropertiesFromSameDiffBeforeExecution

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff API cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More SameDiff cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8248 Switch SameDiff variable init from lazy to creation time for more predictable behaviour

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8252 TanhDerivative javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8225 Deconvolution2D input validation

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8265 Switch SameDiff.outputs() to user settable, instead of unreliable 'best guess'

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8224 SameDiff.zero and .one create constants, not variables

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More cleanup and fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small test fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J SameDiff fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Re-add hack for Deconvolution2DLayer until #8315 is resolved

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8270 Move CUDA device/version logging to Java; can be disabled via existing org.nd4j.log.initialization system property

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* All ND4J init logging checks system property

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove redundant device logging

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* One more fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* UX improvements

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Deconv fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add deconv tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove debug code

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-10-26 12:38:08 +11:00
Alex Black 3f0b4a2d4c
SameDiff execution, TF and memory management overhaul (#10)
* SameDiff execution memory management improvements, round 1

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Round 2

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Round 3

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Clear node outputs closed array references; Slight change to OpValidation internals to not rely on cached op outputs

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Next step

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More polish

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add WeakIdentityHashmap

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Session fixes for control ops and next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* First steps for training session + in-line updating

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix losses and history during training

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* BiasAdd and other fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Don't use SDVariable.getArr() in TFGraphTestAllHelper (import tests)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* First steps for new dependency tracking approach

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Start integrating dependency tracking for memory management

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Non-control op dependency tracking works/passes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Switch/merge

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup and next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix issue dependency tracking for initial variables/constants

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add check for aliases when determining if safe to close array

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* First pass on new TF graph import class

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Import fixes, op fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup and fixes for new TF import mapper

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Partial implementation of new dependency tracker

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* AbstractDependencyTracker for shared code

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Overhaul SameDiff graph execution (dependency tracking)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More fixes, cleanup, next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add no-op memory manager, cleanup, fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix switch dependency tracking

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* INDArray.toString: no exception on closed arrays, just note closed

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix enter and exit dependency tracking

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* TensorArray memory management fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add unique ID for INDArray instances

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix memory management for NextIteration outputs in multi-iteration loops

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove (now unnecessary) special case handling for nested enters

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Handle control dependencies during execution; javadoc for memory managers

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup, polish, code comments, javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup and more javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add memory validation for all TF import tests - ensure all arrays (except outputs) are released

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Clean up arrays waiting on unexecuted ops at the end of execution

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fixes for enter op memory management in the context of multiple non-nested loops/frames

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix order of operation issues for dependency tracker

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Always clear op fields after execution to avoid leaks or unintended array reuse

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Re-implement dtype conversion

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for control dependencies execution (dependency tracking)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix TF import overrides and filtering

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for constant enter array dependency tracking

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J Fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More DL4J fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup and polish

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More polish and javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More logging level tweaks, small DL4J fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix to DL4J SameDiffLayer

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix empty array deserialization, add extra deserialization checks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* FlatBuffers control dep serialization fixes; test serialization as part of all TF import tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Variable control dependencies serialization fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix issue with removing inputs for ops

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* FlatBuffers NDArray deserialization fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* FlatBuffers NDArray deserialization fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Final cleanup/polish

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-10-23 21:19:50 +11:00
Alexander Stoyakin f31661e13b
Merge pull request #7 from KonduitAI/asto_nd4s_10172019
KDTree optimization
2019-10-23 12:11:25 +03:00
Alexander Stoyakin d5002b14c7 New ops wrappers 2019-10-16 12:59:08 +03:00
Alex Black b9a4b0f25f
Update links (#8292)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-10-16 12:59:52 +11:00
Robert Altena 50b13fadc8 nd4j-api cleanup. (#8273)
* nd4j-api cleanup.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* restore deleted schemes.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-10-08 21:48:22 +11:00
Alex Black f98f8be7b6
SameDiff ops (#8247)
* update javadocs and a few method signatures

Signed-off-by: Ryan Nett <rnett@skymind.io>

* add PRelu op

Signed-off-by: Ryan Nett <rnett@skymind.io>

* test and fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* add PRelu op

Signed-off-by: Ryan Nett <rnett@skymind.io>

* test and fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* slightly better test

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-19 11:52:20 +10:00
Robert Altena 83d958d536 Sparse matrix refactoring. (#8238)
* remove sparse method from INDArray.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove gemm

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove usage of Nd4j.sparseFactory

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* Nd4j.sparseFactory removed.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* sparseNDArray deleted.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove more sparse calls and constants.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove SparseBlasWrapper.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* delete BaseSparseBlasWrapper.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove 3 sparse factory classes.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* delete SparseCPULevel.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* deletes JcusparseLevel, CUDASparselevel.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* delete nativeCPU sparse classes.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* removes sparse methods from NDArrayFactory.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* more deletes.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* delete (ignored) tests. BaseSparseNDArray.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* deletes ISparseNDArray.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove sparse methods from INDArray.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* deletes sparse classes.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-17 22:56:29 +03:00
raver119 979ef13c0b fix build issues
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-13 11:55:13 +03:00
raver119 2bd69c004c
[WIP] Fixed signatures. SameDiff tests (#258) (#8233)
* Fixed signatures. SameDiff tests

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Tests fixed

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Test fixed

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Small fix

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Fixed test

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2019-09-12 19:25:03 +03:00
Alex Black 3e73e9b56e
Fixes, cleanup, enable now fixed tests, etc (#254)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-11 23:37:24 +10:00
Ryan Nett 8a05ec2a97
Fix a couple SameDiff training issues (#253)
* fix execBackwards training issue

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix validation not specifying outputs

Signed-off-by: Ryan Nett <rnett@skymind.io>

* another fix for validation listeners and history

Signed-off-by: Ryan Nett <rnett@skymind.io>

* tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* add single batch dataset output methods

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-10 20:38:23 -07:00
Alex Black b582e69e3b
Small ND4J/SameDiff fixes (#248)
* #8218 Fix Nd4j.hstack rank 1 case

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8209 SameDiff: don't allow empty arrays (with 0s in shape) for variables

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-09 22:54:07 +10:00
Alex Black 87d873929f
Small LapackTest fix (#240)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-05 14:25:20 +10:00
Ryan Nett 79867f5c5a cleanup SDRNN and rnn ops (#238)
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-05 12:25:03 +10:00
Robert Altena f25e3e71e5 remove lengthLong (#236)
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-05 11:19:38 +10:00
Ryan Nett e9454b8882
SDCNN cleanup pass (#230)
* SDCNN cleanup

Signed-off-by: Ryan Nett <rnett@skymind.io>

* NonNull annotations

Signed-off-by: Ryan Nett <rnett@skymind.io>

* better javadoc, NonNull fix for sconv

Signed-off-by: Ryan Nett <rnett@skymind.io>

* update builders to fix names

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* even more fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix for null bias

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-04 00:44:01 -07:00
Alex Black 82c9dc5743
ELU fix (#219)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-02 18:37:05 +10:00
raver119 b71c993ded
[WIP] maxpool_bp cuda fix (#212)
* one test for alex

Signed-off-by: raver119 <raver119@gmail.com>

* fix

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of safety offset in cpp

Signed-off-by: raver119 <raver119@gmail.com>

* bfloat16

Signed-off-by: raver119 <raver119@gmail.com>

* minor test rearrangement to fastpath launch

Signed-off-by: raver119 <raver119@gmail.com>

* - atomicAdd/Mul/Div fix for float16/bfloat16 misalignment
- one special test for maxpoolbp java
- safety offset of 8 bytes is back to libnd4j legacy

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-31 20:57:05 +03:00
Robert Altena 54e320a255 javadoc (#201)
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-30 22:40:27 +10:00
Alex Black dcc2baa676
Version upgrades (#199)
* DataVec fixes for Jackson version upgrade

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J jackson updates + databind version 2.9.9.3

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Shade snakeyaml along with jackson

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Version fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Switch DataVec legacy JSON format handling to mixins

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Next set of fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup for legacy JSON mapping

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Upgrade commons compress to 1.18; small test fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* New Jackson backward compatibility for DL4J - Round 1

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* New Jackson backward compatibility for DL4J - Round 2

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More fixes, all but legacy custom passing

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Provide an upgrade path for custom layers for models in pre-1.0.0-beta JSON format

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Legacy deserialization cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small amount of polish - legacy JSON

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Upgrade guava version

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* IEvaluation legacy format deserialization fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Upgrade play version to 2.7.3

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Update nd4j-parameter-server-status to new Play API

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Update DL4J UI for new play version

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More play framework updates

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove Spark 1/2 adapter code from DataVec

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* datavec-spark dependency cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J spark updates, pt 1

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J spark updates, pt 2

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J spark updates, pt 3

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J spark updates, pt 4

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Test fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Another fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Breeze upgrade, dependency cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add Scala 2.12 version to pom.xml

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* change-scala-versions.sh - add scala 2.12, remove 2.10

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Move Spark version properties to parent pom (now that only one spark version is supported)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DataVec Play fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* datavec play dependency fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Clean up old spark/jackson stuff

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup jackson unused dependencies

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Dropping redundant dependency

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Removed scalaxy dependency

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Add shaded guava

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Ensure not possible to import pre-shaded classes, and remove direct guava dependencies in favor of shaded

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ND4J Shaded guava import fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DataVec and DL4J guava shading

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Arbiter, RL4J fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Build fixed

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Fix dependency

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Fix bad merge

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Jackson shading fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Set play secret, datavec-spark-inference-server

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for datavec-spark-inference-server

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Arbiter fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Arbiter fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small test fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-30 14:35:27 +10:00
Serhii Shepel 0463ee4eba Fix backend dependencies for tests (#189) 2019-08-29 12:54:48 +09:00
raver119 f4860574d7
[WIP] More fixes (#190)
* Refactored kernels for segment_max/min/sum ops.

* Refactored segment_prod kernels.

* Refactored segment_prod kernels.

* DynamicPartition test

Signed-off-by: raver119 <raver119@gmail.com>

* Added linear test for dynamic_partition op.

* Refactored test with int datatype.

* some logging

Signed-off-by: raver119 <raver119@gmail.com>

* some logging

Signed-off-by: raver119 <raver119@gmail.com>

* some logging

Signed-off-by: raver119 <raver119@gmail.com>

* dynamicPartition fix

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of some logging

Signed-off-by: raver119 <raver119@gmail.com>

* one more test for dynamic_stitch

Signed-off-by: raver119 <raver119@gmail.com>

* one more test for dynamic_stitch

Signed-off-by: raver119 <raver119@gmail.com>

* empty check for stitch

Signed-off-by: raver119 <raver119@gmail.com>

* minor print changes

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-28 15:38:57 +03:00
Ryan Nett 2a1431264f
Remove calculate output shape from java side (#151)
* remove some unneeded java-side output shape calculations

Signed-off-by: Ryan Nett <rnett@skymind.io>

* delete Broadcast

Signed-off-by: Ryan Nett <rnett@skymind.io>

* delete Linear and Module,

Signed-off-by: Ryan Nett <rnett@skymind.io>

* update Identity, HashCode, and NoOp

Signed-off-by: Ryan Nett <rnett@skymind.io>

* removed Cast java-side shape function, added tests and SDVariable.isEmpty

Signed-off-by: Ryan Nett <rnett@skymind.io>

* ignoring test w/ issues on master

Signed-off-by: Ryan Nett <rnett@skymind.io>

* noop needs more work, fixed BaseArithmeticBackprop and BaseDynamicTransform ops

merge in master for c++ build fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix EqualTo

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix other cond ops

Signed-off-by: Ryan Nett <rnett@skymind.io>

* "fake" ops calculateOutputShape() throws exception

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use c++ shape calc for Linspace

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix exception message, move most to BaseCompatOp

Signed-off-by: Ryan Nett <rnett@skymind.io>

* remove SDVariable.isEmpty

Signed-off-by: Ryan Nett <rnett@skymind.io>

* remove commented out code

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-08-27 20:39:32 -07:00
Alex Black b46f9827b8
Layer norm test updates (#187)
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-08-28 13:27:00 +10:00
Robert Altena 59a6e4e3ae
INDArray refactoring (#170)
* javadoc

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove javaTensorAlongDimension

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* javadoc
2019-08-28 12:03:23 +09:00
raver119 b472d7d8c8
[WIP] few more fixes (#182)
* one noop test

Signed-off-by: raver119 <raver119@gmail.com>

* skip input validation for no-input ops

Signed-off-by: raver119 <raver119@gmail.com>

* - one more noop empty test
- one more validation before sync

Signed-off-by: raver119 <raver119@gmail.com>

* typo

Signed-off-by: raver119 <raver119@gmail.com>

* one more validation fix

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA empty reductions java side

Signed-off-by: raver119 <raver119@gmail.com>

* one svd test

Signed-off-by: raver119 <raver119@gmail.com>

* Corrected segment_mean helpers and added another test.

* Refactored segment_mean kernels to avoid race condition.
2019-08-27 21:00:38 +03:00
Alex Black dff599aa8f
Test fix (#179)
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-08-27 20:43:36 +10:00
Alex Black dce4751fc1
Layer norm 4d case fixes (#174)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-27 18:34:53 +10:00
raver119 df84bc7255
[WIP] More tweaks (#173)
* CUDA empty reduction

Signed-off-by: raver119 <raver119@gmail.com>

* - listdiff synchronization fix for CUDA
- listdiff test

Signed-off-by: raver119 <raver119@gmail.com>

* - IndexReduce ops now allow INDEXING_TYPES output
- topK op accepts only INDEXING_TYPES as output

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 10:37:10 +03:00
Alex Black e92f7218f3
Add new tests (#171)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-27 12:15:56 +10:00
raver119 25e5c23eae
[WIP] Error handling (#169)
* CUDA reverse rewrite + couple of tests

Signed-off-by: raver119 <raver119@gmail.com>

* don't throw exception on invalid pointer

Signed-off-by: raver119 <raver119@gmail.com>

* data types validation for fastpath exec mode + 2 tests

Signed-off-by: raver119 <raver119@gmail.com>

* data types validation for fastpath exec mode + 2 tests

Signed-off-by: raver119 <raver119@gmail.com>

* ismax allowed dtypes tweak

Signed-off-by: raver119 <raver119@gmail.com>

* lastErrorCode + lastErrorMessage for native exceptions handling

Signed-off-by: raver119 <raver119@gmail.com>

* exportable ErrorReference

Signed-off-by: raver119 <raver119@gmail.com>

* check error codes in java

Signed-off-by: raver119 <raver119@gmail.com>

* - consume lastErrorCode
- fast_in dtype validation fix

Signed-off-by: raver119 <raver119@gmail.com>

* - sg/cb allowed output type change
- minor logging fix for data type validation

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-26 19:57:51 +03:00
raver119 bb5fc36e5e
[WIP] ops fixes (#168)
* - correct layer_norm

Signed-off-by: Yurii <yurii@skymind.io>

* - further fix of layer norm

Signed-off-by: Yurii <yurii@skymind.io>

* - correct scatter_upd op

Signed-off-by: Yurii <yurii@skymind.io>

* - correct cuda kernel for histogram_fixed_width op

Signed-off-by: Yurii <yurii@skymind.io>

* - delete comments

Signed-off-by: Yurii <yurii@skymind.io>

* enabled one ignored test

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-26 19:37:05 +03:00
Alex Black b417ca21bf
Fix for concat op shape function (empty shapes) (#167)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-26 23:10:28 +10:00
Alex Black d607bec6f9
Small test fixes (#165)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-26 11:45:33 +10:00
raver119 f8364997c0
[WIP] maxpool2d_bp fix (#160)
* one test for maxpool2d_bp

Signed-off-by: raver119 <raver119@gmail.com>

* - maxpool2d_bp cuda fix for NaNs
- streamSync after each custom op execution

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-24 09:20:57 +03:00
raver119 f03b0ee78f
[WIP] more fixes (#159)
* Added test for MatrixInverse with double input. Fixed matrixDeterminantKernel.

* Fixed kernels to avoid wasteful templating.

* Fixed logDeterminant kernel.

* Refactored type check for lup'

* - decrease blockDim value for zeta op

Signed-off-by: Yurii <yurii@skymind.io>

* Added print for compound matrix with CUDA.

* Refactored upper matrix inversion kernels.

* - provide move constructor and move assignment operator for OpArgsHolder class

Signed-off-by: Yurii <yurii@skymind.io>

* Refactored usage of launch context.

* - add test for mergemax

Signed-off-by: Yurii <yurii@skymind.io>

* get rid of AveragingArrayProxy

Signed-off-by: raver119 <raver119@gmail.com>

* Refactoring of LUP inversion.

* Added prints for inversion.

* - add OpArgsHolder copy constructor and assignment operator

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for lower inversion

* - fix bug in upsampling2d/3d_bp op

Signed-off-by: Yurii <yurii@skymind.io>

* Added expensive printfs to kernel.

* Refactored expensive kernel prints.

* Refactored expensive printfs

* - remove nullify

Signed-off-by: Yurii <yurii@skymind.io>

* Eliminated wasteful prints with tests.

* upsampling2d_bp test

Signed-off-by: raver119 <raver119@gmail.com>

* test updated

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 19:20:50 +03:00
raver119 99cdf6d42b - cpu isMax fix for multidim case + test
- INDArray.wasClosed() fix for empty array edge case

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 18:44:37 +03:00
raver119 729dc5e879
[WIP] size etc (#155)
* one test for size

Signed-off-by: raver119 <raver119@gmail.com>

* - few tests for size op
- size/rank/size_at ops now use p instead of assign

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 12:31:12 +03:00
raver119 243bf866c4
[WIP] Few fixes (#153)
* throw exception if op execution failed

Signed-off-by: raver119 <raver119@gmail.com>

* expected for test

Signed-off-by: raver119 <raver119@gmail.com>

* one more ismax test

Signed-off-by: raver119 <raver119@gmail.com>

* ismax view fix

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 09:00:10 +03:00
Alex Black 80d35377d4
SameDiff cleanup and fixes (#150)
* Cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SDVariable no longer extends DifferentialFunction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8123 Remove cloning library to avoid 'illegal reflective access' warnings

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8095 Make Pooling3D abstract, fix flatbuffers serialization issue

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8117 WordVectorSerializer deprecated method javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Final fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* One more

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-23 15:09:53 +10:00
raver119 930b49e87f
[WIP] DeviceLocalNDArray updates (#149)
* ContextBuffers are released upon device change

Signed-off-by: raver119 <raver119@gmail.com>

* DeviceLocalNDArray updates + tests

Signed-off-by: raver119 <raver119@gmail.com>

* special array for delayed mode

Signed-off-by: raver119 <raver119@gmail.com>

* additional detach()

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-22 20:01:29 +03:00
Alex Black 9c2bfc9863
Various fixes (DL4J, ND4J) (#147)
* Import fixes, IsMax dtype calc, small test fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SubsamplingLayer fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J - SpaceToBatch layer updates

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-22 16:16:03 +10:00
Robert Altena ca7e5593ec
ND4J: Remove Nd4j.trueScalar/trueVector (#145)
* merge conflict.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove/replace trueVector

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-22 10:49:30 +09:00
Ryan Nett 2b0d7b3b52
[WIP] Various fixes, mostly SameDiff/Nd4j (#110)
* Nd4j pad update

Signed-off-by: Ryan Nett <rnett@skymind.io>

* switched from guava Immutables to Collections.unmodifiableList/Map

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadoc

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use new pad

Signed-off-by: Ryan Nett <rnett@skymind.io>

* conv tests use OpValidation

Signed-off-by: Ryan Nett <rnett@skymind.io>

* deconv3d overrides

Signed-off-by: Ryan Nett <rnett@skymind.io>

* test fix for the new pad method

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more test fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more test fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* rename SameDiff function methods to op (except for the actual SameDiff function ones)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more pad overloads, test fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* test updates

Signed-off-by: Ryan Nett <rnett@skymind.io>

* conv1d test

Signed-off-by: Ryan Nett <rnett@skymind.io>

* remove Conv1D tf import (there isn't a TF conv1d op)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* remove numThreads from Nd4j

Signed-off-by: Ryan Nett <rnett@skymind.io>

* replace Old ops with their newer versions, deprecate ones that haven't already been deprecated

Signed-off-by: Ryan Nett <rnett@skymind.io>

* remove use of setNumThreads

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix for Reverse and ATan2

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix test for wrong equals type

Signed-off-by: Ryan Nett <rnett@skymind.io>

* well it works now

Signed-off-by: Ryan Nett <rnett@skymind.io>

* better javadocs

Signed-off-by: Ryan Nett <rnett@skymind.io>

* NonNulls

Signed-off-by: Ryan Nett <rnett@skymind.io>

* better array literal

Signed-off-by: Ryan Nett <rnett@skymind.io>

* re-add tf import stuff (will remove later)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* conv1d config load fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* partial config usage changes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* remove Old op classes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* config property fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* removed one too many ops

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-08-21 16:40:32 -07:00
raver119 77805cb7fa
[WIP] cpu ismax fix (#137)
* cpu ismax fix

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of smaller scalar tests

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-21 10:12:11 +03:00
raver119 269d508ba5
[WIP] cross-device migrations (#134)
* two more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA device affinity tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* minor tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* prepareAction/registerAction for CustomOps

Signed-off-by: raver119 <raver119@gmail.com>

* lazily allocate host buffer before relocation

Signed-off-by: raver119 <raver119@gmail.com>

* one special test for migration in cpp

Signed-off-by: raver119 <raver119@gmail.com>

* tests update for msvc

Signed-off-by: raver119 <raver119@gmail.com>

* logging

Signed-off-by: raver119 <raver119@gmail.com>

* stick to old col2im impl

Signed-off-by: raver119 <raver119@gmail.com>

* cudaStreams reorganization

Signed-off-by: raver119 <raver119@gmail.com>

* buffer size fix

Signed-off-by: raver119 <raver119@gmail.com>

* c++ data migration

Signed-off-by: raver119 <raver119@gmail.com>

* fix CropAndResize test

Signed-off-by: raver119 <raver119@gmail.com>

* - minor improvement

Signed-off-by: Yurii <yurii@skymind.io>
2019-08-20 18:52:41 +03:00
Robert Altena 38310777ee
fix for eclipse#8087 (#129)
*  fix for #8087

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove commented code.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* removing trueScalar.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove trueScalar.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-20 15:20:40 +09:00
raver119 b8ab1a00b0 - 2 mod tests
- ModOp mapping added

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-19 12:57:24 +03:00
raver119 aceb915557
[WIP] tests fixes (#130)
* no openmp for ClipByGlobalNorm

Signed-off-by: raver119 <raver119@gmail.com>

* one more bfloat16 rng test

Signed-off-by: raver119 <raver119@gmail.com>

* assertion fix

Signed-off-by: raver119 <raver119@gmail.com>

* - legacy IsMax gone
- linear IsMax gets shapeInfo argument

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of legacy IsMax tests

Signed-off-by: raver119 <raver119@gmail.com>

* IsMax is custom op now

Signed-off-by: raver119 <raver119@gmail.com>

* more blocks for ismax

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

*  - sqrt test
 - some legacy code removed from CudaExecutioner
 - Transforms.asin tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* - TransformFloat fix

Signed-off-by: raver119 <raver119@gmail.com>

* - ismax fix
- SpaceToBatchND/BatchToSpaceND wrappers
- couple of legacy tests removed

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-19 11:33:15 +03:00
Robert Altena 59cba587f4
Nd4j refactoring (#112)
* refactoring

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

* fix: make test public.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* make test public.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* fixes read refactoring.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-15 13:50:52 +09:00
raver119 c7277729e9
few fixes for bfloat16 in java and cpp (#114)
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-14 21:51:42 +03:00
raver119 53ca9a76e8
[WIP] multi-device support (#80)
* fix pad javadoc and @see links. (#72)

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* [WIP] More fixes (#73)

* special tests for ConstantTadHelper/ConstantShapeHelper

Signed-off-by: raver119 <raver119@gmail.com>

* release methods for data buffers

Signed-off-by: raver119 <raver119@gmail.com>

* delete temporary buffer Java side

Signed-off-by: raver119 <raver119@gmail.com>

* delete temporary buffer Java side

Signed-off-by: raver119 <raver119@gmail.com>

* delete temporary TadPack C++/Java side (#74)

Signed-off-by: raver119 <raver119@gmail.com>

* Zoo model TF import test updates (#75)

* argLine fix, update compression_gru comment

* updated comment for xception

* undid argLine change but kept it commented out

* updated xlnet comment

* copyright headers

* - new NDArray methods like()/ulike() (#77)

- fix for depthwise_conv2d_bp + special test

Signed-off-by: raver119 <raver119@gmail.com>

* upsampling2d fix CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* DL4J trace logging (#79)

* MLN/CG trace logging for debugging

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tiny tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* strided_slice_bp shape fn leak fix

Signed-off-by: raver119 <raver119@gmail.com>

* SameDiff fixes and naming (#78)

* remove SDVariable inplace methods

* import methods

* npe fix in OpVal

* removed SameDiff inplace ops from tests

* Naming updates, moved to centralized methods in SameDiff, should use op_#:# for everything

* quick fixes

* javadoc

* SDVariable eval with placeholders

* use regex match

* better matching

* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* fix javadoc. (#76)

* fix javadoc.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* replace most @see with @link s.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* 4 additional tests

Signed-off-by: raver119 <raver119@gmail.com>

* launch context reorganization

Signed-off-by: raver119 <raver119@gmail.com>

* LaunchContext reorganization

Signed-off-by: raver119 <raver119@gmail.com>

* per-device LaunchContext

Signed-off-by: raver119 <raver119@gmail.com>

* Various DL4J/ND4J fixes (#81)

* #7954 Force refresh of UI when switching tabs on overview page

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8017 Concurrent modification exception (synchronize) fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8033 Don't initialize updater in middle of writing memory crash dump

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8208 Fix shape checks for ND4J int[] creator methods

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #6385 #7992 Keras import naming fixes + cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8016 Upsampling3D - add NDHWC format support

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ContextBuffers as separate entity

Signed-off-by: raver119 <raver119@gmail.com>

* Refactor NativeOps.h to export C functions

* Actually export functions from NativeOps.h

* Adapt the Java wrappers in ND4J generated with JavaCPP

* Create C wrappers for some of the C++ classes currently used by ND4J

* ContextBuffers as separate entity

Signed-off-by: raver119 <raver119@gmail.com>

* remove duplicate code in createBufferDetached. (#83)

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* Keras model import - updater lr fix (#84)

* Keras model import - updater lr fix

Signed-off-by: eraly <susan.eraly@gmail.com>

* Keras model import - updater lr fix, cleanup

Signed-off-by: eraly <susan.eraly@gmail.com>

* ContextBuffers as separate entity

Signed-off-by: raver119 <raver119@gmail.com>

* ContextBuffers as separate entity

Signed-off-by: raver119 <raver119@gmail.com>

* Fix functions of OpaqueVariablesSet

* thread-local buffers/affinity

Signed-off-by: raver119 <raver119@gmail.com>

* thread safety for LaunchContext

Signed-off-by: raver119 <raver119@gmail.com>

* more of thread safety

Signed-off-by: raver119 <raver119@gmail.com>

* one more multi threaded test

Signed-off-by: raver119 <raver119@gmail.com>

* SameDiff Convolution Config validation, better output methods (#82)

* Conv Config validation & tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* stackOutputs utility method

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use constructor for validation, support negative kernel sizes (inferred from weights)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* better output methods

Signed-off-by: Ryan Nett <rnett@skymind.io>

* move output to be with fit and evaluate

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* refactor duplicate code from pad methods. (#86)

* refactor duplicate code from pad methods.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* replace switch with if.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* Various ND4J/DL4J fixes and improvements (#87)

* Reshape and reallocate - small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Reshape and reallocate - small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #6488 ElementWiseVertex broadcast support

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Constructors and broadcast supported in Transforms.max/min

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8054 ElementWiseVertex now supports broadcast inputs

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8057 Nd4j.create overload dtype fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7551 ND4J Shape validation fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* [WIP] Numpy boolean import (#91)

* numpy bool type

Signed-off-by: raver119 <raver119@gmail.com>

* numpy bool java side

Signed-off-by: raver119 <raver119@gmail.com>

* remove create method with unused parameter. (#89)

* remove create method with unused parameter.

* removed more unused methods.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* removing more unused code.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* last removal of unused code.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove createSparse methods. (#92)

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* Various ND4J/DL4J fixes (#90)

* Deprecate Old*Op instances

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8063 #8054 Broadcast exceptions + cleanup inplace ops

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove bad test condition

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7993 Fix shape function issue in crop_and_resize op

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J SameDiff lambda layer fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8029 Fix for pnorm backprop math

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8038 Fix Op profiler NaN/Inf triggering + add tests (#93)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* createUninitializedDetached refactoring. (#94)

* wip

* update interface, add null implementations.

* Breaking one test in a weird way.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* createUninitializedDetached refactored.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* cuda build fix for issues introduced by recent refactoring

Signed-off-by: raver119 <raver119@gmail.com>

* [WIP] More of CUDA (#95)

* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* Implementation of hashcode cuda helper. Working edition.

* Fixed parallel test input arrangements.

* Fixed tests for hashcode op.

* Fixed shape calculation for image:crop_and_resize op and test.

* NativeOps tests. Initial test suite.

* Added tests for indexReduce methods.

* Added test on execBroadcast with NDArray as dimensions.

* Added test on execBroadcastBool with NDArray as dimensions.

* Added tests on execPairwiseTransform and execPairwiseTransformBool.

* Added tests for execReduce with scalar results.

* Added reduce tests for non-empty dims array.

* Added tests for reduce3.

* Added tests for execScalar.

* Added tests for execSummaryStats.

* - provide cpu/cuda code for batch_to_space
- testing it

Signed-off-by: Yurii <yurii@skymind.io>

* - remove old test for batch_to_space (had wrong format and numbers were not checked)

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed compilation errors with test.

* Added test for execTransformFloat.

* Added test for execTransformSame.

* Added test for execTransformBool.

* Added test for execTransformStrict.

* Added tests for execScalar/execScalarBool with TADs.

* Added test for flatten.

* - provide cpu/cuda code for space_to_batch operation

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for concat.

* comment unnecessary stuff in s_t_b

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for specialConcat.

* Added tests for memcpy/set routines.

* Fixed pullRow cuda test.

* Added pullRow test.

* Added average test.

* - correct typo in NDArray::applyPairwiseTransform(nd4j::pairwise::BoolOps op...)

Signed-off-by: Yurii <yurii@skymind.io>

* - debugging and fixing cuda tests in JavaInteropTests file

Signed-off-by: Yurii <yurii@skymind.io>

* - correct some tests

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for shuffle.

* Fixed ops declarations.

* Restored omp and added shuffle test.

* Added convertTypes test.

* Added tests for execRandom. Eliminated usage of RandomBuffer with NativeOps.

* Added sort tests.

* Added tests for execCustomOp.

* - further debugging and fixing of tests that terminated with a crash

Signed-off-by: Yurii <yurii@skymind.io>

* Added tests for calculateOutputShapes.

* Added Benchmarks test.

* Commented benchmark tests.

* change assertion

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for apply_sgd op. Added cpu helper for that op.

* Implemented cuda helper for apply_sgd op. Fixed tests for NativeOps.

* Added test for assign broadcastable.

* Added tests for assign_bp op.

* Added tests for axpy op.

* - assign/execScalar/execTransformAny signature change
- minor test fix

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed axpy op.

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* - fix tests for nativeOps::concat

Signed-off-by: Yurii <yurii@skymind.io>

* sequential transform/scalar

Signed-off-by: raver119 <raver119@gmail.com>

* allow nested parallelism

Signed-off-by: raver119 <raver119@gmail.com>

* assign_bp leak fix

Signed-off-by: raver119 <raver119@gmail.com>

* block setRNG fix

Signed-off-by: raver119 <raver119@gmail.com>

* enable parallelism by default

Signed-off-by: raver119 <raver119@gmail.com>

* enable nested parallelism by default

Signed-off-by: raver119 <raver119@gmail.com>

* Added cuda implementation for row_count helper.

* Added implementation for tsne gains op helper.

* - take into account possible situations when input arrays are empty in reduce_ cuda stuff

Signed-off-by: Yurii <yurii@skymind.io>

* Implemented tsne/edge_forces op cuda-based helper. Parallelized cpu-based helper for edge_forces.

* Added kernel for tsne/symmetrized op helper.

* Implementation of tsne/symmetrized op cuda helper. Working edition.

* Eliminated wasteful printfs.

* Added test for broadcastgradientargs op.

* host-only fallback for empty reduce float

Signed-off-by: raver119 <raver119@gmail.com>

* - some test fixes

Signed-off-by: Yurii <yurii@skymind.io>

* - correct the rest of reduce_ stuff

Signed-off-by: Yurii <yurii@skymind.io>

* - further correction of reduce_ stuff

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for Cbow op. Also added cuda implementation for cbow helpers.

* - improve code of stack operation for scalar case

Signed-off-by: Yurii <yurii@skymind.io>

* - provide cuda kernel for gatherND operation

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of cbow helpers with cuda kernels.

* minor test tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* minor test tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* - further correction of cuda stuff

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of cbow op helper with cuda kernels. Working edition.

* Skip random testing for cudablas case.

* lstmBlockCell context fix

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for ELU and ELU_BP ops.

* Added tests for eq_scalar, gt_scalar, gte_scalar and lte_scalar ops.

* Added tests for neq_scalar.

* Added test for noop.

* - further work on clipbynorm_bp

Signed-off-by: Yurii <yurii@skymind.io>

* - get rid of concat op call, use direct concat helper call instead

Signed-off-by: Yurii <yurii@skymind.io>

* lstmBlockCell context fix

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for lrelu and lrelu_bp.

* Added tests for selu and selu_bp.

* Fixed lrelu derivative helpers.

* - some corrections in lstm

Signed-off-by: Yurii <yurii@skymind.io>

* operator * result shape fix

Signed-off-by: raver119 <raver119@gmail.com>

* - correct typo in lstmCell

Signed-off-by: Yurii <yurii@skymind.io>

* few tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA inverse broadcast bool fix

Signed-off-by: raver119 <raver119@gmail.com>

* disable MMAP test for CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* BooleanOp syncToDevice

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* additional data types for im2col/col2im

Signed-off-by: raver119 <raver119@gmail.com>

* Added test for firas_sparse op.

* one more RandomBuffer test excluded

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for flatten op.

* Added test for Floor op.

* bunch of tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* mmulDot tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* Implemented floordiv_bp op and tests.

* Fixed scalar case with cuda implementation for bds.

* - work on cuda kernel for clip_by_norm backprop op is completed

Signed-off-by: Yurii <yurii@skymind.io>

* Eliminated cbow crash.

* more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* Eliminated abort in batched nlp test.

* more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed shared flag initializing.

* disabled bunch of cpu workspaces tests

Signed-off-by: raver119 <raver119@gmail.com>

* scalar operators fix: missing registerSpecialUse call

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed logdet for cuda and tests.

* - correct clipBynorm_bp

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed crop_and_resize shape datatype.

* - correct some mmul tests

Signed-off-by: Yurii <yurii@skymind.io>

* build fix

Signed-off-by: raver119 <raver119@gmail.com>

* exclude two methods for JNI

Signed-off-by: raver119 <raver119@gmail.com>

* exclude two methods for JNI

Signed-off-by: raver119 <raver119@gmail.com>

* exclude two methods for JNI (#97)

Signed-off-by: raver119 <raver119@gmail.com>

* temporary stack fix

Signed-off-by: raver119 <raver119@gmail.com>

* round robin affinity test

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of legacy CudaContext methods

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of legacy ContextPool classes/methods

Signed-off-by: raver119 <raver119@gmail.com>

* one legacy test removed

Signed-off-by: raver119 <raver119@gmail.com>

* few more fields rearranged

Signed-off-by: raver119 <raver119@gmail.com>

* OpaqueLaunchContext

Signed-off-by: raver119 <raver119@gmail.com>

* OpaqueLaunchContext++

Signed-off-by: raver119 <raver119@gmail.com>

* more of OpaqueLaunchContext methods

Signed-off-by: raver119 <raver119@gmail.com>

* LaunchContext -> CudaContext

Signed-off-by: raver119 <raver119@gmail.com>

* AffinityManager changes

Signed-off-by: raver119 <raver119@gmail.com>

* AffinityManager changes

Signed-off-by: raver119 <raver119@gmail.com>

* cusolver handles

Signed-off-by: raver119 <raver119@gmail.com>

* typo

Signed-off-by: raver119 <raver119@gmail.com>

* cusolver method

Signed-off-by: raver119 <raver119@gmail.com>

* cusolver handle propagated

Signed-off-by: raver119 <raver119@gmail.com>

* blas/solver handles

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* legacy concat implementations replaced with new CustomOp

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* concat now uses way more blocks

Signed-off-by: raver119 <raver119@gmail.com>

* print

Signed-off-by: raver119 <raver119@gmail.com>

* no more triple template mmul

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of kernels have dtypes reconsidered

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of kernels have dtypes reconsidered

Signed-off-by: raver119 <raver119@gmail.com>

* bitonic sort reorganized

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of cpu stuff removed from cuda scope

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of cpu stuff removed from cuda scope

Signed-off-by: raver119 <raver119@gmail.com>

* type conversions moved to generic impl

Signed-off-by: raver119 <raver119@gmail.com>

* cpu data types pass

Signed-off-by: raver119 <raver119@gmail.com>

* non_max_suppression

Signed-off-by: raver119 <raver119@gmail.com>

* sortByValue fix

Signed-off-by: raver119 <raver119@gmail.com>

* ignore all mixed datatype tests for mmul

Signed-off-by: raver119 <raver119@gmail.com>

* special handling of OpProfiler exceptions

Signed-off-by: raver119 <raver119@gmail.com>

* - one failing concat test in cpp
- Nd4j.tile now uses op internally

Signed-off-by: raver119 <raver119@gmail.com>

* get back dtype exception for legacy arrays deserialization

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-14 16:52:34 +03:00
Robert Altena b10ab239c0 Nd4j refactoring (#111)
* refactoring

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-13 21:44:40 +10:00
Ryan Nett 11bddb3825 SameDiff: Listener changes and training api update (#99)
* example api

Signed-off-by: Ryan Nett <rnett@skymind.io>

* Lambda based evaluation

Signed-off-by: Ryan Nett <rnett@skymind.io>

* lambda test

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* partial fixes, use get-variable listener framework, example EvaluationListener

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadoc fix and newInstance implementations

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fit and evaluate methods with validation data (for fit) and listeners

Signed-off-by: Ryan Nett <rnett@skymind.io>

* output method overloads + listener args

Signed-off-by: Ryan Nett <rnett@skymind.io>

* history and evaluation helpers

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* FitConfig and added getters and setters

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadocs

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes, javadoc, added activations to history, added latest activation listener

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes, start of tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes and updates

Signed-off-by: Ryan Nett <rnett@skymind.io>

* newInstance fixes, tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* test fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadocs, getters with SDVariable overrides, CustomEvaluation fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more operation config classes (evaluation, output, exec/single batch output), fix custom eval tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* merge fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes, most old fit/evaluate/output methods use the builders

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* numerous fixes/cleanup

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadoc

Signed-off-by: Ryan Nett <rnett@skymind.io>

* Polish round 1

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Round 2

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Formatting + round 3

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Round 4

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-10 15:30:31 +10:00
Alex Black edb71bf46f
Add SameDiff exec debugging listener + few fixes (#104)
* First pass on SameDiff op exec debug listener

Signed-off-by: Alex Black <blacka101@gmail.com>

* #7555 DL4J helpers - don't fall back on builtin for op profiler exceptions

Signed-off-by: Alex Black <blacka101@gmail.com>

* Exec debugging listener + fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Fix import counts for TF ops in OpValidationSuite

Signed-off-by: Alex Black <blacka101@gmail.com>

* Fix bad DL4J test configuration

Signed-off-by: Alex Black <blacka101@gmail.com>

* Exec debugging listener polish

Signed-off-by: Alex Black <blacka101@gmail.com>

* Small fix

Signed-off-by: Alex Black <blacka101@gmail.com>

* Another fix

Signed-off-by: Alex Black <blacka101@gmail.com>
2019-08-07 17:18:29 +10:00
Robert Altena ba1d1b160b
remove method with unused parameter. (#102)
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-06 19:14:45 +09:00
Alex Black b8846113bd
Small number of fixes + cleanup + some missing op methods + constructors (#100)
* Remove unused op class - DefaultOpConverter

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add SDImage class; INDArray constructor additions

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Floordiv

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small polish to image methods

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small DataVec test fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 22:31:46 +10:00
Alex Black 923ab15583 Small number of fixes (#98)
* Remove no longer functioning LegacyPooling2D class

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8066 First steps for reflection scanning - INDArray constructors

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small build fix for ND4S

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More nd4s fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 11:28:07 +10:00
Robert Altena dfec54242d createUninitializedDetached refactoring. (#94)
* wip

* update interface, add null implementations.

* Breaking one test in a weird way.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* createUninitializedDetached refactored.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-05 11:27:05 +10:00
Alex Black cd41c3540d #8038 Fix Op profiler NaN/Inf triggering + add tests (#93)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 11:25:48 +10:00
Alex Black e18e2dc014 Various ND4J/DL4J fixes (#90)
* Deprecate Old*Op instances

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8063 #8054 Broadcast exceptions + cleanup inplace ops

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove bad test condition

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7993 Fix shape function issue in crop_and_resize op

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J SameDiff lambda layer fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8029 Fix for pnorm backprop math

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 11:25:48 +10:00
Robert Altena fbe120031d remove createSparse methods. (#92)
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-05 11:25:48 +10:00
Robert Altena 386a9f057b remove create method with unused parameter. (#89)
* remove create method with unused parameter.

* removed more unused methods.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* removing more unused code.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* last removal of unused code.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-05 11:25:48 +10:00
raver119 065b34c7cb [WIP] Numpy boolean import (#91)
* numpy bool type

Signed-off-by: raver119 <raver119@gmail.com>

* numpy bool java side

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-05 11:25:48 +10:00
Alex Black b95417f7c5 Various ND4J/DL4J fixes and improvements (#87)
* Reshape and reallocate - small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Reshape and reallocate - small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #6488 ElementWiseVertex broadcast support

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Constructors and broadcast supported it Transforms.max/min

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8054 ElementWiseVertex now supports broadcast inputs

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8057 Nd4j.create overload dtype fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7551 ND4J Shape validation fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 11:24:20 +10:00
Ryan Nett d4e7997134 SameDiff Convolution Config validation, better output methods (#82)
* Conv Config validation & tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* stackOutputs utility method

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use constructor for validation, support negative kernel sizes (inferred from weights)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* better output methods

Signed-off-by: Ryan Nett <rnett@skymind.io>

* move output to be with fit and evaluate

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-08-05 11:24:20 +10:00
Samuel Audet 526b782e51 Create C wrappers for some of the C++ classes currently used by ND4J 2019-08-05 11:22:59 +10:00
Alex Black fad8da878f Various DL4J/ND4J fixes (#81)
* #7954 Force refresh of UI when switching tabs on overview page

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8017 Concurrent modification exception (synchronize) fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8033 Don't initialize updater in middle of writing memory crash dump

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8208 Fix shape checks for ND4J int[] creator methods

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #6385 #7992 Keras import naming fixes + cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8016 Upsampling3D - add NDHWC format support

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 11:21:23 +10:00
Ryan Nett ac321265a7 SameDiff fixes and naming (#78)
* remove SDVariable inplace methods

* import methods

* npe fix in OpVal

* removed SameDiff inplace ops from tests

* Naming updates, moved to centralized methods in SameDiff, should use op_#:# for everything

* quick fixes

* javadoc

* SDVariable eval with placeholders

* use regex match

* better matching
2019-08-05 11:21:23 +10:00
Ryan Nett 00e6296140 Zoo model TF import test updates (#75)
* argLine fix, update compression_gru comment

* updated comment for xception

* reverted argLine change but left it commented out

* updated xlnet comment

* copyright headers
2019-08-05 11:13:41 +10:00
Alex Black 7939cf384b Misc fixes (#66)
* Small fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Flaky test fix

Signed-off-by: Alex Black <blacka101@gmail.com>
2019-07-20 23:17:03 +10:00
Alex Black d94bc7257c Various fixes (#65)
* #7977 deprecate legacy MultiLayerNetwork/ComputationGraph.params(boolean) method

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix bad test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix Histogram mapping

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix incorrect name handling in DifferentialFunction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Histogram fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Proper histogram fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ToString/NDArrayStrings fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* JSON UTF8 serialization fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-07-20 23:16:41 +10:00
raver119 c499dc962f - numpy import fix for CUDA (#64)
- skip tagLocation for empty arrays

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:16:19 +10:00
raver119 c9e867b2e8 File existence validation for Nd4j.createFromNpyFile()
Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:15:57 +10:00
raver119 fd6c0df024 [WIP] More CUDA fixes/updates (#62)
* CUDA reallocation update

Signed-off-by: raver119 <raver119@gmail.com>

* Legacy SoftMax/LogSoftMax/SoftMaxDerivative removed from cpp

Signed-off-by: raver119 <raver119@gmail.com>

* SoftMaxDerivative op removed

Signed-off-by: raver119 <raver119@gmail.com>

* few tests updates

Signed-off-by: raver119 <raver119@gmail.com>

* RNG fixes

Signed-off-by: raver119 <raver119@gmail.com>

* few more tests updates

Signed-off-by: raver119 <raver119@gmail.com>

* legacy Histogram/Pooling2D removed

Signed-off-by: raver119 <raver119@gmail.com>

* legacy Histogram removed

Signed-off-by: raver119 <raver119@gmail.com>

* histogram moved

Signed-off-by: raver119 <raver119@gmail.com>

* histogram moved cuda

Signed-off-by: raver119 <raver119@gmail.com>

* Histogram custom op

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:07:42 +10:00
raver119 9cf28ea6c9 [WIP] CUDA tweaks (#60)
* special cpu concat

Signed-off-by: raver119 <raver119@gmail.com>

* special concat fix

Signed-off-by: raver119 <raver119@gmail.com>

* OpProfiler tweak for absent host pointers

Signed-off-by: raver119 <raver119@gmail.com>

* minor test tweak to see orders

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA broadcasting diff orders fix

Signed-off-by: raver119 <raver119@gmail.com>

* faster iterations

Signed-off-by: raver119 <raver119@gmail.com>

* OldSoftMax/OldLogSoftMax gone

Signed-off-by: raver119 <raver119@gmail.com>

* RandomLauncher tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* additional check in random tests

Signed-off-by: raver119 <raver119@gmail.com>

* skip prepare/register action for empty arrays

Signed-off-by: raver119 <raver119@gmail.com>

* npz float16 fix

Signed-off-by: raver119 <raver119@gmail.com>

* empty reduction cuda fixes

Signed-off-by: raver119 <raver119@gmail.com>

* ShapeBufferTests tweaks

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:06:48 +10:00
raver119 6ce458e949 [WIP] CUDA Java side (#58)
* one crashing test

Signed-off-by: raver119 <raver119@gmail.com>

* stupid issue fixed

Signed-off-by: raver119 <raver119@gmail.com>

* one fix

Signed-off-by: raver119 <raver119@gmail.com>

* don't ensure location for empty arrays

Signed-off-by: raver119 <raver119@gmail.com>

* few more signatures fixed

Signed-off-by: raver119 <raver119@gmail.com>

* few tweaks for DataBuffer creation from java primitives

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of legacy im2col/col2im intercept

Signed-off-by: raver119 <raver119@gmail.com>

* rsubi scalar array fix

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:06:25 +10:00
raver119 c969b724bb [WIP] more CUDA stuff (#57)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* Added gradcheck test for dynamic_partition_bp op.

* - implementation of dilation op (cpu and cuda)

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed broadcast_dynamic_shape 1D case and tests.

* Fixed usage of default integer arguments.

* Fixed dynamic_partition_bp op and tests.

* Eliminated test with grad check for dynamic_partition_bp op.

* start working on cuda svd - porting available corresponding api from cuSOLVER library

Signed-off-by: Yurii <yurii@skymind.io>

* provide prelu_bp

Signed-off-by: Yurii <yurii@skymind.io>

* - provide gruCell_bp (old version ??)

Signed-off-by: Yurii <yurii@skymind.io>

* - polishing cumsum_bp and cumprod_bp tests

Signed-off-by: Yurii <yurii@skymind.io>

* provide sparseSoftmaxCrossEntropyWithLogits and sparseSoftmaxCrossEntropyWithLogits_grad

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed atomicMul with float input/output

* implementation of cuda kernel for triu_bp operation

Signed-off-by: Yurii <yurii@skymind.io>

* Refactored lup helper to add parallel computing.

* cusolver libraries

Signed-off-by: raver119 <raver119@gmail.com>

* uncomment cuSolver APIs in svd.cu

Signed-off-by: Yurii <yurii@skymind.io>

* cusolver var

Signed-off-by: raver119 <raver119@gmail.com>

* - further work on cuSolver svd

Signed-off-by: Yurii <yurii@skymind.io>

* Implement usage of cuda solver for LUP decomposition.

* - correct names in lup functions

Signed-off-by: Yurii <yurii@skymind.io>

* correct svdQR cuda

Signed-off-by: Yurii <yurii@skymind.io>

* - provide transpositions of input matrices in case of c order in svdCudaQR

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed implementation issues with LUP using cuda solver.

* Implementation of matrix_determinant helper with cuda kernels. Working revision.

* Implemented log_matrix_determinant helper with cuda kernels.

* - implementation of batched cuda svd

Signed-off-by: Yurii <yurii@skymind.io>

* Refactored cholesky helper and implementation of cuda solver cholesky batch.

* - implementation of cuda kernel for tile bp

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of cholesky and logdet with cuda kernels.

* - implementation of cuda kernel for sru_bidirectional

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed cholesky helper.

* Cholesky op helper implementation. Working double-based cublas implementation.

* bad import excluded

Signed-off-by: raver119 <raver119@gmail.com>

* Finished with cuda implementation of cholesky helper and tests.

* - implementation of cuda kernel for sru_bidirectional_backprop operation

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of matrix_inverse op helper with cuda kernels. The first revision.

* - start working on gruCell_bp

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of matrix_inverse helper.

* - further work on new gruCell_bp

Signed-off-by: Yurii <yurii@skymind.io>

* cuBLAS related fixes

Signed-off-by: raver119 <raver119@gmail.com>

* calculateOutputShapes() now passes device buffers as well

Signed-off-by: raver119 <raver119@gmail.com>

* special concat/average/accumulate init host pointers now

Signed-off-by: raver119 <raver119@gmail.com>

* few more tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* additional CudaDataBufferFactory signatures for certain data types

Signed-off-by: raver119 <raver119@gmail.com>

* cuSolver host buffer

Signed-off-by: raver119 <raver119@gmail.com>

* buffer to buffer memcpy host ptr allocation

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:05:21 +10:00
Alex Black cb6654bebb Add libnd4j benchmarks (#3)
This PR adds 2 libnd4j benchmarking suites
2019-07-20 22:54:44 +10:00
Ryan Nett 62c6a73f9d Include TF Import tests as forward pass tests in OpValidation (#53)
* ignore multinomial

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix for @NonNull varargs

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:24:11 +10:00
Ryan Nett daf3950d8d SameDiff If, While, and Misc changes (#52)
* softmax and logSoftmax w/ dimension

Signed-off-by: Ryan Nett <rnett@skymind.io>

* start of while

Signed-off-by: Ryan Nett <rnett@skymind.io>

* if, start of javadocs

Signed-off-by: Ryan Nett <rnett@skymind.io>

* while forward pass working, backprop WIP

Signed-off-by: Ryan Nett <rnett@skymind.io>

* no backprop

Signed-off-by: Ryan Nett <rnett@skymind.io>

* Tensorflow style if/while (& tests), name scope fixes (and test), argument interceptor (for if/while), use '_' in op names instead of ':'

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadoc

Signed-off-by: Ryan Nett <rnett@skymind.io>

* many fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* many fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* Some fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* cleanup if condition doesn't return boolean

Signed-off-by: Ryan Nett <rnett@skymind.io>

* serialization fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use constants instead of magic numbers

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:23:40 +10:00
Ryan Nett 2d991f5445 LeakyReLU fix (#55)
* LeakyReLU: Use serScalar to set alpha correctly in TF import
LogX: remove incorrect TF mapping
Pow: remove TF import method (no mapping)
BaseOp: remove duplicate extraArgs

Signed-off-by: Ryan Nett <rnett@skymind.io>

* un-ignore cifar-10 gan, as it is now passing

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:23:19 +10:00
Ryan Nett 027d4d2a47 Add XLNet to zoo model ignores (#54)
* ignore xlnet

Signed-off-by: Ryan Nett <rnett@skymind.io>

* comment

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:22:57 +10:00
Ryan Nett d7c261ec40 added ability to run as test, and comments (#44)
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:22:34 +10:00
raver119 5708fc087a [WIP] INDArray hashCode() impl (#50)
* initial commit

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* one more initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* parallel hashCode prototype

Signed-off-by: raver119 <raver119@gmail.com>

* longBytes for hashCode

Signed-off-by: raver119 <raver119@gmail.com>

* INDArray hashCode java side

Signed-off-by: raver119 <raver119@gmail.com>

* few tests fixed for MSVC

Signed-off-by: raver119 <raver119@gmail.com>

* Small gradcheck validation util fix - hash names not SDVariables

Signed-off-by: Alex Black <blacka101@gmail.com>

* Small fix + ignore for logged issue

Signed-off-by: Alex Black <blacka101@gmail.com>

* - scrollable iterator fix
- sptree hashset replaced with collection

Signed-off-by: raver119 <raver119@gmail.com>

* hashcode exception removed

* int hashCode for java
2019-07-20 22:22:11 +10:00