Commit Graph

260 Commits (3c6014271eb392f77a6a7db5d236b76a1007c683)

Author SHA1 Message Date
Alex Black 18c01f5bdc
Add SameDiff memory reuse memory manager (array cache) (#39)
* Attention op comments

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ArrayCacheMemoryMgr - first pass

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tweak array cache for use with SameDiff identity arrays

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ArrayCacheMemoryMgr javadoc and properly get max memory

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* LRU cache policy + add tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Resize arrays internally if required for ArrayCacheMemoryMgr

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Test improvement

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small polish

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-12 21:15:44 +11:00
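The array cache described in this entry keeps released arrays keyed by type and size and hands them back on later allocation requests, evicting least-recently-used entries once a memory budget is exceeded. A rough, JDK-only sketch of that caching idea (illustrative only — not the actual ArrayCacheMemoryMgr code; the float-buffer key, budget, and eviction policy here are assumptions):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative LRU buffer cache: released float buffers are kept per length
// and reused for later allocations, up to a fixed byte budget.
public class ArrayCacheSketch {
    private final long maxBytes;
    private long cachedBytes = 0;
    // Access-ordered map gives least-recently-used iteration order over length buckets.
    private final LinkedHashMap<Integer, Deque<float[]>> cache =
            new LinkedHashMap<>(16, 0.75f, true);

    public ArrayCacheSketch(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    /** Return a cached buffer of the requested length if available, else allocate a new one. */
    public float[] allocate(int length) {
        Deque<float[]> bucket = cache.get(length);
        if (bucket != null && !bucket.isEmpty()) {
            cachedBytes -= 4L * length;
            return bucket.pop();            // reuse instead of allocating
        }
        return new float[length];
    }

    /** Hand a no-longer-needed buffer back to the cache, evicting LRU buckets if over budget. */
    public void release(float[] array) {
        cache.computeIfAbsent(array.length, k -> new ArrayDeque<>()).push(array);
        cachedBytes += 4L * array.length;
        evictIfNeeded();
    }

    private void evictIfNeeded() {
        Iterator<Map.Entry<Integer, Deque<float[]>>> it = cache.entrySet().iterator();
        while (cachedBytes > maxBytes && it.hasNext()) {
            Map.Entry<Integer, Deque<float[]>> eldest = it.next();
            for (float[] a : eldest.getValue()) {
                cachedBytes -= 4L * a.length;
            }
            it.remove();                    // drop the least recently used bucket
        }
    }

    public static void main(String[] args) {
        ArrayCacheSketch cache = new ArrayCacheSketch(1024 * 1024);
        float[] a = cache.allocate(256);
        cache.release(a);
        float[] b = cache.allocate(256);    // the same buffer comes back from the cache
        System.out.println("reused: " + (a == b));
    }
}
```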
AlexDBlack 0107fb10ab Merge remote-tracking branch 'konduit/master' 2019-11-08 18:11:45 +11:00
Alex Black 2f84ea666d
Uniform distribution op tweaks + 'specified output dtype' constructor (#38)
* Uniform distribution op tweaks + 'specified output dtype' constructor

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Validation tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-08 18:08:25 +11:00
Alex Black 24980efde3
Fix LogSumExp along dimension (#35)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-07 23:36:15 +11:00
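For reference, log-sum-exp along a dimension is usually computed in the numerically stable form logsumexp(x) = max(x) + log(sum(exp(x - max(x)))). A small, self-contained Java illustration of reducing along dimension 1 of a 2-D array (plain arrays here, not the ND4J op):

```java
// Numerically stable log-sum-exp reduction along dimension 1 (within each row) of a 2-D array.
public class LogSumExpDemo {
    static double[] logSumExpDim1(double[][] x) {
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double max = Double.NEGATIVE_INFINITY;
            for (double v : x[i]) max = Math.max(max, v);
            double sum = 0.0;
            for (double v : x[i]) sum += Math.exp(v - max);   // subtract max to avoid overflow
            out[i] = max + Math.log(sum);
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] x = {{1, 2, 3}, {1000, 1000, 1000}};       // naive exp() would overflow the second row
        double[] r = logSumExpDim1(x);
        System.out.printf("%.4f %.4f%n", r[0], r[1]);         // ~3.4076 and ~1001.0986
    }
}
```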
longzhendong 52c9918c6f Testing slice and concat (#8362) 2019-11-07 14:47:37 +11:00
Alex Black 948ebef41c
Op Fixes (#28)
* #8280 biasadd_bp nchw arg fixes (java side) + test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8285 Concat op Java side fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Concat op cpp fix - allow dynamic axis to be negative, same as static axis

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ignores for deconv3d import tests until deconv3d_tf op is implemented

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-05 00:05:04 +11:00
AlexDBlack 2844f8b69a Merge remote-tracking branch 'konduit/master' 2019-11-02 19:00:47 +11:00
Alex Black 9efd811508
Use DL4J workspaces for SameDiff layers in MLN/CG (#23)
* #8329 DL4J workspace integration for SameDiff layers

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix bug for Nd4j.createUninitializedDetached for scalars (length 0 shape array)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff output layer, graph vertex, various fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-02 17:42:01 +11:00
Alex Black d82877b18b
Various SameDiff fixes (#21)
* MKLDNN LSTM forward implementation (disabled pending #8331)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8318 add SameDiff.calculateGradientsAndOutputs

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Disable mkldnn backprop for now - pending fix, issue #8335

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8337 Fix CudaExecutioner unnecessary result array allocation/replacement

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small FlatBuffers serde fix, UInt8

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8135 ImagePreProcessingScaler - add segmentation support

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8319 Ensure listeners are called when they are supposed to be called

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8214 UNet (non-pretrained) last conv layer kernel size fix


Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-02 11:25:53 +11:00
Alexander Stoyakin 45a40c8a89
DL4J/ND4J: Do pass on integer casts (#15)
* Int cast fixes.

* Revert "Int cast fixes."

This reverts commit aa36e8ca

* Int casts

* Int cast

* Int casts

* Get rid of int casts. Dropping deprecated aggregate ops.

* java scatterUpdate changes

Signed-off-by: raver119 <raver119@gmail.com>

* c++ scatterUpdate changes

Signed-off-by: raver119 <raver119@gmail.com>

* Remove aggregated ops.

* Restored test

* Tests restored.

* Minor fixes
2019-10-31 11:23:09 +02:00
raver119 5a4d2e8b31
[WIP] SVD (#16)
* - new SVD constructor
- OrthogonalDistribution now uses SVD custom op

Signed-off-by: raver119 <raver119@gmail.com>

* shapes fixed

Signed-off-by: raver119 <raver119@gmail.com>
2019-10-28 12:31:01 +03:00
Alex Black d333d29099
SameDiff cleanup and fixes (#12)
* #8160 Remove resolvePropertiesFromSameDiffBeforeExecution

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff API cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More SameDiff cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8248 Switch SameDiff variable init from lazy to creation time for more predictable behaviour

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8252 TanhDerivative javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8225 Deconvolution2D input validation

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8265 Switch SameDiff.outputs() to user settable, instead of unreliable 'best guess'

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8224 SameDiff.zero and .one create constants, not variables

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More cleanup and fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small test fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J SameDiff fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Re-add hack for Deconvolution2DLayer until #8315 is resolved

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8270 Move CUDA device/version logging to Java; can be disabled via existing org.nd4j.log.initialization system property

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* All ND4J init logging checks system property

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove redundant device logging

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* One more fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* UX improvements

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Deconv fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add deconv tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove debug code

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-10-26 12:38:08 +11:00
Alex Black 3f0b4a2d4c
SameDiff execution, TF and memory management overhaul (#10)
* SameDiff execution memory management improvements, round 1

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Round 2

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Round 3

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Clear node outputs closed array references; Slight change to OpValidation internals to not rely on cached op outputs

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Next step

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More polish

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add WeakIdentityHashmap

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Session fixes for control ops and next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* First steps for training session + in-line updating

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix losses and history during training

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* BiasAdd and other fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Don't use SDVariable.getArr() in TFGraphTestAllHelper (import tests)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* First steps for new dependency tracking approach

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Start integrating dependency tracking for memory management

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Non-control op dependency tracking works/passes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Switch/merge

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup and next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix issue dependency tracking for initial variables/constants

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add check for aliases when determining if safe to close array

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* First pass on new TF graph import class

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Import fixes, op fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup and fixes for new TF import mapper

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Partial implementation of new dependency tracker

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* AbstractDependencyTracker for shared code

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Overhaul SameDiff graph execution (dependency tracking)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More fixes, cleanup, next steps

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add no-op memory manager, cleanup, fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix switch dependency tracking

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* INDArray.toString: no exception on closed arrays, just note closed

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix enter and exit dependency tracking

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* TensorArray memory management fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add unique ID for INDArray instances

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix memory management for NextIteration outputs in multi-iteration loops

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove (now unnecessary) special case handling for nested enters

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Handle control dependencies during execution; javadoc for memory managers

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup, polish, code comments, javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup and more javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add memory validation for all TF import tests - ensure all arrays (except outputs) are released

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Clean up arrays waiting on unexecuted ops at the end of execution

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fixes for enter op memory management in the context of multiple non-nested loops/frames

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix order of operation issues for dependency tracker

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Always clear op fields after execution to avoid leaks or unintended array reuse

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Re-implement dtype conversion

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for control dependencies execution (dependency tracking)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix TF import overrides and filtering

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for constant enter array dependency tracking

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J Fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More DL4J fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup and polish

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More polish and javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More logging level tweaks, small DL4J fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix to DL4J SameDiffLayer

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix empty array deserialization, add extra deserialization checks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* FlatBuffers control dep serialization fixes; test serialization as part of all TF import tests

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Variable control dependencies serialization fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix issue with removing inputs for ops

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* FlatBuffers NDArray deserialization fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* FlatBuffers NDArray deserialization fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Final cleanup/polish

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-10-23 21:19:50 +11:00
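The memory-management overhaul above releases an op's output arrays once everything that depends on them has executed. A minimal, JDK-only reference-counting sketch of that dependency-tracking idea (class and method names here are illustrative, not the actual AbstractDependencyTracker API):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy dependency tracker: each array keeps the set of ops that still need it;
// when the last dependent op finishes, the array can be released (closed).
public class DepTrackerSketch {
    private final Map<String, Set<String>> pendingUses = new HashMap<>();

    /** Record that the given op will read this array at some point. */
    public void addDependency(String array, String op) {
        pendingUses.computeIfAbsent(array, k -> new HashSet<>()).add(op);
    }

    /** Mark an op as executed; returns the arrays that no longer have any pending uses. */
    public Set<String> markExecuted(String op) {
        Set<String> releasable = new HashSet<>();
        pendingUses.entrySet().removeIf(e -> {
            e.getValue().remove(op);
            if (e.getValue().isEmpty()) {
                releasable.add(e.getKey());   // safe to close this array now
                return true;
            }
            return false;
        });
        return releasable;
    }

    public static void main(String[] args) {
        DepTrackerSketch t = new DepTrackerSketch();
        t.addDependency("x", "add");
        t.addDependency("x", "mul");
        System.out.println(t.markExecuted("add"));   // [] - "mul" still needs x
        System.out.println(t.markExecuted("mul"));   // [x] - x can be released
    }
}
```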
Alexander Stoyakin f31661e13b
Merge pull request #7 from KonduitAI/asto_nd4s_10172019
KDTree optimization
2019-10-23 12:11:25 +03:00
Alexander Stoyakin d5002b14c7 New ops wrappers 2019-10-16 12:59:08 +03:00
Alex Black b9a4b0f25f
Update links (#8292)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-10-16 12:59:52 +11:00
Robert Altena 50b13fadc8 nd4j-api cleanup. (#8273)
* nd4j-api cleanup.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* restore deleted schemes.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-10-08 21:48:22 +11:00
Alex Black f98f8be7b6
SameDiff ops (#8247)
* update javadocs and a few method signatures

Signed-off-by: Ryan Nett <rnett@skymind.io>

* add PRelu op

Signed-off-by: Ryan Nett <rnett@skymind.io>

* test and fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* add PRelu op

Signed-off-by: Ryan Nett <rnett@skymind.io>

* test and fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* slightly better test

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-19 11:52:20 +10:00
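PReLU, added in the entry above, follows the usual definition prelu(x) = x for x > 0 and alpha * x otherwise, with a learnable alpha. A tiny plain-Java illustration of the forward pass (not the SameDiff op itself):

```java
// Element-wise PReLU forward pass: positive inputs pass through, negative inputs are scaled by alpha.
public class PReluDemo {
    static double[] prelu(double[] x, double alpha) {
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            out[i] = x[i] > 0 ? x[i] : alpha * x[i];
        }
        return out;
    }

    public static void main(String[] args) {
        double[] r = prelu(new double[]{-2.0, -0.5, 0.0, 3.0}, 0.25);
        // prints: -0.5 -0.125 0.0 3.0
        for (double v : r) System.out.print(v + " ");
        System.out.println();
    }
}
```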
Robert Altena 83d958d536 Sparse matrix refactoring. (#8238)
* remove sparse method from INDArray.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove gemm

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove usage of Nd4j.sparseFactory

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* Nd4j.sparseFactory removed.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* sparseNDArray deleted.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove more sparse calls and constants.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove SparseBlasWrapper.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* delete BaseSparseBlaswrapper.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove 3 sparse factory classes.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* delete SparseCPULevel.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* deletes JcusparseLevel, CUDASparselevel.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* delete nativeCPU sparse classes.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* removes sparse methods from NDArrayFactory.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* more deletes.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* delete (ignored) tests. BaseSparseNDArray.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* deletes ISparseNDArray.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove sparse methods from indArray.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* deletes sparse classes.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-17 22:56:29 +03:00
raver119 979ef13c0b fix build issues
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-13 11:55:13 +03:00
raver119 2bd69c004c
[WIP] Fixed signatures. SameDiff tests (#258) (#8233)
* Fixed signatures. SameDiff tests

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Tests fixed

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Test fixed

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Small fix

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Fixed test

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2019-09-12 19:25:03 +03:00
Alex Black 3e73e9b56e
Fixes, cleanup, enable now fixed tests, etc (#254)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-11 23:37:24 +10:00
Ryan Nett 8a05ec2a97
Fix a couple SameDiff training issues (#253)
* fix execBackwards training issue

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix validation not specifying outputs

Signed-off-by: Ryan Nett <rnett@skymind.io>

* another fix for validation listeners and history

Signed-off-by: Ryan Nett <rnett@skymind.io>

* tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* add single batch dataset output methods

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-10 20:38:23 -07:00
Alex Black b582e69e3b
Small ND4J/SameDiff fixes (#248)
* #8218 Fix Nd4j.hstack rank 1 case

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8209 SameDiff: don't allow empty arrays (with 0s in shape) for variables

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-09 22:54:07 +10:00
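The hstack fix above (#8218) concerns concatenating rank-1 arrays. A short usage sketch, assuming the standard Nd4j factory methods Nd4j.createFromArray and Nd4j.hstack; the shape noted in the comment is the behaviour this sketch assumes for the fixed rank-1 case:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// Sketch of the rank-1 hstack case touched by #8218: concatenating two 1-D arrays
// along their only axis.
public class HstackRank1Demo {
    public static void main(String[] args) {
        INDArray a = Nd4j.createFromArray(1.0f, 2.0f, 3.0f);
        INDArray b = Nd4j.createFromArray(4.0f, 5.0f);

        INDArray stacked = Nd4j.hstack(a, b);
        System.out.println(java.util.Arrays.toString(stacked.shape())); // assumed: [5]
        System.out.println(stacked);
    }
}
```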
Alex Black 87d873929f
Small LapackTest fix (#240)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-05 14:25:20 +10:00
Ryan Nett 79867f5c5a cleanup SDRNN and rnn ops (#238)
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-05 12:25:03 +10:00
Robert Altena f25e3e71e5 remove lengthLong (#236)
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-05 11:19:38 +10:00
Ryan Nett e9454b8882
SDCNN cleanup pass (#230)
* SDCNN cleanup

Signed-off-by: Ryan Nett <rnett@skymind.io>

* NonNull annotations

Signed-off-by: Ryan Nett <rnett@skymind.io>

* better javadoc, NonNull fix for sconv

Signed-off-by: Ryan Nett <rnett@skymind.io>

* update builders to fix names

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* even more fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix for null bias

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-04 00:44:01 -07:00
Alex Black 82c9dc5743
ELU fix (#219)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-02 18:37:05 +10:00
raver119 b71c993ded
[WIP] maxpool_bp cuda fix (#212)
* one test for alex

Signed-off-by: raver119 <raver119@gmail.com>

* fix

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of safety offset in cpp

Signed-off-by: raver119 <raver119@gmail.com>

* bfloat16

Signed-off-by: raver119 <raver119@gmail.com>

* minor test rearrangement to fastpath launch

Signed-off-by: raver119 <raver119@gmail.com>

* - atomicAdd/Mul/Div fix for float16/bfloat16 misalignment
- one special test for maxpoolbp java
- safety offset of 8 bytes is back to libnd4j legacy

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-31 20:57:05 +03:00
Robert Altena 54e320a255 javadoc (#201)
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-30 22:40:27 +10:00
Alex Black dcc2baa676
Version upgrades (#199)
* DataVec fixes for Jackson version upgrade

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J jackson updates + databind version 2.9.9.3

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Shade snakeyaml along with jackson

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Version fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Switch DataVec legacy JSON format handling to mixins

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Next set of fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup for legacy JSON mapping

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Upgrade commons compress to 1.18; small test fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* New Jackson backward compatibility for DL4J - Round 1

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* New Jackson backward compatibility for DL4J - Round 2

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More fixes, all but legacy custom passing

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Provide an upgrade path for custom layers for models in pre-1.0.0-beta JSON format

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Legacy deserialization cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small amount of polish - legacy JSON

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Upgrade guava version

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* IEvaluation legacy format deserialization fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Upgrade play version to 2.7.3

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Update nd4j-parameter-server-status to new Play API

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Update DL4J UI for new play version

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More play framework updates

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove Spark 1/2 adapter code from DataVec

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* datavec-spark dependency cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J spark updates, pt 1

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J spark updates, pt 2

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J spark updates, pt 3

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J spark updates, pt 4

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Test fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Another fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Breeze upgrade, dependency cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add Scala 2.12 version to pom.xml

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* change-scala-versions.sh - add scala 2.12, remove 2.10

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Move Spark version properties to parent pom (now that only one spark version is supported)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DataVec Play fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* datavec play dependency fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Clean up old spark/jackson stuff

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Cleanup jackson unused dependencies

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Dropping redundant dependency

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Removed scalaxy dependency

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Add shaded guava

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Ensure not possible to import pre-shaded classes, and remove direct guava dependencies in favor of shaded

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ND4J Shaded guava import fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DataVec and DL4J guava shading

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Arbiter, RL4J fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Build fixed

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Fix dependency

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Fix bad merge

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Jackson shading fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Set play secret, datavec-spark-inference-server

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for datavec-spark-inference-server

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Arbiter fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Arbiter fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small test fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-30 14:35:27 +10:00
Serhii Shepel 0463ee4eba Fix backend dependencies for tests (#189) 2019-08-29 12:54:48 +09:00
raver119 f4860574d7
[WIP] More fixes (#190)
* Refactored kernels for segment_max/min/sum ops.

* Refactored segment_prod kernels.

* Refactored segment_prod kernels.

* DynamicPartition test

Signed-off-by: raver119 <raver119@gmail.com>

* Added linear test for dynamic_partition op.

* Refactored test with int datatype.

* some logging

Signed-off-by: raver119 <raver119@gmail.com>

* some logging

Signed-off-by: raver119 <raver119@gmail.com>

* some logging

Signed-off-by: raver119 <raver119@gmail.com>

* dynamicPartition fix

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of some logging

Signed-off-by: raver119 <raver119@gmail.com>

* one more test for dynamic_stitch

Signed-off-by: raver119 <raver119@gmail.com>

* one more test for dynamic_stitch

Signed-off-by: raver119 <raver119@gmail.com>

* empty check for stitch

Signed-off-by: raver119 <raver119@gmail.com>

* minor print changes

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-28 15:38:57 +03:00
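dynamic_partition, exercised by the new tests above, splits a flat input into N output lists according to a partition-index array. A plain-Java illustration of those semantics (not the libnd4j kernel):

```java
import java.util.ArrayList;
import java.util.List;

// dynamic_partition semantics: element data[i] goes to output list partitions[i].
public class DynamicPartitionDemo {
    static List<List<Integer>> dynamicPartition(int[] data, int[] partitions, int numPartitions) {
        List<List<Integer>> out = new ArrayList<>();
        for (int p = 0; p < numPartitions; p++) out.add(new ArrayList<>());
        for (int i = 0; i < data.length; i++) {
            out.get(partitions[i]).add(data[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        int[] data = {10, 20, 30, 40, 50};
        int[] parts = {0, 1, 0, 1, 0};
        // prints: [[10, 30, 50], [20, 40]]
        System.out.println(dynamicPartition(data, parts, 2));
    }
}
```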
Ryan Nett 2a1431264f
Remove calculate output shape from java side (#151)
* remove some unneeded java-side output shape calculations

Signed-off-by: Ryan Nett <rnett@skymind.io>

* delete Broadcast

Signed-off-by: Ryan Nett <rnett@skymind.io>

* delete Linear and Module,

Signed-off-by: Ryan Nett <rnett@skymind.io>

* update Identity, HashCode, and NoOp

Signed-off-by: Ryan Nett <rnett@skymind.io>

* removed Cast java-side shape function, added tests and SDVariable.isEmpty

Signed-off-by: Ryan Nett <rnett@skymind.io>

* ignoring test w/ issues on master

Signed-off-by: Ryan Nett <rnett@skymind.io>

* noop needs more work, fixed BaseArithmeticBackprop and BaseDynamicTransform ops

merge in master for c++ build fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix EqualTo

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix other cond ops

Signed-off-by: Ryan Nett <rnett@skymind.io>

* "fake" ops calculateOutputShape() throws exception

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use c++ shape calc for Linspace

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix exception message, move most to BaseCompatOp

Signed-off-by: Ryan Nett <rnett@skymind.io>

* remove SDVariable.isEmpty

Signed-off-by: Ryan Nett <rnett@skymind.io>

* remove commented out code

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-08-27 20:39:32 -07:00
Alex Black b46f9827b8
Layer norm test updates (#187)
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-08-28 13:27:00 +10:00
Robert Altena 59a6e4e3ae
INDArray refactoring (#170)
* javadoc

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove javaTensorAlongDimension

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* javadoc
2019-08-28 12:03:23 +09:00
raver119 b472d7d8c8
[WIP] few more fixes (#182)
* one noop test

Signed-off-by: raver119 <raver119@gmail.com>

* skip input validation for no-input ops

Signed-off-by: raver119 <raver119@gmail.com>

* - one more noop empty test
- one more validation before sync

Signed-off-by: raver119 <raver119@gmail.com>

* typo

Signed-off-by: raver119 <raver119@gmail.com>

* one more validation fix

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA empty reductions java side

Signed-off-by: raver119 <raver119@gmail.com>

* one svd test

Signed-off-by: raver119 <raver119@gmail.com>

* Corrected segment_mean helpers and added another test.

* Refactored segment_mean kernels to avoid race_condition.
2019-08-27 21:00:38 +03:00
Alex Black dff599aa8f
Test fix (#179)
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-08-27 20:43:36 +10:00
Alex Black dce4751fc1
Layer norm 4d case fixes (#174)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-27 18:34:53 +10:00
raver119 df84bc7255
[WIP] More tweaks (#173)
* CUDA empty reduction

Signed-off-by: raver119 <raver119@gmail.com>

* - listdiff synchronization fix for CUDA
- listdiff test

Signed-off-by: raver119 <raver119@gmail.com>

* - IndexReduce ops now allow INDEXING_TYPES output
- topK op accepts only INDEXING_TYPES as output

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 10:37:10 +03:00
Alex Black e92f7218f3
Add new tests (#171)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-27 12:15:56 +10:00
raver119 25e5c23eae
[WIP] Error handling (#169)
* CUDA reverse rewrite + couple of tests

Signed-off-by: raver119 <raver119@gmail.com>

* don't throw exception on invalid pointer

Signed-off-by: raver119 <raver119@gmail.com>

* data types validation for fastpath exec mode + 2 tests

Signed-off-by: raver119 <raver119@gmail.com>

* data types validation for fastpath exec mode + 2 tests

Signed-off-by: raver119 <raver119@gmail.com>

* ismax allowed dtypes tweak

Signed-off-by: raver119 <raver119@gmail.com>

* lastErrorCode + lastErrorMessage for native exceptions handling

Signed-off-by: raver119 <raver119@gmail.com>

* exportable ErrorReference

Signed-off-by: raver119 <raver119@gmail.com>

* check error codes in java

Signed-off-by: raver119 <raver119@gmail.com>

* - consume lastErrorCode
- fast_in dtype validation fix

Signed-off-by: raver119 <raver119@gmail.com>

* - sg/cb allowed output type change
- minor logging fix for data type validation

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-26 19:57:51 +03:00
raver119 bb5fc36e5e
[WIP] ops fixes (#168)
* - correct layer_norm

Signed-off-by: Yurii <yurii@skymind.io>

* - further fix of layer norm

Signed-off-by: Yurii <yurii@skymind.io>

* - correct scatter_upd op

Signed-off-by: Yurii <yurii@skymind.io>

* - correct cuda kernel for histogram_fixed_width op

Signed-off-by: Yurii <yurii@skymind.io>

* - delete comments

Signed-off-by: Yurii <yurii@skymind.io>

* enabled one ignored test

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-26 19:37:05 +03:00
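Layer norm, corrected in the entry above, normalizes each input row to zero mean and unit variance and then applies a per-feature gain (and optionally a bias). A compact plain-Java version of that formula for one row (the eps value is illustrative):

```java
// Layer normalization of a single row: y = gain * (x - mean) / sqrt(var + eps) + bias.
public class LayerNormDemo {
    static double[] layerNorm(double[] x, double[] gain, double[] bias, double eps) {
        double mean = 0;
        for (double v : x) mean += v;
        mean /= x.length;

        double var = 0;
        for (double v : x) var += (v - mean) * (v - mean);
        var /= x.length;

        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            out[i] = gain[i] * (x[i] - mean) / Math.sqrt(var + eps) + bias[i];
        }
        return out;
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4};
        double[] g = {1, 1, 1, 1};
        double[] b = {0, 0, 0, 0};
        double[] y = layerNorm(x, g, b, 1e-5);
        for (double v : y) System.out.printf("%.3f ", v);   // roughly -1.342 -0.447 0.447 1.342
        System.out.println();
    }
}
```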
Alex Black b417ca21bf
Fix for concat op shape function (empty shapes) (#167)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-26 23:10:28 +10:00
Alex Black d607bec6f9
Small test fixes (#165)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-26 11:45:33 +10:00
raver119 f8364997c0
[WIP] maxpool2d_bp fix (#160)
* one test for maxpool2d_bp

Signed-off-by: raver119 <raver119@gmail.com>

* - maxpool2d_bp cuda fix for NaNs
- streamSync after each custom op execution

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-24 09:20:57 +03:00
raver119 f03b0ee78f
[WIP] more fixes (#159)
* Added test for MatrixInverse with double input. Fixed matrixDeterminantKernel.

* Fixed kernels to avoid wasteful templating.

* Fixed logDeterminant kernel.

* Refactored type check for lup.

* - decrease blockDim value for zeta op

Signed-off-by: Yurii <yurii@skymind.io>

* Added print for compound matrix with CUDA.

* Refactored upper matrix inversion kernels.

* - provide move constructor and move assignment operator for OpArgsHolder class

Signed-off-by: Yurii <yurii@skymind.io>

* Refactored usage of launch context.

* - add test for mergemax

Signed-off-by: Yurii <yurii@skymind.io>

* get rid of AveragingArrayProxy

Signed-off-by: raver119 <raver119@gmail.com>

* Refactoring of LUP inversion.

* Added prints for inversion.

* - add OpArgsHolder copy constructor and assignment operator

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for lower inversion

* - fix bug in upsampling2d/3d_bp op

Signed-off-by: Yurii <yurii@skymind.io>

* Added expensive printfs to kernel.

* Refactored expensive kernel prints.

* Refactored expensive printfs

* - remove nullify

Signed-off-by: Yurii <yurii@skymind.io>

* Eliminated wasteful prints with tests.

* upsampling2d_bp test

Signed-off-by: raver119 <raver119@gmail.com>

* test updated

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 19:20:50 +03:00
raver119 99cdf6d42b - cpu isMax fix for multidim case + test
- INDArray.wasClosed() fix for empty array edge case

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 18:44:37 +03:00
raver119 729dc5e879
[WIP] size etc (#155)
* one test for size

Signed-off-by: raver119 <raver119@gmail.com>

* - few tests for size op
- size/rank/size_at ops now use p instead of assign

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 12:31:12 +03:00
raver119 243bf866c4
[WIP] Few fixes (#153)
* throw exception if op execution failed

Signed-off-by: raver119 <raver119@gmail.com>

* expected for test

Signed-off-by: raver119 <raver119@gmail.com>

* one more ismax test

Signed-off-by: raver119 <raver119@gmail.com>

* ismax view fix

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-23 09:00:10 +03:00
Alex Black 80d35377d4
SameDiff cleanup and fixes (#150)
* Cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SDVariable no longer extends DifferentialFunction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8123 Remove cloning library to avoid 'illegal reflective access' warnings

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8095 Make Pooling3D abstract, fix flatbuffers serialization issue

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8117 WordVectorSerializer deprecated method javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Final fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* One more

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-23 15:09:53 +10:00
raver119 930b49e87f
[WIP] DeviceLocalNDArray updates (#149)
* ContextBuffers are released upon device change

Signed-off-by: raver119 <raver119@gmail.com>

* DeviceLocalNDArray updates + tests

Signed-off-by: raver119 <raver119@gmail.com>

* special array for delayed mode

Signed-off-by: raver119 <raver119@gmail.com>

* additional detach()

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-22 20:01:29 +03:00
Alex Black 9c2bfc9863
Various fixes (DL4J, ND4J) (#147)
* Import fixes, IsMax dtype calc, small test fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SubsamplingLayer fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J - SpaceToBatch layer updates

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-22 16:16:03 +10:00
Robert Altena ca7e5593ec
ND4J: Remove Nd4j.trueScalar/trueVector (#145)
* merge conflict.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove/replace trueVector

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-22 10:49:30 +09:00
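With Nd4j.trueScalar/trueVector removed, equivalent arrays come from the regular factory methods. A short sketch of the kind of calls that take their place, assuming the standard Nd4j.scalar and Nd4j.createFromArray methods:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// Creating a scalar and a 1-D vector with the regular factory methods.
public class ScalarVectorDemo {
    public static void main(String[] args) {
        INDArray scalar = Nd4j.scalar(3.0);                    // rank-0 array holding 3.0
        INDArray vector = Nd4j.createFromArray(1.0, 2.0, 3.0); // rank-1 array of length 3

        System.out.println(java.util.Arrays.toString(scalar.shape())); // []
        System.out.println(java.util.Arrays.toString(vector.shape())); // [3]
    }
}
```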
Ryan Nett 2b0d7b3b52
[WIP] Various fixes, mostly SameDiff/Nd4j (#110)
* Nd4j pad update

Signed-off-by: Ryan Nett <rnett@skymind.io>

* switched from guava Immutables to Collections.unmodifiableList/Map

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadoc

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use new pad

Signed-off-by: Ryan Nett <rnett@skymind.io>

* conv tests use OpValidation

Signed-off-by: Ryan Nett <rnett@skymind.io>

* deconv3d overrides

Signed-off-by: Ryan Nett <rnett@skymind.io>

* test fix for the new pad method

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more test fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more test fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* rename SameDiff function methods to op (except for the actual SameDiff function ones)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more pad overloads, test fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* test updates

Signed-off-by: Ryan Nett <rnett@skymind.io>

* conv1d test

Signed-off-by: Ryan Nett <rnett@skymind.io>

* remove Conv1D tf import (there isn't a TF conv1d op)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* remove numThreads from Nd4j

Signed-off-by: Ryan Nett <rnett@skymind.io>

* replace Old ops with their newer versions, deprecate ones that haven't already been deprecated

Signed-off-by: Ryan Nett <rnett@skymind.io>

* remove use of setNumThreads

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix for Reverse and ATan2

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix test for wrong equals type

Signed-off-by: Ryan Nett <rnett@skymind.io>

* well it works now

Signed-off-by: Ryan Nett <rnett@skymind.io>

* better javadocs

Signed-off-by: Ryan Nett <rnett@skymind.io>

* NonNulls

Signed-off-by: Ryan Nett <rnett@skymind.io>

* better array literal

Signed-off-by: Ryan Nett <rnett@skymind.io>

* re-add tf import stuff (will remove later)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* conv1d config load fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* partial config usage changes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* remove Old op classes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* config property fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* removed one too many ops

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-08-21 16:40:32 -07:00
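One item in the entry above swaps guava's immutable collections for the JDK equivalents. A minimal, JDK-only illustration of that substitution:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Replacing ImmutableList.copyOf(values) with the JDK's unmodifiable wrapper:
// callers still get a read-only view, without the guava dependency.
public class UnmodifiableDemo {
    public static void main(String[] args) {
        List<String> source = Arrays.asList("conv1d", "conv2d", "conv3d");
        List<String> readOnly = Collections.unmodifiableList(source);

        System.out.println(readOnly);
        try {
            readOnly.add("deconv2d");                 // mutation is rejected
        } catch (UnsupportedOperationException e) {
            System.out.println("read-only view: " + e.getClass().getSimpleName());
        }
    }
}
```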
raver119 77805cb7fa
[WIP] cpu ismax fix (#137)
* cpu ismax fix

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of smaller scalar tests

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-21 10:12:11 +03:00
raver119 269d508ba5
[WIP] cross-device migrations (#134)
* two more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA device affinity tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* minor tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* prepareAction/registerAction for CustomOps

Signed-off-by: raver119 <raver119@gmail.com>

* lazy allocate host buffer before relocation

Signed-off-by: raver119 <raver119@gmail.com>

* one special test for migration in cpp

Signed-off-by: raver119 <raver119@gmail.com>

* tests update for msvc

Signed-off-by: raver119 <raver119@gmail.com>

* logging

Signed-off-by: raver119 <raver119@gmail.com>

* stick to old col2im impl

Signed-off-by: raver119 <raver119@gmail.com>

* cudaStreams reorganization

Signed-off-by: raver119 <raver119@gmail.com>

* buffer size fix

Signed-off-by: raver119 <raver119@gmail.com>

* c++ data migration

Signed-off-by: raver119 <raver119@gmail.com>

* fix CropAndResize test

Signed-off-by: raver119 <raver119@gmail.com>

* - minor improvement

Signed-off-by: Yurii <yurii@skymind.io>
2019-08-20 18:52:41 +03:00
Robert Altena 38310777ee
fix for eclipse#8087 (#129)
*  fix for #8087

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove commented code.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* removing trueScalar.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove trueScalar.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-20 15:20:40 +09:00
raver119 b8ab1a00b0 - 2 mod tests
- ModOp mapping added

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-19 12:57:24 +03:00
raver119 aceb915557
[WIP] tests fixes (#130)
* no openmp for ClipByGlobalNorm

Signed-off-by: raver119 <raver119@gmail.com>

* one more bfloat16 rng test

Signed-off-by: raver119 <raver119@gmail.com>

* assertion fix

Signed-off-by: raver119 <raver119@gmail.com>

* - legacy IsMax gone
- linear IsMax gets shapeInfo argument

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of legacy IsMax tests

Signed-off-by: raver119 <raver119@gmail.com>

* IsMax is custom op now

Signed-off-by: raver119 <raver119@gmail.com>

* more blocks for ismax

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

*  - sqrt test
 - some legacy code removed from CudaExecutioner
 - Transforms.asin tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* - TransformFloat fix

Signed-off-by: raver119 <raver119@gmail.com>

* - ismax fix
- SpaceToBatchND/BatchToSpaceND wrappers
- couple of legacy tests removed

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-19 11:33:15 +03:00
Robert Altena 59cba587f4
Nd4j refactoring (#112)
* refactoring

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

* fix: make test public.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* make test public.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* fixes read refactoring.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-15 13:50:52 +09:00
raver119 c7277729e9
few fixes for bfloat16 in java and cpp (#114)
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-14 21:51:42 +03:00
raver119 53ca9a76e8
[WIP] multi-device support (#80)
* fix pad javadoc and @see links. (#72)

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* [WIP] More fixes (#73)

* special tests for ConstantTadHelper/ConstantShapeHelper

Signed-off-by: raver119 <raver119@gmail.com>

* release methods for data buffers

Signed-off-by: raver119 <raver119@gmail.com>

* delete temporary buffer Java side

Signed-off-by: raver119 <raver119@gmail.com>

* delete temporary buffer Java side

Signed-off-by: raver119 <raver119@gmail.com>

* delete temporary TadPack C++/Java side (#74)

Signed-off-by: raver119 <raver119@gmail.com>

* Zoo model TF import test updates (#75)

* argLine fix, update compression_gru comment

* updated comment for xception

* undid but commented argLine change

* updated xlnet comment

* copyright headers

* - new NDArray methods like()/ulike() (#77)

- fix for depthwise_conv2d_bp + special test

Signed-off-by: raver119 <raver119@gmail.com>

* upsampling2d fix CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* DL4J trace logging (#79)

* MLN/CG trace logging for debugging

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tiny tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* strided_slice_bp shape fn leak fix

Signed-off-by: raver119 <raver119@gmail.com>

* SameDiff fixes and naming (#78)

* remove SDVariable inplace methods

* import methods

* npe fix in OpVal

* removed SameDiff inplace ops from tests

* Naming updates, moved to centralized methods in SameDiff, should use op_#:# for everything

* quick fixes

* javadoc

* SDVariable eval with placeholders

* use regex match

* better matching

* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* fix javadoc. (#76)

* fix javadoc.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* replace most @see with @link s.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* 4 additional tests

Signed-off-by: raver119 <raver119@gmail.com>

* launch context reorganization

Signed-off-by: raver119 <raver119@gmail.com>

* LaunchContext reorganization

Signed-off-by: raver119 <raver119@gmail.com>

* per-device LaunchContext

Signed-off-by: raver119 <raver119@gmail.com>

* Various DL4J/ND4J fixes (#81)

* #7954 Force refresh of UI when switching tabs on overview page

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8017 Concurrent modification exception (synchronize) fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8033 Don't initialize updater in middle of writing memory crash dump

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8208 Fix shape checks for ND4J int[] creator methods

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #6385 #7992 Keras import naming fixes + cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8016 Upsampling3D - add NDHWC format support

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ContextBuffers as separate entity

Signed-off-by: raver119 <raver119@gmail.com>

* Refactor NativeOps.h to export C functions

* Actually export functions from NativeOps.h

* Adapt the Java wrappers in ND4J generated with JavaCPP

* Create C wrappers for some of the C++ classes currently used by ND4J

* ContextBuffers as separate entity

Signed-off-by: raver119 <raver119@gmail.com>

* remove duplicate code in createBufferDetached. (#83)

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* Keras model import - updater lr fix (#84)

* Keras model import - updater lr fix

Signed-off-by: eraly <susan.eraly@gmail.com>

* Keras model import - updater lr fix, cleanup

Signed-off-by: eraly <susan.eraly@gmail.com>

* ContextBuffers as separate entity

Signed-off-by: raver119 <raver119@gmail.com>

* ContextBuffers as separate entity

Signed-off-by: raver119 <raver119@gmail.com>

* Fix functions of OpaqueVariablesSet

* thread-local buffers/affinity

Signed-off-by: raver119 <raver119@gmail.com>

* thread safety for LaunchContext

Signed-off-by: raver119 <raver119@gmail.com>

* more of thread safety

Signed-off-by: raver119 <raver119@gmail.com>

* one more multi threaded test

Signed-off-by: raver119 <raver119@gmail.com>

* SameDiff Convolution Config validation, better output methods (#82)

* Conv Config validation & tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* stackOutputs utility method

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use constructor for validation, support negative kernel sizes (inferred from weights)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* better output methods

Signed-off-by: Ryan Nett <rnett@skymind.io>

* move output to be with fit and evaluate

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* refactor duplicate code from pad methods. (#86)

* refactor duplicate code from pad methods.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* replace switch with if.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* Various ND4J/DL4J fixes and improvements (#87)

* Reshape and reallocate - small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Reshape and reallocate - small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #6488 ElementWiseVertex broadcast support

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Constructors and broadcast supported in Transforms.max/min

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8054 ElementWiseVertex now supports broadcast inputs

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8057 Nd4j.create overload dtype fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7551 ND4J Shape validation fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* [WIP] Numpy boolean import (#91)

* numpy bool type

Signed-off-by: raver119 <raver119@gmail.com>

* numpy bool java side

Signed-off-by: raver119 <raver119@gmail.com>

* remove create method with unused parameter. (#89)

* remove create method with unused parameter.

* removed more unused methods.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* removing more unused code.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* last removal of unused code.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* remove createSparse methods. (#92)

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* Various ND4J/DL4J fixes (#90)

* Deprecate Old*Op instances

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8063 #8054 Broadcast exceptions + cleanup inplace ops

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove bad test condition

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7993 Fix shape function issue in crop_and_resize op

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J SameDiff lambda layer fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8029 Fix for pnorm backprop math

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8038 Fix Op profiler NaN/Inf triggering + add tests (#93)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* createUninitializedDetached refactoring. (#94)

* wip

* update interface, add null implementations.

* Breaking one test in a weird way.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* createUninitializedDetached refactored.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* cuda build fix for issues introduced by recent refactoring

Signed-off-by: raver119 <raver119@gmail.com>

* [WIP] More of CUDA (#95)

* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* Implementation of hashcode cuda helper. Working edition.

* Fixed parallel test input arrangements.

* Fixed tests for hashcode op.

* Fixed shape calculation for image:crop_and_resize op and test.

* NativeOps tests. Initial test suite.

* Added tests for indexReduce methods.

* Added test on execBroadcast with NDArray as dimensions.

* Added test on execBroadcastBool with NDArray as dimensions.

* Added tests on execPairwiseTransform and execPairwiseTransofrmBool.

* Added tests for execReduce with scalar results.

* Added reduce tests for non-empty dims array.

* Added tests for reduce3.

* Added tests for execScalar.

* Added tests for execSummaryStats.

* - provide cpu/cuda code for batch_to_space
- testing it

Signed-off-by: Yurii <yurii@skymind.io>

* - remove old test for batch_to_space (had wrong format and numbers were not checked)

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed compilation errors with test.

* Added test for execTransformFloat.

* Added test for execTransformSame.

* Added test for execTransformBool.

* Added test for execTransformStrict.

* Added tests for execScalar/execScalarBool with TADs.

* Added test for flatten.

* - provide cpu/cuda code for space_to_batch operation

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for concat.

* comment unnecessary stuff in s_t_b

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for specialConcat.

* Added tests for memcpy/set routines.

* Fixed pullRow cuda test.

* Added pullRow test.

* Added average test.

* - correct typo in NDArray::applyPairwiseTransform(nd4j::pairwise::BoolOps op...)

Signed-off-by: Yurii <yurii@skymind.io>

* - debugging and fixing cuda tests in JavaInteropTests file

Signed-off-by: Yurii <yurii@skymind.io>

* - correct some tests

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for shuffle.

* Fixed ops declarations.

* Restored omp and added shuffle test.

* Added convertTypes test.

* Added tests for execRandom. Eliminated usage of RandomBuffer with NativeOps.

* Added sort tests.

* Added tests for execCustomOp.

* - further debugging and fixing tests terminated with crash

Signed-off-by: Yurii <yurii@skymind.io>

* Added tests for calculateOutputShapes.

* Added Benchmarks test.

* Commented benchmark tests.

* change assertion

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for apply_sgd op. Added cpu helper for that op.

* Implement cuda helper for apply_sgd op. Fixed tests for NativeOps.

* Added test for assign broadcastable.

* Added tests for assign_bp op.

* Added tests for axpy op.

* - assign/execScalar/execTransformAny signature change
- minor test fix

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed axpy op.

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* - fix tests for nativeOps::concat

Signed-off-by: Yurii <yurii@skymind.io>

* sequential transform/scalar

Signed-off-by: raver119 <raver119@gmail.com>

* allow nested parallelism

Signed-off-by: raver119 <raver119@gmail.com>

* assign_bp leak fix

Signed-off-by: raver119 <raver119@gmail.com>

* block setRNG fix

Signed-off-by: raver119 <raver119@gmail.com>

* enable parallelism by default

Signed-off-by: raver119 <raver119@gmail.com>

* enable nested parallelism by default

Signed-off-by: raver119 <raver119@gmail.com>

* Added cuda implementation for row_count helper.

* Added implementation for tsne gains op helper.

* - take into account possible situations when input arrays are empty in reduce_ cuda stuff

Signed-off-by: Yurii <yurii@skymind.io>

* Implemented tsne/edge_forces op cuda-based helper. Parallelized cpu-based helper for edge_forces.

* Added kernel for tsne/symmetrized op helper.

* Implementation of tsne/symmetrized op cuda helper. Working edition.

* Eliminated wasteful printfs.

* Added test for broadcastgradientargs op.

* host-only fallback for empty reduce float

Signed-off-by: raver119 <raver119@gmail.com>

* - some tests fixes

Signed-off-by: Yurii <yurii@skymind.io>

* - correct the rest of reduce_ stuff

Signed-off-by: Yurii <yurii@skymind.io>

* - further correction of reduce_ stuff

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for Cbow op. Also added cuda implementation for cbow helpers.

* - improve code of stack operation for scalar case

Signed-off-by: Yurii <yurii@skymind.io>

* - provide cuda kernel for gatherND operation

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of cbow helpers with cuda kernels.

* minor tests tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* minor tests tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* - further correction of cuda stuff

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of cbow op helper with cuda kernels. Working edition.

* Skip random testing for cudablas case.

* lstmBlockCell context fix

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for ELU and ELU_BP ops.

* Added tests for eq_scalar, gt_scalar, gte_scalar and lte_scalar ops.

* Added tests for neq_scalar.

* Added test for noop.

* - further work on clipbynorm_bp

Signed-off-by: Yurii <yurii@skymind.io>

* - get rid of concat op call, use instead direct concat helper call

Signed-off-by: Yurii <yurii@skymind.io>

* lstmBlockCell context fix

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for lrelu and lrelu_bp.

* Added tests for selu and selu_bp.

* Fixed lrelu derivative helpers.

* - some corrections in lstm

Signed-off-by: Yurii <yurii@skymind.io>

* operator * result shape fix

Signed-off-by: raver119 <raver119@gmail.com>

* - correct typo in lstmCell

Signed-off-by: Yurii <yurii@skymind.io>

* few tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA inverse broadcast bool fix

Signed-off-by: raver119 <raver119@gmail.com>

* disable MMAP test for CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* BooleanOp syncToDevice

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* additional data types for im2col/col2im

Signed-off-by: raver119 <raver119@gmail.com>

* Added test for firas_sparse op.

* one more RandomBuffer test excluded

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for flatten op.

* Added test for Floor op.

* bunch of tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* mmulDot tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* Implemented floordiv_bp op and tests.

* Fixed scalar case with cuda implementation for bds.

* - work on cuda kernel for clip_by_norm backprop op is completed

Signed-off-by: Yurii <yurii@skymind.io>

* Eliminate cbow crash.

* more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* Eliminated abort in batched nlp test.

* more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed shared flag initialization.

* disabled bunch of cpu workspaces tests

Signed-off-by: raver119 <raver119@gmail.com>

* scalar operators fix: missing registerSpecialUse call

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed logdet for cuda and tests.

* - correct clipBynorm_bp

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed crop_and_resize shape datatype.

* - correct some mmul tests

Signed-off-by: Yurii <yurii@skymind.io>

* build fix

Signed-off-by: raver119 <raver119@gmail.com>

* exclude two methods for JNI

Signed-off-by: raver119 <raver119@gmail.com>

* exclude two methods for JNI

Signed-off-by: raver119 <raver119@gmail.com>

* exclude two methods for JNI (#97)

Signed-off-by: raver119 <raver119@gmail.com>

* temporary stack fix

Signed-off-by: raver119 <raver119@gmail.com>

* round robin affinity test

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of legacy CudaContext methods

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of legacy ContextPool classes/methods

Signed-off-by: raver119 <raver119@gmail.com>

* one legacy test removed

Signed-off-by: raver119 <raver119@gmail.com>

* few more fields rearranged

Signed-off-by: raver119 <raver119@gmail.com>

* OpaqueLaunchContext

Signed-off-by: raver119 <raver119@gmail.com>

* OpaqueLaunchContext++

Signed-off-by: raver119 <raver119@gmail.com>

* more of OpaqueLaunchContext methods

Signed-off-by: raver119 <raver119@gmail.com>

* LaunchContext -> CudaContext

Signed-off-by: raver119 <raver119@gmail.com>

* AffinityManager changes

Signed-off-by: raver119 <raver119@gmail.com>

* AffinityManager changes

Signed-off-by: raver119 <raver119@gmail.com>

* cusolver handles

Signed-off-by: raver119 <raver119@gmail.com>

* typo

Signed-off-by: raver119 <raver119@gmail.com>

* cusolver method

Signed-off-by: raver119 <raver119@gmail.com>

* cusolver handle propagated

Signed-off-by: raver119 <raver119@gmail.com>

* blas/solver handles

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* legacy concat implementations replaced with new CustomOp

Signed-off-by: raver119 <raver119@gmail.com>

* one more test

Signed-off-by: raver119 <raver119@gmail.com>

* concat now uses way more blocks

Signed-off-by: raver119 <raver119@gmail.com>

* print

Signed-off-by: raver119 <raver119@gmail.com>

* no more triple template mmul

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of kernels have dtypes reconsidered

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of kernels have dtypes reconsidered

Signed-off-by: raver119 <raver119@gmail.com>

* bitonic sort reorganized

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of cpu stuff removed from cuda scope

Signed-off-by: raver119 <raver119@gmail.com>

* bunch of cpu stuff removed from cuda scope

Signed-off-by: raver119 <raver119@gmail.com>

* type conversions moved to generic impl

Signed-off-by: raver119 <raver119@gmail.com>

* cpu data types pass

Signed-off-by: raver119 <raver119@gmail.com>

* non_max_suppression

Signed-off-by: raver119 <raver119@gmail.com>

* sortByValue fix

Signed-off-by: raver119 <raver119@gmail.com>

* ignore all mixed datatype tests for mmul

Signed-off-by: raver119 <raver119@gmail.com>

* special handling of OpProfiler exceptions

Signed-off-by: raver119 <raver119@gmail.com>

* - one failing concat test in cpp
- Nd4j.tile now uses op internally

Signed-off-by: raver119 <raver119@gmail.com>

* get back dtype exception for legacy arrays deserialization

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-14 16:52:34 +03:00
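For readers unfamiliar with the apply_sgd and axpy ops referenced in the commit above, here is a minimal sketch of the element-wise math those ops are generally understood to compute, assuming the standard SGD update rule and the BLAS level-1 axpy definition. It uses plain Java arrays and hypothetical names; it is not the repo's cpu/cuda helper code.

```java
// Illustrative sketch only -- not the libnd4j/nd4j implementation.
// apply_sgd is assumed to follow the standard SGD rule: param := param - lr * grad
// axpy follows the BLAS level-1 definition:             y     := a * x + y
public class ElementwiseUpdateSketch {

    static void applySgd(double[] param, double[] grad, double lr) {
        for (int i = 0; i < param.length; i++) {
            param[i] -= lr * grad[i];           // in-place parameter update
        }
    }

    static void axpy(double a, double[] x, double[] y) {
        for (int i = 0; i < y.length; i++) {
            y[i] += a * x[i];                   // in-place accumulation
        }
    }

    public static void main(String[] args) {
        double[] w = {1.0, 2.0, 3.0};
        double[] g = {0.1, 0.2, 0.3};
        applySgd(w, g, 0.5);                    // w -> {0.95, 1.90, 2.85}
        axpy(2.0, g, w);                        // w -> {1.15, 2.30, 3.45}
        System.out.println(java.util.Arrays.toString(w));
    }
}
```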
Robert Altena b10ab239c0 Nd4j refactoring (#111)
* refactoring

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* wip

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-13 21:44:40 +10:00
Ryan Nett 11bddb3825 SameDiff: Listener changes and training api update (#99)
* example api

Signed-off-by: Ryan Nett <rnett@skymind.io>

* Lambda based evaluation

Signed-off-by: Ryan Nett <rnett@skymind.io>

* lambda test

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* partial fixes, use get-variable listener framework, example EvaluationListener

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadoc fix and newInstance implementations

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fit and evaluate methods with validation data (for fit) and listeners

Signed-off-by: Ryan Nett <rnett@skymind.io>

* output method overloads + listener args

Signed-off-by: Ryan Nett <rnett@skymind.io>

* history and evaluation helpers

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* FitConfig and added getters and setters

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadocs

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes, javadoc, added activations to history, added latest activation listener

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes, start of tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes and updates

Signed-off-by: Ryan Nett <rnett@skymind.io>

* newInstance fixes, tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* test fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadocs, getters with SDVariable overrides, CustomEvaluation fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more operation config classes (evaluation, output, exec/single batch output), fix custom eval tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* merge fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes, most old fit/evaluate/output methods use the builders

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* numerous fixes/cleanup

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadoc

Signed-off-by: Ryan Nett <rnett@skymind.io>

* Polish round 1

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Round 2

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Formatting + round 3

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Round 4

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-10 15:30:31 +10:00
Alex Black edb71bf46f
Add SameDiff exec debugging listener + few fixes (#104)
* First pass on SameDiff op exec debug listener

Signed-off-by: Alex Black <blacka101@gmail.com>

* #7555 DL4J helpers - don't fall back on builtin for op profiler exceptions

Signed-off-by: Alex Black <blacka101@gmail.com>

* Exec debugging listener + fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Fix import counts for TF ops in OpValidationSuite

Signed-off-by: Alex Black <blacka101@gmail.com>

* Fix bad DL4J test configuration

Signed-off-by: Alex Black <blacka101@gmail.com>

* Exec debugging listener polish

Signed-off-by: Alex Black <blacka101@gmail.com>

* Small fix

Signed-off-by: Alex Black <blacka101@gmail.com>

* Another fix

Signed-off-by: Alex Black <blacka101@gmail.com>
2019-08-07 17:18:29 +10:00
Robert Altena ba1d1b160b
remove method with unused parameter. (#102)
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-06 19:14:45 +09:00
Alex Black b8846113bd
Small number of fixes + cleanup + some missing op methods + constructors (#100)
* Remove unused op class - DefaultOpConverter

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Add SDImage class; INDArray constructor additions

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Floordiv

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small polish to image methods

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small DataVec test fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 22:31:46 +10:00
Alex Black 923ab15583 Small number of fixes (#98)
* Remove no longer functioning LegacyPooling2D class

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8066 First steps for reflection scanning - INDArray constructors

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small build fix for ND4S

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More nd4s fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 11:28:07 +10:00
Robert Altena dfec54242d createUninitializedDetached refactoring. (#94)
* wip

* update interface, add null implementations.

* Breaking one test in a weird way.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* createUninitializedDetached refactored.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-05 11:27:05 +10:00
Alex Black cd41c3540d #8038 Fix Op profiler NaN/Inf triggering + add tests (#93)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 11:25:48 +10:00
Alex Black e18e2dc014 Various ND4J/DL4J fixes (#90)
* Deprecate Old*Op instances

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8063 #8054 Broadcast exceptions + cleanup inplace ops

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Remove bad test condition

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7993 Fix shape function issue in crop_and_resize op

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* DL4J SameDiff lambda layer fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8029 Fix for pnorm backprop math

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 11:25:48 +10:00
Robert Altena fbe120031d remove createSparse methods. (#92)
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-05 11:25:48 +10:00
Robert Altena 386a9f057b remove create method with unused parameter. (#89)
* remove create method with unused parameter.

* removed more unused methods.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* removing more unused code.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* last removal of unused code.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-05 11:25:48 +10:00
raver119 065b34c7cb [WIP] Numpy boolean import (#91)
* numpy bool type

Signed-off-by: raver119 <raver119@gmail.com>

* numpy bool java side

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-05 11:25:48 +10:00
Alex Black b95417f7c5 Various ND4J/DL4J fixes and improvements (#87)
* Reshape and reallocate - small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Reshape and reallocate - small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #6488 ElementWiseVertex broadcast support

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Constructors and broadcast supported it Transforms.max/min

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8054 ElementWiseVertex now supports broadcast inputs

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8057 Nd4j.create overload dtype fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7551 ND4J Shape validation fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 11:24:20 +10:00
Ryan Nett d4e7997134 SameDiff Convolution Config validation, better output methods (#82)
* Conv Config validation & tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* stackOutputs utility method

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use constructor for validation, support negative kernel sizes (inferred from weights)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* better output methods

Signed-off-by: Ryan Nett <rnett@skymind.io>

* move output to be with fit and evaluate

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-08-05 11:24:20 +10:00
Samuel Audet 526b782e51 Create C wrappers for some of the C++ classes currently used by ND4J 2019-08-05 11:22:59 +10:00
Alex Black fad8da878f Various DL4J/ND4J fixes (#81)
* #7954 Force refresh of UI when switching tabs on overview page

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8017 Concurrent modification exception (synchronize) fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8033 Don't initialize updater in middle of writing memory crash dump

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8208 Fix shape checks for ND4J int[] creator methods

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #6385 #7992 Keras import naming fixes + cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8016 Upsampling3D - add NDHWC format support

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 11:21:23 +10:00
Ryan Nett ac321265a7 SameDiff fixes and naming (#78)
* remove SDVariable inplace methods

* import methods

* npe fix in OpVal

* removed SameDiff inplace ops from tests

* Naming updates, moved to centralized methods in SameDiff, should use op_#:# for everything

* quick fixes

* javadoc

* SDVariable eval with placeholders

* use regex match

* better matching
2019-08-05 11:21:23 +10:00
Ryan Nett 00e6296140 Zoo model TF import test updates (#75)
* argLine fix, update compression_gru comment

* updated comment for xception

* undid but commented argLine change

* updated xlnet comment

* copyright headers
2019-08-05 11:13:41 +10:00
Alex Black 7939cf384b Misc fixes (#66)
* Small fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Flaky test fix

Signed-off-by: Alex Black <blacka101@gmail.com>
2019-07-20 23:17:03 +10:00
Alex Black d94bc7257c Various fixes (#65)
* #7977 deprecate legacy MultiLayerNetwork/ComputationGraph.params(boolean) method

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix bad test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix Histogram mapping

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix incorrect name handling in DifferentialFunction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Histogram fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Proper histogram fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ToString/NDArrayStrings fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* JSON UTF8 serialization fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-07-20 23:16:41 +10:00
raver119 c499dc962f - numpy import fix for CUDA (#64)
- skip tagLocation for empty arrays

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:16:19 +10:00
raver119 c9e867b2e8 File existence validation for Nd4j.createFromNpyFile()
Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:15:57 +10:00
raver119 fd6c0df024 [WIP] More CUDA fixes/updates (#62)
* CUDA reallocation update

Signed-off-by: raver119 <raver119@gmail.com>

* Legacy SoftMax/LogSoftMax/SoftMaxDerivative removed from cpp

Signed-off-by: raver119 <raver119@gmail.com>

* SoftMaxDerivative op removed

Signed-off-by: raver119 <raver119@gmail.com>

* few tests updates

Signed-off-by: raver119 <raver119@gmail.com>

* RNG fixes

Signed-off-by: raver119 <raver119@gmail.com>

* few more tests updates

Signed-off-by: raver119 <raver119@gmail.com>

* legacy Histogram/Pooling2D removed

Signed-off-by: raver119 <raver119@gmail.com>

* legacy Histogram removed

Signed-off-by: raver119 <raver119@gmail.com>

* histogram moved

Signed-off-by: raver119 <raver119@gmail.com>

* histogram moved cuda

Signed-off-by: raver119 <raver119@gmail.com>

* Histogram custom op

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:07:42 +10:00
raver119 9cf28ea6c9 [WIP] CUDA tweaks (#60)
* special cpu concat

Signed-off-by: raver119 <raver119@gmail.com>

* special concat fix

Signed-off-by: raver119 <raver119@gmail.com>

* OpProfiler tweak for absent host pointers

Signed-off-by: raver119 <raver119@gmail.com>

* minor test tweak to see orders

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA broadcasting diff orders fix

Signed-off-by: raver119 <raver119@gmail.com>

* faster iterations

Signed-off-by: raver119 <raver119@gmail.com>

* OldSoftMax/OldLogSoftMax gone

Signed-off-by: raver119 <raver119@gmail.com>

* RandomLauncher tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* additional check in random tests

Signed-off-by: raver119 <raver119@gmail.com>

* skip prepare/register action for empty arrays

Signed-off-by: raver119 <raver119@gmail.com>

* npz float16 fix

Signed-off-by: raver119 <raver119@gmail.com>

* empty reduction cuda fixes

Signed-off-by: raver119 <raver119@gmail.com>

* ShapeBufferTests tweaks

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:06:48 +10:00
raver119 6ce458e949 [WIP] CUDA Java side (#58)
* one crashing test

Signed-off-by: raver119 <raver119@gmail.com>

* stupid issue fixed

Signed-off-by: raver119 <raver119@gmail.com>

* one fix

Signed-off-by: raver119 <raver119@gmail.com>

* don't ensure location for empty arrays

Signed-off-by: raver119 <raver119@gmail.com>

* few more signatures fixed

Signed-off-by: raver119 <raver119@gmail.com>

* few tweaks for DataBuffer creation from java primitives

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of legacy im2col/col2im intercept

Signed-off-by: raver119 <raver119@gmail.com>

* rsubi scalar array fix

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:06:25 +10:00
raver119 c969b724bb [WIP] more CUDA stuff (#57)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* Added gradcheck test for dynamic_partition_bp op.

* - implementation of dilation op (cpu and cuda)

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed broadcast_dynamic_shape 1D case and tests.

* Fixed usage of default integer arguments.

* Fixed dynamic_partition_bp op and tests.

* Eliminated test with grad check for dynamic_partition_bp op.

* start working on cuda svd - porting available corresponding api from cuSOLVER library

Signed-off-by: Yurii <yurii@skymind.io>

* provide prelu_bp

Signed-off-by: Yurii <yurii@skymind.io>

* - provide gruCell_bp (old version ??)

Signed-off-by: Yurii <yurii@skymind.io>

* - polishing cumsum_bp and cumprod_bp tests

Signed-off-by: Yurii <yurii@skymind.io>

* provide sparseSoftmaxCrossEntropyWithLogits and sparseSoftmaxCrossEntropyWithLogits_grad

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed atomicMul with float input/output

* implementation of cuda kernel for triu_bp operation

Signed-off-by: Yurii <yurii@skymind.io>

* Refactored lup helper to add parallel computing.

* cusolver libraries

Signed-off-by: raver119 <raver119@gmail.com>

* uncomment cuSolver APIs in svd.cu

Signed-off-by: Yurii <yurii@skymind.io>

* cusolver var

Signed-off-by: raver119 <raver119@gmail.com>

* - further work on cuSolver svd

Signed-off-by: Yurii <yurii@skymind.io>

* Implement usage of cuda solver to LUP decomposition.

* - correct names in lup functions

Signed-off-by: Yurii <yurii@skymind.io>

* correct svdQR cuda

Signed-off-by: Yurii <yurii@skymind.io>

* - provide transpositions of input matrices in case of c order in svdCudaQR

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed implementation issues with LUP using cuda solver.

* Implementation of matrix_determinant helper with cuda kernels. Working revision.

* Implemented log_matrix_determinant helper with cuda kernels.

* - implementation of batched cuda svd

Signed-off-by: Yurii <yurii@skymind.io>

* Refactored cholesky helper and implementation of cuda solver cholesky batch.

* - implementation of cuda kernel for tile bp

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of cholesky and logdet with cuda kernels.

* - implementation of cuda kernel for sru_bidirectional

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed cholesky helper.

* Cholesky op helper implementation. Working double-based cublas implementation.

* bad import excluded

Signed-off-by: raver119 <raver119@gmail.com>

* Finished with cuda implementation of cholesky helper and tests.

* - implementation of cuda kernel for sru_bidirectional_backprop operation

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of matrix_inverse op helper with cuda kernels. The first revision.

* - start working on gruCell_bp

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of matrix_inverse helper.

* - further work on new gruCell_bp

Signed-off-by: Yurii <yurii@skymind.io>

* cuBLAS related fixes

Signed-off-by: raver119 <raver119@gmail.com>

* calculateOutputShapes() now passes device buffers as well

Signed-off-by: raver119 <raver119@gmail.com>

* special concat/average/accumulate init host pointers now

Signed-off-by: raver119 <raver119@gmail.com>

* few more tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* additional CudaDataBufferFactory signatures for certain data types

Signed-off-by: raver119 <raver119@gmail.com>

* cuSolver host buffer

Signed-off-by: raver119 <raver119@gmail.com>

* buffer to buffer memcpy host ptr allocation

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:05:21 +10:00
Alex Black cb6654bebb Add libnd4j benchmarks (#3)
This PR adds 2 libnd4j benchmarking suits
2019-07-20 22:54:44 +10:00
Ryan Nett 62c6a73f9d Include TF Import tests as forward pass tests in OpValidation (#53)
* ignore multinomial

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix for @NonNull varargs

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:24:11 +10:00
Ryan Nett daf3950d8d SameDiff If, While, and Misc changes (#52)
* softmax and logSoftmax w/ dimension

Signed-off-by: Ryan Nett <rnett@skymind.io>

* start of while

Signed-off-by: Ryan Nett <rnett@skymind.io>

* if, start of javadocs

Signed-off-by: Ryan Nett <rnett@skymind.io>

* while forward pass working, backprop WIP

Signed-off-by: Ryan Nett <rnett@skymind.io>

* no backprop

Signed-off-by: Ryan Nett <rnett@skymind.io>

* Tensorflow style if/while (& tests), name scope fixes (and test), argument interceptor (for if/while), use '_' in op names instead of ':'

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadoc

Signed-off-by: Ryan Nett <rnett@skymind.io>

* many fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* many fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* Some fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* cleanup if condition doesn't return boolean

Signed-off-by: Ryan Nett <rnett@skymind.io>

* serialization fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use constants instead of magic numbers

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:23:40 +10:00
Ryan Nett 2d991f5445 LeakyReLU fix (#55)
* LeakyReLU: Use setScalar to set alpha correctly in TF import (see the sketch after this commit)
LogX: remove incorrect TF mapping
Pow: remove TF import method (no mapping)
BaseOp: remove duplicate extraArgs

Signed-off-by: Ryan Nett <rnett@skymind.io>

* un-ignore cifar-10 gan, as it is now passing

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:23:19 +10:00
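The alpha scalar mentioned in the LeakyReLU fix above is the negative-slope coefficient of the activation. A minimal sketch of the function itself, assuming the standard LeakyReLU definition; the class and method names are hypothetical and this is not the ND4J op or its TF import code.

```java
// Illustrative sketch only: f(x) = x for x > 0, alpha * x otherwise.
public class LeakyReluSketch {

    static double leakyRelu(double x, double alpha) {
        return x > 0 ? x : alpha * x;
    }

    public static void main(String[] args) {
        double alpha = 0.2;                          // example slope; the real value comes from the imported graph
        System.out.println(leakyRelu(3.0, alpha));   // 3.0
        System.out.println(leakyRelu(-3.0, alpha));  // -0.6
    }
}
```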
Ryan Nett 027d4d2a47 Add XLNet to zoo model ignores (#54)
* ignore xlnet

Signed-off-by: Ryan Nett <rnett@skymind.io>

* comment

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:22:57 +10:00
Ryan Nett d7c261ec40 added ability to run as test, and comments (#44)
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:22:34 +10:00
raver119 5708fc087a [WIP] INDArray hashCode() impl (#50)
* initial commit

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* one more initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* parallel hashCode prototype

Signed-off-by: raver119 <raver119@gmail.com>

* longBytes for hashCode

Signed-off-by: raver119 <raver119@gmail.com>

* INDArray hashCode java side

Signed-off-by: raver119 <raver119@gmail.com>

* few tests fixed for MSVC

Signed-off-by: raver119 <raver119@gmail.com>

* Small gradcheck validation util fix - hash names not SDVariables

Signed-off-by: Alex Black <blacka101@gmail.com>

* Small fix + ignore for logged issue

Signed-off-by: Alex Black <blacka101@gmail.com>

* - scrollable iterator fix
- sptree hashset replaced with collection

Signed-off-by: raver119 <raver119@gmail.com>

* hashcode exception removed

* int hashCode for java
2019-07-20 22:22:11 +10:00
Alex Black 88ea9a49eb TF optional properties resolution fix (#48)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-07-20 22:21:20 +10:00
Ryan Nett 4b23dc01fd fix for #7976, update test comment (to OOM) (#45)
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:20:58 +10:00
raver119 15e7984392 Small fixes (#46)
* - fix for eclipse/#7959

* createFromNpy scalar fix

Signed-off-by: raver119 <raver119@gmail.com>

* remove spam

Signed-off-by: raver119 <raver119@gmail.com>

* rng custom ops tests

Signed-off-by: raver119 <raver119@gmail.com>

* Numpy headers validation + tests

Signed-off-by: raver119 <raver119@gmail.com>

* fix for scalar string flat serde

Signed-off-by: raver119 <raver119@gmail.com>

* Where empty shape test

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 22:20:14 +10:00
Ryan Nett c135883162 [WIP] Ignored TF OP Import test updates (#37)
* new issue for b_d_s (old is closed), new issue for boolean_mask (old is fixed)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* date update

Signed-off-by: Ryan Nett <rnett@skymind.io>

* "embedding_lookup/.*multiple.*" is passing

Signed-off-by: Ryan Nett <rnett@skymind.io>

* new bincount issue (old was fixed)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* date update

Signed-off-by: Ryan Nett <rnett@skymind.io>

* where op comment

Signed-off-by: Ryan Nett <rnett@skymind.io>

* where passing

Signed-off-by: Ryan Nett <rnett@skymind.io>

* ignore gan

Signed-off-by: Ryan Nett <rnett@skymind.io>

* scatter_nd/rank2shape_2indices passing

Signed-off-by: Ryan Nett <rnett@skymind.io>

* topk update, test fix (need to verify)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* batch_to_space issue update

Signed-off-by: Ryan Nett <rnett@skymind.io>

* updates, no change

Signed-off-by: Ryan Nett <rnett@skymind.io>

* date updates

Signed-off-by: Ryan Nett <rnett@skymind.io>

* batch_to_space is fixed

Signed-off-by: Ryan Nett <rnett@skymind.io>

* equality function helper

Signed-off-by: Ryan Nett <rnett@skymind.io>

* topk passing

Signed-off-by: Ryan Nett <rnett@skymind.io>

* dropout equality check

Signed-off-by: Ryan Nett <rnett@skymind.io>

* adding ignores for zoo models

Signed-off-by: Ryan Nett <rnett@skymind.io>

* ignore libnd4j rnn tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* added batch to space test

Signed-off-by: Ryan Nett <rnett@skymind.io>

* issue comments

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:19:48 +10:00
raver119 dde50ee570 Small fixes (#43)
* minor cmake changes to make macos happy

* space_to_batch/batch_to_space validation fix

* - choose op tweaks
- tests updated to match applied tweaks

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* - get rid of bad import

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* - choose now uses shape function
- choose test updated
2019-07-20 22:19:20 +10:00
Ryan Nett d854e28b34 System info export for debugging and bug reporting (#34)
* System info export for debugging and bug reporting

Signed-off-by: Ryan Nett <rnett@skymind.io>

* class name fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* add version information, pointer memory info

Signed-off-by: Ryan Nett <rnett@skymind.io>

* add nvidia-smi and nvcc info

Signed-off-by: Ryan Nett <rnett@skymind.io>

* line cleanup

Signed-off-by: Ryan Nett <rnett@skymind.io>

* nvidia-smi run works

Signed-off-by: Ryan Nett <rnett@skymind.io>

* add oshi dependency

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use OS info, add workspaces info

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use ServiceLoader to load GPU information (see the sketch after this commit)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* register service

Signed-off-by: Ryan Nett <rnett@skymind.io>

* moved service out of NativeOpsHolder (private constructor)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* added newline

Signed-off-by: Ryan Nett <rnett@skymind.io>

* added license

Signed-off-by: Ryan Nett <rnett@skymind.io>

* and one more

Signed-off-by: Ryan Nett <rnett@skymind.io>

* copyright update

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* removed unused imports

Signed-off-by: Ryan Nett <rnett@skymind.io>

* removed more unused imports

Signed-off-by: Ryan Nett <rnett@skymind.io>

* close streams

Signed-off-by: Ryan Nett <rnett@skymind.io>

* and another one

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use method

Signed-off-by: Ryan Nett <rnett@skymind.io>

* one more copyright

Signed-off-by: Ryan Nett <rnett@skymind.io>

* remove double license

Signed-off-by: Ryan Nett <rnett@skymind.io>

* moved test to correct package

Signed-off-by: Ryan Nett <rnett@skymind.io>

* classpath update

Signed-off-by: Ryan Nett <rnett@skymind.io>

* classpath for java >8 fix

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:18:24 +10:00
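The ServiceLoader step mentioned in the commit above refers to Java's standard java.util.ServiceLoader mechanism for discovering provider implementations on the classpath. A minimal sketch of that pattern follows, with a hypothetical GpuInfoProvider interface; the actual service interface and registration file names in the repo may differ.

```java
import java.util.ServiceLoader;

// Hypothetical service interface; implementations are registered via a
// META-INF/services file named after the interface's fully-qualified name.
interface GpuInfoProvider {
    String gpuInfo();
}

public class SystemInfoSketch {
    public static void main(String[] args) {
        // Discovers every registered provider on the classpath; CPU-only setups simply find none.
        ServiceLoader<GpuInfoProvider> loader = ServiceLoader.load(GpuInfoProvider.class);
        for (GpuInfoProvider provider : loader) {
            System.out.println(provider.gpuInfo());
        }
    }
}
```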
Ryan Nett ef8cb55b7b Nd4j: Change array args to vararg and add toString options (#36)
* changed [] to ...

Signed-off-by: Ryan Nett <rnett@skymind.io>

* added randn(long seed, int... shape)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* Fixed a couple of methods

Signed-off-by: Ryan Nett <rnett@skymind.io>

* ToString methods w/ options

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes, less toString methods, and a few ops I missed

Signed-off-by: Ryan Nett <rnett@skymind.io>

* some javadocs, change int... to long... where possible

Signed-off-by: Ryan Nett <rnett@skymind.io>

* another javadoc

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadoc fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* just javadoc in INDArray

Signed-off-by: Ryan Nett <rnett@skymind.io>

* local/static fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* Add @NonNull to options

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadoc updates/fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more @NonNulls

Signed-off-by: Ryan Nett <rnett@skymind.io>

* even more @NonNulls, this time on varargs

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:16:13 +10:00
Alexandre Boulanger ee6aae268f RL4J refac: Added some observation transform classes (#7958)
* Added observation classes and tests

Signed-off-by: unknown <aboulang2002@yahoo.com>

* Now uses DataSetPreProcessors

Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>

* CompositeDataSetPreProcessor can now stop processing on empty dataset; Some DataSetPreProcessors moving from RL4J to ND4J

Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>

* Did requested minor changes

Signed-off-by: Alexandre Boulanger <Alexandre.Boulanger@ia.ca>
Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>
2019-07-20 10:28:20 +10:00
Alex Black 1170827c18 Merge master to upstream (#7945)
* Shugeo strided slice zeros (#14)

* Modified strided_slice op to properly work with empty-like shapes.

* Fixed test for reduce_mean with empty-like input.

* [WIP] Last merge (#15)

* correct logsoftmax loss (#2)

* Small SameDiff listener fix (#4)

* Various fixes (#6)

* #7839 Fix for asXMatrix and tests

* #7866 EmbeddingSequenceLayer dtype fix + test

* #7856 SameDiff save/load stream methods

* #7859 RegressionEvaluation rank 4 fix + tests + axis configuration

* EvaluationBinary 3d/4d

* More evaluation 3d/4d tests

* #7847 Evaluation empty checks

* Small test fix

* #7848 Fix median edge case

* Improve DL4J samediff layer tests

* [WIP] FastText wrapper implemented (#8)

* FastText implemented

* Some fixes

* Fix shapes for wordsNearest

* Validation of input vectors

* Fixes

* Fixed test

* Thread tagged

* Some tweaks

* setContextClassLoader for DeallocatorServiceThread

* Numpy format tests (#1)

* Various fixes (#11)

* #7852 SameDiff gather fix

* #7892 SameDiff placeholder to constant conversion

* #7890 validate input rank for MLN/CG init methods

* Fix broken permute shape calculation

* Permute and gather fixes

* Tests

* #7850 LogSumExp fix + test

* Handful of test fixes

* Empty arrays with non-scalar shapes (#10)

* minor rearrangements for lambdas

* empty tensors with non-scalar shapes

* numpy empty tensors with non-scalar shapes

* few more empty tweaks

* Small fixes

* conv3d signature update

* micro fix in batchnorm mkldnn

* Import fixes

* Fix

* MKL-DNN update

* Small fill fix

* fill with empty input + test

* Fixes

* Small error improvement

* Fix

* one special test

* couple of fixes for lstm

* Rewrite TFGraphMapper.getNDArrayFromTensor to be maintainable and less error prone

* Fixes

* FP16

* Unsigned

* BFloat16

* Fill op - empty tweaks

* - couple of fixes for empty arrays construction
- stack updated

* strided slice fix

* one transform test

* provide method for reducing shapeInfo in case the input array is empty

* Fixed reduceAlongDimensions to use empty input properly.

* couple of broadcast tests

* couple of tests broadcast tests + tweak to make them pass

* add check of non-empty to methods producing sub-arrays

* Fixed reshapeC with zeros in shape.

* complete empty check in reduce_... legacy ops

* Concat and cumsum/prod

* Tweak to empty shape inference on import

* add empty check to the rest of reduce legacy ops

* one more test

* correct typo in evalReduceShapeInfoEmpty

* Added tests for reduce_* ops to tests with zero shapes.

* few more tests for empty reductions

* Fixed strided_slice op with empty case and tests.

* one more empty reduction test

* Fixed strided_slice test.

* add empty check to NDArray::reshapei

* infOrMax

* empty min/max with infinity tests

* made unstack work correctly with empty arrays

* few IndexReduce tests + tweaks for empty shapes

* add test for empty concat

* few tests fixed

* Validation fix for reductions on empty shapes

* Reverse fix

* Reduction shape calc fixes

* SameDiff.generateOutputVariable: don't use shape function to determine number of outputs

* Range fix

* - NDArray constructor updated for scalars/empty arrays
- few tests fixed

* More fixes

* Empty creator fixes

* concat fix

* concat fix

* TF import tests: allow 'both all NaN' and 'both all inf' to pass

* Slice, zero fraction, and reshape fixes

* transpose, gather

* Zero fraction

* scalar cast fix

* Empty reduction axis support

* few more tests fixed

* Fixed input checks conforming with TF for concat op and tests.

* few tests fixed

* matmul scalar shape fix

* Fixed check for data type and scalarity with concat to allow non-empty scalars with vector concats.

* broadcast bool fix

* few more tests

* few more tests

* correct evalReduceShapeInfoEmpty

* argmax/argmin + tests

* one more empty edge case + one more test

* argmax/argmin/realdiv_bp tweaks

* empty reshape test + fix

* Helper fixes

* Small fixes

* Gather test fix

* Gather test fix

* Small fixes

* reduce scalar zero values

* scalar mean workaround

* Remove debug code

* along dim mean workaround

* one more test

* - equalsTo() tweak for empty arrays
- one more test

* broadcast tweaks

* [WIP] Fixing outstanding issues for NLP (#9)

* Avoid using uninitialized objects

* Test fixed.

* Redundant method avoided for models like FastText

* KMeans++ implementation

* KMeans++ implementation

* Disable parallel execution

* KMeans++

* Tests

* Dev branch merge (#16)

* SameDiff: convertDataType and gradient check util improvements (#12)

* GradCheck util improvements

* StopGradient constructor + test

* SameDiff: Add datatype conversion

* Javadoc and add DataType.isNumerical()

* Small fix

* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)

* TFGraphTestAllHelper: check intermediates in execution order

* Add missing debug listener

* [WIP] lstmBlock fix + other changes (#13)

- fixes lstmBlock issue
- changes NDArray method reshape(), permute(), transpose() by making them return instance instead of pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite

* Small test fix

* CheckNumerics op wrapper

* Fix some issues on master (#17)

* Fix DataVec test issue

* Fix issue with dl4j SameDiff output layer

* Dtype fix for lambda layers

* #7912 BertIterator dtype fix (use float32 not global default)

* [WIP] Next set of CUDA stuff (#7)

New CUDA implementations and improvements

* bad file

* Dev branch master merge (#23)

* SameDiff: convertDataType and gradient check util improvements (#12)

* GradCheck util improvements

* StopGradient constructor + test

* SameDiff: Add datatype conversion

* Javadoc and add DataType.isNumerical()

* Small fix

* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)

* TFGraphTestAllHelper: check intermediates in execution order

* Add missing debug listener

* [WIP] lstmBlock fix + other changes (#13)

- fixes lstmBlock issue
- changes NDArray method reshape(), permute(), transpose() by making them return instance instead of pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite

* Small test fix

* CheckNumerics op wrapper

* Compatibility of deserialization (#18)

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* SameDiff: add activation gradient checking support for debugging (#19)

* SameDiff gradient checker: first pass on activation gradient checks

* Fixes + tests for activation gradient checking

* Javadoc

* [WIP] Some nd4j data type corrections (#20)

* Adjust data type

* Set correct Data type.

* Size of proper data type.

* fix averaged cpu load (#22)

* SameDiff ops, TF import and fixes (#24)

* CheckNumerics tests + fixes + misc fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fake quant

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* FakeQuantWithMinMaxArgs

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* CheckNumerics fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix libnd4j ALL_INTS and ALL_FLOATS declaration (uint and bfloat types)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Exception tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for out of scope stack allocated var use

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Ignores

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Ignore for known failing test (already logged issue)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Merge upstream to fork (#25)

* Add thousand-separator commas to TotalParams (#7915)

* Add thousand-separator commas to TotalParams

The number of parameters can be quite large; adding thousand-separator commas to the TotalParams column and the total values at the bottom makes the summary printout easier to read.

* Add thousand-separator commas to MultiLayerNetwork

Corresponding change to MultiLayerNetwork

Signed-off-by: Jxtps Jxtps <jxtps435@gmail.com>

* Update contributing and issue/PR templates (#7934)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix link to AdaDelta paper (#7942)

Fix link to AdaDelta paper hosted on matthewzeiler.com

Signed-off-by: Jxtps

* Fixes, and ignores for known/logged failing issues (#7943)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff + DL4J/SameDiff: Multiple fixes (#28)

* #7919 HDF5 attribute buffer length fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7909 Arbiter constructor exception ux improvements

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7925 RNN output layer length checks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7939 Add listener for validating inputs are not incorrectly modified

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7939 Integrate NonInplaceValidationListener into tests

* #7844 DL4J SameDiff fixes for variable minibatch size

* DL4J SameDiff fixes - ensure gradient for input placeholder is available

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tweaks to ExternalErrorsFunction - use placeholders, make more robust

* Another fix

* More fixes

* More SameDiff/DL4J fixes

* Scope out scalar array creation in BaseScalarOp

* Remove debug code

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* [WIP] Final dev branch merge (#29)

* SameDiff: convertDataType and gradient check util improvements (#12)

* GradCheck util improvements

* StopGradient constructor + test

* SameDiff: Add datatype conversion

* Javadoc and add DataType.isNumerical()

* Small fix

* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)

* TFGraphTestAllHelper: check intermediates in execution order

* Add missing debug listener

* [WIP] lstmBlock fix + other changes (#13)

- fixes lstmBlock issue
- changes NDArray method reshape(), permute(), transpose() by making them return instance instead of pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite

* Small test fix

* CheckNumerics op wrapper

* Compatibility of deserialization (#18)

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* SameDiff: add activation gradient checking support for debugging (#19)

* SameDiff gradient checker: first pass on activation gradient checks

* Fixes + tests for activation gradient checking

* Javadoc

* [WIP] Some nd4j data type corrections (#20)

* Adjust data type

* Set correct Data type.

* Size of proper data type.

* fix averaged cpu load (#22)

* [WIP] Multiple dataset iterators (#27)

* Splitting dataset into an arbitrary number of parts

* Fixes

* Multiple split of iterator

* Test

* Test

* Some fixes

* signature change

* one more tweak

Signed-off-by: raver119 <raver119@gmail.com>

* one more test for sequential use of DataSetIteratorSplitter

Signed-off-by: raver119 <raver119@gmail.com>

* Fixes

* Fixes

* one more test for Alexander

Signed-off-by: raver119 <raver119@gmail.com>

* Some fixes

* Some fixes

* one more test for Alexander

Signed-off-by: raver119 <raver119@gmail.com>

* minor test fix

Signed-off-by: raver119 <raver119@gmail.com>

* Some fixes

* Some fixes

* couple of assertions tweaked

Signed-off-by: raver119 <raver119@gmail.com>

* MDS splitter test :/

Signed-off-by: raver119 <raver119@gmail.com>

* Minor refactoring

* Multi dataset

* Some fixes

* More tests

* Small number of test fixes/improvements (failures on CI) (#31)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* [WIP] More CUDA stuff (#26)

* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* LRN BP CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* less memory

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed bug with crop_and_resize op helper.

* get rid of unnecessary index-calculation function

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed sort with nth_element cuda-based helper.

* Refactored nth_element.

* Refactored nth_element op and tests.

* Modified usage of dim array with sortTad routine.

* Refactored main routine of helper for non_max_image_suppression op.

* non_max_image_suppression op helper with cuda kernel implementation. Initial revision.

* fix vol2col cuda kernel

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* topK concept

Signed-off-by: raver119 <raver119@gmail.com>

* unsorted topK with scanWidth of 1

Signed-off-by: raver119 <raver119@gmail.com>

* correct vol2col tests

* sorted/unsorted topK

Signed-off-by: raver119 <raver119@gmail.com>

* implementation and fixes for col2im/col2vol

* Corrected usage flags with input/output with reverse op.

* dup is const now

Signed-off-by: raver119 <raver119@gmail.com>

* percentile op

Signed-off-by: raver119 <raver119@gmail.com>

* group tests for maxpool2d

Signed-off-by: Yurii <yurii@skymind.io>

* special test for george

Signed-off-by: raver119 <raver119@gmail.com>

* less threads for sortTad

Signed-off-by: raver119 <raver119@gmail.com>

* provide conv2d for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* remove author in sort tad kernel code

Signed-off-by: Yurii <yurii@skymind.io>

* provide depthwise_conv2d for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* - max_pooling_with_argmax
- null check for special use

Signed-off-by: raver119 <raver119@gmail.com>

* dts cuda

Signed-off-by: raver119 <raver119@gmail.com>

* provide sconv2d for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* std cuda

Signed-off-by: raver119 <raver119@gmail.com>

* Refactored non_max_suppression op to conform to the TF implementation.

* Improved suppression helper.

* provide pooling3d for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* minor lstm rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* more of minor lstm rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* (bi)dynamic_rnn

Signed-off-by: raver119 <raver119@gmail.com>

* templates init order

Signed-off-by: raver119 <raver119@gmail.com>

* Refactored non_max_suppression op.

* Added cuda kernel for non_max_suppression.

* CPU sort by key/value

Signed-off-by: raver119 <raver119@gmail.com>

* CPU sort TAD by key/value

Signed-off-by: raver119 <raver119@gmail.com>

* CPU sort TAD by key/value tests

Signed-off-by: raver119 <raver119@gmail.com>

* Eliminate compiler error with cuda implementation.

* - repaired gradCheck in cuda
- provide conv2d_bp for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* missed signature

Signed-off-by: raver119 <raver119@gmail.com>

* provide depthwise_conv2d_bp for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of lup helper with cuda kernel. Initial commit.

* further work on backprops for convolutions

Signed-off-by: Yurii <yurii@skymind.io>

* CUDA linear sort by key/val

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA tad sort by key/val

Signed-off-by: raver119 <raver119@gmail.com>

* start providing backprop for pooling2d/3d

Signed-off-by: Yurii <yurii@skymind.io>

* Added atomicAdd for bool datatype.

* dynamic partition concept

Signed-off-by: raver119 <raver119@gmail.com>

* dynamic partition concept

Signed-off-by: raver119 <raver119@gmail.com>

* dynamic partition scalar CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* important comment

Signed-off-by: raver119 <raver119@gmail.com>

* fix pooling2d/3d backprop helpers

Signed-off-by: Yurii <yurii@skymind.io>

* Added non-linear test with dynamic_partition.

* Improved test for dynamic_partition.

* dynamic_partition TAD concept

Signed-off-by: raver119 <raver119@gmail.com>

* - dynamic_partition TAD CUDA impl
- dynamic_partition TAD CPU fix

Signed-off-by: raver119 <raver119@gmail.com>

* - rewrite cpu code for upsampling2d/3d
- write cuda code for upsampling2d/3d

Signed-off-by: Yurii <yurii@skymind.io>

* dynamic_stitch CUDA vector case

Signed-off-by: raver119 <raver119@gmail.com>

* dynamic_stitch CUDA TAD case concept

Signed-off-by: raver119 <raver119@gmail.com>

* dynamic_stitch CUDA TAD case impl

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for dynamic_stitch 3D-4D cases.

* minor tests tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed type check for dynamic stitch.

* min/max bp

Signed-off-by: raver119 <raver119@gmail.com>

* rewrite code for upsampling2d/3d cpu

Signed-off-by: Yurii <yurii@skymind.io>

* reduce min/max/norm_max bp

Signed-off-by: raver119 <raver119@gmail.com>

* lup implementation. Additional enhancements.

* provide code for upsampling2d/3d backprop

Signed-off-by: Yurii <yurii@skymind.io>

* weightedCrossEntropyWithLogits

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed template math atomicMul for 64bit ints.

* Refactored dynamic_partition_bp op.

* inverseBroadcast fix

Signed-off-by: raver119 <raver119@gmail.com>

* DynamicPartitionBP test datatype fixed.

* - nd4j_atomicMul Windows fix
- cpu/NDArrayLambda.hpp excluded from CUDA

Signed-off-by: raver119 <raver119@gmail.com>
2019-06-27 18:37:04 +03:00
Alex Black cae4fc9760
Fixes, and ignores for known/logged failing issues (#7943)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-06-25 12:07:28 +10:00
Alex Black 68ea5f3688
Dev branch merge: dev_20190606 (#7904)
* correct logsoftmax loss (#2)

* Small SameDiff listener fix (#4)

* Various fixes (#6)

* #7839 Fix for asXMatrix and tests

* #7866 EmbeddingSequenceLayer dtype fix + test

* #7856 SameDiff save/load stream methods

* #7859 RegressionEvaluation rank 4 fix + tests + axis configuration

* EvaluationBinary 3d/4d

* More evaluation 3d/4d tests

* #7847 Evaluation empty checks

* Small test fix

* #7848 Fix median edge case

* Improve DL4J samediff layer tests

* [WIP] FastText wrapper implemented (#8)

* FastText implemented

* Some fixes

* Fix shapes for wordsNearest

* Validation of input vectors

* Fixes

* Fixed test

* Thread tagged

* Some tweaks

* setContextClassLoader for DeallocatorServiceThread

* Numpy format tests (#1)

* Various fixes (#11)

* #7852 SameDiff gather fix

* #7892 SameDiff placeholder to constant conversion

* #7890 validate input rank for MLN/CG init methods

* Fix broken permute shape calculation

* Permute and gather fixes

* Tests

* #7850 LogSumExp fix + test

* Handful of test fixes

* Empty arrays with non-scalar shapes (#10)

* minor rearrangements for lambdas

* empty tensors with non-scalar shapes

* numpy empty tensors with non-scalar shapes

* few more empty tweaks

* Small fixes

* conv3d signature update

* micro fix in batchnorm mkldnn

* Import fixes

* Fix

* MKL-DNN update

* Small fill fix

* fill with empty input + test

* Fixes

* Small error improvement

* Fix

* one special test

* couple of fixes for lstm

* Rewrite TFGraphMapper.getNDArrayFromTensor to be maintainable and less error prone

* Fixes

* FP16

* Unsigned

* BFloat16

* Fill op - empty tweaks

* - couple of fixes for empty arrays construction
- stack updated

* strided slice fix

* one transform test

* provide method for reducing shapeInfo in case the input array is empty

* Fixed reduceAlongDimensions to use empty input properly.

* couple of broadcast tests

* couple of tests broadcast tests + tweak to make them pass

* add check of non-empty to methods producing sub-arrays

* Fixed reshapeC with zeros in shape.

* complete empty check in reduce_... legacy ops

* Concat and cumsum/prod

* Tweak to empty shape inference on import

* add empty check to the rest of reduce legacy ops

* one more test

* correct typo in evalReduceShapeInfoEmpty

* Added tests for reduce_* ops to tests with zero shapes.

* few more tests for empty reductions

* Fixed strided_slice op with empty case and tests.

* one more empty reduction test

* Fixed strided_slice test.

* add empty check to NDArray::reshapei

* infOrMax

* empty min/max with infinity tests

* made unstack work correctly with empty arrays

* few IndexReduce tests + tweaks for empty shapes

* add test for empty concat

* few tests fixed

* Validation fix for reductions on empty shapes

* Reverse fix

* Reduction shape calc fixes

* SameDiff.generateOutputVariable: don't use shape function to determine number of outputs

* Range fix

* - NDArray constructor updated for scalars/empty arrays
- few tests fixed

* More fixes

* Empty creator fixes

* concat fix

* concat fix

* TF import tests: allow 'both all NaN' and 'both all inf' to pass

* Slice, zero fraction, and reshape fixes

* transpose, gather

* Zero fraction

* scalar cast fix

* Empty reduction axis support

* few more tests fixed

* Fixed input checks conforming with TF for concat op and tests.

* few tests fixed

* matmul scalar shape fix

* Fixed check for data type and scalarity with concat to allow non-empty scalars with vector concats.

* broadcast bool fix

* few more tests

* few more tests

* correct evalReduceShapeInfoEmpty

* argmax/argmin + tests

* one more empty edge case + one more test

* argmax/argmin/realdiv_bp tweaks

* empty reshape test + fix

* Helper fixes

* Small fixes

* Gather test fix

* Gather test fix

* Small fixes

* reduce scalar zero values

* scalar mean workaround

* Remove debug code

* along dim mean workaround

* one more test

* - equalsTo() tweak for empty arrays
- one more test

* broadcast tweaks
2019-06-15 21:34:34 +10:00
Alex Black 32e5cc1945
First round of runtime test improvements (#7875)
* Capsnet test runtime improvements

* Slow test speedups

* Next round of test speed improvements

* More test improvements

* Improve test speed

* Next round of test speedups

* Another round

* More test speedups

* Another round

* Another round of test speedups

* Another round of speedups...

* CuDNN test speedups + more tests extending BaseDL4JTest

* Minor fix + more BaseDL4JTest use in other modules
2019-06-13 20:40:40 +10:00
skymindops b5f0ec072f Eclipse Migration Initial Commit 2019-06-06 15:21:15 +03:00