* new (for java at least) backprop ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* update activation functions
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add differential functions for SameDiff
Signed-off-by: Ryan Nett <rnett@skymind.io>
* deprecate old ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* update correct old ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* update ops backprop to use new ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* misc updates for deprecated functions (mostly Nd4j.rand w/ vararg shape)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove old imports
Signed-off-by: Ryan Nett <rnett@skymind.io>
* one test for alex
Signed-off-by: raver119 <raver119@gmail.com>
* fix
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of safety offset in cpp
Signed-off-by: raver119 <raver119@gmail.com>
* bfloat16
Signed-off-by: raver119 <raver119@gmail.com>
* minor test rearrangement to fastpath launch
Signed-off-by: raver119 <raver119@gmail.com>
* - atomicAdd/Mul/Div fix for float16/bfloat16 misalignment
- one special test for maxpoolbp java
- safety offset of 8 bytes is back to libnd4j legacy
Signed-off-by: raver119 <raver119@gmail.com>
* Add Java op class for ReLU derivative, and use in Activation ReLU
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* - add one additional test for svd
* - provide float argument in eye op to specify the data type of the output array
Signed-off-by: Yurii <yurii@skymind.io>
* - add cuda capability check to mmulHelper
Signed-off-by: Yurii <yurii@skymind.io>
* - use another method for device id evaluation
Signed-off-by: Yurii <yurii@skymind.io>
* Eye data type as T argument
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored kernels for segment_max/min/sum ops.
* Refactored segment_prod kernels.
* Refactored segment_prod kernels.
* DynamicPartition test
Signed-off-by: raver119 <raver119@gmail.com>
* Added linear test for dynamic_partition op.
* Refactored test with int datatype.
* some logging
Signed-off-by: raver119 <raver119@gmail.com>
* some logging
Signed-off-by: raver119 <raver119@gmail.com>
* some logging
Signed-off-by: raver119 <raver119@gmail.com>
* dynamicPartition fix
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of some logging
Signed-off-by: raver119 <raver119@gmail.com>
* one more test for dynamic_stitch
Signed-off-by: raver119 <raver119@gmail.com>
* one more test for dynamic_stitch
Signed-off-by: raver119 <raver119@gmail.com>
* empty check for stitch
Signed-off-by: raver119 <raver119@gmail.com>
* minor print changes
Signed-off-by: raver119 <raver119@gmail.com>
* remove some unneeded java-side output shape calculations
Signed-off-by: Ryan Nett <rnett@skymind.io>
* delete Broadcast
Signed-off-by: Ryan Nett <rnett@skymind.io>
* delete Linear and Module,
Signed-off-by: Ryan Nett <rnett@skymind.io>
* update Identity, HashCode, and NoOp
Signed-off-by: Ryan Nett <rnett@skymind.io>
* removed Cast java-side shape function, added tests and SDVariable.isEmpty
Signed-off-by: Ryan Nett <rnett@skymind.io>
* ignoring test w/ issues on master
Signed-off-by: Ryan Nett <rnett@skymind.io>
* noop needs more work, fixed BaseArithmeticBackprop and BaseDynamicTransform ops
merge in master for c++ build fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix EqualTo
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix other cond ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* "fake" ops calculateOutputShape() throws exception
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use c++ shape calc for Linspace
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix exception message, move most to BaseCompatOp
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove SDVariable.isEmpty
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove commented out code
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove unneeded resolveProperties methods
Signed-off-by: Ryan Nett <rnett@skymind.io>
* final fixes, make final to prevent more from being added
Signed-off-by: Ryan Nett <rnett@skymind.io>
* gather fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* deprecate DifferentialFunction resolveProps
Signed-off-by: Ryan Nett <rnett@skymind.io>
* one noop test
Signed-off-by: raver119 <raver119@gmail.com>
* skip input validation for no-input ops
Signed-off-by: raver119 <raver119@gmail.com>
* - one more noop empty test
- one more validation before sync
Signed-off-by: raver119 <raver119@gmail.com>
* typo
Signed-off-by: raver119 <raver119@gmail.com>
* one more validation fix
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA empty reductions java side
Signed-off-by: raver119 <raver119@gmail.com>
* one svd test
Signed-off-by: raver119 <raver119@gmail.com>
* Corrected segment_mean helpers and added another test.
* Refactored segment_mean kernels to avoid a race condition.
* CUDA empty reduction
Signed-off-by: raver119 <raver119@gmail.com>
* - listdiff synchronization fix for CUDA
- listdiff test
Signed-off-by: raver119 <raver119@gmail.com>
* - IndexReduce ops now allow INDEXING_TYPES output
- topK op accepts only INDEXING_TYPES as output
Signed-off-by: raver119 <raver119@gmail.com>
* small fix of compiler warnings in nd4j.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* indarray javadoc start.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* First steps for protobuf version upgrade
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Phase 2
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Update imports to shaded protobuf
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Version fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Switch to single execution for protobuf codegen to work around plugin bug
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Automatically delete old PB generated files after name change
Signed-off-by: Alex Black <blacka101@gmail.com>
* one test for maxpool2d_bp
Signed-off-by: raver119 <raver119@gmail.com>
* - maxpool2d_bp cuda fix for NaNs
- streamSync after each custom op execution
Signed-off-by: raver119 <raver119@gmail.com>
* one test for size
Signed-off-by: raver119 <raver119@gmail.com>
* - few tests for size op
- size/rank/size_at ops now use p instead of assign
Signed-off-by: raver119 <raver119@gmail.com>
* throw exception if op execution failed
Signed-off-by: raver119 <raver119@gmail.com>
* expected for test
Signed-off-by: raver119 <raver119@gmail.com>
* one more ismax test
Signed-off-by: raver119 <raver119@gmail.com>
* ismax view fix
Signed-off-by: raver119 <raver119@gmail.com>
* Nd4j pad update
Signed-off-by: Ryan Nett <rnett@skymind.io>
* switched from guava Immutables to Collections.unmodifiableList/Map
Signed-off-by: Ryan Nett <rnett@skymind.io>
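For illustration only, a minimal Java sketch of the pattern this switch implies (plain JDK collections standing in for Guava's ImmutableList/ImmutableMap; the class and variable names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnmodifiableListSketch {
    public static void main(String[] args) {
        List<String> opNames = new ArrayList<>();
        opNames.add("relu");
        // Defensive copy + unmodifiable wrapper approximates Guava's ImmutableList.copyOf():
        // callers get a read-only view, and later changes to opNames are not visible through it.
        List<String> readOnly = Collections.unmodifiableList(new ArrayList<>(opNames));
        System.out.println(readOnly); // [relu]
    }
}
```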
* javadoc
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use new pad
Signed-off-by: Ryan Nett <rnett@skymind.io>
* conv tests use OpValidation
Signed-off-by: Ryan Nett <rnett@skymind.io>
* deconv3d overrides
Signed-off-by: Ryan Nett <rnett@skymind.io>
* test fix for the new pad method
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more test fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more test fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* rename SameDiff function methods to op (except for the actual SameDiff function ones)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more pad overloads, test fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* test updates
Signed-off-by: Ryan Nett <rnett@skymind.io>
* conv1d test
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove Conv1D tf import (there isn't a TF conv1d op)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove numThreads from Nd4j
Signed-off-by: Ryan Nett <rnett@skymind.io>
* replace Old ops with their newer versions, deprecate ones that haven't already been deprecated
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove use of setNumThreads
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix for Reverse and ATan2
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix test for wrong equals type
Signed-off-by: Ryan Nett <rnett@skymind.io>
* well it works now
Signed-off-by: Ryan Nett <rnett@skymind.io>
* better javadocs
Signed-off-by: Ryan Nett <rnett@skymind.io>
* NonNulls
Signed-off-by: Ryan Nett <rnett@skymind.io>
* better array literal
Signed-off-by: Ryan Nett <rnett@skymind.io>
* re-add tf import stuff (will remove later)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* conv1d config load fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* partial config usage changes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove Old op classes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* config property fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* removed one too many ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* Jar packaging for maven
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Typo fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* minimal viable prototype for SD
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Tests corrected
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* few fixes for bfloat16 in java and cpp (#114)
Signed-off-by: raver119 <raver119@gmail.com>
* Nd4j refactoring (#112)
* refactoring
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
* fix: make test public.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* make test public.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* fixes read refactoring.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Enabled test
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Test copied from nd4j
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* [WIP] bitwise ops (#115)
* - cyclic_shift_bits + test
- shift_bits + test
Signed-off-by: raver119 <raver119@gmail.com>
* OMP_IF replacement
Signed-off-by: raver119 <raver119@gmail.com>
* Thin wrapper added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Cleanup
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Shugeo cuda tests (#116)
* Added tests for get_seed/set_seed ops.
* Added missed tests for scatter_sub/mul/div ops.
* Added tests for hardsigmoid and hardsigmoid_bp.
* Added tests for hardtanh and hardtanh_bp ops.
* Added test for histogram op.
* Added tests for identity op.
* Refactored mergemaxindex op. Added tests for log1p, mergemaxindex, mod and mod_bp ops.
* Fixed tests for FloorDiv.
* Added test for rank op.
* Added tests for rationaltanh/rationaltanh_bp ops.
* Added tests for realdiv/realdiv_bp.
* Added tests for rectifiedtanh/_bp ops.
* Added tests for shapes_of op.
* Added tests for shapes_of op.
* Added tests for size op.
* Added tests for softplus/_bp ops.
* Added tests for softsign/_bp ops.
* Added tests for toggle_bits op. Fixed processing of OP_IMPL and similar definitions.
* Added test for truncatediv op.
* Added another test for truncatediv op.
* Added another test for histogram.
* Added tests for unstack_list op.
* Refactored to_int32/uint32/float16/float32/double/int64/uint64 ops and tests.
* Refactored mergemaxindex op helper for cuda platform and tests.
* Fixed cuda kernel for histogram op helper.
* Refactor skipgram to avoid early buffer shift.
* Fixed check in non_max_suppression op cuda helper. Added cuda kernel implementation for skipgram op helpers.
* Added implementation of skipgram op helper for cuda platform. Working revision
* Fixed mergeMaxIndex kernel and move it to separate source file.
* Adding arithmetic
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Eliminated memory leaks and dropped waste prints with tests. (#117)
* Added tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* fix test
Signed-off-by: raver119 <raver119@gmail.com>
* no openmp for ClipByGlobalNorm
Signed-off-by: raver119 <raver119@gmail.com>
* Stubs for ops
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* [WIP] right shift ops (#118)
* right shift ops
Signed-off-by: raver119 <raver119@gmail.com>
* typo
Signed-off-by: raver119 <raver119@gmail.com>
* rotr test
Signed-off-by: raver119 <raver119@gmail.com>
* fix: IOException no longer thrown by read(). (#120)
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Small fix in TensorflowConversion class (#121)
Signed-off-by: Alex Black <blacka101@gmail.com>
* Shyrma concat2 (#119)
* - rewrite/improve concat
Signed-off-by: Yurii <yurii@skymind.io>
* - got rid of unnecessary argument in concat kernel
Signed-off-by: Yurii <yurii@skymind.io>
* InferenceSession additional validation for shape calc (#122)
Signed-off-by: Alex Black <blacka101@gmail.com>
* [WIP] build fix (#124)
* AffinityManager changes
Signed-off-by: raver119 <raver119@gmail.com>
* build fixes
Signed-off-by: raver119 <raver119@gmail.com>
* OP/CONFIGURABLE_OP shapefn fix (#125)
Signed-off-by: raver119 <raver119@gmail.com>
* Some ops added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Nd4j refactoring (last one!) (#123)
* fix: IOException no longer thrown by read().
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* refactoring
* last refactorings
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Advanced tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* [WIP] Java wrappers (#126)
* shift/rshift/rotl/rotr java/sd wrappers
Signed-off-by: raver119 <raver119@gmail.com>
* few additional wrappers
Signed-off-by: raver119 <raver119@gmail.com>
* minor naming tweak
Signed-off-by: raver119 <raver119@gmail.com>
* Test added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* one more build fix
Signed-off-by: raver119 <raver119@gmail.com>
* Jar packaging for maven
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Typo fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* minimal viable prototype for SD
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Tests corrected
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Enabled test
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Test copied from nd4j
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Thin wrapper added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Cleanup
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Adding arithmetic
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Added tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Stubs for ops
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Some ops added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Advanced tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Test added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Ops added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Small build fixes (#127)
* Small build fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix RL4J
Signed-off-by: Alex Black <blacka101@gmail.com>
* Test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Another fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* parent module name fix
Signed-off-by: raver119 <raver119@gmail.com>
* [WIP] Roll rewritten (#128)
* Process correct input vector.
* Added tests for roll.
* Refactored roll to conform with TF. Eliminated memory leaks with Roll op tests.
* no thread_local for cpu
Signed-off-by: raver119 <raver119@gmail.com>
* Jar packaging for maven
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Typo fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* minimal viable prototype for SD
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Tests corrected
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Enabled test
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Test copied from nd4j
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Thin wrapper added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Cleanup
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Adding arithmetic
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Added tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Stubs for ops
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Some ops added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Advanced tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Test added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Ops added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Tests added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Boolean logic ops
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Test added
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Shift operations
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* fix: IOException no longer thrown by read().
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* refactoring
* last refactorings
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* refactoring
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
* fix: make test public.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* make test public.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* fixes read refactoring.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* First pass on SameDiff op exec debug listener
Signed-off-by: Alex Black <blacka101@gmail.com>
* #7555 DL4J helpers - don't fall back on builtin for op profiler exceptions
Signed-off-by: Alex Black <blacka101@gmail.com>
* Exec debugging listener + fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix import counts for TF ops in OpValidationSuite
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix bad DL4J test configuration
Signed-off-by: Alex Black <blacka101@gmail.com>
* Exec debugging listener polish
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Another fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* wip
* update interface, add null implementations.
* Breaking one test in a weird way.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* createUninitializedDetached refactored.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove create method with unused parameter.
* removed more unused methods.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* removing more unused code.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* last removal of unused code.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* refactor duplicate code from pad methods.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* replace switch with if.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Conv Config validation & tests
Signed-off-by: Ryan Nett <rnett@skymind.io>
* stackOutputs utility method
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use constructor for validation, support negative kernel sizes (inferred from weights)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* better output methods
Signed-off-by: Ryan Nett <rnett@skymind.io>
* move output to be with fit and evaluate
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove SDVariable inplace methods
* import methods
* npe fix in OpVal
* removed SameDiff inplace ops from tests
* Naming updates, moved to centralized methods in SameDiff, should use op_#:# for everything
* quick fixes
* javadoc
* SDVariable eval with placeholders
* use regex match
* better matching
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* Added gradcheck test for dynamic_partition_bp op.
* - implementation of dilation op (cpu and cuda)
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed broadcast_dynamic_shape 1D case and tests.
* Fixed usage of default integer arguments.
* Fixed dynamic_partition_bp op and tests.
* Eliminated test with grad check for dynamic_partition_bp op.
* start working on cuda svd - porting available corresponding api from cuSOLVER library
Signed-off-by: Yurii <yurii@skymind.io>
* provide prelu_bp
Signed-off-by: Yurii <yurii@skymind.io>
* - provide gruCell_bp (old version ??)
Signed-off-by: Yurii <yurii@skymind.io>
* - polishing cumsum_bp and cumprod_bp tests
Signed-off-by: Yurii <yurii@skymind.io>
* provide sparseSoftmaxCrossEntropyWithLogits and sparseSoftmaxCrossEntropyWithLogits_grad
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed atomicMul with float input/output
* implementation of cuda kernel for triu_bp operation
Signed-off-by: Yurii <yurii@skymind.io>
* Refactored lup helper to add parallel computing.
* cusolver libraries
Signed-off-by: raver119 <raver119@gmail.com>
* uncomment cuSolver APIs in svd.cu
Signed-off-by: Yurii <yurii@skymind.io>
* cusolver var
Signed-off-by: raver119 <raver119@gmail.com>
* - further work on cuSolver svd
Signed-off-by: Yurii <yurii@skymind.io>
* Implement usage of cuda solver for LUP decomposition.
* - correct names in lup functions
Signed-off-by: Yurii <yurii@skymind.io>
* correct svdQR cuda
Signed-off-by: Yurii <yurii@skymind.io>
* - provide transpositions of input matrices in case of c order in svdCudaQR
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed implementation issues with LUP using cuda solver.
* Implementation of matrix_determinant helper with cuda kernels. Working revision.
* Implemented log_matrix_determinant helper with cuda kernels.
* - implementation of batched cuda svd
Signed-off-by: Yurii <yurii@skymind.io>
* Refactored cholesky helper and implementation of cuda solver cholesky batch.
* - implementation of cuda kernel for tile bp
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of cholesky and logdet with cuda kernels.
* - implementation of cuda kernel for sru_bidirectional
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed cholesky helper.
* Cholesky op helper implementation. Working double-based cublas implementation.
* bad import excluded
Signed-off-by: raver119 <raver119@gmail.com>
* Finished with cuda implementation of cholesky helper and tests.
* - implementation of cuda kernel for sru_bidirectional_backprop operation
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of matrix_inverse op helper with cuda kernels. The first revision.
* - start working on gruCell_bp
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of matrix_inverse helper.
* - further work on new gruCell_bp
Signed-off-by: Yurii <yurii@skymind.io>
* cuBLAS related fixes
Signed-off-by: raver119 <raver119@gmail.com>
* calculateOutputShapes() now passes device buffers as well
Signed-off-by: raver119 <raver119@gmail.com>
* special concat/average/accumulate now init host pointers
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* additional CudaDataBufferFactory signatures for certain data types
Signed-off-by: raver119 <raver119@gmail.com>
* cuSolver host buffer
Signed-off-by: raver119 <raver119@gmail.com>
* buffer to buffer memcpy host ptr allocation
Signed-off-by: raver119 <raver119@gmail.com>
* softmax and logSoftmax w/ dimension
Signed-off-by: Ryan Nett <rnett@skymind.io>
* start of while
Signed-off-by: Ryan Nett <rnett@skymind.io>
* if, start of javadocs
Signed-off-by: Ryan Nett <rnett@skymind.io>
* while forward pass working, backprop WIP
Signed-off-by: Ryan Nett <rnett@skymind.io>
* no backprop
Signed-off-by: Ryan Nett <rnett@skymind.io>
* Tensorflow style if/while (& tests), name scope fixes (and test), argument interceptor (for if/while), use '_' in op names instead of ':'
Signed-off-by: Ryan Nett <rnett@skymind.io>
* javadoc
Signed-off-by: Ryan Nett <rnett@skymind.io>
* many fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* many fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* Some fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* cleanup if condition doesn't return boolean
Signed-off-by: Ryan Nett <rnett@skymind.io>
* serialization fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use constants instead of magic numbers
Signed-off-by: Ryan Nett <rnett@skymind.io>
* LeakyReLU: Use serScalar to set alpha correctly in TF import
LogX: remove incorrect TF mapping
Pow: remove TF import method (no mapping)
BaseOp: remove duplicate extraArgs
Signed-off-by: Ryan Nett <rnett@skymind.io>
* un-ignore cifar-10 gan, as it is now passing
Signed-off-by: Ryan Nett <rnett@skymind.io>
* new issue for b_d_s (old is closed), new issue for boolean_mask (old is fixed)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* date update
Signed-off-by: Ryan Nett <rnett@skymind.io>
* "embedding_lookup/.*multiple.*" is passing
Signed-off-by: Ryan Nett <rnett@skymind.io>
* new bincount issue (old was fixed)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* date update
Signed-off-by: Ryan Nett <rnett@skymind.io>
* where op comment
Signed-off-by: Ryan Nett <rnett@skymind.io>
* where passing
Signed-off-by: Ryan Nett <rnett@skymind.io>
* ignore gan
Signed-off-by: Ryan Nett <rnett@skymind.io>
* scatter_nd/rank2shape_2indices passing
Signed-off-by: Ryan Nett <rnett@skymind.io>
* topk update, test fix (need to verify)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* batch_to_space issue update
Signed-off-by: Ryan Nett <rnett@skymind.io>
* updates, no change
Signed-off-by: Ryan Nett <rnett@skymind.io>
* date updates
Signed-off-by: Ryan Nett <rnett@skymind.io>
* batch_to_space is fixed
Signed-off-by: Ryan Nett <rnett@skymind.io>
* equality function helper
Signed-off-by: Ryan Nett <rnett@skymind.io>
* topk passing
Signed-off-by: Ryan Nett <rnett@skymind.io>
* dropout equality check
Signed-off-by: Ryan Nett <rnett@skymind.io>
* adding ignores for zoo models
Signed-off-by: Ryan Nett <rnett@skymind.io>
* ignore libnd4j rnn tests
Signed-off-by: Ryan Nett <rnett@skymind.io>
* added batch to space test
Signed-off-by: Ryan Nett <rnett@skymind.io>
* issue comments
Signed-off-by: Ryan Nett <rnett@skymind.io>
* minor cmake changes to make macos happy
* space_to_batch/batch_to_space validation fix
* - choose op tweaks
- tests updated to match applied tweaks
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* - get rid of bad import
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* - choose now uses shape function
- choose test updated
* System info export for debugging and bug reporting
Signed-off-by: Ryan Nett <rnett@skymind.io>
* class name fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add version information, pointer memory info
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add nvidia-smi and nvcc info
Signed-off-by: Ryan Nett <rnett@skymind.io>
* line cleanup
Signed-off-by: Ryan Nett <rnett@skymind.io>
* nvidia-smi run works
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add oshi dependency
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use OS info, add workspaces info
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use ServiceLoader to load GPU information
Signed-off-by: Ryan Nett <rnett@skymind.io>
* register service
Signed-off-by: Ryan Nett <rnett@skymind.io>
* moved service out of NativeOpsHolder (private constructor)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* added newline
Signed-off-by: Ryan Nett <rnett@skymind.io>
* added license
Signed-off-by: Ryan Nett <rnett@skymind.io>
* and one more
Signed-off-by: Ryan Nett <rnett@skymind.io>
* copyright update
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* removed unused imports
Signed-off-by: Ryan Nett <rnett@skymind.io>
* removed more unused imports
Signed-off-by: Ryan Nett <rnett@skymind.io>
* close streams
Signed-off-by: Ryan Nett <rnett@skymind.io>
* and another one
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use method
Signed-off-by: Ryan Nett <rnett@skymind.io>
* one more copyright
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove double license
Signed-off-by: Ryan Nett <rnett@skymind.io>
* moved test to correct package
Signed-off-by: Ryan Nett <rnett@skymind.io>
* classpath update
Signed-off-by: Ryan Nett <rnett@skymind.io>
* classpath for java >8 fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* changed [] to ...
Signed-off-by: Ryan Nett <rnett@skymind.io>
* added randn(long seed, int... shape)
Signed-off-by: Ryan Nett <rnett@skymind.io>
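A minimal usage sketch of the overload named above; the signature is taken from the commit message itself, and the surrounding Nd4j/INDArray usage is assumed, not verified against the final API:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class RandnSeedSketch {
    public static void main(String[] args) {
        // New overload: randn(long seed, int... shape).
        // Assumption: with a fixed seed the draw is reproducible.
        INDArray a = Nd4j.randn(12345L, 2, 3);
        INDArray b = Nd4j.randn(12345L, 2, 3);
        System.out.println(a.equals(b)); // expected: true
    }
}
```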
* Fixed a couple of methods
Signed-off-by: Ryan Nett <rnett@skymind.io>
* ToString methods w/ options
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fixes, less toString methods, and a few ops I missed
Signed-off-by: Ryan Nett <rnett@skymind.io>
* some javadocs, change int... to long... where possible
Signed-off-by: Ryan Nett <rnett@skymind.io>
* another javadoc
Signed-off-by: Ryan Nett <rnett@skymind.io>
* javadoc fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* just javadoc in INDArray
Signed-off-by: Ryan Nett <rnett@skymind.io>
* local/static fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* Add @NonNull to options
Signed-off-by: Ryan Nett <rnett@skymind.io>
* javadoc updates/fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more @NonNulls
Signed-off-by: Ryan Nett <rnett@skymind.io>
* even more @NonNulls, this time on varargs
Signed-off-by: Ryan Nett <rnett@skymind.io>
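For context, a small sketch of what a @NonNull on a varargs parameter buys, assuming these commits refer to Lombok's @NonNull; the method below is hypothetical:

```java
import lombok.NonNull;

public class NonNullVarargsSketch {
    // Lombok inserts a null check for the varargs array reference itself;
    // individual elements are not checked.
    public static long product(@NonNull long... dims) {
        long p = 1;
        for (long d : dims) {
            p *= d;
        }
        return p;
    }

    public static void main(String[] args) {
        System.out.println(product(2, 3, 4));   // 24
        // product((long[]) null);              // would throw NullPointerException
    }
}
```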
* automatically add placeholders for keras_learning_phase if required
Signed-off-by: Ryan Nett <rnett@skymind.io>
* comment
Signed-off-by: Ryan Nett <rnett@skymind.io>
* up to line 500.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 1120.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to 1286.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 2400.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 2500.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 3600.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 4000.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 4500.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 5000.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 6000.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Added observation classes and tests
Signed-off-by: unknown <aboulang2002@yahoo.com>
* Now uses DataSetPreProcessors
Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>
* CompositeDataSetPreProcessor can now stop processing on empty dataset; Some DataSetPreProcessors moving from RL4J to ND4J
Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>
* Did requested minor changes
Signed-off-by: Alexandre Boulanger <Alexandre.Boulanger@ia.ca>
Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>
* fix #7983
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* small fix.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* fix javadoc.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>