* First pass on SameDiff op exec debug listener
Signed-off-by: Alex Black <blacka101@gmail.com>
* #7555 DL4J helpers - don't fall back on builtin for op profiler exceptions
Signed-off-by: Alex Black <blacka101@gmail.com>
* Exec debugging listener + fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix import counts for TF ops in OpValidationSuite
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix bad DL4J test configuration
Signed-off-by: Alex Black <blacka101@gmail.com>
* Exec debugging listener polish
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Another fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* wip
* update interface, add null implementations.
* Breaking one test in a weird way.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* createUninitializedDetached refactored.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove create method with unused parameter.
* removed more unused methods.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* removing more unused code.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* last removal of unused code.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* refactor duplicate code from pad methods.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* replace switch with if.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Conv Config validation & tests
Signed-off-by: Ryan Nett <rnett@skymind.io>
* stackOutputs utility method
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use constructor for validation, support negative kernel sizes (inferred from weights)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* better output methods
Signed-off-by: Ryan Nett <rnett@skymind.io>
* move output to be with fit and evaluate
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove SDVariable inplace methods
* import methods
* npe fix in OpVal
* removed SameDiff inplace ops from tests
* Naming updates, moved to centralized methods in SameDiff, should use op_#:# for everything
* quick fixes
* javadoc
* SDVariable eval with placeholders
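The placeholder-eval entry above lets a single SDVariable be evaluated while feeding placeholder values directly. A minimal sketch, assuming SDVariable#eval accepts a placeholder map as the commit subject suggests:

```java
import java.util.Collections;

import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class EvalWithPlaceholders {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // Placeholder: has no value until evaluation time
        SDVariable x = sd.placeHolder("x", DataType.FLOAT, 2, 2);
        SDVariable y = x.mul(2.0);

        // Assumed signature: eval(Map<String, INDArray>) feeds placeholders directly
        INDArray out = y.eval(Collections.singletonMap("x", Nd4j.ones(DataType.FLOAT, 2, 2)));
        System.out.println(out);
    }
}
```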
* use regex match
* better matching
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* Added gradcheck test for dynamic_partition_bp op.
* implementation of dilation op (cpu and cuda)
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed broadcast_dynamic_shape 1D case and tests.
* Fixed usage of default integer arguments.
* Fixed dynamic_partition_bp op and tests.
* Eliminated test with grad check for dynamic_partition_bp op.
* start working on cuda svd - porting the corresponding APIs from the cuSOLVER library
Signed-off-by: Yurii <yurii@skymind.io>
* provide prelu_bp
Signed-off-by: Yurii <yurii@skymind.io>
* provide gruCell_bp (old version ??)
Signed-off-by: Yurii <yurii@skymind.io>
* polishing cumsum_bp and cumprod_bp tests
Signed-off-by: Yurii <yurii@skymind.io>
* provide sparseSoftmaxCrossEntropyWithLogits and sparseSoftmaxCrossEntropyWithLogits_grad
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed atomicMul with float input/output
* implementation of cuda kernel for triu_bp operation
Signed-off-by: Yurii <yurii@skymind.io>
* Refactored lup helper to add parallel computing.
* cusolver libraries
Signed-off-by: raver119 <raver119@gmail.com>
* uncomment cuSolver APIs in svd.cu
Signed-off-by: Yurii <yurii@skymind.io>
* cusolver var
Signed-off-by: raver119 <raver119@gmail.com>
* further work on cuSolver svd
Signed-off-by: Yurii <yurii@skymind.io>
* Implement usage of cuda solver for LUP decomposition.
* correct names in lup functions
Signed-off-by: Yurii <yurii@skymind.io>
* correct svdQR cuda
Signed-off-by: Yurii <yurii@skymind.io>
* provide transpositions of input matrices in case of c order in svdCudaQR
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed implementation issues with LUP using cuda solver.
* Implementation of matrix_determinant helper with cuda kernels. Working revision.
* Implemented log_matrix_determinant helper with cuda kernels.
* implementation of batched cuda svd
Signed-off-by: Yurii <yurii@skymind.io>
* Refactored cholesky helper and implementation of cuda solver cholesky batch.
* implementation of cuda kernel for tile bp
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of cholesky and logdet with cuda kernels.
* implementation of cuda kernel for sru_bidirectional
Signed-off-by: Yurii <yurii@skymind.io>
* Fixed cholesky helper.
* Cholesky op helper implementation. Working double-based cublas implementation.
* bad import excluded
Signed-off-by: raver119 <raver119@gmail.com>
* Finished with cuda implementation of cholesky helper and tests.
* implementation of cuda kernel for sru_bidirectional_backprop operation
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of matrix_inverse op helper with cuda kernels. The first revision.
* start working on gruCell_bp
Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of matrix_inverse helper.
* further work on new gruCell_bp
Signed-off-by: Yurii <yurii@skymind.io>
* cuBLAS related fixes
Signed-off-by: raver119 <raver119@gmail.com>
* calculateOutputShapes() now passes device buffers as well
Signed-off-by: raver119 <raver119@gmail.com>
* special concat/average/accumulate ops now init host pointers
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* additional CudaDataBufferFactory signatures for certain data types
Signed-off-by: raver119 <raver119@gmail.com>
* cuSolver host buffer
Signed-off-by: raver119 <raver119@gmail.com>
* buffer to buffer memcpy host ptr allocation
Signed-off-by: raver119 <raver119@gmail.com>
* softmax and logSoftmax w/ dimension
Signed-off-by: Ryan Nett <rnett@skymind.io>
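For the softmax/logSoftmax-with-dimension entry, a hedged usage sketch; it assumes the SDNN namespace exposes dimension-taking overloads as the commit subject implies:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;

public class SoftmaxWithDimension {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable logits = sd.placeHolder("logits", DataType.FLOAT, -1, 10);

        // Softmax/logSoftmax along dimension 1 (the class dimension)
        // rather than the previous fixed default
        SDVariable sm = sd.nn().softmax(logits, 1);
        SDVariable lsm = sd.nn().logSoftmax(logits, 1);
    }
}
```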
* start of while
Signed-off-by: Ryan Nett <rnett@skymind.io>
* if, start of javadocs
Signed-off-by: Ryan Nett <rnett@skymind.io>
* while forward pass working, backprop WIP
Signed-off-by: Ryan Nett <rnett@skymind.io>
* no backprop
Signed-off-by: Ryan Nett <rnett@skymind.io>
* TensorFlow style if/while (& tests), name scope fixes (and test), argument interceptor (for if/while), use '_' in op names instead of ':'
Signed-off-by: Ryan Nett <rnett@skymind.io>
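The if/while entry brings TensorFlow-style control flow to SameDiff. A rough sketch of the intended usage; the whileLoop method and the lambda shapes below are assumptions based on the commit description, not confirmed signatures:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class WhileLoopSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable limit = sd.constant(Nd4j.scalar(10.0f));
        SDVariable i = sd.constant(Nd4j.scalar(0.0f));

        // Assumed API shape: loop variables, a condition lambda returning a
        // boolean SDVariable, and a body lambda returning updated loop variables
        SDVariable[] out = sd.whileLoop(
                new SDVariable[]{i},
                (s, vars) -> vars[0].lt(limit),                   // while i < 10
                (s, vars) -> new SDVariable[]{vars[0].add(1.0)}); // i = i + 1
    }
}
```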
* javadoc
Signed-off-by: Ryan Nett <rnett@skymind.io>
* many fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* many fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* Some fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* cleanup when If condition doesn't return a boolean
Signed-off-by: Ryan Nett <rnett@skymind.io>
* serialization fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use constants instead of magic numbers
Signed-off-by: Ryan Nett <rnett@skymind.io>
* LeakyReLU: Use setScalar to set alpha correctly in TF import
LogX: remove incorrect TF mapping
Pow: remove TF import method (no mapping)
BaseOp: remove duplicate extraArgs
Signed-off-by: Ryan Nett <rnett@skymind.io>
* un-ignore cifar-10 gan, as it is now passing
Signed-off-by: Ryan Nett <rnett@skymind.io>
* minor cmake changes to make macos happy
* space_to_batch/batch_to_space validation fix
* choose op tweaks
- tests updated to match applied tweaks
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* get rid of bad import
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* choose now uses shape function
- choose test updated
* System info export for debugging and bug reporting
Signed-off-by: Ryan Nett <rnett@skymind.io>
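The system-info entries (this one through the classpath fixes below) add a diagnostic report for bug filing: OS, JVM, backend, workspace and GPU details in one dump. A usage sketch; the SystemInfo class and printSystemInfo method names are assumptions about the tool's surface:

```java
import org.nd4j.systeminfo.SystemInfo;

public class PrintSystemInfo {
    public static void main(String[] args) {
        // Assumed entry point: prints OS, JVM, ND4J backend, workspace and GPU
        // info (nvidia-smi / nvcc output where available) for bug reports
        SystemInfo.printSystemInfo();
    }
}
```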
* class name fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add version information, pointer memory info
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add nvidia-smi and nvcc info
Signed-off-by: Ryan Nett <rnett@skymind.io>
* line cleanup
Signed-off-by: Ryan Nett <rnett@skymind.io>
* nvidia-smi run works
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add oshi dependency
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use OS info, add workspaces info
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use ServiceLoader to load GPU information
Signed-off-by: Ryan Nett <rnett@skymind.io>
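The ServiceLoader entry swaps hard-coded GPU queries for Java's standard service discovery, so the CUDA backend can register a provider that the core module picks up at runtime. A generic sketch of the pattern; the GPUInfoProvider interface name is hypothetical:

```java
import java.util.ServiceLoader;

// Hypothetical provider interface; a backend would ship an implementation
// registered in META-INF/services (the "register service" commit below)
interface GPUInfoProvider {
    String getGpuInfo();
}

public class GpuInfoLoader {
    public static void main(String[] args) {
        // ServiceLoader discovers every implementation on the classpath
        for (GPUInfoProvider provider : ServiceLoader.load(GPUInfoProvider.class)) {
            System.out.println(provider.getGpuInfo());
        }
    }
}
```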
* register service
Signed-off-by: Ryan Nett <rnett@skymind.io>
* moved service out of NativeOpsHolder (private constructor)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* added newline
Signed-off-by: Ryan Nett <rnett@skymind.io>
* added license
Signed-off-by: Ryan Nett <rnett@skymind.io>
* and one more
Signed-off-by: Ryan Nett <rnett@skymind.io>
* copyright update
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* removed unused imports
Signed-off-by: Ryan Nett <rnett@skymind.io>
* removed more unused imports
Signed-off-by: Ryan Nett <rnett@skymind.io>
* close streams
Signed-off-by: Ryan Nett <rnett@skymind.io>
* and another one
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use method
Signed-off-by: Ryan Nett <rnett@skymind.io>
* one more copyright
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove double license
Signed-off-by: Ryan Nett <rnett@skymind.io>
* moved test to correct package
Signed-off-by: Ryan Nett <rnett@skymind.io>
* classpath update
Signed-off-by: Ryan Nett <rnett@skymind.io>
* classpath fix for Java > 8
Signed-off-by: Ryan Nett <rnett@skymind.io>
* changed [] to ...
Signed-off-by: Ryan Nett <rnett@skymind.io>
* added randn(long seed, int... shape)
Signed-off-by: Ryan Nett <rnett@skymind.io>
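For the new seeded randn overload, a quick usage sketch, assuming it lives on the Nd4j factory alongside the existing randn variants:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class SeededRandn {
    public static void main(String[] args) {
        // Same seed and shape -> identical normally-distributed samples
        INDArray a = Nd4j.randn(42L, 3, 4);
        INDArray b = Nd4j.randn(42L, 3, 4);
        System.out.println(a.equals(b)); // true
    }
}
```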
* Fixed a couple of methods
Signed-off-by: Ryan Nett <rnett@skymind.io>
* ToString methods w/ options
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fixes, less toString methods, and a few ops I missed
Signed-off-by: Ryan Nett <rnett@skymind.io>
* some javadocs, change int... to long... where possible
Signed-off-by: Ryan Nett <rnett@skymind.io>
* another javadoc
Signed-off-by: Ryan Nett <rnett@skymind.io>
* javadoc fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* just javadoc in INDArray
Signed-off-by: Ryan Nett <rnett@skymind.io>
* local/static fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* Add @NonNull to options
Signed-off-by: Ryan Nett <rnett@skymind.io>
* javadoc updates/fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* more @NonNulls
Signed-off-by: Ryan Nett <rnett@skymind.io>
* even more @NonNulls, this time on varargs
Signed-off-by: Ryan Nett <rnett@skymind.io>
* automatically add placeholders for keras_learning_phase if required
Signed-off-by: Ryan Nett <rnett@skymind.io>
* comment
Signed-off-by: Ryan Nett <rnett@skymind.io>
* up to line 500.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 1120.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to 1286.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 2400.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 2500.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 3600.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 4000.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 4500.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 5000.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to line 6000.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Added observation classes and tests
Signed-off-by: unknown <aboulang2002@yahoo.com>
* Now uses DataSetPreProcessors
Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>
* CompositeDataSetPreProcessor can now stop processing on empty dataset; Some DataSetPreProcessors moving from RL4J to ND4J
Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>
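The CompositeDataSetPreProcessor change chains preprocessors and can short-circuit once one of them empties the DataSet. A sketch of the intended behaviour; the stop-on-empty boolean constructor argument is an assumption taken from the commit subject:

```java
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.dataset.api.DataSetPreProcessor;
import org.nd4j.linalg.dataset.api.preprocessor.CompositeDataSetPreProcessor;

public class CompositePreProcessorSketch {
    public static void main(String[] args) {
        DataSetPreProcessor filter = ds -> { /* may remove all examples */ };
        DataSetPreProcessor normalize = ds -> { /* assumes a non-empty dataset */ };

        // Assumed flag: stop the chain if a step leaves the DataSet empty,
        // so later preprocessors never see an empty dataset
        CompositeDataSetPreProcessor pre =
                new CompositeDataSetPreProcessor(true, filter, normalize);
        pre.preProcess(new DataSet());
    }
}
```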
* Made requested minor changes
Signed-off-by: Alexandre Boulanger <Alexandre.Boulanger@ia.ca>
Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>