Commit Graph

735 Commits (e6a7b94fe43efa8212ccdee680ba5113e1d10fdc)

Author SHA1 Message Date
Robert Altena aa4af2c36d refactor duplicate code from pad methods. (#86)
* refactor duplicate code from pad methods.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* replace switch with if.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-05 11:24:20 +10:00
Ryan Nett d4e7997134 SameDiff Convolution Config validation, better output methods (#82)
* Conv Config validation & tests

Signed-off-by: Ryan Nett <rnett@skymind.io>

* stackOutputs utility method

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use constructor for validation, support negative kernel sizes (inferred from weights)

Signed-off-by: Ryan Nett <rnett@skymind.io>

* better output methods

Signed-off-by: Ryan Nett <rnett@skymind.io>

* move output to be with fit and evaluate

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* more fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-08-05 11:24:20 +10:00
Samuel Audet 8d1fe8b1b3 Fix functions of OpaqueVariablesSet 2019-08-05 11:24:20 +10:00
Susan Eraly b57f1d52cc Keras model import - updater lr fix (#84)
* Keras model import - updater lr fix

Signed-off-by: eraly <susan.eraly@gmail.com>

* Keras model import - updater lr fix, cleanup

Signed-off-by: eraly <susan.eraly@gmail.com>
2019-08-05 11:24:19 +10:00
Robert Altena fa98b83295 remove duplicate code in createBufferDetached. (#83)
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-05 11:22:59 +10:00
Samuel Audet 526b782e51 Create C wrappers for some of the C++ classes currently used by ND4J 2019-08-05 11:22:59 +10:00
Samuel Audet 8881bfe7aa Adapt the Java wrappers in ND4J generated with JavaCPP 2019-08-05 11:22:59 +10:00
Samuel Audet 780ae628a9 Actually export functions from NativeOps.h 2019-08-05 11:22:59 +10:00
Samuel Audet dcc72e23b2 Refactor NativeOps.h to export C functions 2019-08-05 11:22:59 +10:00
Alex Black fad8da878f Various DL4J/ND4J fixes (#81)
* #7954 Force refresh of UI when switching tabs on overview page

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8017 Concurrent modification exception (synchronize) fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8033 Don't initialize updater in middle of writing memory crash dump

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8208 Fix shape checks for ND4J int[] creator methods

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #6385 #7992 Keras import naming fixes + cleanup

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #8016 Upsampling3D - add NDHWC format support

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 11:21:23 +10:00
raver119 7c5c84bea8 4 additional tests
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-05 11:21:23 +10:00
Robert Altena 70dbe70594 fix javadoc. (#76)
* fix javadoc.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>

* replace most @see with @link s.

Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-05 11:21:23 +10:00
Ryan Nett ac321265a7 SameDiff fixes and naming (#78)
* remove SDVariable inplace methods

* import methods

* npe fix in OpVal

* removed SameDiff inplace ops from tests

* Naming updates, moved to centralized methods in SameDiff; should use op_#:# for everything

* quick fixes

* javadoc

* SDVariable eval with placeholders

* use regex match

* better matching
2019-08-05 11:21:23 +10:00
raver119 ce0743da17 strided_slice_bp shape fn leak fix
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-05 11:13:42 +10:00
Alex Black ac1fc1a27a DL4J trace logging (#79)
* MLN/CG trace logging for debugging

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tiny tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-05 11:13:42 +10:00
raver119 2cd95dc517 upsampling2d fix CUDA
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-05 11:13:41 +10:00
raver119 4f2dae23a1 - new NDArray methods like()/ulike() (#77)
- fix for depthwise_conv2d_bp + special test

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-05 11:13:41 +10:00
Ryan Nett 00e6296140 Zoo model TF import test updates (#75)
* argLine fix, update compression_gru comment

* updated comment for xception

* reverted the argLine change, but kept it commented out

* updated xlnet comment

* copyright headers
2019-08-05 11:13:41 +10:00
raver119 b9708be5db delete temporary TadPack C++/Java side (#74)
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-05 11:11:22 +10:00
raver119 59a006ce29 [WIP] More fixes (#73)
* special tests for ConstantTadHelper/ConstantShapeHelper

Signed-off-by: raver119 <raver119@gmail.com>

* release methods for data buffers

Signed-off-by: raver119 <raver119@gmail.com>

* delete temporary buffer Java side

Signed-off-by: raver119 <raver119@gmail.com>

* delete temporary buffer Java side

Signed-off-by: raver119 <raver119@gmail.com>
2019-08-05 11:11:22 +10:00
Robert Altena ce9c372974 fix pad javadoc and @see links. (#72)
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-05 11:11:22 +10:00
Alexandre Boulanger b083c22de5 RL4J - Added a unit test to help refac QLearningDiscrete.trainStep() (#8065)
* Added a unit test to help refac QLearningDiscrete.trainStep()

Signed-off-by: unknown <aboulang2002@yahoo.com>

* Changed expReplay setter to package private

Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>
2019-08-02 12:50:28 +10:00
Alexandre Boulanger b2145ca780 RL4J Added listener pattern to SyncLearning (#8050)
* Added listener pattern to SyncLearning

Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>

* Did requested changes

Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>
2019-08-02 12:43:45 +10:00
Alex Black 0527ab8d98 Fix validation (#8059)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-07-29 15:26:18 +10:00
Alexandre Boulanger 87d2b2cd3d Added interface IDataManager (#8034)
Signed-off-by: Alexandre Boulanger <aboulang2002@yahoo.com>
2019-07-25 21:34:54 +10:00
jxtps 22993f853f Disallow creating bad classification iterator (#8042)
When doing classification we need to know the `numPossibleLabels`. If it's set to -1, then we get obscure and confusing null-pointer exceptions when accessing labels during `ComputationGraph.fit` on the iterator. This PR blocks the user from shooting themselves in the foot.
2019-07-25 14:51:57 +10:00
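The fail-fast guard described in that PR can be sketched as a simple validation check. This is a hypothetical standalone sketch; `LabelConfigCheck` and `requireValidNumLabels` are illustrative names, not the actual DL4J API:

```java
// Hypothetical sketch of failing fast on an invalid label count,
// instead of surfacing an obscure NullPointerException later in fit().
public final class LabelConfigCheck {
    private LabelConfigCheck() {}

    // Rejects the sentinel -1 (and any non-positive value) up front.
    public static int requireValidNumLabels(int numPossibleLabels) {
        if (numPossibleLabels < 1) {
            throw new IllegalArgumentException(
                "numPossibleLabels must be >= 1 for classification, got: "
                    + numPossibleLabels);
        }
        return numPossibleLabels;
    }

    public static void main(String[] args) {
        System.out.println(requireValidNumLabels(10));
        try {
            requireValidNumLabels(-1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The point of the design is to move the error from deep inside training (where the root cause is invisible) to iterator construction, where the message names the bad parameter directly.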
AlexDBlack 57efc2bf95 Fix bad merge
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-07-21 00:03:30 +10:00
Alex Black 0d6bb657bc FlatBuffers dtype conversion fix (missing bfloat16) (#71)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-07-20 23:21:15 +10:00
raver119 763a225c6a [WIP] More of CUDA operations (#69)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* - gruCell_bp further

Signed-off-by: Yurii <yurii@skymind.io>

* - further work on gruCell_bp

Signed-off-by: Yurii <yurii@skymind.io>

* Inverse matrix cublas implementation. Partial working revision.

* Separation of segment ops helpers. Max separation.

* Separated segment_min ops.

* Separation of segment_mean/sum/prod/sqrtN ops helpers.

* Fixed diagonal processing with LUP decomposition.

* Modified inversion approach using current state of LU decomposition.

* Implementation of matrix_inverse op with cuda kernels. Working revision.

* Implemented sequence_mask cuda helper. Eliminated wasteful printf in matrix_inverse implementation. Added proper tests.

* - further work on gruCell_bp (ff/cuda)

Signed-off-by: Yurii <yurii@skymind.io>

* comment one test for gruCell_bp

Signed-off-by: Yurii <yurii@skymind.io>

* - provide cuda static_rnn

Signed-off-by: Yurii <yurii@skymind.io>

* Refactored random_shuffle op to use new random generator.

* Refactored random_shuffle op helper.

* Fixed debug tests with random ops tests.

* Implement random_shuffle op cuda kernel helper and tests.

* - provide cuda scatter_update

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of random_shuffle for linear case with cuda kernels and tests.

* Implemented random_shuffle with cuda kernels. Final revision.

* - finally gruCell_bp is completed

Signed-off-by: Yurii <yurii@skymind.io>

* Dropout op cuda helper implementation.

* Implemented dropout_bp cuda helper.

* Implemented alpha_dropout_bp with cuda kernel helpers.

* Refactored helper.

* Implementation of suppression helper with cuda kernels.

* - provide cpu code for hsvToRgb, rgbToHsv, adjustHue

Signed-off-by: Yurii <yurii@skymind.io>

* Using sort by value method.

* Implementation of image.non_max_suppression op cuda-based helper.

* - correcting and testing adjust_hue, adjust_saturation cpu/cuda code

Signed-off-by: Yurii <yurii@skymind.io>

* Added cuda device prefixes to declarations.

* Implementation of hashcode op with cuda helper. Initial revision.

* rnn cu impl removed

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:20:41 +10:00
Alex Black 06e4f5f96e Small DL4J/SameDiff fixes (#70)
* More mask fixes + remove debugging println

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small batch norm derivative fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-07-20 23:19:09 +10:00
Alexander Stoyakin f29f19e9e9 [WIP] Some nd4s tweaks (#68)
* Executioner fallback

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Tests for executioner

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2019-07-20 23:18:15 +10:00
Alexander Stoyakin 2fb4a52a02 [WIP] nd4s tests coverage (#59)
* Unit tests added

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Added operator + for left integer

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Added broadcast tests

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Build fixed after master changes

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Operatable tested

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Added tests

* Projection tests

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Projection tests

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Benchmarking

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2019-07-20 23:17:51 +10:00
Alex Black c3e684d648 DL4J: Switch subsampling layer to custom ops; DL4J samediff mask fix (#67)
* SpaceToDepth layer fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Switch subsampling layer to use dynamiccustomop + add legacy mode support for beta4 and earlier models

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small subsampling fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Subsampling layer eps fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Handle 'no mask provided this minibatch' case for DL4J SameDiff layers

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small comment/javadoc fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-07-20 23:17:28 +10:00
Alex Black 7939cf384b Misc fixes (#66)
* Small fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* Flaky test fix

Signed-off-by: Alex Black <blacka101@gmail.com>
2019-07-20 23:17:03 +10:00
Alex Black d94bc7257c Various fixes (#65)
* #7977 deprecate legacy MultiLayerNetwork/ComputationGraph.params(boolean) method

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix bad test

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix Histogram mapping

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix incorrect name handling in DifferentialFunction

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* More fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Histogram fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Proper histogram fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* ToString/NDArrayStrings fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* JSON UTF8 serialization fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-07-20 23:16:41 +10:00
raver119 c499dc962f - numpy import fix for CUDA (#64)
- skip tagLocation for empty arrays

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:16:19 +10:00
raver119 c9e867b2e8 File existence validation for Nd4j.createFromNpyFile()
Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:15:57 +10:00
raver119 91a8fb0d90 [WIP] More of CUDA (#63)
* less spam

Signed-off-by: raver119 <raver119@gmail.com>

* flatten kernel

Signed-off-by: raver119 <raver119@gmail.com>

* adjust_hue/adjust_saturation tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* adjust_hue cuda single

Signed-off-by: raver119 <raver119@gmail.com>

* adjust_hue cuda batch

Signed-off-by: raver119 <raver119@gmail.com>

* adjust_saturation cuda

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:15:14 +10:00
raver119 cf2311859a cpp tests fixes
Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:08:07 +10:00
raver119 fd6c0df024 [WIP] More CUDA fixes/updates (#62)
* CUDA reallocation update

Signed-off-by: raver119 <raver119@gmail.com>

* Legacy SoftMax/LogSoftMax/SoftMaxDerivative removed from cpp

Signed-off-by: raver119 <raver119@gmail.com>

* SoftMaxDerivative op removed

Signed-off-by: raver119 <raver119@gmail.com>

* few tests updates

Signed-off-by: raver119 <raver119@gmail.com>

* RNG fixes

Signed-off-by: raver119 <raver119@gmail.com>

* few more tests updates

Signed-off-by: raver119 <raver119@gmail.com>

* legacy Histogram/Pooling2D removed

Signed-off-by: raver119 <raver119@gmail.com>

* legacy Histogram removed

Signed-off-by: raver119 <raver119@gmail.com>

* histogram moved

Signed-off-by: raver119 <raver119@gmail.com>

* histogram moved cuda

Signed-off-by: raver119 <raver119@gmail.com>

* Histogram custom op

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:07:42 +10:00
Alex Black 5cf6859fc4 Add SequenceTrimToLengthTransform (#61)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-07-20 23:07:12 +10:00
raver119 9cf28ea6c9 [WIP] CUDA tweaks (#60)
* special cpu concat

Signed-off-by: raver119 <raver119@gmail.com>

* special concat fix

Signed-off-by: raver119 <raver119@gmail.com>

* OpProfiler tweak for absent host pointers

Signed-off-by: raver119 <raver119@gmail.com>

* minor test tweak to see orders

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA broadcasting diff orders fix

Signed-off-by: raver119 <raver119@gmail.com>

* faster iterations

Signed-off-by: raver119 <raver119@gmail.com>

* OldSoftMax/OldLogSoftMax gone

Signed-off-by: raver119 <raver119@gmail.com>

* RandomLauncher tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* additional check in random tests

Signed-off-by: raver119 <raver119@gmail.com>

* skip prepare/register action for empty arrays

Signed-off-by: raver119 <raver119@gmail.com>

* npz float16 fix

Signed-off-by: raver119 <raver119@gmail.com>

* empty reduction cuda fixes

Signed-off-by: raver119 <raver119@gmail.com>

* ShapeBufferTests tweaks

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:06:48 +10:00
raver119 6ce458e949 [WIP] CUDA Java side (#58)
* one crashing test

Signed-off-by: raver119 <raver119@gmail.com>

* stupid issue fixed

Signed-off-by: raver119 <raver119@gmail.com>

* one fix

Signed-off-by: raver119 <raver119@gmail.com>

* dont ensure location for empty arrays

Signed-off-by: raver119 <raver119@gmail.com>

* few more signatures fixed

Signed-off-by: raver119 <raver119@gmail.com>

* few tweaks for DataBuffer creation from java primitives

Signed-off-by: raver119 <raver119@gmail.com>

* get rid of legacy im2col/col2im intercept

Signed-off-by: raver119 <raver119@gmail.com>

* rsubi scalar array fix

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:06:25 +10:00
Alexander Stoyakin 68b82f3856 [WIP] nd4s - data types (#51)
* Fixed tests

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Added conversions for Long

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Added conversions for Long

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Added data types

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Failing test

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Types in conversions

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Added mixins for integer types

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Conversion of different types to scalar

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Tests added

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Tests for arrays construction

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Construction tests

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Fixed slicing

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Add own Executioner implementation to nd4s

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Filter operation activated

* Collection tests activated

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Types in operations

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Commented unused code

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* Types in operations

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* String implicit conversion added

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* String implicit conversion added

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2019-07-20 23:06:00 +10:00
raver119 c969b724bb [WIP] more CUDA stuff (#57)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* Added gradcheck test for dynamic_partition_bp op.

* - implementation of dilation op (cpu and cuda)

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed broadcast_dynamic_shape 1D case and tests.

* Fixed usage of default integer arguments.

* Fixed dynamic_partition_bp op and tests.

* Eliminated test with grad check for dynamic_partition_bp op.

* start working on cuda svd - porting available corresponding api from cuSOLVER library

Signed-off-by: Yurii <yurii@skymind.io>

* provide prelu_bp

Signed-off-by: Yurii <yurii@skymind.io>

* - provide gruCell_bp (old version ??)

Signed-off-by: Yurii <yurii@skymind.io>

* - polishing cumsum_bp and cumprod_bp tests

Signed-off-by: Yurii <yurii@skymind.io>

* provide sparseSoftmaxCrossEntropyWithLogits and sparseSoftmaxCrossEntropyWithLogits_grad

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed atomicMul with float input/output

* implementation of cuda kernel for triu_bp operation

Signed-off-by: Yurii <yurii@skymind.io>

* Refactored lup helper to add parallel computing.

* cusolver libraries

Signed-off-by: raver119 <raver119@gmail.com>

* uncomment cuSolver APIs in svd.cu

Signed-off-by: Yurii <yurii@skymind.io>

* cusolver var

Signed-off-by: raver119 <raver119@gmail.com>

* - further work on cuSolver svd

Signed-off-by: Yurii <yurii@skymind.io>

* Implement usage of cuda solver to LUP decomposition.

* - correct names in lup functions

Signed-off-by: Yurii <yurii@skymind.io>

* correct svdQR cuda

Signed-off-by: Yurii <yurii@skymind.io>

* - provide transpositions of input matrices in case of c order in svdCudaQR

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed implementation issues with LUP using cuda solver.

* Implementation of matrix_determinant helper with cuda kernels. Working revision.

* Implemented log_matrix_determinant helper with cuda kernels.

* - implementation of batched cuda svd

Signed-off-by: Yurii <yurii@skymind.io>

* Refactored cholesky helper and implementation of cuda solver cholesky batch.

* - implementation of cuda kernel for tile bp

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of cholesky and logdet with cuda kernels.

* - implementation of cuda kernel for sru_bidirectional

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed cholesky helper.

* Cholesky op helper implementation. Working double-based cublas implementation.

* bad import excluded

Signed-off-by: raver119 <raver119@gmail.com>

* Finished with cuda implementation of cholesky helper and tests.

* - implementation of cuda kernel for sru_bidirectional_backprop operation

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of matrix_inverse op helper with cuda kernels. The first revision.

* - start working on gruCell_bp

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of matrix_inverse helper.

* - further work on new gruCell_bp

Signed-off-by: Yurii <yurii@skymind.io>

* cuBLAS related fixes

Signed-off-by: raver119 <raver119@gmail.com>

* calculateOutputShapes() now passes device buffers as well

Signed-off-by: raver119 <raver119@gmail.com>

* special concat/average/accumulate init host pointers now

Signed-off-by: raver119 <raver119@gmail.com>

* few more tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* additional CudaDataBufferFactory signatures for certain data types

Signed-off-by: raver119 <raver119@gmail.com>

* cuSolver host buffer

Signed-off-by: raver119 <raver119@gmail.com>

* buffer to buffer memcpy host ptr allocation

Signed-off-by: raver119 <raver119@gmail.com>
2019-07-20 23:05:21 +10:00
Alex Black cb6654bebb Add libnd4j benchmarks (#3)
This PR adds 2 libnd4j benchmarking suites
2019-07-20 22:54:44 +10:00
Ryan Nett 62c6a73f9d Include TF Import tests as forward pass tests in OpValidation (#53)
* ignore multinomial

Signed-off-by: Ryan Nett <rnett@skymind.io>

* fix for @NonNull varargs

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:24:11 +10:00
Ryan Nett daf3950d8d SameDiff If, While, and Misc changes (#52)
* softmax and logSoftmax w/ dimension

Signed-off-by: Ryan Nett <rnett@skymind.io>

* start of while

Signed-off-by: Ryan Nett <rnett@skymind.io>

* if, start of javadocs

Signed-off-by: Ryan Nett <rnett@skymind.io>

* while forward pass working, backprop WIP

Signed-off-by: Ryan Nett <rnett@skymind.io>

* no backprop

Signed-off-by: Ryan Nett <rnett@skymind.io>

* Tensorflow style if/while (& tests), name scope fixes (and test), argument interceptor (for if/while), use '_' in op names instead of ':'

Signed-off-by: Ryan Nett <rnett@skymind.io>

* javadoc

Signed-off-by: Ryan Nett <rnett@skymind.io>

* many fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* many fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* Some fixes

Signed-off-by: Ryan Nett <rnett@skymind.io>

* cleanup if condition doesn't return boolean

Signed-off-by: Ryan Nett <rnett@skymind.io>

* serialization fix

Signed-off-by: Ryan Nett <rnett@skymind.io>

* use constants instead of magic numbers

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:23:40 +10:00
Ryan Nett 2d991f5445 LeakyReLU fix (#55)
* LeakyReLU: Use setScalar to set alpha correctly in TF import
LogX: remove incorrect TF mapping
Pow: remove TF import method (no mapping)
BaseOp: remove duplicate extraArgs

Signed-off-by: Ryan Nett <rnett@skymind.io>

* un-ignore cifar-10 gan, as it is now passing

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:23:19 +10:00
Ryan Nett 027d4d2a47 Add XLNet to zoo model ignores (#54)
* ignore xlnet

Signed-off-by: Ryan Nett <rnett@skymind.io>

* comment

Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-07-20 22:22:57 +10:00