raver119
320924278d
Legacy API changes ( #441 )
* initial commit
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* another initial commit
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* another initial commit
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* one more initial commit
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* next step
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* next step
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* next step
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* next step
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Refactored buffer() and shapeInfo() methods usage with NDArray class.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt Graph class methods to use const shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt choose op to use constant shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt where op shape method to use constant shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt lstsq op to use constant empty shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt matrix_diag_part op shape routine to use constant shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt determinant ops to use constant shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt mean_pairwssqerr_loss ops to use constant shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt ops shape methods.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt shape methods for loss ops.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt log_loss op shape method.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt shape methods for ops.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt dilation2d ops shape methods.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted deconv2d ops shape methods.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted dynamicRNN op shape method.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted shape methods for ops.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted shape methods for lstm layer ops.
Signed-off-by: shugeo <sgazeos@gmail.com>
* few updates
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* first cuda tweak
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Adopt constant shapes for sconv2d ops.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt constant shapes for gru ops.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt constant shapes with shape methods for segment ops and so on.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted constant shapes with unsorted_segment_* ops.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted constant shapes with gamma op shape method.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted shape methods of reduce_stddev ops.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted shape methods for reduce_* ops.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt shape method for squeeze op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt strided_slice shape method.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored concat op shape method to adopt constant shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted shape method for mirror_pad op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted split op shape method.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted tile ops shape methods.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added const cast for mkldnn routines handles.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored logSoftMaxForVector_ routine to conform with proper data and shape pointer casts.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Cosmetic changes to proper usage of constant pointers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored a couple of shape comparators for strides and addBias helpers to properly use data pointers with the inplace option.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored depthToSpace helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored histogram helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored im2col helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored gather and gatherND helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage on percentile helper.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed gather shape with helpers and range buffer usage.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with space to depth helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage and constant shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with LUP decomposition.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored onehot_ helper.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored pad and prefix to use constant shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored softmax helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed space to batch helpers to use buffers properly.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed stack and split helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with sparse to dense helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with mindistance_ helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with tile helper.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed constant shape usage.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed constant shape usage with legacy pairwise bool ops.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored a couple of methods to adopt constant shape usage.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed broadcasting with constant shape.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed const usage with inplace reverse and constant shapes with legacy reduction.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored legacy ops with const shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored sort to adopt constant shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected sort for constant shape usage.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed constant shape usage with special methods.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored Context to conform with constant shape usage.
Signed-off-by: shugeo <sgazeos@gmail.com>
* CUDA broadcasting headers
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* pairwise/indexreduce/random headers
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Refactored native ops to adopt constant shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* legacy reduce3/scalar headers
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Corrected pullRow signature and tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected routines to properly use constant shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored tests to use constant shapes properly.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored legacy ops tests to use constant shapes properly.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored buffer usage with NDArray tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed native ops tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed special concat routine.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with test.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with a test.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored TAD.h and tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored calcStrides* routines to use constant shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed miscellaneous errors with constant shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* NativeOps const changes
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Corrected definitions for declared functions.
Signed-off-by: shugeo <sgazeos@gmail.com>
* NativeOps const changes
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* few more const changes
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Fixed const shapes with shape routines.
Signed-off-by: shugeo <sgazeos@gmail.com>
* few more const changes
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Fixed shape method for broadcastable case.
Signed-off-by: shugeo <sgazeos@gmail.com>
* few more const changes
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* xw_plus_b BP shape fn restored
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Fixed signatures with broadcasting.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Repaired backprops shape methods for a set of operations.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored broadcast bool for cuda.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored methods for 3 args with const qualifier.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed a couple of kernel signatures for broadcasting.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed kernels signatures for const buffers and shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored pairwise methods to persistent buffers and shapes usage.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt const to buffers and shapes with kernels.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt const to buffers and shapes with scalar kernels.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored indexreduce kernels signatures to use const buffers and shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored pairwise kernels to adopt const shapes and buffers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored pairwise bool kernels to adopt const shapes and buffers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored random special ops to conform with const shapes and buffers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored native ops to conform with const shapes and buffers under cuda platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Cosmetic changes only.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed const shapes and buffers error.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected start pos routine.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored methods to conform with const shapes and buffers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored helpers to use proper methods instead.
Signed-off-by: shugeo <sgazeos@gmail.com>
* bunch of changes
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* next bunch of changes
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* next bunch of changes
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Fixed execScalar declaration.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed execScalar declaration.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected const shape cases with sort and so on.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed const shapes for sort.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored kernel declarations to adopt const shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed kernels declarations to adopt const shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected kernel declarations to adopt const shapes and buffers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed kernels declarations to adopt const shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed segment helpers kernels declarations and so on to adopt const shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed const shape usage with segment and solve helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed kernel declaration with adjustWeight helper.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed cuda implementations for constant shape helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted const shape usage with kernels.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted top_k kernels to use const shapes and buffers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected kernels declarations to adopt const shapes with helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored NDArray definitions to adopt const shapes and buffers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed const shapes with image suppression helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Slight improvement with buffers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored buffer usage.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored buffer usage with tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed const shape usage with definitions.
Signed-off-by: shugeo <sgazeos@gmail.com>
* minor updates on cpu side
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Refactored const shape usage with ConstantDescritor and native ops with cuda platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored tear and tile kernels to adopt with const shapes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* softmax_loop fix
Signed-off-by: raver119 <raver119@gmail.com>
* update missing signature
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* softmax again
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* few more missing consts
Signed-off-by: raver119 <raver119@gmail.com>
* new methods updated
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
Co-authored-by: shugeo <sgazeos@gmail.com>
2020-05-09 08:06:14 +03:00
raver119
57210b936c
Revert "OpenMP Threads execution ( #297 )" ( #299 )
This reverts commit dd2043ef48.
2020-03-09 08:22:49 +03:00
raver119
dd2043ef48
OpenMP Threads execution ( #297 )
* omp threads backported
Signed-off-by: raver119 <raver119@gmail.com>
* omp scalar reduce
Signed-off-by: raver119 <raver119@gmail.com>
* timing
Signed-off-by: raver119 <raver119@gmail.com>
* timing
Signed-off-by: raver119 <raver119@gmail.com>
* minor tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* minor tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* namespace change
Signed-off-by: raver119 <raver119@gmail.com>
* num_threads
Signed-off-by: raver119 <raver119@gmail.com>
* one minor fix
Signed-off-by: raver119 <raver119@gmail.com>
2020-03-09 08:21:44 +03:00
raver119
63fa3c2ef3
libnd4j polishing ( #273 )
* initial set of include changes
Signed-off-by: raver119 <raver119@gmail.com>
* one more tweak
Signed-off-by: raver119 <raver119@gmail.com>
* few more rearrangements
Signed-off-by: raver119 <raver119@gmail.com>
* few more rearrangements
Signed-off-by: raver119 <raver119@gmail.com>
* few more rearrangements
Signed-off-by: raver119 <raver119@gmail.com>
* cuda includes rearrangements
Signed-off-by: raver119 <raver119@gmail.com>
* java update
Signed-off-by: raver119 <raver119@gmail.com>
* = namespace changed to sd
- few CMake variables renamed with SD_ prefix
Signed-off-by: raver119 <raver119@gmail.com>
* java update
Signed-off-by: raver119 <raver119@gmail.com>
* LoopKind minor fix
Signed-off-by: raver119 <raver119@gmail.com>
* few more changes
Signed-off-by: raver119 <raver119@gmail.com>
* few more changes
Signed-off-by: raver119 <raver119@gmail.com>
* few more changes
Signed-off-by: raver119 <raver119@gmail.com>
* sanitizer is optional now
Signed-off-by: raver119 <raver119@gmail.com>
* dev tests updated
Signed-off-by: raver119 <raver119@gmail.com>
* few more changes
Signed-off-by: raver119 <raver119@gmail.com>
* last update
Signed-off-by: raver119 <raver119@gmail.com>
* java update
Signed-off-by: raver119 <raver119@gmail.com>
2020-03-02 12:49:41 +03:00
Oleh
b4575d11e9
Loops auto-vectorization problem fix ( #274 )
* libnd4j cast loop types
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j more type casting added to loops
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j sync casting types of iterated variable in loops
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j more loops reviewed for vectorization problem fix
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j fixed several typos
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j several more files reviewed to fix auto-vectorization problem in loops
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j merge master and reviewed more files to fix auto-vectorization problem in loops
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j several type casting added in broadcasting that were missed, fixed mac builds
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j double check all files and fix several more places in loops
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j fixed builds
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j revert changes for lup.cpp
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
2020-02-26 21:12:19 +03:00
raver119
29e8e09db6
String changes ( #3 )
* initial commit
* additional data types & tensor type
Signed-off-by: raver119 <raver119@gmail.com>
* next step
Signed-off-by: raver119 <raver119@gmail.com>
* missing include
* sparse_to_dense
Signed-off-by: raver119 <raver119@gmail.com>
* few more tests files
Signed-off-by: raver119 <raver119@gmail.com>
* draft
Signed-off-by: raver119 <raver119@gmail.com>
* numeric sparse_to_dense
Signed-off-by: raver119 <raver119@gmail.com>
* comment
Signed-off-by: raver119 <raver119@gmail.com>
* string sparse_to_dense version
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA DataBuffer expand
Signed-off-by: raver119 <raver119@gmail.com>
* few tweaks for CUDA build
Signed-off-by: raver119 <raver119@gmail.com>
* shape fn for string_split
Signed-off-by: raver119 <raver119@gmail.com>
* one more comment
Signed-off-by: raver119 <raver119@gmail.com>
* string_split indices
Signed-off-by: raver119 <raver119@gmail.com>
* next step
Signed-off-by: raver119 <raver119@gmail.com>
* test passes
Signed-off-by: raver119 <raver119@gmail.com>
* few rearrangements for databuffer implementations
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffer: move inline methods to common implementations
Signed-off-by: raver119 <raver119@gmail.com>
* add native DataBuffer to Nd4j presets
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffer creation
Signed-off-by: raver119 <raver119@gmail.com>
* use DataBuffer for allocation
Signed-off-by: raver119 <raver119@gmail.com>
* cpu databuffer as deallocatable
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffer setters for buffers
Signed-off-by: raver119 <raver119@gmail.com>
* couple of wrappers
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffers being passed around
Signed-off-by: raver119 <raver119@gmail.com>
* Bunch of ByteBuffer-related signatures gone
Signed-off-by: raver119 <raver119@gmail.com>
* - few more Nd4j signatures removed
- minor fix for bfloat16
Signed-off-by: raver119 <raver119@gmail.com>
* nullptr pointer is still a pointer, but 0 as address :)
Signed-off-by: raver119 <raver119@gmail.com>
* one special test
Signed-off-by: raver119 <raver119@gmail.com>
* empty string array init
Signed-off-by: raver119 <raver119@gmail.com>
* one more test in cpp
Signed-off-by: raver119 <raver119@gmail.com>
* memcpy instead of databuffer swap
Signed-off-by: raver119 <raver119@gmail.com>
* special InteropDataBuffer for front-end languages
Signed-off-by: raver119 <raver119@gmail.com>
* few tweaks for java
Signed-off-by: raver119 <raver119@gmail.com>
* pointer/indexer actualization
Signed-off-by: raver119 <raver119@gmail.com>
* CustomOp returns list for inputArguments and outputArguments instead of array
Signed-off-by: raver119 <raver119@gmail.com>
* redundant call
Signed-off-by: raver119 <raver119@gmail.com>
* print_variable op
Signed-off-by: raver119 <raver119@gmail.com>
* - view handling (but wrong one)
- print_variable java wrapper
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* - empty arrays handling
Signed-off-by: raver119 <raver119@gmail.com>
* - deserialization works now
Signed-off-by: raver119 <raver119@gmail.com>
* minor fix
Signed-off-by: raver119 <raver119@gmail.com>
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* one more fix
Signed-off-by: raver119 <raver119@gmail.com>
* initial cuda commit
Signed-off-by: raver119 <raver119@gmail.com>
* print_variable message validation
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA views
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA special buffer size
Signed-off-by: raver119 <raver119@gmail.com>
* minor update to match master changes
Signed-off-by: raver119 <raver119@gmail.com>
* - consider arrays always actual on device for CUDA
- additional PrintVariable constructor
- CudaUtf8Buffer now allocates host buffer by default
Signed-off-by: raver119 <raver119@gmail.com>
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* - print_variable now allows print from device
Signed-off-by: raver119 <raver119@gmail.com>
* InteropDataBuffer data type fix
Signed-off-by: raver119 <raver119@gmail.com>
* ...
Signed-off-by: raver119 <raver119@gmail.com>
* disable some debug messages
Signed-off-by: raver119 <raver119@gmail.com>
* master pulled in
Signed-off-by: raver119 <raver119@gmail.com>
* couple of new methods for DataBuffer interop
Signed-off-by: raver119 <raver119@gmail.com>
* java side
Signed-off-by: raver119 <raver119@gmail.com>
* offsetted constructor
Signed-off-by: raver119 <raver119@gmail.com>
* new CUDA deallocator
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA backend torn apart
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA backend torn apart 2
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA backend torn apart 3
Signed-off-by: raver119 <raver119@gmail.com>
* - few new tests
- few new methods for DataBuffer management
Signed-off-by: raver119 <raver119@gmail.com>
* few more tests + few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* two failing tests
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* two failing tests pass
Signed-off-by: raver119 <raver119@gmail.com>
* now we pass DataBuffer to legacy ops too
Signed-off-by: raver119 <raver119@gmail.com>
* Native DataBuffer for legacy ops, Java side
Signed-off-by: raver119 <raver119@gmail.com>
* CPU java side update
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA java side update
Signed-off-by: raver119 <raver119@gmail.com>
* no more prepare/register action on java side
Signed-off-by: raver119 <raver119@gmail.com>
* NDArray::prepare/register use now accepts vectors
Signed-off-by: raver119 <raver119@gmail.com>
* InteropDataBuffer now has few more convenience methods
Signed-off-by: raver119 <raver119@gmail.com>
* java bindings update
Signed-off-by: raver119 <raver119@gmail.com>
* tick device in NativeOps
Signed-off-by: raver119 <raver119@gmail.com>
* Corrected usage of OpaqueBuffer for tests.
* Corrected usage of OpaqueBuffer for java tests.
* NativeOpsTests fixes.
* print_variable now returns scalar
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* compat_string_split fix for CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* - CUDA execScalar fix
- CUDA lazyAllocateHostPointer now checks java indexer/pointer instead of native pointer
Signed-off-by: raver119 <raver119@gmail.com>
* legacy ops DataBuffer migration prototype
Signed-off-by: raver119 <raver119@gmail.com>
* ignore device shapeinfo coming from java
Signed-off-by: raver119 <raver119@gmail.com>
* minor fix
Signed-off-by: raver119 <raver119@gmail.com>
* minor transformAny fix
Signed-off-by: raver119 <raver119@gmail.com>
* minor tweak for lazy host allocation
Signed-off-by: raver119 <raver119@gmail.com>
* - DataBuffer::memcpy method
- bitcast now uses memcpy
Signed-off-by: raver119 <raver119@gmail.com>
* - IndexReduce CUDA dimension buffer fix
Signed-off-by: raver119 <raver119@gmail.com>
* views for CPU and CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* less spam
Signed-off-by: raver119 <raver119@gmail.com>
* optional memory init
Signed-off-by: raver119 <raver119@gmail.com>
* async memset
Signed-off-by: raver119 <raver119@gmail.com>
* - SummaryStats CUDA fix
- DataBuffer.sameUnderlyingData() impl
- execBroadcast fix
Signed-off-by: raver119 <raver119@gmail.com>
* - reduce3All fix
switch to CUDA 10 temporarily
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA version
Signed-off-by: raver119 <raver119@gmail.com>
* proper memory deallocator registration
Signed-off-by: raver119 <raver119@gmail.com>
* HOST_ONLY workspace allocation
Signed-off-by: raver119 <raver119@gmail.com>
* temp commit
Signed-off-by: raver119 <raver119@gmail.com>
* few conflicts resolved
Signed-off-by: raver119 <raver119@gmail.com>
* few minor fixes
Signed-off-by: raver119 <raver119@gmail.com>
* one more minor fix
Signed-off-by: raver119 <raver119@gmail.com>
* NDArray permute should operate on JVM primitives
Signed-off-by: raver119 <raver119@gmail.com>
* - create InteropDataBuffer for shapes as well
- update pointers after view creation in Java
Signed-off-by: raver119 <raver119@gmail.com>
* - addressPointer temporary moved to C++
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA: don't account offset twice
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA: DataBuffer pointer constructor updated
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA NDArray.unsafeDuplication() simplified
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA minor workspace-related fixes
Signed-off-by: raver119 <raver119@gmail.com>
* CPU DataBuffer.reallocate()
Signed-off-by: raver119 <raver119@gmail.com>
* print_affinity op
Signed-off-by: raver119 <raver119@gmail.com>
* print_affinity java side
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA more tweaks for data locality
Signed-off-by: raver119 <raver119@gmail.com>
* - compat_string_split tweak
- CudaUtf8Buffer update
Signed-off-by: raver119 <raver119@gmail.com>
* INDArray.close() mechanic restored
Signed-off-by: raver119 <raver119@gmail.com>
* one more test fixed
Signed-off-by: raver119 <raver119@gmail.com>
* - CUDA DataBuffer.reallocate() updated
- cudaMemcpy (synchronous) restored
Signed-off-by: raver119 <raver119@gmail.com>
* one last fix
Signed-off-by: raver119 <raver119@gmail.com>
* bad import removed
Signed-off-by: raver119 <raver119@gmail.com>
* another small fix
Signed-off-by: raver119 <raver119@gmail.com>
* one special test
Signed-off-by: raver119 <raver119@gmail.com>
* fix bad databuffer size
Signed-off-by: raver119 <raver119@gmail.com>
* release primaryBuffer on replace
Signed-off-by: raver119 <raver119@gmail.com>
* higher timeout
Signed-off-by: raver119 <raver119@gmail.com>
* disable timeouts
Signed-off-by: raver119 <raver119@gmail.com>
* dbCreateView now validates offset and length of a view
Signed-off-by: raver119 <raver119@gmail.com>
* additional validation for dbExpand
Signed-off-by: raver119 <raver119@gmail.com>
* restore timeout back again
Signed-off-by: raver119 <raver119@gmail.com>
* smaller distribution for rng test to prevent timeouts
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA DataBuffer::memcpy now copies to device all the time
Signed-off-by: raver119 <raver119@gmail.com>
* OpaqueDataBuffer now contains all required methods for interop
Signed-off-by: raver119 <raver119@gmail.com>
* some javadoc
Signed-off-by: raver119 <raver119@gmail.com>
* GC on failed allocations
Signed-off-by: raver119 <raver119@gmail.com>
* minor memcpy tweak
Signed-off-by: raver119 <raver119@gmail.com>
* one more bitcast test
Signed-off-by: raver119 <raver119@gmail.com>
* - NDArray::deviceId() propagation
- special multi-threaded test for data locality checks
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffer additional syncStream
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffer additional syncStream
Signed-off-by: raver119 <raver119@gmail.com>
* one ignored test
Signed-off-by: raver119 <raver119@gmail.com>
* skip host alloc for empty arrays
Signed-off-by: raver119 <raver119@gmail.com>
* ByteBuffer support is back
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffer::memcpy minor fix
Signed-off-by: raver119 <raver119@gmail.com>
* few minor prelu/bp tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* nullify-related fixes
Signed-off-by: raver119 <raver119@gmail.com>
* PReLU fixes (#157 )
Signed-off-by: Alex Black <blacka101@gmail.com>
* Build fixed
* Fix tests
* one more ByteBuffer signature restored
Signed-off-by: raver119 <raver119@gmail.com>
* nd4j-jdbc-hsql profiles fix
Signed-off-by: raver119 <raver119@gmail.com>
* nd4j-jdbc-hsql profiles fix
Signed-off-by: raver119 <raver119@gmail.com>
* PReLU weight init fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small PReLU fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* - INDArray.migrate() reactivated
- DataBuffer::setDeviceId(...) added
- InteropDataBuffer Z syncToDevice added for views
Signed-off-by: raver119 <raver119@gmail.com>
* missed file
Signed-off-by: raver119 <raver119@gmail.com>
* Small tweak
Signed-off-by: Alex Black <blacka101@gmail.com>
* cuda 10.2
Signed-off-by: raver119 <raver119@gmail.com>
* minor fix
Signed-off-by: raver119 <raver119@gmail.com>
Co-authored-by: shugeo <sgazeos@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-01-04 13:27:50 +03:00
raver119
6de00bf75f
[WIP] Weekly update of repo ( #8390 )
* [WIP] Fix compilation after nd4j changes (#37 )
* Fix compilation.
* Some tests fixed
* Disable tests temporarily.
* Restored test
* Tests restored.
* Test restored.
* [WIP] perf tests (#40 )
* special maxpool test
Signed-off-by: raver119 <raver119@gmail.com>
* special maxpool test
Signed-off-by: raver119 <raver119@gmail.com>
* Shyrma bnorm bp (#41 )
Batchnorm backprop mkldnn
* Add SameDiff memory reuse memory manager (array cache) (#39 )
* Attention op comments
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ArrayCacheMemoryMgr - first pass
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tweak array cache for use with SameDiff identity arrays
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ArrayCacheMemoryMgr javadoc and properly get max memory
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* LRU cache policy + add tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Resize arrays internally if required for ArrayCacheMemoryMgr
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Test improvement
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small polish
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* SameDiff op runtime benchmarking listener (#42 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* INLINE_LOOPS for windows
Signed-off-by: raver119 <raver119@gmail.com>
* [WIP] ThreadPool (#8 )
This PR removes OpenMP use in 95% of cases
2019-11-13 17:15:18 +03:00
raver119
589401477d
[WIP] bunch of improvements ( #257 )
* - profiling bias_add op
- add some documentation
Signed-off-by: Yurii <yurii@skymind.io>
* - minor change
Signed-off-by: Yurii <yurii@skymind.io>
* - provide addBias cuda kernel
Signed-off-by: Yurii <yurii@skymind.io>
* - improve shape::getIndexOffset and change its signature
Signed-off-by: Yurii <yurii@skymind.io>
* - same as previous
Signed-off-by: Yurii <yurii@skymind.io>
* - improve and change signature in some shape:: stuff which has to do with calculation of offsets for array elements
Signed-off-by: Yurii <yurii@skymind.io>
* - minor changes in flatten
Signed-off-by: Yurii <shyrma@skymind.io>
* - add function shape::getIndexOffsetOrdered
Signed-off-by: Yurii <shyrma@skymind.io>
* - correct shape::getIndexOffsetOrdered()
Signed-off-by: Yurii <shyrma@skymind.io>
* - move getIndexOffsetOrdered to flatten.h header in order to isolate this function
Signed-off-by: Yurii <shyrma@skymind.io>
2019-09-11 20:12:09 +03:00
Alex Black
68ea5f3688
Dev branch merge: dev_20190606 ( #7904 )
* correct logsoftmax loss (#2 )
* Small SameDiff listener fix (#4 )
* Various fixes (#6 )
* #7839 Fix for asXMatrix and tests
* #7866 EmbeddingSequenceLayer dtype fix + test
* #7856 SameDiff save/load stream methods
* #7859 RegressionEvaluation rank 4 fix + tests + axis configuration
* EvaluationBinary 3d/4d
* More evaluation 3d/4d tests
* #7847 Evaluation empty checks
* Small test fix
* #7848 Fix median edge case
* Improve DL4J samediff layer tests
* [WIP] FastText wrapper implemented (#8 )
* FastText implemented
* Some fixes
* Fix shapes for wordsNearest
* Validation of input vectors
* Fixes
* Fixed test
* Thread tagged
* Some tweaks
* setContextClassLoader for DeallocatorServiceThread
* Numpy format tests (#1 )
* Various fixes (#11 )
* #7852 SameDiff gather fix
* #7892 SameDiff placeholder to constant conversion
* #7890 validate input rank for MLN/CG init methods
* Fix broken permute shape calculation
* Permute and gather fixes
* Tests
* #7850 LogSumExp fix + test
* Handful of test fixes
* Empty arrays with non-scalar shapes (#10 )
* minor rearrangements for lambdas
* empty tensors with non-scalar shapes
* numpy empty tensors with non-scalar shapes
* few more empty tweaks
* Small fixes
* conv3d signature update
* micro fix in batchnorm mkldnn
* Import fixes
* Fix
* MKL-DNN update
* Small fill fix
* fill with empty input + test
* Fixes
* Small error improvement
* Fix
* one special test
* couple of fixes for lstm
* Rewrite TFGraphMapper.getNDArrayFromTensor to be maintainable and less error prone
* Fixes
* FP16
* Unsigned
* BFloat16
* Fill op - empty tweaks
* - couple of fixes for empty arrays construction
- stack updated
* strided slice fix
* one transform test
* provide method for reducing shapeInfo in case of input array is empty
* Fixed reduceAlongDimensions to use empty input properly.
* couple of broadcast tests
* couple of tests broadcast tests + tweak to make them pass
* add check of non-empty to methods producing sub-arrays
* Fixed reshapeC with zeros in shape.
* complete empty check in reduce_... legacy ops
* Concat and cumsum/prod
* Tweak to empty shape inference on import
* add empty check to the rest of reduce legacy ops
* one more test
* correct typo in evalReduceShapeInfoEmpty
* Added tests for reduce_* ops to tests with zero shapes.
* few more tests for empty reductions
* Fixed strided_slice op with empty case and tests.
* one more empty reduction test
* Fixed strided_slice test.
* add empty check to NDArray::reshapei
* infOrMax
* empty min/max with infinity tests
* made unstack working correctly with empty arrays
* few IndexReduce tests + tweaks for empty shapes
* add test for empty concat
* few tests fixed
* Validation fix for reductions on empty shapes
* Reverse fix
* Reduction shape calc fixes
* SameDiff.generateOutputVariable: don't use shape function to determine number of outputs
* Range fix
* - NDArray constructor updated for scalars/empty arrays
- few tests fixed
* More fixes
* Empty creator fixes
* concat fix
* concat fix
* TF import tests: allow 'both all NaN' and 'both all inf' to pass
* Slice, zero fraction, and reshape fixes
* transpose, gather
* Zero fraction
* scalar cast fix
* Empty reduction axis support
* few more tests fixed
* Fixed input checks conforming with TF for concat op and tests.
* few tests fixed
* matmul scalar shape fix
* Fixed check for data type and scalarity with concat to allow non-empty scalars with vector concats.
* broadcast bool fix
* few more tests
* few more tests
* correct evalReduceShapeInfoEmpty
* argmax/argmin + tests
* one more empty edge case + one more test
* argmax/argmin/realdiv_bp tweaks
* empty reshape test + fix
* Helper fixes
* Small fixes
* Gather test fix
* Gather test fix
* Small fixes
* reduce scalar zero values
* scalar mean workaround
* Remove debug code
* along dim mean workaround
* one more test
* - equalsTo() tweak for empty arrays
- one more test
* broadcast tweaks
2019-06-15 21:34:34 +10:00
skymindops
b5f0ec072f
Eclipse Migration Initial Commit
2019-06-06 15:21:15 +03:00