raver119
29e8e09db6
String changes ( #3 )
...
* initial commit
* additional data types & tensor type
Signed-off-by: raver119 <raver119@gmail.com>
* next step
Signed-off-by: raver119 <raver119@gmail.com>
* missing include
* sparse_to_dense
Signed-off-by: raver119 <raver119@gmail.com>
* few more test files
Signed-off-by: raver119 <raver119@gmail.com>
* draft
Signed-off-by: raver119 <raver119@gmail.com>
* numeric sparse_to_dense
Signed-off-by: raver119 <raver119@gmail.com>
* comment
Signed-off-by: raver119 <raver119@gmail.com>
* string sparse_to_dense version
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA DataBuffer expand
Signed-off-by: raver119 <raver119@gmail.com>
* few tweaks for CUDA build
Signed-off-by: raver119 <raver119@gmail.com>
* shape fn for string_split
Signed-off-by: raver119 <raver119@gmail.com>
* one more comment
Signed-off-by: raver119 <raver119@gmail.com>
* string_split indices
Signed-off-by: raver119 <raver119@gmail.com>
* next step
Signed-off-by: raver119 <raver119@gmail.com>
* test passes
Signed-off-by: raver119 <raver119@gmail.com>
* few rearrangements for databuffer implementations
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffer: move inline methods to common implementations
Signed-off-by: raver119 <raver119@gmail.com>
* add native DataBuffer to Nd4j presets
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffer creation
Signed-off-by: raver119 <raver119@gmail.com>
* use DataBuffer for allocation
Signed-off-by: raver119 <raver119@gmail.com>
* cpu databuffer as deallocatable
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffer setters for buffers
Signed-off-by: raver119 <raver119@gmail.com>
* couple of wrappers
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffers being passed around
Signed-off-by: raver119 <raver119@gmail.com>
* Bunch of ByteBuffer-related signatures gone
Signed-off-by: raver119 <raver119@gmail.com>
* - few more Nd4j signatures removed
- minor fix for bfloat16
Signed-off-by: raver119 <raver119@gmail.com>
* nullptr pointer is still a pointer, but 0 as address :)
Signed-off-by: raver119 <raver119@gmail.com>
* one special test
Signed-off-by: raver119 <raver119@gmail.com>
* empty string array init
Signed-off-by: raver119 <raver119@gmail.com>
* one more test in cpp
Signed-off-by: raver119 <raver119@gmail.com>
* memcpy instead of databuffer swap
Signed-off-by: raver119 <raver119@gmail.com>
* special InteropDataBuffer for front-end languages
Signed-off-by: raver119 <raver119@gmail.com>
* few tweaks for java
Signed-off-by: raver119 <raver119@gmail.com>
* pointer/indexer actualization
Signed-off-by: raver119 <raver119@gmail.com>
* CustomOp returns list for inputArguments and outputArguments instead of array
Signed-off-by: raver119 <raver119@gmail.com>
* redundant call
Signed-off-by: raver119 <raver119@gmail.com>
* print_variable op
Signed-off-by: raver119 <raver119@gmail.com>
* - view handling (but wrong one)
- print_variable java wrapper
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* - empty arrays handling
Signed-off-by: raver119 <raver119@gmail.com>
* - deserialization works now
Signed-off-by: raver119 <raver119@gmail.com>
* minor fix
Signed-off-by: raver119 <raver119@gmail.com>
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* one more fix
Signed-off-by: raver119 <raver119@gmail.com>
* initial cuda commit
Signed-off-by: raver119 <raver119@gmail.com>
* print_variable message validation
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA views
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA special buffer size
Signed-off-by: raver119 <raver119@gmail.com>
* minor update to match master changes
Signed-off-by: raver119 <raver119@gmail.com>
* - consider arrays always actual on device for CUDA
- additional PrintVariable constructor
- CudaUtf8Buffer now allocates host buffer by default
Signed-off-by: raver119 <raver119@gmail.com>
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* - print_variable now allows print from device
Signed-off-by: raver119 <raver119@gmail.com>
* InteropDataBuffer data type fix
Signed-off-by: raver119 <raver119@gmail.com>
* ...
Signed-off-by: raver119 <raver119@gmail.com>
* disable some debug messages
Signed-off-by: raver119 <raver119@gmail.com>
* master pulled in
Signed-off-by: raver119 <raver119@gmail.com>
* couple of new methods for DataBuffer interop
Signed-off-by: raver119 <raver119@gmail.com>
* java side
Signed-off-by: raver119 <raver119@gmail.com>
* offsetted constructor
Signed-off-by: raver119 <raver119@gmail.com>
* new CUDA deallocator
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA backend torn apart
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA backend torn apart 2
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA backend torn apart 3
Signed-off-by: raver119 <raver119@gmail.com>
* - few new tests
- few new methods for DataBuffer management
Signed-off-by: raver119 <raver119@gmail.com>
* few more tests + few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* two failing tests
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* two failing tests pass
Signed-off-by: raver119 <raver119@gmail.com>
* now we pass DataBuffer to legacy ops too
Signed-off-by: raver119 <raver119@gmail.com>
* Native DataBuffer for legacy ops, Java side
Signed-off-by: raver119 <raver119@gmail.com>
* CPU java side update
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA java side update
Signed-off-by: raver119 <raver119@gmail.com>
* no more prepare/register action on java side
Signed-off-by: raver119 <raver119@gmail.com>
* NDArray::prepare/register use now accepts vectors
Signed-off-by: raver119 <raver119@gmail.com>
* InteropDataBuffer now has few more convenience methods
Signed-off-by: raver119 <raver119@gmail.com>
* java bindings update
Signed-off-by: raver119 <raver119@gmail.com>
* tick device in NativeOps
Signed-off-by: raver119 <raver119@gmail.com>
* Corrected usage of OpaqueBuffer for tests.
* Corrected usage of OpaqueBuffer for java tests.
* NativeOpsTests fixes.
* print_variable now returns scalar
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* compat_string_split fix for CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* - CUDA execScalar fix
- CUDA lazyAllocateHostPointer now checks java indexer/pointer instead of native pointer
Signed-off-by: raver119 <raver119@gmail.com>
* legacy ops DataBuffer migration prototype
Signed-off-by: raver119 <raver119@gmail.com>
* ignore device shapeinfo coming from java
Signed-off-by: raver119 <raver119@gmail.com>
* minor fix
Signed-off-by: raver119 <raver119@gmail.com>
* minor transformAny fix
Signed-off-by: raver119 <raver119@gmail.com>
* minor tweak for lazy host allocation
Signed-off-by: raver119 <raver119@gmail.com>
* - DataBuffer::memcpy method
- bitcast now uses memcpy
Signed-off-by: raver119 <raver119@gmail.com>
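For readers unfamiliar with the technique named in the bullet above: a bit-level cast can be implemented safely on top of memcpy by copying raw bytes between equally sized types. The sketch below is purely illustrative (the `bit_cast` helper is hypothetical, not the project's DataBuffer/bitcast code).

```cpp
// Illustrative sketch only: bit-level reinterpretation via std::memcpy,
// the general technique referenced above. bit_cast is a hypothetical helper,
// not libnd4j's actual implementation.
#include <cstdint>
#include <cstdio>
#include <cstring>

template <typename To, typename From>
To bit_cast(const From& src) {
    static_assert(sizeof(To) == sizeof(From), "bit_cast requires equal sizes");
    To dst;
    std::memcpy(&dst, &src, sizeof(To));   // copy raw bytes; avoids type-punning UB
    return dst;
}

int main() {
    float f = 1.0f;
    uint32_t bits = bit_cast<uint32_t>(f); // view the float's 4 bytes as an integer
    std::printf("0x%08x\n", bits);         // prints 0x3f800000 for 1.0f
    return 0;
}
```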
* - IndexReduce CUDA dimension buffer fix
Signed-off-by: raver119 <raver119@gmail.com>
* views for CPU and CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* less spam
Signed-off-by: raver119 <raver119@gmail.com>
* optional memory init
Signed-off-by: raver119 <raver119@gmail.com>
* async memset
Signed-off-by: raver119 <raver119@gmail.com>
* - SummaryStats CUDA fix
- DataBuffer.sameUnderlyingData() impl
- execBroadcast fix
Signed-off-by: raver119 <raver119@gmail.com>
* - reduce3All fix
- switch to CUDA 10 temporarily
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA version
Signed-off-by: raver119 <raver119@gmail.com>
* proper memory deallocator registration
Signed-off-by: raver119 <raver119@gmail.com>
* HOST_ONLY workspace allocation
Signed-off-by: raver119 <raver119@gmail.com>
* temp commit
Signed-off-by: raver119 <raver119@gmail.com>
* few conflicts resolved
Signed-off-by: raver119 <raver119@gmail.com>
* few minor fixes
Signed-off-by: raver119 <raver119@gmail.com>
* one more minor fix
Signed-off-by: raver119 <raver119@gmail.com>
* NDArray permute should operate on JVM primitives
Signed-off-by: raver119 <raver119@gmail.com>
* - create InteropDataBuffer for shapes as well
- update pointers after view creation in Java
Signed-off-by: raver119 <raver119@gmail.com>
* - addressPointer temporarily moved to C++
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA: don't account offset twice
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA: DataBuffer pointer constructor updated
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA NDArray.unsafeDuplication() simplified
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA minor workspace-related fixes
Signed-off-by: raver119 <raver119@gmail.com>
* CPU DataBuffer.reallocate()
Signed-off-by: raver119 <raver119@gmail.com>
* print_affinity op
Signed-off-by: raver119 <raver119@gmail.com>
* print_affinity java side
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA more tweaks for data locality
Signed-off-by: raver119 <raver119@gmail.com>
* - compat_string_split tweak
- CudaUtf8Buffer update
Signed-off-by: raver119 <raver119@gmail.com>
* INDArray.close() mechanic restored
Signed-off-by: raver119 <raver119@gmail.com>
* one more test fixed
Signed-off-by: raver119 <raver119@gmail.com>
* - CUDA DataBuffer.reallocate() updated
- cudaMemcpy (synchronous) restored
Signed-off-by: raver119 <raver119@gmail.com>
* one last fix
Signed-off-by: raver119 <raver119@gmail.com>
* bad import removed
Signed-off-by: raver119 <raver119@gmail.com>
* another small fix
Signed-off-by: raver119 <raver119@gmail.com>
* one special test
Signed-off-by: raver119 <raver119@gmail.com>
* fix bad databuffer size
Signed-off-by: raver119 <raver119@gmail.com>
* release primaryBuffer on replace
Signed-off-by: raver119 <raver119@gmail.com>
* higher timeout
Signed-off-by: raver119 <raver119@gmail.com>
* disable timeouts
Signed-off-by: raver119 <raver119@gmail.com>
* dbCreateView now validates offset and length of a view
Signed-off-by: raver119 <raver119@gmail.com>
* additional validation for dbExpand
Signed-off-by: raver119 <raver119@gmail.com>
* restore timeout back again
Signed-off-by: raver119 <raver119@gmail.com>
* smaller distribution for rng test to prevent timeouts
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA DataBuffer::memcpy now copies to device all the time
Signed-off-by: raver119 <raver119@gmail.com>
* OpaqueDataBuffer now contains all required methods for interop
Signed-off-by: raver119 <raver119@gmail.com>
* some javadoc
Signed-off-by: raver119 <raver119@gmail.com>
* GC on failed allocations
Signed-off-by: raver119 <raver119@gmail.com>
* minor memcpy tweak
Signed-off-by: raver119 <raver119@gmail.com>
* one more bitcast test
Signed-off-by: raver119 <raver119@gmail.com>
* - NDArray::deviceId() propagation
- special multi-threaded test for data locality checks
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffer additional syncStream
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffer additional syncStream
Signed-off-by: raver119 <raver119@gmail.com>
* one ignored test
Signed-off-by: raver119 <raver119@gmail.com>
* skip host alloc for empty arrays
Signed-off-by: raver119 <raver119@gmail.com>
* ByteBuffer support is back
Signed-off-by: raver119 <raver119@gmail.com>
* DataBuffer::memcpy minor fix
Signed-off-by: raver119 <raver119@gmail.com>
* few minor prelu/bp tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* nullify-related fixes
Signed-off-by: raver119 <raver119@gmail.com>
* PReLU fixes (#157 )
Signed-off-by: Alex Black <blacka101@gmail.com>
* Build fixed
* Fix tests
* one more ByteBuffer signature restored
Signed-off-by: raver119 <raver119@gmail.com>
* nd4j-jdbc-hsql profiles fix
Signed-off-by: raver119 <raver119@gmail.com>
* nd4j-jdbc-hsql profiles fix
Signed-off-by: raver119 <raver119@gmail.com>
* PReLU weight init fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small PReLU fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* - INDArray.migrate() reactivated
- DataBuffer::setDeviceId(...) added
- InteropDataBuffer Z syncToDevice added for views
Signed-off-by: raver119 <raver119@gmail.com>
* missed file
Signed-off-by: raver119 <raver119@gmail.com>
* Small tweak
Signed-off-by: Alex Black <blacka101@gmail.com>
* cuda 10.2
Signed-off-by: raver119 <raver119@gmail.com>
* minor fix
Signed-off-by: raver119 <raver119@gmail.com>
Co-authored-by: shugeo <sgazeos@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2020-01-04 13:27:50 +03:00
raver119
451d9d57fd
shape function override ( #161 )
...
Signed-off-by: raver119 <raver119@gmail.com>
2020-01-04 09:06:44 +03:00
Robert Altena
53d3bd1269
shallow delete of assign from SDBase. ( #164 )
...
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2020-01-04 15:26:39 +11:00
Alex Black
29104083cc
Various fixes ( #143 )
...
* #8568 ArrayUtil optimization
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #6171 Keras ReLU and ELU support
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Keras softmax layer import
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8549 Webjars dependency management
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix for TF import names ':0' suffix issue / NPE
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* BiasAdd: fix default data format for TF import
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Update zoo test ignores
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8509 SameDiff Listener API - provide frame + iteration
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8520 ND4J Environment
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Deconv3d
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Deconv3d fixes + gradient check
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Conv3d fixes + deconv3d DType test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix issue with deconv3d gradient check weight init
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8579 BaseCudaDataBuffer constructor fix for UINT16
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DataType.isNumerical() returns false for BOOL type
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8504 Reduce Spark log spam for tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Clean up DL4J gradient check test spam
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More Gradient check spam reduction
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* SameDiff test spam reduction
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fixes for FlatBuffers mapping
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* SameDiff log spam cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tests should extend BaseNd4jTest
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove debug line in c++ op
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ND4J test spam cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J test spam reduction
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More Dl4J and datavec test spam cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix for bad conv3d test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Additional test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Embedding layers: don't inherit global default activation function
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Trigger CI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Consolidate all BaseDL4JTest classes to single class used everywhere; make timeout configurable per class
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Test fixes and timeout increases
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Timeouts and PReLU fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Restore libnd4j build threads arg for CUDA build
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Increase timeouts on a few tests to avoid spurious failures on some CI machines
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More timeout fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More test timeout fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tweak timeout for one more test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Final tweaks
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* One more ignore
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2020-01-04 13:45:07 +11:00
Alexander Stoyakin
010744ef9c
Lu wrapper and tests fixes ( #144 )
...
* Tests fixed
* Lu added
* Test fixed
* Default timeout
* Tests timeouts fixed.
* TF import fix
* Timeouts added
* Timeout fixed.
* Test corrected
* rgb and yiq conversion ops added
* Converter ops added
* Header
* Yuv converters
* API added
* Empty test for matmul
* Explanation
* skip gemm/gemv on empty inputs
Signed-off-by: raver119 <raver119@gmail.com>
* Test added
* Correct test
* one more empty pass-through for mmul
Signed-off-by: raver119 <raver119@gmail.com>
* Cleanup
* Test added
* Test fixed
* Added missing mapping
* Added missing mapping
Co-authored-by: raver119 <raver119@gmail.com>
2019-12-30 15:06:12 +03:00
Alex Black
1f9e1b6022
SameDiff profiler analysis improvements ( #141 )
...
* #8555 SameDiff profiler analysis improvements
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix TF sub-op aggregation
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small filtering tweak
Signed-off-by: Alex Black <blacka101@gmail.com>
* Copyright headers
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-12-23 15:24:20 +11:00
Alex Black
ce02b6fae7
Small fixes ( #140 )
...
* Allow scalar op result array auto allocation
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Don't swallow underlying exception for calculateOutputShape execution failures
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Ignore for known keras failure
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-12-21 17:00:46 +11:00
Alexander Stoyakin
6d8a063c9b
nd4j-tests cleanup ( #137 )
...
* Fixed tests
* Invalid test removed
2019-12-20 16:38:33 +03:00
Alex Black
3d8f6d50a1
SameDiff profiler / tracing and profile analysis/comparison ( #133 )
...
* Profiler
Signed-off-by: Alex Black <blacka101@gmail.com>
* Next steps, polishing, and loading SD/TF format JSON
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Next steps
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Profile comparison method
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Make profiling result writing async to reduce main thread overhead
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Profiling polishing
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Profile analyzer fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Polish
Signed-off-by: Alex Black <blacka101@gmail.com>
* Cleanup
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small formatting improvement
Signed-off-by: Alex Black <blacka101@gmail.com>
* Formatting tweak
Signed-off-by: Alex Black <blacka101@gmail.com>
* License headers
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-12-19 23:43:58 +11:00
Alexander Stoyakin
f5068f3980
Added missing Java ops wrappers ( #122 )
...
* Timeouts added
* Added some ops
* Ops added
* Fixed tests
* Minor fix
* Some fixes
* Digamma added
* Small fixes
* Fused batch norm fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tests switched off.
* Added test for resize_bicubic.
* Eliminated waste in the test of bicubic resize.
* Switched off multithreading explicitly.
* HsvToRgb and RgbToHsv added
* Eliminated wasted comments and conformed to proper float constants.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed multithreading with resize_bicubic helper for cpu platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
* ResizeBicubic was fixed.
* Some fixes
* Fix op name
* Validation fixed.
* Clarifications for tests
* Wrappers and small fixes for new ops.
2019-12-19 20:15:48 +11:00
Alex Black
bfd9e3692a
Add op counting to TensorFlowImportValidator ( #128 )
...
* Add op counting to TensorFlowImportValidator
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Test tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-12-17 10:23:37 +11:00
AlexDBlack
0df1b46c8c
Merge
2019-12-10 15:08:50 +11:00
raver119
a5f5ac72b1
reduce bool changes ( #118 )
...
* reduce bool changes
Signed-off-by: raver119 <raver119@gmail.com>
* reduce bool tweaks
Signed-off-by: raver119 <raver119@gmail.com>
2019-12-09 20:08:59 +03:00
Alex Black
0175ace4c3
Small tweaks ( #119 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-12-09 23:08:00 +11:00
Alexander Stoyakin
927d591421
ResizeBicubic added ( #117 )
...
* ResizeBicubic added
Some fixes.
* Test fixed
* Narrowed argument type changed to boolean
* Clean up
2019-12-09 18:25:39 +11:00
Alex Black
b66154a9d4
Add ArraySavingListener for debugging ( #114 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-12-09 14:16:11 +11:00
raver119
b32dd1bf92
[WIP] resize_bicubic types ( #116 )
...
* resize_bicubic: allow more dtypes
Signed-off-by: raver119 <raver119@gmail.com>
* resize_bicubic: allow less dtypes
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored resize_bicubic op to fully conform with TF 1.5, and updated tests.
* Corrected test to use the proper output data type.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected double input test to float constant outputs.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Finished correcting the expected values in tests for bicubic-interpolated resizes.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed adjust_contrast ops to allow non-RGB inputs.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored adjust_contrast_v2 to conform with the TF one.
Signed-off-by: shugeo <sgazeos@gmail.com>
* AdjustContrast tests activated
* two typos fixed
Signed-off-by: raver119 <raver119@gmail.com>
2019-12-06 18:58:37 +03:00
raver119
972fae60dc
Update master ( #8511 )
...
* cleaned up bert iterator tests (#110 )
Signed-off-by: eraly <susan.eraly@gmail.com>
* Various pre-release fixes (#111 )
* Various fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix default dtypes for MaxPoolWithArgmax
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small pre-release tweak (#112 )
* Log UI address on launch as in previous Play-based UI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Logging level tweak for UI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* http not https
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* datavec python ensure host (#113 )
* ensure host
* one more host ensure
* info->debug
* [WIP] reverse improvements (#115 )
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* reverse draft
Signed-off-by: raver119 <raver119@gmail.com>
* reverse kernel
Signed-off-by: raver119 <raver119@gmail.com>
* reverse kernel
Signed-off-by: raver119 <raver119@gmail.com>
* 2 micro fixes
Signed-off-by: raver119 <raver119@gmail.com>
* Shugeo resize fix5 (#102 )
* Refactored image resize ops to use TF-like bool args as input.
* Refactored helpers for cpu implementation of resize_bilinear and resize_nearest_neighbor ops.
* Refactored cuda implementation for image.resize_bilinear and image.resize_nearest_neighbor ops helpers.
* Refactored nearest_neighbor resize op.
* Added a pair of tests for special case of resize_bilinear algorithm.
* Fixed issue with resize_bilinear op.
* Refactored cpu implementation for helpers with resize_nearest_neighbor op.
* Final fixes for resize ops to conform to TF v1.5
* Refactored cuda helpers for resize_nearest_neighbor op.
* Fixed resize_bilinear to accept proper data.
* Fixed issue with non-float input for resize_bilinear op.
* Refactored cuda helper for resize_bilinear to properly process non-float inputs.
* Added tests for resize_bilinear with int inputs.
* Fixed ResizeBilinear wrapper
* Tests fixed
* Fixed float and bool constants to avoid overflow with some compilers.
* Corrected float constants with float data type.
* Added f suffix for float constants.
* Corrected float constant to avoid overflow with initializing lists.
* Corrected float initializing list with float input.
* Corrected bool constant with initializing list.
* Corrected float and bool values with initializing lists.
* Fixed wrong constant.
* Fixed issue with 1x1 input picture for resize.
* ResizeBilinear default values on import fix
Signed-off-by: raver119 <raver119@gmail.com>
2019-12-06 11:10:44 +03:00
Robert Altena
e7730eded4
delete unused and refactor. ( #8262 )
...
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-12-05 22:25:41 -05:00
shugeo
e09a785232
Shugeo resize fix5 ( #102 )
...
* Refactored image resize ops to use TF-like bool args as input.
* Refactored helpers for cpu implementation of resize_bilinear and resize_nearest_neighbor ops.
* Refactored cuda implementation for image.resize_bilinear and image.resize_nearest_neighbor ops helpers.
* Refactored nearest_neighbor resize op.
* Added a pair of tests for special case of resize_bilinear algorithm.
* Fixed issue with resize_bilinear op.
* Refactored cpu implementation for helpers with resize_nearest_neighbor op.
* Final fixes for resize ops to conform to TF v1.5
* Refactored cuda helpers for resize_nearest_neighbor op.
* Fixed resize_bilinear to accept proper data.
* Fixed issue with non-float input for resize_bilinear op.
* Refactored cuda helper for resize_bilinear to properly process non-float inputs.
* Added tests for resize_bilinear with int inputs.
* Fixed ResizeBilinear wrapper
* Tests fixed
* Fixed float and bool constants to avoid overflow with some compilers.
* Corrected float constants with float data type.
* Added f suffix for float constants.
* Corrected float constant to avoid overflow with initializing lists.
* Corrected float initializing list with float input.
* Corrected bool constant with initializing list.
* Corrected float and bool values with initializing lists.
* Fixed wrong constant.
* Fixed issue with 1x1 input picture for resize.
* ResizeBilinear default values on import fix
Signed-off-by: raver119 <raver119@gmail.com>
2019-12-05 22:05:33 +03:00
Alex Black
2052ce7026
Various pre-release fixes ( #111 )
...
* Various fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix default dtypes for MaxPoolWithArgmax
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-12-05 14:20:03 +11:00
Fariz Rahman
0d14032d26
TF Updates ( #87 )
...
* tf updates
* pom
* copyright
* graphrunner tests
* gpu test
* getSessionOptionsConfigProto
* dtype fix
* Small fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* cast graphs
* savemodel test fix
* testresource instead of local
* Logging level
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* gson dependency issue fix; fix GraphRunnerTest for no session options config case
Signed-off-by: Alex Black <blacka101@gmail.com>
* Final tweaks
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* few minor fixes
Signed-off-by: raver119 <raver119@gmail.com>
* one more fix
Signed-off-by: raver119 <raver119@gmail.com>
* Tweak configuration for GraphRunnerTest
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* nd4j align config
* tf warmup
2019-12-04 17:11:03 +11:00
raver119
25b3cd9b80
[WIP] CUDA tests ( #95 )
...
* one more CI test
Signed-off-by: raver119 <raver119@gmail.com>
* export additional symbols
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* one more tweak for linux
Signed-off-by: raver119 <raver119@gmail.com>
* fix dtype in few tests
Signed-off-by: raver119 <raver119@gmail.com>
* missing sync and memset in couple of tests
Signed-off-by: raver119 <raver119@gmail.com>
* copy step for libnd4j cuda
Signed-off-by: raver119 <raver119@gmail.com>
* no-op on empty for adjust hue/contrast/saturation
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA_VERBOSE Off
Signed-off-by: raver119 <raver119@gmail.com>
* BroadcastBool fix + few tests
Signed-off-by: raver119 <raver119@gmail.com>
* trigger jenkins
Signed-off-by: raver119 <raver119@gmail.com>
* trigger jenkins
Signed-off-by: raver119 <raver119@gmail.com>
* - ignore couple of warnings
- remove redundant compiler options
Signed-off-by: raver119 <raver119@gmail.com>
2019-12-02 21:37:21 +03:00
Alexander Stoyakin
5e152c0d9a
TF import tests - adding missing operations ( #65 )
...
* Add and fix mappings.
* Intermediate
* Added and fixed some mappings
* Added op
* Missing constructors added.
* Added new mappings
* SDImage wrappers and minor tweaks.
* Added missing constructor
* Some corrections
* Cleanup
* Small fixes
* Ops wrappers
* Minor fixes.
* Max Pooling
* MaxPoolWithArgmax
* Some fixes
* Ignores for failures
* Some ops fixed.
* Some fixes
* Missing package added
* Some fixes
* Ignored tests fixed.
* Some fixes
* Merge master
* bitcast fix
Signed-off-by: raver119 <raver119@gmail.com>
* Bitcast fixed
2019-12-02 21:23:06 +11:00
Alex Black
8123d9fa9b
SameDiff: Add Java-level assertion check/exception ( #96 )
...
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-12-02 18:07:54 +11:00
Alex Black
2be47082c9
#8470 TrainingConfig json fix for Evaluation instances ( #93 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-30 20:08:30 +11:00
Alex Black
35ab4a72ba
TF import test resources loading precision fixes ( #92 )
...
* Fix precision issues when loading from CSV
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-30 18:58:37 +11:00
Alex Black
4fb9fa7748
Add ND4J namespaces ( #83 )
...
* Add NDValidation
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add bitwise namespace
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Math namespace op constructor fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Constructor fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add Math namespace
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Update NDBitwise
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add random namespaces
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Update
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* NN namespace
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-30 18:39:32 +11:00
Yurii Shyrma
d19eeaec52
Shyrma causal conv1d ( #90 )
...
* - add causal mode of padding to convolutions
Signed-off-by: Yurii <iuriish@yahoo.com>
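For context on the causal padding mode being added above: a minimal sketch under assumed semantics (left-only zero padding of (kernel_size − 1) × dilation, so output[t] depends only on x[0..t]); this is an illustration, not the libnd4j convolution kernel.

```cpp
// Hypothetical sketch of causal 1-D convolution: each tap only looks backwards
// in time, and out-of-range reads act as the implicit left zero-padding.
#include <cstdio>
#include <vector>

std::vector<float> causalConv1d(const std::vector<float>& x,
                                const std::vector<float>& w,   // w[j] looks back j*dilation steps
                                int dilation = 1) {
    std::vector<float> out(x.size(), 0.0f);
    for (long t = 0; t < static_cast<long>(x.size()); ++t) {
        float acc = 0.0f;
        for (long j = 0; j < static_cast<long>(w.size()); ++j) {
            const long src = t - j * dilation;   // never reads past time t
            if (src >= 0)                        // implicit left zero-padding of (k-1)*dilation
                acc += w[j] * x[src];
        }
        out[t] = acc;
    }
    return out;
}

int main() {
    auto y = causalConv1d({1.f, 2.f, 3.f, 4.f}, {0.5f, 0.5f});
    for (float v : y) std::printf("%.2f ", v);   // 0.50 1.50 2.50 3.50
    std::printf("\n");
    return 0;
}
```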
* - add additional tests for causal conv1d
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add causal mode for cuda conv kernels
Signed-off-by: Yurii <iuriish@yahoo.com>
* Java side of Conv1D changes
Signed-off-by: raver119 <raver119@gmail.com>
* Add Conv1DDerivative op
Signed-off-by: Alex Black <blacka101@gmail.com>
* Causal Conv1D gradient checks
Signed-off-by: Alex Black <blacka101@gmail.com>
* Tweaks
Signed-off-by: Alex Black <blacka101@gmail.com>
* - add causal padding mode to conv2d_bp
Signed-off-by: Yurii <iuriish@yahoo.com>
* More thorough causal conv1d tests
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-11-29 14:14:30 +03:00
Samuel Audet
5e07998e59
Add support for CUDA 10.2 ( #89 )
2019-11-29 16:31:03 +11:00
shugeo
009007120b
Shugeo_release_fixes3 ( #81 )
...
* Implementation for non_max_suppression_v3 was added. Initial version
* Added check for overcoming the threshold.
* Added definition for V3 method.
* java remapping for NonMaxSuppressionV3
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed proper processing of an empty output, and the test.
* Refactored op to reduce threshold data to float.
* Implemented cuda-based helper for non_max_suppression_v3 op.
* Fixed fake_quant_with_min_max_vars op.
* Fixed tests with float numbers.
* - assert now stops execution
- sortByKey/sortByValue now have input validation
Signed-off-by: raver119 <raver119@gmail.com>
* missing var
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed proper processing for zero max_size inputs.
* Refactored kernel callers.
* Fixed return statement for logdet op helper.
* Refactored unsorted segment SqrtN op.
* get back 8 tail bytes on CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored segment prod ops and helpers for cuda and tests.
* Additional test.
* CudaWorkspace tests updated for 8 tail bytes
Signed-off-by: raver119 <raver119@gmail.com>
* special atomic test
Signed-off-by: raver119 <raver119@gmail.com>
* atomicMul/atomicDiv fix for 16bit values
Signed-off-by: raver119 <raver119@gmail.com>
* Eliminated wasted prints.
2019-11-28 21:08:51 +03:00
Alex Black
abd2017a0a
Add ignore for known issue with non_max_suppression_v2/float16 test ( #85 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-27 16:35:05 +11:00
Alex Black
8843c7377a
Update shaded Jackson version to 2.10.1 ( #82 )
...
* Update shaded Jackson version to 2.10.1
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove no longer needed scala compiler plugin from UI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix op name for BitwiseAnd op
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* TimeDistributedLayer mask array fix + test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-26 19:24:38 +11:00
raver119
aa44fd6850
one more BitCast test
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-25 08:52:11 +03:00
Alex Black
e910ce75ec
Various Fixes ( #75 )
...
* #8431 Cast loss function weights array automatically
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add 'regex verbose mode' printing (ExecDebugListener) for TFGraphTestAllSameDiff
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Class import mapping fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Reshape fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Don't swallow first exception in NativeOpExecutioner.exec(CustomOp)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-23 20:06:12 +11:00
shugeo
4187190609
Shugeo release fix2 ( #70 )
...
* Corrected input checking and tests for bitcast op.
* Fixed an issue with non_max_suppression form generation and processing when a score threshold is given.
* Fixed bilinear resize kernel and tests.
* push for Serhii
Signed-off-by: raver119 <raver119@gmail.com>
* Added test for nearest_neighbor resize with int input.
* Added data type check for input/output match.
* Eliminate error in macros.
* Improved output message for type checking.
* Fixed input/output types for op.
* Eliminated wasted logging.
* Refactored resize_bilinear helper for multithreading for cpu platform.
* Cosmetic changes only.
* Fixed error for string substitution.
* Skip test for cbow_batch with cuda.
* fix for resizeNearestNeighbor output dtype
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored non_max_suppression helper.
* Refactored shape generation and input handling.
* Added additional test.
2019-11-22 22:42:44 +03:00
Alex Black
4a2fedf3e7
DL4J: Add Sparse multi-class cross entropy loss function ( #72 )
...
* #8432 Add sparse mcxent loss
Signed-off-by: AlexDBlack <blacka101@gmail.com>
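As background for the sparse MCXENT loss referenced above: with integer class labels instead of one-hot targets, the loss is the negative log-softmax probability of the true class. The sketch below is a hedged illustration of that standard formula, not LossSparseMCXENT itself.

```cpp
// Minimal sketch of sparse multi-class cross entropy: loss = -log softmax(logits)[label],
// computed with the usual max-shift for numerical stability. Hypothetical helper only.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

double sparseCrossEntropy(const std::vector<double>& logits, int label) {
    const double m = *std::max_element(logits.begin(), logits.end());
    double denom = 0.0;
    for (double z : logits)
        denom += std::exp(z - m);
    const double logProb = (logits[label] - m) - std::log(denom);
    return -logProb;                               // negative log-likelihood of the true class
}

int main() {
    std::printf("%.6f\n", sparseCrossEntropy({2.0, 1.0, 0.1}, 0));  // ~0.417
    return 0;
}
```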
* Fixes for LossSparseMCXENT
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* add simple debugging listener for SameDiff exec debugging
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Extra gradient check + header polishing
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-22 18:54:31 +11:00
Samuel Audet
ff73e6da3f
ND4J: Fix OpenBLAS loading for nd4j-native ( #64 )
...
* ND4J: Fix OpenBLAS loading for nd4j-native and remove bundling of OpenMP
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
* Bundle back libgomp.so.1 for Linux
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
* Readd preload directories for ARM
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
* Add back preloads for GCC on Windows
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
* Add explicit preloadpaths for ARM and POWER to bundle correct library
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
2019-11-21 15:54:41 +03:00
raver119
064a56ccf1
Few fixes ( #66 )
...
* skip legacy transforms execution in case of empty input arrays
Signed-off-by: raver119 <raver119@gmail.com>
* - BroadcastBool ops now accept extraParams to make MatchCondition possible
- TrueBroadcastHelper now uses samediff::threads
Signed-off-by: raver119 <raver119@gmail.com>
* java side
Signed-off-by: raver119 <raver119@gmail.com>
* trigger jenkins
Signed-off-by: raver119 <raver119@gmail.com>
* update LessThanOrEqual opNum mapping
Signed-off-by: raver119 <raver119@gmail.com>
* update LessThanOrEqual opNum mapping
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-21 15:43:03 +03:00
raver119
83cb0d9329
[WIP] Create and small fix ( #67 )
...
* - create op
- skip exec for empty inputs for non_max_suppression
- EmptyHandling idea
Signed-off-by: raver119 <raver119@gmail.com>
* Create op and mapping for it
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-21 13:31:20 +03:00
raver119
3f38900c33
J9+ -> J8 ByteBuffer fix ( #59 )
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-20 07:43:17 +03:00
Yurii Shyrma
66b84b38cf
Shyrma mmul ( #58 )
...
* - get rid of some copy procedures in mmulHelper ops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further work on embedding cuda api for batched gemm (cublasGemmBatchedEx) in our mmulHelper class
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further work on cuda batched gemm api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - write own cuda kernel performing batched gemm
Signed-off-by: Yurii <iuriish@yahoo.com>
* missing include in MmulHelper
Signed-off-by: raver119 <raver119@gmail.com>
* - forgot to keep the previous correct kernels for mmulNxN in the code, since the new one may fail for some reason in the future
Signed-off-by: Yurii <iuriish@yahoo.com>
* disable old tensordot
Signed-off-by: raver119 <raver119@gmail.com>
* - rewrite cuda kernels for usualGemm and usualGemv
Signed-off-by: Yurii <iuriish@yahoo.com>
* - profiling mmul helpers
Signed-off-by: Yurii <iuriish@yahoo.com>
* - prints to check shapes were added
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correct type of output array C in mmulNxN
Signed-off-by: Yurii <iuriish@yahoo.com>
* - take into account possible nans in C array
Signed-off-by: Yurii <iuriish@yahoo.com>
* slightly change numThreads message
Signed-off-by: raver119 <raver119@gmail.com>
* - make corrections in accordance with the notes given in PR review
Signed-off-by: Yurii <iuriish@yahoo.com>
2019-11-19 15:39:36 +02:00
Alex Black
da1944e8e1
SameDiff TF import ( #49 )
...
* Added implementation files for image_resize and resize_bicubic ops.
* Image resize and image.resize_bicubic ops implementation. Initial revision.
* Minor fix
* Some TF imports disabled.
* Finished with infrastructure development for image.resize_bilinear op and image_resize op implementation.
* Refactored resize methods.
* Added processing for the Mitchellcubic algorithm.
* adjust_contrast
* Small fix for TF import expected value loading when variable name starts with the test name
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tests
* Tests added.
* Removed tf names absent in mapping.
* Some fixes.
* Small fixes
* Minor change
* Some failing tests.
* Disable failed test
* Ignore some tests
* Fix import class mapping
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix float property mapping (flatbuffers)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Override equality function for model 'dropout'
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fail tests
* Failed tests ignored temporarily.
* Minor fixes
* Small fix
* Conflict resolved
* Default implementations of tensorflowName and onnxName
2019-11-19 22:44:29 +11:00
raver119
ce2ef20f96
additional reverse signatures
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-19 13:54:07 +03:00
raver119
db7ca956c5
[WIP] Mish ( #55 )
...
* Mish activation function and its derivative
Signed-off-by: raver119 <raver119@gmail.com>
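For reference, Mish is defined as mish(x) = x · tanh(softplus(x)), and its derivative follows from the chain rule with softplus'(x) = sigmoid(x). A small standalone sketch of both is shown below; it illustrates the standard formula only and is not the project's kernel code.

```cpp
// Hypothetical sketch of the Mish activation and its derivative.
#include <cmath>
#include <cstdio>

double mish(double x) {
    const double sp = std::log1p(std::exp(x));      // softplus(x) = ln(1 + e^x)
    return x * std::tanh(sp);
}

// d/dx mish(x) = tanh(sp) + x * (1 - tanh(sp)^2) * sigmoid(x)
double mishDerivative(double x) {
    const double sp  = std::log1p(std::exp(x));
    const double t   = std::tanh(sp);
    const double sig = 1.0 / (1.0 + std::exp(-x));  // softplus'(x) = sigmoid(x)
    return t + x * (1.0 - t * t) * sig;
}

int main() {
    for (double x : {-2.0, 0.0, 2.0})
        std::printf("x=%+.1f  mish=%.6f  mish'=%.6f\n", x, mish(x), mishDerivative(x));
    return 0;
}
```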
* signature fix
Signed-off-by: raver119 <raver119@gmail.com>
* mish as activation for dl4j
Signed-off-by: raver119 <raver119@gmail.com>
* javadoc
Signed-off-by: raver119 <raver119@gmail.com>
* minor optimization
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-18 13:21:26 +03:00
raver119
c5cbdcd8f4
[WIP] clang for jcpp ( #53 )
...
* clang as compiler for jcpp
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* we don't need macos profile
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-17 09:45:30 +03:00
Alex Black
09a827fb6d
Fixes and pre-release QA ( #51 )
...
* #8395 Keras import - support scaled identity weight init
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More Keras scaled weight init fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8352 Deprecate duplicate SamplingDataSetIterator class
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove /O2 optimization for faster CUDA build
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tweak regression test precision for CUDA
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix edge cases for buffer creation
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Update MKLDNN validation tests to new helper enable/disable settings
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Delete debugging class
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* MKLDNN test - add proper skip for CUDA backend
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Align WeightInitUtil with weight init classes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix for SameDiff test layers weight init when using IWeightInit classes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-16 17:04:29 +11:00
raver119
1780dcc883
[WIP] Small fixes here and there ( #50 )
...
* one range test
Signed-off-by: raver119 <raver119@gmail.com>
* few Context convenience signatures
Signed-off-by: raver119 <raver119@gmail.com>
* one more range test
Signed-off-by: raver119 <raver119@gmail.com>
* "range" "fix"
Signed-off-by: raver119 <raver119@gmail.com>
* adjust_contrast_v2 now allows scale factor to be provided via input_variable
Signed-off-by: raver119 <raver119@gmail.com>
* adjust_contrast now allows scale factor as variable too
Signed-off-by: raver119 <raver119@gmail.com>
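The adjust_contrast ops mentioned above mirror TF-style semantics: each pixel is pushed away from (or pulled toward) its channel's mean by the contrast factor, out = (x − mean_c) · factor + mean_c. The sketch below assumes those semantics and is not the libnd4j helper.

```cpp
// Minimal, assumed-semantics sketch of per-channel contrast adjustment.
#include <cstdio>
#include <vector>

// image laid out as [pixel][channel]
void adjustContrast(std::vector<std::vector<float>>& image, float factor) {
    if (image.empty()) return;
    const size_t channels = image[0].size();
    for (size_t c = 0; c < channels; ++c) {
        float mean = 0.0f;
        for (const auto& px : image) mean += px[c];
        mean /= static_cast<float>(image.size());
        for (auto& px : image)
            px[c] = (px[c] - mean) * factor + mean;  // scale deviation from the channel mean
    }
}

int main() {
    std::vector<std::vector<float>> img = {{0.2f, 0.8f}, {0.6f, 0.4f}};
    adjustContrast(img, 2.0f);                        // doubles contrast per channel
    for (const auto& px : img) std::printf("(%.2f, %.2f) ", px[0], px[1]);
    std::printf("\n");                                // (0.00, 1.00) (0.80, 0.20)
    return 0;
}
```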
* bitcast shape tests
Signed-off-by: raver119 <raver119@gmail.com>
* BitCast import dtype added
Signed-off-by: raver119 <raver119@gmail.com>
* few more BitCast signatures
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-15 17:04:29 +03:00
raver119
c5b912bddf
few changes for openblas and jcpp preloads (on macos) ( #46 )
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-14 19:50:24 +03:00
raver119
1eb3de90d7
[WIP] Platform helpers switches ( #44 )
...
* - platform helpers can now be disabled on a per-op basis via Context::allowHelpers
- java has access to it as well
Signed-off-by: raver119 <raver119@gmail.com>
* global platform-helpers trigger
Signed-off-by: raver119 <raver119@gmail.com>
* few signatures renamed
Signed-off-by: raver119 <raver119@gmail.com>
* - few new env variables to follow
- maxThreads/masterThreads differentiation
Signed-off-by: raver119 <raver119@gmail.com>
* Javadoc update
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-14 14:35:02 +03:00
Alex Black
47d19908f4
Various fixes ( #43 )
...
* #8172 Enable DL4J MKLDNN batch norm backward pass
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8382 INDArray.toString() rank 1 brackets / ambiguity fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8308 Fix handful of broken links (inc. some in errors)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Unused dependencies, round 1
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Unused dependencies, round 2
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Unused dependencies, round 3
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Uniform distribution TF import fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-14 19:38:20 +11:00
raver119
48df1acdfb
[WIP] ThreadPool ( #8 )
...
This PR removes OpenMP use in 95% of cases
2019-11-13 17:04:59 +03:00
Alex Black
1d96bb9e6e
SameDiff op runtime benchmarking listener ( #42 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-12 22:51:09 +11:00
Alex Black
18c01f5bdc
Add SameDiff memory reuse memory manager (array cache) ( #39 )
...
* Attention op comments
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ArrayCacheMemoryMgr - first pass
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tweak array cache for use with SameDiff identity arrays
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ArrayCacheMemoryMgr javadoc and properly get max memory
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* LRU cache policy + add tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Resize arrays internally if required for ArrayCacheMemoryMgr
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Test improvement
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small polish
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-12 21:15:44 +11:00
AlexDBlack
0107fb10ab
Merge remote-tracking branch 'konduit/master'
2019-11-08 18:11:45 +11:00
Alex Black
2f84ea666d
Uniform distribution op tweaks + 'specified output dtype' constructor ( #38 )
...
* Uniform distribution op tweaks + 'specified output dtype' constructor
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Validation tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-08 18:08:25 +11:00
Alex Black
24980efde3
Fix LogSumExp along dimension ( #35 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
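For context on the LogSumExp reduction being fixed above: the numerically stable form subtracts the maximum before exponentiating, logsumexp(x) = max(x) + log Σ exp(x_i − max(x)). A minimal sketch of that standard formulation follows (not the nd4j implementation).

```cpp
// Hypothetical sketch of a numerically stable log-sum-exp over one dimension.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

double logSumExp(const std::vector<double>& x) {
    const double m = *std::max_element(x.begin(), x.end());  // shift by the max to avoid overflow
    double s = 0.0;
    for (double v : x)
        s += std::exp(v - m);
    return m + std::log(s);
}

int main() {
    // Works even where a naive exp() would overflow.
    std::printf("%.6f\n", logSumExp({1000.0, 1000.0, 1000.0}));  // 1000 + log(3) ~ 1001.0986
    return 0;
}
```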
2019-11-07 23:36:15 +11:00
Samuel Audet
73b5a508fc
Update dependencies to just released JavaCPP and JavaCV 1.5.2
...
Signed-off-by: Samuel Audet <samuel.audet@gmail.com>
2019-11-07 17:57:34 +09:00
longzhendong
52c9918c6f
Testing slice and concat ( #8362 )
2019-11-07 14:47:37 +11:00
Alex Black
df8b4e607a
SameDiff: make use of DeviceLocal configurable ( #32 )
...
* #8340 make DeviceLocal configurable
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Javadoc
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J SameDiff layers: use SingleThreadArrayHolder to avoid assigns + DeviceLocalNDArray overhead
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Javadoc
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-06 18:52:41 +11:00
AlexDBlack
7583ccfa15
Merge
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-06 13:28:03 +11:00
Alex Black
948ebef41c
Op Fixes ( #28 )
...
* #8280 biasadd_bp nchw arg fixes (java side) + test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8285 Concat op Java side fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Concat op cpp fix - allow dynamic axis to be negative, same as static axis
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ignores for deconv3d import tests until deconv3d_tf op is implemented
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-05 00:05:04 +11:00
Alex Black
4763547c9e
Add Deconv3DTF ( #25 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-04 12:42:11 +11:00
Alex Black
5e312374d0
TF deconv3d import ( #8341 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-02 22:57:24 +11:00
AlexDBlack
2844f8b69a
Merge remote-tracking branch 'konduit/master'
2019-11-02 19:00:47 +11:00
Alex Black
9efd811508
Use DL4J workspaces for SameDiff layers in MLN/CG ( #23 )
...
* #8329 DL4J workspace integration for SameDiff layers
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix bug for Nd4j.createUninitializedDetached for scalars (length 0 shape array)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* SameDiff output layer, graph vertex, various fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Javadoc
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-02 17:42:01 +11:00
Alex Black
d82877b18b
Various SameDiff fixes ( #21 )
...
* MKLDNN LSTM forward implementation (disabled pending #8331 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8318 add SameDiff.calculateGradientsAndOutputs
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Disable mkldnn backprop for now - pending fix, issue #8335
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8337 Fix CudaExecutioner unnecessary result array allocation/replacement
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small FlatBuffers serde fix, UInt8
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8135 ImagePreProcessingScaler - add segmentation support
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8319 Ensure listeners are called when they are supposed to be called
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8214 UNet (non-pretrained) last conv layer kernel size fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-11-02 11:25:53 +11:00
Alexander Stoyakin
b816845797
Fixing nd4j-cuda build ( #20 )
...
* Roll back recent fix to restore build.
* Fix compilation.
* presets updated
Signed-off-by: raver119 <raver119@gmail.com>
2019-11-01 15:59:29 +02:00
Alexander Stoyakin
45a40c8a89
DL4J/ND4J: Do pass on integer casts ( #15 )
...
* Int cast fixes.
* Revert "Int cast fixes."
This reverts commit aa36e8ca
* Int casts
* Int cast
* Int casts
* Get rid of int casts. Dropping deprecated aggregate ops.
* java scatterUpdate changes
Signed-off-by: raver119 <raver119@gmail.com>
* c++ scatterUpdate changes
Signed-off-by: raver119 <raver119@gmail.com>
* Remove aggregated ops.
* Restored test
* Tests restored.
* Minor fixes
2019-10-31 11:23:09 +02:00
raver119
5a4d2e8b31
[WIP] SVD ( #16 )
...
* - new SVD constructor
- OrthogonalDistribution now uses SVD custom op
Signed-off-by: raver119 <raver119@gmail.com>
* shapes fixed
Signed-off-by: raver119 <raver119@gmail.com>
2019-10-28 12:31:01 +03:00
Yurii Shyrma
029a69a835
Shyrma bn mkl bp ( #14 )
...
* - write code for new batchnorm backprop
Signed-off-by: Yurii <iuriish@yahoo.com>
* - testing batchnorm backprop
Signed-off-by: Yurii <iuriish@yahoo.com>
* - write code for batchnorm backprop based on mkl dnn api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - testing and fixing bugs in batchnorm_bp mkl dnn
Signed-off-by: Yurii <iuriish@yahoo.com>
* - made corrections required by reviewer
Signed-off-by: Yurii <iuriish@yahoo.com>
* - change name in java wrapper for batchnorm op
Signed-off-by: Yurii <iuriish@yahoo.com>
2019-10-26 14:14:21 +03:00
Alex Black
d333d29099
SameDiff cleanup and fixes ( #12 )
...
* #8160 Remove resolvePrepertiesFromSameDiffBeforeExecution
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* SameDiff API cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More SameDiff cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8248 Switch SameDiff variable init from lazy to creation time for more predictable behaviour
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8252 TanhDerivative javadoc
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8225 Deconvolution2D input validation
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8265 Switch SameDiff.outputs() to user settable, instead of unreliable 'best guess'
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8224 SameDiff.zero and .one create constants, not variables
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More cleanup and fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small test fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J SameDiff fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Re-add hack for Deconvolution2DLayer until #8315 is resolved
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8270 Move CUDA device/version logging to Java; can be disabled via existing org.nd4j.log.initialization system property
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* All ND4J init logging checks system property
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove redundant device logging
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* One more fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* UX improvements
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Deconv fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add deconv tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove debug code
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-10-26 12:38:08 +11:00
Alex Black
730442ae21
Remove bad import ( #13 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-10-24 20:51:15 +11:00
Alex Black
3f0b4a2d4c
SameDiff execution, TF and memory management overhaul ( #10 )
...
* SameDiff execution memory management improvements, round 1
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Round 2
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Round 3
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Clear node outputs closed array references; Slight change to OpValidation internals to not rely on cached op outputs
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Next steps
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Next step
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More polish
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add WeakIdentityHashmap
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Session fixes for control ops and next steps
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* First steps for training session + in-line updating
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Next steps
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix losses and history during training
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* BiasAdd and other fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Don't use SDVariable.getArr() in TFGraphTestAllHelper (import tests)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* First steps for new dependency tracking approach
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Start integrating dependency tracking for memory management
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Non-control op dependency tracking works/passes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Switch/merge
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Next steps
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup and next steps
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix issue dependency tracking for initial variables/constants
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add check for aliases when determining if safe to close array
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* First pass on new TF graph import class
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Import fixes, op fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup and fixes for new TF import mapper
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Partial implementation of new dependency tracker
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Next steps
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* AbstractDependencyTracker for shared code
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Overhaul SameDiff graph execution (dependency tracking)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More fixes, cleanup, next steps
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add no-op memory manager, cleanup, fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix switch dependency tracking
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* INDArray.toString: no exception on closed arrays, just note closed
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix enter and exit dependency tracking
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* TensorArray memory management fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add unique ID for INDArray instances
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix memory management for NextIteration outputs in multi-iteration loops
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove (now unnecessary) special case handling for nested enters
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Handle control dependencies during execution; javadoc for memory managers
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup, polish, code comments, javadoc
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup and more javadoc
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add memory validation for all TF import tests - ensure all arrays (except outputs) are released
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Clean up arrays waiting on unexecuted ops at the end of execution
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fixes for enter op memory management in the context of multiple non-nested loops/frames
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix order of operation issues for dependency tracker
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Always clear op fields after execution to avoid leaks or unintended array reuse
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Re-implement dtype conversion
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix for control dependencies execution (dependency tracking)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix TF import overrides and filtering
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix for constant enter array dependency tracking
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J Fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More DL4J fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup and polish
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More polish and javadoc
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More logging level tweaks, small DL4J fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix to DL4J SameDiffLayer
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix empty array deserialization, add extra deserialization checks
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* FlatBuffers control dep serialization fixes; test serialization as part of all TF import tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Variable control dependencies serialization fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix issue with removing inputs for ops
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* FlatBuffers NDArray deserialization fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* FlatBuffers NDArray deserialization fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Final cleanup/polish
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-10-23 21:19:50 +11:00
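The dependency-tracking rework described in the entry above (AbstractDependencyTracker, safe-to-close checks, releasing arrays once nothing still needs them) can be pictured with the toy sketch below. It is not the DL4J/SameDiff implementation; every class and method name here is an illustrative assumption, and only the core idea (an array may be released once every op that still depends on it has executed) follows the commit messages.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy dependency tracker: values of type V are kept alive until all dependees (D)
// that still require them have been marked as executed. Names are illustrative only.
class ToyDependencyTracker<V, D> {
    private final Map<V, Set<D>> pending = new HashMap<>();

    // Record that value v must stay alive until dependee d has executed
    void addDependency(V v, D d) {
        pending.computeIfAbsent(v, k -> new HashSet<>()).add(d);
    }

    // Called when dependee d has executed; remove it from every value's pending set
    void markSatisfied(D d) {
        for (Set<D> deps : pending.values()) {
            deps.remove(d);
        }
    }

    // A value may be released (closed) once nothing still depends on it
    boolean safeToRelease(V v) {
        Set<D> deps = pending.get(v);
        return deps == null || deps.isEmpty();
    }
}
```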
Alexander Stoyakin
f31661e13b
Merge pull request #7 from KonduitAI/asto_nd4s_10172019
...
KDTree optimization
2019-10-23 12:11:25 +03:00
Alexander Stoyakin
9e5799847a
TF names fixed.
2019-10-16 19:50:18 +03:00
Alexander Stoyakin
502bedf5d5
Register ops for TF import.
2019-10-16 19:39:04 +03:00
Alexander Stoyakin
ec722b20ee
TF names added
2019-10-16 19:29:19 +03:00
Alexander Stoyakin
99d77e1384
Ops exported for SameDiff
2019-10-16 19:16:47 +03:00
Alexander Stoyakin
d5002b14c7
New ops wrappers
2019-10-16 12:59:08 +03:00
Alex Black
b9a4b0f25f
Update links ( #8292 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-10-16 12:59:52 +11:00
Robert Altena
50b13fadc8
nd4j-api cleanup. ( #8273 )
...
* nd4j-api cleanup.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* restore deleted schemes.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-10-08 21:48:22 +11:00
Robert Altena
1f4ad08305
refactoring activations. ( #8261 )
...
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-10-03 20:35:27 +10:00
Alex Black
f98f8be7b6
SameDiff ops ( #8247 )
...
* update javadocs and a few method signatures
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add PRelu op
Signed-off-by: Ryan Nett <rnett@skymind.io>
* test and fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add PRelu op
Signed-off-by: Ryan Nett <rnett@skymind.io>
* test and fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* slightly better test
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-19 11:52:20 +10:00
Steven Lang
d58a4b45b1
Fix Nadam updater clone missing schedule ( #8243 )
...
Signed-off-by: Steven Lang <steven.lang.mz@gmail.com>
2019-09-17 22:56:49 +03:00
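For context on the Nadam clone fix above: the bug class is an updater configuration whose clone() copies the fixed learning rate but drops the learning-rate schedule. The snippet below is a minimal illustration under that assumption, not the actual Nd4j Nadam class.

```java
// Illustrative only - not the actual org.nd4j.linalg.learning.config.Nadam class.
// The failure mode: clone() copies the scalar learning rate but forgets the schedule.
class UpdaterConfigSketch implements Cloneable {
    double learningRate;
    Object learningRateSchedule; // stands in for an ISchedule instance

    @Override
    public UpdaterConfigSketch clone() {
        UpdaterConfigSketch c = new UpdaterConfigSketch();
        c.learningRate = this.learningRate;
        c.learningRateSchedule = this.learningRateSchedule; // the copy that must not be omitted
        return c;
    }
}
```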
Robert Altena
83d958d536
Sparse matrix refactoring. ( #8238 )
...
* remove sparse method from INDArray.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove gemm
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove usage of Nd4j.sparseFactory
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Nd4j.sparseFactory removed.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* sparseNDArray deleted.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove more sparse calls and constants.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove SparseBlasWrapper.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* delete BaseSparseBlasWrapper.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove 3 sparse factory classes.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* delete SparseCPULevel.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* deletes JcusparseLevel, CUDASparselevel.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* delete nativeCPU sparse classes.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* removes sparse methods from NDArrayFactory.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* more deletes.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* delete (ignored) tests. BaseSparseNDArray.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* deletes ISparseNDArray.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove sparse methods from INDArray.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* deletes sparse classes.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-17 22:56:29 +03:00
raver119
979ef13c0b
fix build issues
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-13 11:55:13 +03:00
raver119
2bd69c004c
[WIP] Fixed signatures. SameDiff tests ( #258 ) ( #8233 )
...
* Fixed signatures. SameDiff tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Tests fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Test fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Small fix
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Fixed test
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2019-09-12 19:25:03 +03:00
AlexDBlack
a66e03355e
Merge remote-tracking branch 'fork/master'
2019-09-12 12:20:57 +10:00
raver119
07901ceb69
few more mkldnn dependencies removed
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-12 04:55:59 +03:00
raver119
98e2814879
Platform helpers ( #8216 )
...
* platform helpers draft
Signed-off-by: raver119 <raver119@gmail.com>
* typo
Signed-off-by: raver119 <raver119@gmail.com>
* disable platform cmake
Signed-off-by: raver119 <raver119@gmail.com>
* another draft
Signed-off-by: raver119 <raver119@gmail.com>
* mkldnn convolution refactored
Signed-off-by: raver119 <raver119@gmail.com>
* minor tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* one more safety check
Signed-off-by: raver119 <raver119@gmail.com>
* prototype works
Signed-off-by: raver119 <raver119@gmail.com>
* meh
Signed-off-by: raver119 <raver119@gmail.com>
* force static library mode for mkldnn
Signed-off-by: raver119 <raver119@gmail.com>
* - ismax fix
- experimental arg fix
- don't enforce openblas on Apple hardware
Signed-off-by: raver119 <raver119@gmail.com>
* bunch of small fixes
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* declare concurrent
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* - MKLDNN version upgrade to 1.0.2
- avgpool2d/maxpool2d APIs update
Signed-off-by: raver119 <raver119@gmail.com>
* - avgpool2d_bp/maxpool2d_bp APIs update
Signed-off-by: raver119 <raver119@gmail.com>
* - conv2d/batchnorm APIs update
Signed-off-by: raver119 <raver119@gmail.com>
* - lrn/conv2d_bp/conv3d/conv3d_bp APIs update
Signed-off-by: raver119 <raver119@gmail.com>
* all ops converted to MKLDNN 1.x
Signed-off-by: raver119 <raver119@gmail.com>
* bunch of tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* namespace for platform helpers
Signed-off-by: raver119 <raver119@gmail.com>
* make sure platform helpers aren't optimized out
Signed-off-by: raver119 <raver119@gmail.com>
* build cpu_features on x86 systems
Signed-off-by: raver119 <raver119@gmail.com>
* build cpu_features on x86 systems
Signed-off-by: raver119 <raver119@gmail.com>
* more of cpu_features
Signed-off-by: raver119 <raver119@gmail.com>
* - mkldnn removed from java
- cpu_features checks in CpuNDArrayFactory
Signed-off-by: raver119 <raver119@gmail.com>
* F16C definition renamed
Signed-off-by: raver119 <raver119@gmail.com>
* some mkldnn rearrangements
Signed-off-by: raver119 <raver119@gmail.com>
* check supported instructions before doing anything
Signed-off-by: raver119 <raver119@gmail.com>
* typo
Signed-off-by: raver119 <raver119@gmail.com>
* missed impl
Signed-off-by: raver119 <raver119@gmail.com>
* BUILD_PIC option
Signed-off-by: raver119 <raver119@gmail.com>
* conv2d fix
Signed-off-by: raver119 <raver119@gmail.com>
* avgpool3d fix
Signed-off-by: raver119 <raver119@gmail.com>
* avgpool3d_bp fix
Signed-off-by: raver119 <raver119@gmail.com>
* avgpool2d_bp leak fix
Signed-off-by: raver119 <raver119@gmail.com>
* avgpool3d_bp leak fix
Signed-off-by: raver119 <raver119@gmail.com>
* maxpool bp leaks fixed
Signed-off-by: raver119 <raver119@gmail.com>
* printf removed
Signed-off-by: raver119 <raver119@gmail.com>
* batchnorm fix
Signed-off-by: raver119 <raver119@gmail.com>
* AVX warning/error polishing
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More polish
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Polish
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* remove previous MKL-DNN support layer
Signed-off-by: raver119 <raver119@gmail.com>
* avx2 tweak
Signed-off-by: raver119 <raver119@gmail.com>
* allow static for apple
Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* exclude mkldnn in one more place
Signed-off-by: raver119 <raver119@gmail.com>
* exclude mkldnn in one more place
Signed-off-by: raver119 <raver119@gmail.com>
* restore OPENBLAS_PATH use
Signed-off-by: raver119 <raver119@gmail.com>
* add runtime check for avx/avx2 support
Signed-off-by: raver119 <raver119@gmail.com>
* convolution_auto
Signed-off-by: raver119 <raver119@gmail.com>
* Add logic for helper argument
* minor test fix
Signed-off-by: raver119 <raver119@gmail.com>
* few tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* few tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* skip OpTracker props for non-x86 builds
Signed-off-by: raver119 <raver119@gmail.com>
* linux arm isn't x86 :)
Signed-off-by: raver119 <raver119@gmail.com>
* avx-512
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA presets fix
Signed-off-by: raver119 <raver119@gmail.com>
* BUILD_PIC
Signed-off-by: raver119 <raver119@gmail.com>
* prefetchw for avx2
Signed-off-by: raver119 <raver119@gmail.com>
* BUILD_PIC again
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-11 21:50:28 +03:00
Alex Black
3e73e9b56e
Fixes, cleanup, enable now fixed tests, etc ( #254 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-11 23:37:24 +10:00
Ryan Nett
8a05ec2a97
Fix a couple SameDiff training issues ( #253 )
...
* fix execBackwards training issue
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix validation not specifying outputs
Signed-off-by: Ryan Nett <rnett@skymind.io>
* another fix for validation listeners and history
Signed-off-by: Ryan Nett <rnett@skymind.io>
* tests
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add single batch dataset output methods
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-10 20:38:23 -07:00
Alex Black
f91970734b
Another small fix ( #251 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-10 13:14:29 +10:00
Alex Black
3fb9aecb59
Fix for null shape in SameDiff.var validation ( #250 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-10 12:22:10 +10:00
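A quick usage sketch related to the SameDiff.var shape-validation fix above: variables are normally declared with an explicit data type and shape, or created from an existing array. This assumes the standard SameDiff/Nd4j factory methods and is an example only, not a reproduction of the exact validation path.

```java
import java.util.Arrays;

import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.factory.Nd4j;

public class SameDiffVarExample {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // Variable with an explicit data type and shape
        SDVariable w = sd.var("w", DataType.FLOAT, 3, 4);
        // Variable initialized from an existing array (shape taken from the array)
        SDVariable b = sd.var("b", Nd4j.zeros(DataType.FLOAT, 1, 4));
        System.out.println(Arrays.toString(w.getShape()));
        System.out.println(Arrays.toString(b.getShape()));
    }
}
```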
Alex Black
b582e69e3b
Small ND4J/SameDiff fixes ( #248 )
...
* #8218 Fix Nd4j.hstack rank 1 case
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8209 SameDiff: don't allow empty arrays (with 0s in shape) for variables
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-09 22:54:07 +10:00
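As a usage reference for the Nd4j.hstack rank-1 fix noted above (#8218), a minimal sketch: horizontally stacking two rank-1 vectors should produce a single, longer rank-1 vector. The expected result in the comments is an assumption based on the commit title.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class HstackRank1Example {
    public static void main(String[] args) {
        INDArray a = Nd4j.createFromArray(1.0f, 2.0f, 3.0f); // rank-1, length 3
        INDArray b = Nd4j.createFromArray(4.0f, 5.0f);       // rank-1, length 2
        INDArray stacked = Nd4j.hstack(a, b);                 // expected: rank-1, length 5
        System.out.println(stacked);
    }
}
```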
Robert Altena
c99f980513
INDArray javadoc ( #246 )
...
* javadoc
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* javadoc
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* javadoc
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* review fixes.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-09 13:09:31 +10:00
AlexDBlack
a76a44e198
Merge remote-tracking branch 'fork/master'
2019-09-05 22:04:25 +10:00
Alex Black
52d2795193
Another round of small fixes ( #241 )
...
* Small base spark test fix; ROC toString for empty ROC
Signed-off-by: Alex Black <blacka101@gmail.com>
* More fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-09-05 17:01:47 +10:00
Alex Black
87d873929f
Small LapackTest fix ( #240 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-05 14:25:20 +10:00
Ryan Nett
79867f5c5a
cleanup SDRNN and rnn ops ( #238 )
...
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-05 12:25:03 +10:00
Alex Black
7d85775934
Arbiter generic JSON ser/de fixes ( #237 )
...
* Arbiter generic JSON ser/de fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Javadoc fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-05 11:51:11 +10:00
Robert Altena
f25e3e71e5
remove lengthLong ( #236 )
...
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-05 11:19:38 +10:00
AlexDBlack
b7226bdd7a
Merge
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-05 00:54:11 +10:00
Alex Black
03c52ef9dd
Add SameDiff.bitwise namespace ( #232 )
...
* #8196 add SameDiff.bitwise namespace
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add BitsHammingDistance, add remaining bitwise ops to bitwise namespace
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-04 22:34:31 +10:00
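A hedged usage sketch for the SameDiff.bitwise namespace added above. The accessor name sd.bitwise() and the op names and/xor are assumptions based on the commit description; treat this as an illustration rather than a guaranteed API listing.

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.factory.Nd4j;

public class BitwiseNamespaceExample {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        // Integer inputs; bitwise ops operate on integer types
        SDVariable x = sd.constant("x", Nd4j.createFromArray(0b1100, 0b1010));
        SDVariable y = sd.constant("y", Nd4j.createFromArray(0b1010, 0b0110));

        // Assumed accessor/op names based on the commit description
        SDVariable and = sd.bitwise().and(x, y);
        SDVariable xor = sd.bitwise().xor(x, y);

        System.out.println(and.eval()); // expected [8, 2]
        System.out.println(xor.eval()); // expected [6, 12]
    }
}
```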
raver119
a90c7dd995
[WIP] Last set of changes ( #234 )
...
* mmul op instead of cublasSgemm
Signed-off-by: raver119 <raver119@gmail.com>
* transB
Signed-off-by: raver119 <raver119@gmail.com>
* jcpp handles
Signed-off-by: raver119 <raver119@gmail.com>
* bitwise and/or/xor
Signed-off-by: raver119 <raver119@gmail.com>
* bitwise and/or/xor mapping
Signed-off-by: raver119 <raver119@gmail.com>
* cuda/cublas version check
Signed-off-by: raver119 <raver119@gmail.com>
* add expected version
Signed-off-by: raver119 <raver119@gmail.com>
* cuda/cublas version check in java
Signed-off-by: raver119 <raver119@gmail.com>
* one more error check
Signed-off-by: raver119 <raver119@gmail.com>
* build fix
Signed-off-by: raver119 <raver119@gmail.com>
* build fix
Signed-off-by: raver119 <raver119@gmail.com>
* build fix
Signed-off-by: raver119 <raver119@gmail.com>
* one more fix
Signed-off-by: raver119 <raver119@gmail.com>
* skip CUDA version check for now
Signed-off-by: raver119 <raver119@gmail.com>
* better wording
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* few more tweaks
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-04 14:41:08 +03:00
Ryan Nett
e9454b8882
SDCNN cleanup pass ( #230 )
...
* SDCNN cleanup
Signed-off-by: Ryan Nett <rnett@skymind.io>
* NonNull annotations
Signed-off-by: Ryan Nett <rnett@skymind.io>
* better javadoc, NonNull fix for sconv
Signed-off-by: Ryan Nett <rnett@skymind.io>
* update builders to fix names
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* even more fixes
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix for null bias
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-04 00:44:01 -07:00
Alex Black
6cc887bee9
Rename flatbuffers DataType to DType ( #228 )
...
* Rename flatbuffers DataType enum to DType
Signed-off-by: Alex Black <blacka101@gmail.com>
* Rename flatbuffers DataType enum to DType
Signed-off-by: Alex Black <blacka101@gmail.com>
* Updates for flatbuffers datatype enum renaming
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-09-04 16:36:11 +10:00
Robert Altena
25b01f7850
javadoc and remove deprecated methods. ( #231 )
...
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-04 13:29:02 +10:00
raver119
7abc574eeb
Snapshot update ( #8194 )
...
* fix double consumption of rng on cpu
Signed-off-by: raver119 <raver119@gmail.com>
* Shyrma docs (#222 )
* - documenting and profiling matrix_set_diag cuda kernel
Signed-off-by: Yurii <yurii@skymind.io>
* - correct formula of pnorm pooling in cuda 2d/3d kernels
- remove helper matrix_diag which duplicates work of helper matrix_set_diag
Signed-off-by: Yurii <yurii@skymind.io>
* cublasHandle sharing + lock
Signed-off-by: raver119 <raver119@gmail.com>
* cublasHandle sharing + lock
Signed-off-by: raver119 <raver119@gmail.com>
* Documentation from serialization/deserialization in NLP (#221 )
* refactoring
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Javadocs
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Javadoc fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Cleanup
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* dedicated lock for getCudaCublasHandle
Signed-off-by: raver119 <raver119@gmail.com>
* Small fixes (#223 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ELU DL4J fixes (#224 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* javadoc (#225 )
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Small test compilation fix (#226 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8182 remove spark version suffix (#227 )
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* [WIP] Thread safety (#229 )
* sync after cublas*gemm
Signed-off-by: raver119 <raver119@gmail.com>
* mutex for CublasHelper
Signed-off-by: raver119 <raver119@gmail.com>
* don't store cublasHandle in LaunchContext, it's per-device anyway
Signed-off-by: raver119 <raver119@gmail.com>
* some printout
Signed-off-by: raver119 <raver119@gmail.com>
* check for field instead
Signed-off-by: raver119 <raver119@gmail.com>
* pew-pew
Signed-off-by: raver119 <raver119@gmail.com>
* don't release ContextBuffers until device changed
Signed-off-by: raver119 <raver119@gmail.com>
* small tweak
Signed-off-by: raver119 <raver119@gmail.com>
* some logging in sgemm
Signed-off-by: raver119 <raver119@gmail.com>
* stream sync
Signed-off-by: raver119 <raver119@gmail.com>
* some more logging
Signed-off-by: raver119 <raver119@gmail.com>
* some more error checks
Signed-off-by: raver119 <raver119@gmail.com>
* one fancy test
Signed-off-by: raver119 <raver119@gmail.com>
* one fancy test
Signed-off-by: raver119 <raver119@gmail.com>
* minor AffinityManager fix
Signed-off-by: raver119 <raver119@gmail.com>
* cudaEvent error logging improvement
Signed-off-by: raver119 <raver119@gmail.com>
* ConstantHelper thread safety
Signed-off-by: raver119 <raver119@gmail.com>
* - minor corrections in ConstantTadHelper
Signed-off-by: Yurii <yurii@skymind.io>
* ConstantShapeHelper thread safety
Signed-off-by: raver119 <raver119@gmail.com>
* ConstantTadHelper.cu updated
Signed-off-by: raver119 <raver119@gmail.com>
* logging off
Signed-off-by: raver119 <raver119@gmail.com>
* logging off
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-03 22:02:02 +03:00
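The cublasHandle bullets above (per-device handles, a dedicated lock for getCudaCublasHandle, not storing the handle in LaunchContext) describe a common pattern: one lazily created handle per device, shared across contexts and guarded for thread safety. The sketch below shows that pattern only; it is not the actual CublasHelper/LaunchContext code, and all names are made up.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative pattern only: one handle per device, created lazily and shared,
// instead of being stored inside each LaunchContext. H stands in for a native handle wrapper.
class PerDeviceHandleCache<H> {
    private final ConcurrentHashMap<Integer, H> handles = new ConcurrentHashMap<>();
    private final Object lock = new Object(); // dedicated lock for handle creation

    H get(int deviceId, Supplier<H> factory) {
        H existing = handles.get(deviceId);
        if (existing != null)
            return existing;
        synchronized (lock) {
            // another thread may have created the handle while we waited for the lock
            return handles.computeIfAbsent(deviceId, d -> factory.get());
        }
    }
}
```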
raver119
dddc8a1143
[WIP] Thread safety ( #229 )
...
* sync after cublas*gemm
Signed-off-by: raver119 <raver119@gmail.com>
* mutex for CublasHelper
Signed-off-by: raver119 <raver119@gmail.com>
* don't store cublasHandle in LaunchContext, it's per-device anyway
Signed-off-by: raver119 <raver119@gmail.com>
* some printout
Signed-off-by: raver119 <raver119@gmail.com>
* check for field instead
Signed-off-by: raver119 <raver119@gmail.com>
* pew-pew
Signed-off-by: raver119 <raver119@gmail.com>
* don't release ContextBuffers until device changed
Signed-off-by: raver119 <raver119@gmail.com>
* small tweak
Signed-off-by: raver119 <raver119@gmail.com>
* some logging in sgemm
Signed-off-by: raver119 <raver119@gmail.com>
* stream sync
Signed-off-by: raver119 <raver119@gmail.com>
* some more logging
Signed-off-by: raver119 <raver119@gmail.com>
* some more error checks
Signed-off-by: raver119 <raver119@gmail.com>
* one fancy test
Signed-off-by: raver119 <raver119@gmail.com>
* one fancy test
Signed-off-by: raver119 <raver119@gmail.com>
* minor AffinityManager fix
Signed-off-by: raver119 <raver119@gmail.com>
* cudaEvent error logging improvement
Signed-off-by: raver119 <raver119@gmail.com>
* ConstantHelper thread safety
Signed-off-by: raver119 <raver119@gmail.com>
* - minor corrections in ConstantTadHelper
Signed-off-by: Yurii <yurii@skymind.io>
* ConstantShapeHelper thread safety
Signed-off-by: raver119 <raver119@gmail.com>
* ConstantTadHelper.cu updated
Signed-off-by: raver119 <raver119@gmail.com>
* logging off
Signed-off-by: raver119 <raver119@gmail.com>
* logging off
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-03 22:00:38 +03:00
Robert Altena
c64b340975
javadoc ( #225 )
...
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-03 14:06:42 +10:00
Alex Black
364a6e1a2a
ELU DL4J fixes ( #224 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-03 13:35:02 +10:00
raver119
d3253aff3f
dedicated lock for getCudaCublasHandle
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-02 20:01:13 +03:00
raver119
2129d5bcac
cublasHandle sharing + lock
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-02 16:52:28 +03:00
raver119
18828f9725
cublasHandle sharing + lock
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-09-02 16:52:10 +03:00
AlexDBlack
7ded4416cb
Merge remote-tracking branch 'fork/master'
2019-09-02 18:52:12 +10:00
Alex Black
82c9dc5743
ELU fix ( #219 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-02 18:37:05 +10:00
Alex Black
65c9f2a888
ELU fix ( #217 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-09-02 17:42:12 +10:00
Ryan Nett
b3a134b608
New Nd4j backprop ops for activations ( #211 )
...
* new (for java at least) backprop ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* update activation functions
Signed-off-by: Ryan Nett <rnett@skymind.io>
* add differential functions for SameDiff
Signed-off-by: Ryan Nett <rnett@skymind.io>
* deprecate old ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* update correct old ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* update ops backprop to use new ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* misc updates for deprecated functions (mostly Nd4j.rand w/ vararg shape)
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove old imports
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-09-02 16:15:23 +10:00
Robert Altena
6d04d30c94
INDArray.java javadoc ( #215 )
...
* javadoc
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* javadoc
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-02 16:06:20 +10:00
Robert Altena
ef1de6a4aa
rcorbish #8617 ( #8188 )
...
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-09-01 07:17:36 +03:00
raver119
b71c993ded
[WIP] maxpool_bp cuda fix ( #212 )
...
* one test for alex
Signed-off-by: raver119 <raver119@gmail.com>
* fix
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of safety offset in cpp
Signed-off-by: raver119 <raver119@gmail.com>
* bfloat16
Signed-off-by: raver119 <raver119@gmail.com>
* minor test rearrangement to fastpath launch
Signed-off-by: raver119 <raver119@gmail.com>
* - atomicAdd/Mul/Div fix for float16/bfloat16 misalignment
- one special test for maxpoolbp java
- safety offset of 8 bytes is back to libnd4j legacy
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-31 20:57:05 +03:00
Alex Black
6efffb727f
Import fix ( #208 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-30 23:40:52 +10:00
Alex Black
a7dca9fc87
Add java op class for relu derivative, and use in Activation ReLU ( #207 )
...
* Add java op class for relu derivative, and use in Activation ReLU
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-30 23:36:00 +10:00
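For the ReLU derivative op mentioned above, a hand-rolled reference of what such a backprop op computes: dL/dx equals dL/dy where the forward input was positive, and 0 elsewhere. This is a sketch using generic INDArray math, not the new op class itself.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class ReluBackpropSketch {
    // Reference computation: the gradient passes through only where the forward input was > 0
    static INDArray reluBackprop(INDArray input, INDArray gradAtOutput) {
        INDArray mask = input.gt(0).castTo(gradAtOutput.dataType());
        return gradAtOutput.mul(mask);
    }

    public static void main(String[] args) {
        INDArray x = Nd4j.createFromArray(-1.0f, 0.5f, 2.0f);
        INDArray dLdy = Nd4j.createFromArray(0.1f, 0.2f, 0.3f);
        System.out.println(reluBackprop(x, dLdy)); // expected [0.0, 0.2, 0.3]
    }
}
```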
raver119
70a9ae5068
[WIP] few tweaks ( #206 )
...
* scatter empty check
Signed-off-by: raver119 <raver119@gmail.com>
* scatter empty test
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* two tweaks
Signed-off-by: raver119 <raver119@gmail.com>
* dup tweak
Signed-off-by: raver119 <raver119@gmail.com>
* - put empty check of indices array immediately prior to helper run
Signed-off-by: Yurii <yurii@skymind.io>
* minor tests fix
Signed-off-by: raver119 <raver119@gmail.com>
* minor tests fix
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-30 16:32:01 +03:00
Robert Altena
54e320a255
javadoc ( #201 )
...
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
2019-08-30 22:40:27 +10:00
Alex Black
62e96c9724
Guava Function -> nd4j-common Function ( #203 )
...
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-08-30 20:39:43 +10:00
raver119
1003428a18
[WIP] Int broadcastables ( #195 )
...
* Removed invalid resource and fixed tests
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* legacy scalar/pairwise/broadcast int ops
Signed-off-by: raver119 <raver119@gmail.com>
* NDArray int broadcastables
Signed-off-by: raver119 <raver119@gmail.com>
* few more bitwise tests
Signed-off-by: raver119 <raver119@gmail.com>
* java side update
Signed-off-by: raver119 <raver119@gmail.com>
* Argument type changed for shift ops
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
2019-08-30 10:12:40 +03:00
Alex Black
dcc2baa676
Version upgrades ( #199 )
...
* DataVec fixes for Jackson version upgrade
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J jackson updates + databind version 2.9.9.3
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Shade snakeyaml along with jackson
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Version fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Switch DataVec legacy JSON format handling to mixins
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Next set of fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup for legacy JSON mapping
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Upgrade commons compress to 1.18; small test fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* New Jackson backward compatibility for DL4J - Round 1
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* New Jackson backward compatibility for DL4J - Round 2
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More fixes, all but legacy custom passing
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Provide an upgrade path for custom layers for models in pre-1.0.0-beta JSON format
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Legacy deserialization cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small amount of polish - legacy JSON
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Upgrade guava version
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* IEvaluation legacy format deserialization fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Upgrade play version to 2.7.3
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Update nd4j-parameter-server-status to new Play API
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Update DL4J UI for new play version
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More play framework updates
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove Spark 1/2 adapter code from DataVec
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* datavec-spark dependency cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J spark updates, pt 1
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J spark updates, pt 2
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J spark updates, pt 3
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J spark updates, pt 4
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Test fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Another fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Breeze upgrade, dependency cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add Scala 2.12 version to pom.xml
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* change-scala-versions.sh - add scala 2.12, remove 2.10
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Move Spark version properties to parent pom (now that only one spark version is supported)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DataVec Play fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* datavec play dependency fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Clean up old spark/jackson stuff
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Cleanup jackson unused dependencies
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Dropping redundant dependency
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Removed scalaxy dependency
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Add shaded guava
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Ensure not possible to import pre-shaded classes, and remove direct guava dependencies in favor of shaded
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ND4J Shaded guava import fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DataVec and DL4J guava shading
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Arbiter, RL4J fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Build fixed
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Fix dependency
Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Fix bad merge
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Jackson shading fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Set play secret, datavec-spark-inference-server
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix for datavec-spark-inference-server
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Arbiter fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Arbiter fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small test fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-30 14:35:27 +10:00
Serhii Shepel
0463ee4eba
Fix backend dependencies for tests ( #189 )
2019-08-29 12:54:48 +09:00
Yurii Shyrma
70af8c2afc
Shyrma svd ( #191 )
...
* - add one additional test for svd
* - provide float argument in eye op to be the type of the output array
Signed-off-by: Yurii <yurii@skymind.io>
* - add cuda capability check to mmulHelper
Signed-off-by: Yurii <yurii@skymind.io>
* - make use of another method for device id evaluation
Signed-off-by: Yurii <yurii@skymind.io>
* Eye data type as T argument
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-28 18:27:08 +03:00
raver119
f4860574d7
[WIP] More fixes ( #190 )
...
* Refactored kernels for segment_max/min/sum ops.
* Refactored segment_prod kernels.
* DynamicPartition test
Signed-off-by: raver119 <raver119@gmail.com>
* Added linear test for dynamic_partition op.
* Refactored test with int datatype.
* some logging
Signed-off-by: raver119 <raver119@gmail.com>
* some logging
Signed-off-by: raver119 <raver119@gmail.com>
* some logging
Signed-off-by: raver119 <raver119@gmail.com>
* dynamicPartition fix
Signed-off-by: raver119 <raver119@gmail.com>
* get rid of some logging
Signed-off-by: raver119 <raver119@gmail.com>
* one more test for dynamic_stitch
Signed-off-by: raver119 <raver119@gmail.com>
* one more test for dynamic_stitch
Signed-off-by: raver119 <raver119@gmail.com>
* empty check for stitch
Signed-off-by: raver119 <raver119@gmail.com>
* minor print changes
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-28 15:38:57 +03:00
Ryan Nett
2a1431264f
Remove calculate output shape from java side ( #151 )
...
* remove some unneeded java-side output shape calculations
Signed-off-by: Ryan Nett <rnett@skymind.io>
* delete Broadcast
Signed-off-by: Ryan Nett <rnett@skymind.io>
* delete Linear and Module,
Signed-off-by: Ryan Nett <rnett@skymind.io>
* update Identity, HashCode, and NoOp
Signed-off-by: Ryan Nett <rnett@skymind.io>
* removed Cast java-side shape function, added tests and SDVariable.isEmpty
Signed-off-by: Ryan Nett <rnett@skymind.io>
* ignoring test w/ issues on master
Signed-off-by: Ryan Nett <rnett@skymind.io>
* noop needs more work, fixed BaseArithmeticBackprop and BaseDynamicTransform ops
merge in master for c++ build fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix EqualTo
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix other cond ops
Signed-off-by: Ryan Nett <rnett@skymind.io>
* "fake" ops calculateOutputShape() throws exception
Signed-off-by: Ryan Nett <rnett@skymind.io>
* use c++ shape calc for Linspace
Signed-off-by: Ryan Nett <rnett@skymind.io>
* fix exception message, move most to BaseCompatOp
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove SDVariable.isEmpty
Signed-off-by: Ryan Nett <rnett@skymind.io>
* remove commented out code
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-08-27 20:39:32 -07:00
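For the '"fake" ops calculateOutputShape() throws exception' bullet above, the pattern is roughly the one sketched below: ops whose output shapes must come from the C++ backend fail fast if Java-side shape calculation is ever requested. The class and method shape here are simplified assumptions, not the actual BaseCompatOp signature.

```java
import java.util.List;

// Simplified illustration of the fail-fast pattern for Java-side shape calculation.
abstract class CompatOpSketch {
    public List<long[]> calculateOutputShape() {
        throw new UnsupportedOperationException(getClass().getSimpleName()
                + ": output shapes are computed by the native (C++) backend, "
                + "not on the Java side");
    }
}
```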
Alex Black
b46f9827b8
Layer norm test updates ( #187 )
...
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-08-28 13:27:00 +10:00
Robert Altena
59a6e4e3ae
INDArray refactoring ( #170 )
...
* javadoc
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove javaTensorAlongDimension
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* javadoc
2019-08-28 12:03:23 +09:00
Ryan Nett
d31197db5f
Remove resolvePropertiesFromSameDiffBeforeExecution() ( #172 )
...
* remove unneeded resolveProperties methods
Signed-off-by: Ryan Nett <rnett@skymind.io>
* final fixes, make final to prevent more from being added
Signed-off-by: Ryan Nett <rnett@skymind.io>
* gather fix
Signed-off-by: Ryan Nett <rnett@skymind.io>
* deprecate DifferentialFunction resolveProps
Signed-off-by: Ryan Nett <rnett@skymind.io>
2019-08-28 11:02:41 +10:00
raver119
b472d7d8c8
[WIP] few more fixes ( #182 )
...
* one noop test
Signed-off-by: raver119 <raver119@gmail.com>
* skip input validation for no-input ops
Signed-off-by: raver119 <raver119@gmail.com>
* - one more noop empty test
- one more validation before sync
Signed-off-by: raver119 <raver119@gmail.com>
* typo
Signed-off-by: raver119 <raver119@gmail.com>
* one more validation fix
Signed-off-by: raver119 <raver119@gmail.com>
* CUDA empty reductions java side
Signed-off-by: raver119 <raver119@gmail.com>
* one svd test
Signed-off-by: raver119 <raver119@gmail.com>
* Corrected segment_mean helpers and added another test.
* Refactored segment_mean kernels to avoid race condition.
2019-08-27 21:00:38 +03:00
Alex Black
9d325ad070
Small optimization to Nd4j.readNumpy ( #183 )
...
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-08-27 23:27:41 +10:00
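A usage reference for Nd4j.readNumpy, touched by the optimization above: it parses a delimited text file of numbers into an INDArray. The file path below is a hypothetical placeholder.

```java
import java.io.IOException;

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class ReadNumpyExample {
    public static void main(String[] args) throws IOException {
        // "data/matrix.txt" is a placeholder path: comma-delimited rows of numbers
        INDArray arr = Nd4j.readNumpy("data/matrix.txt", ",");
        System.out.println(arr.shapeInfoToString());
    }
}
```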
Alex Black
dff599aa8f
Test fix ( #179 )
...
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-08-27 20:43:36 +10:00
raver119
a49f7c908b
[WIP] More fixes ( #178 )
...
* skip string arrays for device validation
Signed-off-by: raver119 <raver119@gmail.com>
* histogram_fixed_width now really supports indexing types
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 13:21:01 +03:00
Alex Black
fd22a8ecc7
Small build fix, after last PR ( #177 )
...
Signed-off-by: Alex Black <blacka101@gmail.com>
2019-08-27 19:46:26 +10:00
Alex Black
5cfbeb64ac
Another small fix ( #175 )
...
* Layer norm 4d case fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-27 19:10:31 +10:00
Alex Black
dce4751fc1
Layer norm 4d case fixes ( #174 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-27 18:34:53 +10:00
raver119
05d45ec050
IndexReduce along dim CUDA fix
...
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 11:31:59 +03:00
raver119
df84bc7255
[WIP] More tweaks ( #173 )
...
* CUDA empty reduction
Signed-off-by: raver119 <raver119@gmail.com>
* - listdiff synchronization fix for CUDA
- listdiff test
Signed-off-by: raver119 <raver119@gmail.com>
* - IndexReduce ops now allow INDEXING_TYPES output
- topK op accepts only INDEXING_TYPES as output
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-27 10:37:10 +03:00
Alex Black
e92f7218f3
Add new tests ( #171 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-27 12:15:56 +10:00
raver119
25e5c23eae
[WIP] Error handling ( #169 )
...
* CUDA reverse rewrite + couple of tests
Signed-off-by: raver119 <raver119@gmail.com>
* don't throw exception on invalid pointer
Signed-off-by: raver119 <raver119@gmail.com>
* data types validation for fastpath exec mode + 2 tests
Signed-off-by: raver119 <raver119@gmail.com>
* data types validation for fastpath exec mode + 2 tests
Signed-off-by: raver119 <raver119@gmail.com>
* ismax allowed dtypes tweak
Signed-off-by: raver119 <raver119@gmail.com>
* lastErrorCode + lastErrorMessage for native exceptions handling
Signed-off-by: raver119 <raver119@gmail.com>
* exportable ErrorReference
Signed-off-by: raver119 <raver119@gmail.com>
* check error codes in java
Signed-off-by: raver119 <raver119@gmail.com>
* - consume lastErrorCode
- fast_in dtype validation fix
Signed-off-by: raver119 <raver119@gmail.com>
* - sg/cb allowed output type change
- minor logging fix for data type validation
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-26 19:57:51 +03:00
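The lastErrorCode/lastErrorMessage bullets above describe propagating native-side failures to Java. The sketch below shows that general pattern with made-up interface names; it is not the actual NativeOps bindings.

```java
// Illustrative names only - not the actual NativeOps API.
interface NativeBridgeSketch {
    void execOp();
    int lastErrorCode();
    String lastErrorMessage();
}

class ErrorCheckingExecutioner {
    private final NativeBridgeSketch ops;

    ErrorCheckingExecutioner(NativeBridgeSketch ops) {
        this.ops = ops;
    }

    void exec() {
        ops.execOp();
        // After every native call, consume the last error code and surface failures as exceptions
        if (ops.lastErrorCode() != 0)
            throw new RuntimeException("Native op failed: " + ops.lastErrorMessage());
    }
}
```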
raver119
bb5fc36e5e
[WIP] ops fixes ( #168 )
...
* - correct layer_norm
Signed-off-by: Yurii <yurii@skymind.io>
* - further fix of layer norm
Signed-off-by: Yurii <yurii@skymind.io>
* - correct scatter_upd op
Signed-off-by: Yurii <yurii@skymind.io>
* - correct cuda kernel for histogram_fixed_width op
Signed-off-by: Yurii <yurii@skymind.io>
* - delete comments
Signed-off-by: Yurii <yurii@skymind.io>
* enabled one ignored test
Signed-off-by: raver119 <raver119@gmail.com>
2019-08-26 19:37:05 +03:00
Alex Black
b417ca21bf
Fix for concat op shape function (empty shapes) ( #167 )
...
Signed-off-by: AlexDBlack <blacka101@gmail.com>
2019-08-26 23:10:28 +10:00