* Remove old nd4j-jackson dependencies
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix use of old/deprecated JSON serializer
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix deserialization
Signed-off-by: Alex Black <blacka101@gmail.com>
* Delete test using deleted ser/de classes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Delete another copy of old test
Signed-off-by: Alex Black <blacka101@gmail.com>
* wip
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to assign operation.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* fix Imax, IMin.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* concat.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* dynamicPartition
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* new ops up to gte.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* updated review items.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to matchCondition.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* up to OneHot.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip. up to permute.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip. up to rank.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip. up to scatterMul.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* resolving code review issues.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip. includes UnsortedSegment ops.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip. up to stridedSlice.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* fix stridedSlice.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* first pass of SDBaseops.kt complete.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* fix review items.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* put branch in compilable state.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* add NDBaseTest. fix dynamicPartition signature. failed fix of assign.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* make tests public.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* adds tests up to invertedPermutation.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* fix ScalarEquals, Assign.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* updates NDBaseTest.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* updates 'check' comments based on test pass/fail.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* fix scalar ops. Update tests.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* dev-tools review items. wip.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* dev-tools code review items.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* complete review items.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Comment for logged issue; fix test case
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* More fixes
* wip
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* undo changes to Nd4jCpu.java
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* update tests.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Fixes and regenerate
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small test fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* small fixes to tests.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Cleanup
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small CUDAExecutioner fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small CudaExecutioner fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Another small CudaExecutioner fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Another small CudaExecutioner fix
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Robert Altena <Rob@Ra-ai.com>
* Input format extended
* Deleted redundant code
* Added weights format to conv2d config
* Refactoring
* dl4j base test functionality
* Different tests base class per module
* Check base class for dl4j-graph subproject tests
* Check if test classes extend BaseDL4JTest
* Use nd4j-common-tests as transient dependency
* Enums and tests added
* Added codegenerated methods
* Use namespace methods
* Replace DifferentialFunctionFactory with codegenerated classes
* Fixed linspace
* Namespaces regenerated
* Namespaces used instead of factory
* Regenerated base classes
* Input format extended
* Added weights format to conv2d config
* Refactoring
* dl4j base test functionality
* Different tests base class per module
* Check base class for dl4j-graph subproject tests
* Check if test classes extend BaseDL4JTest
* Use nd4j-common-tests as transient dependency
* Enums and tests added
* Added codegenerated methods
* Use namespace methods
* Replace DifferentialFunctionFactory with codegenerated classes
* Fixed linspace
* Namespaces regenerated
* Regenerated base classes
* Regenerated namespaces
* Generate nd4j namespaces
* INDArrays accepting constructors
* Generated some ops
* Some fixes
* SameDiff ops regenerated
* Regenerated nd4j ops
* externalErrors moved
* Compilation fixes
* SquaredDifference - strict number of args
* Deprecated code cleanup. Proper base class for tests.
* Extend test classes with BaseND4JTest
* Extend test classes with BaseDL4JTest
* Legacy code
* DL4J cleanup
* Exclude test utils from base class check
* Tests fixed
* Arbiter tests fix
* Test dependency scope fix + pom.xml formatting
Signed-off-by: Alex Black <blacka101@gmail.com>
* Significant number of fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Another round of fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Another round of fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Few additional fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* DataVec missing test scope dependencies
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* - 1D indexing fix
- couple of new tests for 1D indexing
Signed-off-by: raver119 <raver119@gmail.com>
* percentile fix + test
Signed-off-by: raver119 <raver119@gmail.com>
* wrong signature used in test
Signed-off-by: raver119 <raver119@gmail.com>
* libnd4j added optional alpha and beta support to matmul
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
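For reference, the optional alpha/beta follow the standard BLAS-style gemm convention; this is the textbook formula, not the exact libnd4j argument layout:

$$
C \leftarrow \alpha \,\mathrm{op}(A)\,\mathrm{op}(B) + \beta\, C
$$

where op(X) is X or its transpose depending on the transpose flags, with alpha defaulting to 1 and beta to 0 (which reduces to a plain matmul).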
* libnd4j typos fixes
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j add optional alpha and beta to matmul_bp
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j one more typo fix
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j added optional alpha and beta to mkl implementation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* MatMul alpha/beta on java side
Signed-off-by: raver119 <raver119@gmail.com>
* alpha/beta fix in libnd4j
Signed-off-by: raver119 <raver119@gmail.com>
* alpha/beta fix in matmul_bp
Signed-off-by: raver119 <raver119@gmail.com>
* restored view validation
Signed-off-by: raver119 <raver119@gmail.com>
* gemv/gemm now use MatMul op
Signed-off-by: raver119 <raver119@gmail.com>
* few tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* additional INDArray.mmul signature
Signed-off-by: raver119 <raver119@gmail.com>
* make C order default for INDArray.mmul, unless both A/B have F order
Signed-off-by: raver119 <raver119@gmail.com>
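A minimal sketch of the ordering behaviour described above, using standard Nd4j factory methods; the variable names are illustrative only:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class MmulOrderSketch {
    public static void main(String[] args) {
        // Default-created arrays are 'c' ordered, so the mmul result should be 'c' ordered too
        INDArray a = Nd4j.rand(3, 4);
        INDArray b = Nd4j.rand(4, 5);
        INDArray c = a.mmul(b);
        System.out.println("c-order inputs -> result order: " + c.ordering());

        // Per the change above, only when both inputs are 'f' ordered may the result keep 'f' order
        INDArray af = Nd4j.create(new int[]{3, 4}, 'f');
        INDArray bf = Nd4j.create(new int[]{4, 5}, 'f');
        System.out.println("f-order inputs -> result order: " + af.mmul(bf).ordering());
    }
}
```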
* Nd4j.gemm validation fix
Signed-off-by: raver119 <raver119@gmail.com>
* disable mkldnn matmul for xxf with beta != 0 case
Signed-off-by: raver119 <raver119@gmail.com>
* SimpleRnn workspace fix + timeouts
Signed-off-by: Alex Black <blacka101@gmail.com>
* two more tests + minor fix in matmul platform check
Signed-off-by: raver119 <raver119@gmail.com>
* Flaky test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* propagate testresources profile
Signed-off-by: raver119 <raver119@gmail.com>
* Resources fix + flaky test fix
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Oleg <oleg.semeniv@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* init in this branch
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* LeNet MNIST workflow
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* small fix for calculations
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* for Alex to check placeholder null pointer issue
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* CNN3D workflow
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* state for launching on dxg to regenerate dl4j examples
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* SD RNN test case workflow
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* small fixes
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* checkpoint at lstmBlock: Input array 1 (x) rank must be got input with rank 2 issue
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* Fix LSTMLayer inputs order
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* lstm mismatch with c++ op issue
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* LSTMLayer config draft
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* LSTMLayer config draft v2
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
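For context, the standard LSTM cell equations that the LSTMLayer config ultimately parameterizes; this is the textbook formulation, not the exact lstmLayer input/argument order:

$$
i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i),\quad
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f),\quad
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)
$$
$$
\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c),\quad
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t,\quad
h_t = o_t \odot \tanh(c_t)
$$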
* not sure I had to do this
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* NDRNN generated by codegen
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* LSTMLayerTestCases draft
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* minor fixes again
* added LSTMLayer testcases to nd4j-tests + set Preconditions in LSTMLayer constructors
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* added lost SDCNNtestcases
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* overrode getNumOutputs from DynamicCustomOp in LSTMLayer and reorganized LSTMLayerOutputs according to the cpp op
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* finished with LSTMLayerOutputs
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* Fix MKLDNN platform checks (i.e., when MKLDNN can be used vs. not)
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix LSTMLayerWeights input order
Signed-off-by: Alex Black <blacka101@gmail.com>
* More fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* minor fixes
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* fixed LSTMLayer testcases
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* finished SameDiffRNNTestCase
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* finished all testcases + minor fixes
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* Multiple generation-related fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix multiple issues
Signed-off-by: Alex Black <blacka101@gmail.com>
* More fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* LSTM fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Regenerate ND4J namespaces and fix multiple issues
Signed-off-by: Alex Black <blacka101@gmail.com>
* changed SameDiffRNNTestCase
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
* Small fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* added Nd4j.getRandom().setSeed(12345) where needed
Signed-off-by: Andrii Tuzhykov <andrewtuzhykov@gmail.com>
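A minimal sketch of the seeding pattern referenced above, so randomly generated test data is reproducible across runs; the surrounding test body is illustrative only:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class SeedSketch {
    public static void main(String[] args) {
        Nd4j.getRandom().setSeed(12345);   // fix the RNG seed before generating test data
        INDArray in = Nd4j.rand(2, 3);     // deterministic values given the seed above
        System.out.println(in);
    }
}
```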
* #8828 Fix ND4J profiler NaN/Inf checks when using OpContext
Signed-off-by: Alex Black <blacka101@gmail.com>
* Tweak to weight init for SameDiff CNN test case
Signed-off-by: Alex Black <blacka101@gmail.com>
* Tweaks for test cases
Signed-off-by: Alex Black <blacka101@gmail.com>
* Ignore failing tests until fixed
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* - correct reshape op for empty shape in case of -1 at the end
Signed-off-by: Yurii <iuriish@yahoo.com>
* Fix test + new reshape op constructor
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* Cholesky fixed
* Constructors added
* MatMul wrapper
* Constructor added
* Missing wrappers added
* Generate Linalg namespace added
* Output data types
* Unit tests
* Added mmul
* Code generation
* Code generated
* Build fixed
* Fixing signatures
* Tests fixed
* Tests fixed
* Added enum
* Fix tests
* Some fixes
* Eye test fixed
* SameDiff: small fix for renameVariable - also replace variable name in lossVariable list if necessary
Signed-off-by: Alex Black <blacka101@gmail.com>
* Some fixes
* Tests fixed
* Revert wrong fix
* Some fixes
* Some fixes
* Extending base test class
* Added pad
* Fixed for generated signatures
* Fixes due to nd4j codegen
* Backwards compatibility fixes
* Fixed errors in tests, reverted wrong changes
* Test fixed
* Added missing operations used for nd4s operators
* Compilation fixed
* Added meshgrid
* Fixed constructors
* fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix bad commit (incorrectly reverted change from master)
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fixed test
Co-authored-by: Alex Black <blacka101@gmail.com>
* #8777 MultiLayerNetwork.evaluate(MultiDataSetIterator) overload
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8768 SameDiff.equals
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8750 shade freemarker library and switch to it in DL4J UI
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8704 DL4J UI redirect
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8776 RecordReaderDataSetIterator builder collectMetaData fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8718 Fix DL4J doEvaluation metadata
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8715 ArchiveUtils - Add option to not log every extracted file
Signed-off-by: Alex Black <blacka101@gmail.com>
* No exception for evaluations that don't support metadata
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8765 CompGraph+MDS fix for SharedTrainingMaster
Signed-off-by: Alex Black <blacka101@gmail.com>
* small fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* Timeout
Signed-off-by: Alex Black <blacka101@gmail.com>
* Ignore
Signed-off-by: Alex Black <blacka101@gmail.com>
* Revert freemarker shading
Signed-off-by: Alex Black <blacka101@gmail.com>
* Ignore
Signed-off-by: Alex Black <blacka101@gmail.com>
* tf op initial
* ..
* protobuf parsing working
* model build working
* test passing
* headers
* conffix
* service loader + tests
* revert cuda version
* msg
* override
* refacc
* pom
* rem bad import
* dtype fix + const cast caching
* rem unnecessary fields
* rem println
* rem dep
* refacc
* rem redundant arg
* Ignore TFOpLayer in DTypeTests
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* libnd4j raw implementation of sgd updater
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j some corrections and simple test added
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j some corrections after discussion
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j integrate applyScalar
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j raw implementation of rmsPropUpdater on cpu
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
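For reference, the standard RMSProp update rule these helpers implement (usual textbook symbols, not the libnd4j argument names; some variants place epsilon outside the square root):

$$
s_t = \rho\, s_{t-1} + (1-\rho)\, g_t^{2}, \qquad
\theta_t = \theta_{t-1} - \frac{\eta}{\sqrt{s_t + \epsilon}}\; g_t
$$

with gradient g, decay rho, learning rate eta, and a small epsilon for numerical stability.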
* libnd4j fix operations declaration
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j rmsPropUpdater added, test cases for sgd, etc
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j fixed several typos
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j some fixes and improvements for rmsPropUpdater based on Java tests
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j fixed cuda implementation, updated tests and corrected behavior according to java tests
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j adaGrad updater added
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j one minor fix for ada grad
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j several more fixes for ada_grad
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j nesterovs updater added
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j fixed nesterovs updater behavior, fixed several typos and renamed file
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j one minor typo
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j ada max updater added
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j fixed several typos in adaMax updater
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j fixed several typos in adaMaxUpdater
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j several fixes for adaMax, added Adam Updater
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j adaDeltaUpdater added, minor fixes for adamUpdater
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j several fixes for adaDeltaUpdater
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j nadamUpdater added
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j one more correction for nadam updater
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j several fixes for nadam updater and added amsGradUpdater
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j several typos fixed in amsGradUpdater
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j some corrections and added f order support for rmsProp updater
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j added support of f order for all updaters and modified tests for in-place testing
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j fixed issues for updaters when not-in-place mode is used, added tests for f order
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j added input shape checks
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j some corrections for different cases handling
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j some code clean up and optimize per request
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j updaters refactoring after review
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* SgdUpdater wrapper
Signed-off-by: raver119 <raver119@gmail.com>
* first test
Signed-off-by: raver119 <raver119@gmail.com>
* RmsPropUpdater added
Signed-off-by: raver119 <raver119@gmail.com>
* NadamUpdater + NesterovsUpdater
Signed-off-by: raver119 <raver119@gmail.com>
* AmsGradUpdater
Signed-off-by: raver119 <raver119@gmail.com>
* AdamUpdater added
Signed-off-by: raver119 <raver119@gmail.com>
* AdaGradUpdater + AdaDeltaUpdater + AdaMaxUpdater
Signed-off-by: raver119 <raver119@gmail.com>
* AdaGradUpdater test added
Signed-off-by: raver119 <raver119@gmail.com>
* libnd4j removed input parameter parsing through NDArray, split implementation of helpers into separate files, some renames, etc
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j next step to split operations implementation into separate files
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j merge master and minor corrections
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j revert some changes of split implementation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* libnd4j forgot to add header file
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* public default constructors
Signed-off-by: raver119 <raver119@gmail.com>
* ImportClassMapping updated
Signed-off-by: raver119 <raver119@gmail.com>
Co-authored-by: raver119 <raver119@gmail.com>
* #8682 Don't log openmp BLAS threads for CUDA
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8654 Add SameDiff multi-threaded tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Switching to op context for SameDiff exec
Signed-off-by: Alex Black <blacka101@gmail.com>
* Next steps
Signed-off-by: Alex Black <blacka101@gmail.com>
* Most back to passing
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Better tests, test refactoring
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small tweak
Signed-off-by: Alex Black <blacka101@gmail.com>
* Code duplication reduction
Signed-off-by: Alex Black <blacka101@gmail.com>
* More code deduplication
Signed-off-by: Alex Black <blacka101@gmail.com>
* CUDA fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* More CUDA fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* More fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* ND4S small fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix cmake detection in msys
* Fix toolchain file on windows
* Make android 64 bit work
* Fix libnd4j build script on msys
* Update build script for windows/linux
* Encoding issue for ci
* Update pom.xml
* Update pom.xml
* Update pom.xml
* Remove mingw
* Ensure android x86 builds are in line with arm builds
* Update toolchains and env variables for x86
* Move profile for build program up to parent
* Fix blas vendor and add comment
* Update cuda presets version
* Set default value and move properties back to child pom
* Change program from hard coded to use the script as the program
* Update pom.xml
* Update pom.xml
* Static lib fix
* Update static lib output
* Get rid of old comments
* Update static for building
* Adding more datatypes support in datavec-python
* Using numpy C API for creating numpy arrays
* Adding parameterized tests
* Adding support for BFLOAT16 (by converting it to FLOAT)
* Cleanup
* Using casting instead of creating an array
* Giving out a warning while casting array from BFLOAT16 to FLOAT
* Add syncToPrimary and syncToSpecial methods to BaseDataBuffer
Signed-off-by: Alex Black <blacka101@gmail.com>
* Python exec: sync to host before passing pointers
Signed-off-by: Alex Black <blacka101@gmail.com>
* Added copyright header
* use np api (#267)
* python exec / numpy - check object type before cast (#268)
* use np api
* verify object before cast
* fix cong
* cuda fix
* inplace test + tiny fix
* more test
* fix double alloc
* rem tags
* fix cuda check
* Fix implicit CUDA dependency in datavec-python tests; remove new method, add test
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Fariz Rahman <farizrahman4u@gmail.com>
* created NDImage.java and fixed constructor in AdjustContrast.java
* created NDImage.java and fixed constructor in AdjustContrast.java
* created NDImage.java and fixed constructor in AdjustContrast.java v2
* regenerated NDImage from cleaned Image.kt, also cleaned AdjustContrast.java
* draft of NDCNN
* draft of NDCNN
* started NDRNN
* started NDRNN
* looks like the namespace is finished
* Regenerate namespaces
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add ND4J namespace methods for new namespaces
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fixes, cleanup
Signed-off-by: Alex Black <blacka101@gmail.com>
* More fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Andrii Tuzhykov <andrew@unrealists.com>
Co-authored-by: Andrii Tuzhykov <andrew@konduit.ai>
Co-authored-by: AlexDBlack <blacka101@gmail.com>
* codegen for SDLoss. WIP.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* first pass of SDLoss.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* wip. First cut of new op constructors. UNTESTED, NOT COMPILED YET.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* updated op signatures.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* add NDLoss tests.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* fix test.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* adds loss default params. factory.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Regenerate NDLoss
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* adds tests for null weights.
Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Last few tweaks
Signed-off-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Robert Altena <Rob@Ra-ai.com>
* Add check to ensure ALL tests extend BaseND4JTest for proper timeouts + logging
Signed-off-by: Alex Black <blacka101@gmail.com>
* Add 'must extend BaseDL4JTest' check for deeplearning4j-core
Signed-off-by: Alex Black <blacka101@gmail.com>
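A hedged sketch of what the base-class check enforces; the test class name is hypothetical, and the exact package of the base class should be taken from nd4j-common-tests:

```java
import org.junit.Test;
// Assumed location of the shared base class; adjust if it lives in a different package.
import org.nd4j.BaseND4JTest;

import static org.junit.Assert.assertEquals;

// Every ND4J test is expected to extend BaseND4JTest (BaseDL4JTest for deeplearning4j-core),
// which provides the shared timeouts and logging the check above is guarding.
public class MyNewOpTest extends BaseND4JTest {
    @Test
    public void testSomething() {
        assertEquals(4, 2 + 2);
    }
}
```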
* Flush logging on workspace exit during tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8565 Normalizer toString/hashcode
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8731 ImagePreProcessingScaler labels/segmentation fix
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8691 Fix SameDiffLayer/Vertex finetuning and parameter setting support
Signed-off-by: Alex Black <blacka101@gmail.com>
* #8663 DL4J embedding layer weight init - don't depend on vocab size
Signed-off-by: Alex Black <blacka101@gmail.com>
* EmbeddingLayer test tweak
Signed-off-by: Alex Black <blacka101@gmail.com>
* Gradients tests added
* Fix for Standard deviation serialization + test
Signed-off-by: Alex Black <blacka101@gmail.com>
* More fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Test fixed
* Spark config driver host config for CI
Signed-off-by: Alex Black <blacka101@gmail.com>
* Op validation timeout increase
Signed-off-by: Alex Black <blacka101@gmail.com>
* Gradient check - fix for low probability test failure due to randomly all 0s mask
Signed-off-by: AlexDBlack <blacka101@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
* special workaround methods for DataBuffer.write
Signed-off-by: raver119 <raver119@gmail.com>
* one test removed
Signed-off-by: raver119 <raver119@gmail.com>
* more of unsynced
Signed-off-by: raver119 <raver119@gmail.com>
* missing asLong for BaseCudaDataBuffer
Signed-off-by: raver119 <raver119@gmail.com>
* linear equations systems solve op. Initial commit.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed compiling issues.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Linear equations systems solve. The next stage commit.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added test for linear equations systems solve operation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added additional test and fixed lower matrix retrieval.
* Implementation of solve for systems of linear equations.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored permutation generation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added restore for permutations batched with cuda helper for solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Finished cuda implementation for solve op helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored cpu helpers for solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fix gtest output on Windows
* Fixed issue with permutation matrix for cuda implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed issue with permutation matrix for cpu implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Eliminated wasted comments.
Signed-off-by: shugeo <sgazeos@gmail.com>
* LinearSolve added
* Mapping added
* Javadoc added
* Refactored implementation of triangular_solve helpers and tests for solve matrix equations generally.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added a test for solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Solve test added
* Fix for TF import
Co-authored-by: Serhii Shepel <9946053+sshepel@users.noreply.github.com>
Co-authored-by: raver119 <raver119@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* - range op now accepts dargs
- dargs now can be in signature
Signed-off-by: raver119 <raver119@gmail.com>
* range dtype java side
Signed-off-by: raver119 <raver119@gmail.com>
* linspace fix
Signed-off-by: raver119 <raver119@gmail.com>
* lin_space fix for scalar outputs
Signed-off-by: raver119 <raver119@gmail.com>
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* - one more test for OneHot with dtype
- one more signature in Nd4j
Signed-off-by: raver119 <raver119@gmail.com>
* ones_as/zeros_as now accept dtype
Signed-off-by: raver119 <raver119@gmail.com>
* one more test
Signed-off-by: raver119 <raver119@gmail.com>
* - more updates for configurable data types
- ones_as/zeros_as java side + tests
Signed-off-by: raver119 <raver119@gmail.com>
* few c++ tests fixed
Signed-off-by: raver119 <raver119@gmail.com>
* few more changes around DArgs
Signed-off-by: raver119 <raver119@gmail.com>
* Cleanup modules
* Moving subprojects to nd4j-api
* Project cleanup
* Dropped AWS sub-project
* dl4j-util moved to core
* dl4j-perf moved to core
* Tests coverage
* Revert "Moving subprojects to nd4j-api"
This reverts commit bc6eb573c6b60c407ade47172c5d204725077e6b.
* Moved nd4j-buffer and nd4j-context to nd4j-api
* Rolled back change
* Revert "Project cleanup"
This reverts commit 64ac7f369b2d968f7be437718034f093fc886ffc.
* Datavec cleaned up
* Revert "Moved nd4j-buffer and nd4j-context to nd4j-api"
This reverts commit 75f4e8da80d2551e44e1251dd6c5923289fff8e1.
# Conflicts:
# nd4j/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/autodiff/opvalidation/ReductionBpOpValidation.java
* Resolve conflict
* Compilation fixed.
* nd4j-context and nd4j-buffer moved to nd4j-api
* Fixed TF mapping for mmul
* Fix for dl4j-cuda tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Move last few tests from deeplearning4j-nn to -core
Signed-off-by: Alex Black <blacka101@gmail.com>
* Remove incorrect TF import mapping for TensorMmul op
Signed-off-by: Alex Black <blacka101@gmail.com>
* Cleaned TF mapping
* Fix path for test results on windows
* Remove old dependency
Signed-off-by: Alex Black <blacka101@gmail.com>
* One more attempt to fix path for test results on windows
* fixup! One more attempt to fix path for test results on windows
* fixup! One more attempt to fix path for test results on windows
Co-authored-by: Alex Black <blacka101@gmail.com>
Co-authored-by: Serhii Shepel <9946053+sshepel@users.noreply.github.com>
Co-authored-by: raver119 <raver119@gmail.com>
* Add maven profile + base tests methods for integration tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Switch from system property to environment variable; seems more reliable in intellij
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add nd4j-common-tests module, and common base test; cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Ensure all ND4J tests extend BaseND4JTest
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Test spam reduction, import fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Add test logging to nd4j-aeron
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix unintended change
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Reduce Spark test log spam
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More test spam cleanup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Significantly speed up TSNE tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* W2V iterator test unit/integration split
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More NLP test speedups
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Avoid debug/verbose mode leaking between tests
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* test tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Arbiter extends base DL4J test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Arbiter test speedup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* nlp-uima test speedup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More test speedups
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix ND4J base test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Few small ND4J test speed improvements
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J tests speedup
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More tweaks
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Even more test speedups
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* More tweaks
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Various test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* More test fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Add ability to specify number of threads for C++ ops in BaseDL4JTest and BaseND4JTest
Signed-off-by: Alex Black <blacka101@gmail.com>
* nd4j-aeron test profile fix for CUDA
Signed-off-by: Alex Black <blacka101@gmail.com>
* Added qr op implementation. Initial version.
* Fixed doc for qr op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Implementation of QR decomposition. CPU platform version.
* Added a pair of tests for qr op testing.
Signed-off-by: shugeo <sgazeos@gmail.com>
* QR implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected norm usage.
* Properly calculated intermediate results with QR decomposition.
* Another step to implement QR algorithm by householder.
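For reference, the Householder reflector used in this style of QR decomposition (standard formulation, not the exact helper signatures):

$$
H = I - 2\,\frac{v v^{T}}{v^{T} v}
$$

Applying such reflectors column by column zeroes the entries below the diagonal, giving A = QR with Q orthogonal and R upper triangular.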
* Cpu implementation for QR decomposition. The first working edition.
* Corrected test to QR decomposition.
* Added tad multithreading with QR implementation.
* Finished cpu implementation for QR decomposition helpers.
* Refactored tests and improved multithreading.
* Refactored QR cpu implementation and update cuda implementation helpers.
* Cuda QR helper implementation. The first working edition.
* Eliminated wasted prints.
* Restore multithreading with cuda implementation.
* Ops names corrected
* Refactored qr op helpers to optimize.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Eliminated wasted manual ticking.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored memory allocation to avoid wasted memory usage.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored matrixMinor method both for cuda and cpu platforms.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored method of vmul to use raw buffers instead of type conversion.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored temporary array of matrices.
Signed-off-by: shugeo <sgazeos@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
Co-authored-by: raver119 <raver119@gmail.com>
* Added implementation of the triangular_solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
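For reference, the forward-substitution recurrence a lower-triangular solve L x = b uses (standard math, not the exact helper code); upper-triangular systems use the analogous backward substitution:

$$
x_i = \frac{1}{L_{ii}}\Big(b_i - \sum_{j<i} L_{ij}\, x_j\Big)
$$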
* Fixed compilation issues.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added verification of input data and helper facilities for triangular_solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added cpu implementation for triangular_solve helpers.
* Added tests and implementation for upper triangular equations.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added a pair of cases to tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added multithreading with cpu helpers for triangular_solve op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added cuda implementation of triangular_solve op helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Finished cuda implementation of triangular_solve helpers and tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed copyright marks.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected grammar errors with doc and error messages.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored matrices processing with triangular_solve cuda helper implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added triangular_solve wrapper
* Fixed mapping
* Added processing for adjoint with cpu helpers of triangular_solve op implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added implementation for adjoint routine with cuda platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added multithreading with adjoint routine for cpu platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Added implementation for resize_area op. Initial commit.
* Added implementation of resize_area op. Initial revision.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected resizeArea functor call.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Implementation of resize_area. Cpu platform helpers.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Implementation for resize_area helpers. The first part revision.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added a set of tests for resize_area op.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Cuda implementation for resize_area. Initial approach.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Adding multithreading for resize_area algorithm.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Cuda implementation of resize_area helpers. Shared memory approach.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored resizeAreaKernel with cuda implementation.
* Eliminated compilation errors.
* ResizeArea helpers for cuda platform. The first working revision.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added test for batched resize_area op testing.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Implementation of resize_area for cuda platform and tests.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed multithreading with resize_area op helper.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected copyright marks with sources.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected copyright mark for resize_area op implementation.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected copyright mark for parity ops header.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected typo in strings and so on with image resize ops.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored resize_area helpers and multithreading.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added ResizeArea wrapper
* Added test with align_corners and fixed shape processing with only int args given for output size.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Added test
* TF mapping for ResizeArea
* Fixed implementation issues with resize_area op for both platforms.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored image resizer struct to use flexible types for ints and floats.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Improved multithreading with resizeAreaKernel launch.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Use asynchronous memory copying with cuda platform image resize allocations.
Signed-off-by: shugeo <sgazeos@gmail.com>
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 first step of Pow_bp operation implementation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
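For reference, the derivatives pow_bp has to produce (standard calculus; broadcasting is handled by summing gradients over the broadcast dimensions, and the ln term assumes x > 0, with negative x needing the special handling mentioned in later commits):

$$
z = x^{y}: \qquad
\frac{\partial z}{\partial x} = y\, x^{y-1}, \qquad
\frac{\partial z}{\partial y} = x^{y} \ln x
$$

so dL/dx = dL/dz * y * x^(y-1) and dL/dy = dL/dz * x^y * ln(x).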
* Libnd4j: Add broadcastable elementwise power derivative #7461 some corrections of calculation steps
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 some bug fixes, made the PowDerivative op broadcastable, added raw tests for the op; needs refactoring to use broadcast ops
* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed several bugs, added broadcast support and tests; need to fix scalar+array and array+scalar
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed bugs for scalar inputs, fixed multinomial tests, added tests
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 fixed bugs for different shapes support, tests updated
* Libnd4j: Add broadcastable elementwise power derivative #7461 applied all possible variants via tiled arrays, added broadcast support for Pow and PowDerivative ops, covered by tests; the tiled implementation has to be replaced by applyTrueBroadcast before review
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 replaced tile by broadcast implementation, fixed issue with negative x input, corrected tests, need additional testing
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 added and corrected test cases, corrected implementation; needs review
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 code clean up
* Libnd4j: Add broadcastable elementwise power derivative #7461 code clean up, removed some tests, add tests with scalar
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 code improvement and clean up, split tests
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative #7461 some code clean up
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* Libnd4j: Add broadcastable elementwise power derivative replaced __isnanf with an internal implementation
Signed-off-by: Oleg <oleg.semeniv@gmail.com>
* pow_bp wrapper
* Fixed PowBp wrapper
* Tests added
* Test fixed
* Fix return type
* Disable powBp usage
* Pow backprop changed
Co-authored-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>
* SameDiff exec: Fix for switch op when predicate is constant, and op is inside loop
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Update ignores for failing zoo models
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8555 SameDiff profiler analysis improvements
Signed-off-by: Alex Black <blacka101@gmail.com>
* Fix TF sub-op aggregation
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small filtering tweak
Signed-off-by: Alex Black <blacka101@gmail.com>
* Copyright headers
Signed-off-by: Alex Black <blacka101@gmail.com>
* Profiler
Signed-off-by: Alex Black <blacka101@gmail.com>
* Next steps, polishing, and loading SD/TF format JSON
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Next steps
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Profile comparison method
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Make profiling result writing async to reduce main thread overhead
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Profiling polishing
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Profile analyzer fixes
Signed-off-by: Alex Black <blacka101@gmail.com>
* Polish
Signed-off-by: Alex Black <blacka101@gmail.com>
* Cleanup
Signed-off-by: Alex Black <blacka101@gmail.com>
* Small formatting improvement
Signed-off-by: Alex Black <blacka101@gmail.com>
* Formatting tweak
Signed-off-by: Alex Black <blacka101@gmail.com>
* License headers
Signed-off-by: Alex Black <blacka101@gmail.com>
* Timeouts added
* Added some ops
* Ops added
* Fixed tests
* Minor fix
* Some fixes
* Digamma added
* Small fixes
* Fused batch norm fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tests switched off.
* Added test for resize_bicubic.
* Eliminated waste in the test of bicubic resize.
* Switched off multithreading explicitly.
* HsvToRgb and RgbToHsv added
* Eliminated wasted comments and conformed to proper float constants.
Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed multithreading with resize_bicubic helper for cpu platform.
Signed-off-by: shugeo <sgazeos@gmail.com>
* ResizeBicubic was fixed.
* Some fixes
* Fix op name
* Validation fixed.
* Clarifications for tests
* Wrappers and small fixes for new ops.
* Add op counting to TensorFlowImportValidator
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Test tweak
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* cleaned up bert iterator tests (#110)
Signed-off-by: eraly <susan.eraly@gmail.com>
* Various pre-release fixes (#111)
* Various fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix default dtypes for MaxPoolWithArgmax
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small pre-release tweak (#112)
* Log UI address on launch as in previous Play-based UI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Logging level tweak for UI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* http not https
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* datavec python ensure host (#113)
* ensure host
* one more host ensure
* info->debug
* [WIP] reverse improvements (#115)
* initial commit
Signed-off-by: raver119 <raver119@gmail.com>
* reverse draft
Signed-off-by: raver119 <raver119@gmail.com>
* reverse kernel
Signed-off-by: raver119 <raver119@gmail.com>
* reverse kernel
Signed-off-by: raver119 <raver119@gmail.com>
* 2 micro fixes
Signed-off-by: raver119 <raver119@gmail.com>
* Shugeo resize fix5 (#102)
* Refactored resize images ops to use TF-like bool args as input.
* Refactored helpers for cpu implementation of resize_bilinear and resize_nearest_neighbor ops.
* Refactored cuda implementation for image.resize_bilinear and image.resize_nearest_neighbor ops helpers.
* Refactored nearest_neighbor resize op.
* Added a pair of tests for special case of resize_bilinear algorithm.
* Fixed issue with resize_bilinear op.
* Refactored cpu implementation for helpers with resize_nearest_neighbor op.
* Final fixes for resize ops to conform to TF v1.5
* Refactored cuda helpers for resize_nearest_neighbor op.
* Fixed resize_bilinear to accept proper data.
* Fixed issue with non-float input for resize_bilinear op.
* Refactored cuda helper for resize_bilinear to properly process non-float inputs.
* Added tests for resize_bilinear to int inputs.
* Fixed ResizeBilinear wrapper
* Tests fixed
* Fixed float and bool constants to avoid overflow for some kinds of compilers.
* Corrected float constants with float data type.
* Added f suffix for float constants.
* Corrected float constant to avoid overflow with initializing lists.
* Corrected float initializing list with float input.
* Corrected bool constant with initializing list.
* Corrected float and bool values with initializing lists.
* Fixed wrong constant.
* Fixed issue with 1x1 input picture for resize.
* ResizeBilinear default values on import fix
Signed-off-by: raver119 <raver119@gmail.com>
* - add causal mode of padding to convolutions
Signed-off-by: Yurii <iuriish@yahoo.com>
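For context, the padding arithmetic that causal mode implies for a 1D convolution with kernel size k and dilation d (standard formulation, independent of the libnd4j argument layout):

$$
\text{pad}_{\text{left}} = (k - 1)\cdot d, \qquad \text{pad}_{\text{right}} = 0
$$

so the output at time step t depends only on inputs at steps <= t.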
* - add additional tests for causal conv1d
Signed-off-by: Yurii <iuriish@yahoo.com>
* - add causal mode for cuda conv kernels
Signed-off-by: Yurii <iuriish@yahoo.com>
* Java side of Conv1D changes
Signed-off-by: raver119 <raver119@gmail.com>
* Add Conv1DDerivative op
Signed-off-by: Alex Black <blacka101@gmail.com>
* Causal Conv1D gradient checks
Signed-off-by: Alex Black <blacka101@gmail.com>
* Tweaks
Signed-off-by: Alex Black <blacka101@gmail.com>
* - add causal padding mode to conv2d_bp
Signed-off-by: Yurii <iuriish@yahoo.com>
* More thorough causal conv1d tests
Signed-off-by: Alex Black <blacka101@gmail.com>
* Implementation for non_max_suppression_v3 was added. Initial version
* Added check for overcoming the threshold.
* Added definition for V3 method.
* java remapping for NonMaxSuppressionV3
Signed-off-by: raver119 <raver119@gmail.com>
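For context, the IoU criterion that non-max suppression variants use (standard definition, not the exact v3 argument semantics):

$$
\mathrm{IoU}(A, B) = \frac{\lvert A \cap B\rvert}{\lvert A \cup B\rvert}
$$

A candidate box is suppressed when its IoU with an already-selected, higher-scoring box exceeds the threshold.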
* Fixed proper processing of an empty output, and test.
* Refactored op to cast threshold data to float.
* Implemented cuda-based helper for non_max_suppression_v3 op.
* Fixed fake_quant_with_min_max_vars op.
* Fixed tests with float numbers.
* - assert now stops execution
- sortByKey/sortByValue now have input validation
Signed-off-by: raver119 <raver119@gmail.com>
* missing var
Signed-off-by: raver119 <raver119@gmail.com>
* Fixed proper processing for zero max_size inputs.
* Refactored kernel callers.
* Fixed return statement for logdet op helper.
* Refactored unsorted segment SqrtN op.
* get back 8 tail bytes on CUDA
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored segment prod ops and helpers for cuda and tests.
* Additional test.
* CudaWorkspace tests updated for 8 tail bytes
Signed-off-by: raver119 <raver119@gmail.com>
* special atomic test
Signed-off-by: raver119 <raver119@gmail.com>
* atomicMul/atomicDiv fix for 16bit values
Signed-off-by: raver119 <raver119@gmail.com>
* Eliminated wasted prints.
* Update shaded Jackson version to 2.10.1
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove no longer needed scala compiler plugin from UI
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix op name for BitwiseAnd op
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* TimeDistributedLayer mask array fix + test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Corrected input checking and tests for bitcast op.
* Fixed an issue with non_max_suppression form generation and processing with score threshold given.
* Fixed bilinear resize kernel and tests.
* push for Serhii
Signed-off-by: raver119 <raver119@gmail.com>
* Added test for nearest_neighbor resize with int input.
* Added data type check for input/output match.
* Eliminate error in macros.
* Improved output message for type checking.
* Fixed input/output types for op.
* Eliminated wasted logging.
* Refactored resize_bilinear helper for multithreading for cpu platform.
* Cosmetic changes only.
* Fixed error for string substitution.
* Skip test for cbow_batch with cuda.
* fix for resizeNearestNeighbor output dtype
Signed-off-by: raver119 <raver119@gmail.com>
* Refactored non_max_suppression helper.
* Refactored shape generation and input handling.
* Added additional test.
* - create op
- skip exec for empty inputs for non_max_suppression
- EmptyHandling idea
Signed-off-by: raver119 <raver119@gmail.com>
* Create op and mapping for it
Signed-off-by: raver119 <raver119@gmail.com>
* - get rid of some copy procedures in mmulHelper ops
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further work on embedding cuda api for batched gemm (cublasGemmBatchedEx) in our mmulHelper class
Signed-off-by: Yurii <iuriish@yahoo.com>
* - further work on cuda batched gemm api
Signed-off-by: Yurii <iuriish@yahoo.com>
* - write own cuda kernel performing batched gemm
Signed-off-by: Yurii <iuriish@yahoo.com>
* missing include in MmulHelper
Signed-off-by: raver119 <raver119@gmail.com>
* - forgot to keep the previous correct kernels for mmulNxN in the code, since the new one may fail for some reason in the future
Signed-off-by: Yurii <iuriish@yahoo.com>
* disable old tensordot
Signed-off-by: raver119 <raver119@gmail.com>
* - rewrite cuda kernels for usualGemm and usualGemv
Signed-off-by: Yurii <iuriish@yahoo.com>
* - profiling mmul helpers
Signed-off-by: Yurii <iuriish@yahoo.com>
* - prints to check shapes were added
Signed-off-by: Yurii <iuriish@yahoo.com>
* - correct type of output array C in mmulNxN
Signed-off-by: Yurii <iuriish@yahoo.com>
* - take into account possible nans in C array
Signed-off-by: Yurii <iuriish@yahoo.com>
* slightly change numThreads message
Signed-off-by: raver119 <raver119@gmail.com>
* - make corrections in accordance with the notes given in the PR review
Signed-off-by: Yurii <iuriish@yahoo.com>
* Added implementation files for image_resize and resize_bicubic ops.
* Image resize and image.resize_bicubic ops implementation. Initial revision.
* Minor fix
* Some TF imports disabled.
* Finished with infrastructure development for image.resize_bilinear op and image_resize op implementation.
* Refactored resize methods.
* Added processing for the Mitchell cubic algorithm.
* adjust_contrast
* Small fix for TF import expected value loading when variable name starts with the test name
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tests
* Tests added.
* Removed tf names absent in mapping.
* Some fixes.
* Small fixes
* Minor change
* Some failing tests.
* Disable failed test
* Ignore some tests
* Fix import class mapping
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fix float property mapping (flatbuffers)
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Override equality function for model 'dropout'
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Fail tests
* Failed tests ignored temporarily.
* Minor fixes
* Small fix
* Conflict resolved
* Default implementations of tensorflowName and onnxName
* one range test
Signed-off-by: raver119 <raver119@gmail.com>
* few Context convenience signatures
Signed-off-by: raver119 <raver119@gmail.com>
* one more range test
Signed-off-by: raver119 <raver119@gmail.com>
* "range" "fix"
Signed-off-by: raver119 <raver119@gmail.com>
* adjust_contrast_v2 now allows scale factor to be provided via input_variable
Signed-off-by: raver119 <raver119@gmail.com>
* adjust_contrast now allows scale factor as variable too
Signed-off-by: raver119 <raver119@gmail.com>
* bitcast shape tests
Signed-off-by: raver119 <raver119@gmail.com>
* BitCast import dtype added
Signed-off-by: raver119 <raver119@gmail.com>
* few more BitCast signatures
Signed-off-by: raver119 <raver119@gmail.com>
* - platform helpers can be disabled on per-op basis now via Context::allowHelpers
- java has access to it as well
Signed-off-by: raver119 <raver119@gmail.com>
* global platform-helpers trigger
Signed-off-by: raver119 <raver119@gmail.com>
* few signatures renamed
Signed-off-by: raver119 <raver119@gmail.com>
* - few new env variables to follow
- maxThreads/masterThreads differentiation
Signed-off-by: raver119 <raver119@gmail.com>
* Javadoc update
Signed-off-by: raver119 <raver119@gmail.com>
* #8280 biasadd_bp nchw arg fixes (java side) + test
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8285 Concat op Java side fixes
Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Concat op cpp fix - allow dynamic axis to be negative, same as static axis
Signed-off-by: AlexDBlack <blacka101@gmail.com>
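A minimal note on the axis normalization this implies (standard convention for negative axes):

$$
\text{axis} < 0 \;\Rightarrow\; \text{axis} \leftarrow \text{axis} + \text{rank}
$$

so axis -1 on a rank-3 input concatenates along dimension 2, whether the axis is given statically or as a dynamic input.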
* ignores for deconv3d import tests until deconv3d_tf op is implemented
Signed-off-by: AlexDBlack <blacka101@gmail.com>