
Monorepo of Deeplearning4j

Welcome to the new monorepo of Deeplearning4j. It contains the source code for all of the following projects, in addition to the original Deeplearning4j repository, which has been moved into the deeplearning4j directory:

- libnd4j
- nd4j
- datavec
- arbiter
- nd4s
- gym-java-client
- rl4j
- scalnet
- pydl4j
- jumpy
- pydatavec

To build everything, we can use commands like:

    ./change-cuda-versions.sh x.x
    ./change-scala-versions.sh 2.xx
    ./change-spark-versions.sh x
    mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx

or

    mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' \
        -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda \
        -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> \
        -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true

An example of a GPU "CC" (compute capability) value is 61 for a Titan X Pascal.
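For instance, a full CUDA 9.2 build for a Titan X Pascal (compute capability 61) might look like the sketch below. The Scala and Spark versions shown are only illustrative assumptions; substitute the versions installed in your environment and your own GPU's compute capability:

    # Illustrative version choices: adjust to your installed CUDA toolkit, Scala and Spark
    ./change-cuda-versions.sh 9.2
    ./change-scala-versions.sh 2.11
    ./change-spark-versions.sh 2

    # Build while skipping the Python modules, targeting compute capability 61 (Titan X Pascal)
    mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' \
        -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda \
        -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=61 \
        -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true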

Want some examples?

We have a separate repository with various examples available: https://github.com/deeplearning4j/dl4j-examples

In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/deeplearning4j/dl4j-examples/tree/master/tutorials
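To try them locally, you can simply clone the examples repository; per the link above, the Zeppelin tutorials live in its tutorials directory:

    # Clone the examples and switch into the tutorials folder
    git clone https://github.com/deeplearning4j/dl4j-examples.git
    cd dl4j-examples/tutorials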