# Monorepo of Deeplearning4j

Welcome to the new Deeplearning4j monorepo. It contains the source code for all of the following projects, in addition to the original Deeplearning4j repository, which has moved to the deeplearning4j directory:
- https://github.com/deeplearning4j/libnd4j
- https://github.com/deeplearning4j/nd4j
- https://github.com/deeplearning4j/datavec
- https://github.com/deeplearning4j/arbiter
- https://github.com/deeplearning4j/nd4s
- https://github.com/deeplearning4j/gym-java-client
- https://github.com/deeplearning4j/rl4j
- https://github.com/deeplearning4j/scalnet
- https://github.com/deeplearning4j/pydl4j
- https://github.com/deeplearning4j/jumpy
- https://github.com/deeplearning4j/pydatavec
To build everything, we can use commands like:

```shell
./change-cuda-versions.sh x.x
./change-scala-versions.sh 2.xx
./change-spark-versions.sh x
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx
```

or

```shell
mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true
```
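As a concrete sketch, a hypothetical build targeting CUDA 9.2, Scala 2.11, and Spark 2 on a GPU with compute capability 6.1 (the exact versions here are illustrative, not prescribed by this repository) might look like:

```shell
# Hypothetical example values: CUDA 9.2, Scala 2.11, Spark 2, compute capability 61
./change-cuda-versions.sh 9.2
./change-scala-versions.sh 2.11
./change-spark-versions.sh 2
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=61
```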
An example of a GPU "CC", or compute capability, is 61 for a Titan X Pascal.
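If you are unsure of your GPU's compute capability, one way to look it up is via `nvidia-smi` (the `compute_cap` query field is an assumption that only holds on reasonably recent NVIDIA drivers; on older drivers, the CUDA `deviceQuery` sample reports the same information):

```shell
# Print the name and compute capability of each installed GPU,
# e.g. "TITAN X (Pascal), 6.1" -> use 61 for -Dlibnd4j.compute
nvidia-smi --query-gpu=name,compute_cap --format=csv
```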
## Want some examples?

We have a separate repository with various examples available: https://github.com/deeplearning4j/dl4j-examples

In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/deeplearning4j/dl4j-examples/tree/master/tutorials