Monorepo of Deeplearning4j
Welcome to the Deeplearning4j monorepo, which contains the source code for all of the following projects, in addition to the original Deeplearning4j repository (now located in the deeplearning4j directory):
- https://github.com/eclipse/deeplearning4j/tree/master/libnd4j
- https://github.com/eclipse/deeplearning4j/tree/master/nd4j
- https://github.com/eclipse/deeplearning4j/tree/master/datavec
- https://github.com/eclipse/deeplearning4j/tree/master/arbiter
- https://github.com/eclipse/deeplearning4j/tree/master/nd4s
- https://github.com/eclipse/deeplearning4j/tree/master/rl4j
- https://github.com/eclipse/deeplearning4j/tree/master/scalnet
- https://github.com/eclipse/deeplearning4j/tree/master/pydl4j
- https://github.com/eclipse/deeplearning4j/tree/master/jumpy
- https://github.com/eclipse/deeplearning4j/tree/master/pydatavec
To build everything, we can use commands like the following, substituting the desired CUDA, Scala, and Spark versions for the x placeholders:
./change-cuda-versions.sh x.x
./change-scala-versions.sh 2.xx
./change-spark-versions.sh x
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx
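As a concrete illustration, the commands below fill in the placeholders with sample values; the CUDA version 9.2 and compute capability 61 come from this README, while Scala 2.11 and Spark 2 are assumptions, so substitute whatever versions your release actually supports:
./change-cuda-versions.sh 9.2
./change-scala-versions.sh 2.11
./change-spark-versions.sh 2
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=61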
Alternatively, everything can be configured in a single Maven invocation:
mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true
Here, <your GPU CC> is your GPU's compute capability ("CC"); for example, it is 61 for a Titan X Pascal.
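Putting that together, a build targeting a Titan X Pascal (compute capability 61, as noted above) might look like the sketch below; treat the CUDA version and platform as assumptions to adjust for your environment:
mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=61 -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true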
Want some examples?
We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples
In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/eclipse/deeplearning4j-examples/tree/master/tutorials