# Monorepo of Deeplearning4j
Welcome to the new monorepo of Deeplearning4j. It contains the source code for all of the following projects, in addition to the original Deeplearning4j repository, which has moved to the deeplearning4j directory:
- https://github.com/eclipse/deeplearning4j/tree/master/libnd4j
- https://github.com/eclipse/deeplearning4j/tree/master/nd4j
- https://github.com/eclipse/deeplearning4j/tree/master/datavec
- https://github.com/eclipse/deeplearning4j/tree/master/arbiter
- https://github.com/eclipse/deeplearning4j/tree/master/nd4s
- https://github.com/eclipse/deeplearning4j/tree/master/gym-java-client
- https://github.com/eclipse/deeplearning4j/tree/master/rl4j
- https://github.com/eclipse/deeplearning4j/tree/master/scalnet
- https://github.com/eclipse/deeplearning4j/tree/master/pydl4j
- https://github.com/eclipse/deeplearning4j/tree/master/jumpy
- https://github.com/eclipse/deeplearning4j/tree/master/pydatavec
To build everything, you can use commands like:

    ./change-cuda-versions.sh x.x
    ./change-scala-versions.sh 2.xx
    ./change-spark-versions.sh x
    mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx

or

    mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true
An example of a GPU "CC" (compute capability) value is 61, for a Titan X Pascal.
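
As a concrete sketch, a CUDA 9.2 build targeting a Titan X Pascal (compute capability 61) might look like the commands below. The specific CUDA and Scala versions here are only assumptions for illustration; substitute the versions and compute capability that match your setup.

```shell
# Assumed versions for illustration only -- adjust to your environment
./change-cuda-versions.sh 9.2       # select the CUDA version used by the ND4J/libnd4j modules
./change-scala-versions.sh 2.11     # select the Scala version used by the Scala modules

# Build with tests skipped, targeting the CUDA backend
mvn clean install -Dmaven.test.skip \
    -Dlibnd4j.chip=cuda \
    -Dlibnd4j.cuda=9.2 \
    -Dlibnd4j.compute=61            # 61 = Titan X Pascal, as noted above
```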
## Want some examples?
We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples
In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/eclipse/deeplearning4j-examples/tree/master/tutorials
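
To try the examples locally, a simple checkout (assuming git is installed) is enough:

```shell
# Clone the examples repository; the Zeppelin tutorial series lives under tutorials/
git clone https://github.com/eclipse/deeplearning4j-examples.git
cd deeplearning4j-examples
```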