# Monorepo of Deeplearning4j
Welcome to the new monorepo of Deeplearning4j! It contains the source code for all of the following projects, in addition to the original Deeplearning4j repository, which now lives in the deeplearning4j directory:
- https://github.com/eclipse/deeplearning4j/tree/master/libnd4j
- https://github.com/eclipse/deeplearning4j/tree/master/nd4j
- https://github.com/eclipse/deeplearning4j/tree/master/datavec
- https://github.com/eclipse/deeplearning4j/tree/master/arbiter
- https://github.com/eclipse/deeplearning4j/tree/master/nd4s
- https://github.com/eclipse/deeplearning4j/tree/master/rl4j
- https://github.com/eclipse/deeplearning4j/tree/master/scalnet
- https://github.com/eclipse/deeplearning4j/tree/master/pydl4j
- https://github.com/eclipse/deeplearning4j/tree/master/jumpy
- https://github.com/eclipse/deeplearning4j/tree/master/pydatavec
To build everything, you can use commands like:

```shell
./change-cuda-versions.sh x.x
./change-scala-versions.sh 2.xx
./change-spark-versions.sh x
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx
```

or:

```shell
mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true
```
Here "CC" is your GPU's compute capability: for example, a Titan X Pascal has compute capability 6.1, which is passed as `-Dlibnd4j.compute=61`.
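If you are not sure which compute capability your GPU has, recent NVIDIA drivers can report it directly; the sketch below assumes a driver new enough to support the `compute_cap` query field of `nvidia-smi` (on older drivers, look your card up in NVIDIA's CUDA GPUs list instead):

```shell
# Print each GPU's name and compute capability, e.g. "TITAN X (Pascal), 6.1".
# The compute_cap field requires a reasonably recent NVIDIA driver.
nvidia-smi --query-gpu=name,compute_cap --format=csv
# Drop the dot to get the value for -Dlibnd4j.compute (6.1 -> 61).
```

If you have no CUDA device, a CPU-only build of the same modules should look roughly like this (a sketch, assuming `-Dlibnd4j.chip=cpu` selects the CPU backend of libnd4j, so the CUDA-specific properties are dropped):

```shell
mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cpu -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true
```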
## Want some examples?
We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples
In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/eclipse/deeplearning4j-examples/tree/master/tutorials