# Monorepo of Deeplearning4j
Welcome to the Deeplearning4j monorepo, which contains the source code for all of the following projects, in addition to the original Deeplearning4j repository (now under the deeplearning4j directory):
- https://github.com/eclipse/deeplearning4j/tree/master/libnd4j
- https://github.com/eclipse/deeplearning4j/tree/master/nd4j
- https://github.com/eclipse/deeplearning4j/tree/master/datavec
- https://github.com/eclipse/deeplearning4j/tree/master/arbiter
- https://github.com/eclipse/deeplearning4j/tree/master/nd4s
- https://github.com/eclipse/deeplearning4j/tree/master/rl4j
- https://github.com/eclipse/deeplearning4j/tree/master/scalnet
- https://github.com/eclipse/deeplearning4j/tree/master/pydl4j
- https://github.com/eclipse/deeplearning4j/tree/master/jumpy
- https://github.com/eclipse/deeplearning4j/tree/master/pydatavec
To build everything, you can use commands like:
```shell
./change-cuda-versions.sh x.x
./change-scala-versions.sh 2.xx
./change-spark-versions.sh x
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx
```
or
```shell
mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true
```
An example GPU compute capability ("CC") is 61 for the Titan X Pascal.
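If you're not sure which value to pass for `-Dlibnd4j.compute`, one way to look up your GPU's compute capability is the `deviceQuery` utility shipped with the CUDA samples. This is only a sketch: the path below assumes the default toolkit install location, which may differ on your system; NVIDIA's CUDA GPUs page lists the same values.

```shell
# Sketch only: assumes the CUDA samples are installed at the default toolkit location.
cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make                                # build the sample once (sudo only if the directory is root-owned)
./deviceQuery | grep "CUDA Capability"
# e.g. "CUDA Capability Major/Minor version number: 6.1" -> pass -Dlibnd4j.compute=61
```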
## Want some examples?
We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples
In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/eclipse/deeplearning4j-examples/tree/master/tutorials