Monorepo of Deeplearning4j
Welcome to the new monorepo of Deeplearning4j. It contains the source code for all of the following projects, in addition to the original Deeplearning4j repository, which now lives under deeplearning4j:
- https://github.com/eclipse/deeplearning4j/tree/master/libnd4j
- https://github.com/eclipse/deeplearning4j/tree/master/nd4j
- https://github.com/eclipse/deeplearning4j/tree/master/datavec
- https://github.com/eclipse/deeplearning4j/tree/master/arbiter
- https://github.com/eclipse/deeplearning4j/tree/master/nd4s
- https://github.com/eclipse/deeplearning4j/tree/master/gym-java-client
- https://github.com/eclipse/deeplearning4j/tree/master/rl4j
- https://github.com/eclipse/deeplearning4j/tree/master/scalnet
- https://github.com/eclipse/deeplearning4j/tree/master/pydl4j
- https://github.com/eclipse/deeplearning4j/tree/master/jumpy
- https://github.com/eclipse/deeplearning4j/tree/master/pydatavec
To build everything, you can use commands like:
```
./change-cuda-versions.sh x.x
./change-scala-versions.sh 2.xx
./change-spark-versions.sh x
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx
```
or
```
mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true
```
An example of a GPU "CC" (compute capability) is 61 for the Titan X Pascal.
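As an illustration only, here is what the full sequence might look like on a Linux x86_64 machine, assuming CUDA 9.2 and a Titan X Pascal (compute capability 61) as mentioned above, and assuming Scala 2.11 and Spark 2 as example choices; adjust every version number and the compute capability to match your own setup:

```
# Switch the build to the desired CUDA, Scala, and Spark versions
./change-cuda-versions.sh 9.2
./change-scala-versions.sh 2.11
./change-spark-versions.sh 2

# Build all modules except the Python bindings,
# targeting CUDA 9.2 and compute capability 61, with tests skipped
mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' \
    -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda \
    -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=61 \
    -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true
```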
Want some examples?
We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples
In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/eclipse/deeplearning4j-examples/tree/master/tutorials