Monorepo of Deeplearning4j
Welcome to the new monorepo of Deeplearning4j, which contains the source code for all of the following projects, in addition to the original Deeplearning4j repository, now located under the deeplearning4j directory:
- https://github.com/deeplearning4j/libnd4j
- https://github.com/deeplearning4j/nd4j
- https://github.com/deeplearning4j/datavec
- https://github.com/deeplearning4j/arbiter
- https://github.com/deeplearning4j/nd4s
- https://github.com/deeplearning4j/gym-java-client
- https://github.com/deeplearning4j/rl4j
- https://github.com/deeplearning4j/scalnet
- https://github.com/deeplearning4j/pydl4j
- https://github.com/deeplearning4j/jumpy
- https://github.com/deeplearning4j/pydatavec
To build everything, you can use commands like:

```shell
./change-cuda-versions.sh x.x
./change-scala-versions.sh 2.xx
./change-spark-versions.sh x
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx
```

or

```shell
mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true
```
An example of a GPU "CC" (compute capability) is 61 for a Titan X Pascal.
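As a rough sketch, a filled-in invocation could look like the following. The CUDA version (9.2) and compute capability (61) come from the commands above; the Scala (2.11) and Spark (1) versions are only illustrative assumptions, so substitute whatever matches your environment:

```shell
# Illustrative versions only -- adjust to your environment and GPU.
./change-cuda-versions.sh 9.2      # target CUDA 9.2
./change-scala-versions.sh 2.11    # assumed Scala version for this example
./change-spark-versions.sh 1       # assumed Spark major version for this example
# Build with tests skipped, targeting compute capability 61 (e.g. Titan X Pascal).
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=61
```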
Want some examples?
We have a separate repository with various examples available: https://github.com/deeplearning4j/dl4j-examples
In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/deeplearning4j/dl4j-examples/tree/master/tutorials