

Monorepo of Deeplearning4j

Welcome to the new monorepo of Deeplearning4j. It contains the source code for all of the following projects, in addition to the original repository of Deeplearning4j (moved to the deeplearning4j directory):

arbiter
datavec
gym-java-client
jumpy
libnd4j
nd4j
nd4s
pydatavec
pydl4j
rl4j
scalnet
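
To work with the source locally, clone the repository and run the build from its root. A minimal sketch, assuming the repository lives at github.com/eclipse/deeplearning4j (the same organization as the examples repository linked below):

git clone https://github.com/eclipse/deeplearning4j.git
cd deeplearning4j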

To build everything, we can use commands like

./change-cuda-versions.sh x.x
./change-scala-versions.sh 2.xx
./change-spark-versions.sh x
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx

or

mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true

An example of a GPU "CC" (compute capability) is 61, for the Titan X Pascal.
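
As a concrete illustration, a hypothetical configuration targeting CUDA 10.2, Scala 2.11, and compute capability 61 (the Titan X Pascal mentioned above) might look like the following; the versions here are assumptions for the sketch, so substitute the ones that match your environment:

./change-cuda-versions.sh 10.2
./change-scala-versions.sh 2.11
mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=10.2 -Dlibnd4j.compute=61 -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true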

Want some examples?

We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples

In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/eclipse/deeplearning4j-examples/tree/master/tutorials