# Monorepo of Deeplearning4j

Welcome to the new Deeplearning4j monorepo, which contains the source code for all of the following projects, in addition to the original Deeplearning4j repository (now under deeplearning4j):
- https://github.com/eclipse/deeplearning4j/tree/master/libnd4j
- https://github.com/eclipse/deeplearning4j/tree/master/nd4j
- https://github.com/eclipse/deeplearning4j/tree/master/datavec
- https://github.com/eclipse/deeplearning4j/tree/master/arbiter
- https://github.com/eclipse/deeplearning4j/tree/master/nd4s
- https://github.com/eclipse/deeplearning4j/tree/master/gym-java-client
- https://github.com/eclipse/deeplearning4j/tree/master/rl4j
- https://github.com/eclipse/deeplearning4j/tree/master/scalnet
- https://github.com/eclipse/deeplearning4j/tree/master/pydl4j
- https://github.com/eclipse/deeplearning4j/tree/master/jumpy
- https://github.com/eclipse/deeplearning4j/tree/master/pydatavec
To build everything, use commands like:

```shell
./change-cuda-versions.sh x.x
./change-scala-versions.sh 2.xx
./change-spark-versions.sh x
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx
```

or:

```shell
mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true
```
For example, the compute capability ("CC") of a Titan X Pascal GPU is 61.
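The `-Dlibnd4j.compute` value is the GPU's compute capability with the dot dropped (CC 6.1 becomes 61). A minimal sketch of that lookup for a few common cards — the helper function name and the card subset are illustrative, not part of the build scripts; consult NVIDIA's CUDA documentation for your card:

```shell
#!/bin/sh
# Illustrative helper: map a few common NVIDIA GPUs to the
# -Dlibnd4j.compute value (compute capability without the dot).
# Subset only; not an exhaustive table.
cc_for_gpu() {
  case "$1" in
    "Titan X Pascal") echo 61 ;;  # CC 6.1 -> 61
    "GTX 1080")       echo 61 ;;  # CC 6.1 -> 61
    "Tesla V100")     echo 70 ;;  # CC 7.0 -> 70
    *)                echo "unknown" ;;
  esac
}

cc_for_gpu "Titan X Pascal"
```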
## Want some examples?

We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples

In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/eclipse/deeplearning4j-examples/tree/master/tutorials