Latest commit: raver119 7783012f39
cuDNN integration (#150), committed 2020-01-20 21:32:46 +03:00
Co-authored-by: Yurii Shyrma <iuriish@yahoo.com>

Squashed commits, signed off by raver119 <raver119@gmail.com>, except those marked (Yurii), which were signed off by Yurii <iuriish@yahoo.com>:

* initial commit
* one file
* few more includes
* m?
* const
* cudnn linkage in tests
* culibos
* static reminder
* platform engine tag
* HAVE_CUDNN moved to config.h.in
* include
* include
* skip cudnn handle creation if there's no cudnn
* meh
* target device in context
* platform engines
* platform engines
* allow multiple -h args
* allow multiple -h args
* move mkldnn out of CPU block
* link to mkldnn on cuda
* less prints
* minor tweaks
* next step
* conv2d NCHW draft
* conv2d biasAdd
* test for MKL/CUDNN combined use
* provide additional code for conv2d ff based on the cudnn api, not tested yet (Yurii)
* further work on the conv2d helper based on the cudnn api (Yurii)
* fix several cuda bugs that appeared once the cudnn lib came into use (Yurii)
* implementation of the conv2d backprop op based on the cudnn api (Yurii)
* implementation of the conv3d and conv3d_bp ops based on the cudnn api (Yurii)
* bug fixes in the conv3d/conv3d_bp ops (cudnn in use) (Yurii)
* implementation of the depthwiseConv2d (ff/bp) op based on the cudnn api (Yurii)
* implementation of the batchnorm ff op based on the cudnn api (Yurii)
* disable cudnn batchnorm temporarily (Yurii)
* add a minor change in cmake (Yurii)
* engine for depthwise mkldnn
* couple of includes
* provide a permutation to cudnn batchnorm ff when the format is NHWC (Yurii)
* lgamma fix
* eliminate a memory leak in two tests (Yurii)

.github Update contributing and issue/PR templates (#7934) 2019-06-22 16:21:27 +10:00
arbiter Add support for CUDA 10.2 (#89) 2019-11-29 16:31:03 +11:00
datavec Various fixes (#143) 2020-01-04 13:45:07 +11:00
deeplearning4j String changes (#3) 2020-01-04 13:27:50 +03:00
docs Mention the new % unit for maxBytes and maxPhysicalBytes in Memory management documentation (#8435) (#8461) 2019-12-05 12:47:53 +09:00
gym-java-client RL4J: Make a few fixes (#8303) 2019-10-31 13:41:52 +09:00
jumpy Update links to eclipse repos (#252) 2019-09-10 19:09:46 +10:00
libnd4j cuDNN integration (#150) 2020-01-20 21:32:46 +03:00
nd4j cuDNN integration (#150) 2020-01-20 21:32:46 +03:00
nd4s String changes (#3) 2020-01-04 13:27:50 +03:00
pydatavec Minor edits to README for pydatavec and pydl4j (#8336) 2019-12-06 08:10:38 +01:00
pydl4j Minor edits to README for pydatavec and pydl4j (#8336) 2019-12-06 08:10:38 +01:00
rl4j Merge pull request #8495 from KonduitAI/master 2019-12-05 11:05:44 +11:00
scalnet Add support for CUDA 10.2 (#89) 2019-11-29 16:31:03 +11:00
.gitignore fix pydatavec for python 3... and python2 install problems (#8422) 2019-11-20 08:20:04 +01:00
CONTRIBUTING.md Various fixes (#43) 2019-11-14 19:38:20 +11:00
Jenkinsfile Eclipse Migration Initial Commit 2019-06-06 15:21:15 +03:00
LICENSE Eclipse Migration Initial Commit 2019-06-06 15:21:15 +03:00
README.md Update links to eclipse repos (#252) 2019-09-10 19:09:46 +10:00
change-cuda-versions.sh Add support for CUDA 10.2 (#89) 2019-11-29 16:31:03 +11:00
change-scala-versions.sh Version upgrades (#199) 2019-08-30 14:35:27 +10:00
perform-release.sh Eclipse Migration Initial Commit 2019-06-06 15:21:15 +03:00
pom.xml Python updates (#86) 2019-12-02 19:20:23 +11:00

README.md

Monorepo of Deeplearning4j

Welcome to the new Deeplearning4j monorepo. In addition to the original Deeplearning4j repository, which now lives under the deeplearning4j directory, it contains the source code for all of the following projects:

* arbiter
* datavec
* gym-java-client
* jumpy
* libnd4j
* nd4j
* nd4s
* pydatavec
* pydl4j
* rl4j
* scalnet

To build everything, we can use commands like

./change-cuda-versions.sh x.x
./change-scala-versions.sh 2.xx
./change-spark-versions.sh x
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx
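
For instance, with the placeholders filled in, a CUDA build might look like the lines below. The CUDA 10.2, Scala 2.11, and compute-capability 61 values are assumptions chosen purely for illustration; substitute the versions your environment actually uses.

# Illustrative values only (assumed): CUDA 10.2, Scala 2.11, compute capability 6.1
./change-cuda-versions.sh 10.2
./change-scala-versions.sh 2.11
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=10.2 -Dlibnd4j.compute=61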

or

mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true

An example of a GPU "CC" (compute capability) value is 61 for a Titan X Pascal.
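
Putting it together, a hypothetical full CUDA build for a Titan X Pascal could therefore pass -Dlibnd4j.compute=61; the CUDA 10.2 value below is again only an assumed example.

# Hypothetical example: Linux x86_64, CUDA 10.2 (assumed), Titan X Pascal (compute capability 61)
mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=10.2 -Dlibnd4j.compute=61 -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true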

Want some examples?

We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples

In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/eclipse/deeplearning4j-examples/tree/master/tutorials