Yurii Shyrma e700b59f80
Shyrma weights format (#329)
* - start to introduce additional weights formats into conv2d ops

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide weights format variety in backprop conv2d and deconv2d ops, testing and fixing bugs

Signed-off-by: Yurii <iuriish@yahoo.com>

* - forgot to recover kernel sizes in deconv2d_bp test

Signed-off-by: Yurii <iuriish@yahoo.com>

* - built in weights format support in depthwise conv2d op

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide new weights formats in mkl dnn conv ops

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide new weights formats in cuda conv helpers

Signed-off-by: Yurii <iuriish@yahoo.com>

* - working with new weights format in cudnn conv api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - take into account order of arrays in cudnn tensor descriptors

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide new weights formats in cpu conv3d (ff/bp)

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide new weights formats in cpu deconv3d (ff/bp)

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide new weights formats in conv3d ops (ff/bp) based on mkl api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - provide new weights formats in conv3d ops (ff/bp) based on cudnn api

Signed-off-by: Yurii <iuriish@yahoo.com>

* - resolve conflicts 2

Signed-off-by: Yurii <iuriish@yahoo.com>

Co-authored-by: raver119 <raver119@gmail.com>
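
The commits above add selectable weight layouts to the conv2d/conv3d, deconv, and depthwise ops across the CPU, MKL-DNN, and cuDNN code paths. The exact format codes are not spelled out in the commit message; as a rough ND4J sketch of what switching a weights format involves, the example below (hypothetical class and variable names, assuming a [kH, kW, iC, oC] source layout) simply permutes a conv2d weights array into an [oC, iC, kH, kW] layout of the kind that cuDNN-style backends typically expect.

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class WeightsFormatSketch {
    public static void main(String[] args) {
        // Hypothetical sizes: 3x3 kernel, 16 input channels, 32 output channels.
        int kH = 3, kW = 3, iC = 16, oC = 32;

        // Weights stored in an assumed [kH, kW, iC, oC] layout.
        INDArray wHWIO = Nd4j.rand(new int[]{kH, kW, iC, oC});

        // Reorder to an assumed [oC, iC, kH, kW] layout and make it contiguous in memory.
        INDArray wOIHW = wHWIO.permute(3, 2, 0, 1).dup('c');

        // Prints [32, 16, 3, 3]
        System.out.println(java.util.Arrays.toString(wOIHW.shape()));
    }
}

Backends such as cuDNN and MKL-DNN expect weights in particular layouts, so an op that accepts more than one format either has to reorder the array as above or pass the format through to the backend's descriptors; which choice the PR makes for each backend is not visible from the commit message alone.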

Monorepo of Deeplearning4j

Welcome to the new monorepo of Deeplearning4j. It contains the source code for all of the following projects, in addition to the original Deeplearning4j repository, which has been moved to the deeplearning4j directory:

To build everything, we can use commands like

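# x.x, 2.xx, and x below are placeholders for the CUDA, Scala, and Spark versions you build against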
./change-cuda-versions.sh x.x
./change-scala-versions.sh 2.xx
./change-spark-versions.sh x
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx

or

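# -pl '!jumpy,!pydatavec,!pydl4j' skips the Python-related modules; set -Dlibnd4j.compute to your GPU's compute capability (see below)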
mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true

An example of a GPU "CC", or compute capability, is 61 for the Titan X Pascal.

Want some examples?

We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples

In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/eclipse/deeplearning4j-examples/tree/master/tutorials
