raver119 c396fcb960
More pre-release fixes (#456)
* - numPrefixBlocks fix for threshold_encoding
  - temp arrays pointers fixed

* auto configuration of memory workspace for gradients sharing

* limit sparse encoding message size

* one more workspace test

* one more CUDA-specific test

* more CUDA-specific workspace tests

* add separate host/device reset for circular workspace mode

* new PW builder method for encoder memory amount

* "inplace" execution for threshold encoding

Signed-off-by: raver119@gmail.com <raver119@gmail.com>
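The "auto configuration of memory workspace for gradients sharing" item above refers to ND4J memory workspaces, which this commit configures automatically for the gradient-sharing path rather than requiring manual setup. The following is a minimal, illustrative Java sketch of a manually configured workspace; the workspace name, initial size, and policies are example values chosen for this sketch, not the ones the commit auto-configures.

import org.nd4j.linalg.api.memory.MemoryWorkspace;
import org.nd4j.linalg.api.memory.conf.WorkspaceConfiguration;
import org.nd4j.linalg.api.memory.enums.AllocationPolicy;
import org.nd4j.linalg.api.memory.enums.LearningPolicy;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class WorkspaceSketch {
    public static void main(String[] args) {
        // Example (hand-picked) workspace configuration; the commit above derives
        // an equivalent configuration automatically for gradient sharing.
        WorkspaceConfiguration conf = WorkspaceConfiguration.builder()
                .initialSize(16 * 1024 * 1024L)            // pre-allocate 16 MB up front
                .policyAllocation(AllocationPolicy.STRICT) // no over-allocation beyond the configured size
                .policyLearning(LearningPolicy.FIRST_LOOP) // learn the required size on the first iteration
                .build();

        // Arrays created inside this block are placed in the workspace buffer
        try (MemoryWorkspace ws = Nd4j.getWorkspaceManager()
                .getAndActivateWorkspace(conf, "EXAMPLE_GRADIENTS_WS")) {
            INDArray grads = Nd4j.rand(1, 1000);
            System.out.println("sum = " + grads.sumNumber());
        }
    }
}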

Monorepo of Deeplearning4j

Welcome to the new monorepo of Deeplearning4j. It contains the source code for all of the formerly separate Eclipse Deeplearning4j projects, in addition to the original Deeplearning4j repository, which has been moved into the deeplearning4j directory.

To build everything, we can use commands like

./change-cuda-versions.sh x.x
./change-scala-versions.sh 2.xx
./change-spark-versions.sh x
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx

or

mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true

An example of a GPU "CC", or compute capability, is 61 for a Titan X Pascal; pass it via -Dlibnd4j.compute.
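For instance, on a Titan X Pascal (compute capability 61) the placeholders might be filled in as below. The CUDA 9.2, Scala 2.11, and Spark 2 values are only illustrative; substitute whatever versions your checkout actually supports.

./change-cuda-versions.sh 9.2
./change-scala-versions.sh 2.11
./change-spark-versions.sh 2
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=61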

Want some examples?

We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples

In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/eclipse/deeplearning4j-examples/tree/master/tutorials
