raver119 3e2dbc65dd
MatMul for gemm/gemv calls (#365)
* libnd4j added optional alpha and beta support to matmul

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j typo fixes

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j add optional alpha and beta to matmul_bp

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j one more typo fix

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added optional alpha and beta to mkl implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* MatMul alpha/beta on java side

Signed-off-by: raver119 <raver119@gmail.com>

* alpha/beta fix in libnd4j

Signed-off-by: raver119 <raver119@gmail.com>

* alpha/beta fix in matmul_bp

Signed-off-by: raver119 <raver119@gmail.com>

* restored view validation

Signed-off-by: raver119 <raver119@gmail.com>

* gemv/gemm now use MatMul op

Signed-off-by: raver119 <raver119@gmail.com>

* few tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* additional INDArray.mmul signature

Signed-off-by: raver119 <raver119@gmail.com>

* make C order default for INDArray.mmul, unless both A/B have F order

Signed-off-by: raver119 <raver119@gmail.com>
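
To illustrate the ordering rule in the commit above, here is a minimal sketch; the class name is illustrative, and it assumes only the standard Nd4j factory methods:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class MmulOrderingSketch {
    public static void main(String[] args) {
        INDArray aC = Nd4j.create(new int[]{2, 3}, 'c');  // C-order input
        INDArray aF = Nd4j.create(new int[]{2, 3}, 'f');  // F-order input
        INDArray bF = Nd4j.create(new int[]{3, 4}, 'f');  // F-order input

        // Mixed orders: the result defaults to C order
        System.out.println(aC.mmul(bF).ordering());  // expected: c
        // Both inputs F order: the result keeps F order
        System.out.println(aF.mmul(bF).ordering());  // expected: f
    }
}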

* Nd4j.gemm validation fix

Signed-off-by: raver119 <raver119@gmail.com>

* disable mkldnn matmul for xxf with beta != 0 case

Signed-off-by: raver119 <raver119@gmail.com>

* SimpleRnn workspace fix + timeouts

Signed-off-by: Alex Black <blacka101@gmail.com>

* two more tests + minor fix in matmul platform check

Signed-off-by: raver119 <raver119@gmail.com>

* Flaky test fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* propagate testresources profile

Signed-off-by: raver119 <raver119@gmail.com>

* Resources fix + flaky test fix

Signed-off-by: Alex Black <blacka101@gmail.com>

Co-authored-by: Oleg <oleg.semeniv@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
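
For context on the alpha/beta options referenced throughout this change: BLAS-style GEMM computes C = alpha * (A x B) + beta * C. A minimal ND4J sketch of those semantics, using only basic mmul/mul/add calls rather than the fused matmul path this PR adds (class name illustrative):

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class GemmSemanticsSketch {
    public static void main(String[] args) {
        INDArray a = Nd4j.rand(2, 3);  // A: 2x3
        INDArray b = Nd4j.rand(3, 4);  // B: 3x4
        INDArray c = Nd4j.rand(2, 4);  // C: 2x4, scaled by beta and accumulated into

        double alpha = 2.0;
        double beta = 0.5;

        // Reference semantics: result = alpha * (A x B) + beta * C
        INDArray result = a.mmul(b).mul(alpha).add(c.mul(beta));
        System.out.println(result);
    }
}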

Monorepo of Deeplearning4j

Welcome to the new Deeplearning4j monorepo, which contains the source code for all of the following projects, in addition to the original Deeplearning4j repository, now moved to the deeplearning4j directory:

To build everything, we can use commands like the following:

./change-cuda-versions.sh x.x
./change-scala-versions.sh 2.xx
./change-spark-versions.sh x
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx

or

mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true

An example of a GPU "CC" (compute capability) value is 61 for a Titan X Pascal.
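
For instance, a CUDA build for a Pascal card might look like this (assuming CUDA 10.2 is among the versions the scripts support):

./change-cuda-versions.sh 10.2
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=10.2 -Dlibnd4j.compute=61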

Want some examples?

We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples

In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/eclipse/deeplearning4j-examples/tree/master/tutorials
