cavis/libnd4j/include/ops/declarable/platform/mkldnn
raver119 (commit 3e2dbc65dd)
MatMul for gemm/gemv calls (#365)
* libnd4j added optional alpha and beta support to matmul

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j typo fixes

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j add optional alpha and beta to matmul_bp

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j one more typo fix

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added optional alpha and beta to mkl implementation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* MatMul alpha/beta on Java side

Signed-off-by: raver119 <raver119@gmail.com>

* alpha/beta fix in libnd4j

Signed-off-by: raver119 <raver119@gmail.com>

* alpha/beta fix in matmul_bp

Signed-off-by: raver119 <raver119@gmail.com>

* restored view validation

Signed-off-by: raver119 <raver119@gmail.com>

* gemv/gemm now use MatMul op

Signed-off-by: raver119 <raver119@gmail.com>

* few tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* additional INDArray.mmul signature

Signed-off-by: raver119 <raver119@gmail.com>

* make C order default for INDArray.mmul, unless both A/B have F order

Signed-off-by: raver119 <raver119@gmail.com>

* Nd4j.gemm validation fix

Signed-off-by: raver119 <raver119@gmail.com>

* disable mkldnn matmul for xxf with beta != 0 case

Signed-off-by: raver119 <raver119@gmail.com>

* SimpleRnn workspace fix + timeouts

Signed-off-by: Alex Black <blacka101@gmail.com>

* two more tests + minor fix in matmul platform check

Signed-off-by: raver119 <raver119@gmail.com>

* Flaky test fixes

Signed-off-by: Alex Black <blacka101@gmail.com>

* propagate testresources profile

Signed-off-by: raver119 <raver119@gmail.com>

* Resources fix + flaky test fix

Signed-off-by: Alex Black <blacka101@gmail.com>

Co-authored-by: Oleg <oleg.semeniv@gmail.com>
Co-authored-by: Alex Black <blacka101@gmail.com>
2020-04-10 17:57:02 +03:00
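The alpha/beta parameters added by this PR follow the standard BLAS GEMM contract, C = alpha·A·B + beta·C, which is also why the commit above special-cases beta != 0 for the mkldnn path. As a minimal plain-Java sketch of that contract (illustrative only; this is not the libnd4j/ND4J implementation, and `gemm` here is a hypothetical helper, not an ND4J API):

```java
// Plain-Java sketch of BLAS-style GEMM semantics: C = alpha * A * B + beta * C.
// Hypothetical helper for illustration; not the libnd4j/ND4J code path.
public class GemmSketch {
    static void gemm(double alpha, double[][] a, double[][] b, double beta, double[][] c) {
        int m = a.length, k = a[0].length, n = b[0].length;
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                double acc = 0.0;
                for (int p = 0; p < k; p++) {
                    acc += a[i][p] * b[p][j];
                }
                // With beta == 0 the prior contents of C are ignored entirely
                // (even NaN/uninitialized values must not leak through).
                c[i][j] = alpha * acc + (beta == 0.0 ? 0.0 : beta * c[i][j]);
            }
        }
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        double[][] c = {{1, 1}, {1, 1}};
        gemm(2.0, a, b, 0.5, c);
        // A*B = [[19, 22], [43, 50]]; scaled by alpha=2 and accumulated with beta=0.5
        System.out.println(c[0][0] + " " + c[0][1] + " " + c[1][0] + " " + c[1][1]);
    }
}
```

With beta = 0 the call is a plain matrix multiply, which is the common gemv/gemm case this PR routes through the MatMul op.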
avgpooling2d.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
avgpooling3d.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
batchnorm.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
conv2d.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
conv3d.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
deconv2d.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
deconv2d_tf.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
deconv3d.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
depthwiseConv2d.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
lrn.cpp libnd4j polishing (#273) 2020-03-02 12:49:41 +03:00
lstmLayer.cpp DL4J and SameDiff integration tests + LSTMLayer java op class (#353) 2020-04-09 00:20:48 +10:00
matmul.cpp MatMul for gemm/gemv calls (#365) 2020-04-10 17:57:02 +03:00
maxpooling2d.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
maxpooling3d.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
mkldnnUtils.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
mkldnnUtils.h xw_plus_b mkldnn implementation (#247) 2020-03-31 13:03:10 +03:00
softmax.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
tanh.cpp Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
xw_plus_b.cpp xw_plus_b mkldnn implementation (#247) 2020-03-31 13:03:10 +03:00