cavis/libnd4j/include/ops/declarable/generic/nn
Oleh 1d004b542a
xw_plus_b mkldnn implementation (#247)
* libnd4j first step of mkldnn for xw_plus_b and a test for the aurora crash in imageHelper
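
For context, xw_plus_b computes a fully connected layer, z = x * w + b, with x of shape [batch, nIn], w of shape [nIn, nOut] and b of shape [nOut]. A minimal standalone C++ sketch of that math follows; it uses plain row-major buffers and an illustrative function name, not libnd4j's actual NDArray implementation.

#include <cstddef>
#include <vector>

// Reference math for xw_plus_b: z[i][j] = sum_k x[i][k] * w[k][j] + b[j].
// Plain row-major buffers; illustrative only, not the libnd4j NDArray kernel.
void xwPlusBReference(const std::vector<float>& x, // [batch, nIn]
                      const std::vector<float>& w, // [nIn, nOut]
                      const std::vector<float>& b, // [nOut]
                      std::vector<float>& z,       // [batch, nOut]
                      std::size_t batch, std::size_t nIn, std::size_t nOut) {
    for (std::size_t i = 0; i < batch; ++i) {
        for (std::size_t j = 0; j < nOut; ++j) {
            float acc = b[j];
            for (std::size_t k = 0; k < nIn; ++k)
                acc += x[i * nIn + k] * w[k * nOut + j];
            z[i * nOut + j] = acc;
        }
    }
}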

* libnd4j sync folders with master

* libnd4j merged master, raw implementation of xw_plus_b on mkldnn, clean up; needs testing and checks for corresponding input shapes

* libnd4j corrections and checks added to xw_plus_b mkl

* libnd4j corrected dataType description based on the mkl operation description; needs more investigation

* libnd4j fixed xw_plus_b mkl implementation, needs testing

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j two unit tests added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed input dimensions check bug

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j one more test added to cover different order handling

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added optional int arg to define the weights format: if arg == 1, weights are in mkldnn format (no transpose needed in the mkldnn implementation), otherwise in mmul weights format (layout indexing sketched below); corrected check points and added a unit test

Signed-off-by: Oleg <oleg.semeniv@gmail.com>
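
The optional int arg above switches the expected weights layout: arg == 1 means the weights already sit in the [nOut, nIn] layout the mkldnn path consumes without a transpose, otherwise they use the classic mmul [nIn, nOut] layout. A hedged sketch of the indexing difference (helper name and buffer layout are illustrative, not the actual libnd4j code):

#include <cstddef>
#include <vector>

// Illustrative only: how the weights-format int arg changes weight indexing.
// wFormat == 1 -> weights stored as [nOut, nIn] (mkldnn-friendly, no transpose needed);
// otherwise   -> weights stored as [nIn, nOut] (classic mmul layout).
inline float weightAt(const std::vector<float>& w, int wFormat,
                      std::size_t in, std::size_t out,
                      std::size_t nIn, std::size_t nOut) {
    return wFormat == 1 ? w[out * nIn + in]    // [nOut, nIn]
                        : w[in * nOut + out];  // [nIn, nOut]
}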

* libnd4j merged master

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j some improvements to avoid NDArray transpose in xw_plus_b operation

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed issues related to weights rank; also added support for one tf-based case (for mkldnn, cpu, cuda); test case added

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added proper handling of empty inputs (all implementations)

* libnd4j fixed compilation error

* libnd4j several more corrections after conflict resolution; fixed typos

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j removed unsupported data types

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j merged master and fixed issues

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j added backpropagation implementation for xw_plus_b (gradients sketched below), fixed issue with the mkl weights data format, avoided data copy in transpose mode; test cases added, manually verified with gradCheck

Signed-off-by: Oleg <oleg.semeniv@gmail.com>
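
For the backpropagation bullet above, the gradients being implemented are the standard fully connected ones: dLdx = dLdz * w^T, dLdw = x^T * dLdz, and dLdb is the sum of dLdz over the batch dimension. A standalone C++ sketch of that math (again illustrative, not the actual xw_plus_b_bp kernel):

#include <algorithm>
#include <cstddef>
#include <vector>

// Reference gradients for z = x*w + b with x:[batch,nIn], w:[nIn,nOut], dLdz:[batch,nOut].
// Illustrative only; the real op works on NDArrays and supports both weight formats.
void xwPlusBBpReference(const std::vector<float>& x, const std::vector<float>& w,
                        const std::vector<float>& dLdz,
                        std::vector<float>& dLdx,  // [batch, nIn]
                        std::vector<float>& dLdw,  // [nIn, nOut]
                        std::vector<float>& dLdb,  // [nOut]
                        std::size_t batch, std::size_t nIn, std::size_t nOut) {
    std::fill(dLdw.begin(), dLdw.end(), 0.f);
    std::fill(dLdb.begin(), dLdb.end(), 0.f);
    for (std::size_t i = 0; i < batch; ++i) {
        for (std::size_t k = 0; k < nIn; ++k) {
            float acc = 0.f;
            for (std::size_t j = 0; j < nOut; ++j) {
                acc += dLdz[i * nOut + j] * w[k * nOut + j];               // dLdx = dLdz * w^T
                dLdw[k * nOut + j] += x[i * nIn + k] * dLdz[i * nOut + j]; // dLdw = x^T * dLdz
            }
            dLdx[i * nIn + k] = acc;
        }
        for (std::size_t j = 0; j < nOut; ++j)
            dLdb[j] += dLdz[i * nOut + j];                                 // dLdb = sum over batch
    }
}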

* libnd4j one minor fix of double operation declaration

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j code clean up

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j minor tests fixes

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

* libnd4j fixed build problem, integrated helpers changes

Signed-off-by: Oleg <oleg.semeniv@gmail.com>

Co-authored-by: raver119 <raver119@gmail.com>
2020-03-31 13:03:10 +03:00
activations some structure for ops (#337) 2020-03-23 07:28:54 +03:00
convo Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
pooling Shyrma weights format (#329) 2020-03-20 12:11:27 +03:00
recurrent some structure for ops (#337) 2020-03-23 07:28:54 +03:00
apply_sgd.cpp libnd4j polishing (#273) 2020-03-02 12:49:41 +03:00
batchnorm.cpp libnd4j polishing (#273) 2020-03-02 12:49:41 +03:00
bias_add.cpp some structure for ops (#337) 2020-03-23 07:28:54 +03:00
dot_product_attention.cpp libnd4j polishing (#273) 2020-03-02 12:49:41 +03:00
embedding_lookup.cpp some structure for ops (#337) 2020-03-23 07:28:54 +03:00
fusedBatchNorm.cpp libnd4j polishing (#273) 2020-03-02 12:49:41 +03:00
layer_norm.cpp some structure for ops (#337) 2020-03-23 07:28:54 +03:00
logSoftmax.cpp libnd4j polishing (#273) 2020-03-02 12:49:41 +03:00
lrn.cpp libnd4j polishing (#273) 2020-03-02 12:49:41 +03:00
multi_head_dot_product_attention.cpp libnd4j polishing (#273) 2020-03-02 12:49:41 +03:00
relu_layer.cpp libnd4j polishing (#273) 2020-03-02 12:49:41 +03:00
softmax.cpp libnd4j polishing (#273) 2020-03-02 12:49:41 +03:00
xw_plus_b.cpp xw_plus_b mkldnn implementation (#247) 2020-03-31 13:03:10 +03:00