raver119 320924278d
Legacy API changes (#441)
* initial commit

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* another initial commit

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* another initial commit

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* one more initial commit

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* next step

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* next step

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* next step

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* next step

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Refactored buffer() and shapeInfo() methods usage with NDArray class.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt Graph class methods to use const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt choose op to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt where op shape method to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt lstsq op to use constant empty shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt matrix_diag_part op shape routine to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt determinant ops to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt mean_pairwssqerr_loss ops to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt ops shape methods.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt shape methods for loss ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt log_loss op shape method.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt shape methods for ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt dilation2d ops shape methods.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted deconv2d ops shape methods.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted dynamicRNN op shape method.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted shape methods for ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted shape methods for lstm layer ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* few updates

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* first cuda tweak

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Adopt constant shapes for sconv2d ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt constant shapes for gru ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt constant shapes with shape methods for segment ops and so on.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted constant shapes with unsorted_segment_* ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted constant shapes with gamma op shape method.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted shape methods of reduce_stddev ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted shape methods for reduce_* ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt shape method for squeeze op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt strided_slice shape method.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored concat op shape method to adopt constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted shape method for mirror_pad op.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted split op shape method.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted tile ops shape methods.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Added const cast for mkldnn routines handles.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored logSoftMaxForVector_ routine to conform with proper data and shape pointer casts.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Cosmetic changes to proper usage of constant pointers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored a couple of shape comparators for strides and addBias helpers to properly use data pointers with the inplace option.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored depthToSpace helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored histogram helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored im2col helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored gather and gatherND helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage on percentile helper.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed gather shape with helpers and range buffer usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with space to depth helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage and constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with LUP decomposition.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored onehot_ helper.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored pad and prefix to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored softmax helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed space to batch helpers to use buffers properly.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed stack and split helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with sparse to dense helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with mindistance_ helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with tile helper.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed constant shape usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed constant shape usage with legacy pairwise bool ops.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored a couple of methods to adopt constant shape usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed broadcasting with constant shape.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed const usage with inplace reverse and constant shapes with legacy reduction.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored legacy ops with const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored sort to adopt constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected sort for constant shape usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed constant shape usage with special methods.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored Context to conform with constant shape usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* CUDA broadcasting headers

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* pairwise/indexreduce/random headers

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Refactored native ops to adopt constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* legacy reduce3/scalar headers

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Corrected pullRow signature and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected routines for proper use of constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored tests to use constant shapes properly.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored legacy ops tests to use constant shapes properly.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored buffer usage with NDArray tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed native ops tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed special concat routine.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with test.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed buffer usage with a test.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored TAD.h and tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored calcStrides* routines to use constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed miscellaneous errors with constant shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* NativeOps const changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Corrected definitions for declared functions.

Signed-off-by: shugeo <sgazeos@gmail.com>

* NativeOps const changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* few more const changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Fixed const shapes with shape routines.

Signed-off-by: shugeo <sgazeos@gmail.com>

* few more const changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Fixed shape method for broadcastable case.

Signed-off-by: shugeo <sgazeos@gmail.com>

* few more const changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* xw_plus_b BP shape fn restored

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Fixed signatures with broadcasting.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Repaired backprops shape methods for a set of operations.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored broadcast bool for cuda.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored methods for 3 args with const qualifier.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed a couple of kernel signatures for broadcasting.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed kernels signatures for const buffers and shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored pairwise methods to persistent buffers and shapes usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt const to buffers and shapes with kernels.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopt const to buffers and shapes with scalar kernels.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored indexreduce kernels signatures to use const buffers and shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored pairwise kernels to adopt const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored pairwise bool kernels to adopt const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored random special ops to conform with const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored native ops to conform with const shapes and buffers under cuda platform.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Cosmetic changes only.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed const shapes and buffers error.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected start pos routine.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored methods to conform with const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored helpers to use proper methods instead.

Signed-off-by: shugeo <sgazeos@gmail.com>

* bunch of changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* next bunch of changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* next bunch of changes

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Fixed execScalar declaration.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed execScalar declaration.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected const shape cases with sort and so on.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed const shapes for sort.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored kernel declarations to adopt const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed kernels declarations to adopt const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected kernel declarations to adopt const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed kernels declarations to adopt const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed segment helpers kernels declarations and so on to adopt const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed const shape usage with segment and solve helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed kernel declaration with adjustWeight helper.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed cuda implementations for constant shape helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted const shape usage with kernels.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Adopted top_k kernels to use const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Corrected kernels declarations to adopt const shapes with helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored NDArray definitions to adopt const shapes and buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed const shapes with image suppression helpers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Slight improvement with buffers.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored buffer usage.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored buffer usage with tests.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Fixed const shape usage with definitions.

Signed-off-by: shugeo <sgazeos@gmail.com>

* minor updates on cpu side

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* Refactored const shape usage with ConstantDescriptor and native ops with cuda platform.

Signed-off-by: shugeo <sgazeos@gmail.com>

* Refactored tear and tile kernels to adopt const shapes.

Signed-off-by: shugeo <sgazeos@gmail.com>

* softmax_loop fix

Signed-off-by: raver119 <raver119@gmail.com>

* update missing signature

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* softmax again

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

* few more missing consts

Signed-off-by: raver119 <raver119@gmail.com>

* new methods updated

Signed-off-by: raver119@gmail.com <raver119@gmail.com>

Co-authored-by: shugeo <sgazeos@gmail.com>
2020-05-09 08:06:14 +03:00


/*******************************************************************************
* Copyright (c) 2015-2018 Skymind, Inc.
*
* This program and the accompanying materials are made available under the
* terms of the Apache License, Version 2.0 which is available at
* https://www.apache.org/licenses/LICENSE-2.0.
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*
* SPDX-License-Identifier: Apache-2.0
******************************************************************************/
//
// @author raver119@gmail.com
// @author Yurii Shyrma (iuriish@yahoo.com)
//
#include <ops/declarable/helpers/lrn.h>
#include <graph/Status.h>
#include <helpers/ConstantTadHelper.h>
#include <execution/Threads.h>
namespace sd {
namespace ops {
namespace helpers {
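// The forward pass below computes, for each element j along the last dimension,
// with S_j = sum of x[s]^2 over the window s in [max(0, j - depth), min(j + depth + 1, tadLen)):
//
//     y[j] = x[j] / (bias + alpha * S_j)^beta
//
// S_j is maintained incrementally: each step adds the squared element that entered
// the window and subtracts the squared element that left it, instead of re-summing.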
template <typename T>
static int lrnFunctor_(sd::graph::Context& block, NDArray* input, NDArray* output, int depth, float bias, float alpha, float beta) {

    nd4j_debug("MKL-DNN is not used for lrn!\n", 0);

    const int rank = input->rankOf();

    TadPack inTadPack = sd::ConstantTadHelper::getInstance()->tadForDimensions(input->shapeInfo(), {rank - 1});
    TadPack outTadPack;

    if (shape::haveSameShapeAndStrides(input->shapeInfo(), output->shapeInfo()))
        outTadPack = inTadPack;
    else
        outTadPack = sd::ConstantTadHelper::getInstance()->tadForDimensions(output->shapeInfo(), {rank - 1});

    const Nd4jLong numOfTads = inTadPack.numberOfTads();
    const Nd4jLong tadLen    = input->sizeAt(-1);

    const Nd4jLong* inTadOffsets  = inTadPack.primaryOffsets();
    const Nd4jLong* outTadOffsets = outTadPack.primaryOffsets();

    const Nd4jLong inTadEws  = shape::elementWiseStride(inTadPack.primaryShapeInfo());
    const Nd4jLong outTadEws = shape::elementWiseStride(outTadPack.primaryShapeInfo());

    const T* inBuff = reinterpret_cast<const T*>(input->buffer());
    T* outBuff      = reinterpret_cast<T*>(output->buffer());

    const T tbias  = static_cast<T>(bias);
    const T tbeta  = static_cast<T>(beta);
    const T talpha = static_cast<T>(alpha);

    if (inTadEws == 1 && outTadEws == 1) {

        auto func = PRAGMA_THREADS_FOR {
            for (auto i = start; i < stop; i++) {
                const T* x = inBuff  + inTadOffsets[i];
                T* y       = outBuff + outTadOffsets[i];

                T prev = 0;

                // calculate the squared sum over each j-th element's window [j - depth, j + depth + 1);
                // each squared sum is staged in the corresponding element of y
                for (Nd4jLong j = 0; j < tadLen; ++j) {
                    const uint begin = sd::math::nd4j_max<int>(0, j - depth);
                    const uint last  = depth + j + 1;
                    const uint end   = sd::math::nd4j_min<int>(last, tadLen);

                    if (j == 0) {
                        for (uint s = begin; s < end; ++s)
                            prev = prev + x[s] * x[s];
                        y[j] = prev;
                    }
                    else if (begin == 0 && last <= tadLen)
                        y[j] = prev + x[end - 1] * x[end - 1];
                    else if (begin > 0 && last <= tadLen)
                        y[j] = prev + x[end - 1] * x[end - 1] - x[begin - 1] * x[begin - 1];
                    else if (begin > 0 && last > tadLen)
                        y[j] = prev - x[begin - 1] * x[begin - 1];
                    else
                        y[j] = prev;

                    if (j != 0)
                        prev = y[j];

                    y[j] = x[j] / sd::math::nd4j_pow<T, T, T>(tbias + talpha * prev, tbeta);
                }
            }
        };

        samediff::Threads::parallel_tad(func, 0, numOfTads);
    }
    else {

        auto func = PRAGMA_THREADS_FOR {
            for (auto i = start; i < stop; i++) {
                const T* x = inBuff  + inTadOffsets[i];
                T* y       = outBuff + outTadOffsets[i];

                T prev = 0;

                // same window pass as above, but honoring the element-wise strides
                for (Nd4jLong j = 0; j < tadLen; ++j) {
                    const uint begin = sd::math::nd4j_max<int>(0, j - depth);
                    const uint last  = depth + j + 1;
                    const uint end   = sd::math::nd4j_min<int>(last, tadLen);

                    if (j == 0) {
                        for (uint s = begin; s < end; ++s)
                            prev = prev + x[s * inTadEws] * x[s * inTadEws];
                        y[j * outTadEws] = prev;
                    }
                    else if (begin == 0 && last <= tadLen)
                        y[j * outTadEws] = prev + x[(end - 1) * inTadEws] * x[(end - 1) * inTadEws];
                    else if (begin > 0 && last <= tadLen)
                        y[j * outTadEws] = prev + x[(end - 1) * inTadEws] * x[(end - 1) * inTadEws] - x[(begin - 1) * inTadEws] * x[(begin - 1) * inTadEws];
                    else if (begin > 0 && last > tadLen)
                        y[j * outTadEws] = prev - x[(begin - 1) * inTadEws] * x[(begin - 1) * inTadEws];
                    else
                        y[j * outTadEws] = prev;

                    if (j != 0)
                        prev = y[j * outTadEws];

                    y[j * outTadEws] = x[j * inTadEws] / sd::math::nd4j_pow<T, T, T>(tbias + talpha * prev, tbeta);
                }
            }
        };

        samediff::Threads::parallel_tad(func, 0, numOfTads);
    }

    return Status::OK();
}
BUILD_SINGLE_TEMPLATE(template int lrnFunctor_, (sd::graph::Context& block, NDArray* input, NDArray* output, int depth, float bias, float alpha, float beta), FLOAT_TYPES);

int lrnFunctor(sd::graph::Context& block, NDArray* input, NDArray* output, int depth, double bias, double alpha, double beta) {
    BUILD_SINGLE_SELECTOR(input->dataType(), return lrnFunctor_, (block, input, output, depth, bias, alpha, beta), FLOAT_TYPES);
}
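// A minimal, illustrative sketch of the same sliding-window update for one
// contiguous row (ews == 1). It is not called anywhere in the library, and the
// name lrnRowReference is an assumption made for this sketch only.
static inline void lrnRowReference(const float* x, float* y, Nd4jLong len, int depth, float bias, float alpha, float beta) {
    float windowSum = 0.f;  // running S_j
    for (Nd4jLong j = 0; j < len; ++j) {
        if (j == 0) {
            // initial window [0, min(depth + 1, len)) is summed in full
            const Nd4jLong end = sd::math::nd4j_min<Nd4jLong>(depth + 1, len);
            for (Nd4jLong s = 0; s < end; ++s)
                windowSum += x[s] * x[s];
        }
        else {
            if (j + depth < len)      // window gained x[j + depth] on the right
                windowSum += x[j + depth] * x[j + depth];
            if (j - depth - 1 >= 0)   // window dropped x[j - depth - 1] on the left
                windowSum -= x[j - depth - 1] * x[j - depth - 1];
        }
        y[j] = x[j] / sd::math::nd4j_pow<float, float, float>(bias + alpha * windowSum, beta);
    }
}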
//////////////////////////////////////////////////////////////////////////
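// The backward pass reuses the same incremental window sum S_j. With
// factor[s] = (bias + alpha * S_s)^(-beta - 1), the first loop stages S_j in y
// and the second loop rewrites y in place as
//
//     y[j] = (bias + alpha * S_j) * factor[j] - 2 * x[j] * alpha * beta * P_j,
//
// where P_j = sum of x[s] * factor[s] over the window of j (held in prev);
// the elementwise product with gradO at the very end then yields gradI.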
template <typename X, typename Y>
static void lrnBP_(const NDArray& input, const NDArray& gradO, NDArray& gradI, const int depth, const float bias, const float alpha, const float beta) {

    const int rank = input.rankOf();

    TadPack inTadPack = sd::ConstantTadHelper::getInstance()->tadForDimensions(input.shapeInfo(), {rank - 1});
    TadPack gradITadPack;

    if (shape::haveSameShapeAndStrides(input.shapeInfo(), gradI.shapeInfo()))
        gradITadPack = inTadPack;
    else
        gradITadPack = sd::ConstantTadHelper::getInstance()->tadForDimensions(gradI.shapeInfo(), {rank - 1});

    const Nd4jLong numOfTads = inTadPack.numberOfTads();
    const Nd4jLong tadLen    = input.sizeAt(-1);

    const Nd4jLong* inTadOffsets    = inTadPack.primaryOffsets();
    const Nd4jLong* gradITadOffsets = gradITadPack.primaryOffsets();

    const Nd4jLong inTadEws    = shape::elementWiseStride(inTadPack.primaryShapeInfo());
    const Nd4jLong gradITadEws = shape::elementWiseStride(gradITadPack.primaryShapeInfo());

    const X* inBuff = reinterpret_cast<const X*>(input.buffer());
    Y* gradIBuff    = reinterpret_cast<Y*>(gradI.buffer());

    const Y tbias  = static_cast<Y>(bias);
    const Y tbeta  = static_cast<Y>(beta);
    const Y talpha = static_cast<Y>(alpha);
    const Y coeff  = talpha * tbeta;

    if (inTadEws == 1 && gradITadEws == 1) {

        auto func = PRAGMA_THREADS_FOR {
            for (auto i = start; i < stop; i++) {
                const X* x = inBuff    + inTadOffsets[i];
                Y* y       = gradIBuff + gradITadOffsets[i];

                // first loop calculates the squared sum over each j-th element's window [j - depth, j + depth + 1);
                // each squared sum is staged in the corresponding element of y
                for (Nd4jLong j = 0; j < tadLen; ++j) {
                    const uint begin = sd::math::nd4j_max<int>(0, j - depth);
                    const uint last  = depth + j + 1;
                    const uint end   = sd::math::nd4j_min<int>(last, tadLen);

                    if (j == 0) {
                        y[0] = 0;
                        for (uint s = begin; s < end; ++s)
                            y[0] = y[0] + x[s] * x[s];
                    }
                    else if (begin == 0 && last <= tadLen)
                        y[j] = y[j - 1] + x[end - 1] * x[end - 1];
                    else if (begin > 0 && last <= tadLen)
                        y[j] = y[j - 1] + x[end - 1] * x[end - 1] - x[begin - 1] * x[begin - 1];
                    else if (begin > 0 && last > tadLen)
                        y[j] = y[j - 1] - x[begin - 1] * x[begin - 1];
                    else
                        y[j] = y[j - 1];
                }

                Y* factor = new Y[tadLen];
                Y prev = 0;

                // second loop calculates derivatives using the sums gained in the first loop
                for (Nd4jLong j = 0; j < tadLen; ++j) {
                    const uint begin = sd::math::nd4j_max<int>(0, j - depth);
                    const uint last  = depth + j + 1;
                    const uint end   = sd::math::nd4j_min<int>(last, tadLen);

                    Y init = tbias + talpha * y[j];

                    if (j == 0) {
                        for (uint s = begin; s < end; ++s) {
                            factor[s] = sd::math::nd4j_pow<Y, Y, Y>(tbias + talpha * y[s], -tbeta - 1);
                            prev = prev + x[s] * factor[s];
                        }
                        y[0] = prev;
                    }
                    else if (begin == 0 && last <= tadLen) {
                        factor[end - 1] = sd::math::nd4j_pow<Y, Y, Y>(tbias + talpha * y[end - 1], -tbeta - 1);
                        y[j] = prev + x[end - 1] * factor[end - 1];
                    }
                    else if (begin > 0 && last <= tadLen) {
                        factor[end - 1] = sd::math::nd4j_pow<Y, Y, Y>(tbias + talpha * y[end - 1], -tbeta - 1);
                        y[j] = prev + x[end - 1] * factor[end - 1] - x[begin - 1] * factor[begin - 1];
                    }
                    else if (begin > 0 && last > tadLen)
                        y[j] = prev - x[begin - 1] * factor[begin - 1];
                    else
                        y[j] = prev;

                    if (j != 0)
                        prev = y[j];

                    y[j] = factor[j] * init - 2 * x[j] * coeff * prev;
                }

                delete[] factor;
            }
        };

        samediff::Threads::parallel_tad(func, 0, numOfTads);
    }
    else {

        auto func = PRAGMA_THREADS_FOR {
            for (auto i = start; i < stop; i++) {
                const X* x = inBuff    + inTadOffsets[i];
                Y* y       = gradIBuff + gradITadOffsets[i];

                // same two passes as above, but honoring the element-wise strides
                for (Nd4jLong j = 0; j < tadLen; ++j) {
                    const uint begin = sd::math::nd4j_max<int>(0, j - depth);
                    const uint last  = depth + j + 1;
                    const uint end   = sd::math::nd4j_min<int>(last, tadLen);

                    if (j == 0) {
                        y[0] = 0;
                        for (uint s = begin; s < end; ++s)
                            y[0] = y[0] + x[s * inTadEws] * x[s * inTadEws];
                    }
                    else if (begin == 0 && last <= tadLen)
                        y[j * gradITadEws] = y[(j - 1) * gradITadEws] + x[(end - 1) * inTadEws] * x[(end - 1) * inTadEws];
                    else if (begin > 0 && last <= tadLen)
                        y[j * gradITadEws] = y[(j - 1) * gradITadEws] + x[(end - 1) * inTadEws] * x[(end - 1) * inTadEws] - x[(begin - 1) * inTadEws] * x[(begin - 1) * inTadEws];
                    else if (begin > 0 && last > tadLen)
                        y[j * gradITadEws] = y[(j - 1) * gradITadEws] - x[(begin - 1) * inTadEws] * x[(begin - 1) * inTadEws];
                    else
                        y[j * gradITadEws] = y[(j - 1) * gradITadEws];
                }

                Y* factor = new Y[tadLen];
                Y prev = 0;

                for (Nd4jLong j = 0; j < tadLen; ++j) {
                    const uint begin = sd::math::nd4j_max<int>(0, j - depth);
                    const uint last  = depth + j + 1;
                    const uint end   = sd::math::nd4j_min<int>(last, tadLen);

                    Y init = tbias + talpha * y[j * gradITadEws];

                    if (j == 0) {
                        for (uint s = begin; s < end; ++s) {
                            factor[s] = sd::math::nd4j_pow<Y, Y, Y>(tbias + talpha * y[s * gradITadEws], -tbeta - 1);
                            prev = prev + x[s * inTadEws] * factor[s];
                        }
                        y[0] = prev;
                    }
                    else if (begin == 0 && last <= tadLen) {
                        factor[end - 1] = sd::math::nd4j_pow<Y, Y, Y>(tbias + talpha * y[(end - 1) * gradITadEws], -tbeta - 1);
                        y[j * gradITadEws] = prev + x[(end - 1) * inTadEws] * factor[end - 1];
                    }
                    else if (begin > 0 && last <= tadLen) {
                        factor[end - 1] = sd::math::nd4j_pow<Y, Y, Y>(tbias + talpha * y[(end - 1) * gradITadEws], -tbeta - 1);
                        y[j * gradITadEws] = prev + x[(end - 1) * inTadEws] * factor[end - 1] - x[(begin - 1) * inTadEws] * factor[begin - 1];
                    }
                    else if (begin > 0 && last > tadLen)
                        y[j * gradITadEws] = prev - x[(begin - 1) * inTadEws] * factor[begin - 1];
                    else
                        y[j * gradITadEws] = prev;

                    if (j != 0)
                        prev = y[j * gradITadEws];

                    y[j * gradITadEws] = factor[j] * init - 2 * x[j * inTadEws] * coeff * prev;
                }

                delete[] factor;
            }
        };

        samediff::Threads::parallel_tad(func, 0, numOfTads);
    }

    gradI *= gradO;
}
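// BUILD_DOUBLE_SELECTOR instantiates lrnBP_<X, Y> for each (input, gradO)
// data-type pair taken from FLOAT_TYPES and dispatches on the runtime dtypes.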
void lrnBP(sd::graph::Context& block, const NDArray& input, const NDArray& gradO, NDArray& gradI, const int depth, const float bias, const float alpha, const float beta) {
    BUILD_DOUBLE_SELECTOR(input.dataType(), gradO.dataType(), lrnBP_, (input, gradO, gradI, depth, bias, alpha, beta), FLOAT_TYPES, FLOAT_TYPES);
}
}
}
}