* initial commit (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* another initial commit (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* another initial commit (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* one more initial commit (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* next step (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* next step (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* next step (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* next step (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* Refactored buffer() and shapeInfo() methods usage with NDArray class. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt Graph class methods to use const shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt choose op to use constant shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt where op shape method to use constant shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt lstsq op to use constant empty shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt matrix_diag_part op shape routine to use constant shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt determinant ops to use constant shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt mean_pairwssqerr_loss ops to use constant shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt ops shape methods. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt shape methods for loss ops. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt log_loss op shape method. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt shape methods for ops. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt dilation2d ops shape methods. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopted deconv2d ops shape methods. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopted dynamicRNN op shape method. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopted shape methods for ops. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopted shape methods for lstm layer ops. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* few updates (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* first cuda tweak (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* Adopt constant shapes for sconv2d ops. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt constant shapes for gru ops. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt constant shapes with shape methods for segment ops and so on. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopted constant shapes with unsorted_segment_* ops. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopted constant shapes with gamma op shape method. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopted shape methods of reduce_stddev ops. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopted shape methods for reduce_* ops. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt shape method for squeeze op. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt strided_slice shape method. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored concat op shape method to adopt constant shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopted shape method for mirror_pad op. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopted split op shape method. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopted tile ops shape methods. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Added const cast for mkldnn routines handles. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored logSoftMaxForVector_ routine to conform with proper data and shape pointer casts. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Cosmetic changes to proper usage of constant pointers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored a couple of shape comparators for strides and addBias helpers to properly use data pointers with the inplace option. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored depthToSpace helpers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored histogram helpers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored im2col helpers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored gather and gatherND helpers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed buffer usage on percentile helper. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed gather shape with helpers and range buffer usage. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed buffer usage with space to depth helpers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed buffer usage and constant shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed buffer usage with LUP decomposition. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored onehot_ helper. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored pad and prefix to use constant shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored softmax helpers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed space to batch helpers to use buffers properly. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed stack and split helpers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed buffer usage with sparse to dense helpers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed buffer usage with mindistance_ helpers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed buffer usage with tile helper. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed constant shape usage. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed constant shape usage with legacy pairwise bool ops. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored a couple of methods to adopt constant shape usage. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed broadcasting with constant shape. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed const usage with inplace reverse and constant shapes with legacy reduction. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored legacy ops with const shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored sort to adopt constant shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Corrected sort for constant shape usage. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed constant shape usage with special methods. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored Context to conform with constant shape usage. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* CUDA broadcasting headers (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* pairwise/indexreduce/random headers (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* Refactored native ops to adopt constant shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* legacy reduce3/scalar headers (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* Corrected pullRow signature and tests. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Corrected routines to proper use of constant shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored tests to use constant shapes properly. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored legacy ops tests to use constant shapes properly. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored buffer usage with NDArray tests. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed native ops tests. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed special concat routine. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed buffer usage with test. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed buffer usage with a test. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored TAD.h and tests. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored calcStrides* routines to use constant shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed miscellaneous errors with constant shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* NativeOps const changes (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* Corrected definitions for declared functions. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* NativeOps const changes (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* few more const changes (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* Fixed const shapes with shape routines. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* few more const changes (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* Fixed shape method for broadcastable case. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* few more const changes (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* xw_plus_b BP shape fn restored (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* Fixed signatures with broadcasting. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Repaired backprops shape methods for a set of operations. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored broadcast bool for cuda. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored methods for 3 args with const qualifier. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed a couple of kernel signatures for broadcasting. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed kernels signatures for const buffers and shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored pairwise methods to persistent buffers and shapes usage. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt const to buffers and shapes with kernels. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopt const to buffers and shapes with scalar kernels. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored indexreduce kernels signatures to use const buffers and shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored pairwise kernels to adopt const shapes and buffers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored pairwise bool kernels to adopt const shapes and buffers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored random special ops to conform with const shapes and buffers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored native ops to conform with const shapes and buffers under cuda platform. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Cosmetic changes only. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed const shapes and buffers error. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Corrected start pos routine. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored methods to conform with const shapes and buffers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored helpers to use proper methods instead. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* bunch of changes (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* next bunch of changes (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* next bunch of changes (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* Fixed execScalar declaration. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed execScalar declaration. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Corrected const shape cases with sort and so on. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed const shapes for sort. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored kernel declarations to adopt const shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed kernels declarations to adopt const shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Corrected kernel declarations to adopt const shapes and buffers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed kernels declarations to adopt const shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed segment helpers kernels declarations and so on to adopt const shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed const shape usage with segment and solve helpers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed kernel declaration with adjustWeight helper. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed cuda implementations for constant shape helpers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopted const shape usage with kernels. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Adopted top_k kernels to use const shapes and buffers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Corrected kernels declarations to adopt const shapes with helpers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored NDArray definitions to adopt const shapes and buffers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed const shapes with image suppression helpers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Slight improvement with buffers. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored buffer usage. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored buffer usage with tests. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Fixed const shape usage with definitions. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* minor updates on cpu side (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* Refactored const shape usage with ConstantDescriptor and native ops with cuda platform. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* Refactored tear and tile kernels to adopt const shapes. (Signed-off-by: shugeo <sgazeos@gmail.com>)
* softmax_loop fix (Signed-off-by: raver119 <raver119@gmail.com>)
* update missing signature (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* softmax again (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)
* few more missing consts (Signed-off-by: raver119 <raver119@gmail.com>)
* new methods updated (Signed-off-by: raver119@gmail.com <raver119@gmail.com>)

Co-authored-by: shugeo <sgazeos@gmail.com>
/*******************************************************************************
 * Copyright (c) 2015-2018 Skymind, Inc.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Apache License, Version 2.0 which is available at
 * https://www.apache.org/licenses/LICENSE-2.0.
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 ******************************************************************************/

//
// @author raver119@gmail.com
//
#include <ops/declarable/helpers/roll.h>
#include <helpers/ConstantTadHelper.h>
#include <helpers/PointersManager.h>

namespace sd {
namespace ops {
namespace helpers {
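// Stage 1 of the linear (flattened) roll: swaps the first `actualShift`
// elements with the last `actualShift` elements. The fast branch is taken when
// both arrays have a positive element-wise stride and the same ordering;
// otherwise element offsets are resolved through the shape info.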
template <typename T>
static void _CUDA_D rollKernelLinearStage1Dev(const void *vx, const Nd4jLong *xShapeInfo, void *vz, const Nd4jLong *zShapeInfo, Nd4jLong fullLength, int actualShift) {
    auto x = reinterpret_cast<const T*>(vx);
    auto z = reinterpret_cast<T*>(vz);

    auto xEws = shape::elementWiseStride(xShapeInfo);
    auto zEws = shape::elementWiseStride(zShapeInfo);

    auto xOrder = shape::order(xShapeInfo);
    auto zOrder = shape::order(zShapeInfo);

    auto tid = threadIdx.x + blockIdx.x * blockDim.x;

    if (xEws > 0 && zEws > 0 && xOrder == zOrder) {
        for (int i = tid; i < actualShift; i += blockDim.x * gridDim.x) {
            int sourceIndex = fullLength - actualShift + i;

            auto eA = x[sourceIndex * xEws];
            auto eB = x[i * xEws];

            z[i * zEws] = eA;
            z[sourceIndex * zEws] = eB;
        }
    } else {
        for (int i = tid; i < actualShift; i += blockDim.x * gridDim.x) {
            int sourceIndex = fullLength - actualShift + i;

            auto xOffsetA = shape::getIndexOffset(i, xShapeInfo);
            auto xOffsetB = shape::getIndexOffset(sourceIndex, xShapeInfo);

            auto zOffsetA = shape::getIndexOffset(i, zShapeInfo);
            auto zOffsetB = shape::getIndexOffset(sourceIndex, zShapeInfo);

            auto eA = x[xOffsetA];
            auto eB = x[xOffsetB];

            z[zOffsetA] = eB;
            z[zOffsetB] = eA;
        }
    }
}
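// Global entry point for stage 1; simply forwards to the device-side routine above.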
template <typename T>
static void _CUDA_G rollKernelLinearStage1(const void *vx, const Nd4jLong *xShapeInfo, void *vz, const Nd4jLong *zShapeInfo, Nd4jLong fullLength, int actualShift) {
    rollKernelLinearStage1Dev<T>(vx, xShapeInfo, vz, zShapeInfo, fullLength, actualShift);
}
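// Stage 2 of the linear roll: after stage 1 has moved the tail to the front,
// bubble each remaining `actualShift`-sized block one slot towards the tail,
// swapping block by block, `shiftCount - 1` more times.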
template <typename T>
static void _CUDA_G rollKernelLinearStage2(const void *vx, const Nd4jLong *xShapeInfo, void *vz, const Nd4jLong *zShapeInfo, Nd4jLong fullLength, int actualShift, int shiftCount) {
    auto x = reinterpret_cast<const T*>(vx);
    auto z = reinterpret_cast<T*>(vz);

    auto xEws = shape::elementWiseStride(xShapeInfo);
    auto zEws = shape::elementWiseStride(zShapeInfo);

    auto xOrder = shape::order(xShapeInfo);
    auto zOrder = shape::order(zShapeInfo);

    auto tid = threadIdx.x + blockIdx.x * blockDim.x;

    if (xEws > 0 && zEws > 0 && xOrder == zOrder) {
        for (int count = 1; count < shiftCount; ++count) {
            for (int i = tid; i < actualShift; i += blockDim.x * gridDim.x) {
                int destinationIndex = fullLength - (count + 1) * actualShift + i;
                int sourceIndex = fullLength - count * actualShift + i;

                auto eA = x[sourceIndex * xEws];
                auto eB = x[destinationIndex * xEws];

                z[destinationIndex * zEws] = eA;
                z[sourceIndex * zEws] = eB;
            }

            __syncthreads();
        }
    } else {
        for (int count = 1; count < shiftCount; ++count) {
            for (int i = tid; i < actualShift; i += blockDim.x * gridDim.x) {
                int destinationIndex = fullLength - (count + 1) * actualShift + i;
                int sourceIndex = fullLength - count * actualShift + i;

                auto xOffsetA = shape::getIndexOffset(destinationIndex, xShapeInfo);
                auto xOffsetB = shape::getIndexOffset(sourceIndex, xShapeInfo);

                auto zOffsetA = shape::getIndexOffset(destinationIndex, zShapeInfo);
                auto zOffsetB = shape::getIndexOffset(sourceIndex, zShapeInfo);

                auto eA = x[xOffsetA];
                auto eB = x[xOffsetB];

                z[zOffsetA] = eB;
                z[zOffsetB] = eA;
            }

            __syncthreads();
        }
    }
}
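// Stage 3 of the linear roll: fixes up the `fullLength % actualShift` leftover
// elements that the block-sized swaps of stages 1 and 2 do not cover.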
template <typename T>
static void _CUDA_G rollKernelLinearStage3(const void *vx, const Nd4jLong *xShapeInfo, void *vz, const Nd4jLong *zShapeInfo, Nd4jLong fullLength, int actualShift, int remainShift) {
    auto x = reinterpret_cast<const T*>(vx);
    auto z = reinterpret_cast<T*>(vz);

    auto xEws = shape::elementWiseStride(xShapeInfo);
    auto zEws = shape::elementWiseStride(zShapeInfo);

    auto xOrder = shape::order(xShapeInfo);
    auto zOrder = shape::order(zShapeInfo);

    auto tid = threadIdx.x + blockIdx.x * blockDim.x;

    if (xEws > 0 && zEws > 0 && xOrder == zOrder) {
        for (int i = tid; i < actualShift; i += blockDim.x * gridDim.x) {
            int remainIdx = i + actualShift;
            int sourceIndex = remainIdx + remainShift;

            auto eA = x[sourceIndex * xEws];
            auto eB = x[remainIdx * xEws];

            z[remainIdx * zEws] = eA;
            z[sourceIndex * zEws] = eB;
        }
    } else {
        for (int i = tid; i < actualShift; i += blockDim.x * gridDim.x) {
            int remainIdx = i + actualShift;
            int sourceIndex = remainIdx + remainShift;

            auto xOffsetA = shape::getIndexOffset(remainIdx, xShapeInfo);
            auto xOffsetB = shape::getIndexOffset(sourceIndex, xShapeInfo);

            auto zOffsetA = shape::getIndexOffset(remainIdx, zShapeInfo);
            auto zOffsetB = shape::getIndexOffset(sourceIndex, zShapeInfo);

            auto eA = x[xOffsetA];
            auto eB = x[xOffsetB];

            z[zOffsetA] = eB;
            z[zOffsetB] = eA;
        }
    }
}
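// Device-side helper that swaps two TADs element by element. Indexing uses
// threadIdx.x only, so one thread block handles one pair of TADs; both TADs
// are assumed to share the same shape info.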
template <typename T>
static void _CUDA_D swapTadsKernel(void *vx, void *vz, const Nd4jLong *zShapeInfo, Nd4jLong tadLength) {
    auto x = reinterpret_cast<T*>(vx);
    auto z = reinterpret_cast<T*>(vz);

    auto zEws = shape::elementWiseStride(zShapeInfo);

    if (zEws > 0) {
        for (int e = threadIdx.x; e < tadLength; e += blockDim.x) {
            auto eA = x[e * zEws];
            auto eB = z[e * zEws];

            x[e * zEws] = eB;
            z[e * zEws] = eA;
        }
    } else {
        for (int e = threadIdx.x; e < tadLength; e += blockDim.x) {
            auto zOffset = shape::getIndexOffset(e, zShapeInfo);

            auto eA = x[zOffset];
            auto eB = z[zOffset];

            x[zOffset] = eB;
            z[zOffset] = eA;
        }
    }
}
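// Stage 1 of the roll along an arbitrary (non-last) dimension: for every index
// `e` in [theShift, sizeAt - theShift) of the current slice, swaps the TAD at
// `e - theShift` with the TAD at `e`. The roll is performed in place on `z`;
// at the call site the x and z arguments alias the same output buffer and TAD pack.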
template <typename T>
static void _CUDA_G rollKernelFullAnyDimensionStage1(const void *vx, const Nd4jLong *xTadShapeInfo, const Nd4jLong *xTadOffsets, void *vz, const Nd4jLong *zTadShapeInfo, const Nd4jLong *zTadOffsets, int numTads, Nd4jLong tadLength, int dim, Nd4jLong sizeAt, int theShift) {
    auto z = reinterpret_cast<T *>(vz);

    for (int e = blockIdx.x + theShift; e < sizeAt - theShift; e += gridDim.x) {
        int sourceIndex = dim * sizeAt + e - theShift;
        int targetIndex = dim * sizeAt + e;

        swapTadsKernel<T>(z + xTadOffsets[sourceIndex], z + xTadOffsets[targetIndex], zTadShapeInfo, tadLength);
    }
}
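// Stage 2 of the same roll: swaps the first `theShift` TADs of the slice with
// its last `theShift` TADs, completing the rotation.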
template <typename T>
static void _CUDA_G rollKernelFullAnyDimensionStage2(void *vx, const Nd4jLong *xTadShapeInfo, const Nd4jLong *xTadOffsets, void *vz, const Nd4jLong *zTadShapeInfo, const Nd4jLong *zTadOffsets, int numTads, Nd4jLong tadLength, int dim, Nd4jLong sizeAt, int theShift) {
    auto z = reinterpret_cast<T *>(vz);

    for (int e = blockIdx.x; e < theShift; e += gridDim.x) {
        int sourceIndex = dim * sizeAt + sizeAt - theShift + e;
        int targetIndex = dim * sizeAt + e;

        swapTadsKernel<T>(z + zTadOffsets[sourceIndex], z + zTadOffsets[targetIndex], zTadShapeInfo, tadLength);
    }
}
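// Rolls `input` along each of the requested axes in turn. The last dimension is
// handled by rolling every 1-D tensor along it with rollFunctorLinear; any other
// axis is handled by swapping whole TADs built over the trailing dimensions,
// using the two stages above.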
template <typename T>
static void rollFunctorFull_(NDArray* input, NDArray* output, std::vector<int> const& shifts, std::vector<int> const& axes, bool inplace) {
    if (!inplace)
        output->assign(input);

    for (size_t i = 0; i < axes.size(); i++) {
        int axe = axes[i];
        if (axe == input->rankOf() - 1) { // last dimension
            ResultSet listOfTensors = output->allTensorsAlongDimension({axe});
            ResultSet listOfOutTensors = output->allTensorsAlongDimension({axe});
            int fullLen = listOfTensors.size();
            int theShift = shifts[i];
            // if (theShift > 0) {
            //     theShift %= fullLen;
            // }
            // else {
            //     theShift -= fullLen * (theShift / fullLen - 1);
            // }
            for (int k = 0; k < fullLen; k++) {
                rollFunctorLinear(output->getContext(), listOfTensors.at(k), listOfOutTensors.at(k), theShift, true);
            }
        } else {
            std::vector<int> dims(input->rankOf() - axe - 1);
            for (size_t d = 0; d < dims.size(); ++d)
                dims[d] = axe + 1 + d;

            auto packZ = ConstantTadHelper::getInstance()->tadForDimensions(output->shapeInfo(), dims);

            int numTads = packZ.numberOfTads();
            int sizeAt = input->sizeAt(axe);
            auto tadLength = shape::length(packZ.primaryShapeInfo());

            int theShift = shifts[i];

            // if (theShift > 0)
            //     theShift %= sizeAt;
            // else
            //     theShift -= sizeAt * (theShift / sizeAt - 1);

            if (theShift) {
                for (int dim = 0; dim < numTads / sizeAt; ++dim) {
                    rollKernelFullAnyDimensionStage1<T><<<1, 256, 1024, *(output->getContext()->getCudaStream())>>>(output->specialBuffer(), packZ.platformShapeInfo(), packZ.platformOffsets(), output->specialBuffer(), packZ.platformShapeInfo(), packZ.platformOffsets(), numTads, tadLength, dim, sizeAt, theShift);

                    rollKernelFullAnyDimensionStage2<T><<<1, 256, 1024, *(output->getContext()->getCudaStream())>>>(output->specialBuffer(), packZ.platformShapeInfo(), packZ.platformOffsets(), output->specialBuffer(), packZ.platformShapeInfo(), packZ.platformOffsets(), numTads, tadLength, dim, sizeAt, theShift);
                }
            }
        }
    }
}
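// Rolls the array as a flat buffer by `shift` positions using the three
// block-swap stages above. The shift is first normalized into [0, fullLen):
// e.g. a shift of -2 over 6 elements becomes a shift of 4.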
template <typename T>
static void rollFunctorLinear_(NDArray* input, NDArray* output, int shift, bool inplace) {
    if (!inplace)
        output->assign(input);

    auto fullLen = input->lengthOf();
    int actualShift = shift;
    if (actualShift < 0) {
        // map a negative shift to its positive equivalent, e.g. -2 over 6 elements -> 4
        actualShift -= fullLen * (actualShift / fullLen - 1);
    }
    else
        actualShift %= fullLen;

    if (actualShift) {
        int shiftCount = fullLen / actualShift - 1;
        int remainShift = fullLen % actualShift;

        // stage 1) swap the last actualShift elements with the first ones.
        rollKernelLinearStage1<T><<<1, 1, 1024, *(output->getContext()->getCudaStream())>>>(output->specialBuffer(), output->specialShapeInfo(), output->specialBuffer(), output->specialShapeInfo(), fullLen, actualShift);

        // stage 2) swap the shifted actualShift-sized block with the rest, shiftCount - 1 times.
        rollKernelLinearStage2<T><<<1, 1, 1024, *(output->getContext()->getCudaStream())>>>(output->specialBuffer(), output->specialShapeInfo(), output->specialBuffer(), output->specialShapeInfo(), fullLen, actualShift, shiftCount);

        // FIXME: no parallelism here :(
        // stage 3) swap the remainder of the items.
        if (remainShift && shiftCount)
            rollKernelLinearStage3<T><<<1, 1, 1024, *(output->getContext()->getCudaStream())>>>(output->specialBuffer(), output->specialShapeInfo(), output->specialBuffer(), output->specialShapeInfo(), fullLen, actualShift, remainShift);
    }
}
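// Host-side entry point for the multi-axis roll: dispatches on the input data
// type via BUILD_SINGLE_SELECTOR and keeps the device buffer bookkeeping
// (syncToDevice / tickWriteDevice) in one place.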
void rollFunctorFull(sd::LaunchContext * context, NDArray* input, NDArray* output, std::vector<int> const& shifts, std::vector<int> const& axes, bool inplace) {
    input->syncToDevice();

    BUILD_SINGLE_SELECTOR(input->dataType(), rollFunctorFull_, (input, output, shifts, axes, inplace), LIBND4J_TYPES);

    output->tickWriteDevice();
}
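// Host-side entry point for the flat roll; same dispatch pattern as above.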
void rollFunctorLinear(sd::LaunchContext * context, NDArray* input, NDArray* output, int shift, bool inplace) {
    input->syncToDevice();

    BUILD_SINGLE_SELECTOR(input->dataType(), rollFunctorLinear_, (input, output, shift, inplace), LIBND4J_TYPES);

    output->tickWriteDevice();
}
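// A minimal usage sketch (hypothetical values, assuming a valid LaunchContext
// `ctx`): rolling a rank-1 array {1, 2, 3, 4, 5, 6} by a shift of 2 yields
// {5, 6, 1, 2, 3, 4}, matching the numpy.roll convention:
//
//     auto arr = NDArrayFactory::create<float>('c', {6}, {1.f, 2.f, 3.f, 4.f, 5.f, 6.f});
//     rollFunctorLinear(ctx, &arr, &arr, 2, true);   // in-place roll by +2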
BUILD_SINGLE_TEMPLATE(template void rollFunctorLinear_, (NDArray* input, NDArray* output, int shift, bool inplace), LIBND4J_TYPES);
BUILD_SINGLE_TEMPLATE(template void rollFunctorFull_, (NDArray* input, NDArray* output, std::vector<int> const& shifts, std::vector<int> const& axes, bool inplace), LIBND4J_TYPES);

}
}
}