* initial commit Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* another initial commit Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* another initial commit Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* one more initial commit Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* next step Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* next step Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* next step Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* next step Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Refactored buffer() and shapeInfo() method usage in the NDArray class. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt Graph class methods to use const shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt choose op to use constant shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt where op shape method to use constant shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt lstsq op to use constant empty shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt matrix_diag_part op shape routine to use constant shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt determinant ops to use constant shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt mean_pairwssqerr_loss ops to use constant shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt ops shape methods. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt shape methods for loss ops. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt log_loss op shape method. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt shape methods for ops. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt dilation2d ops shape methods. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted deconv2d ops shape methods. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted dynamicRNN op shape method. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted shape methods for ops. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted shape methods for lstm layer ops. Signed-off-by: shugeo <sgazeos@gmail.com>
* few updates Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* first cuda tweak Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Adopt constant shapes for sconv2d ops. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt constant shapes for gru ops. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt constant shapes with shape methods for segment ops and so on. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted constant shapes with unsorted_segment_* ops. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted constant shapes with gamma op shape method. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted shape methods of reduce_stddev ops. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted shape methods for reduce_* ops. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt shape method for squeeze op. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt strided_slice shape method. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored concat op shape method to adopt constant shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted shape method for mirror_pad op. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted split op shape method. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted tile ops shape methods. Signed-off-by: shugeo <sgazeos@gmail.com>
* Added const cast for mkldnn routine handles. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored logSoftMaxForVector_ routine to conform with proper data and shape pointer casts. Signed-off-by: shugeo <sgazeos@gmail.com>
* Cosmetic changes for proper usage of constant pointers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored a couple of shape comparators for strides and addBias helpers to properly use data pointers with the inplace option. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored depthToSpace helpers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored histogram helpers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored im2col helpers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored gather and gatherND helpers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage in the percentile helper. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed gather shape with helpers and range buffer usage. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with space to depth helpers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage and constant shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with LUP decomposition. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored onehot_ helper. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored pad and prefix to use constant shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored softmax helpers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed space to batch helpers to use buffers properly. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed stack and split helpers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with sparse to dense helpers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with mindistance_ helpers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with the tile helper. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed constant shape usage. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed constant shape usage with legacy pairwise bool ops. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored a couple of methods to adopt constant shape usage. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed broadcasting with constant shape. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed const usage with inplace reverse and constant shapes with legacy reduction. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored legacy ops with const shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored sort to adopt constant shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected sort for constant shape usage. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed constant shape usage with special methods. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored Context to conform with constant shape usage. Signed-off-by: shugeo <sgazeos@gmail.com>
* CUDA broadcasting headers Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* pairwise/indexreduce/random headers Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Refactored native ops to adopt constant shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* legacy reduce3/scalar headers Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Corrected pullRow signature and tests. Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected routines for proper use of constant shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored tests to use constant shapes properly. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored legacy ops tests to use constant shapes properly. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored buffer usage with NDArray tests. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed native ops tests. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed special concat routine. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with test. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed buffer usage with a test. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored TAD.h and tests. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored calcStrides* routines to use constant shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed miscellaneous errors with constant shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* NativeOps const changes Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Corrected definitions for declared functions. Signed-off-by: shugeo <sgazeos@gmail.com>
* NativeOps const changes Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* few more const changes Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Fixed const shapes with shape routines. Signed-off-by: shugeo <sgazeos@gmail.com>
* few more const changes Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Fixed shape method for broadcastable case. Signed-off-by: shugeo <sgazeos@gmail.com>
* few more const changes Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* xw_plus_b BP shape fn restored Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Fixed signatures with broadcasting. Signed-off-by: shugeo <sgazeos@gmail.com>
* Repaired backprop shape methods for a set of operations. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored broadcast bool for cuda. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored methods for 3 args with const qualifier. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed a couple of kernel signatures for broadcasting. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed kernel signatures for const buffers and shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored pairwise methods to persistent buffers and shapes usage. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt const to buffers and shapes with kernels. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopt const to buffers and shapes with scalar kernels. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored indexreduce kernel signatures to use const buffers and shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored pairwise kernels to adopt const shapes and buffers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored pairwise bool kernels to adopt const shapes and buffers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored random special ops to conform with const shapes and buffers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored native ops to conform with const shapes and buffers under the cuda platform. Signed-off-by: shugeo <sgazeos@gmail.com>
* Cosmetic changes only. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed const shapes and buffers error. Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected start pos routine. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored methods to conform with const shapes and buffers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored helpers to use proper methods instead. Signed-off-by: shugeo <sgazeos@gmail.com>
* bunch of changes Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* next bunch of changes Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* next bunch of changes Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Fixed execScalar declaration. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed execScalar declaration. Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected const shape cases with sort and so on. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed const shapes for sort. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored kernel declarations to adopt const shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed kernel declarations to adopt const shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected kernel declarations to adopt const shapes and buffers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed kernel declarations to adopt const shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed segment helper kernel declarations and so on to adopt const shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed const shape usage with segment and solve helpers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed kernel declaration with adjustWeight helper. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed cuda implementations for constant shape helpers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted const shape usage with kernels. Signed-off-by: shugeo <sgazeos@gmail.com>
* Adopted top_k kernels to use const shapes and buffers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Corrected kernel declarations to adopt const shapes with helpers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored NDArray definitions to adopt const shapes and buffers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed const shapes with image suppression helpers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Slight improvement with buffers. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored buffer usage. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored buffer usage with tests. Signed-off-by: shugeo <sgazeos@gmail.com>
* Fixed const shape usage with definitions. Signed-off-by: shugeo <sgazeos@gmail.com>
* minor updates on cpu side Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* Refactored const shape usage with ConstantDescriptor and native ops on the cuda platform. Signed-off-by: shugeo <sgazeos@gmail.com>
* Refactored tear and tile kernels to adopt const shapes. Signed-off-by: shugeo <sgazeos@gmail.com>
* softmax_loop fix Signed-off-by: raver119 <raver119@gmail.com>
* update missing signature Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* softmax again Signed-off-by: raver119@gmail.com <raver119@gmail.com>
* few more missing consts Signed-off-by: raver119 <raver119@gmail.com>
* new methods updated Signed-off-by: raver119@gmail.com <raver119@gmail.com>

Co-authored-by: shugeo <sgazeos@gmail.com>
/*******************************************************************************
 * Copyright (c) 2015-2018 Skymind, Inc.
 *
 * This program and the accompanying materials are made available under the
 * terms of the Apache License, Version 2.0 which is available at
 * https://www.apache.org/licenses/LICENSE-2.0.
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 ******************************************************************************/

//
// @author raver119@gmail.com
//

#include <system/op_boilerplate.h>
#include <loops/random.h>
#include <system/dll.h>
#include <cuda.h>
#include <cuda_runtime.h>
#include <helpers/DebugHelper.h>
#include <ops/specials_cuda.h>

using namespace randomOps;

template <typename T, typename OpClass>
static inline __device__ void randomSingleGeneric(
        Nd4jPointer state,
        void *z, Nd4jLong const* zShapeBuffer,
        void *extraArguments) {

    functions::random::RandomFunction<T>::template execTransformCuda<OpClass>(
            state,
            z, zShapeBuffer,
            extraArguments);
}

template <typename T, typename OpClass>
static inline __device__ void randomDoubleGeneric(
        Nd4jPointer state,
        void const* x, Nd4jLong const* xShapeBuffer,
        void *z, Nd4jLong const* zShapeBuffer,
        void *extraArguments) {

    functions::random::RandomFunction<T>::template execTransformCuda<OpClass>(
            state,
            x, xShapeBuffer,
            z, zShapeBuffer,
            extraArguments);
}

template <typename T, typename OpClass>
static inline __device__ void randomTripleGeneric(
        Nd4jPointer state,
        void const* x, Nd4jLong const* xShapeBuffer,
        void const* y, Nd4jLong const* yShapeBuffer,
        void *z, Nd4jLong const* zShapeBuffer,
        void *extraArguments) {

    functions::random::RandomFunction<T>::template execTransformCuda<OpClass>(
            state,
            x, xShapeBuffer,
            y, yShapeBuffer,
            z, zShapeBuffer,
            extraArguments);
}
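
// The three generic wrappers above mirror the three random-op arities:
//  - randomSingleGeneric: generator-only ops that just fill z,
//  - randomDoubleGeneric: ops reading one input x and writing z,
//  - randomTripleGeneric: ops reading two inputs x and y and writing z.
// All input buffers and every shape buffer are taken as const; only the
// output z and extraArguments remain mutable.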

#ifndef __CLION_IDE__
// here we generate kernels for target operations
DISPATCH_KERNEL_SIMPLE(randomSingle_, randomSingleGeneric, float, INPUT(Nd4jPointer state, void *z, Nd4jLong const* zShapeBuffer, void *extraArguments), PARAMS(state, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))
DISPATCH_KERNEL_SIMPLE(randomSingle_, randomSingleGeneric, double, INPUT(Nd4jPointer state, void *z, Nd4jLong const* zShapeBuffer, void *extraArguments), PARAMS(state, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))
DISPATCH_KERNEL_SIMPLE(randomSingle_, randomSingleGeneric, float16, INPUT(Nd4jPointer state, void *z, Nd4jLong const* zShapeBuffer, void *extraArguments), PARAMS(state, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))
DISPATCH_KERNEL_SIMPLE(randomSingle_, randomSingleGeneric, bfloat16, INPUT(Nd4jPointer state, void *z, Nd4jLong const* zShapeBuffer, void *extraArguments), PARAMS(state, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

DISPATCH_KERNEL_SIMPLE(randomDouble_, randomDoubleGeneric, float, INPUT(Nd4jPointer state, void const* x, Nd4jLong const* xShapeBuffer, void *z, Nd4jLong const* zShapeBuffer, void *extraArguments), PARAMS(state, x, xShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))
DISPATCH_KERNEL_SIMPLE(randomDouble_, randomDoubleGeneric, double, INPUT(Nd4jPointer state, void const* x, Nd4jLong const* xShapeBuffer, void *z, Nd4jLong const* zShapeBuffer, void *extraArguments), PARAMS(state, x, xShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))
DISPATCH_KERNEL_SIMPLE(randomDouble_, randomDoubleGeneric, float16, INPUT(Nd4jPointer state, void const* x, Nd4jLong const* xShapeBuffer, void *z, Nd4jLong const* zShapeBuffer, void *extraArguments), PARAMS(state, x, xShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))
DISPATCH_KERNEL_SIMPLE(randomDouble_, randomDoubleGeneric, bfloat16, INPUT(Nd4jPointer state, void const* x, Nd4jLong const* xShapeBuffer, void *z, Nd4jLong const* zShapeBuffer, void *extraArguments), PARAMS(state, x, xShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

DISPATCH_KERNEL_SIMPLE(randomTriple_, randomTripleGeneric, float, INPUT(Nd4jPointer state, void const* x, Nd4jLong const* xShapeBuffer, void const* y, Nd4jLong const* yShapeBuffer, void *z, Nd4jLong const* zShapeBuffer, void *extraArguments), PARAMS(state, x, xShapeBuffer, y, yShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))
DISPATCH_KERNEL_SIMPLE(randomTriple_, randomTripleGeneric, double, INPUT(Nd4jPointer state, void const* x, Nd4jLong const* xShapeBuffer, void const* y, Nd4jLong const* yShapeBuffer, void *z, Nd4jLong const* zShapeBuffer, void *extraArguments), PARAMS(state, x, xShapeBuffer, y, yShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))
DISPATCH_KERNEL_SIMPLE(randomTriple_, randomTripleGeneric, float16, INPUT(Nd4jPointer state, void const* x, Nd4jLong const* xShapeBuffer, void const* y, Nd4jLong const* yShapeBuffer, void *z, Nd4jLong const* zShapeBuffer, void *extraArguments), PARAMS(state, x, xShapeBuffer, y, yShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))
DISPATCH_KERNEL_SIMPLE(randomTriple_, randomTripleGeneric, bfloat16, INPUT(Nd4jPointer state, void const* x, Nd4jLong const* xShapeBuffer, void const* y, Nd4jLong const* yShapeBuffer, void *z, Nd4jLong const* zShapeBuffer, void *extraArguments), PARAMS(state, x, xShapeBuffer, y, yShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

#endif
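
// A sketch of what DISPATCH_KERNEL_SIMPLE presumably expands to (the real
// definition lives in system/op_boilerplate.h; the names below are purely
// illustrative): one __global__ kernel per (op, type) pair from RANDOM_OPS,
// so the host side can launch a concrete, non-templated kernel symbol.
// Roughly:
//
//   __global__ void randomSingle_SomeOp_float(Nd4jPointer state, void *z,
//                                             Nd4jLong const* zShapeBuffer,
//                                             void *extraArguments) {
//       randomSingleGeneric<float, randomOps::SomeOp<float>>(
//               state, z, zShapeBuffer, extraArguments);
//   }
//
// The __CLION_IDE__ guard above presumably just hides this macro-generated
// code from the IDE parser.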

namespace functions {
    namespace random {

        template<typename T>
        template<typename OpClass>
        void _CUDA_D RandomFunction<T>::execTransformCuda(Nd4jPointer state, void const* vx, Nd4jLong const* xShapeBuffer, void const* vy, Nd4jLong const* yShapeBuffer, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto x = reinterpret_cast<T const*>(vx);
            auto y = reinterpret_cast<T const*>(vy);
            auto z = reinterpret_cast<T*>(vz);
            auto extraArguments = reinterpret_cast<T*>(vextraArguments);

            if (OpClass::requiresSpecial) {
                OpClass::specialOpCuda(state, x, xShapeBuffer, y, yShapeBuffer, z, zShapeBuffer, extraArguments);
                return;
            } else {
                __shared__ Nd4jLong length;
                __shared__ int xEWS;
                __shared__ int yEWS;
                __shared__ int zEWS;
                __shared__ char xOrder;
                __shared__ char yOrder;
                __shared__ char zOrder;

                __shared__ sd::graph::RandomGenerator *buffer;
                __shared__ unsigned char *cB;
                __shared__ unsigned char *dB;
                sd::graph::RandomGenerator *devBuffer;

                if (threadIdx.x == 0) {
                    length = shape::length(zShapeBuffer);
                    xEWS = shape::elementWiseStride(xShapeBuffer);
                    yEWS = shape::elementWiseStride(yShapeBuffer);
                    zEWS = shape::elementWiseStride(zShapeBuffer);
                    xOrder = shape::order(xShapeBuffer);
                    yOrder = shape::order(yShapeBuffer);
                    zOrder = shape::order(zShapeBuffer);

                    extern __shared__ unsigned char shmem[];
                    buffer = (sd::graph::RandomGenerator *) shmem;
                    cB = shmem;
                    devBuffer = reinterpret_cast<sd::graph::RandomGenerator *>(state);
                    dB = reinterpret_cast<unsigned char *>(state);
                }
                __syncthreads();

                // using this loop instead of memcpy
                for (int e = threadIdx.x; e < sizeof(sd::graph::RandomGenerator); e += blockDim.x)
                    cB[e] = dB[e];

                __syncthreads();

                int tid = blockIdx.x * blockDim.x + threadIdx.x;

                if (xEWS >= 1 && yEWS >= 1 && zEWS >= 1 && xOrder == yOrder && xOrder == zOrder) {
                    for (Nd4jLong e = tid; e < length; e += blockDim.x * gridDim.x) {
                        z[e * zEWS] = OpClass::op(x[e * xEWS], y[e * yEWS], e, length, buffer, extraArguments);
                    }
                } else {
                    for (Nd4jLong i = tid; i < length; i += blockDim.x * gridDim.x) {
                        auto xOffset2 = shape::getIndexOffset(i, xShapeBuffer);
                        auto yOffset2 = shape::getIndexOffset(i, yShapeBuffer);
                        auto zOffset2 = shape::getIndexOffset(i, zShapeBuffer);

                        z[zOffset2] = OpClass::op(x[xOffset2], y[yOffset2], i, length, buffer, extraArguments);
                    }
                }
            }
        }
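
        // Two things to note in the kernel above (the two- and one-operand
        // variants below repeat the same pattern):
        //  - The RandomGenerator state is staged from the state pointer (dB)
        //    into block-shared memory (cB) via a cooperative per-thread byte
        //    copy (the "loop instead of memcpy"), so per-element op() calls
        //    read generator state from fast shared memory rather than global
        //    memory.
        //  - When every buffer exposes a usable element-wise stride and all
        //    orderings match, elements are addressed with plain strided
        //    arithmetic; otherwise the kernel falls back to
        //    shape::getIndexOffset() per element.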

        template<typename T>
        template<typename OpClass>
        void _CUDA_D RandomFunction<T>::execTransformCuda(Nd4jPointer state, void const* vx, Nd4jLong const* xShapeBuffer, void* vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto x = reinterpret_cast<T const*>(vx);
            auto z = reinterpret_cast<T*>(vz);
            auto extraArguments = reinterpret_cast<T*>(vextraArguments);

            __shared__ Nd4jLong length;
            __shared__ int xEWS;
            __shared__ int zEWS;
            __shared__ char xOrder;
            __shared__ char zOrder;

            __shared__ sd::graph::RandomGenerator *buffer;
            __shared__ unsigned char *cB;
            __shared__ unsigned char *dB;
            __shared__ sd::graph::RandomGenerator *devBuffer;

            if (threadIdx.x == 0) {
                extern __shared__ unsigned char shmem[];
                buffer = (sd::graph::RandomGenerator *) shmem;
                cB = shmem;
                devBuffer = reinterpret_cast<sd::graph::RandomGenerator *>(state);
                dB = reinterpret_cast<unsigned char *>(state);

                length = shape::length(zShapeBuffer);
                xEWS = shape::elementWiseStride(xShapeBuffer);
                zEWS = shape::elementWiseStride(zShapeBuffer);
                xOrder = shape::order(xShapeBuffer);
                zOrder = shape::order(zShapeBuffer);
            }
            __syncthreads();

            // using this loop instead of memcpy
            for (int e = threadIdx.x; e < sizeof(sd::graph::RandomGenerator); e += blockDim.x)
                cB[e] = dB[e];

            __syncthreads();

            if (xEWS >= 1 && zEWS >= 1 && xOrder == zOrder) {
                for (Nd4jLong e = blockIdx.x * blockDim.x + threadIdx.x; e < length; e += blockDim.x * gridDim.x) {
                    z[e * zEWS] = OpClass::op(x[e * xEWS], e, length, buffer, extraArguments);
                }
            } else {
                for (Nd4jLong i = blockIdx.x * blockDim.x + threadIdx.x; i < length; i += blockDim.x * gridDim.x) {
                    auto xOffset2 = shape::getIndexOffset(i, xShapeBuffer);
                    auto zOffset2 = shape::getIndexOffset(i, zShapeBuffer);

                    z[zOffset2] = OpClass::op(x[xOffset2], i, length, buffer, extraArguments);
                }
            }
        }

        template<typename T>
        template<typename OpClass>
        void _CUDA_D RandomFunction<T>::execTransformCuda(Nd4jPointer state, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto z = reinterpret_cast<T*>(vz);
            auto extraArguments = reinterpret_cast<T*>(vextraArguments);

            __shared__ Nd4jLong length;
            __shared__ Nd4jLong ews;
            __shared__ sd::graph::RandomGenerator *buffer;
            __shared__ unsigned char *cB;
            __shared__ unsigned char *dB;
            __shared__ sd::graph::RandomGenerator *devBuffer;

            if (threadIdx.x == 0) {
                extern __shared__ unsigned char shmem[];
                buffer = (sd::graph::RandomGenerator *) shmem;
                cB = shmem;
                devBuffer = reinterpret_cast<sd::graph::RandomGenerator *>(state);
                dB = reinterpret_cast<unsigned char *>(state);

                length = shape::length(zShapeBuffer);
                ews = shape::elementWiseStride(zShapeBuffer);
            }
            __syncthreads();

            // using this loop instead of memcpy
            for (int e = threadIdx.x; e < sizeof(sd::graph::RandomGenerator); e += blockDim.x)
                cB[e] = dB[e];

            __syncthreads();

            int tid = blockIdx.x * blockDim.x + threadIdx.x;

            if (ews > 0) {
                for (Nd4jLong i = tid; i < length; i += blockDim.x * gridDim.x) {
                    z[i * ews] = OpClass::op(i, length, buffer, extraArguments);
                }
            } else {
                for (Nd4jLong i = tid; i < length; i += blockDim.x * gridDim.x) {
                    auto zOffset2 = shape::getIndexOffset(i, zShapeBuffer);
                    z[zOffset2] = OpClass::op(i, length, buffer, extraArguments);
                }
            }
        }

        template <>
        _CUDA_H void RandomFunction<float>::executeCudaSingle(dim3& launchDims, cudaStream_t* stream, int opNum, Nd4jPointer stateHost, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto z = reinterpret_cast<float*>(vz);
            auto extraArguments = reinterpret_cast<float*>(vextraArguments);

            // this macro builds a bunch of IF/ELSE selectors for the kernel launch
            DISPATCH_SIMPLE(randomSingle, float, PARAMS(stateHost, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

            DEBUG_KERNEL(stream, opNum);
        }
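
        // A sketch of what DISPATCH_SIMPLE presumably builds here (the actual
        // macro lives in system/op_boilerplate.h): a chain of opNum selectors,
        // each launching the matching macro-generated kernel on the supplied
        // stream, conceptually:
        //
        //   if (opNum == 0)
        //       randomSingle_SomeOp_float<<<launchDims.x, launchDims.y, launchDims.z, *stream>>>(
        //               stateHost, z, zShapeBuffer, extraArguments);
        //   else if (opNum == 1) ...
        //
        // launchDims carries (grid size, block size, shared-memory bytes); the
        // shared-memory size must at least cover sizeof(sd::graph::RandomGenerator)
        // for the staging copy done inside the kernels. Kernel and op names in
        // this sketch are illustrative, not the actual generated symbols.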

        template <>
        _CUDA_H void RandomFunction<float16>::executeCudaSingle(dim3& launchDims, cudaStream_t* stream, int opNum, Nd4jPointer stateHost, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto z = reinterpret_cast<float16*>(vz);
            auto extraArguments = reinterpret_cast<float16*>(vextraArguments);

            // this macro builds a bunch of IF/ELSE selectors for the kernel launch
            DISPATCH_SIMPLE(randomSingle, float16, PARAMS(stateHost, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

            DEBUG_KERNEL(stream, opNum);
        }

        template <>
        _CUDA_H void RandomFunction<bfloat16>::executeCudaSingle(dim3& launchDims, cudaStream_t* stream, int opNum, Nd4jPointer stateHost, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto z = reinterpret_cast<bfloat16*>(vz);
            auto extraArguments = reinterpret_cast<bfloat16*>(vextraArguments);

            // this macro builds a bunch of IF/ELSE selectors for the kernel launch
            DISPATCH_SIMPLE(randomSingle, bfloat16, PARAMS(stateHost, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

            DEBUG_KERNEL(stream, opNum);
        }

        template <>
        _CUDA_H void RandomFunction<double>::executeCudaSingle(dim3& launchDims, cudaStream_t* stream, int opNum, Nd4jPointer stateHost, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto z = reinterpret_cast<double*>(vz);
            auto extraArguments = reinterpret_cast<double*>(vextraArguments);

            // this macro builds a bunch of IF/ELSE selectors for the kernel launch
            DISPATCH_SIMPLE(randomSingle, double, PARAMS(stateHost, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

            DEBUG_KERNEL(stream, opNum);
        }

        template <>
        _CUDA_H void RandomFunction<float>::executeCudaDouble(dim3& launchDims, cudaStream_t* stream, int opNum, Nd4jPointer stateHost, void const* vx, Nd4jLong const* xShapeBuffer, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto x = reinterpret_cast<float const*>(vx);
            auto z = reinterpret_cast<float*>(vz);
            auto extraArguments = reinterpret_cast<float*>(vextraArguments);

            // this macro builds a bunch of IF/ELSE selectors for the kernel launch
            DISPATCH_SIMPLE(randomDouble, float, PARAMS(stateHost, x, xShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

            DEBUG_KERNEL(stream, opNum);
        }

        template <>
        _CUDA_H void RandomFunction<float16>::executeCudaDouble(dim3& launchDims, cudaStream_t* stream, int opNum, Nd4jPointer stateHost, void const* vx, Nd4jLong const* xShapeBuffer, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto x = reinterpret_cast<float16 const*>(vx);
            auto z = reinterpret_cast<float16*>(vz);
            auto extraArguments = reinterpret_cast<float16*>(vextraArguments);

            // this macro builds a bunch of IF/ELSE selectors for the kernel launch
            DISPATCH_SIMPLE(randomDouble, float16, PARAMS(stateHost, x, xShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

            DEBUG_KERNEL(stream, opNum);
        }

        template <>
        _CUDA_H void RandomFunction<bfloat16>::executeCudaDouble(dim3& launchDims, cudaStream_t* stream, int opNum, Nd4jPointer stateHost, void const* vx, Nd4jLong const* xShapeBuffer, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto x = reinterpret_cast<bfloat16 const*>(vx);
            auto z = reinterpret_cast<bfloat16*>(vz);
            auto extraArguments = reinterpret_cast<bfloat16*>(vextraArguments);

            // this macro builds a bunch of IF/ELSE selectors for the kernel launch
            DISPATCH_SIMPLE(randomDouble, bfloat16, PARAMS(stateHost, x, xShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

            DEBUG_KERNEL(stream, opNum);
        }

        template <>
        _CUDA_H void RandomFunction<double>::executeCudaDouble(dim3& launchDims, cudaStream_t* stream, int opNum, Nd4jPointer stateHost, void const* vx, Nd4jLong const* xShapeBuffer, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto x = reinterpret_cast<double const*>(vx);
            auto z = reinterpret_cast<double*>(vz);
            auto extraArguments = reinterpret_cast<double*>(vextraArguments);

            // this macro builds a bunch of IF/ELSE selectors for the kernel launch
            DISPATCH_SIMPLE(randomDouble, double, PARAMS(stateHost, x, xShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

            DEBUG_KERNEL(stream, opNum);
        }

        template <>
        _CUDA_H void RandomFunction<float>::executeCudaTriple(dim3& launchDims, cudaStream_t* stream, int opNum, Nd4jPointer stateHost, void const* vx, Nd4jLong const* xShapeBuffer, void const* vy, Nd4jLong const* yShapeBuffer, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto x = reinterpret_cast<float const*>(vx);
            auto y = reinterpret_cast<float const*>(vy);
            auto z = reinterpret_cast<float*>(vz);
            auto extraArguments = reinterpret_cast<float*>(vextraArguments);

            // this macro builds a bunch of IF/ELSE selectors for the kernel launch
            DISPATCH_SIMPLE(randomTriple, float, PARAMS(stateHost, x, xShapeBuffer, y, yShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

            DEBUG_KERNEL(stream, opNum);
        }

        template <>
        _CUDA_H void RandomFunction<float16>::executeCudaTriple(dim3& launchDims, cudaStream_t* stream, int opNum, Nd4jPointer stateHost, void const* vx, Nd4jLong const* xShapeBuffer, void const* vy, Nd4jLong const* yShapeBuffer, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto x = reinterpret_cast<float16 const*>(vx);
            auto y = reinterpret_cast<float16 const*>(vy);
            auto z = reinterpret_cast<float16*>(vz);
            auto extraArguments = reinterpret_cast<float16*>(vextraArguments);

            // this macro builds a bunch of IF/ELSE selectors for the kernel launch
            DISPATCH_SIMPLE(randomTriple, float16, PARAMS(stateHost, x, xShapeBuffer, y, yShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

            DEBUG_KERNEL(stream, opNum);
        }

        template <>
        _CUDA_H void RandomFunction<bfloat16>::executeCudaTriple(dim3& launchDims, cudaStream_t* stream, int opNum, Nd4jPointer stateHost, void const* vx, Nd4jLong const* xShapeBuffer, void const* vy, Nd4jLong const* yShapeBuffer, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto x = reinterpret_cast<bfloat16 const*>(vx);
            auto y = reinterpret_cast<bfloat16 const*>(vy);
            auto z = reinterpret_cast<bfloat16*>(vz);
            auto extraArguments = reinterpret_cast<bfloat16*>(vextraArguments);

            // this macro builds a bunch of IF/ELSE selectors for the kernel launch
            DISPATCH_SIMPLE(randomTriple, bfloat16, PARAMS(stateHost, x, xShapeBuffer, y, yShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

            DEBUG_KERNEL(stream, opNum);
        }

        template <>
        _CUDA_H void RandomFunction<double>::executeCudaTriple(dim3& launchDims, cudaStream_t* stream, int opNum, Nd4jPointer stateHost, void const* vx, Nd4jLong const* xShapeBuffer, void const* vy, Nd4jLong const* yShapeBuffer, void *vz, Nd4jLong const* zShapeBuffer, void *vextraArguments) {

            auto x = reinterpret_cast<double const*>(vx);
            auto y = reinterpret_cast<double const*>(vy);
            auto z = reinterpret_cast<double*>(vz);
            auto extraArguments = reinterpret_cast<double*>(vextraArguments);

            // this macro builds a bunch of IF/ELSE selectors for the kernel launch
            DISPATCH_SIMPLE(randomTriple, double, PARAMS(stateHost, x, xShapeBuffer, y, yShapeBuffer, z, zShapeBuffer, extraArguments), OPS_A(RANDOM_OPS))

            DEBUG_KERNEL(stream, opNum);
        }

        BUILD_SINGLE_TEMPLATE(template class ND4J_EXPORT RandomFunction, , FLOAT_TYPES);
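
        // BUILD_SINGLE_TEMPLATE presumably emits explicit instantiations of
        // RandomFunction<T> for each type in the FLOAT_TYPES list, matching the
        // float / float16 / bfloat16 / double specializations above, so their
        // definitions are compiled into this translation unit.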
    }
}
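
// A minimal, hypothetical host-side usage sketch; the launch configuration,
// buffer names, and allocation strategy below are assumptions for
// illustration, not part of this file:
//
//   // Generator state must be visible to device code, since the kernels copy
//   // sizeof(sd::graph::RandomGenerator) bytes out of the state pointer;
//   // managed memory is one way to satisfy that.
//   sd::graph::RandomGenerator* rng;
//   cudaMallocManaged(&rng, sizeof(sd::graph::RandomGenerator));
//   // ... seed/initialize *rng here (RandomGenerator API not shown) ...
//
//   dim3 launchDims(512, 512, 8192);   // grid, block, shared bytes (assumed sizes)
//   functions::random::RandomFunction<float>::executeCudaSingle(
//           launchDims, &stream, opNum,
//           reinterpret_cast<Nd4jPointer>(rng),
//           dZ, dZShapeInfo,            // device output buffer and its const shape info
//           extraArgs);
//
// In practice these entry points are reached through the NativeOps layer
// (e.g. execRandom), which resolves opNum and launch dimensions before
// calling into this file.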