// cavis/libnd4j/include/ops/declarable/helpers/transforms.h
/*******************************************************************************
* Copyright (c) 2015-2018 Skymind, Inc.
*
* This program and the accompanying materials are made available under the
* terms of the Apache License, Version 2.0 which is available at
* https://www.apache.org/licenses/LICENSE-2.0.
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*
* SPDX-License-Identifier: Apache-2.0
******************************************************************************/
//
// @author Yurii Shyrma (iuriish@yahoo.com), created on 20.04.2018
//
#ifndef LIBND4J_TRANSFORMS_H
#define LIBND4J_TRANSFORMS_H
#include <ops/declarable/helpers/helpers.h>
#include <helpers/helper_random.h>
#include <graph/RandomGenerator.h>
namespace sd {
namespace ops {
namespace helpers {
void triuBP(sd::LaunchContext * context, const NDArray& input, const NDArray& gradO, NDArray& gradI, const int diagonal);
void trace(sd::LaunchContext * context, const NDArray& input, NDArray& output);
void randomShuffle(sd::LaunchContext * context, NDArray& input, NDArray& output, sd::graph::RandomGenerator& rng, const bool isInplace);
// auxiliary function used for recursion in the pad operation
// void recursiveLoopForPad(const int mode, NDArray& input, const NDArray& paddings, NDArray& output, std::vector<int> dimensions, int dim, int inIdx, int outIdx, NDArray& padValue);
void pad(sd::LaunchContext * context, const int mode, const NDArray& input, const NDArray& paddings, NDArray& output, NDArray const& padValue);
void invertPermutation(sd::LaunchContext * context, const NDArray& input, NDArray& output);
void gatherND(sd::LaunchContext * context, NDArray& input, NDArray& indices, NDArray& output);
void gather(sd::LaunchContext * context, NDArray* input, const NDArray* indices, NDArray* output, const std::vector<int>& intArgs);
void eye(sd::LaunchContext * context, NDArray& output);
void scatterUpdate(sd::LaunchContext * context, NDArray& operand, NDArray& updates, const std::vector<int>* intArgs);
void scatterSimple(sd::LaunchContext * context, const int opId, NDArray& input, const NDArray& updates, const NDArray& indices, const std::vector<int>& dimensions);
void mergeMaxIndex(sd::LaunchContext * context, const std::vector<const NDArray*>& inArrs, NDArray& output);
void mergeMax(sd::LaunchContext * context, const std::vector<const NDArray*>& inArrs, NDArray& output);
void mergeMaxBp(sd::LaunchContext* context, const std::vector<const NDArray*>& inArrs, std::vector<NDArray*>& outArrs);
void mergeAvg(sd::LaunchContext * context, const std::vector<const NDArray*>& inArrs, NDArray& output);
void mergeAvgBp(sd::LaunchContext* context, const NDArray& gradient, std::vector<NDArray*>& outArrs);
void mergeAdd(sd::LaunchContext * context, const std::vector<const NDArray*>& inArrs, NDArray& output);
void mergeAddBp(sd::LaunchContext* context, const NDArray& gradient, std::vector<NDArray*>& outArrs);
void clipByNorm(sd::LaunchContext * context, NDArray& input, NDArray& output, const std::vector<int>& dimensions, const NDArray& clipNorm, const bool isInplace, const bool useAverage);
void clipByGlobalNorm(sd::LaunchContext * context, std::vector<NDArray*> const& inputs, double clipNorm, sd::memory::Workspace* workspace, std::vector<NDArray*>& outputs, bool isInplace);
void clipByNormBp(sd::LaunchContext * context, const NDArray& input, const NDArray& gradO, NDArray& gradI /*output*/, const std::vector<int>& dimensions, const NDArray& clipNorm, const bool useAverage);
void clipByAveragedNorm(sd::LaunchContext * context, NDArray& input, NDArray& output, const std::vector<int>& dimensions, const NDArray& clipNorm, const bool isInplace);
void mirrorPad(sd::LaunchContext * context, const NDArray& input, const NDArray& paddings, NDArray& output, const int mode);
void clipByValue(sd::LaunchContext * context, NDArray& input, double leftBound, double rightBound, NDArray& output);
void concat(sd::LaunchContext * context, const std::vector<const NDArray*>& inArrs, NDArray& output, const int axis);
void tileBP(sd::LaunchContext * context, const NDArray& gradO /*input*/, NDArray& gradI /*output*/, const std::vector<Nd4jLong> reps);
void split(sd::LaunchContext* context, const NDArray& input, std::vector<NDArray*>& outArrs, const int axis);
}  // namespace helpers
}  // namespace ops
}  // namespace sd
#endif //LIBND4J_TRANSFORMS_H