Alex Black 1170827c18 Merge master to upstream (#7945)
* Shugeo strided slice zeros (#14)

* Modified strided_slice op to properly work with empty-like shapes.

* Fixed test for reduce_mean with empty-like input.

* [WIP] Last merge (#15)

* correct logsoftmax loss (#2)

* Small SameDiff listener fix (#4)

* Various fixes (#6)

* #7839 Fix for asXMatrix and tests

* #7866 EmbeddingSequenceLayer dtype fix + test

* #7856 SameDiff save/load stream methods

* #7859 RegressionEvaluation rank 4 fix + tests + axis configuration

* EvaluationBinary 3d/4d

* More evaluation 3d/4d tests

* #7847 Evaluation empty checks

* Small test fix

* #7848 Fix median edge case

* Improve DL4J samediff layer tests

* [WIP] FastText wrapper implemented (#8)

* FastText implemented

* Some fixes

* Fix shapes for wordsNearest

* Validation of input vectors

* Fixes

* Fixed test

* Thread tagged

* Some tweaks

* setContextClassLoader for DeallocatorServiceThread

* Numpy format tests (#1)

* Various fixes (#11)

* #7852 SameDiff gather fix

* #7892 SameDiff placeholder to constant conversion

* #7890 validate input rank for MLN/CG init methods

* Fix broken permute shape calculation

* Permute and gather fixes

* Tests

* #7850 LogSumExp fix + test

* Handful of test fixes

* Empty arrays with non-scalar shapes (#10)

* minor rearrangements for lambdas

* empty tensors with non-scalar shapes

* numpy empty tensors with non-scalar shapes

* few more empty tweaks

* Small fixes

* conv3d signature update

* micro fix in batchnorm mkldnn

* Import fixes

* Fix

* MKL-DNN update

* Small fill fix

* fill with empty input + test

* Fixes

* Small error improvement

* Fix

* one special test

* couple of fixes for lstm

* Rewrite TFGraphMapper.getNDArrayFromTensor to be maintainable and less error prone

* Fixes

* FP16

* Unsigned

* BFloat16

* Fill op - empty tweaks

* - couple of fixes for empty arrays construction
- stack updated

* strided slice fix

* one transform test

* provide method for reducing shapeInfo in case the input array is empty

* Fixed reduceAlongDimensions to use empty input properly.

* couple of broadcast tests

* couple of broadcast tests + tweak to make them pass

* add check of non-empty to methods producing sub-arrays

* Fixed reshapeC with zeros in shape.

* complete empty check in reduce_... legacy ops

* Concat and cumsum/prod

* Tweak to empty shape inference on import

* add empty check to the rest of reduce legacy ops

* one more test

* correct typo in evalReduceShapeInfoEmpty

* Added tests for reduce_* ops with zero shapes.

* few more tests for empty reductions

* Fixed strided_slice op with empty case and tests.

* one more empty reduction test

* Fixed strided_slice test.

* add empty check to NDArray::reshapei

* infOrMax

* empty min/max with infinity tests

* made unstack work correctly with empty arrays

* few IndexReduce tests + tweaks for empty shapes

* add test for empty concat

* few tests fixed

* Validation fix for reductions on empty shapes

* Reverse fix

* Reduction shape calc fixes

* SameDiff.generateOutputVariable: don't use shape function to determine number of outputs

* Range fix

* - NDArray constructor updated for scalars/empty arrays
- few tests fixed

* More fixes

* Empty creator fixes

* concat fix

* concat fix

* TF import tests: allow 'both all NaN' and 'both all inf' to pass

* Slice, zero fraction, and reshape fixes

* transpose, gather

* Zero fraction

* scalar cast fix

* Empty reduction axis support

* few more tests fixed

* Fixed input checks to conform with TF for concat op, and tests.

* few tests fixed

* matmul scalar shape fix

* Fixed check for data type and scalarity with concat to allow non-empty scalars with vector concats.

* broadcast bool fix

* few more tests

* few more tests

* correct evalReduceShapeInfoEmpty

* argmax/argmin + tests

* one more empty edge case + one more test

* argmax/argmin/realdiv_bp tweaks

* empty reshape test + fix

* Helper fixes

* Small fixes

* Gather test fix

* Gather test fix

* Small fixes

* reduce scalar zero values

* scalar mean workaround

* Remove debug code

* along dim mean workaround

* one more test

* - equalsTo() tweak for empty arrays
- one more test

* broadcast tweaks

* [WIP] Fixing outstanding issues for NLP (#9)

* Avoid using uninitialized objects

* Test fixed.

* Redundant method avoided for models like FastText

* KMeans++ implementation

* KMeans++ implementation

* Disable parallel execution

* KMeans++

* Tests

* Dev branch merge (#16)

* SameDiff: convertDataType and gradient check util improvements (#12)

* GradCheck util improvements

* StopGradient constructor + test

* SameDiff: Add datatype conversion

* Javadoc and add DataType.isNumerical()

* Small fix

* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)

* TFGraphTestAllHelper: check intermediates in execution order

* Add missing debug listener

* [WIP] lstmBlock fix + other changes (#13)

- fixes lstmBlock issue
- changes NDArray methods reshape(), permute(), transpose() to return an instance instead of a pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite

* Small test fix

* CheckNumerics op wrapper

* Fix some issues on master (#17)

* Fix DataVec test issue

* Fix issue with dl4j SameDiff output layer

* Dtype fix for lambda layers

* #7912 BertIterator dtype fix (use float32 not global default)

* [WIP] Next set of CUDA stuff (#7)

New CUDA implementations and improvements

* bad file

* Dev branch master merge (#23)

* SameDiff: convertDataType and gradient check util improvements (#12)

* GradCheck util improvements

* StopGradient constructor + test

* SameDiff: Add datatype conversion

* Javadoc and add DataType.isNumerical()

* Small fix

* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)

* TFGraphTestAllHelper: check intermediates in execution order

* Add missing debug listener

* [WIP] lstmBlock fix + other changes (#13)

- fixes lstmBlock issue
- changes NDArray methods reshape(), permute(), transpose() to return an instance instead of a pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite

* Small test fix

* CheckNumerics op wrapper

* Compatibility of deserialization (#18)

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* SameDiff: add activation gradient checking support for debugging (#19)

* SameDiff gradient checker: first pass on activation gradient checks

* Fixes + tests for activation gradient checking

* Javadoc

* [WIP] Some nd4j data type corrections (#20)

* Adjust data type

* Set correct Data type.

* Size of proper data type.

* fix averaged cpu load (#22)

* SameDiff ops, TF import and fixes (#24)

* CheckNumerics tests + fixes + misc fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fake quant

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fixes

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* FakeQuantWithMinMaxArgs

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* CheckNumerics fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix libnd4j ALL_INTS and ALL_FLOATS declaration (uint and bfloat types)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Small fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Javadoc

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Exception tweak

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix for out of scope stack allocated var use

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Ignores

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Ignore for known failing test (already logged issue)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Merge upstream to fork (#25)

* Add thousand-separator commas to TotalParams (#7915)

* Add thousand-separator commas to TotalParams

The number of parameters can be quite large; thousand-separator commas in the TotalParams column and the totals at the bottom make the summary printout easier to read.

* Add thousand-separator commas to MultiLayerNetwork

Corresponding change to MultiLayerNetwork

Signed-off-by: Jxtps Jxtps <jxtps435@gmail.com>

* Update contributing and issue/PR templates (#7934)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Fix link to AdaDelta paper (#7942)

Fix link to AdaDelta paper hosted on matthewzeiler.com

Signed-off-by: Jxtps

* Fixes, and ignores for known/logged failing issues (#7943)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* SameDiff + DL4J/SameDiff: Multiple fixes (#28)

* #7919 HDF5 attribute buffer length fix

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7909 Arbiter constructor exception ux improvements

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7925 RNN output layer length checks

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7939 Add listener for validating inputs are not incorrectly modified

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* #7939 Integrate NonInplaceValidationListener into tests

* #7844 DL4J SameDiff fixes for variable minibatch size

* DL4J SameDiff fixes - ensure gradient for input placeholder is available

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* Tweaks to ExternalErrorsFunction - use placeholders, make more robust

* Another fix

* More fixes

* More SameDiff/DL4J fixes

* Scope out scalar array creation in BaseScalarOp

* Remove debug code

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* [WIP] Final dev branch merge (#29)

* SameDiff: convertDataType and gradient check util improvements (#12)

* GradCheck util improvements

* StopGradient constructor + test

* SameDiff: Add datatype conversion

* Javadoc and add DataType.isNumerical()

* Small fix

* Fix SameDiff TF import test cases intermediate naming (workaround for bad default)

* TFGraphTestAllHelper: check intermediates in execution order

* Add missing debug listener

* [WIP] lstmBlock fix + other changes (#13)

- fixes lstmBlock issue
- changes NDArray methods reshape(), permute(), transpose() to return an instance instead of a pointer
- CheckNumerics op
- fixes for ReduceBool IsInfOrNan & IsFinite

* Small test fix

* CheckNumerics op wrapper

* Compatibility of deserialization (#18)

Signed-off-by: Alexander Stoyakin <alexander.stoyakin@gmail.com>

* SameDiff: add activation gradient checking support for debugging (#19)

* SameDiff gradient checker: first pass on activation gradient checks

* Fixes + tests for activation gradient checking

* Javadoc

* [WIP] Some nd4j data type corrections (#20)

* Adjust data type

* Set correct Data type.

* Size of proper data type.

* fix averaged cpu load (#22)

* [WIP] Multiple dataset iterators (#27)

* Splitting dataset into an arbitrary number of parts

* Fixes

* Multiple split of iterator

* Test

* Test

* Some fixes

* signature change

* one more tweak

Signed-off-by: raver119 <raver119@gmail.com>

* one more test for sequential use of DataSetIteratorSplitter

Signed-off-by: raver119 <raver119@gmail.com>

* Fixes

* Fixes

* one more test for Alexander

Signed-off-by: raver119 <raver119@gmail.com>

* Some fixes

* Some fixes

* one more test for Alexander

Signed-off-by: raver119 <raver119@gmail.com>

* minor test fix

Signed-off-by: raver119 <raver119@gmail.com>

* Some fixes

* Some fixes

* couple of assertions tweaked

Signed-off-by: raver119 <raver119@gmail.com>

* MDS splitter test :/

Signed-off-by: raver119 <raver119@gmail.com>

* Minor refactoring

* Multi dataset

* Some fixes

* More tests

* Small number of test fixes/improvements (failures on CI) (#31)

Signed-off-by: AlexDBlack <blacka101@gmail.com>

* [WIP] More CUDA stuff (#26)

* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* LRN BP CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* less memory

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed bug with crop_and_resize op helper.

* get rid of unnecessary index-calculation function

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed sort with nth_element cuda-based helper.

* Refactored nth_element.

* Refactored nth_element op and tests.

* Modified usage of dim array with sortTad routine.

* Refactored main routine of helper for non_max_image_suppression op.

* non_max_image_suppression op helper with cuda kernel implementation. Initial revision.

* fix vol2col cuda kernel

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* topK concept

Signed-off-by: raver119 <raver119@gmail.com>

* unsorted topK with scanWidth of 1

Signed-off-by: raver119 <raver119@gmail.com>

* correct vol2col tests

* sorted/unsorted topK

Signed-off-by: raver119 <raver119@gmail.com>

* implementation and fixes for col2im/col2vol

* Corrected usage flags for input/output with reverse op.

* dup is const now

Signed-off-by: raver119 <raver119@gmail.com>

* percentile op

Signed-off-by: raver119 <raver119@gmail.com>

* group tests for maxpool2d

Signed-off-by: Yurii <yurii@skymind.io>

* special test for george

Signed-off-by: raver119 <raver119@gmail.com>

* fewer threads for sortTad

Signed-off-by: raver119 <raver119@gmail.com>

* provide conv2d for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* remove author in sort tad kernel code

Signed-off-by: Yurii <yurii@skymind.io>

* provide depthwise_conv2d for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* - max_pooling_with_argmax
- null check for special use

Signed-off-by: raver119 <raver119@gmail.com>

* dts cuda

Signed-off-by: raver119 <raver119@gmail.com>

* provide sconv2d for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* std cuda

Signed-off-by: raver119 <raver119@gmail.com>

* Refactored non_max_suppression op to conform to TF implementation.

* Improved suppression helper.

* provide pooling3d for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* minor lstm rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* more of minor lstm rearrangements

Signed-off-by: raver119 <raver119@gmail.com>

* (bi)dynamic_rnn

Signed-off-by: raver119 <raver119@gmail.com>

* templates init order

Signed-off-by: raver119 <raver119@gmail.com>

* Refactored non_max_suppression op.

* Added cuda kernel for non_max_suppression.

* CPU sort by key/value

Signed-off-by: raver119 <raver119@gmail.com>

* CPU sort TAD by key/value

Signed-off-by: raver119 <raver119@gmail.com>

* CPU sort TAD by key/value tests

Signed-off-by: raver119 <raver119@gmail.com>

* Eliminate compiler error with cuda implementation.

* - repaired gradCheck in cuda
- provide conv2d_bp for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* missed signature

Signed-off-by: raver119 <raver119@gmail.com>

* provide depthwise_conv2d_bp for cuda

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of lup helper with cuda kernel. Initial commit.

* further work on backprops for convolutions

Signed-off-by: Yurii <yurii@skymind.io>

* CUDA linear sort by key/val

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA tad sort by key/val

Signed-off-by: raver119 <raver119@gmail.com>

* start providing backprop for pooling2d/3d

Signed-off-by: Yurii <yurii@skymind.io>

* Added atomicAdd for bool datatype.

* dynamic partition concept

Signed-off-by: raver119 <raver119@gmail.com>

* dynamic partition concept

Signed-off-by: raver119 <raver119@gmail.com>

* dynamic partition scalar CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* important comment

Signed-off-by: raver119 <raver119@gmail.com>

* fix pooling2d/3d backprop helpers

Signed-off-by: Yurii <yurii@skymind.io>

* Added non-linear test with dynamic_partition.

* Improved test for dynamic_partition.

* dynamic_partition TAD concept

Signed-off-by: raver119 <raver119@gmail.com>

* - dynamic_partition TAD CUDA impl
- dynamic_partition TAD CPU fix

Signed-off-by: raver119 <raver119@gmail.com>

* - rewrite cpu code for upsampling2d/3d
- write cuda code for upsampling2d/3d

Signed-off-by: Yurii <yurii@skymind.io>

* dynamic_stitch CUDA vector case

Signed-off-by: raver119 <raver119@gmail.com>

* dynamic_stitch CUDA TAD case concept

Signed-off-by: raver119 <raver119@gmail.com>

* dynamic_stitch CUDA TAD case impl

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for dynamic_stitch 3D-4D cases.

* minor tests tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed type check for dynamic stitch.

* min/max bp

Signed-off-by: raver119 <raver119@gmail.com>

* rewrite code for upsampling2d/3d cpu

Signed-off-by: Yurii <yurii@skymind.io>

* reduce min/max/norm_max bp

Signed-off-by: raver119 <raver119@gmail.com>

* lup implementation. Additional enhancements.

* provide code for upsampling2d/3d backprop

Signed-off-by: Yurii <yurii@skymind.io>

* weightedCrossEntropyWithLogits

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed template math atomicMul for 64bit ints.

* Refactored dynamic_partition_bp op.

* inverseBroadcast fix

Signed-off-by: raver119 <raver119@gmail.com>

* DynamicPartitionBP test datatype fixed.

* - nd4j_atomicMul Windows fix
- cpu/NDArrayLambda.hpp excluded from CUDA

Signed-off-by: raver119 <raver119@gmail.com>
2019-06-27 18:37:04 +03:00


/*******************************************************************************
* Copyright (c) 2015-2018 Skymind, Inc.
*
* This program and the accompanying materials are made available under the
* terms of the Apache License, Version 2.0 which is available at
* https://www.apache.org/licenses/LICENSE-2.0.
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*
* SPDX-License-Identifier: Apache-2.0
******************************************************************************/
//
// @author GS <sgazeos@gmail.com>
//
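//
// CPU helper implementations for TF-style segment reductions (segment_max/min/mean/sum/prod),
// their unsorted_segment_* counterparts, and the corresponding backprop routines: the first
// input dimension is partitioned by an integer `indices` vector and reduced per segment.
//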
#include <ops/declarable/helpers/segment.h>
#include <ShapeUtils.h>
namespace nd4j {
namespace ops {
namespace helpers {
// segment max
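// Example (sorted indices required): input = [1, 3, 2, 4], indices = [0, 0, 1, 1] -> output = [3, 4];
// for rank > 1 inputs the same reduction runs element-wise over sub-tensors along dimension 0.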
template <typename T>
static void segmentMaxFunctor_(NDArray* input, NDArray* indices, NDArray* output) {
//int numClasses = output->sizeAt(0);
// if input is a vector (as in the doc sample)
Nd4jLong idx = indices->e<Nd4jLong>(0);
if (input->isVector()) {
T val = input->e<T>(0);
for (Nd4jLong e = 1; e < indices->lengthOf(); e++) {
if (idx == indices->e<Nd4jLong>(e)) {
// max
val = nd4j::math::nd4j_max<T>(val, input->t<T>(e));
}
else {
idx = indices->e<Nd4jLong>(e);
val = input->t<T>(e);
}
output->t<T>(idx) = val;
}
}
else {
std::vector<int> restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
auto listOfTensors = input->allTensorsAlongDimension(restDims);
auto listOfOutTensors = output->allTensorsAlongDimension(restDims);
auto numOfClasses = output->sizeAt(0); // number of classes
std::vector<std::pair<NDArray*, int>> outputs(numOfClasses);
auto maxT = listOfOutTensors->at(idx);
//int pos = 0;
maxT->assign(listOfTensors->at(0));
for (Nd4jLong i = 1; i < indices->lengthOf(); i++) {
if (indices->e<int>(i) == idx) {
for (Nd4jLong e = 0; e < maxT->lengthOf(); e++) {
maxT->t<T>(e) = nd4j::math::nd4j_max(maxT->t<T>(e), listOfTensors->at(i)->t<T>(e));
}
}
else {
idx = indices->e<Nd4jLong>(i);
maxT = listOfOutTensors->at(idx);
maxT->assign(listOfTensors->at(i));
}
}
delete listOfTensors;
delete listOfOutTensors;
}
}
// segment min
template <typename T>
static void segmentMinFunctor_(NDArray* input, NDArray* indices, NDArray* output) {
//int numClasses = output->sizeAt(0);
// if input is a vector (as in the doc sample)
Nd4jLong idx = indices->e<Nd4jLong>(0);
if (input->isVector()) {
T val = input->e<T>(0);
for (int e = 1; e < indices->lengthOf(); e++) {
if (idx == indices->e<Nd4jLong>(e)) {
// min
val = nd4j::math::nd4j_min<T>(val, input->t<T>(e));
}
else {
idx = indices->e<int>(e);
val = input->t<T>(e);
}
output->t<T>(idx) = val;
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfTensors( input->allTensorsAlongDimension(restDims) );
std::unique_ptr<ResultSet> listOfOutTensors( output->allTensorsAlongDimension(restDims) );
int numOfClasses = output->sizeAt(0); // number of classes
std::vector<std::pair<NDArray*, int>> outputs(numOfClasses);
auto minT = listOfOutTensors->at(idx);
int pos = 0;
minT->assign(listOfTensors->at(0));
for (Nd4jLong i = 1; i < indices->lengthOf(); i++) {
if (indices->e<Nd4jLong>(i) == idx) { // read the index as an integer, not as T
for (int e = 0; e < minT->lengthOf(); e++) {
minT->p(e, nd4j::math::nd4j_min(minT->e<T>(e), listOfTensors->at(i)->e<T>(e)));
}
}
else {
idx = indices->e<Nd4jLong>(i);
minT = listOfOutTensors->at(idx);
minT->assign(listOfTensors->at(i));
}
}
}
}
// segment mean
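// Example: input = [1, 2, 3, 4], indices = [0, 0, 0, 1] -> output = [2, 4] (per-segment averages).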
template <typename T>
static void segmentMeanFunctor_(NDArray* input, NDArray* indices, NDArray* output) {
int numClasses = output->sizeAt(0);
// if input is a vector (as in the doc sample)
int idx = indices->e<int>(0);
if (input->isVector()) {
T val = T(0.f);
int count = 0;
for (int e = 0; e < indices->lengthOf(); e++) {
if (idx == indices->e<int>(e)) {
// mean
val += input->e<T>(e);
count++;
}
else {
output->p<T>(idx, val / count);
idx = indices->e<int>(e);
val = input->e<T>(e);
count = 1;
}
output->p<T>(idx, val / count);
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
auto listOfTensors = input->allTensorsAlongDimension(restDims);
auto listOfOutTensors = output->allTensorsAlongDimension(restDims);
int numOfClasses = output->sizeAt(0); // number of classes
std::vector<std::pair<NDArray*, int>> outputs(numOfClasses);
auto meanT = listOfOutTensors->at(idx);
int count = 1;
auto meanV = meanT->dup();
meanV->assign(listOfTensors->at(0));
for (int i = 1; i < indices->lengthOf(); i++) {
if (indices->e<int>(i) == idx) {
PRAGMA_OMP_PARALLEL_FOR
for (int e = 0; e < meanT->lengthOf(); e++) {
meanV->p<T>(e, meanV->e<T>(e) + listOfTensors->at(i)->e<T>(e));
}
count++;
}
else {
//meanT->assign(meanV);
meanV->applyScalar(scalar::Divide, count, meanT, nullptr);
idx = indices->e<int>(i);
meanT = listOfOutTensors->at(idx);
meanV->assign(listOfTensors->at(i));
count = 1;
}
meanV->applyScalar(scalar::Divide, count, meanT, nullptr);
}
delete meanV;
delete listOfTensors;
delete listOfOutTensors;
}
}
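// segment sum; e.g. input = [1, 2, 3], indices = [0, 0, 1] -> output = [3, 3].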
template <typename T>
static void segmentSumFunctor_(NDArray* input, NDArray* indices, NDArray* output) {
int numClasses = output->sizeAt(0);
// if input is a vector (as in the doc sample)
int idx = indices->e<int>(0);
if (input->isVector()) {
T val = T(0.f);
int count = 0;
for (int e = 0; e < indices->lengthOf(); e++) {
if (idx == indices->e<int>(e)) {
// sum
val += input->t<T>(e);
}
else {
idx = indices->e<int>(e);
val = input->t<T>(e);
}
output->p(idx, val);
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
auto listOfTensors = input->allTensorsAlongDimension(restDims);
auto listOfOutTensors = output->allTensorsAlongDimension(restDims);
int numOfClasses = output->sizeAt(0); // number of classes
std::vector<std::pair<NDArray*, int>> outputs(numOfClasses);
auto sumT = listOfOutTensors->at(idx);
for (int i = 0; i < indices->lengthOf(); i++) {
if (indices->e<int>(i) == idx) {
PRAGMA_OMP_PARALLEL_FOR
for (int e = 0; e < sumT->lengthOf(); e++) {
sumT->p(e, sumT->e<T>(e) + listOfTensors->at(i)->e<T>(e));
}
}
else {
idx = indices->e<int>(i);
sumT = listOfOutTensors->at(idx);
sumT->assign(listOfTensors->at(i));
}
}
delete listOfTensors;
delete listOfOutTensors;
}
}
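// segment prod: output is initialized to 1, then each segment accumulates a product;
// e.g. input = [2, 3, 4], indices = [0, 0, 1] -> output = [6, 4].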
template <typename T>
static void segmentProdFunctor_(NDArray* input, NDArray* indices, NDArray* output) {
//int numClasses = output->sizeAt(0);
// if input is a vector (as in the doc sample)
int idx = indices->e<int>(0);
output->assign(1.f);
if (input->isVector()) {
T val = input->e<T>(0);
int count = 0;
for (int e = 1; e < indices->lengthOf(); e++) {
if (idx == indices->e<int>(e)) {
// product
val *= input->e<T>(e);
}
else {
idx = indices->e<int>(e);
val = input->e<T>(e);
}
output->p(idx, val);
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
auto listOfTensors = input->allTensorsAlongDimension(restDims);
auto listOfOutTensors = output->allTensorsAlongDimension(restDims);
int numOfClasses = output->sizeAt(0); // number of classes
auto sumT = listOfOutTensors->at(idx);
sumT->assign(listOfTensors->at(0));
for (int i = 1; i < indices->lengthOf(); i++) {
if (indices->e<int>(i) == idx) {
PRAGMA_OMP_PARALLEL_FOR
for (int e = 0; e < sumT->lengthOf(); e++) {
sumT->p(e, sumT->e<T>(e) * listOfTensors->at(i)->e<T>(e));
}
}
else {
idx = indices->e<int>(i);
sumT = listOfOutTensors->at(idx);
sumT->assign(listOfTensors->at(i));
}
}
delete listOfTensors;
delete listOfOutTensors;
}
}
// template <typename T>
// static bool segmentIndicesValidate_(NDArray* indices, NDArray& aexpected, NDArray& anOutput) {
// }
void segmentMaxFunctor(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* output) {
BUILD_SINGLE_SELECTOR(input->dataType(), segmentMaxFunctor_, (input, indices, output), LIBND4J_TYPES);
}
void segmentMinFunctor(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* output) {
BUILD_SINGLE_SELECTOR(input->dataType(), segmentMinFunctor_, (input, indices, output), LIBND4J_TYPES);
}
void segmentMeanFunctor(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* output) {
BUILD_SINGLE_SELECTOR(input->dataType(), segmentMeanFunctor_, (input, indices, output), LIBND4J_TYPES);
}
void segmentSumFunctor(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* output) {
BUILD_SINGLE_SELECTOR(input->dataType(), segmentSumFunctor_, (input, indices, output), LIBND4J_TYPES);
}
void segmentProdFunctor(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* output) {
BUILD_SINGLE_SELECTOR(input->dataType(), segmentProdFunctor_, (input, indices, output), LIBND4J_TYPES);
}
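// Sorted segment ops require a non-decreasing indices vector; on the first decrease the
// validator returns false, with `output` holding the offending index value.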
bool segmentIndicesValidate(nd4j::LaunchContext * context, NDArray* indices, NDArray& expected, NDArray& output) {
auto val = indices->e(0);
for (int e = 1; e < indices->lengthOf(); e++) {
output = indices->e(e);
if (val.e<Nd4jLong>(0) > output.e<Nd4jLong>(0))
return false;
val = indices->e(e);
}
return true;
}
//BUILD_SINGLE_TEMPLATE(template bool segmentIndicesValidate_, (NDArray*, NDArray&, NDArray&), LIBND4J_TYPES);
BUILD_SINGLE_TEMPLATE(template void segmentProdFunctor_, (NDArray* input, NDArray* indices, NDArray* output), LIBND4J_TYPES);
BUILD_SINGLE_TEMPLATE(template void segmentSumFunctor_, (NDArray* input, NDArray* indices, NDArray* output), LIBND4J_TYPES);
BUILD_SINGLE_TEMPLATE(template void segmentMeanFunctor_, (NDArray* input, NDArray* indices, NDArray* output), LIBND4J_TYPES);
BUILD_SINGLE_TEMPLATE(template void segmentMinFunctor_, (NDArray* input, NDArray* indices, NDArray* output), LIBND4J_TYPES);
BUILD_SINGLE_TEMPLATE(template void segmentMaxFunctor_, (NDArray* input, NDArray* indices, NDArray* output), LIBND4J_TYPES);
// -------------------------------------------------------------------------------------------------------------- //
// Unsorted segment ops
// -------------------------------------------------------------------------------------------------------------- //
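// Unlike the sorted variants above, these accept indices in any order: each functor first
// groups element positions by segment id into a std::map, then reduces every group independently,
// so numOfClasses must be supplied by the caller rather than inferred from the indices.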
bool unsortedSegmentIndicesValidate(nd4j::LaunchContext * context, NDArray* indices, Nd4jLong expected, Nd4jLong& output) {
Nd4jLong val = indices->e<Nd4jLong>(0);
Nd4jLong maxInd = indices->argMax();
if (indices->e<Nd4jLong>(maxInd) >= expected) {
output = val;
return false;
}
output = expected;
return true;
}
template <typename T>
static void unsortedSegmentMaxFunctor_(NDArray* input, NDArray* indices, Nd4jLong numOfClasses, NDArray* output) {
// if input is a vector (as in the doc sample)
//int idx = static_cast<int>((*indices)(0.));
std::map<Nd4jLong, std::vector<Nd4jLong>> idxs;//(indices->lengthOf());
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e)
idxs[indices->e<Nd4jLong>(e)].push_back(e);
//std::sort(idxs.begin(), idxs.end());
if (input->isVector()) { // 1D case
T maxVal = DataTypeUtils::max<T>();
output->assign(-maxVal);
for (auto fi = idxs.begin(); fi != idxs.end(); ++fi) {
T val = input->e<T>(fi->second.at(0));
for (Nd4jLong idx = 1; idx < fi->second.size(); ++idx) {
val = nd4j::math::nd4j_max(val, input->e<T>(fi->second.at(idx)));
}
output->p(fi->first, val);
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
T maxVal = DataTypeUtils::max<T>();
output->assign(-maxVal);
for (auto fi = idxs.begin(); fi != idxs.end(); ++fi) {
auto outputT = listOfOutTensors->at(fi->first);
outputT->assign(listOfTensors->at(fi->second.at(0)));
for (Nd4jLong idx = 1; idx < fi->second.size(); ++idx) {
auto maxT = listOfTensors->at(fi->second.at(idx));
for (Nd4jLong e = 0; e < outputT->lengthOf(); ++e) {
T val = nd4j::math::nd4j_max(maxT->e<T>(e), outputT->e<T>(e));
outputT->p(e, val);
}
}
//outputT->assign(maxT);
}
}
}
void unsortedSegmentMaxFunctor(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, Nd4jLong numOfClasses, NDArray* output) {
BUILD_SINGLE_SELECTOR(input->dataType(), unsortedSegmentMaxFunctor_, (input, indices, numOfClasses, output), NUMERIC_TYPES);
}
BUILD_SINGLE_TEMPLATE(template void unsortedSegmentMaxFunctor_, (NDArray* input, NDArray* indices, Nd4jLong numOfClasses, NDArray* output), NUMERIC_TYPES);
template <typename T>
static void unsortedSegmentMinFunctor_(NDArray* input, NDArray* indices, Nd4jLong numOfClasses, NDArray* output) {
// if input is a vector (as in the doc sample)
//int idx = static_cast<int>((*indices)(0.));
std::map<Nd4jLong, std::vector<Nd4jLong>> idxs;//(indices->lengthOf());
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e)
idxs[indices->e<Nd4jLong>(e)].push_back(e);
//std::sort(idxs.begin(), idxs.end());
if (input->isVector()) { // 1D case
T maxVal = DataTypeUtils::max<T>();
output->assign(maxVal);
for (auto fi = idxs.begin(); fi != idxs.end(); ++fi) {
T val = input->t<T>(fi->second.at(0));
for (size_t idx = 1; idx < fi->second.size(); ++idx) {
val = nd4j::math::nd4j_min(val, input->t<T>(fi->second.at(idx)));
}
output->t<T>(fi->first) = val;
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
T maxVal = DataTypeUtils::max<T>();
output->assign(maxVal);
for (auto fi = idxs.begin(); fi != idxs.end(); ++fi) {
auto outputT = listOfOutTensors->at(fi->first);
outputT->assign(listOfTensors->at(fi->second.at(0)));
for (Nd4jLong idx = 1; idx < fi->second.size(); ++idx) {
auto minT = listOfTensors->at(fi->second.at(idx));
for (Nd4jLong e = 0; e < outputT->lengthOf(); ++e) {
outputT->t<T>(e) = nd4j::math::nd4j_min(minT->t<T>(e), outputT->t<T>(e));
}
}
//outputT->assign(maxT);
}
}
}
void unsortedSegmentMinFunctor(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, Nd4jLong numOfClasses, NDArray* output) {
BUILD_SINGLE_SELECTOR(input->dataType(), unsortedSegmentMinFunctor_, (input, indices, numOfClasses, output),
NUMERIC_TYPES);
}
BUILD_SINGLE_TEMPLATE(template void unsortedSegmentMinFunctor_, (NDArray* input, NDArray* indices, Nd4jLong numOfClasses, NDArray* output), NUMERIC_TYPES);
void unsortedSegmentMeanFunctor(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, Nd4jLong numOfClasses, NDArray* output) {
std::map<Nd4jLong, std::vector<Nd4jLong>> idxs;//(indices->lengthOf());
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e)
idxs[indices->e<Nd4jLong>(e)].push_back(e);
//std::sort(idxs.begin(), idxs.end());
if (input->isVector()) { // 1D case
for (auto fi = idxs.begin(); fi != idxs.end(); ++fi) {
double sumValue = input->e<double>(fi->second.at(0));
auto loop_size = fi->second.size();
PRAGMA_OMP_PARALLEL_FOR_SIMD_REDUCTION(+:sumValue)
for (size_t idx = 1; idx < loop_size; ++idx) {
sumValue += input->e<double>(fi->second.at(idx));
}
output->p(fi->first, sumValue / fi->second.size());
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
for (auto fi = idxs.begin(); fi != idxs.end(); ++fi) {
auto outputT = listOfOutTensors->at(fi->first);
outputT->assign(listOfTensors->at(fi->second.at(0)));
Nd4jLong loopSize = fi->second.size();
PRAGMA_OMP_PARALLEL_FOR
for (Nd4jLong idx = 1; idx < loopSize; ++idx) {
auto current = listOfTensors->at(fi->second.at(idx));
*outputT += *current;
}
(*outputT) /= double(fi->second.size());
}
}
}
void unsortedSegmentSumFunctor(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, Nd4jLong numOfClasses, NDArray* output) {
std::map<Nd4jLong, std::vector<Nd4jLong>> idxs;//(indices->lengthOf());
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e)
idxs[indices->e<Nd4jLong>(e)].push_back(e);
if (input->isVector()) { // 1D case
for (auto fi = idxs.begin(); fi != idxs.end(); ++fi) {
double sumValue = input->e<double>(fi->second.at(0));
Nd4jLong loop_size = fi->second.size();
PRAGMA_OMP_PARALLEL_FOR_REDUCTION(+:sumValue)
for (Nd4jLong idx = 1; idx < loop_size; ++idx) {
sumValue += input->e<double>(fi->second.at(idx));
}
output->p(fi->first, sumValue);
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
for (auto fi = idxs.begin(); fi != idxs.end(); ++fi) {
auto outputT = listOfOutTensors->at(fi->first);
outputT->assign(listOfTensors->at(fi->second.at(0)));
Nd4jLong loop_size = fi->second.size();
PRAGMA_OMP_PARALLEL_FOR
for (Nd4jLong idx = 1; idx < loop_size; ++idx) {
auto current = listOfTensors->at(fi->second.at(idx));
*(outputT) += *current;
}
//outputT->assign(maxT);
}
}
}
template <typename T>
void unsortedSegmentProdFunctor_(NDArray* input, NDArray* indices, Nd4jLong numOfClasses, NDArray* output) {
std::map<Nd4jLong, std::vector<Nd4jLong>> idxs;//(indices->lengthOf());
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e)
idxs[indices->e<Nd4jLong>(e)].push_back(e);
//std::sort(idxs.begin(), idxs.end());
output->assign(1.f);
if (input->isVector()) { // 1D case
for (auto fi = idxs.begin(); fi != idxs.end(); ++fi) {
T prodValue = input->e<T>(fi->second.at(0));
for (size_t idx = 1; idx < fi->second.size(); ++idx) {
prodValue *= input->e<T>(fi->second.at(idx));
}
output->p(fi->first, prodValue);
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
for (auto fi = idxs.begin(); fi != idxs.end(); ++fi) {
auto outputT = listOfOutTensors->at(fi->first);
outputT->assign(listOfTensors->at(fi->second.at(0)));
for (Nd4jLong idx = 1; idx < fi->second.size(); ++idx) {
auto current = listOfTensors->at(fi->second.at(idx));
*outputT *= *current;
}
}
}
}
void unsortedSegmentProdFunctor(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, Nd4jLong numOfClasses, NDArray* output) {
BUILD_SINGLE_SELECTOR(input->dataType(), unsortedSegmentProdFunctor_, (input, indices, numOfClasses, output), NUMERIC_TYPES);
}
BUILD_SINGLE_TEMPLATE(template void unsortedSegmentProdFunctor_, (NDArray* input, NDArray* indices, Nd4jLong numOfClasses, NDArray* output), NUMERIC_TYPES);
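// sqrt_n: like unsorted mean, but each segment sum is divided by sqrt(segment size) instead of the size.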
void unsortedSegmentSqrtNFunctor(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, Nd4jLong numOfClasses, NDArray* output) {
std::map<Nd4jLong, std::vector<Nd4jLong>> idxs;//(indices->lengthOf());
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e)
idxs[indices->e<Nd4jLong>(e)].push_back(e);
//std::sort(idxs.begin(), idxs.end());
if (input->isVector()) { // 1D case
for (auto fi = idxs.begin(); fi != idxs.end(); ++fi) {
double sumValue = input->e<double>(fi->second.at(0));
for (Nd4jLong idx = 1; idx < fi->second.size(); ++idx) {
sumValue += input->e<double>(fi->second.at(idx));
}
output->p(fi->first, sumValue / nd4j::math::nd4j_sqrt<Nd4jLong, double>(fi->second.size()));
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
for (auto fi = idxs.begin(); fi != idxs.end(); ++fi) {
auto outputT = listOfOutTensors->at(fi->first);
outputT->assign(listOfTensors->at(fi->second.at(0)));
for (Nd4jLong idx = 1; idx < fi->second.size(); ++idx) {
auto current = listOfTensors->at(fi->second.at(idx));
*outputT += *current;
}
//outputT->assign(maxT);
(*outputT) /= nd4j::math::nd4j_sqrt<size_t, double>(fi->second.size());
}
}
}
// -------------------------------------------------------------------------------------------------------------- //
// Backpropagate ops helpers
// -------------------------------------------------------------------------------------------------------------- //
// Sorted backpropagate ops
//
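// Gradient routing summary: max/min BP re-run the forward pass and pass the upstream gradient
// only to positions matching the segment extremum (within a small tolerance); mean BP divides
// the gradient by the segment size; sum BP copies it; prod BP scales it by (segment product) / x,
// i.e. the product of the other elements in the segment.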
// segment max
template <typename T>
int segmentMaxFunctorBP_(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, NDArray* output) {
//int numOfClasses = gradOut->sizeAt(0);
// if input is a vector (as in the doc sample)
auto tempRes = gradOut->dup();
segmentMaxFunctor_<T>(input, indices, tempRes);
if (input->isVector()) {
Nd4jLong loop_size = input->lengthOf();
PRAGMA_OMP_PARALLEL_FOR
for (Nd4jLong e = 0; e < loop_size; ++e) {
Nd4jLong classNum = indices->e<Nd4jLong>(e);
if (nd4j::math::nd4j_abs(tempRes->e<T>(classNum) - input->e<T>(e)) <= T(1.e-6))
output->p(e, gradOut->e<T>(classNum));
}
}
else {
std::vector<int> restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfBPTensors(tempRes->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfGradOuts(gradOut->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
//int numOfClasses = tempRes->sizeAt(0); // number of classes
//std::vector<std::pair<NDArray*, int>> outputs(numOfClasses);
PRAGMA_OMP_PARALLEL_FOR
for (Nd4jLong i = 0; i < indices->lengthOf(); i++) {
Nd4jLong classNum = indices->e<Nd4jLong>(i);
NDArray* current = listOfTensors->at(i);
NDArray* currentOut = listOfOutTensors->at(i);
NDArray* currentGradOut = listOfGradOuts->at(classNum);
for (Nd4jLong e = 0; e < current->lengthOf(); e++) {
if (nd4j::math::nd4j_abs(listOfBPTensors->at(classNum)->e<T>(e) - current->e<T>(e)) <= T(1.e-6))
currentOut->p(e, currentGradOut->e<T>(e));
}
}
}
delete tempRes;
return ND4J_STATUS_OK;
}
int segmentMaxFunctorBP(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, NDArray* output) {
BUILD_SINGLE_SELECTOR(output->dataType(), return segmentMaxFunctorBP_, (context, input, indices, gradOut, output), NUMERIC_TYPES);
}
BUILD_SINGLE_TEMPLATE(template int segmentMaxFunctorBP_, (nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, NDArray* output), NUMERIC_TYPES);
// segment min
int segmentMinFunctorBP(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, NDArray* output) {
std::unique_ptr<NDArray> tempRes(gradOut->dup());
segmentMinFunctor(context, input, indices, tempRes.get());
if (input->isVector()) {
PRAGMA_OMP_PARALLEL_FOR
for (Nd4jLong e = 0; e < input->lengthOf(); ++e) {
Nd4jLong classNum = indices->e<Nd4jLong>(e);
if (nd4j::math::nd4j_abs(tempRes->e<double>(classNum) - input->e<double>(e)) < 1.e-5)
output->p(e, gradOut->e<double>(classNum));
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfBPTensors(tempRes->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfGradOuts(gradOut->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
//int numOfClasses = tempRes->sizeAt(0); // number of classes
//std::vector<std::pair<NDArray*, int>> outputs(numOfClasses);
output->assign(0.);
int pos = 0;
PRAGMA_OMP_PARALLEL_FOR
for (int i = 0; i < indices->lengthOf(); i++) {
Nd4jLong classNum = indices->e<Nd4jLong>(i);
NDArray* current = listOfTensors->at(i);
NDArray* currentOut = listOfOutTensors->at(i);
NDArray* currentGradOut = listOfGradOuts->at(classNum);
for (int e = 0; e < current->lengthOf(); e++) {
if (nd4j::math::nd4j_abs(listOfBPTensors->at(classNum)->e<double>(e) - current->e<double>(e)) < 1.e-5)
currentOut->p(e, currentGradOut->e<double>(e));
}
}
}
return ND4J_STATUS_OK;
}
// segment mean
int segmentMeanFunctorBP(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, NDArray* output) {
int numClasses = output->sizeAt(0);
std::map<Nd4jLong, Nd4jLong> classCount;//(numClasses);
for (Nd4jLong count = 0; count < numClasses; ++count) {
classCount[count] = 0;
}
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e) {
classCount[indices->e<Nd4jLong>(e)] ++;
}
// if input is a vector (as in the doc sample)
if (input->isVector()) {
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e) {
Nd4jLong classNum = indices->e<Nd4jLong>(e);
output->p(e, gradOut->e<double>(classNum) / classCount[classNum]);
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfGradOuts(gradOut->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
//int numOfClasses = tempRes->sizeAt(0); // number of classes
//std::vector<std::pair<NDArray*, int>> outputs(numOfClasses);
int pos = 0;
PRAGMA_OMP_PARALLEL_FOR
for (int i = 0; i < indices->lengthOf(); i++) {
Nd4jLong classNum = indices->e<Nd4jLong>(i);
NDArray* current = listOfTensors->at(i);
NDArray* currentOut = listOfOutTensors->at(i);
NDArray* currentGradOut = listOfGradOuts->at(classNum);
for (int e = 0; e < current->lengthOf(); e++) {
currentOut->p(e, currentGradOut->e<double>(e) / classCount[classNum]);
}
}
}
return ND4J_STATUS_OK;
}
int segmentSumFunctorBP(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, NDArray* output) {
// int numClasses = output->sizeAt(0);
// if input is a vector (as in the doc sample)
Nd4jLong idx = indices->e<Nd4jLong>(0);
if (input->isVector()) {
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e) {
Nd4jLong classNum = indices->e<Nd4jLong>(e);
output->p(e, gradOut->e<double>(classNum));
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfGradOuts(gradOut->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
PRAGMA_OMP_PARALLEL_FOR
for (int i = 0; i < indices->lengthOf(); i++) {
Nd4jLong classNum = indices->e<Nd4jLong>(i);
NDArray* current = listOfTensors->at(i);
NDArray* currentOut = listOfOutTensors->at(i);
NDArray* currentGradOut = listOfGradOuts->at(classNum);
currentOut->assign(currentGradOut);
}
}
return ND4J_STATUS_OK;
}
int segmentProdFunctorBP(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, NDArray* output) {
auto tempRes = gradOut->dup();
segmentProdFunctor(context, input, indices, tempRes);
if (input->isVector()) {
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e) {
Nd4jLong classNum = indices->e<Nd4jLong>(e);
output->p(e, gradOut->e<double>(classNum) * tempRes->e<double>(classNum)/ input->e<double>(e));
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfBPTensors(tempRes->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfGradOuts(gradOut->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
//int numOfClasses = tempRes->sizeAt(0); // number of classes
//std::vector<std::pair<NDArray*, int>> outputs(numOfClasses);
PRAGMA_OMP_PARALLEL_FOR
for (int i = 0; i < indices->lengthOf(); i++) {
Nd4jLong classNum = indices->e<Nd4jLong>(i);
NDArray* current = listOfTensors->at(i);
NDArray* currentOut = listOfOutTensors->at(i);
NDArray* currentGradOut = listOfGradOuts->at(classNum);
NDArray* currentFFOut = listOfBPTensors->at(classNum);
currentOut->assign((*currentFFOut) * (*currentGradOut) / (*current));
}
}
delete tempRes;
return ND4J_STATUS_OK;
}
// -------------------------------------------------------------------------------------------------------------- //
// Unsorted backpropagate segment ops
// -------------------------------------------------------------------------------------------------------------- //
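// These mirror the sorted BP helpers above, with numOfClasses passed explicitly because
// unordered indices do not reveal the number of segments.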
template <typename T>
static int unsortedSegmentMaxFunctorBP_(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, Nd4jLong numOfClasses, NDArray* output) {
// int numOfClasses = gradOut->sizeAt(0);
// if input is a vector (as in the doc sample)
auto tempRes = gradOut->dup();
unsortedSegmentMaxFunctor(context, input, indices, numOfClasses, tempRes);
if (input->isVector()) {
for (Nd4jLong e = 0; e < input->lengthOf(); ++e) {
Nd4jLong classNum = indices->e<Nd4jLong>(e);
if (nd4j::math::nd4j_abs(tempRes->e<double>(classNum) - input->e<double>(e)) < 1.e-5)
output->p(e, gradOut->e<T>(classNum));
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfBPTensors(tempRes->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfGradOuts(gradOut->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
for (int i = 0; i < indices->lengthOf(); i++) {
Nd4jLong classNum = indices->e<Nd4jLong>(i);
NDArray* current = listOfTensors->at(i);
NDArray* currentOut = listOfOutTensors->at(i);
NDArray* currentGradOut = listOfGradOuts->at(classNum);
for (int e = 0; e < current->lengthOf(); e++) {
if (nd4j::math::nd4j_abs(listOfBPTensors->at(classNum)->e<double>(e) - current->e<double>(e)) < 1.e-5)
currentOut->p(e, currentGradOut->e<T>(e));
}
}
}
delete tempRes;
return ND4J_STATUS_OK;
}
int unsortedSegmentMaxFunctorBP(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, Nd4jLong numOfClasses, NDArray* output) {
BUILD_SINGLE_SELECTOR(output->dataType(), return unsortedSegmentMaxFunctorBP_, (context, input, indices, gradOut, numOfClasses, output), NUMERIC_TYPES);
}
BUILD_SINGLE_TEMPLATE(template int unsortedSegmentMaxFunctorBP_, (nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, Nd4jLong numOfClasses, NDArray* output), NUMERIC_TYPES);
template <typename T>
static int unsortedSegmentMinFunctorBP_(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, Nd4jLong numOfClasses, NDArray* output) {
auto tempRes = gradOut->dup();
unsortedSegmentMinFunctor(context, input, indices, numOfClasses, tempRes);
if (input->isVector()) {
PRAGMA_OMP_PARALLEL_FOR
for (Nd4jLong e = 0; e < input->lengthOf(); ++e) {
Nd4jLong classNum = indices->e<Nd4jLong>(e);
if (nd4j::math::nd4j_abs(tempRes->t<T>(classNum) - input->t<T>(e)) < 1.e-6)
output->t<T>(e) = gradOut->t<T>(classNum);
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfBPTensors(tempRes->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfGradOuts(gradOut->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
//int numOfClasses = tempRes->sizeAt(0); // number of classes
//std::vector<std::pair<NDArray*, int>> outputs(numOfClasses);
PRAGMA_OMP_PARALLEL_FOR
for (int i = 0; i < indices->lengthOf(); i++) {
Nd4jLong classNum = indices->e<Nd4jLong>(i);
NDArray* current = listOfTensors->at(i);
NDArray* currentOut = listOfOutTensors->at(i);
NDArray* currentGradOut = listOfGradOuts->at(classNum);
for (int e = 0; e < current->lengthOf(); e++) {
if (nd4j::math::nd4j_abs(listOfBPTensors->at(classNum)->t<T>(e) - current->t<T>(e)) < 1.e-6)
currentOut->t<T>(e) = currentGradOut->t<T>(e);
}
}
}
delete tempRes;
return ND4J_STATUS_OK;
}
int unsortedSegmentMinFunctorBP(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, Nd4jLong numOfClasses, NDArray* output) {
BUILD_SINGLE_SELECTOR(output->dataType(), return unsortedSegmentMinFunctorBP_, (context, input, indices, gradOut, numOfClasses, output), NUMERIC_TYPES);
}
BUILD_SINGLE_TEMPLATE(template int unsortedSegmentMinFunctorBP_, (nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, Nd4jLong numOfClasses, NDArray* output), NUMERIC_TYPES);
int unsortedSegmentMeanFunctorBP(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, Nd4jLong numOfClasses, NDArray* output) {
std::map<Nd4jLong, Nd4jLong> classCount;//(numClasses);
for (Nd4jLong count = 0; count < numOfClasses; ++count) {
classCount[count] = 0;
}
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e) {
classCount[indices->e<Nd4jLong>(e)]++;
}
// if input is a vector (as in the doc sample)
if (input->isVector()) {
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e) {
Nd4jLong classNum = indices->e<Nd4jLong>(e);
output->p(e, gradOut->e<double>(classNum) / classCount[classNum]);
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfGradOuts(gradOut->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
for (int i = 0; i < indices->lengthOf(); i++) {
Nd4jLong classNum = indices->e<Nd4jLong>(i);
NDArray* current = listOfTensors->at(i);
NDArray* currentOut = listOfOutTensors->at(i);
NDArray* currentGradOut = listOfGradOuts->at(classNum);
currentOut->assign(*currentGradOut / double(classCount[classNum]));
}
}
return ND4J_STATUS_OK;
}
int unsortedSegmentSumFunctorBP(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, Nd4jLong numOfClasses, NDArray* output) {
// if input is a vector (as in the doc sample)
Nd4jLong idx = indices->e<Nd4jLong>(0);
if (input->isVector()) {
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e) {
Nd4jLong classNum = indices->e<Nd4jLong>(e);
output->p(e, gradOut->e<double>(classNum));
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfGradOuts(gradOut->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
PRAGMA_OMP_PARALLEL_FOR
for (int i = 0; i < indices->lengthOf(); i++) {
Nd4jLong classNum = indices->e<Nd4jLong>(i);
//NDArray* current = listOfTensors->at(i);
NDArray* currentOut = listOfOutTensors->at(i);
NDArray* currentGradOut = listOfGradOuts->at(classNum);
currentOut->assign(currentGradOut);
}
}
return ND4J_STATUS_OK;
}
int unsortedSegmentProdFunctorBP(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, Nd4jLong numOfClasses, NDArray* output) {
auto tempRes = gradOut->dup();
unsortedSegmentProdFunctor(context, input, indices, numOfClasses, tempRes);
if (input->isVector()) {
PRAGMA_OMP_PARALLEL_FOR
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e) {
Nd4jLong classNum = indices->e<Nd4jLong>(e);
output->p<double>(e, gradOut->e<double>(classNum) * tempRes->e<double>(classNum)/ input->e<double>(e));
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfBPTensors(tempRes->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfGradOuts(gradOut->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
PRAGMA_OMP_PARALLEL_FOR
for (int i = 0; i < indices->lengthOf(); i++) {
Nd4jLong classNum = indices->e<Nd4jLong>(i);
NDArray* current = listOfTensors->at(i);
NDArray* currentOut = listOfOutTensors->at(i);
NDArray* currentGradOut = listOfGradOuts->at(classNum);
auto currentFFOut = listOfBPTensors->at(classNum);
currentOut->assign((*currentFFOut) * (*currentGradOut) / (*current));
}
}
delete tempRes;
return ND4J_STATUS_OK;
}
// template <typename T>
int unsortedSegmentSqrtNFunctorBP(nd4j::LaunchContext * context, NDArray* input, NDArray* indices, NDArray* gradOut, Nd4jLong numOfClasses, NDArray* output) {
std::map<Nd4jLong, Nd4jLong> classCount;//(numClasses);
for (Nd4jLong count = 0; count < numOfClasses; ++count) {
classCount[count] = 0;
}
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e) {
classCount[indices->e<Nd4jLong>(e)]++;
}
// if input is a vector (as in the doc sample)
if (input->isVector()) {
PRAGMA_OMP_PARALLEL_FOR
for (Nd4jLong e = 0; e < indices->lengthOf(); ++e) {
Nd4jLong classNum = indices->e<Nd4jLong>(e);
output->p(e, gradOut->e<double>(classNum) / nd4j::math::nd4j_sqrt<double,double>(classCount[classNum]));
}
}
else {
auto restDims = ShapeUtils::evalDimsToExclude(input->rankOf(), {0});
std::unique_ptr<ResultSet> listOfGradOuts(gradOut->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfTensors(input->allTensorsAlongDimension(restDims));
std::unique_ptr<ResultSet> listOfOutTensors(output->allTensorsAlongDimension(restDims));
//int numOfClasses = tempRes->sizeAt(0); // number of classes
//std::vector<std::pair<NDArray*, int>> outputs(numOfClasses);
PRAGMA_OMP_PARALLEL_FOR
for (int i = 0; i < indices->lengthOf(); i++) {
Nd4jLong classNum = indices->e<Nd4jLong>(i);
NDArray* current = listOfTensors->at(i);
NDArray* currentOut = listOfOutTensors->at(i);
NDArray* currentGradOut = listOfGradOuts->at(classNum);
for (int e = 0; e < current->lengthOf(); e++) {
currentOut->p(e, currentGradOut->e<double>(e) / nd4j::math::nd4j_sqrt<double,double>(classCount[classNum]));
}
}
}
return ND4J_STATUS_OK;
}
}
}
}