cavis/libnd4j/tests_cpu/layers_tests/WorkspaceTests.cpp
raver119 3c4e959e21 [WIP] More of CUDA (#95)
* initial commit

Signed-off-by: raver119 <raver119@gmail.com>

* Implementation of hashcode cuda helper. Working edition.

* Fixed parallel test input arrangements.

* Fixed tests for hashcode op.

* Fixed shape calculation for image:crop_and_resize op and test.

* NativeOps tests. Initial test suite.

* Added tests for indexReduce methods.

* Added test on execBroadcast with NDArray as dimensions.

* Added test on execBroadcastBool with NDArray as dimensions.

* Added tests on execPairwiseTransform and execPairwiseTransformBool.

* Added tests for execReduce with scalar results.

* Added reduce tests for non-empty dims array.

* Added tests for reduce3.

* Added tests for execScalar.

* Added tests for execSummaryStats.

* - provide cpu/cuda code for batch_to_space
- testing it

Signed-off-by: Yurii <yurii@skymind.io>

* - remove old test for batch_to_space (had wrong format and numbers were not checked)

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed compilation errors with test.

* Added test for execTransformFloat.

* Added test for execTransformSame.

* Added test for execTransformBool.

* Added test for execTransformStrict.

* Added tests for execScalar/execScalarBool with TADs.

* Added test for flatten.

* - provide cpu/cuda code for space_to_batch operation

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for concat.

* comment out unnecessary code in s_t_b

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for specialConcat.

* Added tests for memcpy/set routines.

* Fixed pullRow cuda test.

* Added pullRow test.

* Added average test.

* - correct typo in NDArray::applyPairwiseTransform(nd4j::pairwise::BoolOps op...)

Signed-off-by: Yurii <yurii@skymind.io>

* - debugging and fixing cuda tests in JavaInteropTests file

Signed-off-by: Yurii <yurii@skymind.io>

* - correct some tests

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for shuffle.

* Fixed ops declarations.

* Restored omp and added shuffle test.

* Added convertTypes test.

* Added tests for execRandom. Eliminated usage of RandomBuffer with NativeOps.

* Added sort tests.

* Added tests for execCustomOp.

* - further debugging and fixing of tests that terminated with a crash

Signed-off-by: Yurii <yurii@skymind.io>

* Added tests for calculateOutputShapes.

* Added Benchmarks test.

* Commented out benchmark tests.

* change assertion

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for apply_sgd op. Added cpu helper for that op.

* Implemented cuda helper for apply_sgd op. Fixed tests for NativeOps.

* Added test for assign broadcastable.

* Added tests for assign_bp op.

* Added tests for axpy op.

* - assign/execScalar/execTransformAny signature change
- minor test fix

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed axpy op.

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* - fix tests for nativeOps::concat

Signed-off-by: Yurii <yurii@skymind.io>

* sequential transform/scalar

Signed-off-by: raver119 <raver119@gmail.com>

* allow nested parallelism

Signed-off-by: raver119 <raver119@gmail.com>

* assign_bp leak fix

Signed-off-by: raver119 <raver119@gmail.com>

* block setRNG fix

Signed-off-by: raver119 <raver119@gmail.com>

* enable parallelism by default

Signed-off-by: raver119 <raver119@gmail.com>

* enable nested parallelism by default

Signed-off-by: raver119 <raver119@gmail.com>

* Added cuda implementation for row_count helper.

* Added implementation for tsne gains op helper.

* - take into account possible situations when input arrays are empty in reduce_ cuda stuff

Signed-off-by: Yurii <yurii@skymind.io>

* Implemented tsne/edge_forces op cuda-based helper. Parallelized cpu-based helper for edge_forces.

* Added kernel for tsne/symmetrized op helper.

* Implementation of tsne/symmetrized op cuda helper. Working edition.

* Eliminated leftover debug printfs.

* Added test for broadcastgradientargs op.

* host-only fallback for empty reduce float

Signed-off-by: raver119 <raver119@gmail.com>

* - some tests fixes

Signed-off-by: Yurii <yurii@skymind.io>

* - correct the rest of reduce_ stuff

Signed-off-by: Yurii <yurii@skymind.io>

* - further correction of reduce_ stuff

Signed-off-by: Yurii <yurii@skymind.io>

* Added test for Cbow op. Also added cuda implementation for cbow helpers.

* - improve code of stack operation for scalar case

Signed-off-by: Yurii <yurii@skymind.io>

* - provide cuda kernel for gatherND operation

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of cbow helpers with cuda kernels.

* minor tests tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* minor tests tweaks

Signed-off-by: raver119 <raver119@gmail.com>

* - further correction of cuda stuff

Signed-off-by: Yurii <yurii@skymind.io>

* Implementation of cbow op helper with cuda kernels. Working edition.

* Skip random testing for cudablas case.

* lstmBlockCell context fix

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for ELU and ELU_BP ops.

* Added tests for eq_scalar, gt_scalar, gte_scalar and lte_scalar ops.

* Added tests for neq_scalar.

* Added test for noop.

* - further work on clipbynorm_bp

Signed-off-by: Yurii <yurii@skymind.io>

* - get rid of concat op call, use direct concat helper call instead

Signed-off-by: Yurii <yurii@skymind.io>

* lstmBlockCell context fix

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for lrelu and lrelu_bp.

* Added tests for selu and selu_bp.

* Fixed lrelu derivative helpers.

* - some corrections in lstm

Signed-off-by: Yurii <yurii@skymind.io>

* operator * result shape fix

Signed-off-by: raver119 <raver119@gmail.com>

* - correct typo in lstmCell

Signed-off-by: Yurii <yurii@skymind.io>

* few tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* CUDA inverse broadcast bool fix

Signed-off-by: raver119 <raver119@gmail.com>

* disable MMAP test for CUDA

Signed-off-by: raver119 <raver119@gmail.com>

* BooleanOp syncToDevice

Signed-off-by: raver119 <raver119@gmail.com>

* meh

Signed-off-by: raver119 <raver119@gmail.com>

* additional data types for im2col/col2im

Signed-off-by: raver119 <raver119@gmail.com>

* Added test for firas_sparse op.

* one more RandomBuffer test excluded

Signed-off-by: raver119 <raver119@gmail.com>

* Added tests for flatten op.

* Added test for Floor op.

* bunch of tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* mmulDot tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* Implemented floordiv_bp op and tests.

* Fixed scalar case with cuda implementation for bds.

* - work on cuda kernel for clip_by_norm backprop op is completed

Signed-off-by: Yurii <yurii@skymind.io>

* Eliminated cbow crash.

* more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* Eliminated abort in batched nlp test.

* more tests fixed

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed shared flag initialization.

* disabled bunch of cpu workspaces tests

Signed-off-by: raver119 <raver119@gmail.com>

* scalar operators fix: missing registerSpecialUse call

Signed-off-by: raver119 <raver119@gmail.com>

* Fixed logdet for cuda and tests.

* - correct clipbynorm_bp

Signed-off-by: Yurii <yurii@skymind.io>

* Fixed crop_and_resize shape datatype.

* - correct some mmul tests

Signed-off-by: Yurii <yurii@skymind.io>

/*******************************************************************************
* Copyright (c) 2015-2018 Skymind, Inc.
*
* This program and the accompanying materials are made available under the
* terms of the Apache License, Version 2.0 which is available at
* https://www.apache.org/licenses/LICENSE-2.0.
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*
* SPDX-License-Identifier: Apache-2.0
******************************************************************************/
//
// @author raver119@gmail.com
//
#ifndef LIBND4J_WORKSPACETESTS_H
#define LIBND4J_WORKSPACETESTS_H

#include "testlayers.h"
#include <NDArray.h>
#include <Workspace.h>
#include <MemoryRegistrator.h>
#include <MmulHelper.h>

using namespace nd4j;
using namespace nd4j::memory;

class WorkspaceTests : public testing::Test {
};

TEST_F(WorkspaceTests, BasicInitialization1) {
    Workspace workspace(1024);

    ASSERT_EQ(1024, workspace.getCurrentSize());
    ASSERT_EQ(0, workspace.getCurrentOffset());
}

TEST_F(WorkspaceTests, BasicInitialization2) {
    Workspace workspace(65536);

    ASSERT_EQ(0, workspace.getCurrentOffset());
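
    // attaching the workspace to a LaunchContext should route the NDArray allocations below
    // through it, which is what the offset check at the end verifies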
    LaunchContext ctx;
    ctx.setWorkspace(&workspace);

    auto array = NDArrayFactory::create<float>('c', {5, 5}, &ctx);
    array.p(0, 1.0f);
    array.p(5, 1.0f);

    auto v = array.reduceNumber(reduce::Sum);
    auto f = v.e<float>(0);
    v.printShapeInfo("v shape");

    ASSERT_NEAR(2.0f, f, 1e-5);
    ASSERT_TRUE(workspace.getCurrentOffset() > 0);
}

TEST_F(WorkspaceTests, BasicInitialization3) {
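    // a default-constructed Workspace has zero capacity, so allocations below are expected to
    // fall back to regular heap memory and leave the workspace offset at 0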
    Workspace workspace;

    ASSERT_EQ(0, workspace.getCurrentOffset());

    LaunchContext ctx;
    ctx.setWorkspace(&workspace);

    auto array = NDArrayFactory::create<float>('c', {5, 5}, &ctx);
    array.p(0, 1.0f);
    array.p(5, 1.0f);

    auto v = array.reduceNumber(reduce::Sum);
    auto f = v.e<float>(0);
    v.printShapeInfo("v shape");

    ASSERT_NEAR(2.0f, array.reduceNumber(reduce::Sum).e<float>(0), 1e-5);
    ASSERT_TRUE(workspace.getCurrentOffset() == 0);
}

TEST_F(WorkspaceTests, ResetTest1) {
    Workspace workspace(65536);

    LaunchContext ctx;
    ctx.setWorkspace(&workspace);

    auto array = NDArrayFactory::create<float>('c', {5, 5}, &ctx);
    array.p(0, 1.0f);
    array.p(5, 1.0f);

    workspace.scopeOut();
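
    // each scopeIn/scopeOut cycle rewinds the offset, so every iteration reuses the same buffer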
    for (int e = 0; e < 5; e++) {
        workspace.scopeIn();

        auto array2 = NDArrayFactory::create<float>('c', {5, 5}, &ctx);
        array2.p(0, 1.0f);
        array2.p(5, 1.0f);

        ASSERT_NEAR(2.0f, array2.reduceNumber(reduce::Sum).e<float>(0), 1e-5);

        workspace.scopeOut();
    }

    ASSERT_EQ(65536, workspace.getCurrentSize());
    ASSERT_EQ(0, workspace.getCurrentOffset());
    ASSERT_EQ(0, workspace.getSpilledSize());
}

TEST_F(WorkspaceTests, StretchTest1) {
    if (!Environment::getInstance()->isCPU())
        return;

    Workspace workspace(128);
    void* ptr = workspace.allocateBytes(8);
    workspace.scopeOut();

    ASSERT_EQ(0, workspace.getSpilledSize());
    ASSERT_EQ(0, workspace.getSpilledSecondarySize());
    ASSERT_EQ(0, workspace.getCurrentOffset());
    ASSERT_EQ(0, workspace.getCurrentSecondaryOffset());

    workspace.scopeIn();
    for (int e = 0; e < 10; e++) {
        workspace.allocateBytes(128);
    }
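
    // the first 128-byte request fills the whole 128-byte buffer; the remaining nine spill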
    ASSERT_EQ(128 * 9, workspace.getSpilledSize());

    workspace.scopeOut();
    workspace.scopeIn();

    ASSERT_EQ(0, workspace.getCurrentOffset());

    // we should get a completely different pointer here, due to reallocation
    void* ptr2 = workspace.allocateBytes(8);
    //ASSERT_FALSE(ptr == ptr2);
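
    // after the scope reset the buffer is regrown to cover the previous cycle:
    // 128 + 9 * 128 spilled = 1280 bytes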
    ASSERT_EQ(1280, workspace.getCurrentSize());
    ASSERT_EQ(0, workspace.getSpilledSize());
}

TEST_F(WorkspaceTests, NewInWorkspaceTest1) {
    if (!Environment::getInstance()->isCPU())
        return;

    Workspace ws(65536);

    ASSERT_EQ(65536, ws.getCurrentSize());
    ASSERT_EQ(0, ws.getCurrentOffset());

    ASSERT_FALSE(MemoryRegistrator::getInstance()->hasWorkspaceAttached());

    MemoryRegistrator::getInstance()->attachWorkspace(&ws);

    ASSERT_TRUE(MemoryRegistrator::getInstance()->hasWorkspaceAttached());
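
    // no explicit context here: create_ should pick up the globally attached workspace
    // via MemoryRegistrator, which the offset check below confirms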
    auto ast = NDArrayFactory::create_<float>('c', {5, 5});

    ASSERT_TRUE(ws.getCurrentOffset() > 0);

    delete ast;

    MemoryRegistrator::getInstance()->forgetWorkspace();

    ASSERT_FALSE(MemoryRegistrator::getInstance()->hasWorkspaceAttached());
    ASSERT_TRUE(MemoryRegistrator::getInstance()->getWorkspace() == nullptr);
}

TEST_F(WorkspaceTests, NewInWorkspaceTest2) {
    Workspace ws(65536);

    LaunchContext ctx;
    ctx.setWorkspace(&ws);

    ASSERT_EQ(65536, ws.getCurrentSize());
    ASSERT_EQ(0, ws.getCurrentOffset());

    MemoryRegistrator::getInstance()->attachWorkspace(&ws);

    auto ast = NDArrayFactory::create_<float>('c', {5, 5}, &ctx);

    ASSERT_TRUE(ws.getCurrentOffset() > 0);

    delete ast;

    MemoryRegistrator::getInstance()->forgetWorkspace();
}

TEST_F(WorkspaceTests, CloneTest1) {
    if (!Environment::getInstance()->isCPU())
        return;

    Workspace ws(65536);

    ws.allocateBytes(65536 * 2);
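
    // a request twice the workspace capacity cannot fit in the buffer, so the whole allocation spills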
    ASSERT_EQ(65536 * 2, ws.getSpilledSize());

    auto clone = ws.clone();

    ASSERT_EQ(65536 * 2, clone->getCurrentSize());
    ASSERT_EQ(0, clone->getCurrentOffset());
    ASSERT_EQ(0, clone->getSpilledSize());

    delete clone;
}

TEST_F(WorkspaceTests, Test_Arrays_1) {
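    // smoke test: no assertions, it just exercises mmul and assign on workspace-backed arrays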
    Workspace ws(65536);

    LaunchContext ctx;
    ctx.setWorkspace(&ws);

    auto x = NDArrayFactory::create<float>('c', {3, 3}, {1, 2, 3, 4, 5, 6, 7, 8, 9}, &ctx);
    // x.printIndexedBuffer("x0");

    auto y = NDArrayFactory::create<float>('c', {3, 3}, {-1, -2, -3, -4, -5, -6, -7, -8, -9}, &ctx);
    // x.printIndexedBuffer("x2");

    auto z = NDArrayFactory::create<float>('c', {3, 3}, {0, 0, 0, 0, 0, 0, 0, 0, 0}, &ctx);

    MmulHelper::mmul(&x, &y, &z);

    y.assign(&x);

    // x.printIndexedBuffer("x3");
    // y.printIndexedBuffer("y");
    // z.printIndexedBuffer("z");
}

#ifdef GRAPH_FILES_OK
TEST_F(WorkspaceTests, Test_Graph_1) {
    auto graph = GraphExecutioner::importFromFlatBuffers("./resources/ae_00.fb");
    auto workspace = graph->getVariableSpace()->workspace();

    auto status = GraphExecutioner::execute(graph);
    ASSERT_EQ(Status::OK(), status);

    delete graph;
}
#endif

TEST_F(WorkspaceTests, Test_Externalized_1) {
    if (!Environment::getInstance()->isCPU())
        return;

    char buffer[10000];

    ExternalWorkspace pojo((Nd4jPointer) buffer, 10000, nullptr, 0);
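
    // the external workspace wraps a caller-owned host buffer; no device buffer is provided here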
    ASSERT_EQ(10000, pojo.sizeHost());
    ASSERT_EQ(0, pojo.sizeDevice());

    Workspace ws(&pojo);

    ASSERT_EQ(10000, ws.getCurrentSize());
    ASSERT_EQ(10000, ws.getAllocatedSize());

    LaunchContext ctx;
    ctx.setWorkspace(&ws);

    auto x = NDArrayFactory::create<float>('c', {10, 10}, &ctx);

    // only the data buffer counts against the workspace: 10 * 10 floats * 4 bytes = 400 bytes
    ASSERT_EQ(400, ws.getUsedSize());
    ASSERT_EQ(400, ws.getCurrentOffset());

    x.assign(2.0);

    float m = x.meanNumber().e<float>(0);
    ASSERT_NEAR(2.0f, m, 1e-5);
}

// TODO: uncomment this test once long shapes are introduced
/*
TEST_F(WorkspaceTests, Test_Big_Allocation_1) {
    Workspace ws(65536);

    NDArray<float> x('c', {256, 64, 384, 384}, &ws);
}
*/

#endif //LIBND4J_WORKSPACETESTS_H