// cavis/libnd4j/blas/Environment.cpp

/*******************************************************************************
* Copyright (c) 2015-2018 Skymind, Inc.
*
* This program and the accompanying materials are made available under the
* terms of the Apache License, Version 2.0 which is available at
* https://www.apache.org/licenses/LICENSE-2.0.
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*
* SPDX-License-Identifier: Apache-2.0
******************************************************************************/
//
// Created by raver119 on 06.10.2017.
//
#include <iostream>
#include <cstdlib>
#include <stdexcept>
#include <string>
#include "Environment.h"
#include <helpers/StringUtils.h>
#include <thread>
#include <helpers/logger.h>
#ifdef _OPENMP
#include <omp.h>
#endif
#ifdef __CUDABLAS__
#include <cuda.h>
#include <cuda_runtime.h>
#include "BlasVersionHelper.h"
#endif
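
// Environment is a process-wide singleton holding libnd4j's runtime
// configuration: thread counts, debug/verbose/profiling flags, the default
// float data type, memory limits, and (on CUDA builds) device capabilities.
// Everything is initialized once in the constructor, mostly from environment
// variables.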
namespace nd4j {

    nd4j::Environment::Environment() {
        _tadThreshold.store(1);
        _elementThreshold.store(1024);
        _verbose.store(false);
        _debug.store(false);
        _profile.store(false);
        _precBoost.store(false);
        _leaks.store(false);
        _dataType.store(nd4j::DataType::FLOAT32);
        _maxThreads = std::thread::hardware_concurrency();
        _maxMasterThreads = _maxThreads.load();
#ifndef ANDROID
        const char* omp_threads = std::getenv("OMP_NUM_THREADS");
        if (omp_threads != nullptr) {
            try {
                std::string omp(omp_threads);
                int val = std::stoi(omp);
                _maxThreads.store(val);
                _maxMasterThreads.store(val);
            } catch (std::invalid_argument &e) {
                // just do nothing
            } catch (std::out_of_range &e) {
                // still do nothing
            }
        }
        /**
         * Defines size of thread pool used for parallelism
         */
        const char* max_threads = std::getenv("SD_MAX_THREADS");
        if (max_threads != nullptr) {
            try {
                std::string t(max_threads);
                int val = std::stoi(t);
                _maxThreads.store(val);
            } catch (std::invalid_argument &e) {
                // just do nothing
            } catch (std::out_of_range &e) {
                // still do nothing
            }
        }
        /**
         * Defines max number of threads usable at once
         */
        const char* max_master_threads = std::getenv("SD_MASTER_THREADS");
        if (max_master_threads != nullptr) {
            try {
                std::string t(max_master_threads);
                int val = std::stoi(t);
                _maxMasterThreads.store(val);
            } catch (std::invalid_argument &e) {
                // just do nothing
            } catch (std::out_of_range &e) {
                // still do nothing
            }
        }
        if (_maxMasterThreads.load() > _maxThreads.load()) {
            nd4j_printf("Warning! SD_MASTER_THREADS > SD_MAX_THREADS, tuning them down to match each other\n","");
            _maxMasterThreads.store(_maxThreads.load());
        }
        /**
         * If this env var is defined, we disallow use of platform-specific helpers (mkldnn, cudnn, etc.)
         */
        const char* forbid_helpers = std::getenv("SD_FORBID_HELPERS");
        if (forbid_helpers != nullptr) {
            _allowHelpers = false;
        }
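        // Only the variable's presence is checked, so e.g. SD_FORBID_HELPERS=1
        // in the launching environment is enough to disable those code paths.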
        /**
         * This var defines max amount of host memory library can allocate
         */
        const char* max_primary_memory = std::getenv("SD_MAX_PRIMARY_BYTES");
        if (max_primary_memory != nullptr) {
            try {
                std::string t(max_primary_memory);
                auto val = std::stol(t);
                _maxTotalPrimaryMemory.store(val);
            } catch (std::invalid_argument &e) {
                // just do nothing
            } catch (std::out_of_range &e) {
                // still do nothing
            }
        }
        /**
         * This var defines max amount of special (i.e. device) memory library can allocate on all devices combined
         */
        const char* max_special_memory = std::getenv("SD_MAX_SPECIAL_BYTES");
        if (max_special_memory != nullptr) {
            try {
                std::string t(max_special_memory);
                auto val = std::stol(t);
                _maxTotalSpecialMemory.store(val);
            } catch (std::invalid_argument &e) {
                // just do nothing
            } catch (std::out_of_range &e) {
                // still do nothing
            }
        }
        /**
         * This var defines max amount of special (i.e. device) memory library can allocate on a single device
         */
        const char* max_device_memory = std::getenv("SD_MAX_DEVICE_BYTES");
        if (max_device_memory != nullptr) {
            try {
                std::string t(max_device_memory);
                auto val = std::stol(t);
                _maxDeviceMemory.store(val);
            } catch (std::invalid_argument &e) {
                // just do nothing
            } catch (std::out_of_range &e) {
                // still do nothing
            }
        }
#endif
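        // On CUDA builds the constructor additionally enumerates all visible
        // devices and records each device's compute capability (major.minor)
        // in _capabilities.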
#ifdef __CUDABLAS__
        int devCnt = 0;
        cudaGetDeviceCount(&devCnt);
        auto devProperties = new cudaDeviceProp[devCnt];
        for (int i = 0; i < devCnt; i++) {
            cudaSetDevice(i);
            cudaGetDeviceProperties(&devProperties[i], i);
            //cudaDeviceSetLimit(cudaLimitStackSize, 4096);
            Pair p(devProperties[i].major, devProperties[i].minor);
            _capabilities.emplace_back(p);
        }
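        // BlasVersionHelper carries the BLAS version numbers that are exposed
        // below through blasMajorVersion()/blasMinorVersion()/blasPatchVersion().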
        BlasVersionHelper ver;
        _blasMajorVersion = ver._blasMajorVersion;
        _blasMinorVersion = ver._blasMinorVersion;
        _blasPatchVersion = ver._blasPatchVersion;
        cudaSetDevice(0);
        delete[] devProperties;
#endif
    }
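
    // The five std::getenv/std::stoi blocks in the constructor above share the
    // same shape. A hypothetical helper (a sketch only, not part of this
    // codebase) could fold that pattern into one place:
    //
    //   template <typename F>
    //   static void readIntEnv(const char* name, F&& store) {
    //       if (const char* raw = std::getenv(name)) {
    //           try {
    //               store(std::stoll(raw));          // may throw on bad input
    //           } catch (std::invalid_argument&) {   // not a number - keep default
    //           } catch (std::out_of_range&) {       // too large - keep default
    //           }
    //       }
    //   }
    //
    //   e.g.: readIntEnv("SD_MAX_THREADS", [this](long long v) { _maxThreads.store((int) v); });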
    nd4j::Environment::~Environment() {
        //
    }
    void Environment::setMaxPrimaryMemory(uint64_t maxBytes) {
        _maxTotalPrimaryMemory = maxBytes;
    }

    void Environment::setMaxSpecialyMemory(uint64_t maxBytes) {
        _maxTotalSpecialMemory = maxBytes;
    }

    void Environment::setMaxDeviceMemory(uint64_t maxBytes) {
        _maxDeviceMemory = maxBytes;
    }
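
    // Returns the process-wide singleton, creating it lazily on first call.
    // Note that this check-then-create is not synchronized; the first call is
    // assumed to happen before any concurrent use.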
    Environment *Environment::getInstance() {
        if (_instance == nullptr)
            _instance = new Environment();

        return _instance;
    }
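
    // Typical usage, as a minimal sketch (all calls are methods defined in
    // this file):
    //
    //   auto env = nd4j::Environment::getInstance();
    //   env->setVerbose(true);
    //   if (env->isDebugAndVerbose())
    //       nd4j_printf("max threads: %i\n", env->maxThreads());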
    bool Environment::isVerbose() {
        return _verbose.load();
    }

    bool Environment::isExperimentalBuild() {
        return _experimental;
    }

    nd4j::DataType Environment::defaultFloatDataType() {
        return _dataType.load();
    }

    std::vector<Pair>& Environment::capabilities() {
        return _capabilities;
    }

    void Environment::setDefaultFloatDataType(nd4j::DataType dtype) {
        if (dtype != nd4j::DataType::FLOAT32 && dtype != nd4j::DataType::DOUBLE && dtype != nd4j::DataType::FLOAT8 && dtype != nd4j::DataType::HALF)
            throw std::runtime_error("Default Float data type must be one of [FLOAT8, FLOAT16, FLOAT32, DOUBLE]");

        _dataType.store(dtype);
    }
    void Environment::setVerbose(bool reallyVerbose) {
        _verbose = reallyVerbose;
    }

    bool Environment::isDebug() {
        return _debug.load();
    }

    bool Environment::isProfiling() {
        return _profile.load();
    }

    bool Environment::isDetectingLeaks() {
        return _leaks.load();
    }

    void Environment::setLeaksDetector(bool reallyDetect) {
        _leaks.store(reallyDetect);
    }

    void Environment::setProfiling(bool reallyProfile) {
        _profile.store(reallyProfile);
    }

    bool Environment::isDebugAndVerbose() {
        return this->isDebug() && this->isVerbose();
    }

    void Environment::setDebug(bool reallyDebug) {
        _debug = reallyDebug;
    }

    int Environment::tadThreshold() {
        return _tadThreshold.load();
    }

    void Environment::setTadThreshold(int threshold) {
        _tadThreshold = threshold;
    }

    int Environment::elementwiseThreshold() {
        return _elementThreshold.load();
    }

    void Environment::setElementwiseThreshold(int threshold) {
        _elementThreshold = threshold;
    }

    int Environment::maxThreads() {
        return _maxThreads.load();
    }

    int Environment::maxMasterThreads() {
        return _maxMasterThreads.load();
    }
    void Environment::setMaxThreads(int max) {
        // currently a no-op: the body is commented out
        //_maxThreads.store(max);
    }

    void Environment::setMaxMasterThreads(int max) {
        // currently a no-op: the body is commented out
        //_maxMasterThreads = max;
    }
    bool Environment::precisionBoostAllowed() {
        return _precBoost.load();
    }

    void Environment::allowPrecisionBoost(bool reallyAllow) {
        _precBoost.store(reallyAllow);
    }
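
    // isCPU() is resolved at compile time: the CUDA build of this translation
    // unit defines __CUDABLAS__, so the call reports the backend this binary
    // was built for.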
    bool Environment::isCPU() {
#ifdef __CUDABLAS__
        return false;
#else
        return true;
#endif
    }
    int Environment::blasMajorVersion() {
        return _blasMajorVersion;
    }

    int Environment::blasMinorVersion() {
        return _blasMinorVersion;
    }

    int Environment::blasPatchVersion() {
        return _blasPatchVersion;
    }

    bool Environment::helpersAllowed() {
        return _allowHelpers.load();
    }

    void Environment::allowHelpers(bool reallyAllow) {
        _allowHelpers.store(reallyAllow);
    }
    nd4j::Environment *nd4j::Environment::_instance = nullptr;
}