[WIP] multi-device support (#80)
* fix pad javadoc and @see links. (#72) Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* [WIP] More fixes (#73)
* special tests for ConstantTadHelper/ConstantShapeHelper Signed-off-by: raver119 <raver119@gmail.com>
* release methods for data buffers Signed-off-by: raver119 <raver119@gmail.com>
* delete temporary buffer Java side Signed-off-by: raver119 <raver119@gmail.com>
* delete temporary buffer Java side Signed-off-by: raver119 <raver119@gmail.com>
* delete temporary TadPack C++/Java side (#74) Signed-off-by: raver119 <raver119@gmail.com>
* Zoo model TF import test updates (#75)
* argLine fix, update compression_gru comment
* updated comment for xception
* undid but commented argLine change
* updated xlnet comment
* copyright headers
* - new NDArray methods like()/ulike() (#77) - fix for depthwise_conv2d_bp + special test Signed-off-by: raver119 <raver119@gmail.com>
* upsampling2d fix CUDA Signed-off-by: raver119 <raver119@gmail.com>
* DL4J trace logging (#79)
* MLN/CG trace logging for debugging Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Tiny tweak Signed-off-by: AlexDBlack <blacka101@gmail.com>
* strided_slice_bp shape fn leak fix Signed-off-by: raver119 <raver119@gmail.com>
* SameDiff fixes and naming (#78)
* remove SDVariable inplace methods
* import methods
* npe fix in OpVal
* removed SameDiff inplace ops from tests
* Naming updates, moved to centralized methods in SameDiff, should use op_#:# for everything
* quick fixes
* javadoc
* SDVariable eval with placeholders
* use regex match
* better matching
* initial commit Signed-off-by: raver119 <raver119@gmail.com>
* initial commit Signed-off-by: raver119 <raver119@gmail.com>
* fix javadoc. (#76)
* fix javadoc. Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* replace most @see with @link s. Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* 4 additional tests Signed-off-by: raver119 <raver119@gmail.com>
* launch context reorganization Signed-off-by: raver119 <raver119@gmail.com>
* LaunchContext reorganization Signed-off-by: raver119 <raver119@gmail.com>
* per-device LaunchContext Signed-off-by: raver119 <raver119@gmail.com>
* Various DL4J/ND4J fixes (#81)
* #7954 Force refresh of UI when switching tabs on overview page Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8017 Concurrent modification exception (synchronize) fix Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8033 Don't initialize updater in middle of writing memory crash dump Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8208 Fix shape checks for ND4J int[] creator methods Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #6385 #7992 Keras import naming fixes + cleanup Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8016 Upsampling3D - add NDHWC format support Signed-off-by: AlexDBlack <blacka101@gmail.com>
* ContextBuffers as separate entity Signed-off-by: raver119 <raver119@gmail.com>
* Refactor NativeOps.h to export C functions
* Actually export functions from NativeOps.h
* Adapt the Java wrappers in ND4J generated with JavaCPP
* Create C wrappers for some of the C++ classes currently used by ND4J
* ContextBuffers as separate entity Signed-off-by: raver119 <raver119@gmail.com>
* remove duplicate code in createBufferDetached. (#83) Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Keras model import - updater lr fix (#84)
* Keras model import - updater lr fix Signed-off-by: eraly <susan.eraly@gmail.com>
* Keras model import - updater lr fix, cleanup Signed-off-by: eraly <susan.eraly@gmail.com>
* ContextBuffers as separate entity Signed-off-by: raver119 <raver119@gmail.com>
* ContextBuffers as separate entity Signed-off-by: raver119 <raver119@gmail.com>
* Fix functions of OpaqueVariablesSet
* thread-local buffers/affinity Signed-off-by: raver119 <raver119@gmail.com>
* thread safety for LaunchContext Signed-off-by: raver119 <raver119@gmail.com>
* more of thread safety Signed-off-by: raver119 <raver119@gmail.com>
* one more multi threaded test Signed-off-by: raver119 <raver119@gmail.com>
* SameDiff Convolution Config validation, better output methods (#82)
* Conv Config validation & tests Signed-off-by: Ryan Nett <rnett@skymind.io>
* stackOutputs utility method Signed-off-by: Ryan Nett <rnett@skymind.io>
* use constructor for validation, support negative kernel sizes (inferred from weights) Signed-off-by: Ryan Nett <rnett@skymind.io>
* better output methods Signed-off-by: Ryan Nett <rnett@skymind.io>
* move output to be with fit and evaluate Signed-off-by: Ryan Nett <rnett@skymind.io>
* fixes Signed-off-by: Ryan Nett <rnett@skymind.io>
* more fixes Signed-off-by: Ryan Nett <rnett@skymind.io>
* refactor duplicate code from pad methods. (#86)
* refactor duplicate code from pad methods. Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* replace switch with if. Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Various ND4J/DL4J fixes and improvements (#87)
* Reshape and reallocate - small fixes Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Reshape and reallocate - small fixes Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #6488 ElementWiseVertex broadcast support Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Constructors and broadcast supported in Transforms.max/min Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8054 ElementWiseVertex now supports broadcast inputs Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8057 Nd4j.create overload dtype fix Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7551 ND4J Shape validation fix Signed-off-by: AlexDBlack <blacka101@gmail.com>
* [WIP] Numpy boolean import (#91)
* numpy bool type Signed-off-by: raver119 <raver119@gmail.com>
* numpy bool java side Signed-off-by: raver119 <raver119@gmail.com>
* remove create method with unused parameter. (#89)
* remove create method with unused parameter.
* removed more unused methods. Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* removing more unused code. Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* last removal of unused code. Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* remove createSparse methods. (#92) Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* Various ND4J/DL4J fixes (#90)
* Deprecate Old*Op instances Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8063 #8054 Broadcast exceptions + cleanup inplace ops Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Small fix Signed-off-by: AlexDBlack <blacka101@gmail.com>
* Remove bad test condition Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #7993 Fix shape function issue in crop_and_resize op Signed-off-by: AlexDBlack <blacka101@gmail.com>
* DL4J SameDiff lambda layer fix Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8029 Fix for pnorm backprop math Signed-off-by: AlexDBlack <blacka101@gmail.com>
* #8038 Fix Op profiler NaN/Inf triggering + add tests (#93) Signed-off-by: AlexDBlack <blacka101@gmail.com>
* createUninitializedDetached refactoring. (#94)
* wip
* update interface, add null implementations.
* Breaking one test in a weird way. Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* createUninitializedDetached refactored. Signed-off-by: Robert Altena <Rob@Ra-ai.com>
* cuda build fix for issues introduced by recent refactoring Signed-off-by: raver119 <raver119@gmail.com>
* [WIP] More of CUDA (#95)
* initial commit Signed-off-by: raver119 <raver119@gmail.com>
* Implementation of hashcode cuda helper. Working edition.
* Fixed parallel test input arrangements.
* Fixed tests for hashcode op.
* Fixed shape calculation for image:crop_and_resize op and test.
* NativeOps tests. Initial test suite.
* Added tests for indexReduce methods.
* Added test on execBroadcast with NDArray as dimensions.
* Added test on execBroadcastBool with NDArray as dimensions.
* Added tests on execPairwiseTransform and execPairwiseTransformBool.
* Added tests for execReduce with scalar results.
* Added reduce tests for non-empty dims array.
* Added tests for reduce3.
* Added tests for execScalar.
* Added tests for execSummaryStats.
* - provide cpu/cuda code for batch_to_space - testing it Signed-off-by: Yurii <yurii@skymind.io>
* - remove old test for batch_to_space (had wrong format and numbers were not checked) Signed-off-by: Yurii <yurii@skymind.io>
* Fixed compilation errors with test.
* Added test for execTransformFloat.
* Added test for execTransformSame.
* Added test for execTransformBool.
* Added test for execTransformStrict.
* Added tests for execScalar/execScalarBool with TADs.
* Added test for flatten.
* - provide cpu/cuda code for space_to_batch operation Signed-off-by: Yurii <yurii@skymind.io>
* Added test for concat.
* comment unnecessary stuff in s_t_b Signed-off-by: Yurii <yurii@skymind.io>
* Added test for specialConcat.
* Added tests for memcpy/set routines.
* Fixed pullRow cuda test.
* Added pullRow test.
* Added average test.
* - correct typo in NDArray::applyPairwiseTransform(nd4j::pairwise::BoolOps op...) Signed-off-by: Yurii <yurii@skymind.io>
* - debugging and fixing cuda tests in JavaInteropTests file Signed-off-by: Yurii <yurii@skymind.io>
* - correct some tests Signed-off-by: Yurii <yurii@skymind.io>
* Added test for shuffle.
* Fixed ops declarations.
* Restored omp and added shuffle test.
* Added convertTypes test.
* Added tests for execRandom. Eliminated usage of RandomBuffer with NativeOps.
* Added sort tests.
* Added tests for execCustomOp.
* - further debugging and fixing tests terminated with crash Signed-off-by: Yurii <yurii@skymind.io>
* Added tests for calculateOutputShapes.
* Added Benchmarks test.
* Commented benchmark tests.
* change assertion Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for apply_sgd op. Added cpu helper for that op.
* Implement cuda helper for apply_sgd op. Fixed tests for NativeOps.
* Added test for assign broadcastable.
* Added tests for assign_bp op.
* Added tests for axpy op.
* - assign/execScalar/execTransformAny signature change - minor test fix Signed-off-by: raver119 <raver119@gmail.com>
* Fixed axpy op.
* meh Signed-off-by: raver119 <raver119@gmail.com>
* - fix tests for nativeOps::concat Signed-off-by: Yurii <yurii@skymind.io>
* sequential transform/scalar Signed-off-by: raver119 <raver119@gmail.com>
* allow nested parallelism Signed-off-by: raver119 <raver119@gmail.com>
* assign_bp leak fix Signed-off-by: raver119 <raver119@gmail.com>
* block setRNG fix Signed-off-by: raver119 <raver119@gmail.com>
* enable parallelism by default Signed-off-by: raver119 <raver119@gmail.com>
* enable nested parallelism by default Signed-off-by: raver119 <raver119@gmail.com>
* Added cuda implementation for row_count helper.
* Added implementation for tsne gains op helper.
* - take into account possible situations when input arrays are empty in reduce_ cuda stuff Signed-off-by: Yurii <yurii@skymind.io>
* Implemented tsne/edge_forces op cuda-based helper. Parallelized cpu-based helper for edge_forces.
* Added kernel for tsne/symmetrized op helper.
* Implementation of tsne/symmetrized op cuda helper. Working edition.
* Eliminated wasted printfs.
* Added test for broadcastgradientargs op.
* host-only fallback for empty reduce float Signed-off-by: raver119 <raver119@gmail.com>
* - some test fixes Signed-off-by: Yurii <yurii@skymind.io>
* - correct the rest of reduce_ stuff Signed-off-by: Yurii <yurii@skymind.io>
* - further correction of reduce_ stuff Signed-off-by: Yurii <yurii@skymind.io>
* Added test for Cbow op. Also added cuda implementation for cbow helpers.
* - improve code of stack operation for scalar case Signed-off-by: Yurii <yurii@skymind.io>
* - provide cuda kernel for gatherND operation Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of cbow helpers with cuda kernels.
* minor tests tweaks Signed-off-by: raver119 <raver119@gmail.com>
* minor tests tweaks Signed-off-by: raver119 <raver119@gmail.com>
* - further correction of cuda stuff Signed-off-by: Yurii <yurii@skymind.io>
* Implementation of cbow op helper with cuda kernels. Working edition.
* Skip random testing for cudablas case.
* lstmBlockCell context fix Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for ELU and ELU_BP ops.
* Added tests for eq_scalar, gt_scalar, gte_scalar and lte_scalar ops.
* Added tests for neq_scalar.
* Added test for noop.
* - further work on clipbynorm_bp Signed-off-by: Yurii <yurii@skymind.io>
* - get rid of concat op call, use instead direct concat helper call Signed-off-by: Yurii <yurii@skymind.io>
* lstmBlockCell context fix Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for lrelu and lrelu_bp.
* Added tests for selu and selu_bp.
* Fixed lrelu derivative helpers.
* - some corrections in lstm Signed-off-by: Yurii <yurii@skymind.io>
* operator * result shape fix Signed-off-by: raver119 <raver119@gmail.com>
* - correct typo in lstmCell Signed-off-by: Yurii <yurii@skymind.io>
* few tests fixed Signed-off-by: raver119 <raver119@gmail.com>
* CUDA inverse broadcast bool fix Signed-off-by: raver119 <raver119@gmail.com>
* disable MMAP test for CUDA Signed-off-by: raver119 <raver119@gmail.com>
* BooleanOp syncToDevice Signed-off-by: raver119 <raver119@gmail.com>
* meh Signed-off-by: raver119 <raver119@gmail.com>
* additional data types for im2col/col2im Signed-off-by: raver119 <raver119@gmail.com>
* Added test for firas_sparse op.
* one more RandomBuffer test excluded Signed-off-by: raver119 <raver119@gmail.com>
* Added tests for flatten op.
* Added test for Floor op.
* bunch of tests fixed Signed-off-by: raver119 <raver119@gmail.com>
* mmulDot tests fixed Signed-off-by: raver119 <raver119@gmail.com>
* more tests fixed Signed-off-by: raver119 <raver119@gmail.com>
* Implemented floordiv_bp op and tests.
* Fixed scalar case with cuda implementation for bds.
* - work on cuda kernel for clip_by_norm backprop op is completed Signed-off-by: Yurii <yurii@skymind.io>
* Eliminate cbow crash.
* more tests fixed Signed-off-by: raver119 <raver119@gmail.com>
* more tests fixed Signed-off-by: raver119 <raver119@gmail.com>
* Eliminated abort in batched nlp test.
* more tests fixed Signed-off-by: raver119 <raver119@gmail.com>
* Fixed shared flag initializing.
* disabled bunch of cpu workspaces tests Signed-off-by: raver119 <raver119@gmail.com>
* scalar operators fix: missing registerSpecialUse call Signed-off-by: raver119 <raver119@gmail.com>
* Fixed logdet for cuda and tests.
* - correct clipBynorm_bp Signed-off-by: Yurii <yurii@skymind.io>
* Fixed crop_and_resize shape datatype.
* - correct some mmul tests Signed-off-by: Yurii <yurii@skymind.io>
* build fix Signed-off-by: raver119 <raver119@gmail.com>
* exclude two methods for JNI Signed-off-by: raver119 <raver119@gmail.com>
* exclude two methods for JNI Signed-off-by: raver119 <raver119@gmail.com>
* exclude two methods for JNI (#97) Signed-off-by: raver119 <raver119@gmail.com>
* temporary stack fix Signed-off-by: raver119 <raver119@gmail.com>
* round robin affinity test Signed-off-by: raver119 <raver119@gmail.com>
* get rid of legacy CudaContext methods Signed-off-by: raver119 <raver119@gmail.com>
* get rid of legacy ContextPool classes/methods Signed-off-by: raver119 <raver119@gmail.com>
* one legacy test removed Signed-off-by: raver119 <raver119@gmail.com>
* few more fields rearranged Signed-off-by: raver119 <raver119@gmail.com>
* OpaqueLaunchContext Signed-off-by: raver119 <raver119@gmail.com>
* OpaqueLaunchContext++ Signed-off-by: raver119 <raver119@gmail.com>
* more of OpaqueLaunchContext methods Signed-off-by: raver119 <raver119@gmail.com>
* LaunchContext -> CudaContext Signed-off-by: raver119 <raver119@gmail.com>
* AffinityManager changes Signed-off-by: raver119 <raver119@gmail.com>
* AffinityManager changes Signed-off-by: raver119 <raver119@gmail.com>
* cusolver handles Signed-off-by: raver119 <raver119@gmail.com>
* typo Signed-off-by: raver119 <raver119@gmail.com>
* cusolver method Signed-off-by: raver119 <raver119@gmail.com>
* cusolver handle propagated Signed-off-by: raver119 <raver119@gmail.com>
* blas/solver handles Signed-off-by: raver119 <raver119@gmail.com>
* one more test Signed-off-by: raver119 <raver119@gmail.com>
* legacy concat implementations replaced with new CustomOp Signed-off-by: raver119 <raver119@gmail.com>
* one more test Signed-off-by: raver119 <raver119@gmail.com>
* concat now uses way more blocks Signed-off-by: raver119 <raver119@gmail.com>
* print Signed-off-by: raver119 <raver119@gmail.com>
* no more triple template mmul Signed-off-by: raver119 <raver119@gmail.com>
* bunch of kernels have dtypes reconsidered Signed-off-by: raver119 <raver119@gmail.com>
* bunch of kernels have dtypes reconsidered Signed-off-by: raver119 <raver119@gmail.com>
* bitonic sort reorganized Signed-off-by: raver119 <raver119@gmail.com>
* bunch of cpu stuff removed from cuda scope Signed-off-by: raver119 <raver119@gmail.com>
* bunch of cpu stuff removed from cuda scope Signed-off-by: raver119 <raver119@gmail.com>
* type conversions moved to generic impl Signed-off-by: raver119 <raver119@gmail.com>
* cpu data types pass Signed-off-by: raver119 <raver119@gmail.com>
* non_max_suppression Signed-off-by: raver119 <raver119@gmail.com>
* sortByValue fix Signed-off-by: raver119 <raver119@gmail.com>
* ignore all mixed datatype tests for mmul Signed-off-by: raver119 <raver119@gmail.com>
* special handling of OpProfiler exceptions Signed-off-by: raver119 <raver119@gmail.com>
* - one failing concat test in cpp - Nd4j.tile now uses op internally Signed-off-by: raver119 <raver119@gmail.com>
* get back dtype exception for legacy arrays deserialization Signed-off-by: raver119 <raver119@gmail.com>
branch: master
parent: ec847e034b
commit: 53ca9a76e8
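The recurring themes in the commit list above are per-device LaunchContext instances, thread-local ContextBuffers and an AffinityManager that tracks which device a host thread is bound to. As a rough, hedged sketch of how those pieces fit together (only AffinityManager, LaunchContext::defaultContext() and the getter names appear in the diff below; the include paths and the runOnAllDevices() helper are assumptions for illustration):

    // Hedged sketch only, not code from this PR.
    #include <execution/AffinityManager.h>   // path assumed
    #include <LaunchContext.h>               // path assumed

    void runOnAllDevices() {
        auto devices = nd4j::AffinityManager::numberOfDevices();
        for (int d = 0; d < devices; d++) {
            nd4j::AffinityManager::setCurrentDevice(d);      // pin the calling thread to device d
            auto lc = nd4j::LaunchContext::defaultContext(); // per-device/per-thread context
            // enqueue kernels on lc->getCudaStream(), BLAS calls on lc->getCublasHandle(), ...
        }
    }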
@@ -163,9 +163,9 @@ if(CUDA_BLAS)
 if(CUDA_VERSION VERSION_GREATER "9.2") # cuda 10
 if ("${COMPUTE}" STREQUAL "all")
 if (APPLE)
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_10 ${EXPM} -w --cudart=static -O3 --expt-extended-lambda -gencode arch=compute_35,code=sm_35 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60)
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_10 ${EXPM} -w --cudart=static -O3 --expt-extended-lambda -Xfatbin -compress-all -gencode arch=compute_35,code=sm_35 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60)
 else()
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_10 ${EXPM} -w --cudart=static -O3 --expt-extended-lambda -gencode arch=compute_35,code=sm_35 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_70,code=sm_70)
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_10 ${EXPM} -w --cudart=static -O3 --expt-extended-lambda -Xfatbin -compress-all -gencode arch=compute_35,code=sm_35 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_70,code=sm_70)
 endif()
 else()
 list(APPEND CUDA_NVCC_FLAGS -DCUDA_10 ${EXPM} -w --cudart=static --expt-extended-lambda -O3 -Xfatbin -compress-all -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
@@ -173,24 +173,24 @@ if(CUDA_BLAS)
 elseif(CUDA_VERSION VERSION_GREATER "8.0") # cuda 9
 if ("${COMPUTE}" STREQUAL "all")
 if (APPLE)
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_9 ${EXPM} -w --cudart=static -O3 --expt-extended-lambda -gencode arch=compute_35,code=sm_35 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60)
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_9 ${EXPM} -w --cudart=static -O3 --expt-extended-lambda -Xfatbin -compress-all -gencode arch=compute_35,code=sm_35 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60)
 else()
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_9 ${EXPM} -w --cudart=static -O3 --expt-extended-lambda -gencode arch=compute_35,code=sm_35 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60)
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_9 ${EXPM} -w --cudart=static -O3 --expt-extended-lambda -Xfatbin -compress-all -gencode arch=compute_35,code=sm_35 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60)
 endif()
 else()
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_9 ${EXPM} -w --cudart=static --expt-extended-lambda -O3 -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_9 ${EXPM} -w --cudart=static --expt-extended-lambda -O3 -Xfatbin -compress-all -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
 endif()
 elseif (CUDA_VERSION VERSION_GREATER "7.5") # cuda 8.0
 if ("${COMPUTE}" STREQUAL "all")
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_8 ${EXPM} -w --cudart=static -O3 --expt-extended-lambda -gencode arch=compute_30,code=sm_30 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60)
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_8 ${EXPM} -w --cudart=static -O3 --expt-extended-lambda -Xfatbin -compress-all -gencode arch=compute_30,code=sm_30 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60)
 else()
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_8 ${EXPM} -w --cudart=static --expt-extended-lambda -O3 -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_8 ${EXPM} -w --cudart=static --expt-extended-lambda -O3 -Xfatbin -compress-all -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
 endif()
 else()
 if ("${COMPUTE}" STREQUAL "all")
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_75 ${EXPM} --cudart=static --expt-extended-lambda -O3 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_52,code=sm_52 )
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_75 ${EXPM} --cudart=static --expt-extended-lambda -O3 -Xfatbin -compress-all -gencode arch=compute_30,code=sm_30 -gencode arch=compute_52,code=sm_52 )
 else()
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_75 ${EXPM} --cudart=static --expt-extended-lambda -O3 -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_75 ${EXPM} --cudart=static --expt-extended-lambda -O3 -Xfatbin -compress-all -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
 endif()
 endif()
 
@@ -205,34 +205,34 @@ if(CUDA_BLAS)
 message("CUDA 10 Debug build")
 if ("${COMPUTE}" STREQUAL "all")
 if (APPLE)
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_10 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62)
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_10 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -Xfatbin -compress-all -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62)
 elseif()
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_10 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70)
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_10 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -Xfatbin -compress-all -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70)
 endif()
 else()
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_10 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -arch=compute_${COMPUTE} -code=compute_${COMPUTE})
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_10 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -Xfatbin -compress-all -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
 endif()
 elseif(CUDA_VERSION VERSION_GREATER "8.0") # cuda 9
 if ("${COMPUTE}" STREQUAL "all")
 if (APPLE)
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_9 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62)
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_9 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -Xfatbin -compress-all -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62)
 elseif()
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_9 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70)
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_9 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -Xfatbin -compress-all -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70)
 endif()
 else()
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_9 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_9 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -Xfatbin -compress-all -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
 endif()
 elseif (CUDA_VERSION VERSION_GREATER "7.5") # cuda 8
 if ("${COMPUTE}" STREQUAL "all")
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_8 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62)
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_8 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -Xfatbin -compress-all -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62)
 else()
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_8 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_8 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -Xfatbin -compress-all -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
 endif()
 else()
 if ("${COMPUTE}" STREQUAL "all")
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_75 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53)
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_75 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -Xfatbin -compress-all -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53)
 else()
-list(APPEND CUDA_NVCC_FLAGS -DCUDA_75 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
+list(APPEND CUDA_NVCC_FLAGS -DCUDA_75 ${EXPM} -w -G -g --cudart=static --expt-extended-lambda -Xfatbin -compress-all -arch=compute_${COMPUTE} -code=sm_${COMPUTE})
 endif()
 endif()
 endif()
 endif()
@@ -249,7 +249,7 @@ if(CUDA_BLAS)
 file(GLOB_RECURSE OPS_SOURCES false ../include/ops/impl/*.cpp ../include/ops/declarable/impl/*.cpp ../include/ops/*.h)
 file(GLOB_RECURSE HELPERS_SOURCES false ../include/helpers/impl/*.cpp ../include/helpers/*.cu ../include/helpers/*.cupp ../include/helpers/*.h)
 file(GLOB_RECURSE INDEXING_SOURCES false ../include/indexing/*.cpp ../include/indexing/*.h)
-file(GLOB_RECURSE LOOPS_SOURCES false ../include/loops/*.cpp ../include/loops/*.h)
+file(GLOB_RECURSE LOOPS_SOURCES false ../include/loops/impl/*.cpp ../include/loops/*.h)
 file(GLOB_RECURSE LOOPS_SOURCES_CUDA false ../include/loops/*.cu)
 
 if (NOT BUILD_TESTS)
 
@@ -1769,6 +1769,17 @@ ND4J_EXPORT void deleteRandomGenerator(OpaqueRandomGenerator* ptr);
 ND4J_EXPORT const char* runLightBenchmarkSuit(bool printOut);
 ND4J_EXPORT const char* runFullBenchmarkSuit(bool printOut);
 
+typedef nd4j::LaunchContext OpaqueLaunchContext;
+
+ND4J_EXPORT OpaqueLaunchContext* defaultLaunchContext();
+ND4J_EXPORT Nd4jPointer lcScalarPointer(OpaqueLaunchContext* lc);
+ND4J_EXPORT Nd4jPointer lcReductionPointer(OpaqueLaunchContext* lc);
+ND4J_EXPORT Nd4jPointer lcAllocationPointer(OpaqueLaunchContext* lc);
+ND4J_EXPORT Nd4jPointer lcExecutionStream(OpaqueLaunchContext* lc);
+ND4J_EXPORT Nd4jPointer lcCopyStream(OpaqueLaunchContext* lc);
+ND4J_EXPORT Nd4jPointer lcBlasHandle(OpaqueLaunchContext* lc);
+ND4J_EXPORT Nd4jPointer lcSolverHandle(OpaqueLaunchContext* lc);
+
 }
 
 #endif //NATIVEOPERATIONS_NATIVEOPS_H
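The lc* exports above are the C surface that the JavaCPP-generated ND4J wrappers call into ("Refactor NativeOps.h to export C functions" in the commit list). A minimal native-side consumer might look like the hedged sketch below; Nd4jPointer is assumed to be a plain void* alias, and the cast back to cudaStream_t* mirrors what the CUDA backend stores in it:

    // Hedged sketch of a caller of the new C API; only the function names and
    // OpaqueLaunchContext come from the header above, everything else is assumed.
    #include <cuda_runtime.h>

    typedef void* Nd4jPointer;                         // assumption: plain pointer alias
    namespace nd4j { class LaunchContext; }
    typedef nd4j::LaunchContext OpaqueLaunchContext;

    extern "C" {
        OpaqueLaunchContext* defaultLaunchContext();
        Nd4jPointer lcExecutionStream(OpaqueLaunchContext* lc);
    }

    void waitForDefaultStream() {
        auto lc = defaultLaunchContext();
        // the CUDA backend returns LaunchContext::getCudaStream(), i.e. a cudaStream_t*
        auto stream = reinterpret_cast<cudaStream_t*>(lcExecutionStream(lc));
        if (stream != nullptr)
            cudaStreamSynchronize(*stream);            // the CPU backend returns nullptr instead
    }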
@@ -2985,6 +2985,38 @@ const char* runFullBenchmarkSuit(bool printOut) {
 return chars;
 }
 
+nd4j::LaunchContext* defaultLaunchContext() {
+return LaunchContext::defaultContext();
+}
+
+Nd4jPointer lcScalarPointer(OpaqueLaunchContext* lc) {
+return nullptr;
+}
+
+Nd4jPointer lcReductionPointer(OpaqueLaunchContext* lc) {
+return nullptr;
+}
+
+Nd4jPointer lcAllocationPointer(OpaqueLaunchContext* lc) {
+return nullptr;
+}
+
+Nd4jPointer lcExecutionStream(OpaqueLaunchContext* lc) {
+return nullptr;
+}
+
+Nd4jPointer lcCopyStream(OpaqueLaunchContext* lc) {
+return nullptr;
+}
+
+Nd4jPointer lcBlasHandle(OpaqueLaunchContext* lc) {
+return nullptr;
+}
+
+Nd4jPointer lcSolverHandle(OpaqueLaunchContext* lc) {
+return nullptr;
+}
+
 BUILD_SINGLE_TEMPLATE(template void flattenGeneric,(Nd4jPointer*, int, char, void*, Nd4jLong*, void*, Nd4jLong*), LIBND4J_TYPES);
 BUILD_SINGLE_TEMPLATE(template void pullRowsGeneric, (void *, Nd4jLong*, void*, Nd4jLong*, const int, Nd4jLong*, Nd4jLong*, Nd4jLong*, Nd4jLong*, Nd4jLong*), LIBND4J_TYPES);
@@ -356,7 +356,7 @@ void NDArray::tile(const std::vector<Nd4jLong>& reps, NDArray& target) const {
 auto stream = getContext()->getCudaStream();
 
 prepareSpecialUse({&target}, {this});
-BUILD_DOUBLE_SELECTOR(target.dataType(), dataType(), tileKernelHH, (getSpecialBuffer(), getSpecialShapeInfo(), target.getSpecialBuffer(), target.getSpecialShapeInfo(), targetLen, ews, stream), LIBND4J_TYPES, LIBND4J_TYPES);
+BUILD_SINGLE_SELECTOR_TWICE(target.dataType(), tileKernelHH, (getSpecialBuffer(), getSpecialShapeInfo(), target.getSpecialBuffer(), target.getSpecialShapeInfo(), targetLen, ews, stream), LIBND4J_TYPES);
 registerSpecialUse({&target}, {this});
 }
 
@@ -375,7 +375,7 @@ void NDArray::tile(NDArray& target) const {
 auto stream = getContext()->getCudaStream();
 
 prepareSpecialUse({&target}, {this});
-BUILD_DOUBLE_SELECTOR(target.dataType(), dataType(), tileKernelHH, (getSpecialBuffer(), getSpecialShapeInfo(), target.getSpecialBuffer(), target.getSpecialShapeInfo(), targetLen, ews, stream), LIBND4J_TYPES, LIBND4J_TYPES);
+BUILD_SINGLE_SELECTOR_TWICE(target.dataType(), tileKernelHH, (getSpecialBuffer(), getSpecialShapeInfo(), target.getSpecialBuffer(), target.getSpecialShapeInfo(), targetLen, ews, stream), LIBND4J_TYPES);
 registerSpecialUse({&target}, {this});
 }
 
@@ -434,7 +434,7 @@ void NDArray::repeat(int dimension, NDArray& target) const {
 
 NDArray::prepareSpecialUse({&target}, {this});
 auto stream = getContext()->getCudaStream();
-BUILD_DOUBLE_SELECTOR(target.dataType(), dataType(), repeatKernelHH, (getSpecialBuffer(), target.getSpecialBuffer(), numTads, lengthOf(), packX.platformShapeInfo(), packX.platformOffsets(), packZ.platformShapeInfo(), packZ.platformOffsets(), *stream), LIBND4J_TYPES, LIBND4J_TYPES);
+BUILD_SINGLE_SELECTOR_TWICE(target.dataType(), repeatKernelHH, (getSpecialBuffer(), target.getSpecialBuffer(), numTads, lengthOf(), packX.platformShapeInfo(), packX.platformOffsets(), packZ.platformShapeInfo(), packZ.platformOffsets(), *stream), LIBND4J_TYPES);
 NDArray::registerSpecialUse({&target}, {this});
 }
 
@@ -23,6 +23,14 @@
 #include <cuda.h>
 #include <cuda_runtime.h>
 
+static Nd4jLong __device__ __noinline__ __getIndexOffset(Nd4jLong index, Nd4jLong *shapeInfo, Nd4jLong length) {
+return shape::getIndexOffset(index, shapeInfo, length);
+}
+
+static Nd4jLong __device__ __noinline__ __length(Nd4jLong *shapeInfo) {
+return shape::length(shapeInfo);
+}
+
 template <typename T, typename Lambda> static _CUDA_G void lambdaKernel(void* vx, Nd4jLong *xShapeInfo, void *vz, Nd4jLong *zShapeInfo, Lambda lambda);
 template <typename T, typename Lambda> static _CUDA_G void lambdaIndexedKernel(void* vx, Nd4jLong *xShapeInfo, void *vz, Nd4jLong *zShapeInfo, Lambda lambda);
 template <typename T, typename Lambda> static _CUDA_G void lambdaIndexedPairwiseKernel(void* vx, Nd4jLong *xShapeInfo, void* vy, Nd4jLong *yShapeInfo, void *vz, Nd4jLong *zShapeInfo, Lambda lambda);
@@ -86,7 +94,7 @@ static _CUDA_G void lambdaKernel(void* vx, Nd4jLong *xShapeInfo, void *vz, Nd4jL
 auto xOrder = shape::order(xShapeInfo);
 auto zOrder = shape::order(zShapeInfo);
 
-auto zLength = shape::length(zShapeInfo);
+auto zLength = __length(zShapeInfo);
 
 auto tid = threadIdx.x + blockIdx.x * blockDim.x;
 
@@ -95,8 +103,8 @@ static _CUDA_G void lambdaKernel(void* vx, Nd4jLong *xShapeInfo, void *vz, Nd4jL
 z[e * zEws] = lambda(x[e * xEws]);
 } else {
 for (uint e = tid; e < zLength; e += blockDim.x * gridDim.x) {
-auto xOffset = shape::getIndexOffset(e, xShapeInfo, zLength);
-auto zOffset = shape::getIndexOffset(e, zShapeInfo, zLength);
+auto xOffset = __getIndexOffset(e, xShapeInfo, zLength);
+auto zOffset = __getIndexOffset(e, zShapeInfo, zLength);
 
 z[zOffset] = lambda(x[xOffset]);
 }
@@ -115,7 +123,7 @@ static _CUDA_G void lambdaIndexedKernel(void* vx, Nd4jLong *xShapeInfo, void *vz
 auto xOrder = shape::order(xShapeInfo);
 auto zOrder = shape::order(zShapeInfo);
 
-auto zLength = shape::length(zShapeInfo);
+auto zLength = __length(zShapeInfo);
 
 auto tid = threadIdx.x + blockIdx.x * blockDim.x;
 
@@ -124,8 +132,8 @@ static _CUDA_G void lambdaIndexedKernel(void* vx, Nd4jLong *xShapeInfo, void *vz
 z[e * zEws] = lambda(e, x[e * xEws]);
 } else {
 for (uint e = tid; e < zLength; e += blockDim.x * gridDim.x) {
-auto xOffset = shape::getIndexOffset(e, xShapeInfo, zLength);
-auto zOffset = shape::getIndexOffset(e, zShapeInfo, zLength);
+auto xOffset = __getIndexOffset(e, xShapeInfo, zLength);
+auto zOffset = __getIndexOffset(e, zShapeInfo, zLength);
 
 z[zOffset] = lambda(e, x[xOffset]);
 }
@@ -147,7 +155,7 @@ static _CUDA_G void lambdaIndexedPairwiseKernel(void* vx, Nd4jLong *xShapeInfo,
 auto yOrder = shape::order(yShapeInfo);
 auto zOrder = shape::order(zShapeInfo);
 
-auto zLength = shape::length(zShapeInfo);
+auto zLength = __length(zShapeInfo);
 
 auto tid = threadIdx.x + blockIdx.x * blockDim.x;
 
@@ -156,9 +164,9 @@ static _CUDA_G void lambdaIndexedPairwiseKernel(void* vx, Nd4jLong *xShapeInfo,
 z[e * zEws] = lambda(e, x[e * xEws], y[e * yEws]);
 } else {
 for (uint e = tid; e < zLength; e += blockDim.x * gridDim.x) {
-auto xOffset = shape::getIndexOffset(e, xShapeInfo, zLength);
-auto yOffset = shape::getIndexOffset(e, yShapeInfo, zLength);
-auto zOffset = shape::getIndexOffset(e, zShapeInfo, zLength);
+auto xOffset = __getIndexOffset(e, xShapeInfo, zLength);
+auto yOffset = __getIndexOffset(e, yShapeInfo, zLength);
+auto zOffset = __getIndexOffset(e, zShapeInfo, zLength);
 
 z[zOffset] = lambda(e, x[xOffset], y[yOffset]);
 }
@@ -180,7 +188,7 @@ static _CUDA_G void lambdaPairwiseKernel(void* vx, Nd4jLong *xShapeInfo, void* v
 auto yOrder = shape::order(yShapeInfo);
 auto zOrder = shape::order(zShapeInfo);
 
-auto zLength = shape::length(zShapeInfo);
+auto zLength = __length(zShapeInfo);
 
 auto tid = threadIdx.x + blockIdx.x * blockDim.x;
 
@@ -189,9 +197,9 @@ static _CUDA_G void lambdaPairwiseKernel(void* vx, Nd4jLong *xShapeInfo, void* v
 z[e * zEws] = lambda(x[e * xEws], y[e * yEws]);
 } else {
 for (uint e = tid; e < zLength; e += blockDim.x * gridDim.x) {
-auto xOffset = shape::getIndexOffset(e, xShapeInfo, zLength);
-auto yOffset = shape::getIndexOffset(e, yShapeInfo, zLength);
-auto zOffset = shape::getIndexOffset(e, zShapeInfo, zLength);
+auto xOffset = __getIndexOffset(e, xShapeInfo, zLength);
+auto yOffset = __getIndexOffset(e, yShapeInfo, zLength);
+auto zOffset = __getIndexOffset(e, zShapeInfo, zLength);
 
 z[zOffset] = lambda(x[xOffset], y[yOffset]);
 }
@@ -216,7 +224,7 @@ static _CUDA_G void lambdaTriplewiseKernel(void* vw, Nd4jLong *wShapeInfo, void*
 auto yOrder = shape::order(yShapeInfo);
 auto zOrder = shape::order(zShapeInfo);
 
-auto zLength = shape::length(zShapeInfo);
+auto zLength = __length(zShapeInfo);
 
 auto tid = threadIdx.x + blockIdx.x * blockDim.x;
 
@@ -225,10 +233,10 @@ static _CUDA_G void lambdaTriplewiseKernel(void* vw, Nd4jLong *wShapeInfo, void*
 z[e * zEws] = lambda(w[e * wEws], x[e * xEws], y[e * yEws]);
 } else {
 for (uint e = tid; e < zLength; e += blockDim.x * gridDim.x) {
-auto wOffset = shape::getIndexOffset(e, wShapeInfo, zLength);
-auto xOffset = shape::getIndexOffset(e, xShapeInfo, zLength);
-auto yOffset = shape::getIndexOffset(e, yShapeInfo, zLength);
-auto zOffset = shape::getIndexOffset(e, zShapeInfo, zLength);
+auto wOffset = __getIndexOffset(e, wShapeInfo, zLength);
+auto xOffset = __getIndexOffset(e, xShapeInfo, zLength);
+auto yOffset = __getIndexOffset(e, yShapeInfo, zLength);
+auto zOffset = __getIndexOffset(e, zShapeInfo, zLength);
 
 z[zOffset] = lambda(w[wOffset], x[xOffset], y[yOffset]);
 }
@@ -28,6 +28,7 @@
 #include <helpers/threshold.h>
 #include <ops/specials_cuda.h>
 #include <helpers/DebugHelper.h>
+#include <AffinityManager.h>
 
 #include <exceptions/datatype_exception.h>
 #include <helpers/CudaLaunchHelper.h>
@@ -1691,11 +1692,7 @@ void setOmpMinThreads(int threads) {
 }
 
 int getDevice() {
-int curDevice = -1;
-
-cudaGetDevice(&curDevice);
-
-return curDevice;
+return nd4j::AffinityManager::currentDeviceId();
 }
 
 void setElementThreshold(int num) {
@@ -2391,8 +2388,8 @@ void sortByValue(Nd4jPointer *extraPointers,
 
 auto xLength = shape::length(xShapeInfo);
 auto xEWS = shape::elementWiseStride(xShapeInfo);
-auto xType = nd4j::ArrayOptions::dataType(xShapeInfo);
-auto yType = nd4j::ArrayOptions::dataType(yShapeInfo);
+auto xType = nd4j::ArrayOptions::dataType(yShapeInfo);
+auto yType = nd4j::ArrayOptions::dataType(xShapeInfo);
 
 
 // check if xLength is a power of 2, and use bitonic sort, if that's the case
@@ -2406,7 +2403,7 @@ void sortByValue(Nd4jPointer *extraPointers,
 
 for (int k = 2; k <= xLength; k = 2*k) {
 for (int j = k >> 1; j > 0; j = j >> 1) {
-BUILD_DOUBLE_SELECTOR(xType, yType, bitonicSortStepGenericValue, (launchDims, stream, dX, dXShapeInfo, dy, dyShapeInfo, j, k, xLength, descending), LIBND4J_TYPES, LIBND4J_TYPES);
+BUILD_DOUBLE_SELECTOR(xType, yType, bitonicSortStepGenericKey, (launchDims, stream, dy, dyShapeInfo, dX, dXShapeInfo, j, k, xLength, descending), LIBND4J_TYPES, LIBND4J_TYPES);
 }
 }
 } else {
@@ -2430,7 +2427,7 @@ void sortByValue(Nd4jPointer *extraPointers,
 int rev = 0;
 do{
 int half = n >> 1;
-BUILD_DOUBLE_SELECTOR(xType, yType, bitonicArbitraryStepGenericValue, (launchDims, stream, dX, dXShapeInfo, dy, dyShapeInfo, n, xLength, rev, descending), LIBND4J_TYPES, LIBND4J_TYPES);
+BUILD_DOUBLE_SELECTOR(xType, yType, bitonicArbitraryStepGenericKey, (launchDims, stream, dy, dyShapeInfo, dX, dXShapeInfo, n, xLength, rev, descending), LIBND4J_TYPES, LIBND4J_TYPES);
 n>>=1;
 rev = 1;
 } while(n > 1);
@@ -3342,6 +3339,7 @@ Nd4jLong getConstantDataBufferSizeOf(nd4j::ConstantDataBuffer* dbf) {
 nd4j::graph::Context* createGraphContext(int nodeId) {
 return new nd4j::graph::Context(nodeId);
 }
 
 nd4j::graph::RandomGenerator* getGraphContextRandomGenerator(nd4j::graph::Context* ptr) {
 return &ptr->randomGenerator();
 }
@@ -3460,3 +3458,35 @@ const char* runFullBenchmarkSuit(bool printOut) {
 Nd4jLong getCachedMemory(int deviceId) {
 return nd4j::ConstantHelper::getInstance()->getCachedAmount(deviceId);
 }
+
+nd4j::LaunchContext* defaultLaunchContext() {
+return LaunchContext::defaultContext();
+}
+
+Nd4jPointer lcScalarPointer(OpaqueLaunchContext* lc) {
+return lc->getScalarPointer();
+}
+
+Nd4jPointer lcReductionPointer(OpaqueLaunchContext* lc) {
+return lc->getReductionPointer();
+}
+
+Nd4jPointer lcAllocationPointer(OpaqueLaunchContext* lc) {
+return lc->getAllocationPointer();
+}
+
+Nd4jPointer lcExecutionStream(OpaqueLaunchContext* lc) {
+return lc->getCudaStream();
+}
+
+Nd4jPointer lcCopyStream(OpaqueLaunchContext* lc) {
+return lc->getCudaSpecialStream();
+}
+
+Nd4jPointer lcBlasHandle(OpaqueLaunchContext* lc) {
+return lc->getCublasHandle();
+}
+
+Nd4jPointer lcSolverHandle(OpaqueLaunchContext* lc) {
+return lc->getCusolverHandle();
+}
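Note the asymmetry with the CPU backend shown earlier: there the same lc* functions return nullptr, while here they forward to the real CUDA resources. A caller that has to work against both backends therefore needs a guard, roughly as in this hedged sketch (assumes the declarations from NativeOps.h above are visible; the fallback branch is an assumption about how a dual-backend caller would behave):

    // Hedged sketch, not PR code.
    void* pickBlasHandle(OpaqueLaunchContext* lc) {
        auto handle = lcBlasHandle(lc);
        if (handle == nullptr)
            return nullptr;        // CPU build: no cuBLAS handle exists, use host BLAS paths
        return handle;             // CUDA build: points at the context's cublasHandle
    }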
@@ -14,29 +14,33 @@
  * SPDX-License-Identifier: Apache-2.0
  ******************************************************************************/
 
-package org.nd4j.jita.allocator.garbage;
-
-import lombok.NonNull;
-import lombok.extern.slf4j.Slf4j;
-import org.nd4j.jita.allocator.impl.AtomicAllocator;
-import org.nd4j.linalg.api.memory.Deallocator;
-import org.nd4j.linalg.factory.Nd4j;
-import org.nd4j.linalg.jcublas.context.CudaContext;
-
-/**
- * This class provides Deallocator implementation for tracking/releasing CudaContexts once thread holding it dies
- * @author raver119@gmail.com
- */
-@Slf4j
-public class ContextDeallocator implements Deallocator {
-private CudaContext context;
-
-public ContextDeallocator(@NonNull CudaContext context) {
-this.context = context;
-}
-
-@Override
-public void deallocate() {
-AtomicAllocator.getInstance().getContextPool().releaseContext(context);
-}
+//
+// @author raver119@gmail.com
+//
+
+#ifndef LIBND4J_AFFINITYMANAGER_H
+#define LIBND4J_AFFINITYMANAGER_H
+
+#include <dll.h>
+#include <pointercast.h>
+#include <atomic>
+#include <mutex>
+
+namespace nd4j {
+class ND4J_EXPORT AffinityManager {
+private:
+static std::atomic<int> _lastDevice;
+static int _numberOfDevices;
+static std::mutex _currentMutex;
+static std::mutex _numberMutex;
+
+public:
+static int currentNativeDeviceId();
+static int currentDeviceId();
+static int numberOfDevices();
+static void setCurrentDevice(int deviceId);
+static void setCurrentNativeDevice(int deviceId);
+};
 }
 
+#endif //DEV_TESTS_AFFINITYMANAGER_H
@@ -0,0 +1,58 @@
+/*******************************************************************************
+ * Copyright (c) 2015-2018 Skymind, Inc.
+ *
+ * This program and the accompanying materials are made available under the
+ * terms of the Apache License, Version 2.0 which is available at
+ * https://www.apache.org/licenses/LICENSE-2.0.
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ ******************************************************************************/
+
+//
+// @author raver119@gmail.com
+//
+
+#ifndef LIBND4J_CONTEXTBUFFERS_H
+#define LIBND4J_CONTEXTBUFFERS_H
+
+#include <dll.h>
+#include <pointercast.h>
+
+namespace nd4j {
+class ND4J_EXPORT ContextBuffers {
+private:
+void* _reductionPointer;
+void* _scalarPointer;
+void* _allocationPointer;
+bool _allocated = true;
+
+int _deviceId = -1;
+
+void initialize();
+public:
+ContextBuffers();
+ContextBuffers(void* rPointer, void* sPointer, void* aPointer, bool isOwner = false);
+~ContextBuffers();
+
+void* reductionBuffer();
+void* scalarBuffer();
+void* allocationBuffer();
+
+void setReductionBuffer(void* pointer);
+void setScalarBuffer(void* pointer);
+void setAllocationBuffer(void* pointer);
+
+void triggerOwnership(bool isOwner);
+
+int deviceId();
+};
+}
+
+#endif //DEV_TESTS_CONTEXTBUFFERS_H
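ContextBuffers pulls the per-thread scratch pointers (reduction/scalar/allocation buffers) out of LaunchContext, which is what the "thread-local buffers/affinity" commits refer to. A hedged sketch of the intended usage pattern follows; the thread_local variable and the helper are illustrative, not code from this PR:

    // Hedged sketch: one ContextBuffers instance per host thread; only the class
    // and its methods above are from the PR, the wiring below is assumed.
    #include <execution/ContextBuffers.h>

    static thread_local nd4j::ContextBuffers threadBuffers;  // initialized lazily per thread/device

    void* threadReductionScratch() {
        // reductionBuffer() is declared above; it hands back this thread's scratch area
        return threadBuffers.reductionBuffer();
    }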
@@ -35,6 +35,8 @@
 #include <op_boilerplate.h>
 #include <memory/Workspace.h>
 #include <vector>
+#include <mutex>
+#include <execution/ContextBuffers.h>



@@ -44,17 +46,16 @@ class ND4J_EXPORT LaunchContext {

    private:
        static std::vector<std::shared_ptr<LaunchContext>> _contexts;
+       static std::mutex _mutex;

 #ifdef __CUDABLAS__

 #ifndef __JAVACPP_HACK__

-       void* _reductionPointer;
-       void* _scalarPointer;
-       int* _allocationPointer;
-       cudaStream_t *_cudaStream = nullptr;
-       cudaStream_t *_cudaSpecialStream = nullptr;
-       void *_cublasHandle = nullptr;
+       cudaStream_t* _cudaStream = nullptr;
+       cudaStream_t* _cudaSpecialStream = nullptr;
+       void* _cublasHandle = nullptr;
+       void* _cusolverHandle = nullptr;

 #endif // JCPP

@@ -62,31 +63,27 @@ class ND4J_EXPORT LaunchContext {
 #endif // CUDA
        nd4j::memory::Workspace* _workspace = nullptr;
        int _deviceID = 0;

    public:
 #ifdef __CUDABLAS__

 #ifndef __JAVACPP_HACK__
        LaunchContext(cudaStream_t* cudaStream, cudaStream_t& specialCudaStream, void* reductionPointer = nullptr, void* scalarPointer = nullptr, int* allocationPointer = nullptr);

-       FORCEINLINE void* getReductionPointer () const {return _reductionPointer;};
-       FORCEINLINE void* getScalarPointer() const {return _scalarPointer;};
-       FORCEINLINE int* getAllocationPointer() const {return _allocationPointer;};
-       FORCEINLINE void* getCublasHandle() const {return _cublasHandle;};
-       FORCEINLINE cudaStream_t* getCudaStream() const {return _cudaStream;};
-       FORCEINLINE cudaStream_t* getCudaSpecialStream() const {return _cudaSpecialStream;};
-       FORCEINLINE void setReductionPointer (void* reductionPointer) {_reductionPointer = reductionPointer;};
-       FORCEINLINE void setScalarPointer(void* scalarPointer) {_scalarPointer = scalarPointer;};
-       FORCEINLINE void setAllocationPointer(int* allocationPointer) {_allocationPointer = allocationPointer;};
-       FORCEINLINE void setCudaStream(cudaStream_t* cudaStream) {_cudaStream = cudaStream;};
-       FORCEINLINE void setCudaSpecialStream(cudaStream_t* cudaStream) {_cudaSpecialStream = cudaStream;};
-       FORCEINLINE void setCublasHandle(void *handle) {_cublasHandle = handle; };
+       void* getReductionPointer () const;
+       void* getScalarPointer() const;
+       int* getAllocationPointer() const;
+       void* getCublasHandle() const;
+       void* getCusolverHandle() const;
+       cudaStream_t* getCudaStream() const;
+       cudaStream_t* getCudaSpecialStream() const;
+       void setReductionPointer (void* reductionPointer);
+       void setScalarPointer(void* scalarPointer);
+       void setAllocationPointer(int* allocationPointer);
+       void setCudaStream(cudaStream_t* cudaStream);
+       void setCudaSpecialStream(cudaStream_t* cudaStream);
+       void setCublasHandle(void *handle);

 #endif // JCPP
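Why the accessors move out of line: the reduction/scalar/allocation pointers no longer live in the LaunchContext object itself but in a thread_local ContextBuffers instance defined in the implementation file, so the header can only declare them. A simplified, hypothetical sketch of that pattern (names shortened, not the actual sources):

    // sketch of the out-of-line accessor pattern used above (hypothetical names)
    // header: declaration only
    struct Ctx { void* getReductionPointer() const; };

    // implementation file: per-thread storage the accessor reads from
    thread_local void* tlsReductionBuffer = nullptr;
    void* Ctx::getReductionPointer() const { return tlsReductionBuffer; }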
@@ -14,28 +14,30 @@
 * SPDX-License-Identifier: Apache-2.0
 ******************************************************************************/

-package org.nd4j.jita.allocator.context;
-
-import org.nd4j.linalg.jcublas.context.CudaContext;
-
-/**
- * This interface describes pool of CudaContext objects, used to execute kernels
- * @author raver119@gmail.com
- */
-public interface ContextPool {
-    /**
-     * This method returns CudaContext for given device
-     * @param deviceId
-     * @return
-     */
-    CudaContext acquireContextForDevice(Integer deviceId);
-
-    @Deprecated
-    ContextPack acquireContextPackForDevice(Integer deviceId);
-
-    /**
-     * This method returns CudaContext to the pool for reuse
-     * @param context
-     */
-    void releaseContext(CudaContext context);
+//
+// @author raver119@gmail.com
+//
+
+#include <execution/AffinityManager.h>
+
+namespace nd4j {
+    int AffinityManager::currentDeviceId() { return 0; }
+
+    int AffinityManager::currentNativeDeviceId() { return 0; }
+
+    int AffinityManager::numberOfDevices() { return 1; }
+
+    void AffinityManager::setCurrentDevice(int deviceId) {
+        // no-op
+    }
+
+    void AffinityManager::setCurrentNativeDevice(int deviceId) {
+        // no-op
+    }
 }
@@ -0,0 +1,74 @@
+/*******************************************************************************
+ * Copyright (c) 2015-2018 Skymind, Inc.
+ *
+ * This program and the accompanying materials are made available under the
+ * terms of the Apache License, Version 2.0 which is available at
+ * https://www.apache.org/licenses/LICENSE-2.0.
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ ******************************************************************************/
+
+//
+// @author raver119@gmail.com
+//
+#include <execution/ContextBuffers.h>
+#include <execution/AffinityManager.h>
+
+namespace nd4j {
+    ContextBuffers::ContextBuffers() {
+        _deviceId = AffinityManager::currentDeviceId();
+    }
+
+    ContextBuffers::~ContextBuffers() {
+        // no-op
+    }
+
+    ContextBuffers::ContextBuffers(void* rPointer, void* sPointer, void* aPointer, bool isOwner) {
+        _reductionPointer = rPointer;
+        _scalarPointer = sPointer;
+        _allocationPointer = aPointer;
+        _allocated = isOwner;
+    }
+
+    void ContextBuffers::initialize() {
+        // no-op
+    }
+
+    void* ContextBuffers::reductionBuffer() { return _reductionPointer; }
+
+    void* ContextBuffers::scalarBuffer() { return _scalarPointer; }
+
+    void* ContextBuffers::allocationBuffer() { return _allocationPointer; }
+
+    void ContextBuffers::setReductionBuffer(void* pointer) { _reductionPointer = pointer; }
+
+    void ContextBuffers::setScalarBuffer(void* pointer) { _scalarPointer = pointer; }
+
+    void ContextBuffers::setAllocationBuffer(void* pointer) { _allocationPointer = pointer; }
+
+    void ContextBuffers::triggerOwnership(bool isOwner) { _allocated = isOwner; }
+
+    int ContextBuffers::deviceId() { return _deviceId; }
+}
@@ -0,0 +1,56 @@
+/*******************************************************************************
+ * Copyright (c) 2015-2018 Skymind, Inc.
+ *
+ * This program and the accompanying materials are made available under the
+ * terms of the Apache License, Version 2.0 which is available at
+ * https://www.apache.org/licenses/LICENSE-2.0.
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ ******************************************************************************/
+
+//
+// Created by raver119 on 30.11.17.
+//
+
+#include <execution/LaunchContext.h>
+#include <logger.h>
+#include <exceptions/cuda_exception.h>
+#include <thread>
+
+thread_local nd4j::ContextBuffers contextBuffers = nd4j::ContextBuffers();
+
+namespace nd4j {
+
+    LaunchContext::~LaunchContext() {
+    }
+
+    std::vector<std::shared_ptr<LaunchContext>> LaunchContext::_contexts = std::vector<std::shared_ptr<LaunchContext>>();
+
+    ////////////////////////////////////////////////////////////////////////
+    LaunchContext::LaunchContext() {
+        // default constructor, just to make clang/ranlib happy
+        _workspace = nullptr;
+        _deviceID = 0;
+    }
+
+    LaunchContext::LaunchContext(Nd4jPointer cudaStream, Nd4jPointer reductionPointer, Nd4jPointer scalarPointer, Nd4jPointer allocationPointer) {
+
+    }
+
+    LaunchContext* LaunchContext::defaultContext() {
+        // TODO: we need it to be device-aware, but only once we add NUMA support for cpu
+        if (LaunchContext::_contexts.empty()) {
+            LaunchContext::_contexts.emplace_back(std::make_shared<LaunchContext>());
+        }
+
+        // return context for current device
+        return LaunchContext::_contexts[0].get();
+    }
+}
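A typical call site for the CPU backend above, as a hedged minimal sketch (assumes the libnd4j headers are on the include path); defaultContext() lazily builds the context list on first use and then keeps returning the same pointer:

    // Minimal usage sketch (illustration only).
    #include <execution/LaunchContext.h>

    void runSomething() {
        // first call creates the shared CPU context, later calls are cheap
        nd4j::LaunchContext* ctx = nd4j::LaunchContext::defaultContext();
        // ops receive this pointer and pull workspaces/buffers from it as needed
        (void) ctx;
    }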
@@ -0,0 +1,108 @@
+/*******************************************************************************
+ * Copyright (c) 2015-2018 Skymind, Inc.
+ *
+ * This program and the accompanying materials are made available under the
+ * terms of the Apache License, Version 2.0 which is available at
+ * https://www.apache.org/licenses/LICENSE-2.0.
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ ******************************************************************************/
+
+//
+// @author raver119@gmail.com
+//
+
+#include <logger.h>
+#include <execution/AffinityManager.h>
+#include <exceptions/cuda_exception.h>
+
+thread_local int globalThreadToDevice = -1;
+
+namespace nd4j {
+    std::mutex AffinityManager::_currentMutex;
+    std::mutex AffinityManager::_numberMutex;
+    int AffinityManager::_numberOfDevices = -1;
+
+    int AffinityManager::currentDeviceId() {
+        // if there's no affinity set - set it now
+        if (globalThreadToDevice < 0) {
+
+            // this block must be thread-local
+            _currentMutex.lock();
+
+            globalThreadToDevice = _lastDevice++;
+
+            // we need to check if we've got deviceId >= number of actual devices, and reset to zero otherwise
+            if (globalThreadToDevice >= numberOfDevices()) {
+                globalThreadToDevice = 0;
+                _lastDevice = numberOfDevices() > 1 ? 1 : 0;
+            }
+
+            _currentMutex.unlock();
+
+            setCurrentDevice(globalThreadToDevice);
+        }
+
+        // if we already know affinity - just return it
+        if (globalThreadToDevice >= 0)
+            return globalThreadToDevice;
+
+        int dev = 0;
+        auto res = cudaGetDevice(&dev);
+
+        if (res != 0)
+            throw cuda_exception::build("cudaGetDevice failed", res);
+
+        return dev;
+    }
+
+    int AffinityManager::currentNativeDeviceId() {
+        int dev = 0;
+        auto res = cudaGetDevice(&dev);
+
+        if (res != 0)
+            throw cuda_exception::build("cudaGetDevice failed", res);
+
+        return dev;
+    }
+
+    int AffinityManager::numberOfDevices() {
+        _numberMutex.lock();
+        // we want to cache number of devices
+        if (_numberOfDevices <= 0) {
+            int dev = 0;
+            auto res = cudaGetDeviceCount(&dev);
+
+            if (res != 0)
+                throw cuda_exception::build("cudaGetDeviceCount failed", res);
+
+            _numberOfDevices = dev;
+        }
+        _numberMutex.unlock();
+
+        return _numberOfDevices;
+    }
+
+    void AffinityManager::setCurrentNativeDevice(int deviceId) {
+        auto res = cudaSetDevice(deviceId);
+    }
+
+    void AffinityManager::setCurrentDevice(int deviceId) {
+        auto res = cudaSetDevice(deviceId);
+        if (res != 0)
+            throw cuda_exception::build("cudaSetDevice failed", res);
+
+        // update thread-device affinity
+        globalThreadToDevice = deviceId;
+
+        // TODO: update context buffers?
+    }
+
+    std::atomic<int> AffinityManager::_lastDevice;// = std::atomic<int>(initialV);
+}
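The assignment above pins each new thread to a device the first time it asks for its affinity and hands devices out in a circle. A standalone sketch of the same idea using only the CUDA runtime (hypothetical helper, not part of this diff):

    // Standalone sketch of first-touch, round-robin thread->device affinity (illustration only).
    #include <cuda_runtime.h>
    #include <atomic>

    static std::atomic<int> lastDevice{0};
    thread_local int myDevice = -1;

    int currentDeviceForThisThread() {
        if (myDevice < 0) {
            int numDevices = 0;
            cudaGetDeviceCount(&numDevices);

            // hand devices out in a circle; each thread keeps the one it got
            myDevice = lastDevice.fetch_add(1) % (numDevices > 0 ? numDevices : 1);
            cudaSetDevice(myDevice);
        }
        return myDevice;
    }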
@@ -0,0 +1,116 @@
+/*******************************************************************************
+ * Copyright (c) 2015-2018 Skymind, Inc.
+ *
+ * This program and the accompanying materials are made available under the
+ * terms of the Apache License, Version 2.0 which is available at
+ * https://www.apache.org/licenses/LICENSE-2.0.
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ ******************************************************************************/
+
+//
+// @author raver119@gmail.com
+//
+
+#include <execution/ContextBuffers.h>
+#include <logger.h>
+#include <AffinityManager.h>
+
+#include <cuda.h>
+#include <cuda_runtime_api.h>
+#include <cuda_runtime.h>
+#include <cuda_device_runtime_api.h>
+
+namespace nd4j {
+    ContextBuffers::ContextBuffers() {
+        nd4j_printf("Creating ContextBuffers for device [%i]\n", AffinityManager::currentDeviceId());
+        _deviceId = AffinityManager::currentDeviceId();
+    }
+
+    ContextBuffers::~ContextBuffers() {
+        if (_allocated) {
+            nd4j_printf("Releasing ContextBuffers\n","");
+
+            if (_allocationPointer != nullptr)
+                cudaFree(_allocationPointer);
+
+            if (_scalarPointer != nullptr)
+                cudaFree(_scalarPointer);
+
+            if (_reductionPointer != nullptr)
+                cudaFree(_reductionPointer);
+        }
+    }
+
+    ContextBuffers::ContextBuffers(void* rPointer, void* sPointer, void* aPointer, bool isOwner) {
+        _reductionPointer = rPointer;
+        _scalarPointer = sPointer;
+        _allocationPointer = aPointer;
+        _allocated = isOwner;
+    }
+
+    void ContextBuffers::initialize() {
+        nd4j_printf("Initializing buffers on deviceId [%i]\n", AffinityManager::currentNativeDeviceId());
+
+        auto res = cudaMalloc(reinterpret_cast<void**>(&_reductionPointer), 1024 * 1024 * 8);
+        if (res != 0)
+            throw std::runtime_error("_reductionPointer allocation failed");
+
+        res = cudaMalloc(reinterpret_cast<void**>(&_scalarPointer), 16);
+        if (res != 0)
+            throw std::runtime_error("_scalarPointer allocation failed");
+
+        res = cudaMalloc(reinterpret_cast<void**>(&_allocationPointer), 1024 * 1024 * 8);
+        if (res != 0)
+            throw std::runtime_error("_allocationPointer allocation failed");
+
+        _allocated = true;
+    }
+
+    void* ContextBuffers::reductionBuffer() {
+        if (_reductionPointer == nullptr)
+            initialize();
+
+        return _reductionPointer;
+    }
+
+    void* ContextBuffers::scalarBuffer() {
+        if (_scalarPointer == nullptr)
+            initialize();
+
+        return _scalarPointer;
+    }
+
+    void* ContextBuffers::allocationBuffer() {
+        if (_allocationPointer == nullptr)
+            initialize();
+
+        return _allocationPointer;
+    }
+
+    void ContextBuffers::setReductionBuffer(void* pointer) { _reductionPointer = pointer; }
+
+    void ContextBuffers::setScalarBuffer(void* pointer) { _scalarPointer = pointer; }
+
+    void ContextBuffers::setAllocationBuffer(void* pointer) { _allocationPointer = pointer; }
+
+    void ContextBuffers::triggerOwnership(bool isOwner) { _allocated = isOwner; }
+
+    int ContextBuffers::deviceId() { return _deviceId; }
+}
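Because the buffers above belong to a thread_local object, they are allocated only when a thread first touches them and released when that thread exits. A self-contained sketch of that lifetime (plain CUDA, hypothetical ScratchBuffers class, not the actual ContextBuffers):

    // Illustration of the thread_local lazy-allocation/RAII lifetime used above.
    #include <cuda_runtime.h>
    #include <thread>

    struct ScratchBuffers {
        void* reduction = nullptr;

        void* ensureReduction() {
            if (reduction == nullptr)
                cudaMalloc(&reduction, 8 * 1024 * 1024);   // allocated on first use only
            return reduction;
        }

        ~ScratchBuffers() {
            if (reduction != nullptr)
                cudaFree(reduction);                        // released when the owning thread exits
        }
    };

    thread_local ScratchBuffers scratch;

    int main() {
        std::thread t([]{ scratch.ensureReduction(); });    // buffer lives and dies with this thread
        t.join();
        return 0;
    }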
@@ -0,0 +1,182 @@
+/*******************************************************************************
+ * Copyright (c) 2015-2018 Skymind, Inc.
+ *
+ * This program and the accompanying materials are made available under the
+ * terms of the Apache License, Version 2.0 which is available at
+ * https://www.apache.org/licenses/LICENSE-2.0.
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ ******************************************************************************/
+
+//
+// Created by raver119 on 30.11.17.
+//
+
+#include <execution/LaunchContext.h>
+#include <logger.h>
+#include <exceptions/cuda_exception.h>
+#include <helpers/cublasHelper.h>
+#include <thread>
+#include <execution/AffinityManager.h>
+
+thread_local nd4j::ContextBuffers contextBuffers = nd4j::ContextBuffers();
+
+namespace nd4j {
+
+    std::vector<std::shared_ptr<LaunchContext>> LaunchContext::_contexts = std::vector<std::shared_ptr<LaunchContext>>();
+    std::mutex LaunchContext::_mutex;
+
+    ////////////////////////////////////////////////////////////////////////
+    LaunchContext::LaunchContext(cudaStream_t *cudaStream, cudaStream_t& specialCudaStream, void* reductionPointer, void* scalarPointer, int* allocationPointer) {
+
+        _cudaStream = cudaStream;
+        _cudaSpecialStream = &specialCudaStream; // ideal is = new cudaStream_t; *_cudaSpecialStream = specialCudaStream;
+        //_reductionPointer = reductionPointer;
+        //_scalarPointer = scalarPointer;
+        //_allocationPointer = allocationPointer;
+        _workspace = nullptr;
+        _isAllocated = false;
+    }
+
+    LaunchContext::~LaunchContext() {
+        if (_isAllocated) {
+            cudaStreamSynchronize(*_cudaStream);
+            cudaStreamSynchronize(*_cudaSpecialStream);
+
+            cudaStreamDestroy(*_cudaStream);
+            cudaStreamDestroy(*_cudaSpecialStream);
+
+            delete _cudaStream;
+            delete _cudaSpecialStream;
+        }
+    }
+
+    ////////////////////////////////////////////////////////////////////////
+    LaunchContext::LaunchContext() {
+        // default constructor, just to make clang/ranlib happy
+        _workspace = nullptr;
+        _deviceID = 0;
+
+        _isAllocated = true;
+        _cudaStream = new cudaStream_t();
+        _cudaSpecialStream = new cudaStream_t();
+        if (nullptr == _cudaStream || nullptr == _cudaSpecialStream)
+            throw std::runtime_error("Failed to allocate memory for new CUDA stream");
+
+        cudaError_t err = cudaStreamCreate(_cudaStream);
+        if (err != 0)
+            throw cuda_exception::build("Failed to create default CUDA stream with launch context", err);
+
+        err = cudaStreamCreate(_cudaSpecialStream);
+        if (err != 0)
+            throw cuda_exception::build("Failed to create special CUDA stream with launch context", err);
+
+        _cublasHandle = CublasHelper::getInstance()->handle();
+
+        _cusolverHandle = CublasHelper::getInstance()->solver();
+
+        auto res = cudaStreamSynchronize(*_cudaStream);
+        if (res != 0)
+            throw cuda_exception::build("Initial sync failed", res);
+    }
+
+    LaunchContext::LaunchContext(Nd4jPointer cudaStream, Nd4jPointer reductionPointer, Nd4jPointer scalarPointer, Nd4jPointer allocationPointer) {
+        _isAllocated = false;
+        _cudaStream = reinterpret_cast<cudaStream_t*>(cudaStream);
+        _cudaSpecialStream = reinterpret_cast<cudaStream_t*>(cudaStream);
+        //_reductionPointer = reductionPointer;
+        //_scalarPointer = scalarPointer;
+        //_allocationPointer = reinterpret_cast<int *>(allocationPointer);
+    }
+
+    LaunchContext* LaunchContext::defaultContext() {
+        /**
+         * This method returns LaunchContext, that has multiple entities within:
+         * 1) temporary buffers. they must be per-thread
+         * 2) CUDA stream. it must be either per-thread or per-device
+         * 3) cuBLAS handle. it must be per-device
+         */
+        auto deviceId = AffinityManager::currentDeviceId();
+
+        // we need this block synchronous, to avoid double initialization etc
+        _mutex.lock();
+        if (LaunchContext::_contexts.empty()) {
+            // create one context per device
+            auto numDevices = AffinityManager::numberOfDevices();
+
+            _contexts.resize(numDevices);
+            for (int e = 0; e < numDevices; e++) {
+                AffinityManager::setCurrentDevice(e);
+
+                LaunchContext::_contexts[e] = std::make_shared<LaunchContext>();
+            }
+
+            // don't forget to restore device back again
+            AffinityManager::setCurrentDevice(deviceId);
+        }
+        _mutex.unlock();
+
+        // return context for current device
+        return LaunchContext::_contexts[deviceId].get();
+    }
+
+    void* LaunchContext::getReductionPointer () const { return contextBuffers.reductionBuffer(); };
+
+    void* LaunchContext::getScalarPointer() const { return contextBuffers.scalarBuffer(); };
+
+    int* LaunchContext::getAllocationPointer() const { return reinterpret_cast<int*>(contextBuffers.allocationBuffer()); };
+
+    void* LaunchContext::getCublasHandle() const { return _cublasHandle; };
+
+    void* LaunchContext::getCusolverHandle() const { return _cusolverHandle; };
+
+    cudaStream_t* LaunchContext::getCudaStream() const { return _cudaStream; };
+
+    cudaStream_t* LaunchContext::getCudaSpecialStream() const { return _cudaSpecialStream; };
+
+    void LaunchContext::setReductionPointer (void* reductionPointer) { contextBuffers.setReductionBuffer(reductionPointer); };
+
+    void LaunchContext::setScalarPointer(void* scalarPointer) { contextBuffers.setScalarBuffer(scalarPointer); };
+
+    void LaunchContext::setAllocationPointer(int* allocationPointer) { contextBuffers.setAllocationBuffer(allocationPointer); };
+
+    void LaunchContext::setCudaStream(cudaStream_t* cudaStream) { _cudaStream = cudaStream; };
+
+    void LaunchContext::setCudaSpecialStream(cudaStream_t* cudaStream) { _cudaSpecialStream = cudaStream; };
+
+    void LaunchContext::setCublasHandle(void *handle) { _cublasHandle = handle; };
+}
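Putting the pieces together, a hedged usage sketch of the per-device context machinery introduced above (assumes the libnd4j headers and a CUDA build; the kernel launch is only a placeholder comment): the calling thread is pinned to a device, the per-device LaunchContext supplies the stream and cuBLAS handle, and scratch buffers come from the thread's own ContextBuffers.

    // Usage sketch only: per-thread device affinity + per-device LaunchContext.
    #include <execution/AffinityManager.h>
    #include <execution/LaunchContext.h>
    #include <cuda_runtime.h>

    void launchOnMyDevice() {
        // the calling thread is pinned to a device on first use
        int deviceId = nd4j::AffinityManager::currentDeviceId();

        // one LaunchContext exists per device; this returns the one for deviceId
        auto ctx = nd4j::LaunchContext::defaultContext();

        // streams and cuBLAS handle are per-device, scratch buffers are per-thread
        cudaStream_t* stream = ctx->getCudaStream();
        void* reductionBuffer = ctx->getReductionPointer();

        // ... enqueue kernels on *stream here ...
        cudaStreamSynchronize(*stream);
        (void) deviceId; (void) reductionBuffer;
    }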
@@ -1,130 +0,0 @@
-/*******************************************************************************
- * Copyright (c) 2015-2018 Skymind, Inc.
- *
- * This program and the accompanying materials are made available under the
- * terms of the Apache License, Version 2.0 which is available at
- * https://www.apache.org/licenses/LICENSE-2.0.
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations
- * under the License.
- *
- * SPDX-License-Identifier: Apache-2.0
- ******************************************************************************/
-
-//
-// Created by raver119 on 30.11.17.
-//
-
-#include <execution/LaunchContext.h>
-#include <logger.h>
-#include <exceptions/cuda_exception.h>
-#include <helpers/cublasHelper.h>
-
-namespace nd4j {
-
-#ifdef __CUDABLAS__
-
-    ////////////////////////////////////////////////////////////////////////
-    LaunchContext::LaunchContext(cudaStream_t *cudaStream, cudaStream_t& specialCudaStream, void* reductionPointer, void* scalarPointer, int* allocationPointer) {
-
-        _cudaStream = cudaStream;
-        _cudaSpecialStream = &specialCudaStream; // ideal is = new cudaStream_t; *_cudaSpecialStream = specialCudaStream;
-        _reductionPointer = reductionPointer;
-        _scalarPointer = scalarPointer;
-        _allocationPointer = allocationPointer;
-        _workspace = nullptr;
-        _isAllocated = false;
-    }
-#endif
-
-    LaunchContext::~LaunchContext() {
-#ifdef __CUDABLAS__
-        if (_isAllocated) {
-            cudaStreamSynchronize(*_cudaStream);
-            cudaStreamSynchronize(*_cudaSpecialStream);
-
-            cudaStreamDestroy(*_cudaStream);
-            cudaStreamDestroy(*_cudaSpecialStream);
-
-            delete _cudaStream;
-            delete _cudaSpecialStream;
-
-            cudaFree(_reductionPointer);
-            cudaFree(_allocationPointer);
-            cudaFree(_scalarPointer);
-
-            cublas::destroyHandle(_cublasHandle);
-        }
-#endif
-    }
-
-    std::vector<std::shared_ptr<LaunchContext>> LaunchContext::_contexts = std::vector<std::shared_ptr<LaunchContext>>();
-
-    ////////////////////////////////////////////////////////////////////////
-    LaunchContext::LaunchContext() {
-        // default constructor, just to make clang/ranlib happy
-        _workspace = nullptr;
-        _deviceID = 0;
-
-#ifdef __CUDABLAS__
-        _isAllocated = true;
-        _cudaStream = new cudaStream_t();
-        _cudaSpecialStream = new cudaStream_t();
-        if (nullptr == _cudaStream || nullptr == _cudaSpecialStream)
-            throw std::runtime_error("Failed to allocate memory for new CUDA stream");
-
-        cudaError_t err = cudaStreamCreate(_cudaStream);
-        if (err != 0)
-            throw cuda_exception::build("Failed to create default CUDA stream with launch context", err);
-
-        err = cudaStreamCreate(_cudaSpecialStream);
-        if (err != 0)
-            throw cuda_exception::build("Failed to create special CUDA stream with launch context", err);
-
-        _cublasHandle = cublas::handle();
-
-        auto res = cudaStreamSynchronize(*_cudaStream);
-        if (res != 0)
-            throw cuda_exception::build("Initial sync failed", res);
-
-        res = cudaMalloc(reinterpret_cast<void**>(&_reductionPointer), 1024 * 1024 * 8);
-        if (res != 0)
-            throw std::runtime_error("_reductionPointer allocation failed");
-
-        res = cudaMalloc(reinterpret_cast<void**>(&_scalarPointer), 8);
-        if (res != 0)
-            throw std::runtime_error("_scalarPointer allocation failed");
-
-        res = cudaMalloc(reinterpret_cast<void**>(&_allocationPointer), 1024 * 1024 * 8);
-        if (res != 0)
-            throw std::runtime_error("_allocationPointer allocation failed");
-#else
-        //
-#endif
-    }
-
-    LaunchContext::LaunchContext(Nd4jPointer cudaStream, Nd4jPointer reductionPointer, Nd4jPointer scalarPointer, Nd4jPointer allocationPointer) {
-#ifdef __CUDABLAS__
-        _isAllocated = false;
-        _cudaStream = reinterpret_cast<cudaStream_t*>(cudaStream);
-        _cudaSpecialStream = reinterpret_cast<cudaStream_t*>(cudaStream);
-        _reductionPointer = reductionPointer;
-        _scalarPointer = scalarPointer;
-        _allocationPointer = reinterpret_cast<int *>(allocationPointer);
-#else
-        // no-op
-#endif
-    }
-
-    LaunchContext* LaunchContext::defaultContext() {
-        // TODO: we need it to be device-aware
-        if (LaunchContext::_contexts.empty()) {
-            LaunchContext::_contexts.emplace_back(std::make_shared<LaunchContext>());
-        }
-        return LaunchContext::_contexts[0].get();
-    }
-
-}
@@ -21,6 +21,7 @@
 #ifndef __CUDABLAS__

 #include <ConstantHelper.h>
+#include <execution/AffinityManager.h>
 #include <types/types.h>
 #include <loops/type_conversions.h>
 #include <type_boilerplate.h>

@@ -59,11 +60,11 @@ namespace nd4j {
    }

    int ConstantHelper::getCurrentDevice() {
-       return 0L;
+       return AffinityManager::currentDeviceId();
    }

    int ConstantHelper::getNumberOfDevices() {
-       return 1;
+       return AffinityManager::numberOfDevices();
    }

    ConstantDataBuffer* ConstantHelper::constantBuffer(const ConstantDescriptor &descriptor, nd4j::DataType dataType) {
@@ -21,6 +21,7 @@
 #include "../MmulHelper.h"
 #include <NDArrayFactory.h>
 #include <helpers/BlasHelper.h>
+#include <exceptions/datatype_exception.h>


 namespace nd4j {

@@ -148,6 +149,11 @@ static void usualDot(const Nd4jLong length, const double alpha, const void* vX,
 //////////////////////////////////////////////////////////////////////////////
 // MXK x KxN = MxN
 NDArray* MmulHelper::mmulMxM(const NDArray* A, const NDArray* B, NDArray* C, const double alpha, const double beta, const char outOrder) {
+    if (A->dataType() != B->dataType())
+        throw datatype_exception::build("mmulMxM expects all data types to be the same", A->dataType(), B->dataType());
+
+    if (C != nullptr && A->dataType() != C->dataType())
+        throw datatype_exception::build("mmulMxM expects all data types to be the same", A->dataType(), C->dataType());
+
    if(A->rankOf() != 2)
        throw std::runtime_error("MmulHelper::mmulMxM: rank of A array is not equal 2 !");

@@ -212,7 +218,8 @@ NDArray* MmulHelper::mmulMxM(const NDArray* A, const NDArray* B, NDArray* C, con
        BlasHelper::getInstance()->dgemm()(blasOrder, transAblas, transBblas, M, N, K, (double) alpha, reinterpret_cast<double *>(pA->getBuffer()), lda, reinterpret_cast<double *>(pB->getBuffer()), ldb, (double) beta, reinterpret_cast<double *>(pC->getBuffer()), ldc);
    }
    else {
-       BUILD_TRIPLE_SELECTOR(aType, bType, cType, usualGemm, (cOrder, transA, transB, M, N, K, alpha, pA->getBuffer(), lda, pB->getBuffer(), ldb, beta, pC->getBuffer(), ldc), LIBND4J_TYPES, FLOAT_TYPES, FLOAT_TYPES);
+       BUILD_SINGLE_SELECTOR_THRICE(aType, usualGemm, (cOrder, transA, transB, M, N, K, alpha, pA->getBuffer(), lda, pB->getBuffer(), ldb, beta, pC->getBuffer(), ldc), NUMERIC_TYPES);
+       //BUILD_TRIPLE_SELECTOR(aType, bType, cType, usualGemm, (cOrder, transA, transB, M, N, K, alpha, pA->getBuffer(), lda, pB->getBuffer(), ldb, beta, pC->getBuffer(), ldc), LIBND4J_TYPES, FLOAT_TYPES, FLOAT_TYPES);
    }

    if(pC != C) {

@@ -230,6 +237,11 @@ NDArray* MmulHelper::mmulMxM(const NDArray* A, const NDArray* B, NDArray* C, con
 ////////////////////////////////////////////////////////////////////////////
 // MXN x N = M
 NDArray* MmulHelper::mmulMxV(const NDArray* A, const NDArray* X, nd4j::NDArray* Y, const double alpha, const double beta, const char outOrder) {
+    if (X->dataType() != A->dataType())
+        throw datatype_exception::build("mmulMxV expects all data types to be the same", A->dataType(), X->dataType());
+
+    if (Y != nullptr && X->dataType() != Y->dataType())
+        throw datatype_exception::build("mmulMxV expects all data types to be the same", A->dataType(), Y->dataType());
+
    int xLenDim, yLenDim(0);


@@ -279,7 +291,8 @@ NDArray* MmulHelper::mmulMxV(const NDArray* A, const NDArray* X, nd4j::NDArray*
        BlasHelper::getInstance()->sgemv()(blasOrder, CblasNoTrans, M, N, (float)alpha, (float*)pA->getBuffer(), lda, (float*)X->getBuffer(), incx, (float)beta, (float*)Y->getBuffer(), incy);
    }
    else {
-       BUILD_TRIPLE_SELECTOR(aType, xType, yType, usualGemv, (pA->ordering(), M, N, alpha, pA->getBuffer(), lda, X->getBuffer(), incx, beta, Y->getBuffer(), incy), LIBND4J_TYPES, FLOAT_TYPES, FLOAT_TYPES);
+       BUILD_SINGLE_SELECTOR_THRICE(aType, usualGemv, (pA->ordering(), M, N, alpha, pA->getBuffer(), lda, X->getBuffer(), incx, beta, Y->getBuffer(), incy), NUMERIC_TYPES);
+       //BUILD_TRIPLE_SELECTOR(aType, xType, yType, usualGemv, (pA->ordering(), M, N, alpha, pA->getBuffer(), lda, X->getBuffer(), incx, beta, Y->getBuffer(), incy), LIBND4J_TYPES, FLOAT_TYPES, FLOAT_TYPES);
    }

    if(pA != A)

@@ -291,6 +304,11 @@ NDArray* MmulHelper::mmulMxV(const NDArray* A, const NDArray* X, nd4j::NDArray*
 ////////////////////////////////////////////////////////////////////////////
 // (X * Y) = Z[0]
 NDArray* MmulHelper::dot(const NDArray* X, const NDArray* Y, nd4j::NDArray* Z, const double alpha, const double beta) {
+    if (X->dataType() != Y->dataType())
+        throw datatype_exception::build("Dot expects all data types to be the same", X->dataType(), Y->dataType());
+
+    if (Z != nullptr && X->dataType() != Z->dataType())
+        throw datatype_exception::build("Dot expects all data types to be the same", X->dataType(), Z->dataType());
+
    int xLenDim(0), yLenDim(0);


@@ -316,13 +334,14 @@ NDArray* MmulHelper::dot(const NDArray* X, const NDArray* Y, nd4j::NDArray* Z, c
    const auto yType = Y->dataType();
    const auto zType = Z->dataType();

-   BUILD_TRIPLE_SELECTOR(xType, yType, zType, usualDot, (length, alpha, X->getBuffer(), incx, Y->getBuffer(), incy, beta, Z->getBuffer()), LIBND4J_TYPES, FLOAT_TYPES, FLOAT_TYPES);
+   BUILD_SINGLE_SELECTOR_THRICE(xType, usualDot, (length, alpha, X->getBuffer(), incx, Y->getBuffer(), incy, beta, Z->getBuffer()), NUMERIC_TYPES);
+   //BUILD_TRIPLE_SELECTOR(xType, yType, zType, usualDot, (length, alpha, X->getBuffer(), incx, Y->getBuffer(), incy, beta, Z->getBuffer()), LIBND4J_TYPES, FLOAT_TYPES, FLOAT_TYPES);

    return Z;
 }

-BUILD_TRIPLE_TEMPLATE(template void usualGemm, (const char cOrder, const bool transA, const bool transB, const int M, const int N, const int K, const double alpha, const void* A, const int lda, const void* B, const int ldb, const double beta, void* C, const int ldc), LIBND4J_TYPES, FLOAT_TYPES, FLOAT_TYPES);
+//BUILD_TRIPLE_TEMPLATE(template void usualGemm, (const char cOrder, const bool transA, const bool transB, const int M, const int N, const int K, const double alpha, const void* A, const int lda, const void* B, const int ldb, const double beta, void* C, const int ldc), LIBND4J_TYPES, FLOAT_TYPES, FLOAT_TYPES);
-BUILD_TRIPLE_TEMPLATE(template void usualGemv, (const char aOrder, const int M, const int N, const double alpha, const void* A, const int lda, const void* B, const int incx, const double beta, void* C, const int incy), LIBND4J_TYPES, FLOAT_TYPES, FLOAT_TYPES);
+//BUILD_TRIPLE_TEMPLATE(template void usualGemv, (const char aOrder, const int M, const int N, const double alpha, const void* A, const int lda, const void* B, const int incx, const double beta, void* C, const int incy), LIBND4J_TYPES, FLOAT_TYPES, FLOAT_TYPES);
-BUILD_TRIPLE_TEMPLATE(template void usualDot, (const Nd4jLong length, const double alpha, const void* vX, const Nd4jLong incx, const void* vY, const Nd4jLong incy, const double beta, void* vZ), LIBND4J_TYPES, FLOAT_TYPES, FLOAT_TYPES);
+//BUILD_TRIPLE_TEMPLATE(template void usualDot, (const Nd4jLong length, const double alpha, const void* vX, const Nd4jLong incx, const void* vY, const Nd4jLong incy, const double beta, void* vZ), LIBND4J_TYPES, FLOAT_TYPES, FLOAT_TYPES);

 }
@@ -21,13 +21,41 @@
 #include "../cublasHelper.h"

 namespace nd4j {
-    namespace cublas {
-        void* handle() {
+    static void* handle_() {
        return nullptr;
    }

-        void destroyHandle(void* handle) {
-            //
+    static void destroyHandle_(void* handle) {
    }

+    CublasHelper::CublasHelper() {
+
+    }
+
+    CublasHelper::~CublasHelper() {
+
+    }
+
+    CublasHelper* CublasHelper::getInstance() {
+        if (!_INSTANCE)
+            _INSTANCE = new nd4j::CublasHelper();
+
+        return _INSTANCE;
+    }
+
+    void* CublasHelper::handle() {
+        return nullptr;
+    }
+
+    void* CublasHelper::solver() {
+        return nullptr;
+    }
+
+    void* CublasHelper::handle(int deviceId) {
+        return nullptr;
+    }
+
+
+    nd4j::CublasHelper* nd4j::CublasHelper::_INSTANCE = 0;
 }
@@ -21,12 +21,28 @@
 #ifndef DEV_TESTS_CUBLASHELPER_H
 #define DEV_TESTS_CUBLASHELPER_H

-namespace nd4j {
-    namespace cublas {
-        void* handle();
-
-        void destroyHandle(void* handle);
-    }
+#include <dll.h>
+#include <pointercast.h>
+#include <vector>
+
+namespace nd4j {
+    class CublasHelper {
+    private:
+        static CublasHelper *_INSTANCE;
+
+        std::vector<void*> _cache;
+        std::vector<void*> _solvers;
+
+        CublasHelper();
+        ~CublasHelper();
+    public:
+        static CublasHelper* getInstance();
+
+        void* solver();
+
+        void* handle();
+        void* handle(int deviceId);
+    };
 }

 #endif //DEV_TESTS_CUBLASHELPER_H
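The cache declared above is filled once per device (see the CUDA implementation further down); a short hedged usage sketch of fetching the handle cached for the caller's device, assuming a cuBLAS-enabled build of libnd4j:

    // Usage sketch only: per-device cuBLAS handle lookup.
    #include <helpers/cublasHelper.h>
    #include <cublas_v2.h>

    void gemmOnCurrentDevice() {
        // handle() resolves the calling thread's device and returns the cached handle for it
        void* raw = nd4j::CublasHelper::getInstance()->handle();
        auto handle = reinterpret_cast<cublasHandle_t*>(raw);

        // *handle can now be passed to cublasSgemm/cublasDgemm calls targeting this device
        (void) handle;
    }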
@@ -26,6 +26,7 @@
 #include <logger.h>
 #include <cuda_runtime.h>
 #include <cuda.h>
+#include <execution/AffinityManager.h>

 #define CONSTANT_LIMIT 49152


@@ -43,23 +44,11 @@ namespace nd4j {
    }

    int ConstantHelper::getCurrentDevice() {
-       int dev = 0;
-       auto res = cudaGetDevice(&dev);
-
-       if (res != 0)
-           throw cuda_exception::build("cudaGetDevice failed", res);
-
-       return dev;
+       return AffinityManager::currentDeviceId();
    }

    int ConstantHelper::getNumberOfDevices() {
-       int dev = 0;
-       auto res = cudaGetDeviceCount(&dev);
-
-       if (res != 0)
-           throw cuda_exception::build("cudaGetDeviceCount failed", res);
-
-       return dev;
+       return AffinityManager::numberOfDevices();
    }


@@ -250,8 +250,8 @@ NDArray* MmulHelper::mmulMxM(const NDArray* A, const NDArray* B, NDArray* C, dou
        blocksPerGrid.y = math::nd4j_ceil<double, int>(static_cast<double>(M) / threadsPerBlock.y); // rows
    }

-   BUILD_TRIPLE_SELECTOR(aType, bType, cType, usualGemm, (blocksPerGrid, threadsPerBlock, stream, transA, transB, M, N, K, alpha, pA->getSpecialBuffer(), lda, pB->getSpecialBuffer(), ldb, beta, pC->getSpecialBuffer(), ldc), NUMERIC_TYPES, NUMERIC_TYPES, FLOAT_TYPES);
-   // BUILD_SINGLE_SELECTOR_THRICE(aType, usualGemm, (blocksPerGrid, threadsPerBlock, stream, transA, transB, M, N, K, alpha, pA->getSpecialBuffer(), lda, pB->getSpecialBuffer(), ldb, beta, pC->getSpecialBuffer(), ldc), NUMERIC_TYPES)
+   //BUILD_TRIPLE_SELECTOR(aType, bType, cType, usualGemm, (blocksPerGrid, threadsPerBlock, stream, transA, transB, M, N, K, alpha, pA->getSpecialBuffer(), lda, pB->getSpecialBuffer(), ldb, beta, pC->getSpecialBuffer(), ldc), NUMERIC_TYPES, NUMERIC_TYPES, FLOAT_TYPES);
+   BUILD_SINGLE_SELECTOR_THRICE(aType, usualGemm, (blocksPerGrid, threadsPerBlock, stream, transA, transB, M, N, K, alpha, pA->getSpecialBuffer(), lda, pB->getSpecialBuffer(), ldb, beta, pC->getSpecialBuffer(), ldc), NUMERIC_TYPES)
 }

 if (status != CUBLAS_STATUS_SUCCESS) throw cuda_exception::build("MmulHelper::mmulMxM cuda failed !", status);

@@ -339,8 +339,8 @@ NDArray* MmulHelper::mmulMxV(const NDArray* A, const NDArray* X, nd4j::NDArray*
        threadsPerBlock.x = 512;
        blocksPerGrid.x = math::nd4j_ceil<double, int>(static_cast<double>(M) / threadsPerBlock.x); // rows
    }
-   BUILD_TRIPLE_SELECTOR(aType, xType, yType, usualGemv, (blocksPerGrid, threadsPerBlock, stream, transA, M, N, alpha, pA->getSpecialBuffer(), lda, X->getSpecialBuffer(), incx, beta, Y->getSpecialBuffer(), incy), NUMERIC_TYPES, NUMERIC_TYPES, FLOAT_TYPES);
-   // BUILD_SINGLE_SELECTOR_THRICE(xType, usualGemv, (blocksPerGrid, threadsPerBlock, stream, transA, M, N, alpha, pA->getSpecialBuffer(), lda, X->getSpecialBuffer(), incx, beta, Y->getSpecialBuffer(), incy), NUMERIC_TYPES)
+   //BUILD_TRIPLE_SELECTOR(aType, xType, yType, usualGemv, (blocksPerGrid, threadsPerBlock, stream, transA, M, N, alpha, pA->getSpecialBuffer(), lda, X->getSpecialBuffer(), incx, beta, Y->getSpecialBuffer(), incy), NUMERIC_TYPES, NUMERIC_TYPES, FLOAT_TYPES);
+   BUILD_SINGLE_SELECTOR_THRICE(xType, usualGemv, (blocksPerGrid, threadsPerBlock, stream, transA, M, N, alpha, pA->getSpecialBuffer(), lda, X->getSpecialBuffer(), incx, beta, Y->getSpecialBuffer(), incy), NUMERIC_TYPES)
 }

 if (status != CUBLAS_STATUS_SUCCESS) throw cuda_exception::build("MmulHelper::mmulMxV cuda failed !", status);

@@ -397,8 +397,8 @@ NDArray* MmulHelper::dot(const NDArray* X, const NDArray* Y, nd4j::NDArray* Z, c

 NDArray::prepareSpecialUse({Z}, {X, Y});

-BUILD_TRIPLE_SELECTOR(xType, yType, zType, usualDot, (blocksPerGrid, threadsPerBlock, stream, length, alpha, X->getSpecialBuffer(), incx, Y->getSpecialBuffer(), incy, beta, Z->getSpecialBuffer()), NUMERIC_TYPES, NUMERIC_TYPES, FLOAT_TYPES);
-// BUILD_SINGLE_SELECTOR_THRICE(xType, usualDot, (blocksPerGrid, threadsPerBlock, stream, length, alpha, X->getSpecialBuffer(), incx, Y->getSpecialBuffer(), incy, beta, Z->getSpecialBuffer()), NUMERIC_TYPES)
+//BUILD_TRIPLE_SELECTOR(xType, yType, zType, usualDot, (blocksPerGrid, threadsPerBlock, stream, length, alpha, X->getSpecialBuffer(), incx, Y->getSpecialBuffer(), incy, beta, Z->getSpecialBuffer()), NUMERIC_TYPES, NUMERIC_TYPES, FLOAT_TYPES);
+BUILD_SINGLE_SELECTOR_THRICE(xType, usualDot, (blocksPerGrid, threadsPerBlock, stream, length, alpha, X->getSpecialBuffer(), incx, Y->getSpecialBuffer(), incy, beta, Z->getSpecialBuffer()), NUMERIC_TYPES)

 auto cudaResult = cudaStreamSynchronize(*stream);
 if (cudaResult != 0) throw cuda_exception::build("MmulHelper::dot cuda failed !", cudaResult);

@@ -408,8 +408,8 @@ NDArray* MmulHelper::dot(const NDArray* X, const NDArray* Y, nd4j::NDArray* Z, c
 return Z;
 }

-BUILD_TRIPLE_TEMPLATE(template void usualGemm, (const dim3 &blocksPerGrid, const dim3 &threadsPerBlock, cudaStream_t *stream, const bool transA, const bool transB, const int M, const int N, const int K, const double alpha, const void* vA, const int lda, const void* vB, const int ldb, const double beta, void* vC, const int ldc), NUMERIC_TYPES, NUMERIC_TYPES, FLOAT_TYPES);
+//BUILD_TRIPLE_TEMPLATE(template void usualGemm, (const dim3 &blocksPerGrid, const dim3 &threadsPerBlock, cudaStream_t *stream, const bool transA, const bool transB, const int M, const int N, const int K, const double alpha, const void* vA, const int lda, const void* vB, const int ldb, const double beta, void* vC, const int ldc), NUMERIC_TYPES, NUMERIC_TYPES, FLOAT_TYPES);
-BUILD_TRIPLE_TEMPLATE(template void usualGemv, (const dim3 &blocksPerGrid, const dim3 &threadsPerBlock, cudaStream_t *stream, const bool transA, const int M, const int N, const double alpha, const void* vA, const int lda, const void* vB, const int incx, const double beta, void* vC, const int incy), NUMERIC_TYPES, NUMERIC_TYPES, FLOAT_TYPES);
+//BUILD_TRIPLE_TEMPLATE(template void usualGemv, (const dim3 &blocksPerGrid, const dim3 &threadsPerBlock, cudaStream_t *stream, const bool transA, const int M, const int N, const double alpha, const void* vA, const int lda, const void* vB, const int incx, const double beta, void* vC, const int incy), NUMERIC_TYPES, NUMERIC_TYPES, FLOAT_TYPES);
-BUILD_TRIPLE_TEMPLATE(template void usualDot, (const dim3 &blocksPerGrid, const dim3 &threadsPerBlock, cudaStream_t *stream, const Nd4jLong length, const double alpha, const void* vX, const Nd4jLong incx, const void* vY, const Nd4jLong incy, const double beta, void* vZ), NUMERIC_TYPES, NUMERIC_TYPES, FLOAT_TYPES);
+//BUILD_TRIPLE_TEMPLATE(template void usualDot, (const dim3 &blocksPerGrid, const dim3 &threadsPerBlock, cudaStream_t *stream, const Nd4jLong length, const double alpha, const void* vX, const Nd4jLong incx, const void* vY, const Nd4jLong incy, const double beta, void* vZ), NUMERIC_TYPES, NUMERIC_TYPES, FLOAT_TYPES);

 }
@@ -20,12 +20,15 @@


 #include <cublas_v2.h>
+#include <cusolverDn.h>
 #include "../cublasHelper.h"
 #include <exceptions/cuda_exception.h>
 #include <helpers/logger.h>
+#include <execution/AffinityManager.h>

 namespace nd4j {
-    void* cublas::handle() {
+    static void* handle_() {
        auto _handle = new cublasHandle_t();
        auto status = cublasCreate_v2(_handle); // initialize CUBLAS context
        if (status != CUBLAS_STATUS_SUCCESS)

@@ -34,7 +37,16 @@ namespace nd4j {
        return reinterpret_cast<void *>(_handle);
    }

-    void cublas::destroyHandle(void* handle) {
+    static void* solver_() {
+        auto cusolverH = new cusolverDnHandle_t();
+        auto status = cusolverDnCreate(cusolverH);
+        if (status != CUSOLVER_STATUS_SUCCESS)
+            throw cuda_exception::build("cuSolver handle creation failed !", status);
+
+        return cusolverH;
+    }
+
+    static void destroyHandle_(void* handle) {
        auto ch = reinterpret_cast<cublasHandle_t *>(handle);
        auto status = cublasDestroy_v2(*ch);
        if (status != CUBLAS_STATUS_SUCCESS)

@@ -42,4 +54,57 @@ namespace nd4j {

        delete ch;
    }

+    CublasHelper::CublasHelper() {
+        auto numDevices = AffinityManager::numberOfDevices();
+        auto currentDevice = AffinityManager::currentDeviceId();
+        _cache.resize(numDevices);
+        _solvers.resize(numDevices);
+        for (int e = 0; e < numDevices; e++) {
+            AffinityManager::setCurrentDevice(e);
+
+            _cache[e] = handle_();
+            _solvers[e] = solver_();
+        }
+
+        // don't forget to restore back original device
+        AffinityManager::setCurrentDevice(currentDevice);
+    }
+
+    CublasHelper::~CublasHelper() {
+        auto numDevices = AffinityManager::numberOfDevices();
+
+        for (int e = 0; e < numDevices; e++)
+            destroyHandle_(_cache[e]);
+    }
+
+    CublasHelper* CublasHelper::getInstance() {
+        if (!_INSTANCE)
+            _INSTANCE = new nd4j::CublasHelper();
+
+        return _INSTANCE;
+    }
+
+    void* CublasHelper::handle() {
+        auto deviceId = AffinityManager::currentDeviceId();
+        return handle(deviceId);
+    }
+
+    void* CublasHelper::solver() {
+        auto deviceId = AffinityManager::currentDeviceId();
+        if (deviceId < 0 || deviceId > _solvers.size())
+            throw cuda_exception::build("requested deviceId doesn't look valid", deviceId);
+
+        return _solvers[deviceId];
+    }
+
+    void* CublasHelper::handle(int deviceId) {
+        if (deviceId < 0 || deviceId > _cache.size())
+            throw cuda_exception::build("requested deviceId doesn't look valid", deviceId);
+
+        return _cache[deviceId];
+    }
+
+
+    nd4j::CublasHelper* nd4j::CublasHelper::_INSTANCE = 0;
 }
@@ -60,9 +60,18 @@ static __global__ void broadcastInverseSimple(
    functions::broadcast::Broadcast<X,Y,Z>::template transformInverseCuda<OpClass>(x,xShapeInfo,y,yShapeInfo,z,zShapeInfo,dimension,dimensionLength,tadOnlyShapeInfo,tadOffsets,tadOnlyShapeInfoZ,tadOffsetsZ);
 }


 namespace functions {
 namespace broadcast {

+        static Nd4jLong __device__ __noinline__ _getIndexOffset(Nd4jLong index, Nd4jLong *shapeInfo, Nd4jLong length) {
+            return shape::getIndexOffset(index, shapeInfo, length);
+        }
+
+        static Nd4jLong __device__ __noinline__ _length(Nd4jLong *shapeInfo) {
+            return shape::length(shapeInfo);
+        }
+
        template<typename X, typename Y, typename Z>
        template <typename OpClass>
        __host__ void Broadcast<X,Y,Z>::intermediateBroadcast(dim3 launchDims, cudaStream_t *stream, void *x, Nd4jLong *xShapeInfo, void *y, Nd4jLong *yShapeInfo, void *z, Nd4jLong *zShapeInfo, int *dimension, int dimensionLength, Nd4jLong *tadOnlyShapeInfo, Nd4jLong *tadOffsets, Nd4jLong *tadOnlyShapeInfoZ, Nd4jLong *tadOffsetsZ) {

@@ -120,9 +129,9 @@ namespace functions {

        if (threadIdx.x == 0) {

-           tadLength = shape::length(tadOnlyShapeInfo);
+           tadLength = _length(tadOnlyShapeInfo);
            tadEWS = shape::elementWiseStride(tadOnlyShapeInfo);
-           numTads = shape::length(yShapeInfo) / tadLength;
+           numTads = _length(yShapeInfo) / tadLength;
            xEWS = shape::elementWiseStride(xShapeInfo);
            zEWS = shape::elementWiseStride(tadOnlyShapeInfoZ);
        }

@@ -146,9 +155,9 @@ namespace functions {
        else {
            // it is expected that x and z tads and y array all have the same length
            for (Nd4jLong i = threadIdx.x; i < tadLength; i+= blockDim.x) {
-               auto xOffset = shape::getIndexOffset(i, xShapeInfo, tadLength);
-               auto yOffset = shape::getIndexOffset(i, tadOnlyShapeInfo, tadLength);
-               auto zOffset = shape::getIndexOffset(i, tadOnlyShapeInfoZ, tadLength);
+               auto xOffset = _getIndexOffset(i, xShapeInfo, tadLength);
+               auto yOffset = _getIndexOffset(i, tadOnlyShapeInfo, tadLength);
+               auto zOffset = _getIndexOffset(i, tadOnlyShapeInfoZ, tadLength);
                rZ[zOffset] = OpType::op(x[xOffset], rY[yOffset]);
            }
        }

@@ -186,9 +195,9 @@ namespace functions {

        if (threadIdx.x == 0) {

-           tadLength = shape::length(tadOnlyShapeInfo);
+           tadLength = _length(tadOnlyShapeInfo);
            tadEWS = shape::elementWiseStride(tadOnlyShapeInfo);
-           numTads = shape::length(xShapeInfo) / tadLength;
+           numTads = _length(xShapeInfo) / tadLength;
            yEWS = shape::elementWiseStride(yShapeInfo);
            zEWS = shape::elementWiseStride(tadOnlyShapeInfoZ);
        }

@@ -212,14 +221,15 @@ namespace functions {
            // it is expected that x and z tads and y array all have the same length
            for (Nd4jLong i = threadIdx.x; i < tadLength; i+= blockDim.x) {

-               auto xOffset = shape::getIndexOffset(i, tadOnlyShapeInfo, tadLength);
-               auto yOffset = shape::getIndexOffset(i, yShapeInfo, tadLength);
-               auto zOffset = shape::getIndexOffset(i, tadOnlyShapeInfoZ, tadLength);
+               auto xOffset = _getIndexOffset(i, tadOnlyShapeInfo, tadLength);
+               auto yOffset = _getIndexOffset(i, yShapeInfo, tadLength);
+               auto zOffset = _getIndexOffset(i, tadOnlyShapeInfoZ, tadLength);
                rZ[zOffset] = OpType::op(rX[xOffset], y[yOffset]);
            }
        }
    }
 }

 /*
 BUILD_PAIRWISE_TEMPLATE(template class ND4J_EXPORT Broadcast, , PAIRWISE_TYPES_0);
 BUILD_PAIRWISE_TEMPLATE(template class ND4J_EXPORT Broadcast, , PAIRWISE_TYPES_1);
@@ -0,0 +1,115 @@
+/*******************************************************************************
+ * Copyright (c) 2015-2018 Skymind, Inc.
+ *
+ * This program and the accompanying materials are made available under the
+ * terms of the Apache License, Version 2.0 which is available at
+ * https://www.apache.org/licenses/LICENSE-2.0.
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ ******************************************************************************/
+
+//
+// @author raver119@gmail.com
+//
+
+#include <op_boilerplate.h>
+#include <loops/broadcasting.h>
+#include <loops/legacy_ops.h>
+#include <types/types.h>
+#include <Environment.h>
+#include <cuda.h>
+#include <cuda_runtime.h>
+#include <string>
+#include <stdexcept>
+#include <StringUtils.h>
+#include <specials_cuda.h>
+
+namespace functions {
+    namespace broadcast {
+
+        template <typename X, typename Y, typename Z>
+        void Broadcast<X, Y, Z>::execInverse(int opNum, void *x, Nd4jLong *xShapeInfo, void *y, Nd4jLong *yShapeInfo, void *result, Nd4jLong *resultShapeInfo, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffset, Nd4jLong *tadShapeInfoZ, Nd4jLong *tadOffsetZ) {
+            //
+        }
+
+        template <typename X, typename Y, typename Z>
+        void Broadcast<X, Y, Z>::exec(int opNum, void *x, Nd4jLong *xShapeInfo, void *y, Nd4jLong *yShapeInfo, void *result, Nd4jLong *resultShapeInfo, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffset, Nd4jLong *tadShapeInfoZ, Nd4jLong *tadOffsetZ) {
+
+        }
+
+        /**
+         * CPU execution
+         * @param x the input
+         * @param xShapeInfo the x shape information
+         * @param y the y data
+         * @param yShapeInfo the y shape information
+         * @param result the result
+         * @param resultShapeInfo the result shape information
+         * @param dimension the dimension to broadcast along long
+         * @param dimensionLength the length of the dimension buffer
+         */
+        template <typename X, typename Y, typename Z>
+        template<typename OpType>
+        void Broadcast<X, Y, Z>::exec(void *x, Nd4jLong *xShapeInfo, void *y, Nd4jLong *yShapeInfo, void *result, Nd4jLong *resultShapeInfo, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffset, Nd4jLong *tadShapeInfoZ, Nd4jLong *tadOffsetZ) {
+            //
+        }
+
+        template <typename X, typename Y, typename Z>
+        template<typename OpType>
+        void Broadcast<X, Y, Z>::execInverse(void *x, Nd4jLong *xShapeInfo, void *y, Nd4jLong *yShapeInfo, void *result, Nd4jLong *resultShapeInfo, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffset, Nd4jLong *tadShapeInfoZ, Nd4jLong *tadOffsetZ) {
+
+        }
+    }
+}
@@ -224,6 +224,77 @@ namespace functions {
         }
     }

+    template<typename X, typename Y>
+    void BroadcastBool<X,Y>::exec(int opNum, void *x, Nd4jLong *xShapeInfo, void *y, Nd4jLong *yShapeInfo, void *result, Nd4jLong *resultShapeInfo, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffset, Nd4jLong *tadShapeInfoZ, Nd4jLong *tadOffsetZ) {
+
+    }
+
+    template<typename X, typename Y>
+    void BroadcastBool<X,Y>::execInverse(int opNum, void *x, Nd4jLong *xShapeInfo, void *y, Nd4jLong *yShapeInfo, void *result, Nd4jLong *resultShapeInfo, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffset, Nd4jLong *tadShapeInfoZ, Nd4jLong *tadOffsetZ) {
+
+    }
+
+    template<typename X, typename Y>
+    template<typename OpType>
+    void BroadcastBool<X,Y>::exec(void *x, Nd4jLong *xShapeInfo, void *y, Nd4jLong *yShapeInfo, void *result, Nd4jLong *resultShapeInfo, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffset, Nd4jLong *tadShapeInfoZ, Nd4jLong *tadOffsetZ) {
+
+    }
+
+    template<typename X, typename Y>
+    template<typename OpType>
+    void BroadcastBool<X,Y>::execInverse(void *x, Nd4jLong *xShapeInfo, void *y, Nd4jLong *yShapeInfo, void *result, Nd4jLong *resultShapeInfo, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffset, Nd4jLong *tadShapeInfoZ, Nd4jLong *tadOffsetZ) {
+
+    }
+
     BUILD_DOUBLE_TEMPLATE(template class ND4J_EXPORT BroadcastBool, , LIBND4J_TYPES, BOOL_TYPES);
 }
 }
@@ -361,6 +361,32 @@ namespace functions {
     }
 }

+    template <typename T>
+    Nd4jLong IndexReduce<T>::execScalar(const int opNum, void *x, Nd4jLong *xShapeInfo, void *extraParams) {
+        return 0;
+    }
+
+    template <typename T>
+    void IndexReduce<T>::exec(const int opNum, void *x, Nd4jLong *xShapeInfo, void *extraParams, Nd4jLong *result, Nd4jLong *resultShapeInfoBuffer, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffset) {
+
+    }
+
+    template <typename T>
+    template<typename OpType>
+    Nd4jLong IndexReduce<T>::execScalar(void *x, Nd4jLong *xShapeInfo, void *extraParams) {
+        return 0;
+    }
+
+    template <typename T>
+    template<typename OpType>
+    _CUDA_H void IndexReduce<T>::exec(void *x, Nd4jLong *xShapeInfo, void *extraParams, Nd4jLong *result, Nd4jLong *resultShapeInfoBuffer, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffset) {
+
+    }
+
     BUILD_SINGLE_TEMPLATE(template class ND4J_EXPORT IndexReduce, , LIBND4J_TYPES);
 }
 }
@@ -0,0 +1,79 @@
+/*******************************************************************************
+ * Copyright (c) 2015-2018 Skymind, Inc.
+ *
+ * This program and the accompanying materials are made available under the
+ * terms of the Apache License, Version 2.0 which is available at
+ * https://www.apache.org/licenses/LICENSE-2.0.
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ ******************************************************************************/
+
+//
+// @author raver119@gmail.com
+//
+
+#include "../pairwise_transform.h"
+
+namespace functions {
+    namespace pairwise_transforms {
+
+        template <typename X, typename Y, typename Z>
+        void PairWiseTransform<X, Y, Z>::exec(const int opNum, void *x, Nd4jLong *xShapeInfo, void *y, Nd4jLong *yShapeInfo, void *z, Nd4jLong *zShapeInfo, void *extraParams) {
+
+        }
+
+        template <typename X, typename Y, typename Z>
+        void PairWiseTransform<X, Y, Z>::exec(const int opNum, void *x, Nd4jLong xStride, void *y, Nd4jLong yStride, void *z, Nd4jLong resultStride, void *extraParams, Nd4jLong len) {
+
+        }
+
+        template <typename X, typename Y, typename Z>
+        template<typename OpType>
+        void PairWiseTransform<X, Y, Z>::exec(void *vx, Nd4jLong* xShapeInfo, void *vy, Nd4jLong* yShapeInfo, void *vresult, Nd4jLong* zShapeInfo, void *vextraParams) {
+
+        }
+
+        template <typename X, typename Y, typename Z>
+        template<typename OpType>
+        void PairWiseTransform<X, Y, Z>::exec(void *vx, Nd4jLong xStride, void *vy, Nd4jLong yStride, void *vresult, Nd4jLong resultStride, void *vextraParams, const Nd4jLong len) {
+
+        }
+    }
+}
@@ -111,6 +111,63 @@ void PairWiseBoolTransform<X,Y>::executeCudaShaped(dim3& launchDims, cudaStream_
     DISPATCH_BY_OPNUM_TT(intermediateShaped, PARAMS(launchDims, stream, vx, xShapeInfo, vy, yShapeInfo, vz, zShapeInfo, vextraParams), PAIRWISE_BOOL_OPS);
 }

+    template<typename X, typename Y>
+    void PairWiseBoolTransform<X,Y>::exec(const int opNum, void *dx, Nd4jLong *xShapeBuffer, void *y, Nd4jLong *yShapeBuffer, void *result, Nd4jLong *resultShapeBuffer, void *extraParams) {
+
+    }
+
+    template<typename X, typename Y>
+    void PairWiseBoolTransform<X,Y>::exec(const int opNum, void *dx, Nd4jLong xStride, void *y, Nd4jLong yStride, void *result, Nd4jLong resultStride, void *extraParams, Nd4jLong n) {
+
+    }
+
+    template<typename X, typename Y>
+    template<typename OpType>
+    void PairWiseBoolTransform<X,Y>::exec(void *vx, Nd4jLong* xShapeBuffer, void *vy, Nd4jLong* yShapeBuffer, void *vresult, Nd4jLong* resultShapeBuffer, void *vextraParams) {
+
+    }
+
+    template<typename X, typename Y>
+    template<typename OpType>
+    void PairWiseBoolTransform<X,Y>::exec(void *vx, Nd4jLong xStride, void *vy, Nd4jLong yStride, void *vresult, Nd4jLong resultStride, void *vextraParams, const Nd4jLong n) {
+
+    }
+
     BUILD_DOUBLE_TEMPLATE(template class ND4J_EXPORT PairWiseBoolTransform, , LIBND4J_TYPES, BOOL_TYPES);
 }
 }
@@ -442,6 +442,39 @@ namespace functions {
     DEBUG_KERNEL(stream, opNum);
 }

+    template<typename T>
+    template<typename OpClass>
+    void RandomFunction<T>::execTransform(Nd4jPointer state, void *x, Nd4jLong *xShapeBuffer, void *y, Nd4jLong *yShapeBuffer, void *z, Nd4jLong *zShapeBuffer, void *extraArguments) {
+
+    }
+
+    template<typename T>
+    template<typename OpClass>
+    void RandomFunction<T>::execTransform(Nd4jPointer state, void *x, Nd4jLong *xShapeBuffer, void *z, Nd4jLong *zShapeBuffer, void *extraArguments) {
+
+    }
+
+    template<typename T>
+    template<typename OpClass>
+    void RandomFunction<T>::execTransform(Nd4jPointer state, void *z, Nd4jLong *zShapeBuffer, void *extraArguments) {
+
+    }
+
+    template<typename T>
+    void RandomFunction<T>::execTransform(int opNum, Nd4jPointer state, void *x, Nd4jLong *xShapeBuffer, void *z, Nd4jLong *zShapeBuffer, void *extraArguments) {
+
+    }
+
+    template<typename T>
+    void RandomFunction<T>::execTransform(int opNum, Nd4jPointer state, void *x, Nd4jLong *xShapeBuffer, void *y, Nd4jLong *yShapeBuffer, void *z, Nd4jLong *zShapeBuffer, void *extraArguments) {
+
+    }
+
+    template<typename T>
+    void RandomFunction<T>::execTransform(int opNum, Nd4jPointer state, void *z, Nd4jLong *zShapeBuffer, void *extraArguments) {
+
+    }
+
     BUILD_SINGLE_TEMPLATE(template class ND4J_EXPORT RandomFunction, , FLOAT_TYPES);
 }
 }
@@ -0,0 +1,82 @@
+/*******************************************************************************
+ * Copyright (c) 2015-2018 Skymind, Inc.
+ *
+ * This program and the accompanying materials are made available under the
+ * terms of the Apache License, Version 2.0 which is available at
+ * https://www.apache.org/licenses/LICENSE-2.0.
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ ******************************************************************************/
+
+//
+// @author raver119@gmail.com
+//
+
+#include <op_boilerplate.h>
+#include <loops/reduce3.h>
+#include <loops/legacy_ops.h>
+#include <types/types.h>
+#include <specials_cuda.h>
+
+namespace functions {
+    namespace reduce3 {
+
+        template <typename X, typename Y>
+        template<typename OpType>
+        void Reduce3<X,Y>::execScalar(void *vx, Nd4jLong *xShapeInfo, void *vextraParams, void *vy, Nd4jLong *yShapeInfo, void *vz, Nd4jLong *zShapeInfo) {
+
+        }
+
+        template <typename X, typename Y>
+        void Reduce3<X,Y>::execScalar(const int opNum, void *x, Nd4jLong *xShapeInfo, void *extraParamsVals, void *y, Nd4jLong *yShapeInfo, void *z, Nd4jLong *zShapeInfo) {
+
+        }
+
+        template <typename X, typename Y>
+        template<typename OpType>
+        void Reduce3<X,Y>::exec(void *vx, Nd4jLong *xShapeInfo, void *vextraParams, void *vy, Nd4jLong *yShapeInfo, void *vz, Nd4jLong *zShapeInfo, int *dimension, int dimensionLength) {
+
+        }
+
+        template <typename X, typename Y>
+        template<typename OpType>
+        void Reduce3<X,Y>::exec(void *vx, Nd4jLong *xShapeInfo, void *vextraParams, void *vy, Nd4jLong *yShapeInfo, void *vz, Nd4jLong *zShapeInfo, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets) {
+
+        }
+
+        template <typename X, typename Y>
+        template<typename OpType>
+        void Reduce3<X,Y>::execAll(void *vx, Nd4jLong *xShapeInfo, void *vextraParams, void *vy, Nd4jLong *yShapeInfo, void *vz, Nd4jLong *zShapeInfo, int *dimension, int dimensionLength, Nd4jLong *xTadShapeInfo, Nd4jLong *xOffsets, Nd4jLong *yTadShapeInfo, Nd4jLong *yOffsets) {
+
+        }
+
+        template <typename X, typename Y>
+        void Reduce3<X,Y>::exec(const int opNum, void *vx, Nd4jLong *xShapeInfo, void *extraParamsVals, void *vy, Nd4jLong *yShapeInfo, void *vz, Nd4jLong *zShapeInfo, int *dimension, int dimensionLength) {
+
+        }
+
+        template <typename X, typename Y>
+        void Reduce3<X,Y>::exec(const int opNum, void *vx, Nd4jLong *xShapeInfo, void *extraParamsVals, void *vy, Nd4jLong *yShapeInfo, void *vz, Nd4jLong *zShapeInfo, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets) {
+
+        }
+
+        template <typename X, typename Y>
+        void Reduce3<X,Y>::execAll(const int opNum, void *vx, Nd4jLong *xShapeInfo, void *extraParamsVals, void *vy, Nd4jLong *yShapeInfo, void *vz, Nd4jLong *zShapeInfo, int *dimension, int dimensionLength, Nd4jLong *xTadShapeInfo, Nd4jLong *xOffsets, Nd4jLong *yTadShapeInfo, Nd4jLong *yOffsets) {
+
+        }
+    }
+}
@@ -14,21 +14,19 @@
  * SPDX-License-Identifier: Apache-2.0
  ******************************************************************************/

-package org.nd4j.jita.allocator.context;
+//
+// @author raver119@gmail.com
+//

-import lombok.AllArgsConstructor;
-import lombok.Data;
-import lombok.NoArgsConstructor;
+#include "loops/scalar.h"
+#include <cuda.h>
+#include <cuda_runtime.h>
+#include <op_boilerplate.h>
+#include <helpers/TAD.h>
+#include <types/types.h>

-/**
- * This is simple class-independant storage for device contexts.
- *
- * TODO: Something better then typecast required here
- * @author raver119@gmail.com
- */
-@Data
-@NoArgsConstructor
-@AllArgsConstructor
-public class ExternalContext {
-    private Object context;
-}
+namespace functions {
+    namespace scalar {
+
+    }
+}
@@ -231,6 +231,41 @@ void ScalarBoolTransform<X,Y>::executeCudaAlongDimension(dim3& launchDims, cudaS
 }

 BUILD_DOUBLE_TEMPLATE(template class ND4J_EXPORT ScalarBoolTransform, , LIBND4J_TYPES, BOOL_TYPES);

+    template<typename X, typename Y>
+    template <typename OpType>
+    void ScalarBoolTransform<X,Y>::transform(void *x, Nd4jLong *xShapeInfo, void *extraParams, void *z, Nd4jLong *zShapeInfo, void *scalars, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets, Nd4jLong *tadShapeInfoZ, Nd4jLong *tadOffsetsZ) {
+
+    }
+
+    template<typename X, typename Y>
+    void ScalarBoolTransform<X,Y>::transform(int opNum, void *x, Nd4jLong *xShapeInfo, void *extraParams, void *z, Nd4jLong *zShapeInfo, void *scalars, int *dimension, int dimensionLength, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets, Nd4jLong *tadShapeInfoZ, Nd4jLong *tadOffsetsZ) {
+
+    }
+
+    template<typename X, typename Y>
+    void ScalarBoolTransform<X,Y>::transform(const int opNum, void *x, Nd4jLong *xShapeInfo, void *result, Nd4jLong *resultShapeInfo, void *scalar, void *extraParams) {
+
+    }
+
+    template<typename X, typename Y>
+    void ScalarBoolTransform<X,Y>::transform(const int opNum, void *x, Nd4jLong xStride, void *result, Nd4jLong resultStride, void *scalar, void *extraParams, const Nd4jLong n) {
+
+    }
+
+    template<typename X, typename Y>
+    template<typename OpType>
+    void ScalarBoolTransform<X,Y>::transform(void *x, Nd4jLong *xShapeInfo, void *result, Nd4jLong *resultShapeInfo, void *scalar, void *extraParams) {
+
+    }
+
+    template<typename X, typename Y>
+    template<typename OpType>
+    void ScalarBoolTransform<X,Y>::transform(void *x, Nd4jLong xStride, void *result, Nd4jLong resultStride, void *scalar, void *extraParams, const Nd4jLong n) {
+
+    }
 }
 }
@@ -21,84 +21,6 @@

 #include <ops/specials_cuda.h>

-//////////////////////////////////////////////////////////////////////////
-template <typename X, typename Y>
-__global__ void bitonicArbitraryStepKernelValue(void *vx, Nd4jLong *xShapeInfo, void *vy, Nd4jLong *yShapeInfo, int window, int length, int reverse, bool descending) {
-    auto x = static_cast<X*>(vx);
-    auto y = static_cast<Y*>(vy);
-
-    int tid = threadIdx.x + blockDim.x * blockIdx.x;
-    int half = window>>1;
-
-    __shared__ Nd4jLong xLength;
-    if (threadIdx.x == 0) {
-        xLength = shape::length(xShapeInfo);
-    }
-    __syncthreads();
-
-    //for (int i = 0; i < length; i+= window)
-    /*
-        if window == 4;
-        iterations will be: 0; 4; 8; 12; 16; 20
-        if gridDim = 3;
-        on first iteration we'll have: 0; 4; 8;
-        on second iteration we'll have: 0 + (3 * 4) = 12; 4 + (3 * 4) = 16; 8 + (3 * 4) = 20
-    */
-    int firstPosition;
-    int firstStep;
-    int secondPosition;
-    int secondStep;
-
-    int WARP_SIZE = 32;
-    int numWarps = (gridDim.x * blockDim.x) / 32;
-    int warpId = tid / WARP_SIZE;
-    int warpIdx = tid % WARP_SIZE;
-
-    if (half >= 128) {
-        firstPosition = blockIdx.x * window;
-        firstStep = gridDim.x * window;
-
-        secondPosition = threadIdx.x;
-        secondStep = blockDim.x;
-    } else if (half >= 32) {
-        firstPosition = warpId * window;
-        firstStep = numWarps * window;
-
-        secondPosition = warpIdx;
-        secondStep = WARP_SIZE;
-    } else {
-        firstPosition = tid * window;
-        firstStep = blockDim.x * gridDim.x * window;
-
-        secondPosition = 0;
-        secondStep = 1;
-    }
-
-    for (int i = firstPosition; i < length; i += firstStep) {
-        for (int j = secondPosition; j < half; j += secondStep) {
-            int it = (reverse) ? i + j + half : i + window - j - 1;
-            int ij = i+j;
-            if (it < length && ij < length ) {
-                int posIT = shape::getIndexOffset(it, yShapeInfo, xLength);
-                int posIJ = shape::getIndexOffset(ij, yShapeInfo, xLength);
-
-                Y v0 = y[posIJ];
-                Y v1 = y[posIT];
-
-                if(!descending == (v0 > v1)) {
-                    y[posIJ] = v1;
-                    y[posIT] = v0;
-
-                    X xtemp = x[posIJ];
-                    x[posIJ] = x[posIT];
-                    x[posIT] = xtemp;
-                }
-            }
-        }
-    }
-}
-
 //////////////////////////////////////////////////////////////////////////
 template <typename X, typename Y>
 __global__ void bitonicArbitraryStepKernelKey(void *vx, Nd4jLong *xShapeInfo, void *vy, Nd4jLong *yShapeInfo, int window, int length, int reverse, bool descending) {

@@ -264,11 +186,5 @@ __host__ void bitonicArbitraryStepGenericKey(dim3 &launchDims, cudaStream_t *str
     bitonicArbitraryStepKernelKey<X,Y><<<launchDims.x, launchDims.y, launchDims.z, *stream>>>(vx, xShapeInfo, vy, yShapeInfo, window, length, reverse, descending);
 }

-template <typename X, typename Y>
-__host__ void bitonicArbitraryStepGenericValue(dim3 &launchDims, cudaStream_t *stream, void *vx, Nd4jLong *xShapeInfo, void *vy, Nd4jLong *yShapeInfo, int window, int length, int reverse, bool descending) {
-    bitonicArbitraryStepKernelValue<X,Y><<<launchDims.x, launchDims.y, launchDims.z, *stream>>>(vx, xShapeInfo, vy, yShapeInfo, window, length, reverse, descending);
-}
-
 BUILD_SINGLE_TEMPLATE(template void ND4J_EXPORT bitonicArbitraryStepGeneric, (dim3 &launchDims, cudaStream_t *stream, void *vx, Nd4jLong *xShapeInfo, int window, int length, int reverse, bool descending), LIBND4J_TYPES);
 BUILD_DOUBLE_TEMPLATE(template void ND4J_EXPORT bitonicArbitraryStepGenericKey, (dim3 &launchDims, cudaStream_t *stream, void *vx, Nd4jLong *xShapeInfo, void *vy, Nd4jLong *yShapeInfo, int window, int length, int reverse, bool descending), LIBND4J_TYPES, LIBND4J_TYPES);
-BUILD_DOUBLE_TEMPLATE(template void ND4J_EXPORT bitonicArbitraryStepGenericValue, (dim3 &launchDims, cudaStream_t *stream, void *vx, Nd4jLong *xShapeInfo, void *vy, Nd4jLong *yShapeInfo, int window, int length, int reverse, bool descending), LIBND4J_TYPES, LIBND4J_TYPES);
@@ -21,60 +21,6 @@

 #include <ops/specials_cuda.h>

-//////////////////////////////////////////////////////////////////////////
-template <typename X, typename Y>
-__global__ void bitonicSortStepKernelValue(void *vx, Nd4jLong *xShapeInfo, void *vy, Nd4jLong *yShapeInfo, int j, int k, int length, bool descending) {
-
-    auto x = static_cast<X*>(vx);
-    auto y = static_cast<Y*>(vy);
-
-    unsigned int i, ixj; /* Sorting partners: i and ixj */
-    i = threadIdx.x + blockDim.x * blockIdx.x;
-
-    __shared__ Nd4jLong xLength;
-    if (threadIdx.x == 0)
-        xLength = shape::length(xShapeInfo);
-
-    __syncthreads();
-
-    if (i >= length)
-        return;
-
-    ixj = i^j;
-
-    /* The threads with the lowest ids sort the array. */
-    if ((ixj)>i) {
-        int posI = shape::getIndexOffset(i, yShapeInfo, xLength);
-        int posIXJ = shape::getIndexOffset(ixj, yShapeInfo, xLength);
-
-        if ((i&k)==0) {
-            /* Sort ascending */
-            if (!descending == (y[posI]>y[posIXJ])) {
-                /* exchange(i,ixj); */
-                X temp = x[posI];
-                x[posI] = x[posIXJ];
-                x[posIXJ] = temp;
-
-                Y ytemp = y[posI];
-                y[posI] = y[posIXJ];
-                y[posIXJ] = ytemp;
-            }
-        } else if ((i&k)!=0) {
-            /* Sort descending */
-            if (!descending == (y[posI]<y[posIXJ])) {
-                /* exchange(i,ixj); */
-                X temp = x[posI];
-                x[posI] = x[posIXJ];
-                x[posIXJ] = temp;
-
-                Y ytemp = y[posI];
-                y[posI] = y[posIXJ];
-                y[posIXJ] = ytemp;
-            }
-        }
-    }
-}
-
 //////////////////////////////////////////////////////////////////////////
 template <typename X, typename Y>

@@ -189,13 +135,6 @@ __host__ void bitonicSortStepGenericKey(dim3 &launchDims, cudaStream_t *stream,
     bitonicSortStepKernelKey<X,Y><<<launchDims.x, launchDims.y, launchDims.z, *stream>>>(vx, xShapeInfo, vy, yShapeInfo, j, k, length, descending);
 }

-//////////////////////////////////////////////////////////////////////////
-template <typename X, typename Y>
-__host__ void bitonicSortStepGenericValue(dim3 &launchDims, cudaStream_t *stream, void *vx, Nd4jLong *xShapeInfo, void *vy, Nd4jLong *yShapeInfo, int j, int k, int length, bool descending) {
-    bitonicSortStepKernelValue<X,Y><<<launchDims.x, launchDims.y, launchDims.z, *stream>>>(vx, xShapeInfo, vy, yShapeInfo, j, k, length, descending);
-}
-
 BUILD_SINGLE_TEMPLATE(template void ND4J_EXPORT bitonicSortStepGeneric, (dim3 &launchDims, cudaStream_t *stream, void *vx, Nd4jLong *xShapeInfo, int j, int k, int length, bool descending), LIBND4J_TYPES);
 BUILD_DOUBLE_TEMPLATE(template void ND4J_EXPORT bitonicSortStepGenericKey, (dim3 &launchDims, cudaStream_t *stream, void *vx, Nd4jLong *xShapeInfo, void *vy, Nd4jLong *yShapeInfo, int j, int k, int length, bool descending), LIBND4J_TYPES, LIBND4J_TYPES);
-BUILD_DOUBLE_TEMPLATE(template void ND4J_EXPORT bitonicSortStepGenericValue, (dim3 &launchDims, cudaStream_t *stream, void *vx, Nd4jLong *xShapeInfo, void *vy, Nd4jLong *yShapeInfo, int j, int k, int length, bool descending), LIBND4J_TYPES, LIBND4J_TYPES);
@@ -62,9 +62,9 @@ namespace nd4j {
         }
     }
 }
-BUILD_DOUBLE_TEMPLATE(template __global__ void repeatKernelDouble, (void const* inputBuffer, void* outputBuffer,
+BUILD_SINGLE_TEMPLATE_TWICE(template __global__ void repeatKernelDouble, (void const* inputBuffer, void* outputBuffer,
                       Nd4jLong numTads, Nd4jLong inputLength, Nd4jLong* tadOnlyInputShapeInfo, Nd4jLong *tadInputOffsets,
-                      Nd4jLong* tadOnlyOutputShapeInfo, Nd4jLong *tadOutputOffsets), LIBND4J_TYPES, LIBND4J_TYPES);
+                      Nd4jLong* tadOnlyOutputShapeInfo, Nd4jLong *tadOutputOffsets), LIBND4J_TYPES);

 template <typename T>
 void repeatKernelH(void const* inputBuffer, void* outputBuffer, Nd4jLong numTads, Nd4jLong inputLength, Nd4jLong outputLength,

@@ -88,10 +88,10 @@ namespace nd4j {
     dim3 launchDims(256, 512, 8192);
     repeatKernelDouble<X,Y><<<launchDims.x, launchDims.y, launchDims.z, stream>>>(inputBuffer, outputBuffer, numTads, inputLength, tadOnlyInputShapeInfo, tadInputOffsets, tadOnlyOutputShapeInfo, tadOutputOffsets);
 }
-BUILD_DOUBLE_TEMPLATE(template void repeatKernelHH, (void const* inputBuffer, void* outputBuffer, Nd4jLong numTads, Nd4jLong inputLength,
+BUILD_SINGLE_TEMPLATE_TWICE(template void repeatKernelHH, (void const* inputBuffer, void* outputBuffer, Nd4jLong numTads, Nd4jLong inputLength,
                       Nd4jLong* tadOnlyInputShapeInfo, Nd4jLong *tadInputOffsets,
                       Nd4jLong* tadOnlyOutputShapeInfo, Nd4jLong *tadOutputOffsets,
-                      cudaStream_t stream), LIBND4J_TYPES, LIBND4J_TYPES);
+                      cudaStream_t stream), LIBND4J_TYPES);

 }
@@ -21,6 +21,17 @@
 #include <loops/special_kernels.h>

 namespace nd4j {
+    static Nd4jLong __device__ __noinline__ _getIndexOffset(Nd4jLong index, Nd4jLong *shapeInfo, Nd4jLong length) {
+        return shape::getIndexOffset(index, shapeInfo, length);
+    }
+
+    static Nd4jLong __device__ __noinline__ _subArrayOffset(Nd4jLong index, Nd4jLong *shapeInfoA, Nd4jLong *shapeInfoB) {
+        return shape::subArrayOffset(index, shapeInfoA, shapeInfoB);
+    }
+
+    static Nd4jLong __device__ __noinline__ _length(Nd4jLong *shapeInfo) {
+        return shape::length(shapeInfo);
+    }
+
 ////////////////////////////////////////////////////////////////////////
 template<typename T>

@@ -34,31 +45,20 @@ namespace nd4j {
     //const auto resultLength = shape::length(outputShape);
     if (shape::order(outputShape) == 'c') {           // ews == 1 always here
         for (int i = tid; i < resultLength; i += totalThreads) {
-            auto yOffset = shape::subArrayOffset(i, outputShape, inputShape);
+            auto yOffset = _subArrayOffset(i, outputShape, inputShape);
             *(reinterpret_cast<T *>(outputBuffer) + i) = *(reinterpret_cast<T const *>(inputBuffer) + yOffset);
         }
-        // for(Nd4jLong i=0; i<resultLen; ++i) {
-        //     auto yOffset = shape::subArrayOffset(newShapeInfo, _shapeInfo, i);
-        //     BUILD_SINGLE_SELECTOR(xType, this->template templatedAssign, (newBuff, i, this->_buffer, yOffset), LIBND4J_TYPES);
-        //
-        // }
     } else {
-        //
-        //auto inputLength = shape::lenght(inputShape);
         for (int i = tid; i < resultLength; i += totalThreads) {
-            auto xOffset = shape::getIndexOffset(i, outputShape, resultLength);
+            auto xOffset = _getIndexOffset(i, outputShape, resultLength);
-            auto yOffset = shape::subArrayOffset(i, outputShape, inputShape);
+            auto yOffset = _subArrayOffset(i, outputShape, inputShape);
-            *(reinterpret_cast<T *>(outputBuffer) + xOffset) = *(reinterpret_cast<T const *>(inputBuffer) +
-                    yOffset);
-            // BUILD_SINGLE_SELECTOR(xType, this->template templatedAssign, (newBuff, xOffset, this->_buffer, yOffset), LIBND4J_TYPES);
+            *(reinterpret_cast<T *>(outputBuffer) + xOffset) = *(reinterpret_cast<T const *>(inputBuffer) + yOffset);
         }
     }
 }

-BUILD_SINGLE_TEMPLATE(template __global__ void tileKernel,
-                      (void const* inputBuffer, Nd4jLong* inputShape, void* outputBuffer, Nd4jLong* outputShape, Nd4jLong resultLength),
-                      LIBND4J_TYPES);
+BUILD_SINGLE_TEMPLATE(template __global__ void tileKernel,(void const* inputBuffer, Nd4jLong* inputShape, void* outputBuffer, Nd4jLong* outputShape, Nd4jLong resultLength), LIBND4J_TYPES);

 template<typename T>
 void tileKernelH(void const *inputBuffer, Nd4jLong *inputShape, void *outputBuffer, Nd4jLong *outputShape, Nd4jLong resultLength, cudaStream_t *stream) {

@@ -77,29 +77,26 @@ namespace nd4j {
     if (ordering == 'c' && ews == 1) {          // ews == 1 always here
         for (int i = tid; i < resultLength; i += totalThreads) {
-            auto yOffset = shape::subArrayOffset(i, outputShape, inputShape);
+            auto yOffset = _subArrayOffset(i, outputShape, inputShape);
-            *(reinterpret_cast<X *>(outputBuffer) + i) = static_cast<X>(*(reinterpret_cast<Y const *>(inputBuffer) +
-                    yOffset));
+            *(reinterpret_cast<X *>(outputBuffer) + i) = static_cast<X>(*(reinterpret_cast<Y const *>(inputBuffer) + yOffset));
         }
     } else if (ordering == 'c' && ews > 1) {
         for (int i = tid; i < resultLength; i += totalThreads) {
-            auto yOffset = shape::subArrayOffset(i, outputShape, inputShape);
+            auto yOffset = _subArrayOffset(i, outputShape, inputShape);
-            *(reinterpret_cast<X *>(outputBuffer) + i * ews) = static_cast<X>(*(
-                    reinterpret_cast<Y const *>(inputBuffer) + yOffset));
+            *(reinterpret_cast<X *>(outputBuffer) + i * ews) = static_cast<X>(*(reinterpret_cast<Y const *>(inputBuffer) + yOffset));
         }
     } else {

         for (int i = tid; i < resultLength; i += totalThreads) {

-            auto xOffset = shape::getIndexOffset(i, outputShape, resultLength);
+            auto xOffset = _getIndexOffset(i, outputShape, resultLength);
-            auto yOffset = shape::subArrayOffset(i, outputShape, inputShape);
+            auto yOffset = _subArrayOffset(i, outputShape, inputShape);
-            *(reinterpret_cast<X *>(outputBuffer) + xOffset) = static_cast<X>(*(
-                    reinterpret_cast<Y const *>(inputBuffer) + yOffset));
+            *(reinterpret_cast<X *>(outputBuffer) + xOffset) = static_cast<X>(*(reinterpret_cast<Y const *>(inputBuffer) + yOffset));
         }
     }
 }

-BUILD_DOUBLE_TEMPLATE(template __global__ void tileKernelDouble, (void const* inputBuffer, Nd4jLong* inputShape, void* outputBuffer, Nd4jLong* outputShape, Nd4jLong resultLength, Nd4jLong ews), LIBND4J_TYPES, LIBND4J_TYPES);
+BUILD_SINGLE_TEMPLATE_TWICE(template __global__ void tileKernelDouble, (void const* inputBuffer, Nd4jLong* inputShape, void* outputBuffer, Nd4jLong* outputShape, Nd4jLong resultLength, Nd4jLong ews), LIBND4J_TYPES);

 template<typename X, typename Y>
 void tileKernelHH(void const *inputBuffer, Nd4jLong *inputShape, void *outputBuffer, Nd4jLong *outputShape, Nd4jLong resultLength, Nd4jLong ews, cudaStream_t *stream) {

@@ -107,5 +104,5 @@ namespace nd4j {
     tileKernelDouble<X, Y><<<launchDims.x, launchDims.y, launchDims.z, *stream>>>(inputBuffer, inputShape, outputBuffer, outputShape, resultLength, ews);
 }

-BUILD_DOUBLE_TEMPLATE(template void tileKernelHH, (void const* inputBuffer, Nd4jLong* inputShape, void* outputBuffer, Nd4jLong* outputShape, Nd4jLong resultLength, Nd4jLong ews, cudaStream_t *stream),LIBND4J_TYPES, LIBND4J_TYPES);
+BUILD_SINGLE_TEMPLATE_TWICE(template void tileKernelHH, (void const* inputBuffer, Nd4jLong* inputShape, void* outputBuffer, Nd4jLong* outputShape, Nd4jLong resultLength, Nd4jLong ews, cudaStream_t *stream),LIBND4J_TYPES);
 }
@@ -413,6 +413,74 @@ void _CUDA_G summaryStatsReduceT(int op, void *dx, Nd4jLong *xShapeInfo, int xRa
     DEBUG_KERNEL(stream, opNum);
 }

+        template <typename X, typename Y>
+        Y SummaryStatsReduce<X,Y>::execScalar(int opNum, bool biasCorrected, void *x, Nd4jLong *xShapeInfo, void *extraParams) {
+            return 0;
+        }
+
+        template <typename X, typename Y>
+        void SummaryStatsReduce<X,Y>::execScalar(int opNum, bool biasCorrected, void *x, Nd4jLong *xShapeInfo, void *extraParams, void *vz, Nd4jLong *resultShapeInfoBuffer) {
+
+        }
+
+        template <typename X, typename Y>
+        void SummaryStatsReduce<X,Y>::exec(int opNum, bool biasCorrected, void *x, Nd4jLong *xShapeInfo, void *extraParams, void *vz, Nd4jLong *resultShapeInfoBuffer, int *dimension, int dimensionLength) {
+
+        }
+
+        template <typename X, typename Y>
+        template<typename OpType>
+        Y SummaryStatsReduce<X,Y>::execScalar(bool biasCorrected, void *x, Nd4jLong *xShapeInfo, void *extraParams) {
+            return 0;
+        }
+
+        template <typename X, typename Y>
+        template<typename OpType>
+        void SummaryStatsReduce<X,Y>::execScalar(bool biasCorrected, void *x, Nd4jLong *xShapeInfo, void *extraParams, void *vz, Nd4jLong *resultShapeInfoBuffer) {
+            //
+        }
+
+        template <typename X, typename Y>
+        template<typename OpType>
+        void SummaryStatsReduce<X,Y>::exec(bool biasCorrected, void *x, Nd4jLong *xShapeInfo, void *extraParams, void *vz, Nd4jLong *resultShapeInfoBuffer, int *dimension, int dimensionLength) {
+
+        }
+
     BUILD_DOUBLE_TEMPLATE(template class ND4J_EXPORT SummaryStatsReduce, , LIBND4J_TYPES, FLOAT_TYPES);
 }
 }
@@ -114,6 +114,17 @@ namespace functions {
     nd4j::DebugHelper::checkErrorCode(stream, "transformAny(...) failed");
 }

+        template<typename X, typename Z>
+        void TransformAny<X,Z>::exec(int opNum, void *dx, Nd4jLong *xShapeInfo, void *vz, Nd4jLong *zShapeInfo, void *extraParams, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets, bool allowParallelism) {
+
+        }
+
+        template<typename X, typename Z>
+        template <typename OpType>
+        void TransformAny<X,Z>::exec(void *dx, Nd4jLong *xShapeInfo, void *vz, Nd4jLong *zShapeInfo, void *extraParams, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets, bool allowParallelism) {
+
+        }
+
     BUILD_DOUBLE_TEMPLATE(template class ND4J_EXPORT TransformAny, , LIBND4J_TYPES, LIBND4J_TYPES);
 }
 }
@@ -120,6 +120,17 @@ namespace functions {
     nd4j::DebugHelper::checkErrorCode(stream, "transformBool(...) failed");
 }

+        template<typename X, typename Z>
+        void TransformBool<X,Z>::exec(int opNum, void *dx, Nd4jLong *xShapeInfo, void *result, Nd4jLong *resultShapeInfo, void *extraParams, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets) {
+
+        }
+
+        template<typename X, typename Z>
+        template <typename OpType>
+        void TransformBool<X,Z>::exec(void *dx, Nd4jLong *xShapeInfo, void *result, Nd4jLong *resultShapeInfo, void *extraParams, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets) {
+
+        }
+
     BUILD_DOUBLE_TEMPLATE(template class ND4J_EXPORT TransformBool, , LIBND4J_TYPES, BOOL_TYPES);
 }
 }
@@ -142,6 +142,17 @@ namespace functions {
     nd4j::DebugHelper::checkErrorCode(stream, "transformFloat(...) failed");
 }

+        template<typename X, typename Z>
+        void TransformFloat<X,Z>::exec(int opNum, void *dx, Nd4jLong *xShapeInfo, void *result, Nd4jLong *resultShapeInfo, void *extraParams, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets) {
+
+        }
+
+        template<typename X, typename Z>
+        template <typename OpType>
+        void TransformFloat<X,Z>::exec(void *dx, Nd4jLong *xShapeInfo, void *result, Nd4jLong *resultShapeInfo, void *extraParams, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets) {
+
+        }
+
     BUILD_DOUBLE_TEMPLATE(template class ND4J_EXPORT TransformFloat, , LIBND4J_TYPES, FLOAT_TYPES);
 }
@@ -118,6 +118,17 @@ namespace functions {
     nd4j::DebugHelper::checkErrorCode(stream, "transformSame(...) failed");
 }

+        template<typename X>
+        void TransformSame<X>::exec(int opNum, void *dx, Nd4jLong *xShapeInfo, void *result, Nd4jLong *resultShapeInfo, void *extraParams, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets) {
+
+        }
+
+        template<typename X>
+        template <typename OpType>
+        void TransformSame<X>::exec(void *dx, Nd4jLong *xShapeInfo, void *result, Nd4jLong *resultShapeInfo, void *extraParams, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets) {
+
+        }
+
     BUILD_SINGLE_TEMPLATE(template class ND4J_EXPORT TransformSame, , LIBND4J_TYPES);
 }
 }
@@ -119,6 +119,17 @@ namespace functions {
     nd4j::DebugHelper::checkErrorCode(stream, "transformStrict(...) failed");
 }

+        template<typename X>
+        void TransformStrict<X>::exec(int opNum, void *dx, Nd4jLong *xShapeInfo, void *result, Nd4jLong *resultShapeInfo, void *extraParams, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets) {
+
+        }
+
+        template<typename X>
+        template <typename OpType>
+        void TransformStrict<X>::exec(void *dx, Nd4jLong *xShapeInfo, void *result, Nd4jLong *resultShapeInfo, void *extraParams, Nd4jLong *tadShapeInfo, Nd4jLong *tadOffsets) {
+
+        }
+
     BUILD_SINGLE_TEMPLATE(template class ND4J_EXPORT TransformStrict, , FLOAT_TYPES);
 }
 }
@@ -209,15 +209,6 @@ PRAGMA_OMP_ATOMIC_ARGS(write)
         }
     };

-    _CUDA_H Nd4jLong TypeCast::estimateQuantizedSize(Nd4jLong rawSize) {
-        if (rawSize <= 0)
-            throw std::runtime_error("Input size for quantization can't be <= 0");
-
-        // 2 fp32 values for max/min, and rawSize number of BYTES
-        return 8 + rawSize;
-    }
-
     template void TypeCast::convertFromThreshold<float>(Nd4jPointer * extras, void *dx, Nd4jLong N, void *dz);
     template void TypeCast::convertFromThreshold<float16>(Nd4jPointer * extras, void *dx, Nd4jLong N, void *dz);
     template void TypeCast::convertFromThreshold<double>(Nd4jPointer * extras, void *dx, Nd4jLong N, void *dz);
@@ -69,7 +69,14 @@ namespace nd4j {
     template <typename T>
     static _CUDA_H void convertFromThreshold(Nd4jPointer * extras, void *dx, Nd4jLong N, void *dz);

-    static _CUDA_H Nd4jLong estimateQuantizedSize(Nd4jLong rawSize);
+    FORCEINLINE static _CUDA_H Nd4jLong estimateQuantizedSize(Nd4jLong rawSize) {
+        if (rawSize <= 0)
+            throw std::runtime_error("Input size for quantization can't be <= 0");
+
+        // 2 fp32 values for max/min, and rawSize number of BYTES
+        return 8 + rawSize;
+    }

     template <typename T>
     static _CUDA_H void convertToQuantized(Nd4jPointer *extras, void *dx, Nd4jLong N, void *dz);
@@ -75,7 +75,7 @@ namespace nd4j {
     DECLARE_TYPES(non_max_suppression) {
         getOpDescriptor()
                 ->setAllowedInputTypes(nd4j::DataType::ANY)
-                ->setAllowedOutputTypes({ALL_INTS});
+                ->setAllowedOutputTypes({ALL_INDICES});
     }

 }
@@ -87,8 +87,7 @@ static void adjustHue_(const NDArray *input, const NDArray* deltaScalarArr, NDAr
 void adjustHue(nd4j::LaunchContext* context, const NDArray *input, const NDArray* deltaScalarArr, NDArray *output, const int dimC) {
-    BUILD_SINGLE_SELECTOR(input->dataType(), adjustHue_, (input, deltaScalarArr, output, dimC), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input->dataType(), adjustHue_, (input, deltaScalarArr, output, dimC), FLOAT_TYPES);
 }

 /*
@@ -89,7 +89,7 @@ static void adjustSaturation_(const NDArray *input, const NDArray* factorScalarA
 void adjustSaturation(nd4j::LaunchContext* context, const NDArray *input, const NDArray* factorScalarArr, NDArray *output, const int dimC) {
-    BUILD_SINGLE_SELECTOR(input->dataType(), adjustSaturation_, (input, factorScalarArr, output, dimC), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input->dataType(), adjustSaturation_, (input, factorScalarArr, output, dimC), FLOAT_TYPES);
 }

 /*
@@ -119,11 +119,9 @@ void col2im_(nd4j::LaunchContext & context, const NDArray& input, NDArray& outp

 void col2im(nd4j::LaunchContext & context, const NDArray& input, NDArray& output, const int sH, const int sW, const int pH, const int pW, const int iH, const int iW, const int dH, const int dW) {
-    BUILD_SINGLE_SELECTOR(input.dataType(), col2im_, (context, input, output, sH, sW, pH, pW, iH, iW, dH, dW), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input.dataType(), col2im_, (context, input, output, sH, sW, pH, pW, iH, iW, dH, dW), FLOAT_TYPES);
 }

-BUILD_SINGLE_TEMPLATE(template void col2im_, (nd4j::LaunchContext & context, const NDArray& input, NDArray& output, const int sH, const int sW, const int pH, const int pW, const int iH, const int iW, const int dH, const int dW), LIBND4J_TYPES);
-
 }
 }
 }
@@ -2445,71 +2445,52 @@ void ConvolutionUtils::getMKLDNNMemoryDescConv3d(

 void ConvolutionUtils::conv2d(nd4j::graph::Context& block, const NDArray* input, const NDArray* weights, const NDArray* bias, NDArray* output, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW) {
-    BUILD_DOUBLE_SELECTOR(input->dataType(), output->dataType(), conv2d_, (block, input, weights, bias, output, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(input->dataType(), conv2d_, (block, input, weights, bias, output, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), FLOAT_TYPES);
 }
 void ConvolutionUtils::conv2dBP(nd4j::graph::Context& block, const NDArray* input, const NDArray* weights, const NDArray* bias, const NDArray* gradO, NDArray* gradI, NDArray* gradW, NDArray* gradB, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW) {
-    BUILD_DOUBLE_SELECTOR(input->dataType(), gradO->dataType(), conv2dBP_, (block, input, weights, bias, gradO, gradI, gradW, gradB, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(input->dataType(), conv2dBP_, (block, input, weights, bias, gradO, gradI, gradW, gradB, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), FLOAT_TYPES);
 }
 void ConvolutionUtils::depthwiseConv2d(nd4j::graph::Context& block, const NDArray* input, const NDArray* weights, const NDArray* bias, NDArray* output, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW) {
-    BUILD_DOUBLE_SELECTOR(input->dataType(), output->dataType(), depthwiseConv2d_, (input, weights, bias, output, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(input->dataType(), depthwiseConv2d_, (input, weights, bias, output, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), FLOAT_TYPES);
 }
 void ConvolutionUtils::depthwiseConv2dBP(nd4j::graph::Context& block, const NDArray* input, const NDArray* weights, const NDArray* bias, const NDArray* gradO, NDArray* gradI, NDArray* gradW, NDArray* gradB, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW) {
-    BUILD_DOUBLE_SELECTOR(input->dataType(), gradO->dataType(), depthwiseConv2dBP_, (input, weights, bias, gradO, gradI, gradW, gradB, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(input->dataType(), depthwiseConv2dBP_, (input, weights, bias, gradO, gradI, gradW, gradB, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), FLOAT_TYPES);
 }
 void ConvolutionUtils::sconv2d(nd4j::graph::Context& block, const NDArray* input, const NDArray* weightsDepth, const NDArray* weightsPoint, const NDArray* bias, NDArray* output, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW) {
-    BUILD_DOUBLE_SELECTOR(input->dataType(), output->dataType(), sconv2d_, (block, input, weightsDepth, weightsPoint, bias, output, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(input->dataType(), sconv2d_, (block, input, weightsDepth, weightsPoint, bias, output, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), FLOAT_TYPES);
 }
 void ConvolutionUtils::vol2col(nd4j::graph::Context& block, const NDArray& volume, NDArray& columns, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW) {
-    BUILD_SINGLE_SELECTOR(volume.dataType(), vol2col_, (volume, columns, sD, sH, sW, pD, pH, pW, dD, dH, dW), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(volume.dataType(), vol2col_, (volume, columns, sD, sH, sW, pD, pH, pW, dD, dH, dW), FLOAT_TYPES);
 }
 void ConvolutionUtils::col2vol(nd4j::graph::Context& block, const NDArray& columns, NDArray& volume, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW) {
-    BUILD_SINGLE_SELECTOR(volume.dataType(), col2vol_, (columns, volume, sD, sH, sW, pD, pH, pW, dD, dH, dW), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(volume.dataType(), col2vol_, (columns, volume, sD, sH, sW, pD, pH, pW, dD, dH, dW), FLOAT_TYPES);
 }
 void ConvolutionUtils::upsampling2d(nd4j::graph::Context& block, const NDArray& input, NDArray& output, const int factorH, const int factorW, const bool isNCHW) {
-    BUILD_SINGLE_SELECTOR(input.dataType(), upsampling2d_, (input, output, factorH, factorW, isNCHW), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input.dataType(), upsampling2d_, (input, output, factorH, factorW, isNCHW), FLOAT_TYPES);
 }
 void ConvolutionUtils::upsampling3d(nd4j::graph::Context& block, const NDArray& input, NDArray& output, const int factorD, const int factorH, const int factorW, const bool isNCDHW) {
-    BUILD_SINGLE_SELECTOR(input.dataType(), upsampling3d_, (input, output, factorD, factorH, factorW, isNCDHW), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input.dataType(), upsampling3d_, (input, output, factorD, factorH, factorW, isNCDHW), FLOAT_TYPES);
 }
 void ConvolutionUtils::upsampling2dBP(nd4j::graph::Context& block, const NDArray& gradO, NDArray& gradI, const bool isNCHW) {
-    BUILD_SINGLE_SELECTOR(gradO.dataType(), upsampling2dBP_, (gradO, gradI, isNCHW), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(gradO.dataType(), upsampling2dBP_, (gradO, gradI, isNCHW), FLOAT_TYPES);
 }
 void ConvolutionUtils::upsampling3dBP(nd4j::graph::Context& block, const NDArray& gradO, NDArray& gradI, const bool isNCHW) {
-    BUILD_SINGLE_SELECTOR(gradO.dataType(), upsampling3dBP_, (gradO, gradI, isNCHW), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(gradO.dataType(), upsampling3dBP_, (gradO, gradI, isNCHW), FLOAT_TYPES);
 }

 void ConvolutionUtils::pooling2d(nd4j::graph::Context& block, const NDArray& input, NDArray& output, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const PoolingType poolingMode, const int extraParam0) {
-    BUILD_SINGLE_SELECTOR(input.dataType(), pooling2d_, (block, input, output, kH, kW, sH, sW, pH, pW, dH, dW, poolingMode, extraParam0), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input.dataType(), pooling2d_, (block, input, output, kH, kW, sH, sW, pH, pW, dH, dW, poolingMode, extraParam0), FLOAT_TYPES);
 }
 void ConvolutionUtils::pooling3d(nd4j::graph::Context& block, const NDArray& input, NDArray& output, const int kD, const int kH, const int kW, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW, const int poolingMode, const int extraParam0) {
-    BUILD_SINGLE_SELECTOR(input.dataType(), pooling3d_, (block, input, output, kD, kH, kW, sD, sH, sW, pD, pH, pW, dD, dH, dW, poolingMode, extraParam0), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input.dataType(), pooling3d_, (block, input, output, kD, kH, kW, sD, sH, sW, pD, pH, pW, dD, dH, dW, poolingMode, extraParam0), FLOAT_TYPES);
 }
 void ConvolutionUtils::pooling2dBP(nd4j::graph::Context& block, const NDArray& input, const NDArray& gradO, NDArray& gradI, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const int poolingMode, const int extraParam0) {
-    BUILD_SINGLE_SELECTOR(input.dataType(), pooling2dBP_, (block, input, gradO, gradI, kH, kW, sH, sW, pH, pW, dH, dW, poolingMode, extraParam0), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input.dataType(), pooling2dBP_, (block, input, gradO, gradI, kH, kW, sH, sW, pH, pW, dH, dW, poolingMode, extraParam0), FLOAT_TYPES);
 }
 void ConvolutionUtils::pooling3dBP(nd4j::graph::Context& block, const NDArray& input, const NDArray& gradO, NDArray& gradI, const int kD, const int kH, const int kW, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW, const int poolingMode, const int extraParam0) {
-    BUILD_SINGLE_SELECTOR(input.dataType(), pooling3dBP_, (block, input, gradO, gradI, kD, kH, kW, sD, sH, sW, pD, pH, pW, dD, dH, dW, poolingMode, extraParam0), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input.dataType(), pooling3dBP_, (block, input, gradO, gradI, kD, kH, kW, sD, sH, sW, pD, pH, pW, dD, dH, dW, poolingMode, extraParam0), FLOAT_TYPES);
 }

-BUILD_DOUBLE_TEMPLATE(template void conv2d_, (nd4j::graph::Context& block, const NDArray* input, const NDArray* weights, const NDArray* bias, NDArray* output, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
-BUILD_DOUBLE_TEMPLATE(template void conv2dBP_, (nd4j::graph::Context& block, const NDArray* input, const NDArray* weights, const NDArray* bias, const NDArray* gradO, NDArray* gradI, NDArray* gradW, NDArray* gradB, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
-BUILD_DOUBLE_TEMPLATE(template void depthwiseConv2d_, (const NDArray* input, const NDArray* weights, const NDArray* bias, NDArray* output, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
-BUILD_DOUBLE_TEMPLATE(template void depthwiseConv2dBP_, (const NDArray* input, const NDArray* weights, const NDArray* bias, const NDArray* gradO, NDArray* gradI, NDArray* gradW, NDArray* gradB, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
-BUILD_DOUBLE_TEMPLATE(template void sconv2d_, (nd4j::graph::Context& block, const NDArray* input, const NDArray* weightsDepth, const NDArray* weightsPoint, const NDArray* bias, NDArray* output, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
-
-BUILD_SINGLE_TEMPLATE(template void upsampling2d_, (const NDArray& input, NDArray& output, const int factorH, const int factorW, const bool isNCHW), LIBND4J_TYPES);
-BUILD_SINGLE_TEMPLATE(template void upsampling3d_, (const NDArray& input, NDArray& output, const int factorD, const int factorH, const int factorW, const bool isNCDHW), LIBND4J_TYPES);
-BUILD_SINGLE_TEMPLATE(template void upsampling2dBP_, (const NDArray& gradO, NDArray& gradI, const bool isNCHW), LIBND4J_TYPES);
-BUILD_SINGLE_TEMPLATE(template void upsampling3dBP_, (const NDArray& gradO, NDArray& gradI, const bool isNCHW), LIBND4J_TYPES);
-BUILD_SINGLE_TEMPLATE(template void vol2col_, (const NDArray& volume, NDArray& columns, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW), LIBND4J_TYPES);
-BUILD_SINGLE_TEMPLATE(template void col2vol_, (const NDArray& columns, NDArray& volume, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW), LIBND4J_TYPES);
-BUILD_SINGLE_TEMPLATE(template void pooling2d_, (nd4j::graph::Context& block, const NDArray& input, NDArray& output, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const int poolingMode, const int extraParam0), LIBND4J_TYPES);
-BUILD_SINGLE_TEMPLATE(template void pooling3d_, (nd4j::graph::Context& block, const NDArray& input, NDArray& output, const int kD, const int kH, const int kW, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW, const int poolingMode, const int extraParam0), LIBND4J_TYPES);
-BUILD_SINGLE_TEMPLATE(template void pooling2dBP_, (nd4j::graph::Context& block, const NDArray& input, const NDArray& gradO, NDArray& gradI, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const int poolingMode, const int extraParam0), LIBND4J_TYPES);
-BUILD_SINGLE_TEMPLATE(template void pooling3dBP_, (nd4j::graph::Context& block, const NDArray& input, const NDArray& gradO, NDArray& gradI, const int kD, const int kH, const int kW, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW, const int poolingMode, const int extraParam0), LIBND4J_TYPES);

 }
 }
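All of the ConvolutionUtils wrappers above move from a two-type-list dispatch (BUILD_DOUBLE_SELECTOR over LIBND4J_TYPES x FLOAT_TYPES) to a single floating-point type used for both template slots, and the explicit BUILD_*_TEMPLATE instantiation lines are dropped. The payoff is a much smaller cross product of template instantiations. A self-contained sketch of the same idea; the macro and kernel names below are invented for illustration and are not libnd4j's:

#include <cstdio>

// Hypothetical stand-ins for the libnd4j dispatch macros, illustration only.
enum class DataType { FLOAT32, DOUBLE };

// Kernels in this style take void* buffers and cast to the selected types inside,
// so every dispatch branch compiles no matter what the caller actually holds.
template <typename X, typename Z>
void conv2dKernelSketch(const void* vin, void* vout, int n) {
    auto in  = reinterpret_cast<const X*>(vin);
    auto out = reinterpret_cast<Z*>(vout);
    for (int i = 0; i < n; ++i) out[i] = static_cast<Z>(in[i]);   // placeholder body
}

// "Double selector": two runtime types -> |X| * |Z| template instantiations.
#define SKETCH_DOUBLE_SELECTOR(xt, zt, fn, args)                                              \
    if ((xt) == DataType::FLOAT32 && (zt) == DataType::FLOAT32) fn<float, float> args;       \
    else if ((xt) == DataType::FLOAT32 && (zt) == DataType::DOUBLE) fn<float, double> args;  \
    else if ((xt) == DataType::DOUBLE && (zt) == DataType::FLOAT32) fn<double, float> args;  \
    else fn<double, double> args;

// "Single selector twice": one runtime type reused for both slots -> only |X| instantiations.
#define SKETCH_SINGLE_SELECTOR_TWICE(xt, fn, args)                                            \
    if ((xt) == DataType::FLOAT32) fn<float, float> args;                                     \
    else fn<double, double> args;

int main() {
    float in[4] = {1, 2, 3, 4}, out[4] = {};
    SKETCH_SINGLE_SELECTOR_TWICE(DataType::FLOAT32, conv2dKernelSketch, (in, out, 4));
    std::printf("%f\n", out[3]);  // 4.0
    return 0;
}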
@@ -81,10 +81,8 @@ static void dilation2d_(NDArray *input, NDArray *weights, NDArray *output, const
 }
 }

-BUILD_DOUBLE_TEMPLATE(template void dilation2d_, (NDArray *input, NDArray *weights, NDArray *output, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW), LIBND4J_TYPES, FLOAT_TYPES);
-
 void dilation2d(nd4j::LaunchContext* context, NDArray *input, NDArray *weights, NDArray *output, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW) {
-    BUILD_DOUBLE_SELECTOR(input->dataType(), output->dataType(), dilation2d_, (input, weights, output, sH, sW, pH, pW, dH, dW), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(input->dataType(), dilation2d_, (input, weights, output, sH, sW, pH, pW, dH, dW), FLOAT_TYPES);
 }
@@ -76,7 +76,7 @@ namespace nd4j {
     double min_val = input.reduceNumber(reduce::SameOps::Min).e<double>(0);
     double max_val = input.reduceNumber(reduce::SameOps::Max).e<double>(0);

-    BUILD_DOUBLE_SELECTOR(input.dataType(), output.dataType(), histogram_, (input.buffer(), input.shapeInfo(), output.getBuffer(), output.getShapeInfo(), numBins, min_val, max_val), LIBND4J_TYPES, INTEGER_TYPES);
+    BUILD_DOUBLE_SELECTOR(input.dataType(), output.dataType(), histogram_, (input.buffer(), input.shapeInfo(), output.getBuffer(), output.getShapeInfo(), numBins, min_val, max_val), LIBND4J_TYPES, INDEXING_TYPES);
 }
 }
 }
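The histogram helper above first reduces the input to its min and max and then dispatches over the input type and an index-typed output (the second list moving from INTEGER_TYPES to INDEXING_TYPES). A rough standalone illustration of the binning being dispatched, assumed behaviour rather than the library's code:

#include <cstdint>
#include <vector>
#include <algorithm>
#include <cstdio>

// Illustrative sketch: count how many inputs fall into each of numBins
// equally sized buckets spanning [min_val, max_val].
std::vector<int64_t> histogramSketch(const std::vector<double>& input, int numBins) {
    const double min_val = *std::min_element(input.begin(), input.end());
    const double max_val = *std::max_element(input.begin(), input.end());
    const double width = (max_val - min_val) / numBins;

    std::vector<int64_t> bins(numBins, 0);
    for (double v : input) {
        int idx = width > 0 ? static_cast<int>((v - min_val) / width) : 0;
        idx = std::min(idx, numBins - 1);        // put max_val into the last bin
        ++bins[idx];
    }
    return bins;
}

int main() {
    auto bins = histogramSketch({0.0, 0.5, 1.0, 2.0, 3.0, 4.0}, 4);
    for (auto c : bins) std::printf("%lld ", static_cast<long long>(c));
    std::printf("\n");
    return 0;
}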
@@ -122,11 +122,9 @@ static void im2col_(nd4j::LaunchContext & context, const NDArray& input, NDArra

 void im2col(nd4j::LaunchContext & context, const NDArray& im, NDArray& col, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const NDArray& arrZeroPadVal) {
-    BUILD_SINGLE_SELECTOR(im.dataType(), im2col_, (context, im, col, kH, kW, sH, sW, pH, pW, dH, dW, arrZeroPadVal), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(im.dataType(), im2col_, (context, im, col, kH, kW, sH, sW, pH, pW, dH, dW, arrZeroPadVal), FLOAT_TYPES);
 }

-BUILD_SINGLE_TEMPLATE(template void im2col_, (nd4j::LaunchContext & context, const NDArray& im, NDArray& col, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const NDArray& arrZeroPadVal), LIBND4J_TYPES);
-
 }
 }
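im2col (and its adjoint col2im a few hunks earlier) is the usual convolution-as-GEMM preparation step: every kH x kW patch of the padded image becomes one column of the output buffer, with out-of-bounds positions filled by the zero-padding value. A compact single-channel sketch of the indexing the dispatched kernels perform; this assumes a row-major single-channel layout and is an illustration, not the library's implementation:

#include <vector>
#include <cstdio>

// Illustrative im2col for a single channel: input is iH x iW (row-major),
// output has (kH*kW) rows and (oH*oW) columns, one column per output position.
std::vector<float> im2colSketch(const std::vector<float>& im, int iH, int iW,
                                int kH, int kW, int sH, int sW,
                                int pH, int pW, int dH, int dW, float padVal) {
    const int oH = (iH + 2 * pH - (dH * (kH - 1) + 1)) / sH + 1;
    const int oW = (iW + 2 * pW - (dW * (kW - 1) + 1)) / sW + 1;
    std::vector<float> col(static_cast<size_t>(kH) * kW * oH * oW, padVal);

    for (int kh = 0; kh < kH; ++kh)
        for (int kw = 0; kw < kW; ++kw)
            for (int oh = 0; oh < oH; ++oh)
                for (int ow = 0; ow < oW; ++ow) {
                    const int ih = oh * sH - pH + kh * dH;
                    const int iw = ow * sW - pW + kw * dW;
                    if (ih >= 0 && ih < iH && iw >= 0 && iw < iW)
                        col[(((kh * kW + kw) * oH + oh) * oW) + ow] = im[ih * iW + iw];
                }
    return col;
}

int main() {
    std::vector<float> im{1, 2, 3, 4, 5, 6, 7, 8, 9};          // 3x3 image
    auto col = im2colSketch(im, 3, 3, 2, 2, 1, 1, 0, 0, 1, 1, 0.f);
    std::printf("%zu columns of length 4\n", col.size() / 4);  // 4 output positions
    return 0;
}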
@@ -334,10 +334,6 @@ namespace helpers {
     BUILD_TRIPLE_SELECTOR(images->dataType(), boxes->dataType(), indices->dataType(), cropAndResizeFunctor_,
                           (images, boxes, indices, cropSize, method, extrapolationVal, crops), NUMERIC_TYPES, FLOAT_TYPES, INTEGER_TYPES);
 }

-BUILD_TRIPLE_TEMPLATE(template void cropAndResizeFunctor_,
-                      (NDArray const* images, NDArray const* boxes, NDArray const* indices, NDArray const* cropSize, int method, double extrapolationVal, NDArray* crops),
-                      NUMERIC_TYPES, FLOAT_TYPES, INTEGER_TYPES);
 }
 }
 }
@@ -32,7 +32,6 @@ namespace helpers {
         theFirst->applyPairwiseLambda<T>(theSecond, functor, nullptr);
     }
-    BUILD_SINGLE_TEMPLATE(template void reluDerivative__, (NDArray* input, NDArray* epsilon), FLOAT_TYPES);

     void reluDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), reluDerivative__, (theFirst, theSecond), FLOAT_TYPES);

@@ -46,7 +45,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }
-    BUILD_SINGLE_TEMPLATE(template void reluDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);

     void reluDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), reluDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);

@@ -61,8 +59,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void relu6Derivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void relu6Derivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), relu6Derivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }
@@ -76,8 +72,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void leakyReluDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void leakyReluDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), leakyReluDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }

@@ -91,8 +85,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void eluDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void eluDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), eluDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }

@@ -106,8 +98,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void seluDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void seluDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), seluDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }

@@ -121,8 +111,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void cubeDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void cubeDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), cubeDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }

@@ -137,8 +125,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void reduceNorm1_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void reduceNorm1(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), reduceNorm1_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }
@@ -153,8 +139,6 @@ namespace helpers {
         logits->applyPairwiseLambda<T>(labels, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void sigmCrossEntropy_, (NDArray* logits, NDArray* labels, NDArray* output);, FLOAT_TYPES);
-
     void sigmCrossEntropy(nd4j::LaunchContext * context, NDArray* logits, NDArray* labels, NDArray* output) {
         BUILD_SINGLE_SELECTOR(logits->dataType(), sigmCrossEntropy_, (logits, labels, output), FLOAT_TYPES);
     }

@@ -173,8 +157,6 @@ namespace helpers {
         logits->applyPairwiseLambda<T>(labels, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void sigmCrossEntropyGrad_, (NDArray* logits, NDArray* labels, NDArray*output);, FLOAT_TYPES);
-
     void sigmCrossEntropyGrad(nd4j::LaunchContext * context, NDArray* logits, NDArray* labels, NDArray* output) {
         BUILD_SINGLE_SELECTOR(logits->dataType(), sigmCrossEntropyGrad_, (logits, labels, output), FLOAT_TYPES);
     }

@@ -190,8 +172,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void tanhDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void tanhDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), tanhDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }
@@ -207,8 +187,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void hardTanhDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void hardTanhDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), hardTanhDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }

@@ -222,8 +200,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void rationalTanhDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void rationalTanhDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), rationalTanhDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }

@@ -237,8 +213,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void rectifiedTanhDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void rectifiedTanhDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), rectifiedTanhDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }

@@ -256,8 +230,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void softSignDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void softSignDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), softSignDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }

@@ -272,8 +244,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void softPlusDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void softPlusDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), softPlusDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }

@@ -291,8 +261,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void sigmoidDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void sigmoidDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), sigmoidDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }

@@ -306,8 +274,6 @@ namespace helpers {
         input->applyPairwiseLambda<T>(epsilon, functor, output);
     }

-    BUILD_SINGLE_TEMPLATE(template void hardSigmoidDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
     void hardSigmoidDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
         BUILD_SINGLE_SELECTOR(theFirst->dataType(), hardSigmoidDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
     }
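Each of the activation and loss hunks above follows the same pattern: a templated helper builds the derivative element-wise with applyPairwiseLambda over (input, upstream epsilon), the public function dispatches it over FLOAT_TYPES, and the now-redundant BUILD_SINGLE_TEMPLATE instantiation lines are deleted. As a generic, self-contained illustration of that backward pattern (sigmoid chosen arbitrarily; this is not the library's lambda):

#include <vector>
#include <cmath>
#include <cstdio>

// Illustrative "pairwise lambda": out[i] = f'(in[i]) * eps[i], here for sigmoid.
template <typename T>
void sigmoidDerivativeSketch(const std::vector<T>& in, const std::vector<T>& eps,
                             std::vector<T>& out) {
    out.resize(in.size());
    for (size_t i = 0; i < in.size(); ++i) {
        const T s = static_cast<T>(1) / (static_cast<T>(1) + std::exp(-in[i]));
        out[i] = s * (static_cast<T>(1) - s) * eps[i];   // chain rule with upstream grad
    }
}

int main() {
    std::vector<float> x{-1.f, 0.f, 1.f}, eps{1.f, 1.f, 1.f}, grad;
    sigmoidDerivativeSketch(x, eps, grad);
    for (float g : grad) std::printf("%f ", g);
    std::printf("\n");
    return 0;
}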
@@ -347,13 +313,10 @@ namespace helpers {
     void logSumExp(nd4j::LaunchContext * context, NDArray* input, NDArray* axis, NDArray* output) {
         BUILD_SINGLE_SELECTOR(input->dataType(), logSumExp_, (input, axis, output), FLOAT_TYPES);
     }
-    BUILD_SINGLE_TEMPLATE(template void logSumExp_, (NDArray* input, NDArray* axis, NDArray*output);, FLOAT_TYPES);

     void logSumExp(nd4j::LaunchContext * context, NDArray* input, NDArray* subtrah, NDArray* axis, NDArray* output) {
         BUILD_SINGLE_SELECTOR(input->dataType(), logSumExp_, (input, subtrah, axis, output), FLOAT_TYPES);
     }
-    BUILD_SINGLE_TEMPLATE(template void logSumExp_, (NDArray* input, NDArray* subtrah, NDArray* axis, NDArray*output);, FLOAT_TYPES);

     //////////////////////////////////////////////////////////////////////////
     template <typename T>
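The logSumExp helpers above reduce along the given axis; numerically the reduction is log(sum(exp(x_i))) computed in the max-shifted form so the exponentials cannot overflow. A minimal flat-vector sketch of that formula, for illustration only:

#include <vector>
#include <cmath>
#include <algorithm>
#include <cstdio>

// Numerically stable log-sum-exp over a flat vector:
// logsumexp(x) = m + log(sum(exp(x_i - m))), with m = max(x).
double logSumExpSketch(const std::vector<double>& x) {
    const double m = *std::max_element(x.begin(), x.end());
    double acc = 0.0;
    for (double v : x) acc += std::exp(v - m);
    return m + std::log(acc);
}

int main() {
    std::printf("%f\n", logSumExpSketch({1000.0, 1000.0}));  // ~1000.693, no overflow
    return 0;
}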
@@ -393,7 +356,6 @@ static void weightedCrossEntropyWithLogitsFunctor_(NDArray const* targets, NDArr
     void weightedCrossEntropyWithLogitsFunctor(nd4j::LaunchContext * context, NDArray const* targets, NDArray const* input, NDArray const* weights, NDArray* output) {
         BUILD_SINGLE_SELECTOR(targets->dataType(), weightedCrossEntropyWithLogitsFunctor_, (targets, input, weights, output), FLOAT_TYPES);
     }
-    BUILD_SINGLE_TEMPLATE(template void weightedCrossEntropyWithLogitsFunctor_, (NDArray const* targets, NDArray const* input, NDArray const* weights, NDArray* output), FLOAT_TYPES);

 }
 }
@@ -410,10 +410,9 @@ static void lrnBP_(const NDArray& input, const NDArray& gradO, NDArray& gradI, c
     gradI *= gradO;
 }

-BUILD_DOUBLE_TEMPLATE(template void lrnBP_, (const NDArray& input, const NDArray& gradO, NDArray& gradI, const int depth, const float bias, const float alpha, const float beta), LIBND4J_TYPES, FLOAT_TYPES);
-
 void lrnBP(nd4j::graph::Context& block, const NDArray& input, const NDArray& gradO, NDArray& gradI, const int depth, const float bias, const float alpha, const float beta) {
-    BUILD_DOUBLE_SELECTOR(input.dataType(), gradO.dataType(), lrnBP_, (input, gradO, gradI, depth, bias, alpha, beta), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_DOUBLE_SELECTOR(input.dataType(), gradO.dataType(), lrnBP_, (input, gradO, gradI, depth, bias, alpha, beta), FLOAT_TYPES, FLOAT_TYPES);
 }

 }
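For context on the lrnBP hunk: local response normalization divides each element by (bias + alpha * sum of squared neighbours in a window) raised to beta, and the backward helper differentiates that expression before finally multiplying by the incoming gradient, which is the `gradI *= gradO` line kept above. A 1-D forward sketch of the normalization, as an illustration of the quantity being differentiated rather than the library's kernel:

#include <vector>
#include <cmath>
#include <algorithm>

// Illustrative 1D LRN forward pass:
// out[i] = in[i] / (bias + alpha * sum_{j in [i-depth, i+depth]} in[j]^2)^beta.
std::vector<float> lrnForwardSketch(const std::vector<float>& in, int depth,
                                    float bias, float alpha, float beta) {
    const int n = static_cast<int>(in.size());
    std::vector<float> out(n);
    for (int i = 0; i < n; ++i) {
        float sum = 0.f;
        for (int j = std::max(0, i - depth); j <= std::min(n - 1, i + depth); ++j)
            sum += in[j] * in[j];
        out[i] = in[i] / std::pow(bias + alpha * sum, beta);
    }
    return out;
}

int main() {
    auto out = lrnForwardSketch({1.f, 2.f, 3.f}, 1, 1.f, 1e-4f, 0.75f);
    return out.size() == 3 ? 0 : 1;
}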
@@ -345,8 +345,6 @@ template <typename T>
     int cholesky(nd4j::LaunchContext * context, NDArray* input, NDArray* output, bool inplace) {
         BUILD_SINGLE_SELECTOR(input->dataType(), return cholesky_, (input, output, inplace), FLOAT_TYPES);
     }
-    BUILD_SINGLE_TEMPLATE(template int cholesky_, (NDArray* input, NDArray* output, bool inplace), FLOAT_TYPES);
-    BUILD_SINGLE_TEMPLATE(template int inverse_, (NDArray* input, NDArray* output), FLOAT_TYPES);

     template <typename T>
     int logdetFunctor_(NDArray* input, NDArray* output) {
@@ -1,64 +0,0 @@
-/*******************************************************************************
- * Copyright (c) 2015-2018 Skymind, Inc.
- *
- * This program and the accompanying materials are made available under the
- * terms of the Apache License, Version 2.0 which is available at
- * https://www.apache.org/licenses/LICENSE-2.0.
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations
- * under the License.
- *
- * SPDX-License-Identifier: Apache-2.0
- ******************************************************************************/
-
-//
-// Created by raver119 on 20.12.17.
-//
-
-#include <ops/declarable/helpers/matmul.h>
-
-namespace nd4j {
-    namespace ops {
-        namespace helpers {
-            template <typename X, typename Y, typename Z>
-            void __matmul(NDArray *vA, NDArray *vB, NDArray *vC, int transA, int transB, double alpha, double beta) {
-                CBLAS_TRANSPOSE tA = (CBLAS_TRANSPOSE) transA;
-                CBLAS_TRANSPOSE tB = (CBLAS_TRANSPOSE) transB;
-
-                int M = vA->sizeAt(0);
-                int N = vB->sizeAt(1);
-                int K = vA->sizeAt(1);
-
-                int ldA = transA == CblasNoTrans ? M : K;
-                int ldB = transB == CblasNoTrans ? K : N;
-                int ldC = M;
-
-                auto A = reinterpret_cast<X *>(vA->buffer());
-                auto B = reinterpret_cast<Y *>(vB->buffer());
-                auto C = reinterpret_cast<Z *>(vC->buffer());
-
-                PRAGMA_OMP_PARALLEL_FOR_SIMD_COLLAPSE(2)
-                for (int m = 0; m < M; ++m) {
-                    for (int n = 0; n < N; ++n) {
-                        Z c_mnp = 0;
-
-                        for (int k = 0; k < K; ++k)
-                            c_mnp += (Z) A[tA == CblasNoTrans ? (m + k * ldA) : (m * ldA + k)] * (Z) B[tB == CblasNoTrans ? (k + n * ldB) : (k * ldB + n)];
-
-                        C[m + n * ldC] = (Z) alpha * (Z) c_mnp + (Z) beta * (Z) C[m + n * ldC];
-                    }
-                }
-            }
-
-            void _matmul(nd4j::LaunchContext * context, NDArray *vA, NDArray *vB, NDArray *vC, int transA, int transB, double alpha, double beta) {
-                BUILD_TRIPLE_SELECTOR(vA->dataType(), vB->dataType(), vC->dataType(), __matmul, (vA, vB, vC, transA, transB, alpha, beta), LIBND4J_TYPES, LIBND4J_TYPES, LIBND4J_TYPES);
-            }
-
-            BUILD_TRIPLE_TEMPLATE(template void __matmul, (NDArray *A, NDArray *B, NDArray *C, int transA, int transB, double alpha, double beta), LIBND4J_TYPES, LIBND4J_TYPES, LIBND4J_TYPES);
-        }
-    }
-}
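The file deleted above contained a straightforward triple-loop GEMM over column-major buffers: C[m + n*ldC] = alpha * op(A)*op(B) + beta * C, with the leading dimensions ldA/ldB/ldC picked from the transpose flags. For readers following the removal, here is a self-contained sketch equivalent to the deleted loop, with plain floats standing in for the templated X/Y/Z types:

#include <cstdio>

// Column-major naive GEMM, mirroring the loop removed above:
// C[m + n*ldC] = alpha * sum_k op(A)[m,k] * op(B)[k,n] + beta * C[m + n*ldC].
void matmulSketch(const float* A, const float* B, float* C,
                  int M, int N, int K, bool transA, bool transB,
                  float alpha, float beta) {
    const int ldA = transA ? K : M;
    const int ldB = transB ? N : K;
    const int ldC = M;
    for (int m = 0; m < M; ++m) {
        for (int n = 0; n < N; ++n) {
            float acc = 0.f;
            for (int k = 0; k < K; ++k)
                acc += A[transA ? (m * ldA + k) : (m + k * ldA)]
                     * B[transB ? (k * ldB + n) : (k + n * ldB)];
            C[m + n * ldC] = alpha * acc + beta * C[m + n * ldC];
        }
    }
}

int main() {
    // 2x2 identity times [[1,2],[3,4]] (both stored column-major)
    float A[] = {1, 0, 0, 1}, B[] = {1, 3, 2, 4}, C[4] = {0, 0, 0, 0};
    matmulSketch(A, B, C, 2, 2, 2, false, false, 1.f, 0.f);
    std::printf("%f %f %f %f\n", C[0], C[1], C[2], C[3]);  // 1 3 2 4
    return 0;
}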
@@ -76,9 +76,6 @@ namespace helpers {
     BUILD_SINGLE_SELECTOR(input->dataType(), maxPoolingFunctor_, (block, input, values, params, indices), FLOAT_TYPES);
 }

-BUILD_SINGLE_TEMPLATE(template void maxPoolingFunctor_, (nd4j::graph::Context& block, NDArray* input, NDArray* values, std::vector<int> const& params, NDArray* indices), FLOAT_TYPES);
-
 }
 }
 }
@@ -32,7 +32,6 @@ namespace nd4j {
         in.applyLambda<T>(lambda, &out);
     }
-    BUILD_SINGLE_TEMPLATE(template void toggle_bits__, (NDArray &in, NDArray &out), INTEGER_TYPES);

     void __toggle_bits(nd4j::LaunchContext * context, NDArray& in, NDArray& out) {
         BUILD_SINGLE_SELECTOR(in.dataType(), toggle_bits__, (in, out), INTEGER_TYPES);
@@ -56,9 +56,6 @@ static void triuBP_(nd4j::LaunchContext * context, const NDArray& input, const N
     BUILD_SINGLE_SELECTOR(gradO.dataType(), triuBP_, (context, input, gradO, gradI, diagonal), LIBND4J_TYPES);
 }

-BUILD_SINGLE_TEMPLATE(template void triuBP_, (nd4j::LaunchContext * context, const NDArray& input, const NDArray& gradO, NDArray& gradI, const int diagonal), LIBND4J_TYPES);
-
 //////////////////////////////////////////////////////////////////////////
 template <typename T>
 static void trace_(const NDArray& input, NDArray& output) {

@@ -78,8 +75,6 @@ static void trace_(const NDArray& input, NDArray& output) {
     BUILD_SINGLE_SELECTOR(input.dataType(), trace_, (input, output), LIBND4J_TYPES);
 }

-BUILD_SINGLE_TEMPLATE(template void trace_, (const NDArray& input, NDArray& output), LIBND4J_TYPES);
-
 //////////////////////////////////////////////////////////////////////////
 template <typename T>
 void randomShuffle_(NDArray& input, NDArray& output, nd4j::graph::RandomGenerator& rng, const bool isInplace) {
@@ -173,14 +168,6 @@ void randomShuffle_(NDArray& input, NDArray& output, nd4j::graph::RandomGenerato
     BUILD_SINGLE_SELECTOR(input.dataType(), randomShuffle_, (input, output, rng, isInplace), LIBND4J_TYPES);
 }

-BUILD_SINGLE_TEMPLATE(template void randomShuffle_, (NDArray& input, NDArray& output, nd4j::graph::RandomGenerator& rng, const bool isInplace), LIBND4J_TYPES);
-
-
-
-
-
 //////////////////////////////////////////////////////////////////////////
 template<typename T>
@@ -387,8 +374,6 @@ void pad(nd4j::LaunchContext * context, const int mode, const NDArray& input, co
     BUILD_SINGLE_SELECTOR(input.dataType(), pad_, (mode, input, paddings, output, padValue), LIBND4J_TYPES);
 }

-BUILD_SINGLE_TEMPLATE(template void pad_, (const int mode, const NDArray& input, const NDArray& paddings, NDArray& output, NDArray const& padValue), LIBND4J_TYPES);
-
 ////////////////////////////////////////////////////////////////////////
 /*// initial values of inIdx, outIdx, dim must be equal to zero
 template<typename T>
@@ -623,9 +608,8 @@ static void gatherND_(NDArray& input, NDArray& indices, NDArray& output) {

 ////////////////////////////////////////////////////////////////////////
 void gatherND(nd4j::LaunchContext * context, NDArray& input, NDArray& indices, NDArray& output) {
-    BUILD_DOUBLE_SELECTOR(input.dataType(), indices.dataType(), gatherND_, (input, indices, output), LIBND4J_TYPES, INTEGER_TYPES);
+    BUILD_DOUBLE_SELECTOR(input.dataType(), indices.dataType(), gatherND_, (input, indices, output), LIBND4J_TYPES, INDEXING_TYPES);
 }
-BUILD_DOUBLE_TEMPLATE(template void gatherND_, (NDArray& input, NDArray& indices, NDArray& output), LIBND4J_TYPES, INTEGER_TYPES);

 ////////////////////////////////////////////////////////////////////////
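gatherND dispatches over the array type and an index type (the second list moving from INTEGER_TYPES to INDEXING_TYPES). Semantically it picks elements, or sub-tensors, addressed by the trailing dimension of the indices array. A tiny sketch for the simplest case, a 2-D input gathered with full (row, col) index pairs, as an illustration only:

#include <vector>
#include <array>
#include <cstdio>

// Illustrative gather_nd: for each index pair (r, c), copy input[r][c] to the output.
std::vector<float> gatherNDSketch(const std::vector<std::vector<float>>& input,
                                  const std::vector<std::array<int, 2>>& indices) {
    std::vector<float> out;
    out.reserve(indices.size());
    for (const auto& idx : indices)
        out.push_back(input[idx[0]][idx[1]]);
    return out;
}

int main() {
    std::vector<std::vector<float>> x{{1, 2}, {3, 4}};
    auto y = gatherNDSketch(x, {{0, 1}, {1, 0}});
    std::printf("%f %f\n", y[0], y[1]);  // 2 3
    return 0;
}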
@@ -705,8 +689,6 @@ static void gather_(NDArray* input, const NDArray* indices, NDArray* output, con
     BUILD_SINGLE_SELECTOR(input->dataType(), gather_, (input, indices, output, intArgs), LIBND4J_TYPES);
 }

-BUILD_SINGLE_TEMPLATE(template void gather_, (NDArray* input, const NDArray* indices, NDArray* output, const std::vector<int>& intArgs), LIBND4J_TYPES);
-
 //////////////////////////////////////////////////////////////////////////
 void eye(nd4j::LaunchContext * context, NDArray& output) {
@@ -826,7 +808,6 @@ static void mergeMaxIndex_(const std::vector<NDArray*>& inArrs, NDArray& output)
     BUILD_SINGLE_SELECTOR(inArrs[0]->dataType(), mergeMaxIndex_, (inArrs, output), LIBND4J_TYPES);
 }

-BUILD_SINGLE_TEMPLATE(template void mergeMaxIndex_, (const std::vector<NDArray*>& inArrs, NDArray& output), LIBND4J_TYPES);
 //////////////////////////////////////////////////////////////////////////
 template<typename T>

@@ -850,8 +831,6 @@ static void mergeMax_(const std::vector<NDArray*>& inArrs, NDArray& output) {
     BUILD_SINGLE_SELECTOR(output.dataType(), mergeMax_, (inArrs, output), LIBND4J_TYPES);
 }

-BUILD_SINGLE_TEMPLATE(template void mergeMax_, (const std::vector<NDArray*>& inArrs, NDArray& output), LIBND4J_TYPES);
-
 //////////////////////////////////////////////////////////////////////////
 template<typename T>
 static void mergeAvg_(const std::vector<NDArray*>& inArrs, NDArray& output) {

@@ -874,7 +853,6 @@ static void mergeAvg_(const std::vector<NDArray*>& inArrs, NDArray& output) {
     BUILD_SINGLE_SELECTOR(output.dataType(), mergeAvg_, (inArrs, output), LIBND4J_TYPES);
 }

-BUILD_SINGLE_TEMPLATE(template void mergeAvg_, (const std::vector<NDArray*>& inArrs, NDArray& output), LIBND4J_TYPES);
 //////////////////////////////////////////////////////////////////////////
 template<typename T>

@@ -898,8 +876,6 @@ static void mergeAdd_(const std::vector<NDArray*>& inArrs, NDArray& output) {
     BUILD_SINGLE_SELECTOR(output.dataType(), mergeAdd_, (inArrs, output), LIBND4J_TYPES);
 }

-BUILD_SINGLE_TEMPLATE(template void mergeAdd_, (const std::vector<NDArray*>& inArrs, NDArray& output), LIBND4J_TYPES);
-
 //////////////////////////////////////////////////////////////////////////
 template<typename T>
 static void clipByNorm_(NDArray& input, NDArray& output, const std::vector<int>& dimensions, const NDArray& clipNorm, const bool isInplace) {
@@ -970,11 +946,6 @@ void clipByNorm(nd4j::LaunchContext * context, NDArray& input, NDArray& output,
     BUILD_SINGLE_SELECTOR(output.dataType(), clipByNorm_, (input, output, dimensions, clipNorm, isInplace), FLOAT_TYPES);
 }

-BUILD_SINGLE_TEMPLATE(template void clipByNorm_, (NDArray& input, NDArray& output, const std::vector<int>& dimensions, const NDArray& clipNorm, const bool isInplace), FLOAT_TYPES);
-
-
-
-
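clipByNorm rescales the tensor (or each sub-tensor along the given dimensions) so its L2 norm does not exceed clipNorm, and leaves values untouched when the norm is already within the limit. A whole-array sketch of that rule, for illustration only:

#include <vector>
#include <cmath>
#include <cstdio>

// Illustrative clip-by-norm: if ||x||_2 > clipNorm, scale x by clipNorm / ||x||_2.
void clipByNormSketch(std::vector<float>& x, float clipNorm) {
    float sq = 0.f;
    for (float v : x) sq += v * v;
    const float norm = std::sqrt(sq);
    if (norm > clipNorm) {
        const float scale = clipNorm / norm;
        for (float& v : x) v *= scale;
    }
}

int main() {
    std::vector<float> x{3.f, 4.f};          // norm 5
    clipByNormSketch(x, 1.f);                // rescaled to norm 1
    std::printf("%f %f\n", x[0], x[1]);      // 0.6 0.8
    return 0;
}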
@@ -99,7 +99,7 @@ void prelu(nd4j::LaunchContext * context, const NDArray& input, const NDArray& a
     const auto yType = alpha.dataType();

     NDArray::prepareSpecialUse({&output}, {&input, &alpha});
-    BUILD_DOUBLE_SELECTOR(xType, yType, preluCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, context->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), alpha.getSpecialBuffer(), alpha.getSpecialShapeInfo(), output.getSpecialBuffer()), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(xType, preluCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, context->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), alpha.getSpecialBuffer(), alpha.getSpecialShapeInfo(), output.getSpecialBuffer()), FLOAT_TYPES);
     NDArray::registerSpecialUse({&output}, {&input, &alpha});

     manager.synchronize();

@@ -189,7 +189,7 @@ void preluBP(nd4j::LaunchContext* context, const NDArray& input, const NDArray&
     const auto zType = alpha.dataType();

     NDArray::prepareSpecialUse({&dLdI, &dLdA}, {&input, &alpha, &dLdO});
-    BUILD_DOUBLE_SELECTOR(xType, zType, preluBPCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, context->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), alpha.getSpecialBuffer(), alpha.getSpecialShapeInfo(), dLdO.getSpecialBuffer(), dLdO.getSpecialShapeInfo(), dLdI.getSpecialBuffer(), dLdI.getSpecialShapeInfo(), dLdA.getSpecialBuffer(), dLdA.getSpecialShapeInfo()), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(xType, preluBPCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, context->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), alpha.getSpecialBuffer(), alpha.getSpecialShapeInfo(), dLdO.getSpecialBuffer(), dLdO.getSpecialShapeInfo(), dLdI.getSpecialBuffer(), dLdI.getSpecialShapeInfo(), dLdA.getSpecialBuffer(), dLdA.getSpecialShapeInfo()), FLOAT_TYPES);
     NDArray::registerSpecialUse({&dLdI, &dLdA}, {&input, &alpha, &dLdO});

     manager.synchronize();
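Both CUDA launchers above now instantiate input and alpha with a single float type (BUILD_SINGLE_SELECTOR_TWICE) instead of the full LIBND4J_TYPES x FLOAT_TYPES product. The operation itself is the parametric ReLU; a scalar-level sketch of the forward value and the two gradients the BP kernel produces (dLdI for the input and dLdA for alpha), as an illustration only:

#include <cstdio>

// PReLU per element: f(x) = x if x >= 0, alpha * x otherwise.
float preluSketch(float x, float alpha) {
    return x >= 0.f ? x : alpha * x;
}

// Backward per element: given the upstream gradient dLdO, produce
// dLdI (gradient w.r.t. the input) and dLdA (contribution to alpha's gradient).
void preluBpSketch(float x, float alpha, float dLdO, float& dLdI, float& dLdA) {
    dLdI = (x >= 0.f ? 1.f : alpha) * dLdO;
    dLdA = (x >= 0.f ? 0.f : x) * dLdO;
}

int main() {
    float dLdI, dLdA;
    preluBpSketch(-2.f, 0.25f, 1.f, dLdI, dLdA);
    std::printf("%f %f %f\n", preluSketch(-2.f, 0.25f), dLdI, dLdA);  // -0.5 0.25 -2
    return 0;
}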
@@ -574,14 +574,6 @@ void softmaxDerivative(nd4j::LaunchContext * context, const NDArray& input, NDAr
     BUILD_SINGLE_SELECTOR(input->dataType(), thresholdReluDerivative_, (input, threshold, dLdO, output), FLOAT_TYPES);
 }
 
-BUILD_SINGLE_TEMPLATE(template void thresholdReluDerivative_, (NDArray* input, double threshold, NDArray* dLdO, NDArray* output), FLOAT_TYPES);
-BUILD_DOUBLE_TEMPLATE(template void preluCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t *stream, const void *vx, const Nd4jLong *xShapeInfo, const void *vy, const Nd4jLong *yShapeInfo, void *vz), LIBND4J_TYPES, FLOAT_TYPES);
-BUILD_DOUBLE_TEMPLATE(template void preluBPCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t *stream, const void *vIn, const Nd4jLong *inShapeInfo, const void *vAlpha, const Nd4jLong *alphaShapeInfo, const void *vdLdO, const Nd4jLong *dLdOShapeInfo, void *vdLdI, const Nd4jLong *dLdIShapeInfo, void *vdLdA, const Nd4jLong *dLdAShapeInfo), LIBND4J_TYPES, FLOAT_TYPES);
-BUILD_SINGLE_TEMPLATE(template void softMaxForVectorCudaLauncher, (const cudaStream_t* stream, const void *vx, const Nd4jLong *xzShapeInfo, void *vz), FLOAT_TYPES);
-BUILD_SINGLE_TEMPLATE(template void softMaxDerivForVectorCudaLauncher, (const cudaStream_t* stream, const void *vx, const Nd4jLong *xzShapeInfo, void *vz), FLOAT_TYPES);
 
 }
 }
 }
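Side note for reviewers: the recurring change in the hunks above (and in most of the ConvolutionUtils hunks below) swaps BUILD_DOUBLE_SELECTOR for BUILD_SINGLE_SELECTOR_TWICE, i.e. the launcher is now instantiated with one floating-point dtype for both template arguments instead of the full LIBND4J_TYPES x FLOAT_TYPES cross product. A minimal stand-alone sketch of the idea follows; the enum, function names and switches are illustrative stand-ins, not the actual libnd4j macro expansions.

#include <cstdio>

enum class DataType { FLOAT32, DOUBLE, INT32 };

// Stand-in for a launcher such as preluCudaLauncher<X, Y>.
template <typename X, typename Y>
void launcherStub(const char* tag) { std::printf("launched %s\n", tag); }

// Old pattern, roughly what BUILD_DOUBLE_SELECTOR(xType, yType, ...) implied:
// one instantiation per (input type, output type) combination.
void dispatchDouble(DataType x, DataType y) {
    if (x == DataType::FLOAT32 && y == DataType::FLOAT32)      launcherStub<float,  float>("float/float");
    else if (x == DataType::INT32  && y == DataType::FLOAT32)  launcherStub<int,    float>("int/float");
    else if (x == DataType::DOUBLE && y == DataType::DOUBLE)   launcherStub<double, double>("double/double");
    // ... many more combinations across the full type matrix ...
}

// New pattern, roughly what BUILD_SINGLE_SELECTOR_TWICE(xType, ...) implies:
// a single dtype drives both template parameters, FLOAT_TYPES only.
void dispatchSingleTwice(DataType x) {
    if (x == DataType::FLOAT32)      launcherStub<float,  float>("float");
    else if (x == DataType::DOUBLE)  launcherStub<double, double>("double");
    // far fewer instantiations, so smaller binaries and faster builds
}

int main() {
    dispatchDouble(DataType::FLOAT32, DataType::FLOAT32);
    dispatchSingleTwice(DataType::DOUBLE);
    return 0;
}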
@@ -78,7 +78,6 @@ static _CUDA_H void adjustHueCudaLauncher(const int blocksPerGrid, const int thr
 
     adjustHueCuda<T><<<blocksPerGrid, threadsPerBlock, 256, *stream>>>(vx, xShapeInfo, xTadOffsets, vz, zShapeInfo, zTadOffsets, numOfTads, deltaScalarArr->e<T>(0), dimC);
 }
-BUILD_SINGLE_TEMPLATE(template void adjustHueCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const cudaStream_t *stream, const void* vx, const Nd4jLong* xShapeInfo, const Nd4jLong* xTadOffsets, void* vz, const Nd4jLong* zShapeInfo, const Nd4jLong* zTadOffsets, const Nd4jLong numOfTads, const NDArray* deltaScalarArr, const int dimC), LIBND4J_TYPES);
 
 ////////////////////////////////////////////////////////////////////////
 void adjustHue(nd4j::LaunchContext* context, const NDArray *input, const NDArray* deltaScalarArr, NDArray *output, const int dimC) {

@@ -94,7 +93,7 @@ void adjustHue(nd4j::LaunchContext* context, const NDArray *input, const NDArray
     PointersManager manager(context, "adjustHue");
 
     NDArray::prepareSpecialUse({output}, {input, deltaScalarArr});
-    BUILD_SINGLE_SELECTOR(input->dataType(), adjustHueCudaLauncher, (blocksPerGrid, threadsPerBlock, context->getCudaStream(), input->getSpecialBuffer(), input->getSpecialShapeInfo(), packX.platformOffsets(), output->specialBuffer(), output->specialShapeInfo(), packZ.platformOffsets(), numOfTads, deltaScalarArr, dimC), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input->dataType(), adjustHueCudaLauncher, (blocksPerGrid, threadsPerBlock, context->getCudaStream(), input->getSpecialBuffer(), input->getSpecialShapeInfo(), packX.platformOffsets(), output->specialBuffer(), output->specialShapeInfo(), packZ.platformOffsets(), numOfTads, deltaScalarArr, dimC), FLOAT_TYPES);
     NDArray::registerSpecialUse({output}, {input, deltaScalarArr});
 
     manager.synchronize();
@@ -80,7 +80,6 @@ static _CUDA_H void adjustSaturationCudaLauncher(const int blocksPerGrid, const
 
     adjustSaturationCuda<T><<<blocksPerGrid, threadsPerBlock, 256, *stream>>>(vx, xShapeInfo, xTadOffsets, vz, zShapeInfo, zTadOffsets, numOfTads, factorScalarArr->e<T>(0), dimC);
 }
-BUILD_SINGLE_TEMPLATE(template void adjustSaturationCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const cudaStream_t *stream, const void* vx, const Nd4jLong* xShapeInfo, const Nd4jLong* xTadOffsets, void* vz, const Nd4jLong* zShapeInfo, const Nd4jLong* zTadOffsets, const Nd4jLong numOfTads, const NDArray* factorScalarArr, const int dimC), LIBND4J_TYPES);
 
 ////////////////////////////////////////////////////////////////////////
 void adjustSaturation(nd4j::LaunchContext* context, const NDArray *input, const NDArray* factorScalarArr, NDArray *output, const int dimC) {

@@ -96,7 +95,7 @@ void adjustSaturation(nd4j::LaunchContext* context, const NDArray *input, const
     PointersManager manager(context, "adjustSaturation");
 
     NDArray::prepareSpecialUse({output}, {input, factorScalarArr});
-    BUILD_SINGLE_SELECTOR(input->dataType(), adjustSaturationCudaLauncher, (blocksPerGrid, threadsPerBlock, context->getCudaStream(), input->getSpecialBuffer(), input->getSpecialShapeInfo(), packX.platformOffsets(), output->specialBuffer(), output->specialShapeInfo(), packZ.platformOffsets(), numOfTads, factorScalarArr, dimC), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input->dataType(), adjustSaturationCudaLauncher, (blocksPerGrid, threadsPerBlock, context->getCudaStream(), input->getSpecialBuffer(), input->getSpecialShapeInfo(), packX.platformOffsets(), output->specialBuffer(), output->specialShapeInfo(), packZ.platformOffsets(), numOfTads, factorScalarArr, dimC), FLOAT_TYPES);
     NDArray::registerSpecialUse({output}, {input, factorScalarArr});
 
     manager.synchronize();
@@ -182,7 +182,6 @@ __host__ static void batchnormCudaLauncher(const int blocksPerGrid, const int th
 
     batchnormCuda<T><<<blocksPerGrid, threadsPerBlock, 1024, *stream>>>(vx, xShapeInfo, vMean, meanShapeInfo, vVariance, varianceShapeInfo, vGamma, gammaShapeInfo, vBeta, betaShapeInfo, vz, zShapeInfo, xTadShapeInfo, xTadOffsets, zTadShapeInfo, zTadOffsets, static_cast<T>(epsilon));
 }
-BUILD_SINGLE_TEMPLATE(template void batchnormCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const cudaStream_t *stream, const void* vx, const Nd4jLong* xShapeInfo, const void* vMean, const Nd4jLong* meanShapeInfo, const void* vVariance, const Nd4jLong* varianceShapeInfo, const void* vGamma, const Nd4jLong* gammaShapeInfo, const void* vBeta, const Nd4jLong* betaShapeInfo, void* vz, const Nd4jLong* zShapeInfo, const Nd4jLong* xTadShapeInfo, const Nd4jLong* xTadOffsets, const Nd4jLong* zTadShapeInfo, const Nd4jLong* zTadOffsets, const double epsilon), FLOAT_TYPES);
 
 ///////////////////////////////////////////////////////////////////
 template<typename T>

@@ -198,7 +197,6 @@ __host__ static void batchnormCudaLauncher2(const int blocksPerGrid, const int t
 
     batchnormCuda2<T><<<blocksPerGrid, threadsPerBlock, sharedMem, *stream>>>(vx, xShapeInfo, vMean, meanShapeInfo, vVariance, varianceShapeInfo, vGamma, gammaShapeInfo, vBeta, betaShapeInfo, vz, zShapeInfo, numDims, dims, static_cast<T>(epsilon));
 }
-BUILD_SINGLE_TEMPLATE(template void batchnormCudaLauncher2, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t *stream, const void* vx, const Nd4jLong* xShapeInfo, const void* vMean, const Nd4jLong* meanShapeInfo, const void* vVariance, const Nd4jLong* varianceShapeInfo, const void* vGamma, const Nd4jLong* gammaShapeInfo, const void* vBeta, const Nd4jLong* betaShapeInfo, void* vz, const Nd4jLong* zShapeInfo, const int numDims, const int* dims, const double epsilon), FLOAT_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void batchnorm(const NDArray* input, const NDArray* mean, const NDArray* variance, const NDArray* gamma, const NDArray* beta, NDArray* output, const std::vector<int>& axes, const double epsilon) {
@@ -107,7 +107,6 @@ namespace helpers {
     return Status::OK();
     return Status::OK();
 }
-BUILD_SINGLE_TEMPLATE(template void bdsLoopH, (cudaStream_t* stream, void const* inputX, Nd4jLong const* inputXshape, void const* inputY, Nd4jLong const* inputYshape, void* output, Nd4jLong* outputShape), NUMERIC_TYPES);
 
 }
 }
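A guess at why the BUILD_SINGLE_TEMPLATE / BUILD_DOUBLE_TEMPLATE lines can simply be deleted throughout these hunks (the macro internals are not shown in this diff): once the selector macro in the same translation unit names the helper for every type in its list, the compiler instantiates those specializations implicitly, so the separate explicit-instantiation line carries no extra information. A tiny self-contained illustration with made-up names:

#include <cstdio>

template <typename T>
void launcherStub(T v) { std::printf("value %f\n", static_cast<double>(v)); }

// Old style: an explicit instantiation kept the specialization alive even if
// nothing in this file called it (the role the BUILD_*_TEMPLATE lines played).
template void launcherStub<float>(float);

// New style: the dispatcher below already names launcherStub<float> and
// launcherStub<double>, so both specializations are instantiated implicitly
// and the explicit line above is redundant.
void dispatch(bool useDouble) {
    if (useDouble) launcherStub<double>(1.0);
    else           launcherStub<float>(1.0f);
}

int main() { dispatch(false); dispatch(true); return 0; }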
@@ -189,7 +189,6 @@ static void col2imCudaLauncher(const int blocksPerGrid, const int threadsPerBloc
     // col2imCuda2<T><<<512, 512, 1024, *stream>>>(columns, image, colShapeInfo, imShapeInfo, sH, sW, pH, pW, dH, dW);
     col2imCuda<T><<<blocksPerGrid, threadsPerBlock, sharedMem, *stream>>>(columns, colShapeInfo, image, imShapeInfo, sH, sW, pH, pW, dH, dW);
 }
-BUILD_SINGLE_TEMPLATE(template void col2imCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t* stream, const void *col, const Nd4jLong *colShapeInfo, void *im, const Nd4jLong *imShapeInfo, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW), LIBND4J_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void col2im(nd4j::LaunchContext& context, const NDArray& col, NDArray& im, const int sH, const int sW, const int pH, const int pW, const int iH, const int iW, const int dH, const int dW) {

@@ -201,7 +200,7 @@ void col2im(nd4j::LaunchContext& context, const NDArray& col, NDArray& im, const
     const int sharedMem = col.rankOf() * sizeof(Nd4jLong) * threadsPerBlock + 128;
 
     NDArray::prepareSpecialUse({&im}, {&col});
-    BUILD_SINGLE_SELECTOR(im.dataType(), col2imCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, context.getCudaStream(), col.getSpecialBuffer(), col.getSpecialShapeInfo(), im.specialBuffer(), im.specialShapeInfo(), sH, sW, pH, pW, dH, dW), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(im.dataType(), col2imCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, context.getCudaStream(), col.getSpecialBuffer(), col.getSpecialShapeInfo(), im.specialBuffer(), im.specialShapeInfo(), sH, sW, pH, pW, dH, dW), FLOAT_TYPES);
     NDArray::registerSpecialUse({&im}, {&col});
 
     manager.synchronize();
@@ -98,7 +98,6 @@ static void vol2colCudaLauncher(const int blocksPerGrid, const int threadsPerBlo
 
     vol2colCuda<T><<<blocksPerGrid, threadsPerBlock, sharedMem, *stream>>>(volume, volShapeInfo, columns, colShapeInfo, sD, sH, sW, pD, pH, pW, dD, dH, dW);
 }
-BUILD_SINGLE_TEMPLATE(template void vol2colCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t* stream, const void *vol, const Nd4jLong *volShapeInfo, void *col, const Nd4jLong *colShapeInfo, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW), FLOAT_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::vol2col(nd4j::graph::Context& block, const NDArray& vol, NDArray& col, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW) {
@@ -205,7 +204,6 @@ static void col2volCudaLauncher(const int blocksPerGrid, const int threadsPerBlo
 
     col2volCuda<T><<<blocksPerGrid, threadsPerBlock, sharedMem, *stream>>>(columns, colShapeInfo, volume, volShapeInfo, sD, sH, sW, pD, pH, pW, dD, dH, dW);
 }
-BUILD_SINGLE_TEMPLATE(template void col2volCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t* stream, const void *col, const Nd4jLong *colShapeInfo, void *vol, const Nd4jLong *volShapeInfo, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW), FLOAT_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::col2vol(nd4j::graph::Context& block, const NDArray& col, NDArray& vol, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW) {
@@ -285,7 +283,7 @@ static void conv2d_(nd4j::graph::Context& block, const NDArray* input, const NDA
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::conv2d(nd4j::graph::Context& block, const NDArray* input, const NDArray* weights, const NDArray* bias, NDArray* output, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW) {
-    BUILD_DOUBLE_SELECTOR(input->dataType(), output->dataType(), conv2d_, (block, input, weights, bias, output, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(input->dataType(), conv2d_, (block, input, weights, bias, output, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), FLOAT_TYPES);
 }
 
 //////////////////////////////////////////////////////////////////////////
@@ -345,7 +343,7 @@ static void depthwiseConv2d_(const NDArray* input, const NDArray* weights, const
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::depthwiseConv2d(nd4j::graph::Context& block, const NDArray* input, const NDArray* weights, const NDArray* bias, NDArray* output, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW) {
-    BUILD_DOUBLE_SELECTOR(input->dataType(), output->dataType(), depthwiseConv2d_, (input, weights, bias, output, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(input->dataType(), depthwiseConv2d_, (input, weights, bias, output, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), FLOAT_TYPES);
 }
 
 //////////////////////////////////////////////////////////////////////////
@@ -390,7 +388,7 @@ static void sconv2d_(nd4j::graph::Context& block, const NDArray* input, const ND
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::sconv2d(nd4j::graph::Context& block, const NDArray* input, const NDArray* weightsDepth, const NDArray* weightsPoint, const NDArray* bias, NDArray* output, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW) {
-    BUILD_DOUBLE_SELECTOR(input->dataType(), output->dataType(), sconv2d_, (block, input, weightsDepth, weightsPoint, bias, output, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(input->dataType(), sconv2d_, (block, input, weightsDepth, weightsPoint, bias, output, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), FLOAT_TYPES);
 }
 
 //////////////////////////////////////////////////////////////////////////
@@ -488,7 +486,6 @@ template <typename X, typename Z>
 static void avgPooling2dCudaLauncher(nd4j::LaunchContext & block, void *vx, Nd4jLong *vxShapeInfo, void *vz, Nd4jLong *vzShapeInfo, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const int extraParam0) {
     avgPooling2dCuda<X, Z><<<512, 512, 4192, *block.getCudaStream()>>>(vx, vxShapeInfo, vz, vzShapeInfo, kH, kW, sH, sW, pH, pW, dH, dW, extraParam0);
 }
-BUILD_DOUBLE_TEMPLATE(template void avgPooling2dCudaLauncher, (nd4j::LaunchContext & block, void *vx, Nd4jLong *vxShapeInfo, void *vz, Nd4jLong *vzShapeInfo, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const int extraParam0), LIBND4J_TYPES, FLOAT_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 template <typename X, typename Z>
@@ -582,7 +579,6 @@ template <typename X, typename Z>
 static void pnormPooling2dCudaLauncher(nd4j::LaunchContext & block, void *vx, Nd4jLong *vxShapeInfo, void *vz, Nd4jLong *vzShapeInfo, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const int extraParam0) {
     pnormPooling2dCuda<X, Z><<<512, 512, 4192, *block.getCudaStream()>>>(vx, vxShapeInfo, vz, vzShapeInfo, kH, kW, sH, sW, pH, pW, dH, dW, extraParam0);
 }
-BUILD_DOUBLE_TEMPLATE(template void pnormPooling2dCudaLauncher, (nd4j::LaunchContext & block, void *vx, Nd4jLong *vxShapeInfo, void *vz, Nd4jLong *vzShapeInfo, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const int extraParam0), LIBND4J_TYPES, FLOAT_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 template <typename X, typename Z>
@@ -679,7 +675,6 @@ template <typename X, typename Z>
 static void maxPooling2dCudaLauncher(nd4j::LaunchContext & block, void *vx, Nd4jLong *vxShapeInfo, void *vz, Nd4jLong *vzShapeInfo, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const int extraParam0) {
     maxPooling2dCuda<X,Z><<<512, 512, 4192, *block.getCudaStream()>>>(vx, vxShapeInfo, vz, vzShapeInfo, kH, kW, sH, sW, pH, pW, dH, dW, extraParam0);
 }
-BUILD_DOUBLE_TEMPLATE(template void maxPooling2dCudaLauncher, (nd4j::LaunchContext & block, void *vx, Nd4jLong *vxShapeInfo, void *vz, Nd4jLong *vzShapeInfo, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const int extraParam0), LIBND4J_TYPES, FLOAT_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::pooling2d(nd4j::graph::Context& block, const NDArray& input, NDArray& output, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const PoolingType poolingMode, const int extraParam0) {
@@ -689,15 +684,15 @@ void ConvolutionUtils::pooling2d(nd4j::graph::Context& block, const NDArray& inp
     switch (poolingMode) {
 
         case MAX_POOL: {
-            BUILD_DOUBLE_SELECTOR(input.dataType(), output.dataType(), maxPooling2dCudaLauncher, (*block.launchContext(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), output.getSpecialBuffer(), output.getSpecialShapeInfo(), kH, kW, sH, sW, pH, pW, dH, dW, extraParam0), LIBND4J_TYPES, FLOAT_TYPES);
+            BUILD_SINGLE_SELECTOR_TWICE(input.dataType(), maxPooling2dCudaLauncher, (*block.launchContext(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), output.getSpecialBuffer(), output.getSpecialShapeInfo(), kH, kW, sH, sW, pH, pW, dH, dW, extraParam0), FLOAT_TYPES);
         }
             break;
         case AVG_POOL: {
-            BUILD_DOUBLE_SELECTOR(input.dataType(), output.dataType(), avgPooling2dCudaLauncher, (*block.launchContext(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), output.getSpecialBuffer(), output.getSpecialShapeInfo(), kH, kW, sH, sW, pH, pW, dH, dW, extraParam0), LIBND4J_TYPES, FLOAT_TYPES);
+            BUILD_SINGLE_SELECTOR_TWICE(input.dataType(), avgPooling2dCudaLauncher, (*block.launchContext(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), output.getSpecialBuffer(), output.getSpecialShapeInfo(), kH, kW, sH, sW, pH, pW, dH, dW, extraParam0), FLOAT_TYPES);
         }
             break;
         case PNORM_POOL: {
-            BUILD_DOUBLE_SELECTOR(input.dataType(), output.dataType(), pnormPooling2dCudaLauncher, (*block.launchContext(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), output.getSpecialBuffer(), output.getSpecialShapeInfo(), kH, kW, sH, sW, pH, pW, dH, dW, extraParam0), LIBND4J_TYPES, FLOAT_TYPES);
+            BUILD_SINGLE_SELECTOR_TWICE(input.dataType(), pnormPooling2dCudaLauncher, (*block.launchContext(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), output.getSpecialBuffer(), output.getSpecialShapeInfo(), kH, kW, sH, sW, pH, pW, dH, dW, extraParam0), FLOAT_TYPES);
         }
             break;
         default:
@@ -845,7 +840,6 @@ static void pooling3dCudaLauncher(const int blocksPerGrid, const int threadsPerB
 
     pooling3dCuda<T><<<blocksPerGrid, threadsPerBlock, sharedMem, *stream>>>(vx, xShapeInfo, vz, zShapeInfo, kD, kH, kW, sD, sH, sW, pD, pH, pW, dD, dH, dW, poolingMode, extraParam0);
 }
-BUILD_SINGLE_TEMPLATE(template void pooling3dCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t *stream, const void* vx, const Nd4jLong* xShapeInfo, void* vz, const Nd4jLong* zShapeInfo, const int kD, const int kH, const int kW, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW, const int poolingMode, const int extraParam0), LIBND4J_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::pooling3d(nd4j::graph::Context& block, const NDArray& input, NDArray& output, const int kD, const int kH, const int kW, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW, const int poolingMode, const int extraParam0) {
@@ -857,49 +851,12 @@ void ConvolutionUtils::pooling3d(nd4j::graph::Context& block, const NDArray& inp
     const int sharedMem = output.rankOf() * sizeof(Nd4jLong) * threadsPerBlock + 128;
 
     NDArray::prepareSpecialUse({&output}, {&input});
-    BUILD_SINGLE_SELECTOR(input.dataType(), pooling3dCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), output.specialBuffer(), output.specialShapeInfo(), kD, kH, kW, sD, sH, sW, pD, pH, pW, dD, dH, dW, poolingMode, extraParam0), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input.dataType(), pooling3dCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), output.specialBuffer(), output.specialShapeInfo(), kD, kH, kW, sD, sH, sW, pD, pH, pW, dD, dH, dW, poolingMode, extraParam0), FLOAT_TYPES);
     NDArray::registerSpecialUse({&output}, {&input});
 
     manager.synchronize();
 }
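Side note on the shared-memory sizing that recurs at these launch sites (sharedMem = rank * sizeof(Nd4jLong) * threadsPerBlock + 128): a worked example with assumed values, since the actual threadsPerBlock constant sits outside the lines shown in this diff.

#include <cstdio>
#include <cstdint>

using Nd4jLong = int64_t;   // typically 8 bytes on the platforms libnd4j targets

int main() {
    const int rank            = 5;    // e.g. a 5D pooling3d output (NCDHW/NDHWC)
    const int threadsPerBlock = 256;  // assumed launch configuration, not taken from the diff
    const int sharedMem = rank * sizeof(Nd4jLong) * threadsPerBlock + 128;
    std::printf("sharedMem = %d bytes per block\n", sharedMem);  // 5*8*256 + 128 = 10368
    return 0;
}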
 
 //////////////////////////////////////////////////////////////////////////
 template <typename T>
 __global__ static void pooling2dBPCuda(const void* vx, const Nd4jLong* xShapeInfo, const void* vy, const Nd4jLong* yShapeInfo, void* vz, const Nd4jLong* zShapeInfo, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const int poolingMode, const int extraParam0) {
@@ -1032,7 +989,6 @@ static void pooling2dBPCudaLauncher(const int blocksPerGrid, const int threadsPe
 
     pooling2dBPCuda<T><<<blocksPerGrid, threadsPerBlock, sharedMem, *stream>>>(vx, xShapeInfo, vy, yShapeInfo, vz, zShapeInfo, kH, kW, sH, sW, pH, pW, dH, dW, poolingMode, extraParam0);
 }
-BUILD_SINGLE_TEMPLATE(template void pooling2dBPCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t *stream, const void* vx, const Nd4jLong* xShapeInfo, const void* vy, const Nd4jLong* yShapeInfo, void* vz, const Nd4jLong* zShapeInfo, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const int poolingMode, const int extraParam0), LIBND4J_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::pooling2dBP(nd4j::graph::Context& block, const NDArray& input, const NDArray& gradO, NDArray& gradI, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const int poolingMode, const int extraParam0) {

@@ -1047,7 +1003,7 @@ void ConvolutionUtils::pooling2dBP(nd4j::graph::Context& block, const NDArray& i
     const int sharedMem = gradO.rankOf() * sizeof(Nd4jLong) * threadsPerBlock + 128;
 
     NDArray::prepareSpecialUse({&gradI}, {&input, &gradO});
-    BUILD_SINGLE_SELECTOR(input.dataType(), pooling2dBPCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), gradO.getSpecialBuffer(), gradO.getSpecialShapeInfo(), gradI.specialBuffer(), gradI.specialShapeInfo(), kH, kW, sH, sW, pH, pW, dH, dW, poolingMode, extraParam0), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input.dataType(), pooling2dBPCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), gradO.getSpecialBuffer(), gradO.getSpecialShapeInfo(), gradI.specialBuffer(), gradI.specialShapeInfo(), kH, kW, sH, sW, pH, pW, dH, dW, poolingMode, extraParam0), FLOAT_TYPES);
     NDArray::registerSpecialUse({&gradI}, {&input, &gradO});
 
     manager.synchronize();
@@ -1201,7 +1157,6 @@ static void pooling3dBPCudaLauncher(const int blocksPerGrid, const int threadsPe
 
     pooling3dBPCuda<T><<<blocksPerGrid, threadsPerBlock, sharedMem, *stream>>>(vx, xShapeInfo, vy, yShapeInfo, vz, zShapeInfo, kD, kH, kW, sD, sH, sW, pD, pH, pW, dD, dH, dW, poolingMode, extraParam0);
 }
-BUILD_SINGLE_TEMPLATE(template void pooling3dBPCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t *stream, const void* vx, const Nd4jLong* xShapeInfo, const void* vy, const Nd4jLong* yShapeInfo, void* vz, const Nd4jLong* zShapeInfo, const int kD, const int kH, const int kW, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW, const int poolingMode, const int extraParam0), LIBND4J_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::pooling3dBP(nd4j::graph::Context& block, const NDArray& input, const NDArray& gradO, NDArray& gradI, const int kD, const int kH, const int kW, const int sD, const int sH, const int sW, const int pD, const int pH, const int pW, const int dD, const int dH, const int dW, const int poolingMode, const int extraParam0) {

@@ -1216,7 +1171,7 @@ void ConvolutionUtils::pooling3dBP(nd4j::graph::Context& block, const NDArray& i
     const int sharedMem = gradO.rankOf() * sizeof(Nd4jLong) * threadsPerBlock + 128;
 
     NDArray::prepareSpecialUse({&gradI}, {&input, &gradO});
-    BUILD_SINGLE_SELECTOR(input.dataType(), pooling3dBPCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), gradO.getSpecialBuffer(), gradO.getSpecialShapeInfo(), gradI.specialBuffer(), gradI.specialShapeInfo(), kD, kH, kW, sD, sH, sW, pD, pH, pW, dD, dH, dW, poolingMode, extraParam0), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input.dataType(), pooling3dBPCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), gradO.getSpecialBuffer(), gradO.getSpecialShapeInfo(), gradI.specialBuffer(), gradI.specialShapeInfo(), kD, kH, kW, sD, sH, sW, pD, pH, pW, dD, dH, dW, poolingMode, extraParam0), FLOAT_TYPES);
     NDArray::registerSpecialUse({&gradI}, {&input, &gradO});
 
     manager.synchronize();
@@ -1292,11 +1247,10 @@ static void conv2dBP_(nd4j::graph::Context& block, const NDArray* input, const N
         delete gradI;
     }
 }
-BUILD_DOUBLE_TEMPLATE(template void conv2dBP_, (nd4j::graph::Context& block, const NDArray* input, const NDArray* weights, const NDArray* bias, const NDArray* gradO, NDArray* gradI, NDArray* gradW, NDArray* gradB, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::conv2dBP(nd4j::graph::Context& block, const NDArray* input, const NDArray* weights, const NDArray* bias, const NDArray* gradO, NDArray* gradI, NDArray* gradW, NDArray* gradB, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW) {
-    BUILD_DOUBLE_SELECTOR(input->dataType(), gradO->dataType(), conv2dBP_, (block, input, weights, bias, gradO, gradI, gradW, gradB, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(input->dataType(), conv2dBP_, (block, input, weights, bias, gradO, gradI, gradW, gradB, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), FLOAT_TYPES);
 }
 
 //////////////////////////////////////////////////////////////////////////
@@ -1374,11 +1328,10 @@ static void depthwiseConv2dBP_(const NDArray* input, const NDArray* weights, con
         delete gradI;
     }
 }
-BUILD_DOUBLE_TEMPLATE(template void depthwiseConv2dBP_, (const NDArray* input, const NDArray* weights, const NDArray* bias, const NDArray* gradO, NDArray* gradI, NDArray* gradW, NDArray* gradB, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::depthwiseConv2dBP(nd4j::graph::Context& block, const NDArray* input, const NDArray* weights, const NDArray* bias, const NDArray* gradO, NDArray* gradI, NDArray* gradW, NDArray* gradB, const int kH, const int kW, const int sH, const int sW, int pH, int pW, const int dH, const int dW, const int isSameMode, const int isNCHW) {
-    BUILD_DOUBLE_SELECTOR(input->dataType(), gradO->dataType(), depthwiseConv2dBP_, (input, weights, bias, gradO, gradI, gradW, gradB, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(input->dataType(), depthwiseConv2dBP_, (input, weights, bias, gradO, gradI, gradW, gradB, kH, kW, sH, sW, pH, pW, dH, dW, isSameMode, isNCHW), FLOAT_TYPES);
 }
@@ -1434,7 +1387,6 @@ static void upsampling2dCudaLauncher(const int blocksPerGrid, const int threadsP
 
     upsampling2dCuda<T><<<blocksPerGrid, threadsPerBlock, sharedMem, *stream>>>(vx, xShapeInfo, vz, zShapeInfo, factorH, factorW, isNCHW);
 }
-BUILD_SINGLE_TEMPLATE(template void upsampling2dCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t *stream, const void* vx, const Nd4jLong* xShapeInfo, void* vz, const Nd4jLong* zShapeInfo, const int factorH, const int factorW, const bool isNCHW), LIBND4J_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::upsampling2d(nd4j::graph::Context& block, const NDArray& input, NDArray& output, const int factorH, const int factorW, const bool isNCHW) {

@@ -1446,7 +1398,7 @@ void ConvolutionUtils::upsampling2d(nd4j::graph::Context& block, const NDArray&
     const int sharedMem = output.rankOf() * sizeof(Nd4jLong) * threadsPerBlock + 128;
 
     NDArray::prepareSpecialUse({&output}, {&input});
-    BUILD_SINGLE_SELECTOR(input.dataType(), upsampling2dCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), output.specialBuffer(), output.specialShapeInfo(), factorH, factorW, isNCHW), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input.dataType(), upsampling2dCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), output.specialBuffer(), output.specialShapeInfo(), factorH, factorW, isNCHW), FLOAT_TYPES);
     NDArray::registerSpecialUse({&output}, {&input});
 
     manager.synchronize();
@@ -1505,7 +1457,6 @@ static void upsampling3dCudaLauncher(const int blocksPerGrid, const int threadsP
 
     upsampling3dCuda<T><<<blocksPerGrid, threadsPerBlock, sharedMem, *stream>>>(vx, xShapeInfo, vz, zShapeInfo, factorD, factorH, factorW, isNCDHW);
 }
-BUILD_SINGLE_TEMPLATE(template void upsampling3dCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t *stream, const void* vx, const Nd4jLong* xShapeInfo, void* vz, const Nd4jLong* zShapeInfo, const int factorD, const int factorH, const int factorW, const bool isNCDHW), LIBND4J_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::upsampling3d(nd4j::graph::Context& block, const NDArray& input, NDArray& output, const int factorD, const int factorH, const int factorW, const bool isNCDHW) {

@@ -1517,7 +1468,7 @@ void ConvolutionUtils::upsampling3d(nd4j::graph::Context& block, const NDArray&
     const int sharedMem = output.rankOf() * sizeof(Nd4jLong) * threadsPerBlock + 128;
 
     NDArray::prepareSpecialUse({&output}, {&input});
-    BUILD_SINGLE_SELECTOR(input.dataType(), upsampling3dCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), output.specialBuffer(), output.specialShapeInfo(), factorD, factorH, factorW, isNCDHW), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(input.dataType(), upsampling3dCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), output.specialBuffer(), output.specialShapeInfo(), factorD, factorH, factorW, isNCDHW), FLOAT_TYPES);
     NDArray::registerSpecialUse({&output}, {&input});
 
     manager.synchronize();
@@ -1579,7 +1530,6 @@ static void upsampling2dBPCudaLauncher(const int blocksPerGrid, const int thread
 
     upsampling2dBPCuda<T><<<blocksPerGrid, threadsPerBlock, sharedMem, *stream>>>(vx, xShapeInfo, vz, zShapeInfo, isNCHW);
 }
-BUILD_SINGLE_TEMPLATE(template void upsampling2dBPCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t *stream, const void* vx, const Nd4jLong* xShapeInfo, void* vz, const Nd4jLong* zShapeInfo, const bool isNCHW), LIBND4J_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::upsampling2dBP(nd4j::graph::Context& block, const NDArray& gradO, NDArray& gradI, const bool isNCHW) {

@@ -1591,7 +1541,7 @@ void ConvolutionUtils::upsampling2dBP(nd4j::graph::Context& block, const NDArray
     const int sharedMem = gradI.rankOf() * sizeof(Nd4jLong) * threadsPerBlock + 128;
 
     NDArray::prepareSpecialUse({&gradI}, {&gradO});
-    BUILD_SINGLE_SELECTOR(gradI.dataType(), upsampling2dBPCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), gradO.getSpecialBuffer(), gradO.getSpecialShapeInfo(), gradI.specialBuffer(), gradI.specialShapeInfo(), isNCHW), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(gradI.dataType(), upsampling2dBPCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), gradO.getSpecialBuffer(), gradO.getSpecialShapeInfo(), gradI.specialBuffer(), gradI.specialShapeInfo(), isNCHW), FLOAT_TYPES);
     NDArray::registerSpecialUse({&gradI}, {&gradO});
 
     manager.synchronize();
@@ -1656,7 +1606,6 @@ static void upsampling3dBPCudaLauncher(const int blocksPerGrid, const int thread
 
     upsampling3dBPCuda<T><<<blocksPerGrid, threadsPerBlock, sharedMem, *stream>>>(vx, xShapeInfo, vz, zShapeInfo, isNCDHW);
 }
-BUILD_SINGLE_TEMPLATE(template void upsampling3dBPCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t *stream, const void* vx, const Nd4jLong* xShapeInfo, void* vz, const Nd4jLong* zShapeInfo, const bool isNCDHW), LIBND4J_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void ConvolutionUtils::upsampling3dBP(nd4j::graph::Context& block, const NDArray& gradO, NDArray& gradI, const bool isNCDHW) {

@@ -1668,7 +1617,7 @@ void ConvolutionUtils::upsampling3dBP(nd4j::graph::Context& block, const NDArray
     const int sharedMem = gradI.rankOf() * sizeof(Nd4jLong) * threadsPerBlock + 128;
 
     NDArray::prepareSpecialUse({&gradI}, {&gradO});
-    BUILD_SINGLE_SELECTOR(gradI.dataType(), upsampling3dBPCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), gradO.getSpecialBuffer(), gradO.getSpecialShapeInfo(), gradI.specialBuffer(), gradI.specialShapeInfo(), isNCDHW), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(gradI.dataType(), upsampling3dBPCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, block.launchContext()->getCudaStream(), gradO.getSpecialBuffer(), gradO.getSpecialShapeInfo(), gradI.specialBuffer(), gradI.specialShapeInfo(), isNCDHW), FLOAT_TYPES);
     NDArray::registerSpecialUse({&gradI}, {&gradO});
 
     manager.synchronize();
@@ -100,19 +100,12 @@ static __global__ void diagFunctorKernel(void* outputBuffer, Nd4jLong* outputSha
     input->syncToDevice();
 
     diagPartFunctorKernel<T><<<launchDims.x, launchDims.y, launchDims.z, *stream>>>(output->specialBuffer(), output->specialShapeInfo(), input->getSpecialBuffer(), input->getSpecialShapeInfo(), outLen, inLen);
-    // int i(0), j;
-    // for (j = 0;j < outLen; j++) {
-    //     output->p(j, input->e(i));
-    //     i += outLen + 1;
-    // }
 
 }
 
-BUILD_SINGLE_TEMPLATE(template void _diagPartFunctor, (nd4j::LaunchContext * context, const NDArray* input, NDArray* output);, LIBND4J_TYPES);
 
 void diagPartFunctor(nd4j::LaunchContext * context, NDArray const* input, NDArray* output) {
     auto zType = output->dataType();
-    BUILD_SINGLE_SELECTOR(zType, _diagPartFunctor, (context, input, output), LIBND4J_TYPES);
+    BUILD_SINGLE_SELECTOR(zType, _diagPartFunctor, (context, input, output), NUMERIC_TYPES);
 
 }
@@ -114,8 +114,6 @@ static void dilation2dCudaLauncher(const int blocksPerGrid, const int threadsPer
     dilation2dCuda<X,Z><<<blocksPerGrid, threadsPerBlock, sharedMem, *stream>>>(vx, xShapeInfo, vy, yShapeInfo, vz, zShapeInfo, sH, sW, pH, pW, dH, dW);
 }
 
-BUILD_DOUBLE_TEMPLATE(template void dilation2dCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t *stream, const void* vx, const Nd4jLong* xShapeInfo, const void* vy, const Nd4jLong* yShapeInfo, void* vz, const Nd4jLong* zShapeInfo, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW), LIBND4J_TYPES, FLOAT_TYPES);
-
 void dilation2d(nd4j::LaunchContext* context, NDArray *input, NDArray *weights, NDArray *output, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW) {
 
     PointersManager manager(context, "dilation2d");

@@ -125,7 +123,7 @@ void dilation2d(nd4j::LaunchContext* context, NDArray *input, NDArray *weights,
     const int sharedMem = (weights->rankOf() + output->rankOf()) * sizeof(Nd4jLong) * threadsPerBlock + 128;
 
     NDArray::prepareSpecialUse({output}, {input, weights});
-    BUILD_DOUBLE_SELECTOR(input->dataType(), output->dataType(), dilation2dCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, context->getCudaStream(), input->getSpecialBuffer(), input->getSpecialShapeInfo(), weights->getSpecialBuffer(), weights->getSpecialShapeInfo(), output->specialBuffer(), output->specialShapeInfo(), sH, sW, pH, pW, dH, dW), LIBND4J_TYPES, FLOAT_TYPES);
+    BUILD_SINGLE_SELECTOR_TWICE(input->dataType(), dilation2dCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, context->getCudaStream(), input->getSpecialBuffer(), input->getSpecialShapeInfo(), weights->getSpecialBuffer(), weights->getSpecialShapeInfo(), output->specialBuffer(), output->specialShapeInfo(), sH, sW, pH, pW, dH, dW), FLOAT_TYPES);
     NDArray::registerSpecialUse({output}, {input, weights});
 
     manager.synchronize();
@@ -73,8 +73,6 @@ namespace helpers {
     NDArray::registerSpecialUse({output}, {input});
 }
 
-BUILD_SINGLE_TEMPLATE(template void dropoutSimple, (nd4j::LaunchContext* context, NDArray const* input, NDArray* output, double probValue, int seed), FLOAT_TYPES);
-
 template <typename T>
 int _dropOutFunctor(graph::Context& context, NDArray* input, NDArray* output, NDArray* reduceShape, int seed, double probValue) {
@@ -124,8 +122,6 @@ namespace helpers {
     BUILD_SINGLE_SELECTOR(xType, return _dropOutFunctor, (context, input, output, reduceShape, seed, probValue), FLOAT_TYPES);
 }
 
-BUILD_SINGLE_TEMPLATE(template int _dropOutFunctor, (graph::Context& context, NDArray* input, NDArray* output, NDArray* reduceShape, int seed, double probValue);, FLOAT_TYPES);
-
 /////////////////////////////////// backrpopagations ///////////////////////////////////////////////
 template <typename T>
 static __global__ void dropoutBPKernel(void* outputBuf, Nd4jLong* outputShape, void* gradOutBuf, Nd4jLong* gradOutShape, double probValue) {

@@ -260,17 +256,14 @@ namespace helpers {
 int dropOutFunctorBP(graph::Context& context, NDArray* input, NDArray* gradOut, NDArray* output, NDArray* reduceShape, int seed, double probValue) {
     BUILD_SINGLE_SELECTOR(context.dataType(), return dropOutFunctorBP_, (context, input, gradOut, output, reduceShape, seed, probValue), FLOAT_TYPES);
 }
-BUILD_SINGLE_TEMPLATE(template int dropOutFunctorBP_, (graph::Context& context, NDArray* input, NDArray* gradOut, NDArray* output, NDArray* reduceShape, int seed, double probValue), FLOAT_TYPES);
 
 int alphaDropOutFunctor(graph::Context& context, NDArray* input, NDArray* output, NDArray* reduceShape, int seed, double probValue, double alpha, double alpha1, double beta) {
     BUILD_SINGLE_SELECTOR(context.dataType(), return alphaDropOutFunctor_, (context, input, output, reduceShape, seed, probValue, alpha, alpha1, beta), FLOAT_TYPES);
 }
-BUILD_SINGLE_TEMPLATE(template int alphaDropOutFunctor_, (graph::Context& context, NDArray* input, NDArray* output, NDArray* reduceShape, int seed, double probValue, double alpha, double alpha1, double beta), FLOAT_TYPES);
 
 int alphaDropOutFunctorBP(graph::Context& context, NDArray* input, NDArray* gradOut, NDArray* output, NDArray* reduceShape, int seed, double probValue, double alpha, double alpha1, double beta) {
     BUILD_SINGLE_SELECTOR(context.dataType(), return alphaDropOutFunctorBP_, (context, input, gradOut, output, reduceShape, seed, probValue, alpha, alpha1, beta), FLOAT_TYPES);
 }
-BUILD_SINGLE_TEMPLATE(template int alphaDropOutFunctorBP_, (graph::Context& context, NDArray* input, NDArray* gradOut, NDArray* output, NDArray* reduceShape, int seed, double probValue, double alpha, double alpha1, double beta), FLOAT_TYPES);
|
|
||||||
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
|
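The dropout hunks above only delete standalone BUILD_SINGLE_TEMPLATE instantiation lines; the BUILD_SINGLE_SELECTOR calls that remain already instantiate these templates for every type in FLOAT_TYPES within the same translation unit, so the explicit instantiations appear redundant. A small generic sketch of why, using hypothetical names rather than the libnd4j macros:

    // The dispatch below forces instantiation of dropOutStub<float> and
    // dropOutStub<double> in this translation unit, so a separate
    // "template int dropOutStub<float>(double);" line would add nothing.
    template <typename T>
    int dropOutStub(double probValue) {   // placeholder body, illustrative only
        return probValue > 0.0 ? 0 : 1;
    }

    int dropOutDispatch(bool isDouble, double probValue) {
        return isDouble ? dropOutStub<double>(probValue)
                        : dropOutStub<float>(probValue);
    }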
@@ -306,7 +306,7 @@ namespace nd4j {
 
 NDArray::prepareSpecialUse({}, {indices, input});
 
-BUILD_DOUBLE_SELECTOR(xType, yType, _dynamicPartitionFunctor, (context, input, indices, outputList), LIBND4J_TYPES, INTEGER_TYPES);
+BUILD_DOUBLE_SELECTOR(xType, yType, _dynamicPartitionFunctor, (context, input, indices, outputList), NUMERIC_TYPES, INDEXING_TYPES);
 
 NDArray::registerSpecialUse({}, {indices, input});
 
@@ -336,7 +336,7 @@ namespace nd4j {
 NDArray::prepareSpecialUse({output}, {});
 
-BUILD_DOUBLE_SELECTOR(xType, yType, _dynamicStitchFunctor, (context, inputs, indices, output), LIBND4J_TYPES, INTEGER_TYPES);
+BUILD_DOUBLE_SELECTOR(xType, yType, _dynamicStitchFunctor, (context, inputs, indices, output), NUMERIC_TYPES, INDEXING_TYPES);
 
 NDArray::registerSpecialUse({output}, {});
 
@@ -346,22 +346,15 @@ namespace nd4j {
 int dynamicStitchFunctorBP(nd4j::LaunchContext * context, std::vector<NDArray*> const& inputs, std::vector<NDArray*> const& indices, NDArray const* gradInput, std::vector<NDArray*>& outputList) {
 auto xType = inputs.at(0)->dataType();
 
-BUILD_SINGLE_SELECTOR(xType, return _dynamicStitchFunctorBP, (inputs, indices, gradInput, outputList), LIBND4J_TYPES);
+BUILD_SINGLE_SELECTOR(xType, return _dynamicStitchFunctorBP, (inputs, indices, gradInput, outputList), NUMERIC_TYPES);
 }
 
 void dynamicPartitionFunctorBP(nd4j::LaunchContext * context, NDArray const* input, NDArray const* indices, std::vector<NDArray*> const& inputGradientList, std::vector<NDArray*>& outputList) {
 auto xType = input->dataType();
 
-BUILD_SINGLE_SELECTOR(xType, _dynamicPartitionFunctorBP, (input, indices, inputGradientList, outputList), LIBND4J_TYPES);
+BUILD_SINGLE_SELECTOR(xType, _dynamicPartitionFunctorBP, (input, indices, inputGradientList, outputList), NUMERIC_TYPES);
 }
 
-BUILD_SINGLE_TEMPLATE(template void _dynamicPartitionFunctorBP, (NDArray const* input, NDArray const* indices, std::vector<NDArray*> const& inputGradientList, std::vector<NDArray*>& outputList);, LIBND4J_TYPES);
-BUILD_SINGLE_TEMPLATE(template int _dynamicStitchFunctorBP, (std::vector<NDArray*> const& inputs, std::vector<NDArray*> const& indices, NDArray const* gradInput, std::vector<NDArray*>& outputList);, LIBND4J_TYPES);
-
-BUILD_DOUBLE_TEMPLATE(template void _dynamicPartitionFunctor, (nd4j::LaunchContext * context, NDArray const* input, NDArray const* indices, std::vector<NDArray*>& outputList);, LIBND4J_TYPES, INTEGER_TYPES);
-BUILD_DOUBLE_TEMPLATE(template int _dynamicStitchFunctor, (nd4j::LaunchContext * context, std::vector<NDArray*> const& inputs, std::vector<NDArray*> const& indices, NDArray* output);, LIBND4J_TYPES, INTEGER_TYPES);
-
 
 }
 }
 }
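In the dynamic_partition/dynamic_stitch hunks the data argument now dispatches over NUMERIC_TYPES and the index argument over INDEXING_TYPES instead of every integer type, which trims the number of generated template combinations. A minimal sketch of two-level dispatch where the index type is restricted to 32/64-bit integers (hand-written switch, hypothetical names; the macro-generated code may differ):

    #include <cstdint>
    #include <stdexcept>

    enum class DType { FLOAT32, DOUBLE, INT32, INT64 };   // illustrative subset

    template <typename X, typename Y>
    void dynamicPartitionStub() { /* would call the templated functor<X,Y> */ }

    template <typename X>
    void dispatchOnIndexType(DType yType) {
        switch (yType) {
            case DType::INT32: dynamicPartitionStub<X, int32_t>(); break;
            case DType::INT64: dynamicPartitionStub<X, int64_t>(); break;
            default: throw std::runtime_error("indices must be int32 or int64");
        }
    }

    void dispatchDynamicPartition(DType xType, DType yType) {
        switch (xType) {
            case DType::FLOAT32: dispatchOnIndexType<float>(yType);  break;
            case DType::DOUBLE:  dispatchOnIndexType<double>(yType); break;
            default: throw std::runtime_error("unsupported input data type");
        }
    }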
@@ -164,13 +164,13 @@ void gather(nd4j::LaunchContext * context, const NDArray* input, const NDArray*
 sizeof(Nd4jLong)));
 
 NDArray::prepareSpecialUse({output}, {input, pIndices});
-BUILD_DOUBLE_SELECTOR(input->dataType(), pIndices->dataType(), gatherCudaLauncher, (context->getCudaStream(), numOfSubArrs, input->getSpecialBuffer(), xShapeInfo, xOffsets, pIndices->getSpecialBuffer(), pIndices->getSpecialShapeInfo(), output->getSpecialBuffer(), zShapeInfo, zOffsets), NUMERIC_TYPES, INTEGER_TYPES);
+BUILD_DOUBLE_SELECTOR(input->dataType(), pIndices->dataType(), gatherCudaLauncher, (context->getCudaStream(), numOfSubArrs, input->getSpecialBuffer(), xShapeInfo, xOffsets, pIndices->getSpecialBuffer(), pIndices->getSpecialShapeInfo(), output->getSpecialBuffer(), zShapeInfo, zOffsets), LIBND4J_TYPES, INDEXING_TYPES);
 NDArray::registerSpecialUse({output}, {input, pIndices});
 manager.synchronize();
 }
 else {
 NDArray::prepareSpecialUse({output}, {input, pIndices});
-BUILD_DOUBLE_SELECTOR(input->dataType(), pIndices->dataType(), gatherCudaLinear, (context->getCudaStream(), input->getSpecialBuffer(), input->getSpecialShapeInfo(), pIndices->getSpecialBuffer(), pIndices->getSpecialShapeInfo(), output->specialBuffer(), output->specialShapeInfo()), NUMERIC_TYPES, INTEGER_TYPES);
+BUILD_DOUBLE_SELECTOR(input->dataType(), pIndices->dataType(), gatherCudaLinear, (context->getCudaStream(), input->getSpecialBuffer(), input->getSpecialShapeInfo(), pIndices->getSpecialBuffer(), pIndices->getSpecialShapeInfo(), output->specialBuffer(), output->specialShapeInfo()), LIBND4J_TYPES, INDEXING_TYPES);
 NDArray::registerSpecialUse({output}, {input, pIndices});
 
 }
@@ -181,12 +181,6 @@ void gather(nd4j::LaunchContext * context, const NDArray* input, const NDArray*
 }
 }
 
-BUILD_DOUBLE_TEMPLATE(template void gatherCudaLauncher, (const cudaStream_t *stream, const int numOfSubArrs, const void* vx, const Nd4jLong* xShapeInfo, const Nd4jLong* xOffsets, const void* vy, const Nd4jLong* yShapeInfo, void* vz, const Nd4jLong* zShapeInfo, const Nd4jLong* zOffsets), NUMERIC_TYPES, INTEGER_TYPES);
-BUILD_DOUBLE_TEMPLATE(template void gatherCudaLinear, (const cudaStream_t *stream, const void* vx, const Nd4jLong* xShapeInfo, const void* vy, const Nd4jLong* yShapeInfo, void* vz, const Nd4jLong* zShapeInfo), NUMERIC_TYPES, INTEGER_TYPES);
-
-
 }
 }
 }
@@ -120,7 +120,6 @@ namespace nd4j {
 
 gatherNDCuda<X,Y><<<blocksPerGrid, threadsPerBlock, sharedMem, *stream>>>(vx, xShapeInfo, vy, yShapeInfo, vz, zShapeInfo);
 }
-BUILD_DOUBLE_TEMPLATE(template void gatherNDCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, const int sharedMem, const cudaStream_t *stream, const void *vx, const Nd4jLong *xShapeInfo, const void *vy, const Nd4jLong *yShapeInfo, void *vz, const Nd4jLong *zShapeInfo), LIBND4J_TYPES, INTEGER_TYPES);
 
 ///////////////////////////////////////////////////////////////////
 void gatherND(nd4j::LaunchContext * context, NDArray& input, NDArray& indices, NDArray& output) {
@@ -137,7 +136,7 @@ namespace nd4j {
 PointersManager manager(context, "gatherND");
 
 NDArray::prepareSpecialUse({&output}, {&input, &indices});
-BUILD_DOUBLE_SELECTOR(xType, yType, gatherNDCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, context->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), indices.getSpecialBuffer(), indices.getSpecialShapeInfo(), output.getSpecialBuffer(), output.getSpecialShapeInfo()), LIBND4J_TYPES, INTEGER_TYPES);
+BUILD_DOUBLE_SELECTOR(xType, yType, gatherNDCudaLauncher, (blocksPerGrid, threadsPerBlock, sharedMem, context->getCudaStream(), input.getSpecialBuffer(), input.getSpecialShapeInfo(), indices.getSpecialBuffer(), indices.getSpecialShapeInfo(), output.getSpecialBuffer(), output.getSpecialShapeInfo()), LIBND4J_TYPES, INDEXING_TYPES);
 NDArray::registerSpecialUse({&output}, {&input, &indices});
 
 manager.synchronize();
@@ -125,7 +125,7 @@ namespace nd4j {
 double min_val = input.reduceNumber(reduce::SameOps::Min).e<double>(0);
 double max_val = input.reduceNumber(reduce::SameOps::Max).e<double>(0);
 
-BUILD_DOUBLE_SELECTOR(input.dataType(), output.dataType(), histogram_, (context, input.specialBuffer(), input.specialShapeInfo(), output.getSpecialBuffer(), output.getSpecialShapeInfo(), numBins, min_val, max_val), LIBND4J_TYPES, INTEGER_TYPES);
+BUILD_DOUBLE_SELECTOR(input.dataType(), output.dataType(), histogram_, (context, input.specialBuffer(), input.specialShapeInfo(), output.getSpecialBuffer(), output.getSpecialShapeInfo(), numBins, min_val, max_val), LIBND4J_TYPES, INDEXING_TYPES);
 
 NDArray::registerSpecialUse({&output}, {&input});
 }
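The histogram hunk keeps the input generic but now dispatches the output over INDEXING_TYPES, which matches the fact that bin contents are integer counts. A tiny standalone sketch of the binning arithmetic the helper performs conceptually (illustrative only, not the libnd4j kernel):

    #include <cstdio>
    #include <cstdint>
    #include <vector>

    int main() {
        std::vector<float> data {0.1f, 0.4f, 0.45f, 0.9f};
        const int numBins = 2;
        const float minVal = 0.0f, maxVal = 1.0f;
        std::vector<int64_t> bins(numBins, 0);          // counts are integers
        for (float v : data) {
            int idx = static_cast<int>((v - minVal) / (maxVal - minVal) * numBins);
            if (idx == numBins) idx = numBins - 1;       // clamp the right edge
            ++bins[idx];
        }
        for (int i = 0; i < numBins; ++i)
            std::printf("bin %d: %lld\n", i, (long long) bins[i]);
        return 0;
    }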
@@ -85,7 +85,6 @@ template <typename T>
 static void im2colCudaLauncher(const int blocksPerGrid, const int threadsPerBlock, nd4j::LaunchContext & context, const void *image, void *columns, const Nd4jLong *imShapeInfo, const Nd4jLong *colShapeInfo, int sH, int sW, int pH, int pW, int dH, int dW, double zeroPadVal) {
 im2colCuda<T><<<blocksPerGrid, threadsPerBlock, threadsPerBlock * sizeof(Nd4jLong) * 6 /* rank of columns = 6 */, *context.getCudaStream()>>>(image, columns, imShapeInfo, colShapeInfo, sH, sW, pH, pW, dH, dW, zeroPadVal);
 }
-BUILD_SINGLE_TEMPLATE(template void im2colCudaLauncher, (const int blocksPerGrid, const int threadsPerBlock, nd4j::LaunchContext& context, const void *image, void *columns, const Nd4jLong *imShapeInfo, const Nd4jLong *colShapeInfo, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const double zeroPadVal), LIBND4J_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 void im2col(nd4j::LaunchContext& context, const NDArray& image, NDArray& columns, const int kH, const int kW, const int sH, const int sW, const int pH, const int pW, const int dH, const int dW, const NDArray& arrZeroPadVal) {
@@ -96,7 +95,7 @@ void im2col(nd4j::LaunchContext& context, const NDArray& image, NDArray& columns
 const int blocksPerGrid = (columns.lengthOf() + threadsPerBlock - 1) / threadsPerBlock;
 
 NDArray::prepareSpecialUse({&columns}, {&image});
-BUILD_SINGLE_SELECTOR(columns.dataType(), im2colCudaLauncher, (blocksPerGrid, threadsPerBlock, context, image.getSpecialBuffer(), columns.getSpecialBuffer(), image.getSpecialShapeInfo(), columns.getSpecialShapeInfo(), sH, sW, pH, pW, dH, dW, arrZeroPadVal.e<double>(0)), LIBND4J_TYPES);
+BUILD_SINGLE_SELECTOR(columns.dataType(), im2colCudaLauncher, (blocksPerGrid, threadsPerBlock, context, image.getSpecialBuffer(), columns.getSpecialBuffer(), image.getSpecialShapeInfo(), columns.getSpecialShapeInfo(), sH, sW, pH, pW, dH, dW, arrZeroPadVal.e<double>(0)), FLOAT_TYPES);
 NDArray::registerSpecialUse({&columns}, {&image});
 
 manager.synchronize();
@@ -85,8 +85,8 @@ namespace helpers {
 *shouldSelect = shouldSelectShared;
 }
 }
-template <typename I>
 
+template <typename I>
 static __global__ void copyIndices(void* indices, void* indicesLong, Nd4jLong len) {
 __shared__ I* indexBuf;
 __shared__ Nd4jLong* srcBuf;
@@ -115,15 +115,15 @@ namespace helpers {
 sortByValue(extras, indices->buffer(), indices->shapeInfo(), indices->specialBuffer(), indices->specialShapeInfo(), scores.buffer(), scores.shapeInfo(), scores.specialBuffer(), scores.specialShapeInfo(), true);
 // TO DO: sort indices using scales as value row
 //std::sort(indices.begin(), indices.end(), [scales](int i, int j) {return scales->e<T>(i) > scales->e<T>(j);});
-I* indexBuf = reinterpret_cast<I*>(indices->specialBuffer());
+auto indexBuf = reinterpret_cast<I*>(indices->specialBuffer());
 
 NDArray selectedIndices = NDArrayFactory::create<I>('c', {output->lengthOf()});
 int numSelected = 0;
 int numBoxes = boxes->sizeAt(0);
-T* boxesBuf = reinterpret_cast<T*>(boxes->specialBuffer());
+auto boxesBuf = reinterpret_cast<T*>(boxes->specialBuffer());
 
-I* selectedIndicesData = reinterpret_cast<I*>(selectedIndices.specialBuffer());
-I* outputBuf = reinterpret_cast<I*>(output->specialBuffer());
+auto selectedIndicesData = reinterpret_cast<I*>(selectedIndices.specialBuffer());
+auto outputBuf = reinterpret_cast<I*>(output->specialBuffer());
 
 bool* shouldSelectD;
 auto err = cudaMalloc(&shouldSelectD, sizeof(bool));
@@ -138,8 +138,7 @@ namespace helpers {
 throw cuda_exception::build("helpers::nonMaxSuppressionV2: Cannot set up bool flag to device", err);
 }
 
-shouldSelectKernel<T> <<< 128, 256, 1024, *stream >>>
-(boxesBuf, boxes->specialShapeInfo(), indexBuf, selectedIndicesData, threshold, numSelected, i, shouldSelectD);
+shouldSelectKernel<T,I><<<128, 256, 1024, *stream>>>(boxesBuf, boxes->specialShapeInfo(), indexBuf, selectedIndicesData, threshold, numSelected, i, shouldSelectD);
 err = cudaMemcpy(&shouldSelect, shouldSelectD, sizeof(bool), cudaMemcpyDeviceToHost);
 if (err) {
 throw cuda_exception::build("helpers::nonMaxSuppressionV2: Cannot set up bool flag to host", err);
@@ -161,9 +160,8 @@ namespace helpers {
 }
 
 void nonMaxSuppressionV2(nd4j::LaunchContext * context, NDArray* boxes, NDArray* scales, int maxSize, double threshold, NDArray* output) {
-BUILD_DOUBLE_SELECTOR(boxes->dataType(), output->dataType(), nonMaxSuppressionV2_, (context, boxes, scales, maxSize, threshold, output), FLOAT_TYPES, INTEGER_TYPES);
+BUILD_DOUBLE_SELECTOR(boxes->dataType(), output->dataType(), nonMaxSuppressionV2_, (context, boxes, scales, maxSize, threshold, output), FLOAT_TYPES, INDEXING_TYPES);
 }
-BUILD_DOUBLE_TEMPLATE(template void nonMaxSuppressionV2_, (nd4j::LaunchContext * context, NDArray* boxes, NDArray* scales, int maxSize, double threshold, NDArray* output), FLOAT_TYPES, INTEGER_TYPES);
 
 }
 }
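In the non_max_suppression hunks the kernel launch now names both template parameters, shouldSelectKernel<T,I>, and is collapsed onto one line, while the raw-pointer locals become auto since the type is already spelled out in the reinterpret_cast. A self-contained CUDA sketch of launching a kernel templated on a value type and an index type with explicit template arguments (hypothetical kernel, not the real shouldSelectKernel):

    #include <cstdio>
    #include <cuda_runtime.h>

    // Writes the index of the first value above the threshold, or -1.
    template <typename T, typename I>
    __global__ void pickFirstAboveThreshold(const T* values, I* outIndex, T threshold, int n) {
        if (threadIdx.x == 0 && blockIdx.x == 0) {
            *outIndex = static_cast<I>(-1);
            for (int i = 0; i < n; ++i)
                if (values[i] > threshold) { *outIndex = static_cast<I>(i); break; }
        }
    }

    int main() {
        float h_vals[4] = {0.1f, 0.5f, 0.9f, 0.2f};
        float* d_vals; int* d_idx; int h_idx = -1;
        cudaMalloc(&d_vals, sizeof(h_vals));
        cudaMalloc(&d_idx, sizeof(int));
        cudaMemcpy(d_vals, h_vals, sizeof(h_vals), cudaMemcpyHostToDevice);
        // Both template arguments are given explicitly, as in the hunk above.
        pickFirstAboveThreshold<float, int><<<1, 32>>>(d_vals, d_idx, 0.4f, 4);
        cudaMemcpy(&h_idx, d_idx, sizeof(int), cudaMemcpyDeviceToHost);
        std::printf("first index above threshold: %d\n", h_idx);
        cudaFree(d_vals); cudaFree(d_idx);
        return 0;
    }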
@@ -34,7 +34,6 @@ namespace nd4j {
 
 theFirst->applyPairwiseLambda(theSecond, functor, nullptr);
 }
-BUILD_SINGLE_TEMPLATE(template void reluDerivative__, (NDArray* input, NDArray* epsilon), FLOAT_TYPES);
 
 void reluDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), reluDerivative__, (theFirst, theSecond), FLOAT_TYPES);
@@ -48,7 +47,6 @@ namespace nd4j {
 
 input->applyPairwiseLambda(epsilon, functor, output);
 }
-BUILD_SINGLE_TEMPLATE(template void reluDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
 
 void reluDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), reluDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
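All of the activation-derivative hunks that follow share one pattern: the backward rule is a lambda applied pairwise over the input and the incoming gradient, dispatched over FLOAT_TYPES, and the diffs merely drop the redundant explicit instantiation lines. A minimal host-side sketch of that pattern, using std::vector in place of NDArray and illustrative names only:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Applies a binary functor element-wise over (input, incoming gradient).
    template <typename T, typename F>
    std::vector<T> applyPairwise(const std::vector<T>& x, const std::vector<T>& eps, F functor) {
        std::vector<T> out(x.size());
        std::transform(x.begin(), x.end(), eps.begin(), out.begin(), functor);
        return out;
    }

    int main() {
        std::vector<float> x   {-1.0f, 0.5f, 2.0f};
        std::vector<float> eps { 1.0f, 1.0f, 1.0f};
        // ReLU backward: pass the gradient through only where the input is positive.
        auto grad = applyPairwise<float>(x, eps,
            [](float xi, float ei) { return xi > 0.0f ? ei : 0.0f; });
        for (float g : grad) std::printf("%f\n", g);
        return 0;
    }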
@@ -63,8 +61,6 @@ namespace nd4j {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void relu6Derivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void relu6Derivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), relu6Derivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -78,8 +74,6 @@ namespace nd4j {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void leakyReluDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void leakyReluDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), leakyReluDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -93,8 +87,6 @@ namespace nd4j {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void eluDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void eluDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), eluDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -108,8 +100,6 @@ namespace nd4j {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void seluDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void seluDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), seluDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -36,8 +36,6 @@ namespace nd4j {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void tanhDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void tanhDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), tanhDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -53,8 +51,6 @@ namespace nd4j {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void hardTanhDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void hardTanhDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), hardTanhDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -68,8 +64,6 @@ namespace nd4j {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void rationalTanhDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void rationalTanhDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), rationalTanhDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -83,8 +77,6 @@ namespace nd4j {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void rectifiedTanhDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void rectifiedTanhDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), rectifiedTanhDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -35,8 +35,6 @@ namespace helpers {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void cubeDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void cubeDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), cubeDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -51,8 +49,6 @@ namespace helpers {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void reduceNorm1_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void reduceNorm1(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), reduceNorm1_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -67,8 +63,6 @@ namespace helpers {
 logits->applyPairwiseLambda(labels, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void sigmCrossEntropy_, (NDArray* logits, NDArray* labels, NDArray* output);, FLOAT_TYPES);
-
 void sigmCrossEntropy(nd4j::LaunchContext * context, NDArray* logits, NDArray* labels, NDArray* output) {
 BUILD_SINGLE_SELECTOR(logits->dataType(), sigmCrossEntropy_, (logits, labels, output), FLOAT_TYPES);
 }
@@ -87,8 +81,6 @@ namespace helpers {
 logits->applyPairwiseLambda(labels, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void sigmCrossEntropyGrad_, (NDArray* logits, NDArray* labels, NDArray*output);, FLOAT_TYPES);
-
 void sigmCrossEntropyGrad(nd4j::LaunchContext * context, NDArray* logits, NDArray* labels, NDArray* output) {
 BUILD_SINGLE_SELECTOR(logits->dataType(), sigmCrossEntropyGrad_, (logits, labels, output), FLOAT_TYPES);
 }
@@ -106,8 +98,6 @@ namespace helpers {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void softSignDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void softSignDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), softSignDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -122,8 +112,6 @@ namespace helpers {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void softPlusDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void softPlusDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), softPlusDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -141,8 +129,6 @@ namespace helpers {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void sigmoidDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void sigmoidDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), sigmoidDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -156,8 +142,6 @@ namespace helpers {
 input->applyPairwiseLambda(epsilon, functor, output);
 }
 
-BUILD_SINGLE_TEMPLATE(template void hardSigmoidDerivative_, (NDArray* input, NDArray* epsilon, NDArray*output);, FLOAT_TYPES);
-
 void hardSigmoidDerivative(nd4j::LaunchContext * context, NDArray* theFirst, NDArray* theSecond, NDArray* theOutput) {
 BUILD_SINGLE_SELECTOR(theFirst->dataType(), hardSigmoidDerivative_, (theFirst, theSecond, theOutput), FLOAT_TYPES);
 }
@@ -197,12 +181,10 @@ namespace helpers {
 void logSumExp(nd4j::LaunchContext * context, NDArray* input, NDArray* axis, NDArray* output) {
 BUILD_SINGLE_SELECTOR(input->dataType(), logSumExp_, (input, axis, output), FLOAT_TYPES);
 }
-BUILD_SINGLE_TEMPLATE(template void logSumExp_, (NDArray* input, NDArray* axis, NDArray*output);, FLOAT_TYPES);
 
 void logSumExp(nd4j::LaunchContext * context, NDArray* input, NDArray* subtrah, NDArray* axis, NDArray* output) {
 BUILD_SINGLE_SELECTOR(input->dataType(), logSumExp_, (input, subtrah, axis, output), FLOAT_TYPES);
 }
-BUILD_SINGLE_TEMPLATE(template void logSumExp_, (NDArray* input, NDArray* subtrah, NDArray* axis, NDArray*output);, FLOAT_TYPES);
 
 //////////////////////////////////////////////////////////////////////////
 template <typename T>
@@ -246,7 +228,7 @@ void weightedCrossEntropyWithLogitsFunctor(nd4j::LaunchContext * context, NDArra
 
 NDArray::registerSpecialUse({output}, {targets, input, weights});
 }
-BUILD_SINGLE_TEMPLATE(template void weightedCrossEntropyWithLogitsFunctor_, (NDArray const* targets, NDArray const* input, NDArray const* weights, NDArray* output), FLOAT_TYPES);
 
 }
 }
@@ -148,7 +148,7 @@ namespace helpers {
 input.syncToDevice();
 gradO.syncToDevice();
 
-BUILD_DOUBLE_SELECTOR(input.dataType(), gradO.dataType(), lrnBP_, (block, input, gradO, gradI, depth, bias, alpha, beta), LIBND4J_TYPES, FLOAT_TYPES);
+BUILD_DOUBLE_SELECTOR(input.dataType(), gradO.dataType(), lrnBP_, (block, input, gradO, gradI, depth, bias, alpha, beta), FLOAT_TYPES, FLOAT_TYPES);
 
 gradI.tickWriteDevice();
 }
@@ -212,8 +212,6 @@ namespace helpers {
 invertLowKernel<T><<<n, n, 128, *stream>>>(invertedMatrix->specialBuffer(), invertedMatrix->specialShapeInfo(), inputMatrix->specialBuffer(), inputMatrix->specialShapeInfo(), n);
 }
 
-BUILD_SINGLE_TEMPLATE(template void invertLowerMatrix_, (NDArray* inputMatrix, NDArray* invertedMatrix);, FLOAT_NATIVE);
-
 void invertLowerMatrix(NDArray* inputMatrix, NDArray* invertedMatrix) {
 BUILD_SINGLE_SELECTOR(inputMatrix->dataType(), invertLowerMatrix_, (inputMatrix, invertedMatrix), FLOAT_NATIVE);
 }
@@ -232,8 +230,6 @@ namespace helpers {
 invertUpKernel<T><<<n, n, 256, *stream>>>(invertedMatrix->specialBuffer(), invertedMatrix->specialShapeInfo(), inputMatrix->specialBuffer(), inputMatrix->specialShapeInfo(), n);
 }
 
-BUILD_SINGLE_TEMPLATE(template void invertUpperMatrix_, (NDArray* inputMatrix, NDArray* invertedMatrix);, FLOAT_NATIVE);
-
 void invertUpperMatrix(NDArray* inputMatrix, NDArray* invertedMatrix) {
 BUILD_SINGLE_SELECTOR(inputMatrix->dataType(), invertUpperMatrix_, (inputMatrix, invertedMatrix), FLOAT_NATIVE);
 }
@@ -562,8 +558,6 @@ namespace helpers {
 return Status::OK();
 }
 
-BUILD_SINGLE_TEMPLATE(template int determinant_, (nd4j::LaunchContext* context, NDArray* input, NDArray* output), FLOAT_NATIVE);
-
 int determinant(nd4j::LaunchContext * context, NDArray* input, NDArray* output) {
 BUILD_SINGLE_SELECTOR(input->dataType(), return determinant_, (context, input, output), FLOAT_NATIVE);
 }
@@ -612,8 +606,6 @@ namespace helpers {
 return ND4J_STATUS_OK;
 }
 
-BUILD_SINGLE_TEMPLATE(template int logAbsDeterminant_, (LaunchContext* context, NDArray* input, NDArray* output), FLOAT_NATIVE);
-
 int logAbsDeterminant(nd4j::LaunchContext * context, NDArray* input, NDArray* output) {
 BUILD_SINGLE_SELECTOR(input->dataType(), return logAbsDeterminant_, (context, input, output), FLOAT_NATIVE);
 }
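The lup hunks above leave the `BUILD_SINGLE_SELECTOR(..., return determinant_, ...)` calls untouched; the `return` token is handed to the macro so each generated branch returns the templated helper's status code directly. A hedged sketch of that idiom with a hand-written switch and hypothetical names, not the actual macro expansion:

    #include <stdexcept>

    enum class DType { FLOAT32, DOUBLE };   // illustrative native float types

    template <typename T>
    int determinantStub() { return 0; }     // would compute and return a status code

    int determinant(DType dtype) {
        switch (dtype) {
            case DType::FLOAT32: return determinantStub<float>();   // "return <fn>" per case
            case DType::DOUBLE:  return determinantStub<double>();
            default: throw std::runtime_error("determinant: unsupported data type");
        }
    }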
@@ -1,39 +0,0 @@
-/*******************************************************************************
- * Copyright (c) 2015-2018 Skymind, Inc.
- *
- * This program and the accompanying materials are made available under the
- * terms of the Apache License, Version 2.0 which is available at
- * https://www.apache.org/licenses/LICENSE-2.0.
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations
- * under the License.
- *
- * SPDX-License-Identifier: Apache-2.0
- ******************************************************************************/
-
-//
-// Created by raver119 on 20.12.17.
-//
-
-#include <ops/declarable/helpers/matmul.h>
-
-namespace nd4j {
-namespace ops {
-namespace helpers {
-template <typename X, typename Y, typename Z>
-void __matmul(NDArray *vA, NDArray *vB, NDArray *vC, int transA, int transB, double alpha, double beta) {
-
-}
-
-
-void _matmul(nd4j::LaunchContext * context, NDArray *vA, NDArray *vB, NDArray *vC, int transA, int transB, double alpha, double beta) {
-BUILD_TRIPLE_SELECTOR(vA->dataType(), vB->dataType(), vC->dataType(), __matmul, (vA, vB, vC, transA, transB, alpha, beta), LIBND4J_TYPES, LIBND4J_TYPES, LIBND4J_TYPES);
-}
-
-BUILD_TRIPLE_TEMPLATE(template void __matmul, (NDArray *A, NDArray *B, NDArray *C, int transA, int transB, double alpha, double beta), LIBND4J_TYPES, LIBND4J_TYPES, LIBND4J_TYPES);
-}
-}
-}
@@ -88,13 +88,10 @@ namespace helpers {
 void maxPoolingFunctor(nd4j::LaunchContext * context, nd4j::graph::Context& block, NDArray* input, NDArray* values, std::vector<int> const& params, NDArray* indices) {
 NDArray::prepareSpecialUse({values, indices}, {input});
 auto yType = indices == nullptr ? nd4j::DataType::INT64 : indices->dataType();
-BUILD_DOUBLE_SELECTOR(input->dataType(), yType, maxPoolingFunctor_, (block, input, values, params, indices), FLOAT_TYPES, INTEGER_TYPES);
+BUILD_DOUBLE_SELECTOR(input->dataType(), yType, maxPoolingFunctor_, (block, input, values, params, indices), FLOAT_TYPES, INDEXING_TYPES);
 NDArray::registerSpecialUse({values, indices}, {input});
 }
 
-BUILD_DOUBLE_TEMPLATE(template void maxPoolingFunctor_, (nd4j::graph::Context& block, NDArray* input, NDArray* values, std::vector<int> const& params, NDArray* indices), FLOAT_TYPES, INTEGER_TYPES);
-
 }
 }
 }
@@ -107,7 +107,6 @@ namespace nd4j {
 
 NDArray::registerSpecialUse({gradX, gradY}, {x, y, epsNext});
 }
-BUILD_SINGLE_TEMPLATE(template void maximumBPFunctor_, (NDArray* x, NDArray* y, NDArray* epsNext, NDArray* gradX, NDArray* gradY), NUMERIC_TYPES);
 
 }
 }
@@ -79,10 +79,9 @@ namespace nd4j {
 }
 
 void mergeMaxIndex(nd4j::LaunchContext * context, const std::vector<NDArray*>& inArrs, NDArray& output) {
-BUILD_DOUBLE_SELECTOR(inArrs[0]->dataType(), output.dataType(), mergeMaxIndex_, (context, inArrs, output), LIBND4J_TYPES, INTEGER_TYPES);
+BUILD_DOUBLE_SELECTOR(inArrs[0]->dataType(), output.dataType(), mergeMaxIndex_, (context, inArrs, output), LIBND4J_TYPES, INDEXING_TYPES);
 }
 
-BUILD_DOUBLE_TEMPLATE(template void mergeMaxIndex_, (nd4j::LaunchContext * context, const std::vector<NDArray*>& inArrs, NDArray& output), LIBND4J_TYPES, INTEGER_TYPES);
-
 //////////////////////////////////////////////////////////////////////////
 template <typename T>
@@ -128,7 +127,6 @@ namespace nd4j {
 
 manager.synchronize();
 }
-BUILD_SINGLE_TEMPLATE(template void mergeMax_, (nd4j::LaunchContext * context, const std::vector<NDArray*>& inArrs, NDArray& output), LIBND4J_TYPES);
 
 void mergeMax(nd4j::LaunchContext * context, const std::vector<NDArray*>& inArrs, NDArray& output) {
 BUILD_SINGLE_SELECTOR(output.dataType(), mergeMax_, (context, inArrs, output), LIBND4J_TYPES);
@@ -176,10 +174,9 @@ namespace nd4j {
 
 manager.synchronize();
 }
-BUILD_SINGLE_TEMPLATE(template void mergeAvg_, (nd4j::LaunchContext * context, const std::vector<NDArray*>& inArrs, NDArray& output), LIBND4J_TYPES);
 
 void mergeAvg(nd4j::LaunchContext * context, const std::vector<NDArray*>& inArrs, NDArray& output) {
-BUILD_SINGLE_SELECTOR(output.dataType(), mergeAvg_, (context, inArrs, output), LIBND4J_TYPES);
+BUILD_SINGLE_SELECTOR(output.dataType(), mergeAvg_, (context, inArrs, output), FLOAT_TYPES);
 }
 
 //////////////////////////////////////////////////////////////////////////
@@ -224,10 +221,10 @@ namespace nd4j {
 
 manager.synchronize();
 }
-BUILD_SINGLE_TEMPLATE(template void mergeAdd_, (nd4j::LaunchContext * context, const std::vector<NDArray*>& inArrs, NDArray& output), LIBND4J_TYPES);
+BUILD_SINGLE_TEMPLATE(template void mergeAdd_, (nd4j::LaunchContext * context, const std::vector<NDArray*>& inArrs, NDArray& output), NUMERIC_TYPES);
 
 void mergeAdd(nd4j::LaunchContext * context, const std::vector<NDArray*>& inArrs, NDArray& output) {
-BUILD_SINGLE_SELECTOR(output.dataType(), mergeAdd_, (context, inArrs, output), LIBND4J_TYPES);
+BUILD_SINGLE_SELECTOR(output.dataType(), mergeAdd_, (context, inArrs, output), NUMERIC_TYPES);
 }
 }
 }
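The merge hunks narrow mergeAvg to FLOAT_TYPES and mergeAdd to NUMERIC_TYPES. A plausible reason for the mergeAvg restriction, not stated in the diff, is that averaging in integer arithmetic truncates the result:

    #include <cstdio>

    int main() {
        int a = 3, b = 4;
        int   intAvg   = (a + b) / 2;      // 3: integer division truncates the mean
        float floatAvg = (a + b) / 2.0f;   // 3.5: floating-point average keeps it
        std::printf("int avg = %d, float avg = %f\n", intAvg, floatAvg);
        return 0;
    }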
@@ -136,7 +136,7 @@ namespace helpers {
 //////////////////////////////////////////////////////////////////////////
 void meshgrid(nd4j::LaunchContext * context, const std::vector<NDArray*>& inArrs, const std::vector<NDArray*>& outArrs, const bool swapFirst2Dims) {
 
-BUILD_SINGLE_SELECTOR(inArrs.at(0)->dataType(), meshgrid_, (context, inArrs, outArrs, swapFirst2Dims), LIBND4J_TYPES);
+BUILD_SINGLE_SELECTOR(inArrs.at(0)->dataType(), meshgrid_, (context, inArrs, outArrs, swapFirst2Dims), NUMERIC_TYPES);
 
 for (auto v:outArrs)
 v->tickWriteDevice();
 