
LibND4J

Native operations for nd4j. Built using cmake.

Prerequisites

  • GCC 4.9+
  • CUDA 8.0 or 9.0 (if desired)
  • CMake 3.8 (as of Nov 2017; 3.9 will be required in the near future)

Additional build arguments

There are a few additional arguments for the buildnativeoperations.sh script you can use:

 -a XXXXXXXX // shortcut for -march/-mtune, e.g. -a native
 -b release OR -b debug // enables/disables debug builds; release is the default
 -j XX // defines how many threads will be used to build the binaries on your box, e.g. -j 8
 -cc XX // CUDA-only argument; builds binaries only for the target GPU architecture. Use this for fast builds

You can find the compute capability for your card on the NVIDIA website.

For example, a GTX 1080 has compute capability 6.1, for which you would use -cc 61 (note no decimal point).
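
Putting these together, a typical optimized CPU build and a CUDA build for a GTX 1080 might look like this (the thread count and architecture are illustrative; adjust them for your machine):

 ./buildnativeoperations.sh -a native -b release -j 8
 ./buildnativeoperations.sh -c cuda -cc 61 -j 8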

OS Specific Requirements

Android

Download the NDK, extract it somewhere, and execute the following commands, replacing android-xxx with either android-arm or android-x86:

git clone https://github.com/deeplearning4j/libnd4j
git clone https://github.com/deeplearning4j/nd4j
export ANDROID_NDK=/path/to/android-ndk/
cd libnd4j
bash buildnativeoperations.sh -platform android-xxx
cd ../nd4j
mvn clean install -Djavacpp.platform=android-xxx -DskipTests -pl '!:nd4j-cuda-9.0,!:nd4j-cuda-9.0-platform,!:nd4j-tests'

OSX

Run ./setuposx.sh (Please ensure you have brew installed)

See macOSx10 (CPU only).md

Linux

Depends on the distro; ask in the earlyadopters channel for distro-specific instructions.

Ubuntu Linux 15.10

wget http://developer.download.nvidia.com/compute/cuda/7.5/Prod/local_installers/cuda-repo-ubuntu1504-7-5-local_7.5-18_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1504-7-5-local_7.5-18_amd64.deb
sudo apt-get update
sudo apt-get install cuda
sudo apt-get install cmake
sudo apt-get install gcc-4.9
sudo apt-get install g++-4.9
sudo apt-get install git
git clone https://github.com/deeplearning4j/libnd4j
cd libnd4j/
export LIBND4J_HOME=~/libnd4j/
sudo rm /usr/bin/gcc
sudo rm /usr/bin/g++
sudo ln -s /usr/bin/gcc-4.9 /usr/bin/gcc
sudo ln -s /usr/bin/g++-4.9 /usr/bin/g++
./buildnativeoperations.sh
./buildnativeoperations.sh -c cuda -cc YOUR_DEVICE_ARCH

Ubuntu Linux 16.04

sudo apt install cmake
sudo apt install nvidia-cuda-dev nvidia-cuda-toolkit nvidia-361
export TRICK_NVCC=YES
./buildnativeoperations.sh
./buildnativeoperations.sh -c cuda -cc YOUR_DEVICE_ARCH

The standard development headers are needed.

CentOS 6

yum install centos-release-scl-rh epel-release
yum install devtoolset-3-toolchain maven30 cmake3 git
scl enable devtoolset-3 maven30 bash
./buildnativeoperations.sh
./buildnativeoperations.sh -c cuda -cc YOUR_DEVICE_ARCH

Windows

See windows.md

Setup for All OS

  1. Set LIBND4J_HOME as an environment variable pointing to the libnd4j folder you've obtained from Git

    • Note: this is required for building nd4j as well.
  2. To set up the CPU build followed by the GPU build, run the following on the command line:

    • For standard builds:

      ./buildnativeoperations.sh
      ./buildnativeoperations.sh -c cuda -cc YOUR_DEVICE_ARCH
      
    • For Debug builds:

      ./buildnativeoperations.sh blas -b debug
      ./buildnativeoperations.sh blas -c cuda -cc YOUR_DEVICE_ARCH -b debug
      
    • For release builds (default):

      ./buildnativeoperations.sh
      ./buildnativeoperations.sh -c cuda -cc YOUR_DEVICE_ARCH
      

OpenMP support

libnd4j should be compiled with OpenMP 4.0+. This shouldn't be any trouble, since OpenMP 4.0 was released in 2013 and is supported on all major platforms.
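
To check which OpenMP version your compiler supports, you can inspect the _OPENMP macro it defines (a quick sketch, assuming gcc is on your PATH; the value 201307 corresponds to OpenMP 4.0):

echo | gcc -fopenmp -dM -E - | grep -i _OPENMP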

Linking with MKL

We can link with MKL either at build time, or at runtime with binaries initially linked with another BLAS implementation such as OpenBLAS. In either case, simply add the path containing libmkl_rt.so (or mkl_rt.dll on Windows), say /path/to/intel64/lib/, to the LD_LIBRARY_PATH environment variable on Linux (or PATH on Windows), and build or run your Java application as usual. If you get an error message like undefined symbol: omp_get_num_procs, it probably means that libiomp5.so, libiomp5.dylib, or libiomp5md.dll is not present on your system. In that case though, it is still possible to use the GNU version of OpenMP by setting these environment variables on Linux, for example:

export MKL_THREADING_LAYER=GNU
export LD_PRELOAD=/usr/lib64/libgomp.so.1
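
For example, a runtime MKL setup for a Java application might look like the following (the MKL path and jar name are placeholders):

# directory containing libmkl_rt.so - adjust to your actual MKL install location
export LD_LIBRARY_PATH=/path/to/intel64/lib/:$LD_LIBRARY_PATH
# then build or run your Java application as usual (jar name is illustrative)
java -jar your-nd4j-app.jar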

Troubleshooting MKL

Sometimes the above steps are not enough; you may also need to add:

export LD_LIBRARY_PATH=/opt/intel/lib/intel64/:/opt/intel/mkl/lib/intel64

This ensures that MKL will be found first and linked to.

Packaging

If on Ubuntu (14.04 or above) or CentOS (6 or above), this repository is also set up to create packages for your distribution. Let's assume you have built:

  • for the cpu, i.e. your command line was ./buildnativeoperations.sh ...:

    cd blasbuild/cpu
    make package

  • for the gpu, i.e. your command line was ./buildnativeoperations.sh -c cuda ...:

    cd blasbuild/cuda
    make package
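
The resulting .deb or .rpm lands in the corresponding blasbuild directory. Installing it locally might look like this (a sketch; the wildcard patterns are illustrative):

sudo dpkg -i blasbuild/cpu/libnd4j-*.deb  # Debian/Ubuntu
sudo rpm -i blasbuild/cuda/libnd4j-*.rpm  # CentOS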

Uploading package to Bintray

The package upload script is in packages. The upload command for an rpm built for cpu is:

./packages/push_to_bintray.sh myAPIUser myAPIKey deeplearning4j blasbuild/cpu/libnd4j-0.8.0.fc7.3.1611.x86_64.rpm https://github.com/deeplearning4j

The upload command for a deb package built for cuda is:

./packages/push_to_bintray.sh myAPIUser myAPIKey deeplearning4j blasbuild/cuda/libnd4j-0.8.0.fc7.3.1611.x86_64.deb https://github.com/deeplearning4j

Running tests

Tests are written with gtest and run using cmake. They currently live under tests_cpu/.

There are 2 directories for running tests:

1. libnd4j_tests: These are older legacy ops tests.
2. layers_tests: This covers the newer graph operations and ops associated with samediff.

We currently use cmake or CLion to run the tests.
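
For the CPU backend, an equivalent command-line run might look like this (a sketch mirroring the CUDA steps below; -t builds the tests):

./buildnativeoperations.sh -b debug -t -j <NUMBER_OF_CORES>
./blasbuild/cpu/tests_cpu/layers_tests/runtests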

Running the tests with the CUDA backend is much the same process:

1. ./buildnativeoperations.sh -c cuda -cc <YOUR_ARCH> -b debug -t -j <NUMBER_OF_CORES>
2. ./blasbuild/cuda/tests_cpu/layers_tests/runtests (.exe on Windows)
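
Since these are standard gtest binaries, the usual gtest flags also apply; for example, to run only a subset of tests by name (the filter pattern here is illustrative):

./blasbuild/cuda/tests_cpu/layers_tests/runtests --gtest_filter='DeclarableOpsTests*'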