ND4J: Scientific Computing on the JVM
ND4J is an Apache 2.0-licensed scientific computing library for the JVM. By contributing code to this repository, you agree to make your contribution available under an Apache 2.0 license.
It is meant to be used in production environments rather than as a research tool, which means routines are designed to run fast with minimum RAM requirements.
Please search for the latest version on search.maven.org, or use the versions displayed in https://github.com/eclipse/deeplearning4j-examples/blob/master/pom.xml.
Main Features
- Versatile n-dimensional array object
- Multiplatform functionality including GPUs
- Linear algebra and signal processing functions
Specifics
- Supports GPUs via the CUDA backend nd4j-cuda-7.5 and native CPUs via nd4j-native.
- All of this is wrapped in a unifying interface.
- The API mimics the semantics of NumPy, MATLAB and scikit-learn (see the short example below).
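As a quick illustration, here is a minimal sketch of working with ND4J arrays. It uses the standard `Nd4j` factory and `INDArray` interface; check the JavaDocs for the exact behavior in your ND4J version.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class Nd4jExample {
    public static void main(String[] args) {
        // Create a 2x2 matrix filled with ones
        INDArray ones = Nd4j.ones(2, 2);

        // Create a 2x2 matrix from a flat array and an explicit shape
        INDArray a = Nd4j.create(new float[]{1f, 2f, 3f, 4f}, new int[]{2, 2});

        // Element-wise addition and matrix multiplication
        INDArray sum = a.add(ones);      // element-wise add
        INDArray product = a.mmul(ones); // matrix-matrix multiply

        System.out.println(sum);
        System.out.println(product);
    }
}
```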
Documentation
Documentation is available at deeplearning4j.org. Access the JavaDocs for more detail.
Installation
To install ND4J, there are a couple of approaches, and more information can be found on the DL4J website.
Install from Maven Central
- Search for nd4j in the Maven Central Repository to find the available nd4j jars.
- Include the appropriate dependency in your pom.xml, as in the sketch below.
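For example, a dependency block for the CPU backend might look like the following. The version shown is only a placeholder; substitute the latest release found on search.maven.org.

```xml
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native-platform</artifactId>
    <!-- Placeholder version: replace with the latest release from Maven Central -->
    <version>1.0.0-beta6</version>
</dependency>
```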
Clone from the GitHub Repo
https://deeplearning4j.org/docs/latest/deeplearning4j-build-from-source
Contribute
- Check for open issues, or open a new issue to start a discussion around a feature idea or a bug.
- If you feel uncomfortable or uncertain about an issue or your changes, feel free to contact us on Gitter using the link above.
- Fork the repository on GitHub to start making your changes to the master branch (or branch off of it).
- Write a test that shows the bug was fixed or that the feature works as expected.
- Note that the repository follows the Google Java style with two modifications: 120-character column wrap and 4-space indentation. You can format your code to this style by running `mvn formatter:format` in the subproject you work on, by using the `contrib/formatter.xml` at the root of the repository to configure the Eclipse formatter, or by using the IntelliJ plugin.
- Send a pull request, and bug us on Gitter until it gets merged and published.