# ND4J: Scientific Computing on the JVM
ND4J is an Apache 2.0-licensed scientific computing library for the JVM. By contributing code to this repository, you agree to make your contribution available under an Apache 2.0 license.
It is meant to be used in production environments rather than as a research tool, which means routines are designed to run fast with minimum RAM requirements.
Please search for the latest version on search.maven.org, or use the versions displayed in https://github.com/eclipse/deeplearning4j-examples/blob/master/pom.xml.
## Main Features
- Versatile n-dimensional array object
- Multiplatform functionality including GPUs
- Linear algebra and signal processing functions
## Specifics
- Supports GPUs via the CUDA backend nd4j-cuda-7.5 and native CPU execution via nd4j-native.
- All of this is wrapped in a unifying interface.
- The API mimics the semantics of NumPy, MATLAB and scikit-learn (see the short sketch after this list).
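As a rough sketch of what this looks like in practice (assuming an ND4J backend such as nd4j-native is on the classpath; the class name `Nd4jExample` is only for illustration):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class Nd4jExample {
    public static void main(String[] args) {
        // Create a 2x2 array from a flat buffer, NumPy-style.
        INDArray a = Nd4j.create(new float[]{1f, 2f, 3f, 4f}, new int[]{2, 2});
        INDArray b = Nd4j.ones(2, 2);

        INDArray sum = a.add(b);      // element-wise addition
        INDArray product = a.mmul(b); // matrix multiplication

        System.out.println(sum);
        System.out.println(product);
    }
}
```

Because all backends share the same `INDArray` interface, code like this is intended to run unchanged on CPU or GPU; only the backend dependency changes.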
## Documentation
Documentation is available at deeplearning4j.org. Access the JavaDocs for more detail.
## Installation
There are a couple of ways to install ND4J; more information can be found on the DL4J website.
### Install from Maven Central
- Search for nd4j in the Maven Central Repository to find the available nd4j jars.
- Include the appropriate dependency in your pom.xml (an illustrative snippet follows this list).
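As an illustrative example only, a dependency entry might look like the following. `nd4j-native-platform` is one commonly used artifact, and `${nd4j.version}` is a placeholder property for the latest release found on Maven Central:

```xml
<!-- Illustrative only: pick the artifact matching your backend and the latest
     version from Maven Central. ${nd4j.version} is a placeholder property. -->
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native-platform</artifactId>
    <version>${nd4j.version}</version>
</dependency>
```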
### Clone from the GitHub Repo
https://deeplearning4j.org/docs/latest/deeplearning4j-build-from-source
## Contribute
- Check for open issues, or open a new issue to start a discussion around a feature idea or a bug.
- If you feel uncomfortable or uncertain about an issue or your changes, feel free to contact us on Gitter using the link above.
- Fork the repository on GitHub to start making your changes to the master branch (or branch off of it).
- Write a test that shows the bug was fixed or that the feature works as expected.
- Note that the repository follows the Google Java style with two modifications: 120-character column wrap and 4-space indentation. You can format your code accordingly by running `mvn formatter:format` in the subproject you work on, by using the `contrib/formatter.xml` at the root of the repository to configure the Eclipse formatter, or by using the IntelliJ plugin.
- Send a pull request, and bug us on Gitter until it gets merged and published.