# ND4J: Scientific Computing on the JVM
ND4J is an Apache 2.0-licensed scientific computing library for the JVM. By contributing code to this repository, you agree to make your contribution available under an Apache 2.0 license.
It is meant to be used in production environments rather than as a research tool, which means routines are designed to run fast with minimum RAM requirements.
Please search for the latest version on search.maven.org, or use the versions displayed in https://github.com/deeplearning4j/dl4j-0.4-examples/blob/master/pom.xml.
## Main Features
- Versatile n-dimensional array object
- Multiplatform functionality including GPUs
- Linear algebra and signal processing functions
## Specifics
- Supports GPUs via the CUDA backend nd4j-cuda-7.5 and native CPUs via nd4j-native.
- All of this is wrapped in a unifying interface.
- The API mimics the semantics of NumPy, MATLAB, and scikit-learn.
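
As a quick illustration of that API, here is a minimal sketch (assuming an ND4J backend such as nd4j-native is on the classpath) that creates two matrices and combines them element-wise and via matrix multiplication:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class Nd4jExample {
    public static void main(String[] args) {
        // Create a 2x2 matrix filled row-wise from a flat array
        INDArray a = Nd4j.create(new float[]{1, 2, 3, 4}, new int[]{2, 2});
        // Create a 2x2 matrix of ones
        INDArray b = Nd4j.ones(2, 2);

        // Element-wise addition and matrix multiplication
        INDArray sum = a.add(b);
        INDArray product = a.mmul(b);

        System.out.println(sum);
        System.out.println(product);
    }
}
```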
## Documentation
Documentation is available at deeplearning4j.org. Access the JavaDocs for more detail.
## Installation
There are a couple of ways to install ND4J; more information can be found on the DL4J website.
### Install from Maven Central
- Search for nd4j in the Maven Central Repository to find the available nd4j jars.
- Include the appropriate dependency in your pom.xml.
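
For example, a dependency entry might look like the following sketch. The artifact shown (the nd4j-native-platform CPU backend) is one common choice, and the version is a placeholder; substitute the backend and release you actually need:

```xml
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native-platform</artifactId>
    <!-- Placeholder version: use the latest release from Maven Central -->
    <version>x.y.z</version>
</dependency>
```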
### Clone from the GitHub Repo
See https://deeplearning4j.org/buildinglocally for instructions on building ND4J locally.
## Contribute
- Check for open issues, or open a new issue to start a discussion around a feature idea or a bug.
- If you feel uncomfortable or uncertain about an issue or your changes, feel free to contact us on Gitter using the link above.
- Fork the repository on GitHub to start making your changes to the master branch (or branch off of it).
- Write a test which shows that the bug was fixed or that the feature works as expected.
- Note that the repository follows the Google Java style with two modifications: 120-character column wrap and 4-space indentation. You can format your code to this style by running `mvn formatter:format` in the subproject you work on, by using the `contrib/formatter.xml` at the root of the repository to configure the Eclipse formatter, or by using the IntelliJ plugin.
- Send a pull request, and bug us on Gitter until it gets merged and published.