cavis/nd4j
Alex Black 68ea5f3688
Dev branch merge: dev_20190606 (#7904)
* correct logsoftmax loss (#2)

* Small SameDiff listener fix (#4)

* Various fixes (#6)

* #7839 Fix for asXMatrix and tests

* #7866 EmbeddingSequenceLayer dtype fix + test

* #7856 SameDiff save/load stream methods

* #7859 RegressionEvaluation rank 4 fix + tests + axis configuration

* EvaluationBinary 3d/4d

* More evaluation 3d/4d tests

* #7847 Evaluation empty checks

* Small test fix

* #7848 Fix median edge case

* Improve DL4J samediff layer tests

* [WIP] FastText wrapper implemented (#8)

* FastText implemented

* Some fixes

* Fix shapes for wordsNearest

* Validation of input vectors

* Fixes

* Fixed test

* Thread tagged

* Some tweaks

* setContextClassLoader for DeallocatorServiceThread

* Numpy format tests (#1)

* Various fixes (#11)

* #7852 SameDiff gather fix

* #7892 SameDiff placeholder to constant conversion

* #7890 validate input rank for MLN/CG init methods

* Fix broken permute shape calculation

* Permute and gather fixes

* Tests

* #7850 LogSumExp fix + test

* Handful of test fixes

* Empty arrays with non-scalar shapes (#10)

* minor rearrangements for lambdas

* empty tensors with non-scalar shapes

* numpy empty tensors with non-scalar shapes

* few more empty tweaks

* Small fixes

* conv3d signature update

* micro fix in batchnorm mkldnn

* Import fixes

* Fix

* MKL-DNN update

* Small fill fix

* fill with empty input + test

* Fixes

* Small error improvement

* Fix

* one special test

* couple of fixes for lstm

* Rewrite TFGraphMapper.getNDArrayFromTensor to be maintainable and less error prone

* Fixes

* FP16

* Unsigned

* BFloat16

* Fill op - empty tweaks

* - couple of fixes for empty arrays construction
- stack updated

* strided slice fix

* one transform test

* provide method for reducing shapeInfo in case the input array is empty

* Fixed reduceAlongDimensions to use empty input properly.

* couple of broadcast tests

* couple of broadcast tests + tweak to make them pass

* add check of non-empty to methods producing sub-arrays

* Fixed reshapeC with zeros in shape.

* complete empty check in reduce_... legacy ops

* Concat and cumsum/prod

* Tweak to empty shape inference on import

* add empty check to the rest of reduce legacy ops

* one more test

* correct typo in evalReduceShapeInfoEmpty

* Added tests for reduce_* ops to tests with zero shapes.

* few more tests for empty reductions

* Fixed strided_slice op with empty case and tests.

* one more empty reduction test

* Fixed strided_slice test.

* add empty check to NDArray::reshapei

* infOrMax

* empty min/max with infinity tests

* made unstack work correctly with empty arrays

* few IndexReduce tests + tweaks for empty shapes

* add test for empty concat

* few tests fixed

* Validation fix for reductions on empty shapes

* Reverse fix

* Reduction shape calc fixes

* SameDiff.generateOutputVariable: don't use shape function to determine number of outputs

* Range fix

* - NDArray constructor updated for scalars/empty arrays
- few tests fixed

* More fixes

* Empty creator fixes

* concat fix

* concat fix

* TF import tests: allow 'both all NaN' and 'both all inf' to pass

* Slice, zero fraction, and reshape fixes

* transpose, gather

* Zero fraction

* scalar cast fix

* Empty reduction axis support

* few more tests fixed

* Fixed input checks to conform with TF for concat op and tests.

* few tests fixed

* matmul scalar shape fix

* Fixed check for data type and scalarity with concat to allow non-empty scalars with vector concats.

* broadcast bool fix

* few more tests

* few more tests

* correct evalReduceShapeInfoEmpty

* argmax/argmin + tests

* one more empty edge case + one more test

* argmax/argmin/realdiv_bp tweaks

* empty reshape test + fix

* Helper fixes

* Small fixes

* Gather test fix

* Gather test fix

* Small fixes

* reduce scalar zero values

* scalar mean workaround

* Remove debug code

* along dim mean workaround

* one more test

* - equalsTo() tweak for empty arrays
- one more test

* broadcast tweaks
2019-06-15 21:34:34 +10:00

README.md

ND4J: Scientific Computing on the JVM

Join the chat at https://gitter.im/deeplearning4j/deeplearning4j

ND4J is an Apache 2.0-licensed scientific computing library for the JVM. By contributing code to this repository, you agree to make your contribution available under an Apache 2.0 license.

It is meant to be used in production environments rather than as a research tool, which means routines are designed to run fast with minimum RAM requirements.

Please search for the latest version on search.maven.org.

Or use the versions displayed in: https://github.com/deeplearning4j/dl4j-0.4-examples/blob/master/pom.xml


Main Features

  • Versatile n-dimensional array object
  • Multiplatform functionality including GPUs
  • Linear algebra and signal processing functions (see the short example below)
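
As a quick illustration of the n-dimensional array object and the linear algebra routines, here is a minimal sketch. It is illustrative only: it assumes an ND4J backend such as nd4j-native is on the classpath, and the class name is made up for the example.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class FeaturesSketch {
    public static void main(String[] args) {
        // A 2x3 array of ones and a 3x2 array of uniform random values
        INDArray ones = Nd4j.ones(2, 3);
        INDArray rand = Nd4j.rand(3, 2);

        // Linear algebra: matrix multiplication, then a transpose
        INDArray product = ones.mmul(rand);        // shape [2, 2]
        INDArray transposed = product.transpose();

        System.out.println(product);
        System.out.println(transposed);
    }
}
```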

Specifics

  • Supports GPUs via the CUDA backend nd4j-cuda-7.5 and native CPUs via nd4j-native.
  • All of this is wrapped in a unifying interface.
  • The API mimics the semantics of NumPy, MATLAB, and scikit-learn (a short sketch follows below).
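
Below is a rough sketch of that NumPy-like feel; it is not taken from the ND4J documentation, and the class name is invented for the example. The same code runs unchanged on the native and CUDA backends.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class NumpyStyleSketch {
    public static void main(String[] args) {
        // A 2x3 array: [[1,2,3],[4,5,6]]
        INDArray x = Nd4j.create(new float[]{1, 2, 3, 4, 5, 6}, new int[]{2, 3});

        // Elementwise arithmetic with scalar broadcasting, much like NumPy
        INDArray y = x.add(10).mul(2);

        // Reductions along a dimension (axis)
        INDArray colSums  = x.sum(0);   // shape [3]
        INDArray rowMeans = x.mean(1);  // shape [2]

        System.out.println(y);
        System.out.println(colSums);
        System.out.println(rowMeans);
    }
}
```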

Documentation

Documentation is available at deeplearning4j.org. Access the JavaDocs for more detail.


Installation

There are a couple of ways to install ND4J; more information can be found on the DL4J website.

Install from Maven Central

  1. Search for nd4j in the Maven Central Repository to find the available nd4j jars.
  2. Include the appropriate dependency in your pom.xml (a quick sanity check is sketched below).
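
For example, assuming the nd4j-native-platform artifact (group org.nd4j) has been added to your pom.xml at the version found on Maven Central, a minimal sanity check along these lines should print a small array and the name of the active backend (the class name is just for illustration):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class InstallCheck {
    public static void main(String[] args) {
        // If a backend was found on the classpath, this creates and prints a 2x2 array
        INDArray arr = Nd4j.create(new float[]{1f, 2f, 3f, 4f}, new int[]{2, 2});
        System.out.println(arr);
        System.out.println("Backend: " + Nd4j.getBackend().getClass().getSimpleName());
    }
}
```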

Clone from the GitHub Repo

https://deeplearning4j.org/buildinglocally

Contribute

  1. Check for open issues, or open a new issue to start a discussion around a feature idea or a bug.

  2. If you feel uncomfortable or uncertain about an issue or your changes, feel free to contact us on Gitter using the link above.

  3. Fork the repository on GitHub to start making your changes to the master branch (or branch off of it).

  4. Write a test, which shows that the bug was fixed or that the feature works as expected.

  5. Note that the repository follows the Google Java style with two modifications: 120-character column wrap and 4-space indentation. You can format your code accordingly by running mvn formatter:format in the subproject you work on, by using the contrib/formatter.xml at the root of the repository to configure the Eclipse formatter, or by using the IntelliJ plugin.

  6. Send a pull request, and bug us on Gitter until it gets merged and published.