Eclipse Deeplearning4J: Neural Networks for Java/JVM
Eclipse Deeplearning4J is part of the Skymind Intelligence Layer, along with ND4J, DataVec, Arbiter and RL4J. It is an Apache 2.0-licensed, open-source, distributed neural net library written in Java and Scala. By contributing code to this repository, you agree to make your contribution available under an Apache 2.0 license.
Deeplearning4J integrates with Hadoop and Spark and runs on several backends that enable use of CPUs and GPUs. The aim is to create a plug-and-play solution that is more convention than configuration, and which allows for fast prototyping.
The most recent stable release in Maven Central is 0.9.1, and the current master on GitHub can be built from source.
For more info, see: https://docs.skymind.ai/docs
Using Eclipse Deeplearning4j
To get started using Deeplearning4j, please go to our Quickstart. You'll need to be familiar with a Java automated build tool such as Maven and an IDE such as IntelliJ.
Main Features
- Versatile n-dimensional array class
- GPU integration (supports NVIDIA devices from the Kepler generation onward, compute capability 3.0+. You can check your device's compute capability here.)
Modules
- datavec = library for converting images, text and CSV data into a format suitable for deep learning
- nn = core neural network structures, including MultiLayerNetwork and ComputationGraph, for designing network architectures
- core = additional functionality building on deeplearning4j-nn
- modelimport = functionality to import models from Keras
- nlp = natural language processing components including vectorizers, models, sample datasets and renderers
- scaleout = integrations for distributed and parallel training
- spark = integration with Apache Spark versions 1.3 to 1.6 (Spark 2.0 coming soon)
- parallel-wrapper = single-machine model parallelism (for multi-GPU systems, etc.)
- aws = loading data to and from AWS resources such as EC2 and S3
- ui = provides visual interfaces for tuning models. Details here
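As an illustration of the deeplearning4j-nn module listed above, here is a minimal configuration sketch using the builder API of the 0.9.x line. It assumes deeplearning4j-core and an ND4J backend are on the classpath; the layer sizes and hyperparameters are arbitrary examples, not recommendations.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.weights.WeightInit;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class MinimalNetwork {
    public static void main(String[] args) {
        // Two-layer feed-forward network: 784 inputs -> 256 hidden -> 10 outputs
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(123)                       // fixed seed for reproducible weight init
                .weightInit(WeightInit.XAVIER)
                .list()
                .layer(0, new DenseLayer.Builder()
                        .nIn(784).nOut(256)
                        .activation(Activation.RELU)
                        .build())
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(256).nOut(10)
                        .activation(Activation.SOFTMAX)
                        .build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();  // allocates parameters; net.fit(iterator) would then train it
    }
}
```

The same Builder pattern carries over to ComputationGraph for architectures with multiple inputs, outputs, or branching layers.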
Documentation
Documentation is available at deeplearning4j.org and JavaDocs. Open-source contributors can help us improve our documentation for Deeplearning4j by sending pull requests for the DL4J website here and ND4J here.
Support
We do not currently provide support via Stack Overflow. GitHub issues should focus on bug reports and feature requests. Please join the community on Gitter, where we field questions about installing the software and working with neural nets. For commercial support from Skymind, please see our contact page.
Installation
To install Deeplearning4J, see our Quickstart and below. More information can be found on the ND4J web site as well as here.
Use Maven Central Repository
Search Maven Central for deeplearning4j to get a list of dependencies.
Add the dependency information to your pom.xml file. We highly recommend downloading via Maven unless you plan to help us develop DL4J. An easy way to get up-to-date dependencies is to use the ones listed in our dl4j-examples POM.
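For example, a pom.xml fragment for the 0.9.1 release mentioned above might look like the following. The nd4j-native-platform backend shown here is one common CPU choice; the exact backend artifact to use (e.g. a CUDA variant for GPUs) depends on your setup, so treat these coordinates as a sketch and check the current versions in the dl4j-examples POM.

```xml
<dependencies>
    <!-- Core DL4J functionality -->
    <dependency>
        <groupId>org.deeplearning4j</groupId>
        <artifactId>deeplearning4j-core</artifactId>
        <version>0.9.1</version>
    </dependency>
    <!-- ND4J backend: CPU variant shown here; a CUDA platform artifact is used for GPUs -->
    <dependency>
        <groupId>org.nd4j</groupId>
        <artifactId>nd4j-native-platform</artifactId>
        <version>0.9.1</version>
    </dependency>
</dependencies>
```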
Contribute
- Check for open issues or open a fresh one to start a discussion around a feature idea or a bug.
- If you feel uncomfortable or uncertain about an issue or your changes, don't hesitate to contact us on Gitter using the link above.
- Fork the repository on GitHub to start making your changes (branch off of the master branch).
- Write a test that shows the bug was fixed or the feature works as expected.
- Note that the repository follows the Google Java style with two modifications: 120-character column wrap and 4-space indentation. You can format your code accordingly by running mvn formatter:format in the subproject you work on, by using the contrib/formatter.xml at the root of the repository to configure the Eclipse formatter, or by using the IntelliJ plugin.
- Send a pull request and bug us on Gitter until it gets merged and published. :)
- Add technical documentation on the Deeplearning4j website and fix any typos you see.
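The contribution steps above can be sketched as a command sequence. The fork URL, branch name, and subproject are illustrative placeholders, not fixed conventions.

```shell
# Fork deeplearning4j on GitHub first, then clone your fork
git clone https://github.com/<your-username>/deeplearning4j.git
cd deeplearning4j
git checkout -b fix-my-issue master   # branch off of master

# ... make your changes and add a test that demonstrates the fix ...

# Format the subproject you touched to match the project style
cd deeplearning4j-nn                  # example subproject
mvn formatter:format

# Commit, push, and open a pull request against master on GitHub
git commit -am "Describe the fix here"
git push origin fix-my-issue
```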