commit a66e03355e
Merge remote-tracking branch 'fork/master'
@@ -5,7 +5,7 @@ Thanks for your interest in DL4J. Our goal is to bring fast, open-source deep le

## Getting Started

-Deeplearning4j's [open issues are here](https://github.com/deeplearning4j/deeplearning4j/issues). In time, we'll tag issues that would make a good first pull request for new contributors. An easy way to get started helping the project is to *file an issue*. You can do that on the Deeplearning4j issues page by clicking on the green button at the right. Issues can include bugs to fix, features to add, or documentation that looks outdated.
+Deeplearning4j's [open issues are here](https://github.com/eclipse/deeplearning4j/issues). In time, we'll tag issues that would make a good first pull request for new contributors. An easy way to get started helping the project is to *file an issue*. You can do that on the Deeplearning4j issues page by clicking on the green button at the right. Issues can include bugs to fix, features to add, or documentation that looks outdated.

Note that you will need to [build dl4j from source](https://deeplearning4j.org/docs/latest/deeplearning4j-build-from-source).
README.md
@@ -2,17 +2,17 @@

Welcome to the new monorepo of Deeplearning4j that contains the source code for all the following projects, in addition to the original repository of Deeplearning4j moved to [deeplearning4j](deeplearning4j):

-* https://github.com/deeplearning4j/libnd4j
-* https://github.com/deeplearning4j/nd4j
-* https://github.com/deeplearning4j/datavec
-* https://github.com/deeplearning4j/arbiter
-* https://github.com/deeplearning4j/nd4s
-* https://github.com/deeplearning4j/gym-java-client
-* https://github.com/deeplearning4j/rl4j
-* https://github.com/deeplearning4j/scalnet
-* https://github.com/deeplearning4j/pydl4j
-* https://github.com/deeplearning4j/jumpy
-* https://github.com/deeplearning4j/pydatavec
+* https://github.com/eclipse/deeplearning4j/tree/master/libnd4j
+* https://github.com/eclipse/deeplearning4j/tree/master/nd4j
+* https://github.com/eclipse/deeplearning4j/tree/master/datavec
+* https://github.com/eclipse/deeplearning4j/tree/master/arbiter
+* https://github.com/eclipse/deeplearning4j/tree/master/nd4s
+* https://github.com/eclipse/deeplearning4j/tree/master/gym-java-client
+* https://github.com/eclipse/deeplearning4j/tree/master/rl4j
+* https://github.com/eclipse/deeplearning4j/tree/master/scalnet
+* https://github.com/eclipse/deeplearning4j/tree/master/pydl4j
+* https://github.com/eclipse/deeplearning4j/tree/master/jumpy
+* https://github.com/eclipse/deeplearning4j/tree/master/pydatavec

To build everything, we can use commands like
@@ -30,6 +30,6 @@ mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=li

An example of GPU "CC" or compute capability is 61 for Titan X Pascal.

# Want some examples?
-We have a separate repository with various examples available: https://github.com/deeplearning4j/dl4j-examples
+We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples

-In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/deeplearning4j/dl4j-examples/tree/master/tutorials
+In the examples repo, you'll also find a tutorial series in Zeppelin: https://github.com/eclipse/deeplearning4j-examples/tree/master/tutorials
@@ -1,15 +0,0 @@
-## Contribute
-
-1. Check for open issues, or open a new issue to start a discussion around a feature idea or a bug.
-2. If you feel uncomfortable or uncertain about an issue or your changes, feel free to contact us on Gitter using the link above.
-3. Fork [the repository](https://github.com/deeplearning4j/Arbiter.git) on GitHub to start making your changes to the **master** branch (or branch off of it).
-4. Write a test, which shows that the bug was fixed or that the feature works as expected.
-5. Note the repository follows
-the [Google Java style](https://google.github.io/styleguide/javaguide.html)
-with two modifications: 120-char column wrap and 4-spaces indentation. You
-can format your code to this format by typing `mvn formatter:format` in the
-subproject you work on, by using the `contrib/formatter.xml` at the root of
-the repository to configure the Eclipse formatter, or by [using the IntelliJ
-plugin](https://github.com/HPI-Information-Systems/Metanome/wiki/Installing-the-google-styleguide-settings-in-intellij-and-eclipse).
-
-6. Send a pull request, and bug us on Gitter until it gets merged and published.
@@ -1,19 +0,0 @@
-#### Issue Description
-
-Please describe your issue, along with:
-- expected behavior
-- encountered behavior
-
-#### Version Information
-
-Please indicate relevant versions, including, if relevant:
-
-* Deeplearning4j version
-* platform information (OS, etc)
-* CUDA version, if used
-* NVIDIA driver version, if in use
-
-#### Contributing
-
-If you'd like to help us fix the issue by contributing some code, but would
-like guidance or help in doing so, please mention it!
@@ -1,10 +0,0 @@
-## What changes were proposed in this pull request?
-
-(Please fill in changes proposed in this fix)
-
-## How was this patch tested?
-
-(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
-
-Please review
-https://github.com/deeplearning4j/deeplearning4j/blob/master/CONTRIBUTING.md before opening a pull request.
@@ -1,15 +0,0 @@
-## Contribute
-
-1. Check for open issues, or open a new issue to start a discussion around a feature idea or a bug.
-2. If you feel uncomfortable or uncertain about an issue or your changes, feel free to contact us on Gitter using the link above.
-3. Fork [the repository](https://github.com/deeplearning4j/DataVec.git) on GitHub to start making your changes to the **master** branch (or branch off of it).
-4. Write a test, which shows that the bug was fixed or that the feature works as expected.
-5. Note the repository follows
-the [Google Java style](https://google.github.io/styleguide/javaguide.html)
-with two modifications: 120-char column wrap and 4-spaces indentation. You
-can format your code to this format by typing `mvn formatter:format` in the
-subproject you work on, by using the `contrib/formatter.xml` at the root of
-the repository to configure the Eclipse formatter, or by [using the IntelliJ
-plugin](https://github.com/HPI-Information-Systems/Metanome/wiki/Installing-the-google-styleguide-settings-in-intellij-and-eclipse).
-
-6. Send a pull request, and bug us on Gitter until it gets merged and published.
@@ -1,19 +0,0 @@
-#### Issue Description
-
-Please describe your issue, along with:
-- expected behavior
-- encountered behavior
-
-#### Version Information
-
-Please indicate relevant versions, including, if relevant:
-
-* Deeplearning4j version
-* platform information (OS, etc)
-* CUDA version, if used
-* NVIDIA driver version, if in use
-
-#### Contributing
-
-If you'd like to help us fix the issue by contributing some code, but would
-like guidance or help in doing so, please mention it!
@@ -1,10 +0,0 @@
-## What changes were proposed in this pull request?
-
-(Please fill in changes proposed in this fix)
-
-## How was this patch tested?
-
-(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
-
-Please review
-https://github.com/deeplearning4j/deeplearning4j/blob/master/CONTRIBUTING.md before opening a pull request.
@@ -32,7 +32,7 @@ static data and for sequences (time series). Such operations can be executed on

Apart from providing readers for classic data formats, DataVec also provides an interface. So if you wanted to
ingest specific custom data, you wouldn't have to build the whole pipeline; you would just have to write the very first step. For example, if you describe through the API how your data fits into a common format that complies with the interface, DataVec
-would return a list of Writables for each record. You'll find more detail on the API in the corresponding [module](https://github.com/deeplearning4j/DataVec/tree/master/datavec-api).
+would return a list of Writables for each record. You'll find more detail on the API in the corresponding [module](https://github.com/eclipse/deeplearning4j/tree/master/datavec/datavec-api).

Another thing you can do with DataVec is data cleaning. Instead of having clean, ready-to-go data, let's say you start with data in different forms or from different sources. You might need to do sampling, filtering, or several incredibly messy ETL tasks needed to prepare data in the real world. DataVec offers filters and transformations that help with curating, preparing and massaging your data. It leverages Apache Spark to do this at scale.
@@ -51,7 +51,7 @@ to be locked into a single tool, and using [Apache Flink](https://flink.apache.o
## Examples

Examples for using DataVec are available
-here: [https://github.com/deeplearning4j/dl4j-examples](https://github.com/deeplearning4j/dl4j-examples)
+here: [https://github.com/eclipse/deeplearning4j-examples](https://github.com/eclipse/deeplearning4j-examples)


---
@@ -91,7 +91,7 @@ It's useful to know which maintainers to contact to get information on a particu
1. Check for open issues, or open a new issue to start a discussion around a feature idea or a bug.
2. If you feel uncomfortable or uncertain about an issue or your changes, feel free to contact us on Gitter using the
link above.
-3. Fork [the repository](https://github.com/deeplearning4j/datavec.git) on GitHub to start making your changes.
+3. Fork [the repository](https://github.com/eclipse/deeplearning4j.git) on GitHub to start making your changes.
4. Write a test, which shows that the bug was fixed or that the feature works as expected.
5. Note the repository follows the [Google Java style](https://google.github.io/styleguide/javaguide.html) with two
modifications: 120-char column wrap and 4-spaces indentation. You can format your code to this format by typing `mvn
@@ -1 +0,0 @@
-../CONTRIBUTING.md
@@ -1,19 +0,0 @@
-#### Issue Description
-
-Please describe your issue, along with:
-- expected behavior
-- encountered behavior
-
-#### Version Information
-
-Please indicate relevant versions, including, if relevant:
-
-* Deeplearning4j version
-* platform information (OS, etc)
-* CUDA version, if used
-* NVIDIA driver version, if in use
-
-#### Contributing
-
-If you'd like to help us fix the issue by contributing some code, but would
-like guidance or help in doing so, please mention it!
@@ -1,16 +0,0 @@
-## What changes were proposed in this pull request?
-
-(Please fill in changes proposed in this fix)
-
-## How was this patch tested?
-
-(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
-
-## Quick checklist
-
-The following checklist helps ensure your PR is complete:
-
-- [ ] Reviewed the [Contributing Guidelines](https://github.com/deeplearning4j/deeplearning4j/blob/master/CONTRIBUTING.md) and followed the steps within.
-- [ ] Created tests for any significant new code additions.
-- [ ] Relevant tests for your changes are passing.
-- [ ] Ran mvn formatter:format (see [formatter instructions](http://code.revelc.net/formatter-maven-plugin/examples.html#Setting_Source_Files) for targeting your specific files).
@@ -1,28 +0,0 @@
-# Configuration for lock-threads - https://github.com/dessant/lock-threads
-
-# Number of days of inactivity before a closed issue or pull request is locked
-daysUntilLock: 45
-
-# Issues and pull requests with these labels will not be locked. Set to `[]` to disable
-exemptLabels: []
-
-# Label to add before locking, such as `outdated`. Set to `false` to disable
-lockLabel: Outdated
-
-# Comment to post before locking. Set to `false` to disable
-lockComment: >
-  This thread has been automatically locked since there has not been
-  any recent activity after it was closed. Please open a new issue for
-  related bugs.
-
-# Limit to only `issues` or `pulls`
-# only: issues
-
-# Optionally, specify configuration settings just for `issues` or `pulls`
-# issues:
-#   exemptLabels:
-#     - help-wanted
-#   lockLabel: outdated
-
-# pulls:
-#   daysUntilLock: 30
@@ -7,14 +7,14 @@ Welcome, stranger. You probably just joined a Gitter channel for Deeplearning4j.
3. We're doing our best to improve the documentation, but it's not perfect. We welcome ideas about how to improve it! Writing good docs is our responsibility; reading them is yours. Please consult the docs before you post in the channel. A little effort from you will earn a lot of respect from us. (DL4J is backed by a startup, Skymind, and we are serving customers as well as the open-source community, which feels like a lot sometimes.)
  * User guide: [https://deeplearning4j.org/docs/latest/](https://deeplearning4j.org/docs/latest/)
  * API: [https://deeplearning4j.org/api/latest/](https://deeplearning4j.org/api/latest/)
-4. We welcome new contributors! Once you get familiar with the libs, if you see how our code can be improved, please file an issue or consider sending us a pull request with the new feature. [https://github.com/deeplearning4j/deeplearning4j/issues](https://github.com/deeplearning4j/deeplearning4j/issues)
+4. We welcome new contributors! Once you get familiar with the libs, if you see how our code can be improved, please file an issue or consider sending us a pull request with the new feature. [https://github.com/eclipse/deeplearning4j/issues](https://github.com/eclipse/deeplearning4j/issues)

Many of the questions asked on the Deeplearning4j Gitter support channel have been answered already in our documentation or can be easily Googled. To respect the Skymind team's time, Deeplearning4j users are kindly asked to remember a few things:

1. Please use Google before you ask a question. The Deeplearning4j Gitter channel should not be used as a human-enhanced search engine. (We promise that you'll end up with a better open-source framework if you only ask us the hard questions...) Please remember that DL4J Gitter channels are devoted to DL4J and other Skymind libraries specifically. We can't help with other frameworks or tools, which have their own docs and communities.
2. If you don't receive an immediate response, please post again and flag your question with the Gitter ID of one of the people in the channel answering questions. If you do receive a response and link, please spend some time reading and trying to understand the response and additional resources before you ask the same question again.
3. To answer questions, we need to know about your OS, Java version, Maven version and we may even need to see your code and stacktrace. When we ask for code, please send us a gist using https://gist.github.com/. If you can't give us code, in many cases we can't help you. (It's also a great sign of commitment on your part!)
-4. We're not perfect, and neither is our documentation. If you find ways for us to improve, please open an issue [here](https://github.com/deeplearning4j/deeplearning4j/issues) or email us at help@skymind.io and let us know what we need to fix.
+4. We're not perfect, and neither is our documentation. If you find ways for us to improve, please open an issue [here](https://github.com/eclipse/deeplearning4j/issues) or email us at help@skymind.io and let us know what we need to fix.
5. Neural nets aren't magic. They are inherently hard to tune. We get many questions from beginners on how to tune neural nets. If you must post a question related to tuning, please post to: https://gitter.im/deeplearning4j/deeplearning4j/tuninghelp

### Guidelines for Help with Neural Network Tuning:
@@ -26,7 +26,7 @@ Providing help for tuning neural networks can be quite time consuming for the de
1. Before posting a question, please first read both [https://deeplearning4j.org/docs/latest/deeplearning4j-troubleshooting-training](https://deeplearning4j.org/docs/latest/deeplearning4j-troubleshooting-training) and [https://deeplearning4j.org/docs/latest/deeplearning4j-nn-visualization](https://deeplearning4j.org/docs/latest/deeplearning4j-nn-visualization). You may also find an answer to your question on one of the other pages: [https://deeplearning4j.org/docs/latest/](https://deeplearning4j.org/docs/latest/)
2. We generally won't answer questions that can be easily answered by searching Google or reading something like Andrej Karpathy's Stanford course on convolutional networks [http://cs231n.github.io/](http://cs231n.github.io/) or Ian Goodfellow and Yoshua Bengio's deep learning book [http://www.deeplearningbook.org/](http://www.deeplearningbook.org/)
3. For some questions/issues, it may not be possible to provide a short/simple answer to your question. In these cases, we might decide to answer your question by improving our documentation, instead of answering your question directly in Gitter. Please understand that improving our documentation helps *everyone* and is a better use of the team's time than answering one-off questions.
-4. You should generally feel free to open issues ([https://github.com/deeplearning4j/deeplearning4j/issues](https://github.com/deeplearning4j/deeplearning4j/issues)) if you feel our documentation (troubleshooting/tuning) is lacking or doesn't answer common questions.
+4. You should generally feel free to open issues ([https://github.com/eclipse/deeplearning4j/issues](https://github.com/eclipse/deeplearning4j/issues)) if you feel our documentation (troubleshooting/tuning) is lacking or doesn't answer common questions.
5. Upon entering the room, please do more than say "hi". Information-rich questions and comments are appreciated. Please keep the content relevant. Please note the below channels for different parts of the conversation.
  * Contributors/building from source: [https://gitter.im/deeplearning4j/deeplearning4j/earlyadopters](https://gitter.im/deeplearning4j/deeplearning4j/earlyadopters)
@@ -38,7 +38,7 @@ To get started using Deeplearning4j, please go to our [Quickstart](https://deepl

---
## Documentation
-Documentation is available at [deeplearning4j.org](https://deeplearning4j.org/overview) and [JavaDocs](https://deeplearning4j.org/api/latest/). Open-source contributors can help us improve our documentation for Deeplearning4j by sending pull requests for the DL4J website [here](https://github.com/deeplearning4j/deeplearning4j/tree/gh-pages) and ND4J [here](https://github.com/deeplearning4j/nd4j/tree/gh-pages).
+Documentation is available at [deeplearning4j.org](https://deeplearning4j.org/overview) and [JavaDocs](https://deeplearning4j.org/api/latest/). Open-source contributors can help us improve our documentation for Deeplearning4j by sending pull requests for the DL4J website [here](https://github.com/eclipse/deeplearning4j-docs).

## Support
@@ -52,7 +52,7 @@ To install Deeplearning4J, see our [Quickstart](https://deeplearning4j.org/docs/

Search Maven Central for [deeplearning4j](https://search.maven.org/#search%7Cga%7C1%7Cdeeplearning4j) to get a list of dependencies.

-Add the dependency information to your `pom.xml` file. **We highly recommend downloading via Maven unless you plan to help us develop DL4J.** An easy way to get up-to-date dependencies is to use the ones listed in our [dl4j-examples POM](https://github.com/deeplearning4j/dl4j-examples/blob/master/pom.xml).
+Add the dependency information to your `pom.xml` file. **We highly recommend downloading via Maven unless you plan to help us develop DL4J.** An easy way to get up-to-date dependencies is to use the ones listed in our [dl4j-examples POM](https://github.com/eclipse/deeplearning4j-examples/blob/master/pom.xml).

<!--
#### Yum Install / Load RPM (Fedora or CentOS)
@@ -80,9 +80,9 @@ Note, be sure to install the ND4J modules you need first, especially the backend
---
## Contribute

-1. Check for [open issues](https://github.com/deeplearning4j/deeplearning4j/issues) or open a fresh one to start a discussion around a feature idea or a bug.
+1. Check for [open issues](https://github.com/eclipse/deeplearning4j/issues) or open a fresh one to start a discussion around a feature idea or a bug.
2. If you feel uncomfortable or uncertain about an issue or your changes, don't hesitate to contact us on Gitter using the link above.
-3. Fork [the repository](https://github.com/deeplearning4j/deeplearning4j.git)
+3. Fork [the repository](https://github.com/eclipse/deeplearning4j.git)
on GitHub to start making your changes (branch off of the master branch).
4. Write a test that shows the bug was fixed or the feature works as expected.
5. Note the repository follows
@@ -93,4 +93,4 @@ Note, be sure to install the ND4J modules you need first, especially the backend
the repository to configure the Eclipse formatter, or by [using the IntelliJ
plugin](https://github.com/HPI-Information-Systems/Metanome/wiki/Installing-the-google-styleguide-settings-in-intellij-and-eclipse).
6. Send a pull request and bug us on Gitter until it gets merged and published. :)
-7. Add technical documentation on the [Deeplearning4j website](https://github.com/deeplearning4j/deeplearning4j/tree/gh-pages) and fix any typos you see.
+7. Add technical documentation on the [Deeplearning4j website](https://github.com/eclipse/deeplearning4j/tree/gh-pages) and fix any typos you see.
@@ -5,7 +5,7 @@

Run `./gen_all_docs.sh` to generate documentation from source for all supported projects. For each documentation module, files will be put into a `doc_sources` folder where they are staged for copying to the primary docs repository. Note that the autogen docs require Python 2.

To deploy a new version of documentation, first make sure to set `$DL4J_DOCS_DIR` to your local copy of
-https://github.com/deeplearning4j/deeplearning4j-docs and set `$DL4J_VERSION` to a URI-friendly version string such as `v100-RC` (note the lack of decimals). Then run `./copy-to-dl4j-docs.sh`. This puts documentation
+https://github.com/eclipse/deeplearning4j-docs and set `$DL4J_VERSION` to a URI-friendly version string such as `v100-RC` (note the lack of decimals). Then run `./copy-to-dl4j-docs.sh`. This puts documentation
into the right folders and you can use `git` to create a PR and update the live docs.

The structure of this project (template files, generating code, mkdocs YAML) is closely aligned
@@ -24,7 +24,7 @@ This video describes the conversion of image data to a vector.
<iframe width="420" height="315" src="https://www.youtube.com/embed/EHHtyRKQIJ0" frameborder="0" allowfullscreen></iframe>

## Key Aspects
-- [DataVec](https://github.com/deeplearning4j/DataVec) uses an input/output format system (similar in some ways to how Hadoop MapReduce uses InputFormat to determine InputSplits and RecordReaders, DataVec also provides RecordReaders to Serialize Data)
+- [DataVec](https://github.com/eclipse/deeplearning4j/tree/master/datavec) uses an input/output format system (similar in some ways to how Hadoop MapReduce uses InputFormat to determine InputSplits and RecordReaders, DataVec also provides RecordReaders to Serialize Data)
- Designed to support all major types of input data (text, CSV, audio, image and video) with these specific input formats
- Uses an output format system to specify an implementation-neutral type of vector format (SVMLight, etc.)
- Can be extended for specialized input formats (such as exotic image formats); i.e. You can write your own custom input format and let the rest of the codebase handle the transformation pipeline
@@ -62,7 +62,7 @@ A video tutorial of a simple DataVec transform along with code is available belo

## Example Java Code

-Our [examples](https://github.com/deeplearning4j/dl4j-examples) include a collection of DataVec examples.
+Our [examples](https://github.com/eclipse/deeplearning4j-examples) include a collection of DataVec examples.

<!-- Note to Tom, write DataVec setup content
@@ -90,7 +90,7 @@ recordReader.initialize(new FileSplit(new File(labeledPath)));

The RecordReader is a class in DataVec that helps convert the byte-oriented input into data that's oriented toward a record; i.e. a collection of elements that are fixed in number and indexed with a unique ID. Converting data to records is the process of vectorization. The record itself is a vector, each element of which is a feature.

-The [ImageRecordReader](https://github.com/deeplearning4j/DataVec/blob/a64389c08396bb39626201beeabb7c4d5f9288f9/datavec-data/datavec-data-image/src/main/java/org/datavec/image/recordreader/ImageRecordReader.java) is a subclass of the RecordReader and is built to automatically take in 28 x 28 pixel images. Thus, LFW images are scaled to 28 pixels x 28 pixels. You can change dimensions to match your custom images by changing the parameters fed to the ImageRecordReader, as long as you make sure to adjust the `nIn` hyperparameter, which will be equal to the product of image height x image width.
+The [ImageRecordReader](https://github.com/eclipse/deeplearning4j/tree/master/datavec/blob/a64389c08396bb39626201beeabb7c4d5f9288f9/datavec-data/datavec-data-image/src/main/java/org/datavec/image/recordreader/ImageRecordReader.java) is a subclass of the RecordReader and is built to automatically take in 28 x 28 pixel images. Thus, LFW images are scaled to 28 pixels x 28 pixels. You can change dimensions to match your custom images by changing the parameters fed to the ImageRecordReader, as long as you make sure to adjust the `nIn` hyperparameter, which will be equal to the product of image height x image width.

Other parameters shown above include `true`, which instructs the reader to append a label to the record, and `labels`, which is the array of supervised values (e.g. targets) used to validate neural net model results. Here are all the RecordReader extensions that come pre-built with DataVec (you can find them by right-clicking on `RecordReader` in IntelliJ, clicking `Go To` in the drop-down menu, and selecting `Implementations`):
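As a rough sketch of that sizing rule — using the newer ImageRecordReader constructor that takes a label generator, rather than the `true`/`labels` arguments described above, and with dimensions chosen arbitrarily — custom image sizes and the matching `nIn` might look like:

``` java
import org.datavec.api.io.labels.ParentPathLabelGenerator;
import org.datavec.api.split.FileSplit;
import org.datavec.image.recordreader.ImageRecordReader;

import java.io.File;

int height = 56;              // custom image height
int width = 56;               // custom image width
int channels = 1;             // greyscale

// Label generator assumed here; older examples pass a boolean and a label list instead
ImageRecordReader recordReader =
        new ImageRecordReader(height, width, channels, new ParentPathLabelGenerator());
recordReader.initialize(new FileSplit(new File(labeledPath)));

// The first layer's nIn must match the flattened image size
int nIn = height * width * channels;
```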
@@ -18,7 +18,7 @@ In the ParagraphVectors builder pattern, the `labels()` method points to the lab
.labels(Arrays.asList("negative", "neutral","positive"))
```

-Here's a full working example of [classification with paragraph vectors](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/paragraphvectors/ParagraphVectorsClassifierExample.java):
+Here's a full working example of [classification with paragraph vectors](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/paragraphvectors/ParagraphVectorsClassifierExample.java):

``` java
public void testDifferentLabels() throws Exception {
@@ -16,7 +16,7 @@ Deeplearning4j's NLP relies on [ClearTK](https://cleartk.github.io/cleartk/), an

There are several steps involved in processing natural language. The first is to iterate over your corpus to create a list of documents, which can be as short as a tweet, or as long as a newspaper article. This is performed by a SentenceIterator, which will appear like this:

-<script src="https://gist-it.appspot.com/https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/word2vec/Word2VecRawTextExample.java?slice=33:41"></script>
+<script src="https://gist-it.appspot.com/https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/word2vec/Word2VecRawTextExample.java?slice=33:41"></script>

The SentenceIterator encapsulates a corpus or text, organizing it, say, as one Tweet per line. It is responsible for feeding text piece by piece into your natural language processor. The SentenceIterator is not analogous to a similarly named class, the DatasetIterator, which creates a dataset for training a neural net. Instead it creates a collection of strings by segmenting a corpus.
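If the embedded gist does not render, a minimal sketch of a SentenceIterator over a one-sentence-per-line file (the file name is a placeholder) looks roughly like this:

``` java
import org.deeplearning4j.text.sentenceiterator.BasicLineIterator;
import org.deeplearning4j.text.sentenceiterator.SentenceIterator;

// "raw_sentences.txt" is assumed to contain one sentence (or tweet) per line
SentenceIterator iter = new BasicLineIterator("raw_sentences.txt");
while (iter.hasNext()) {
    String sentence = iter.nextSentence();
    // each sentence is handed downstream to the tokenizer / vectorizer
}
```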
@@ -24,11 +24,11 @@ The SentenceIterator encapsulates a corpus or text, organizing it, say, as one T

A Tokenizer further segments the text at the level of single words, also alternatively as n-grams. ClearTK contains the underlying tokenizers, such as parts of speech (PoS) and parse trees, which allow for both dependency and constituency parsing, like that employed by a recursive neural tensor network (RNTN).

-A Tokenizer is created and wrapped by a [TokenizerFactory](https://github.com/deeplearning4j/deeplearning4j/blob/6f027fd5075e3e76a38123ae5e28c00c17db4361/deeplearning4j-scaleout/deeplearning4j-nlp/src/main/java/org/deeplearning4j/text/tokenization/tokenizerfactory/UimaTokenizerFactory.java). The default tokens are words separated by spaces. The tokenization process also involves some machine learning to differentiate between ambiguous symbols like ".", which can end a sentence or abbreviate words such as Mr. and vs.
+A Tokenizer is created and wrapped by a [TokenizerFactory](https://github.com/eclipse/deeplearning4j/blob/6f027fd5075e3e76a38123ae5e28c00c17db4361/deeplearning4j-scaleout/deeplearning4j-nlp/src/main/java/org/deeplearning4j/text/tokenization/tokenizerfactory/UimaTokenizerFactory.java). The default tokens are words separated by spaces. The tokenization process also involves some machine learning to differentiate between ambiguous symbols like ".", which can end a sentence or abbreviate words such as Mr. and vs.

Both Tokenizers and SentenceIterators work with Preprocessors to deal with anomalies in messy text like Unicode, and to render such text, say, as lowercase characters uniformly.

-<script src="https://gist-it.appspot.com/https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/word2vec/Word2VecRawTextExample.java?slice=43:57"></script>
+<script src="https://gist-it.appspot.com/https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/word2vec/Word2VecRawTextExample.java?slice=43:57"></script>
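For readers without the embedded example, a typical (assumed-default) pairing of a tokenizer factory with a preprocessor looks roughly like this:

``` java
import org.deeplearning4j.text.tokenization.tokenizer.preprocessor.CommonPreprocessor;
import org.deeplearning4j.text.tokenization.tokenizerfactory.DefaultTokenizerFactory;
import org.deeplearning4j.text.tokenization.tokenizerfactory.TokenizerFactory;

// Whitespace tokenization plus a preprocessor that lowercases tokens and strips punctuation
TokenizerFactory tokenizerFactory = new DefaultTokenizerFactory();
tokenizerFactory.setTokenPreProcessor(new CommonPreprocessor());
```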
@@ -38,6 +38,6 @@ Each document has to be tokenized to create a vocab, the set of words that matte

The vocab cache stores metadata for methods such as Word2vec and Bag of Words, which treat words in radically different ways. Word2vec creates representations of words, or neural word embeddings, in the form of vectors that are hundreds of coefficients long. Those coefficients help neural nets predict the likelihood of a word appearing in any given context; for example, after another word. Here's Word2vec, configured:

-<script src="https://gist-it.appspot.com/https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/word2vec/Word2VecRawTextExample.java"></script>
+<script src="https://gist-it.appspot.com/https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/word2vec/Word2VecRawTextExample.java"></script>

Once you obtain word vectors, you can feed them into a deep net for classification, prediction, sentiment analysis and the like.
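In outline, and with placeholder hyperparameter values rather than recommendations, such a configuration follows the Word2Vec builder pattern (reusing the `iter` and `tokenizerFactory` objects sketched earlier):

``` java
import org.deeplearning4j.models.word2vec.Word2Vec;

// iter and tokenizerFactory are the SentenceIterator and TokenizerFactory sketched above
Word2Vec vec = new Word2Vec.Builder()
        .minWordFrequency(5)       // ignore words that appear fewer than 5 times
        .layerSize(100)            // dimensionality of each word vector
        .windowSize(5)             // context window used during training
        .seed(42)
        .iterate(iter)
        .tokenizerFactory(tokenizerFactory)
        .build();

vec.fit();                         // train the embeddings
```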
@@ -32,7 +32,7 @@ Why? Because words are simply discrete states like the other data mentioned abov

The purpose and usefulness of Word2vec is to group the vectors of similar words together in vectorspace. That is, it detects similarities mathematically. Word2vec creates vectors that are distributed numerical representations of word features, features such as the context of individual words. It does so without human intervention.

-Given enough data, usage and contexts, Word2vec can make highly accurate guesses about a word’s meaning based on past appearances. Those guesses can be used to establish a word's association with other words (e.g. "man" is to "boy" what "woman" is to "girl"), or cluster documents and classify them by topic. Those clusters can form the basis of search, [sentiment analysis](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/word2vecsentiment/Word2VecSentimentRNN.java) and recommendations in such diverse fields as scientific research, legal discovery, e-commerce and customer relationship management.
+Given enough data, usage and contexts, Word2vec can make highly accurate guesses about a word’s meaning based on past appearances. Those guesses can be used to establish a word's association with other words (e.g. "man" is to "boy" what "woman" is to "girl"), or cluster documents and classify them by topic. Those clusters can form the basis of search, [sentiment analysis](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/word2vecsentiment/Word2VecSentimentRNN.java) and recommendations in such diverse fields as scientific research, legal discovery, e-commerce and customer relationship management.

The output of the Word2vec neural net is a vocabulary in which each item has a vector attached to it, which can be fed into a deep-learning net or simply queried to detect relationships between words.
@@ -216,7 +216,7 @@ This configuration accepts a number of hyperparameters. A few require some expla
* *tokenizer* feeds it the words from the current batch.
* *vec.fit()* tells the configured net to begin training.

-An example for [uptraining your previously trained word vectors is here](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/word2vec/Word2VecUptrainingExample.java).
+An example for [uptraining your previously trained word vectors is here](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/word2vec/Word2VecUptrainingExample.java).

### <a name="eval">Evaluating the Model, Using Word2vec</a>
@@ -253,7 +253,7 @@ With `vec.wordsNearest("word1", numWordsNearest)`, the words printed to the scre

### Visualizing the Model

-We rely on [TSNE](https://lvdmaaten.github.io/tsne/) to reduce the dimensionality of word feature vectors and project words into a two or three-dimensional space. The full [DL4J/ND4J example for TSNE is here](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/tsne/TSNEStandardExample.java).
+We rely on [TSNE](https://lvdmaaten.github.io/tsne/) to reduce the dimensionality of word feature vectors and project words into a two or three-dimensional space. The full [DL4J/ND4J example for TSNE is here](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/tsne/TSNEStandardExample.java).

``` java
Nd4j.setDataType(DataBuffer.Type.DOUBLE);
@@ -363,11 +363,11 @@ This n-gram is then fed into a neural network to learn the significance of a giv

### <a name="code">A Working Example</a>

-**Please note** : The code below may be outdated. For updated examples, please see our [dl4j-examples repository on Github](https://github.com/deeplearning4j/dl4j-examples/tree/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp).
+**Please note** : The code below may be outdated. For updated examples, please see our [dl4j-examples repository on Github](https://github.com/eclipse/deeplearning4j-examples/tree/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp).

-Now that you have a basic idea of how to set up Word2Vec, here's [one example](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/word2vec/Word2VecRawTextExample.java) of how it can be used with DL4J's API:
+Now that you have a basic idea of how to set up Word2Vec, here's [one example](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/word2vec/Word2VecRawTextExample.java) of how it can be used with DL4J's API:

-<script src="https://gist-it.appspot.com/https://github.com/deeplearning4j/dl4j-examples/blob/master/src/main/java/org/deeplearning4j/examples/nlp/word2vec/Word2VecRawTextExample.java?slice=22:64"></script>
+<script src="https://gist-it.appspot.com/https://github.com/eclipse/deeplearning4j-examples/blob/master/src/main/java/org/deeplearning4j/examples/nlp/word2vec/Word2VecRawTextExample.java?slice=22:64"></script>

After following the instructions in the [Quickstart](./deeplearning4j-quickstart), you can open this example in IntelliJ and hit run to see it work. If you query the Word2vec model with a word that isn't contained in the training corpus, it will return null.
@@ -463,7 +463,7 @@ Loading and saving GloVe models to word2vec can be done like so:

### <a name="sequence">Sequence Vectors</a>

-Deeplearning4j has a class called [SequenceVectors](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nlp-parent/deeplearning4j-nlp/src/main/java/org/deeplearning4j/models/sequencevectors/SequenceVectors.java), which is one level of abstraction above word vectors, and which allows you to extract features from any sequence, including social media profiles, transactions, proteins, etc. If data can be described as sequence, it can be learned via skip-gram and hierarchic softmax with the AbstractVectors class. This is compatible with the [DeepWalk algorithm](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-graph/src/main/java/org/deeplearning4j/graph/models/deepwalk/DeepWalk.java), also implemented in Deeplearning4j.
+Deeplearning4j has a class called [SequenceVectors](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nlp-parent/deeplearning4j-nlp/src/main/java/org/deeplearning4j/models/sequencevectors/SequenceVectors.java), which is one level of abstraction above word vectors, and which allows you to extract features from any sequence, including social media profiles, transactions, proteins, etc. If data can be described as sequence, it can be learned via skip-gram and hierarchic softmax with the AbstractVectors class. This is compatible with the [DeepWalk algorithm](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-graph/src/main/java/org/deeplearning4j/graph/models/deepwalk/DeepWalk.java), also implemented in Deeplearning4j.

### <a name="features">Word2Vec Features on Deeplearning4j</a>
@@ -477,8 +477,8 @@ Deeplearning4j has a class called [SequenceVectors](https://github.com/deeplearn

### Doc2vec & Other NLP Resources

-* [DL4J Example of Text Classification With Word2vec & RNNs](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/word2vecsentiment/Word2VecSentimentRNN.java)
-* [DL4J Example of Text Classification With Paragraph Vectors](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/paragraphvectors/ParagraphVectorsClassifierExample.java)
+* [DL4J Example of Text Classification With Word2vec & RNNs](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/word2vecsentiment/Word2VecSentimentRNN.java)
+* [DL4J Example of Text Classification With Paragraph Vectors](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/paragraphvectors/ParagraphVectorsClassifierExample.java)
* [Doc2vec, or Paragraph Vectors, With Deeplearning4j](./deeplearning4j-nlp-doc2vec)
* [Thought Vectors, Natural Language Processing & the Future of AI](https://skymind.ai/wiki/thought-vectors)
* [Quora: How Does Word2vec Work?](http://www.quora.com/How-does-word2vec-work)
@@ -29,8 +29,8 @@ This page describes how to build more complicated networks, using DL4J's Computa

DL4J has two types of networks comprised of multiple layers:

-- The [MultiLayerNetwork](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/multilayer/MultiLayerNetwork.java), which is essentially a stack of neural network layers (with a single input layer and single output layer), and
-- The [ComputationGraph](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/graph/ComputationGraph.java), which allows for greater freedom in network architectures
+- The [MultiLayerNetwork](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/multilayer/MultiLayerNetwork.java), which is essentially a stack of neural network layers (with a single input layer and single output layer), and
+- The [ComputationGraph](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/graph/ComputationGraph.java), which allows for greater freedom in network architectures

Specifically, the ComputationGraph allows for networks to be built with the following features:
@@ -53,7 +53,7 @@ Examples of some architectures that can be built using ComputationGraph include:
- Recurrent neural networks with skip connections
- [GoogLeNet](http://arxiv.org/abs/1409.4842), a complex type of convolutional neural network for image classification
- [Image caption generation](http://arxiv.org/abs/1411.4555)
-- [Convolutional networks for sentence classification](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/convolution/sentenceclassification/CnnSentenceClassificationExample.java)
+- [Convolutional networks for sentence classification](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/convolution/sentenceclassification/CnnSentenceClassificationExample.java)
- [Residual learning convolutional neural networks](http://arxiv.org/abs/1512.03385)
@@ -61,7 +61,7 @@ Examples of some architectures that can be built using ComputationGraph include:

### <a name="vertextypes">Types of Graph Vertices</a>

-The basic idea is that in the ComputationGraph, the core building block is the [GraphVertex](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/graph/vertex/GraphVertex.java), instead of layers. Layers (or, more accurately the [LayerVertex](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/graph/vertex/impl/LayerVertex.java) objects), are but one type of vertex in the graph. Other types of vertices include:
+The basic idea is that in the ComputationGraph, the core building block is the [GraphVertex](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/graph/vertex/GraphVertex.java), instead of layers. Layers (or, more accurately the [LayerVertex](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/graph/vertex/impl/LayerVertex.java) objects), are but one type of vertex in the graph. Other types of vertices include:

- Input Vertices
- Element-wise operation vertices
@@ -72,7 +72,7 @@ The basic idea is that in the ComputationGraph, the core building block is the [
These types of graph vertices are described briefly below.

**LayerVertex**: Layer vertices (graph vertices with neural network layers) are added using the ```.addLayer(String,Layer,String...)``` method. The first argument is the label for the layer, and the last arguments are the inputs to that layer.
-If you need to manually add an [InputPreProcessor](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor) (usually this is unnecessary - see next section) you can use the ```.addLayer(String,Layer,InputPreProcessor,String...)``` method.
+If you need to manually add an [InputPreProcessor](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor) (usually this is unnecessary - see next section) you can use the ```.addLayer(String,Layer,InputPreProcessor,String...)``` method.

**InputVertex**: Input vertices are specified by the ```addInputs(String...)``` method in your configuration. The strings used as inputs can be arbitrary - they are user-defined labels, and can be referenced later in the configuration. The number of strings provided define the number of inputs; the order of the input also defines the order of the corresponding INDArrays in the fit methods (or the DataSet/MultiDataSet objects).
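To make the labels and `.addLayer(...)` calls concrete, a minimal sketch (layer sizes and vertex names are arbitrary, chosen only for illustration) might look like:

``` java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;

// "input", "L1" and "out" are arbitrary user-defined labels
ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
        .graphBuilder()
        .addInputs("input")
        .addLayer("L1", new DenseLayer.Builder().nIn(10).nOut(20).build(), "input")
        .addLayer("out", new OutputLayer.Builder().nIn(20).nOut(3).build(), "L1")
        .setOutputs("out")
        .build();

ComputationGraph net = new ComputationGraph(conf);
net.init();
```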
@@ -82,9 +82,9 @@ If you need to manually add an [InputPreProcessor](https://github.com/deeplearni

**SubsetVertex**: The subset vertex allows you to get only part of the activations out of another vertex. For example, to get the first 5 activations out of another vertex with label "layer1", you can use ```.addVertex("subset1", new SubsetVertex(0,4), "layer1")```: this means that the 0th through 4th (inclusive) activations out of the "layer1" vertex will be used as output from the subset vertex.

-**PreProcessorVertex**: Occasionally, you might want to use the functionality of an [InputPreProcessor](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor) without that preprocessor being associated with a layer. The PreProcessorVertex allows you to do this.
+**PreProcessorVertex**: Occasionally, you might want to use the functionality of an [InputPreProcessor](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor) without that preprocessor being associated with a layer. The PreProcessorVertex allows you to do this.

-Finally, it is also possible to define custom graph vertices by implementing both a [configuration](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/graph/GraphVertex.java) and [implementation](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/graph/vertex/GraphVertex.java) class for your custom GraphVertex.
+Finally, it is also possible to define custom graph vertices by implementing both a [configuration](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/graph/GraphVertex.java) and [implementation](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/graph/vertex/GraphVertex.java) class for your custom GraphVertex.

### <a name="rnnskip">Example 1: Recurrent Network with Skip Connections</a>
@@ -165,7 +165,7 @@ One feature of the ComputationGraphConfiguration is that you can specify the typ

The setInputType method has two effects:

-1. It will automatically add any [InputPreProcessor](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor)s as required. InputPreProcessors are necessary to handle the interaction between for example fully connected (dense) and convolutional layers, or recurrent and fully connected layers.
+1. It will automatically add any [InputPreProcessor](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor)s as required. InputPreProcessors are necessary to handle the interaction between for example fully connected (dense) and convolutional layers, or recurrent and fully connected layers.
2. It will automatically calculate the number of inputs (.nIn(x) config) to a layer. Thus, if you are using the ```setInputTypes(InputType...)``` functionality, it is not necessary to manually specify the .nIn(x) options in your configuration. This can simplify building some architectures (such as convolutional networks with fully connected layers). If the .nIn(x) is specified for a layer, the network will not override this when using the InputType functionality.
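A short sketch of declaring the input type on a graph builder (shapes chosen arbitrarily for a 28 x 28 single-channel image) might look like:

``` java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;

// With setInputTypes(...), nIn values and any required preprocessors are inferred
ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
        .graphBuilder()
        .addInputs("input")
        .setInputTypes(InputType.convolutional(28, 28, 1))
        .addLayer("cnn", new ConvolutionLayer.Builder(5, 5).nOut(16).build(), "input")
        .addLayer("out", new OutputLayer.Builder().nOut(10).build(), "cnn")
        .setOutputs("out")
        .build();
```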
@@ -188,8 +188,8 @@ MultiDataSet is multiple input and/or multiple output version of DataSet. It may

There are currently two ways to use a MultiDataSetIterator:

-- By implementing the [MultiDataSetIterator](https://github.com/deeplearning4j/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/dataset/api/iterator/MultiDataSetIterator.java) interface directly
-- By using the [RecordReaderMultiDataSetIterator](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-datavec-iterators/src/main/java/org/deeplearning4j/datasets/datavec/RecordReaderMultiDataSetIterator.java) in conjunction with DataVec record readers
+- By implementing the [MultiDataSetIterator](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/dataset/api/iterator/MultiDataSetIterator.java) interface directly
+- By using the [RecordReaderMultiDataSetIterator](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-datavec-iterators/src/main/java/org/deeplearning4j/datasets/datavec/RecordReaderMultiDataSetIterator.java) in conjunction with DataVec record readers

The RecordReaderMultiDataSetIterator provides a number of options for loading data. In particular, the RecordReaderMultiDataSetIterator provides the following functionality:
@@ -200,7 +200,7 @@ The RecordReaderMultiDataSetIterator provides a number of options for loading da
- It is possible to convert single columns from a class index to a one-hot representation

-Some basic examples on how to use the RecordReaderMultiDataSetIterator follow. You might also find [these unit tests](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-core/src/test/java/org/deeplearning4j/datasets/datavec/RecordReaderMultiDataSetIteratorTest.java) to be useful.
+Some basic examples on how to use the RecordReaderMultiDataSetIterator follow. You might also find [these unit tests](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-core/src/test/java/org/deeplearning4j/datasets/datavec/RecordReaderMultiDataSetIteratorTest.java) to be useful.
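Ahead of the worked examples below, a minimal sketch of the builder (file name and column layout invented purely for illustration) might look like:

``` java
import org.datavec.api.records.reader.RecordReader;
import org.datavec.api.records.reader.impl.csv.CSVRecordReader;
import org.datavec.api.split.FileSplit;
import org.deeplearning4j.datasets.datavec.RecordReaderMultiDataSetIterator;
import org.nd4j.linalg.dataset.api.iterator.MultiDataSetIterator;

import java.io.File;

// Hypothetical CSV with 5 columns: 0-2 are features, 3-4 are regression targets
RecordReader csv = new CSVRecordReader();
csv.initialize(new FileSplit(new File("myData.csv")));

MultiDataSetIterator iterator = new RecordReaderMultiDataSetIterator.Builder(32)
        .addReader("csv", csv)       // register the reader under an arbitrary name
        .addInput("csv", 0, 2)       // columns 0..2 become the (single) input array
        .addOutput("csv", 3, 4)      // columns 3..4 become the (single) output array
        .build();
```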

### <a name="rrmdsi1">RecordReaderMultiDataSetIterator Example 1: Regression Data</a>
@@ -45,7 +45,7 @@ These tests should at a minimum include the following:

## Example

-A full custom layer example is available in our [examples repository](https://github.com/deeplearning4j/dl4j-examples/tree/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/customlayers).
+A full custom layer example is available in our [examples repository](https://github.com/eclipse/deeplearning4j-examples/tree/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/customlayers).

## API
@ -30,7 +30,7 @@ The best model is the one saved at the time of the vertical dotted line - i.e.,
|
|||
|
||||
Using DL4J's early stopping functionality requires you to provide a number of configuration options:
|
||||
|
||||
* A score calculator, such as the *DataSetLossCalculator*([JavaDoc](https://deeplearning4j.org/api/{{page.version}}/org/deeplearning4j/earlystopping/scorecalc/DataSetLossCalculator.html), [Source Code](https://github.com/deeplearning4j/deeplearning4j/blob/c152293ef8d1094c281f5375ded61ff5f8eb6587/deeplearning4j-core/src/main/java/org/deeplearning4j/earlystopping/scorecalc/DataSetLossCalculator.java)) for a Multi Layer Network, or *DataSetLossCalculatorCG* ([JavaDoc](https://deeplearning4j.org/api/{{page.version}}/org/deeplearning4j/earlystopping/scorecalc/DataSetLossCalculatorCG.html), [Source Code](https://github.com/deeplearning4j/deeplearning4j/blob/c152293ef8d1094c281f5375ded61ff5f8eb6587/deeplearning4j-core/src/main/java/org/deeplearning4j/earlystopping/scorecalc/DataSetLossCalculatorCG.java)) for a Computation Graph. Is used to calculate at every epoch (for example: the loss function value on a test set, or the accuracy on the test set)
|
||||
* A score calculator, such as the *DataSetLossCalculator*([JavaDoc](https://deeplearning4j.org/api/{{page.version}}/org/deeplearning4j/earlystopping/scorecalc/DataSetLossCalculator.html), [Source Code](https://github.com/eclipse/deeplearning4j/blob/c152293ef8d1094c281f5375ded61ff5f8eb6587/deeplearning4j-core/src/main/java/org/deeplearning4j/earlystopping/scorecalc/DataSetLossCalculator.java)) for a Multi Layer Network, or *DataSetLossCalculatorCG* ([JavaDoc](https://deeplearning4j.org/api/{{page.version}}/org/deeplearning4j/earlystopping/scorecalc/DataSetLossCalculatorCG.html), [Source Code](https://github.com/eclipse/deeplearning4j/blob/c152293ef8d1094c281f5375ded61ff5f8eb6587/deeplearning4j-core/src/main/java/org/deeplearning4j/earlystopping/scorecalc/DataSetLossCalculatorCG.java)) for a Computation Graph. Is used to calculate at every epoch (for example: the loss function value on a test set, or the accuracy on the test set)
|
||||
* How frequently we want to calculate the score function (default: every epoch)
|
||||
* One or more termination conditions, which tell the training process when to stop. There are two classes of termination conditions:
* Epoch termination conditions: evaluated every N epochs
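Putting the options above together, a minimal early stopping setup looks roughly like the following sketch. It assumes existing `myTrainData`/`myTestData` iterators, a `MultiLayerConfiguration` called `myNetworkConfiguration`, and an existing save `directory`; imports are omitted, as in the other snippets on this page.

```
EarlyStoppingConfiguration esConf = new EarlyStoppingConfiguration.Builder()
        .epochTerminationConditions(new MaxEpochsTerminationCondition(30))
        .scoreCalculator(new DataSetLossCalculator(myTestData, true))
        .evaluateEveryNEpochs(1)
        .modelSaver(new LocalFileModelSaver(directory))
        .build();

EarlyStoppingTrainer trainer = new EarlyStoppingTrainer(esConf, myNetworkConfiguration, myTrainData);
EarlyStoppingResult result = trainer.fit();

System.out.println("Termination reason: " + result.getTerminationReason());
MultiLayerNetwork bestModel = (MultiLayerNetwork) result.getBestModel();
```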
@@ -22,7 +22,7 @@ DataSetIterator myTestData = ...
Evaluation eval = model.evaluate(myTestData);
|
||||
```
|
||||
|
||||
However, evaluation can be performed on individual minibatches also. Here is an example taken from our dataexamples/CSVExample in the [Examples](https://github.com/deeplearning4j/dl4j-examples) project.
|
||||
However, evaluation can be performed on individual minibatches also. Here is an example taken from our dataexamples/CSVExample in the [Examples](https://github.com/eclipse/deeplearning4j-examples) project.
The CSV example has CSV data for 3 classes of flowers and builds a simple feed forward neural network to classify the flowers based on 4 measurements.
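A minimal sketch of that minibatch-by-minibatch evaluation loop (it assumes an already-trained `model` and a `myTestData` iterator; the class count of 3 matches the flower example):

```
Evaluation eval = new Evaluation(3);   // 3 classes of flowers
while (myTestData.hasNext()) {
    DataSet ds = myTestData.next();
    INDArray output = model.output(ds.getFeatures());
    eval.eval(ds.getLabels(), output);
}
System.out.println(eval.stats());
```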
@@ -10,7 +10,7 @@ weight: 10
The `ModelSerializer` class handles loading and saving models. The linked examples show two ways of saving a model: the first saves a normal MultiLayerNetwork, the second saves a [computation graph](https://deeplearning4j.org/compgraph).
|
||||
|
||||
Here is a [basic example](https://github.com/deeplearning4j/dl4j-examples/tree/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/modelsaving) with code to save a computation graph using the `ModelSerializer` class, as well as an example of using ModelSerializer to save a neural net built using MultiLayer configuration.
|
||||
Here is a [basic example](https://github.com/eclipse/deeplearning4j-examples/tree/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/modelsaving) with code to save a computation graph using the `ModelSerializer` class, as well as an example of using ModelSerializer to save a neural net built using MultiLayer configuration.
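For orientation, the save/load calls look roughly like this (the file name is illustrative; the boolean flag controls whether the updater state is saved, which is needed to resume training later):

```
// Save
File locationToSave = new File("MyMultiLayerNetwork.zip");
ModelSerializer.writeModel(net, locationToSave, true);

// Load a saved MultiLayerNetwork...
MultiLayerNetwork restored = ModelSerializer.restoreMultiLayerNetwork(locationToSave);

// ...or, for a file that contains a saved ComputationGraph:
ComputationGraph restoredGraph = ModelSerializer.restoreComputationGraph(locationToSave);
```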
### RNG Seed
@@ -182,7 +182,7 @@ In either case, we can then do the following:
RNN layers in DL4J can be combined with other layer types. For example, it is possible to combine DenseLayer and LSTM layers in the same network; or combine Convolutional (CNN) layers and LSTM layers for video.
|
||||
|
||||
Of course, the DenseLayer and Convolutional layers do not handle time series data - they expect a different type of input. To deal with this, we need to use the layer preprocessor functionality: for example, the CnnToRnnPreProcessor and FeedForwardToRnnPreprocessor classes. See [here](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor) for all preprocessors. Fortunately, in most situations, the DL4J configuration system will automatically add these preprocessors as required. However, the preprocessors can be added manually (overriding the automatic addition of preprocessors, for each layer).
|
||||
Of course, the DenseLayer and Convolutional layers do not handle time series data - they expect a different type of input. To deal with this, we need to use the layer preprocessor functionality: for example, the CnnToRnnPreProcessor and FeedForwardToRnnPreprocessor classes. See [here](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor) for all preprocessors. Fortunately, in most situations, the DL4J configuration system will automatically add these preprocessors as required. However, the preprocessors can be added manually (overriding the automatic addition of preprocessors, for each layer).
For example, to manually add a preprocessor between layers 1 and 2, add the following to your network configuration: `.inputPreProcessor(2, new RnnToFeedForwardPreProcessor())`.
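As a hedged sketch of what such a mixed network might look like (layer sizes, activations and the exact layer classes are illustrative; the main point is the two manually specified preprocessors):

```
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new LSTM.Builder().nIn(nIn).nOut(40).activation(Activation.TANH).build())
        .layer(1, new DenseLayer.Builder().nIn(40).nOut(40).activation(Activation.RELU).build())
        .layer(2, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .activation(Activation.SOFTMAX).nIn(40).nOut(nOut).build())
        .inputPreProcessor(1, new RnnToFeedForwardPreProcessor())   // 3d time series -> 2d for the DenseLayer
        .inputPreProcessor(2, new FeedForwardToRnnPreProcessor())   // 2d -> 3d for the RnnOutputLayer
        .build();
```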
@@ -252,13 +252,13 @@ This method also supports:
Note that in all cases, each line in the data files represents one time step.
|
||||
|
||||
(In addition to the examples below, you might find [these unit tests](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-core/src/test/java/org/deeplearning4j/datasets/datavec/RecordReaderDataSetiteratorTest.java) to be of some use.)
|
||||
(In addition to the examples below, you might find [these unit tests](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-core/src/test/java/org/deeplearning4j/datasets/datavec/RecordReaderDataSetiteratorTest.java) to be of some use.)
|
||||
|
||||
#### Example 1: Time Series of Same Length, Input and Labels in Separate Files
|
||||
|
||||
Suppose we have 10 time series in our training data, represented by 20 files: 10 files for the input of each time series, and 10 files for the output/labels. For now, assume these 20 files all contain the same number of time steps (i.e., same number of rows).
|
||||
|
||||
To use the [SequenceRecordReaderDataSetIterator](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-datavec-iterators/src/main/java/org/deeplearning4j/datasets/datavec/SequenceRecordReaderDataSetIterator.java) and [CSVSequenceRecordReader](https://github.com/deeplearning4j/deeplearning4j/blob/master/datavec/datavec-api/src/main/java/org/datavec/api/records/reader/impl/csv/CSVSequenceRecordReader.java) approaches, we first create two CSVSequenceRecordReader objects, one for input and one for labels:
|
||||
To use the [SequenceRecordReaderDataSetIterator](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-datavec-iterators/src/main/java/org/deeplearning4j/datasets/datavec/SequenceRecordReaderDataSetIterator.java) and [CSVSequenceRecordReader](https://github.com/eclipse/deeplearning4j/blob/master/datavec/datavec-api/src/main/java/org/datavec/api/records/reader/impl/csv/CSVSequenceRecordReader.java) approaches, we first create two CSVSequenceRecordReader objects, one for input and one for labels:
|
||||
|
||||
SequenceRecordReader featureReader = new CSVSequenceRecordReader(1, ",");
|
||||
SequenceRecordReader labelReader = new CSVSequenceRecordReader(1, ",");
@@ -266,7 +266,7 @@ To use the [SequenceRecordReaderDataSetIterator](https://github.com/deeplearning
This particular constructor takes the number of lines to skip (1 row skipped here), and the delimiter (comma character used here).
|
||||
|
||||
Second, we need to initialize these two readers, by telling them where to get the data from. We do this with an InputSplit object.
|
||||
Suppose that our time series are numbered, with file names "myInput_0.csv", "myInput_1.csv", ..., "myLabels_0.csv", etc. One approach is to use the [NumberedFileInputSplit](https://github.com/deeplearning4j/deeplearning4j/blob/master/datavec/datavec-api/src/main/java/org/datavec/api/split/NumberedFileInputSplit.java):
|
||||
Suppose that our time series are numbered, with file names "myInput_0.csv", "myInput_1.csv", ..., "myLabels_0.csv", etc. One approach is to use the [NumberedFileInputSplit](https://github.com/eclipse/deeplearning4j/blob/master/datavec/datavec-api/src/main/java/org/datavec/api/split/NumberedFileInputSplit.java):
|
||||
|
||||
featureReader.initialize(new NumberedFileInputSplit("/path/to/data/myInput_%d.csv", 0, 9));
labelReader.initialize(new NumberedFileInputSplit("/path/to/data/myLabels_%d.csv", 0, 9));
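With both readers initialized, the iterator itself can then be constructed along these lines (minibatch size and label count are illustrative assumptions; set `regression` to true for regression targets):

```
int miniBatchSize = 10;
int numPossibleLabels = 2;
boolean regression = false;

DataSetIterator iter = new SequenceRecordReaderDataSetIterator(featureReader, labelReader,
        miniBatchSize, numPossibleLabels, regression);
```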
@@ -58,9 +58,9 @@ You can set the port by using the ```org.deeplearning4j.ui.port``` system proper
Information will then be collected and routed to the UI when you call the ```fit``` method on your network.
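For context, the usual setup (covered in the unchanged part of this page and in the linked example) looks roughly like this, assuming an existing network `net`:

```
// Initialize the user interface backend
UIServer uiServer = UIServer.getInstance();

// Configure where the network information (gradients, score vs. time etc) is stored
StatsStorage statsStorage = new InMemoryStatsStorage();

// Attach the StatsStorage instance to the UI so its contents can be visualized
uiServer.attach(statsStorage);

// Add the StatsListener so the network reports stats as it trains
net.setListeners(new StatsListener(statsStorage));
```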
**Example:** [See a UI example here](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/userInterface/UIExample.java)
|
||||
**Example:** [See a UI example here](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/userInterface/UIExample.java)
|
||||
|
||||
The full set of UI examples are available [here](https://github.com/deeplearning4j/dl4j-examples/tree/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/userInterface).
|
||||
The full set of UI examples are available [here](https://github.com/eclipse/deeplearning4j-examples/tree/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/userInterface).
|
||||
|
||||
|
||||
### <a name="overviewpage">Deeplearning4j UI: The Overview Page</a>
@@ -137,7 +137,7 @@ First, in the JVM running the UI (note this is the server):
This will require the ```deeplearning4j-ui_2.10``` or ```deeplearning4j-ui_2.11``` dependency. (Note: this is the dependency for the server, not the client - the client uses ```deeplearning4j-ui-model```, as described below.)
|
||||
|
||||
Client (both spark and standalone neural networks using simple deeplearning4j-nn)
|
||||
Second, for your neural net (Note this example is for spark, but computation graph and multi layer network both have the equivalemtn setListeners method with the same usage, [example found here](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/userInterface/RemoteUIExample.java)):
|
||||
Second, for your neural net (note this example is for Spark, but ComputationGraph and MultiLayerNetwork both have the equivalent setListeners method with the same usage, [example found here](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/userInterface/RemoteUIExample.java)):
|
||||
|
||||
```
|
||||
SparkDl4jMultiLayer sparkNet = new SparkDl4jMultiLayer(sc, conf, tm);
@@ -214,7 +214,7 @@ The layer update histogram is displayed for the most recent iteration only.
- Keep an eye out for very large values: this can indicate exploding gradients in your network
|
||||
- Exploding gradients are problematic as they can 'mess up' the parameters of your network
|
||||
- In this case, it may indicate a weight initialization, learning rate or input/labels data normalization issue
|
||||
- In the case of recurrent neural networks, adding some [gradient normalization or gradient clipping](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/GradientNormalization.java) may help
|
||||
- In the case of recurrent neural networks, adding some [gradient normalization or gradient clipping](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/GradientNormalization.java) may help
|
||||
|
||||
**Model Page: Parameter Learning Rates Chart**
@@ -231,7 +231,7 @@ while(iter.hasNext()){
Note that for the purposes of Spark, the exact file names don't matter.
|
||||
The process for saving MultiDataSets is almost identical.
|
||||
|
||||
As an aside: you can read these saved DataSet objects on a single machine (for non-Spark training) using [FileDataSetIterator](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/file/FileDataSetIterator.java)).
|
||||
As an aside: you can read these saved DataSet objects on a single machine (for non-Spark training) using [FileDataSetIterator](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/file/FileDataSetIterator.java).
An alternative approach is to save directly to the cluster using output streams, to (for example) HDFS. This can only be done if the machine running the code is properly configured with the required libraries and access rights. For example, to save the DataSets directly to HDFS you could use:
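A hedged sketch of the idea, using Hadoop's FileSystem API to write a single existing DataSet `ds` (the namenode URI and file path are illustrative assumptions):

```
FileSystem fs = FileSystem.get(new URI("hdfs://namenode:8020"), new Configuration());
try (OutputStream os = new BufferedOutputStream(fs.create(new Path("/data/dataset_0.bin")))) {
    ds.save(os);   // DataSet (and MultiDataSet) can be written straight to an OutputStream
}
```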
@@ -370,10 +370,10 @@ JavaRDD<DataSet> dataSetRdd = sequencesRdd.map(new DataVecSequenceDataSetFunctio
This guide shows how to create an ```RDD<DataSet>``` for image classification, starting from images stored either locally, or on a network file system such as HDFS.
|
||||
|
||||
The approach here used (added in 1.0.0-beta3) is to first preprocess the images into batches of files - [FileBatch](https://github.com/deeplearning4j/deeplearning4j/blob/master/nd4j/nd4j-common/src/main/java/org/nd4j/api/loader/FileBatch.java) objects.
|
||||
The approach used here (added in 1.0.0-beta3) is to first preprocess the images into batches of files - [FileBatch](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-common/src/main/java/org/nd4j/api/loader/FileBatch.java) objects.
|
||||
The motivation for this approach is simple: the original image files typically use efficient compression (JPEG for example) which is much more space (and network) efficient than a bitmap (int8 or 32-bit floating point) representation. However, on a cluster we want to minimize disk reads due to latency issues with remote storage - one file read/transfer is going to be faster than ```minibatchSize``` remote file reads.
|
||||
|
||||
The [TinyImageNet example](https://github.com/deeplearning4j/dl4j-examples/tree/master/dl4j-spark-examples/dl4j-spark/src/main/java/org/deeplearning4j/tinyimagenet) also shows how this can be done.
|
||||
The [TinyImageNet example](https://github.com/eclipse/deeplearning4j-examples/tree/master/dl4j-spark-examples/dl4j-spark/src/main/java/org/deeplearning4j/tinyimagenet) also shows how this can be done.
|
||||
|
||||
Note that one limitation of the implementation is that the set of classes (i.e., the class/category labels when doing classification) needs to be known, provided or collected manually. This differs from using ImageRecordReader for classification on a single machine, which can automatically infer the set of class labels.
@@ -415,7 +415,7 @@ SparkDataUtils.createFileBatchesSpark(sourceDirectory, destinationDirectory, bat
```
|
||||
|
||||
**Step 2: Training**
|
||||
The data pipeline for image classification can be constructed as follows. This code is taken from the [TinyImageNet example](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-spark-examples/dl4j-spark/src/main/java/org/deeplearning4j/tinyimagenet/TrainSpark.java):
|
||||
The data pipeline for image classification can be constructed as follows. This code is taken from the [TinyImageNet example](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-spark-examples/dl4j-spark/src/main/java/org/deeplearning4j/tinyimagenet/TrainSpark.java):
|
||||
```
|
||||
//Create data loader
|
||||
int imageHeightWidth = 64; //64x64 pixel input to network
@@ -458,10 +458,10 @@ When files represent a single record/example (instead of a minibatch) in a custo
The interfaces of note are:
|
||||
|
||||
* [DataSetLoader](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-core/src/main/java/org/deeplearning4j/api/loader/DataSetLoader.java)
|
||||
* [MultiDataSetLoader](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-core/src/main/java/org/deeplearning4j/api/loader/MultiDataSetLoader.java)
|
||||
* [DataSetLoader](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-core/src/main/java/org/deeplearning4j/api/loader/DataSetLoader.java)
|
||||
* [MultiDataSetLoader](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-core/src/main/java/org/deeplearning4j/api/loader/MultiDataSetLoader.java)
|
||||
|
||||
Both of which extend the single-method [Loader](https://github.com/deeplearning4j/deeplearning4j/blob/master/nd4j/nd4j-common/src/main/java/org/nd4j/api/loader/Loader.java) interface.
|
||||
Both of which extend the single-method [Loader](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-common/src/main/java/org/nd4j/api/loader/Loader.java) interface.
|
||||
|
||||
Suppose an HDFS directory contains a number of files, each being a minibatch in some custom format.
|
||||
These can be loaded using the following process:
@@ -101,7 +101,7 @@ When using Spark submit, you will need an uber-jar to submit to start and run yo
We recommend that you use the maven shade plugin for building an uber-jar. There are alternative tools/plugins for this purpose, but these do not always include all relevant files from the source jars, such as those required for Java's ServiceLoader mechanism to function correctly. (The ServiceLoader mechanism is used by ND4J and a lot of other software libraries).
|
||||
|
||||
A Maven shade configuration suitable for this purpose is provided in the example standalone sample project [pom.xml file](https://github.com/deeplearning4j/dl4j-examples/blob/master/standalone-sample-project/pom.xml):
|
||||
A Maven shade configuration suitable for this purpose is provided in the example standalone sample project [pom.xml file](https://github.com/eclipse/deeplearning4j-examples/blob/master/standalone-sample-project/pom.xml):
|
||||
```
|
||||
<build>
|
||||
<plugins>
@@ -191,7 +191,7 @@ If resources (i.e., the number of available GPU machines) are not constrained, i
Assuming the master/driver is executing on a CPU machine, and the workers are executing on GPU machines, you can simply include both backends (i.e., both the ```nd4j-cuda-x.x``` and ```nd4j-native``` dependencies as described in the [uber-jar section](#uberjar)).
|
||||
|
||||
When multiple backends are present on the classpath, by default the CUDA backend will be tried first. If this cannot be loaded, the CPU (nd4j-native) backend will be loaded second. Thus, if the driver does not have a GPU, it should fall back to using a CPU. However, this default behaviour can be changed by setting the ```BACKEND_PRIORITY_CPU``` or ```BACKEND_PRIORITY_GPU``` environment variables on the master/driver, as described [here](https://github.com/deeplearning4j/deeplearning4j/blob/master/nd4j/nd4j-common/src/main/java/org/nd4j/config/ND4JEnvironmentVars.java).
|
||||
When multiple backends are present on the classpath, by default the CUDA backend will be tried first. If this cannot be loaded, the CPU (nd4j-native) backend will be loaded second. Thus, if the driver does not have a GPU, it should fall back to using a CPU. However, this default behaviour can be changed by setting the ```BACKEND_PRIORITY_CPU``` or ```BACKEND_PRIORITY_GPU``` environment variables on the master/driver, as described [here](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-common/src/main/java/org/nd4j/config/ND4JEnvironmentVars.java).
|
||||
The exact process for setting environment variables may depend on the cluster manager - Spark standalone vs. YARN vs. Mesos. Please consult the documentation for each on how to set the environment variables for Spark jobs for the driver/master.
|
||||
|
||||
<br><br>
@@ -324,9 +324,9 @@ Both too large thresholds and too small thresholds can result in sub-optimal per
* Large thresholds mean infrequent communication - too infrequent and convergence can suffer
|
||||
* Small thresholds mean more frequent communication - but smaller changes are communicated at each step
|
||||
|
||||
The encoding threshold to be used is controlled by the [ThresholdAlgorithm](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/ThresholdAlgorithm.java). The specific implementation of the ThresholdAlgorithm determines what threshold should be used.
|
||||
The encoding threshold to be used is controlled by the [ThresholdAlgorithm](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/ThresholdAlgorithm.java). The specific implementation of the ThresholdAlgorithm determines what threshold should be used.
|
||||
|
||||
The default behaviour for DL4J is to use [AdaptiveThresholdAlgorithm](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/threshold/AdaptiveThresholdAlgorithm.java) which tries to keep the sparsity ratio in a certain range.
|
||||
The default behaviour for DL4J is to use [AdaptiveThresholdAlgorithm](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/threshold/AdaptiveThresholdAlgorithm.java) which tries to keep the sparsity ratio in a certain range.
|
||||
* The sparsity ratio is defined as numValues(encodedUpdate)/numParameters - 1.0 means fully dense (all values communicated), 0.0 means fully sparse (no values communicated)
|
||||
* Larger thresholds mean more sparse values (less network communication), and a smaller threshold means less sparse values (more network communication)
|
||||
* The AdaptiveThresholdAlgorithm tries to keep the sparsity ratio between 0.01 and 0.0001 by default. If the sparsity of the updates falls outside of this range, the threshold is either increased or decreased until it is within this range.
@@ -336,10 +336,10 @@ In practice, we have seen that this adaptive threshold process to work well.
The built-in implementations for threshold algorithms include:
|
||||
|
||||
* AdaptiveThresholdAlgorithm
|
||||
* [FixedThresholdAlgorithm](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/threshold/FixedThresholdAlgorithm.java): a fixed, non-adaptive threshold using the specified encoding threshold.
|
||||
* [TargetSparsityThresholdAlgorithm](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/threshold/TargetSparsityThresholdAlgorithm.java): an adaptive threshold algorithm that targets a specific sparsity, and increases or decreases the threshold to try to match the target.
|
||||
* [FixedThresholdAlgorithm](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/threshold/FixedThresholdAlgorithm.java): a fixed, non-adaptive threshold using the specified encoding threshold.
|
||||
* [TargetSparsityThresholdAlgorithm](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/threshold/TargetSparsityThresholdAlgorithm.java): an adaptive threshold algorithm that targets a specific sparsity, and increases or decreases the threshold to try to match the target.
|
||||
|
||||
In addition, DL4J has a [ResidualPostProcessor](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/ResidualPostProcessor.java) interface, with the default implementation being [ResidualClippingPostProcessor](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/residual/ResidualClippingPostProcessor.java) which clips the residual vector to a maximum of 5x the current threshold, every 5 steps.
|
||||
In addition, DL4J has a [ResidualPostProcessor](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/ResidualPostProcessor.java) interface, with the default implementation being [ResidualClippingPostProcessor](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/residual/ResidualClippingPostProcessor.java) which clips the residual vector to a maximum of 5x the current threshold, every 5 steps.
|
||||
The motivation for this is that the "left over" parts of the updates (i.e., those parts not communicated) are stored in the residual vector. If the updates are much larger than the threshold, we can have a phenomenon we have termed "residual explosion" - that is, the residual values can continue to grow to many times the threshold (hence would take many steps to communicate the gradient). The residual post processor is used to avoid this phenomenon.
|
||||
|
||||
The threshold algorithm (and initial threshold) and the residual post processor can be set as follows:
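As a hedged sketch only - the builder method names `thresholdAlgorithm(...)` and `residualPostProcessor(...)` and the values below are assumptions, not taken from this page - the gradient-sharing configuration could look roughly like:

```
TrainingMaster tm = new SharedTrainingMaster.Builder(voidConfiguration, minibatchSize)
        .thresholdAlgorithm(new AdaptiveThresholdAlgorithm(1e-3))          // assumed setter; 1e-3 initial threshold is illustrative
        .residualPostProcessor(new ResidualClippingPostProcessor(5.0, 5))  // assumed setter; clip at 5x threshold, every 5 steps
        .build();
```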
@@ -30,7 +30,7 @@ You should use Spark when:
2. You need more than a single machine to train the network
|
||||
3. Your network is large enough to justify a distributed implementation
|
||||
|
||||
For a single machine with multiple GPUs or multiple physical processors, users should consider using DL4J's Parallel-Wrapper implementation as shown in [this example](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-cuda-specific-examples/src/main/java/org/deeplearning4j/examples/multigpu/MultiGpuLenetMnistExample.java). ParallelWrapper allows for easy data parallel training of networks on a single machine with multiple cores. Spark has higher overheads compared to ParallelWrapper for single machine training.
|
||||
For a single machine with multiple GPUs or multiple physical processors, users should consider using DL4J's Parallel-Wrapper implementation as shown in [this example](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-cuda-specific-examples/src/main/java/org/deeplearning4j/examples/multigpu/MultiGpuLenetMnistExample.java). ParallelWrapper allows for easy data parallel training of networks on a single machine with multiple cores. Spark has higher overheads compared to ParallelWrapper for single machine training.
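A minimal sketch of that single-machine alternative, based on the linked example (worker count, prefetch buffer and averaging frequency are illustrative; it assumes an existing `net` and `trainIter`):

```
ParallelWrapper wrapper = new ParallelWrapper.Builder(net)
        .prefetchBuffer(24)          // DataSets prefetched per worker
        .workers(4)                  // typically set to the number of available GPUs/devices
        .averagingFrequency(3)       // average model parameters every 3 minibatches
        .reportScoreAfterAveraging(true)
        .build();

wrapper.fit(trainIter);
```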
Similarly, if you don't need Spark (smaller networks and/or datasets) - it is recommended to use single machine training, which is usually simpler to set up.
@@ -150,4 +150,4 @@ for (int i = 0; i < numEpochs; i++) {
* [Deeplearning4j on Spark: How To Guides](deeplearning4j-scaleout-howto)
|
||||
* [Deeplearning4j on Spark: How To Build Data Pipelines](deeplearning4j-scaleout-data-howto)
|
||||
* [Deeplearning4j on Spark: API Reference](deeplearning4j-scaleout-apiref)
|
||||
* The [Deeplearning4j examples repo](https://github.com/deeplearning4j/dl4j-examples) contains a number of Spark examples that can be used by the user as reference.
|
||||
* The [Deeplearning4j examples repo](https://github.com/eclipse/deeplearning4j-examples) contains a number of Spark examples that can be used by the user as reference.
@@ -25,7 +25,7 @@ Here are a few more perks were added to original algorithm proposed by Nikko Str
![Two phases within the cluster](/images/guide/distributed.png)
|
||||
|
||||
Note that using Spark entails overhead. In order to determine whether Spark will help you or not, consider using the [Performance Listener](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/listeners/PerformanceListener.java) and look at the millisecond iteration time.
|
||||
Note that using Spark entails overhead. In order to determine whether Spark will help you or not, consider using the [Performance Listener](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/listeners/PerformanceListener.java) and look at the millisecond iteration time.
|
||||
If it's <= 150ms, Spark may not be worth it.
|
||||
|
||||
## Setting up Your Cluster
@@ -74,7 +74,7 @@ For example:
### Example Configuration:
|
||||
|
||||
Below is a snippet from an example project taken from [our examples repo on Github](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-spark-examples/dl4j-spark/src/main/java/org/deeplearning4j/mlp/MnistMLPDistributedExample.java)
|
||||
Below is a snippet from an example project taken from [our examples repo on Github](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-spark-examples/dl4j-spark/src/main/java/org/deeplearning4j/mlp/MnistMLPDistributedExample.java)
|
||||
|
||||
```
|
||||
SparkConf sparkConf = new SparkConf();
@@ -50,9 +50,9 @@ The implementation allows the user to choose between two modes of network organi
2. Two encoding schemes:
|
||||
DL4J uses two encoding schemes, dynamically switching between the two depending on which will provide less network communication. Refer to the section on [encoding](#encoding) for more details.
|
||||
3. Quantization thresholds adjusted:
|
||||
The quantization threshold is stepped up or down depending on the distribution of the updates after each iteration. This is done on each node independently to make sure that updates are indeed sparse. In practice, this is implemented via the [ThresholdAlgorithm](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/ThresholdAlgorithm.java) interface and the [implementations](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/threshold) there-of.
|
||||
The quantization threshold is stepped up or down depending on the distribution of the updates after each iteration. This is done on each node independently to make sure that updates are indeed sparse. In practice, this is implemented via the [ThresholdAlgorithm](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/ThresholdAlgorithm.java) interface and the [implementations](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/threshold) thereof.
|
||||
4. Residual clipping
|
||||
As noted earlier, the "left over" parts of the updates (i.e., those parts not communicated) are store in the residual vector. If the updates are much larger than the threshold, we can have a phenomenon we have termed "residual explosion" - that is, the residual values can continue to grow to many times the threshold (hence would take many steps to communicate the gradient). To avoid this, DL4J has a [ResidualPostProcessor](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/ResidualPostProcessor.java) interface, with the default implementation being [ResidualClippingPostProcessor](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/residual/ResidualClippingPostProcessor.java) which clips the residual vector to a maximum of 5x the current threshold, every 5 steps.
|
||||
As noted earlier, the "left over" parts of the updates (i.e., those parts not communicated) are stored in the residual vector. If the updates are much larger than the threshold, we can have a phenomenon we have termed "residual explosion" - that is, the residual values can continue to grow to many times the threshold (hence would take many steps to communicate the gradient). To avoid this, DL4J has a [ResidualPostProcessor](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/ResidualPostProcessor.java) interface, with the default implementation being [ResidualClippingPostProcessor](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/solvers/accumulation/encoding/residual/ResidualClippingPostProcessor.java) which clips the residual vector to a maximum of 5x the current threshold, every 5 steps.
|
||||
5. Local parallelism via ParallelWrapper:
|
||||
This enables multi-CPU/GPU nodes to share information faster
@@ -87,21 +87,21 @@ Note that for convolutional models, input shape information follows the NCHW con
The model zoo comes with well-known image recognition configurations in the deep learning community. The zoo also includes an LSTM for text generation, and a simple CNN for general image recognition.
|
||||
|
||||
You can find a complete list of models using this [deeplearning4j-zoo Github link](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model).
|
||||
You can find a complete list of models using this [deeplearning4j-zoo Github link](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model).
|
||||
|
||||
This includes ImageNet models such as VGG-16, ResNet-50, AlexNet, Inception-ResNet-v1, LeNet, and more; a short usage sketch follows the list below.
|
||||
|
||||
* [AlexNet](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/AlexNet.java)
|
||||
* [Darknet19](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/Darknet19.java)
|
||||
* [FaceNetNN4Small2](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/FaceNetNN4Small2.java)
|
||||
* [InceptionResNetV1](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/InceptionResNetV1.java)
|
||||
* [LeNet](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/LeNet.java)
|
||||
* [ResNet50](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/ResNet50.java)
|
||||
* [SimpleCNN](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/SimpleCNN.java)
|
||||
* [TextGenerationLSTM](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/TextGenerationLSTM.java)
|
||||
* [TinyYOLO](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/TinyYOLO.java)
|
||||
* [VGG16](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/VGG16.java)
|
||||
* [VGG19](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/VGG19.java)
|
||||
* [AlexNet](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/AlexNet.java)
|
||||
* [Darknet19](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/Darknet19.java)
|
||||
* [FaceNetNN4Small2](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/FaceNetNN4Small2.java)
|
||||
* [InceptionResNetV1](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/InceptionResNetV1.java)
|
||||
* [LeNet](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/LeNet.java)
|
||||
* [ResNet50](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/ResNet50.java)
|
||||
* [SimpleCNN](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/SimpleCNN.java)
|
||||
* [TextGenerationLSTM](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/TextGenerationLSTM.java)
|
||||
* [TinyYOLO](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/TinyYOLO.java)
|
||||
* [VGG16](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/VGG16.java)
|
||||
* [VGG19](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/VGG19.java)
|
## Advanced usage
@@ -392,4 +392,4 @@ The onPostExecute method will receive an INDArray which contains the neural netw
This tutorial provides a basic framework for image recognition in an Android application using a DL4J neural network. It illustrates how to load a pre-trained DL4J model from the raw resources file and how to test user-generated input images against the model. The AsyncTask then returns the output to the main thread and updates the UI.
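The core of that load-and-predict step, reduced to a hedged sketch (the resource name `R.raw.trained_model` and the `inputFeatures` INDArray are illustrative placeholders; the real code lives in the linked example):

```
// Restore a network previously saved with ModelSerializer and bundled under res/raw
try (InputStream is = getResources().openRawResource(R.raw.trained_model)) {
    MultiLayerNetwork model = ModelSerializer.restoreMultiLayerNetwork(is);
    INDArray output = model.output(inputFeatures);   // inputFeatures: the preprocessed user image
}
```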
The complete code for this example is available [here.](https://github.com/deeplearning4j/dl4j-examples/tree/master/android/DL4JImageRecognitionDemo)
|
||||
The complete code for this example is available [here.](https://github.com/eclipse/deeplearning4j-examples/tree/master/android/DL4JImageRecognitionDemo)
@@ -275,7 +275,7 @@ Once the training of the neural network and the classification of the user measu
Hopefully this tutorial has illustrated how the compatibility of DL4J with Android makes it easy to build, train, and evaluate neural networks on mobile devices. We used a simple UI to take input values from the measurement and then passed them as the *Params* in an AsyncTask. The processor intensive steps of data preparation, network layer building, model training, and evaluation of the user data were all performed in the doInBackground() method of the background thread, maintaining a stable and responsive device. Once completed, we passed the output INDArray as the AsyncTask *Results* to onPostExecute() where the UI was updated to demonstrate the classification results.
|
||||
The limitations of processing power and battery life of mobile devices make training robust, multi-layer networks somewhat unfeasible. To address this limitation, we will next look at an example Android application that saves the trained model on the device for faster performance after an initial model training.
|
||||
|
||||
The complete code for this example is available [here.](https://github.com/deeplearning4j/dl4j-examples/tree/master/android/DL4JIrisClassifierDemo)
|
||||
The complete code for this example is available [here.](https://github.com/eclipse/deeplearning4j-examples/tree/master/android/DL4JIrisClassifierDemo)
@@ -89,7 +89,7 @@ Once you have programming basics down, tackle Java, the world's most widely used
## Deeplearning4j
|
||||
|
||||
With that under your belt, we recommend you approach Deeplearning4j through its [examples](https://github.com/deeplearning4j/dl4j-examples).
|
||||
With that under your belt, we recommend you approach Deeplearning4j through its [examples](https://github.com/eclipse/deeplearning4j-examples).
|
||||
|
||||
* [Quickstart](./deeplearning4j-quickstart)
@@ -274,7 +274,7 @@ Here's how the DatasetIterator is uniformly invoked for MNIST:
You can optimize by using an asynchronous loader in the background. Java can do real multi-threading. It can load data in the background while other threads take care of compute. So you load data into the GPU at the same time that compute is being run. The neural net trains even as you grab new data from memory.
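The same idea can be applied manually by wrapping any existing iterator (the queue size of 4 is illustrative; `myTrainData` is assumed to exist):

```
// Prefetch up to 4 minibatches on a background thread while the GPU computes
DataSetIterator asyncIter = new AsyncDataSetIterator(myTrainData, 4);
```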
This is the [relevant code](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-scaleout/deeplearning4j-scaleout-parallelwrapper/src/main/java/org/deeplearning4j/parallelism/ParallelWrapper.java#L136), in particular the third line:
|
||||
This is the [relevant code](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-scaleout/deeplearning4j-scaleout-parallelwrapper/src/main/java/org/deeplearning4j/parallelism/ParallelWrapper.java#L136), in particular the third line:
|
||||
|
||||
MultiDataSetIterator iterator;
|
||||
if (prefetchSize > 0 && source.asyncSupported()) {
@@ -301,11 +301,11 @@ But if your training pipeline is doing that every time, Deeplearning4j will seem
One way is to pre-save the datasets, in a manner similar to the Python frameworks. (Pickles are pre-formatted data.) When you pre-save the dataset, you create a separate class.
|
||||
|
||||
Here’s how you [pre-save datasets](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/presave/PreSave.java).
|
||||
Here’s how you [pre-save datasets](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/presave/PreSave.java).
|
||||
|
||||
A `RecordReaderDataSetIterator` talks to DataVec and outputs DataSet objects for DL4J.
|
||||
|
||||
Here’s how you [load a pre-saved dataset](https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/presave/LoadPreSavedLenetMnistExample.java).
|
||||
Here’s how you [load a pre-saved dataset](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/presave/LoadPreSavedLenetMnistExample.java).
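In outline, the two linked examples do roughly the following (the directory and file-name pattern are illustrative; `ExistingMiniBatchDataSetIterator` must be given the same pattern used when saving):

```
// Pre-save: write each minibatch to disk once
int i = 0;
while (trainIter.hasNext()) {
    DataSet ds = trainIter.next();
    ds.save(new File(saveDir, "mnist-train-" + i++ + ".bin"));
}

// Load: iterate over the pre-saved minibatches in later runs
DataSetIterator preSaved = new ExistingMiniBatchDataSetIterator(saveDir, "mnist-train-%d.bin");
```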
Line 90 is where you see the asynchronous ETL. In this case, it's wrapping the pre-saved iterator, so you're taking advantage of both methods, with the asynchronous loader pulling the pre-saved data in the background as the net trains.
@@ -16,10 +16,10 @@ For those developers and engineers who prefer to use the most up-to-date version
Building locally requires that you build the entire Deeplearning4j stack which includes:
|
||||
|
||||
- [libnd4j](https://github.com/deeplearning4j/libnd4j)
|
||||
- [nd4j](https://github.com/deeplearning4j/nd4j)
|
||||
- [datavec](https://github.com/deeplearning4j/datavec)
|
||||
- [deeplearning4j](https://github.com/deeplearning4j/deeplearning4j)
|
||||
- [libnd4j](https://github.com/eclipse/deeplearning4j/tree/master/libnd4j)
|
||||
- [nd4j](https://github.com/eclipse/deeplearning4j/tree/master/nd4j)
|
||||
- [datavec](https://github.com/eclipse/deeplearning4j/tree/master/datavec)
|
||||
- [deeplearning4j](https://github.com/eclipse/deeplearning4j)
|
||||
|
||||
Note that Deeplearning4j is designed to work on most platforms (Windows, OS X, and Linux) and also includes multiple "flavors" depending on the computing architecture you choose to utilize. This includes CPU (OpenBLAS, MKL, ATLAS) and GPU (CUDA). The DL4J stack also supports x86 and PowerPC architectures.
@@ -287,7 +287,7 @@ copy mkl_rt.dll libblas3.dll
### Build Script
|
||||
|
||||
You can use the [build-dl4j-stack.sh](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/build-dl4j-stack.sh) script from the deeplearning4j repository to build the whole deeplearning4j stack from source: libndj4, ndj4, datavec, deeplearning4j. It clones the DL4J stack, builds each repository, and installs them locally to Maven. This script will work on both Linux and OS X platforms.
|
||||
You can use the [build-dl4j-stack.sh](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/build-dl4j-stack.sh) script from the deeplearning4j repository to build the whole deeplearning4j stack from source: libnd4j, nd4j, datavec, deeplearning4j. It clones the DL4J stack, builds each repository, and installs them locally to Maven. This script will work on both Linux and OS X platforms.
|
||||
|
||||
OK, now read the following section carefully.
@@ -297,7 +297,7 @@ Use the build script below for CPU architectures:
./build-dl4j-stack.sh
|
||||
```
|
||||
Make sure to read this if you are on OS X (ensure gcc 5.x is setup and you aren't using clang):
|
||||
https://github.com/deeplearning4j/deeplearning4j/issues/2668
|
||||
https://github.com/eclipse/deeplearning4j/issues/2668
|
||||
|
||||
|
||||
If you are using a GPU backend, use this instead:
@@ -306,7 +306,7 @@ If you are using a GPU backend, use this instead:
./build-dl4j-stack.sh -c cuda
|
||||
```
|
||||
|
||||
You can speed up your CUDA builds by using the `cc` flag as explained in the [libndj4 README](https://github.com/deeplearning4j/libnd4j).
|
||||
You can speed up your CUDA builds by using the `cc` flag as explained in the [libnd4j README](https://github.com/eclipse/deeplearning4j/tree/master/libnd4j).
|
||||
|
||||
For Scala users, you can pass your binary version for Spark compatibility:
@@ -334,7 +334,7 @@ rm -rf datavec
rm -rf deeplearning4j
|
||||
|
||||
# compile libnd4j
|
||||
git clone https://github.com/deeplearning4j/libnd4j.git
|
||||
git clone https://github.com/eclipse/deeplearning4j.git
|
||||
cd libnd4j
|
||||
./buildnativeoperations.sh
|
||||
# and/or when using GPU
@@ -344,7 +344,7 @@ export LIBND4J_HOME=`pwd`
cd ..
|
||||
|
||||
# build and install nd4j to maven locally
|
||||
git clone https://github.com/deeplearning4j/nd4j.git
|
||||
git clone https://github.com/eclipse/deeplearning4j.git
|
||||
cd nd4j
|
||||
# cross-build across Scala versions (recommended)
|
||||
bash buildmultiplescalaversions.sh clean install -DskipTests -Dmaven.javadoc.skip=true -pl '!:nd4j-cuda-9.0,!:nd4j-cuda-9.0-platform,!:nd4j-tests'
@@ -355,7 +355,7 @@ bash buildmultiplescalaversions.sh clean install -DskipTests -Dmaven.javadoc.ski
cd ..
|
||||
|
||||
# build and install datavec
|
||||
git clone https://github.com/deeplearning4j/datavec.git
|
||||
git clone https://github.com/eclipse/deeplearning4j.git
|
||||
cd datavec
|
||||
if [ "$SCALAV" == "" ]; then
|
||||
bash buildmultiplescalaversions.sh clean install -DskipTests -Dmaven.javadoc.skip=true
@@ -365,7 +365,7 @@ fi
cd ..
|
||||
|
||||
# build and install deeplearning4j
|
||||
git clone https://github.com/deeplearning4j/deeplearning4j.git
|
||||
git clone https://github.com/eclipse/deeplearning4j.git
|
||||
cd deeplearning4j
|
||||
# cross-build across Scala versions (recommended)
|
||||
./buildmultiplescalaversions.sh clean install -DskipTests -Dmaven.javadoc.skip=true
@@ -379,7 +379,7 @@ cd ..
## Using Local Dependencies
|
||||
|
||||
Once you've installed the DL4J stack to your local maven repository, you can now include it in your build tool's dependencies. Follow the typical [Getting Started](http://deeplearning4j.org/gettingstarted) instructions for Deeplearning4j, and appropriately replace versions with the SNAPSHOT version currently on the [master POM](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/pom.xml).
|
||||
Once you've installed the DL4J stack to your local maven repository, you can now include it in your build tool's dependencies. Follow the typical [Getting Started](http://deeplearning4j.org/gettingstarted) instructions for Deeplearning4j, and appropriately replace versions with the SNAPSHOT version currently on the [master POM](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/pom.xml).
|
||||
|
||||
Note that some build tools such as Gradle and SBT don't properly pull in platform-specific binaries. You can follow instructions [here](http://nd4j.org/dependencies.html) for setting up your favorite build tool.
@@ -53,85 +53,85 @@ Deeplearning4j (and related projects) have a lot of functionality. The goal of t
### <a name="layers-ff">Feed-Forward Layers</a>
|
||||
|
||||
* **DenseLayer** - ([Source](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/layers/feedforward/dense/DenseLayer.java)) - A simple/standard fully-connected layer
|
||||
* **EmbeddingLayer** - ([Source](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/layers/feedforward/embedding/EmbeddingLayer.java)) - Takes positive integer indexes as input, outputs vectors. Only usable as first layer in a model. Mathematically equivalent (when bias is enabled) to DenseLayer with one-hot input, but more efficient. See also: EmbeddingSequenceLayer.
|
||||
* **DenseLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/layers/feedforward/dense/DenseLayer.java)) - A simple/standard fully-connected layer
|
||||
* **EmbeddingLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/layers/feedforward/embedding/EmbeddingLayer.java)) - Takes positive integer indexes as input, outputs vectors. Only usable as first layer in a model. Mathematically equivalent (when bias is enabled) to DenseLayer with one-hot input, but more efficient. See also: EmbeddingSequenceLayer.
|
||||
|
||||
#### <a name="layers-out">Output Layers</a>
|
||||
|
||||
Output layers: usable only as the last layer in a network. Loss functions are set here.
|
||||
|
||||
* **OutputLayer** - ([Source](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/OutputLayer.java)) - Output layer for standard classification/regression in MLPs/CNNs. Has a fully connected DenseLayer built in. 2d input/output (i.e., row vector per example).
|
||||
* **LossLayer** - ([Source](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/LossLayer.java)) - Output layer without parameters - only loss function and activation function. 2d input/output (i.e., row vector per example). Unlike Outputlayer, restricted to nIn = nOut.
|
||||
* **RnnOutputLayer** - ([Source](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/RnnOutputLayer.java)) - Output layer for recurrent neural networks. 3d (time series) input and output. Has time distributed fully connected layer built in.
|
||||
* **RnnLossLayer** - ([Source](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/RnnLossLayer.java)) - The 'no parameter' version of RnnOutputLayer. 3d (time series) input and output.
|
||||
* **CnnLossLayer** - ([Source](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/CnnLossLayer.java)) - Used with CNNs, where a prediction must be made at each spatial location of the output (for example: segmentation or denoising). No parameters, 4d input/output with shape [minibatch, depth, height, width]. When using softmax, this is applied depthwise at each spatial location.
|
||||
* **Cnn3DLossLayer** - ([Source](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/Cnn3DLossLayer.java)) - used with 3D CNNs, where a preduction must be made at each spatial location (x/y/z) of the output. Layer has no parameters, 5d data in either NCDHW or NDHWC ("channels first" or "channels last") format (configurable). Supports masking. When using Softmax, this is applied along channels at each spatial location.
|
||||
* **Yolo2OutputLayer** - ([Source](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/objdetect/Yolo2OutputLayer.java)) - Implentation of the YOLO 2 model for object detection in images
|
||||
* **CenterLossOutputLayer** - ([Source](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/CenterLossOutputLayer.java)) - A version of OutputLayer that also attempts to minimize the intra-class distance of examples' activations - i.e., "If example x is in class Y, ensure that embedding(x) is close to average(embedding(y)) for all examples y in Y"
|
||||
* **OutputLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/OutputLayer.java)) - Output layer for standard classification/regression in MLPs/CNNs. Has a fully connected DenseLayer built in. 2d input/output (i.e., row vector per example).
|
||||
* **LossLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/LossLayer.java)) - Output layer without parameters - only loss function and activation function. 2d input/output (i.e., row vector per example). Unlike Outputlayer, restricted to nIn = nOut.
|
||||
* **RnnOutputLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/RnnOutputLayer.java)) - Output layer for recurrent neural networks. 3d (time series) input and output. Has time distributed fully connected layer built in.
|
||||
* **RnnLossLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/RnnLossLayer.java)) - The 'no parameter' version of RnnOutputLayer. 3d (time series) input and output.
|
||||
* **CnnLossLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/CnnLossLayer.java)) - Used with CNNs, where a prediction must be made at each spatial location of the output (for example: segmentation or denoising). No parameters, 4d input/output with shape [minibatch, depth, height, width]. When using softmax, this is applied depthwise at each spatial location.
|
||||
* **Cnn3DLossLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/Cnn3DLossLayer.java)) - used with 3D CNNs, where a prediction must be made at each spatial location (x/y/z) of the output. Layer has no parameters, 5d data in either NCDHW or NDHWC ("channels first" or "channels last") format (configurable). Supports masking. When using Softmax, this is applied along channels at each spatial location.
|
||||
* **Yolo2OutputLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/objdetect/Yolo2OutputLayer.java)) - Implementation of the YOLO 2 model for object detection in images
|
||||
* **CenterLossOutputLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/CenterLossOutputLayer.java)) - A version of OutputLayer that also attempts to minimize the intra-class distance of examples' activations - i.e., "If example x is in class Y, ensure that embedding(x) is close to average(embedding(y)) for all examples y in Y"
|
||||
|
||||
|
||||
#### <a name="layers-conv">Convolutional Layers</a>
* **ConvolutionLayer** / Convolution2D - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/ConvolutionLayer.java)) - Standard 2d convolutional neural network layer. Inputs and outputs have 4 dimensions with shape [minibatch,depthIn,heightIn,widthIn] and [minibatch,depthOut,heightOut,widthOut] respectively. See the configuration sketch after this list.
* **Convolution1DLayer** / Convolution1D - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/Convolution1DLayer.java)) - Standard 1d convolution layer
* **Convolution3DLayer** / Convolution3D - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/Convolution3D.java)) - Standard 3D convolution layer. Supports both NDHWC ("channels last") and NCDHW ("channels first") activations format.
* **Deconvolution2DLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/layers/convolution/Deconvolution2DLayer.java)) - also known as transpose or fractionally strided convolutions. Can be considered a "reversed" ConvolutionLayer; output size is generally larger than the input, whilst maintaining the spatial connection structure.
* **SeparableConvolution2DLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/layers/convolution/SeparableConvolution2DLayer.java)) - Depthwise separable convolution layer
* **SubsamplingLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/layers/convolution/subsampling/SubsamplingLayer.java)) - Implements standard 2d spatial pooling for CNNs - with max, average and p-norm pooling available.
* **Subsampling1DLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/Subsampling1DLayer.java)) - 1D version of the subsampling layer.
* **Upsampling2D** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/layers/convolution/upsampling/Upsampling2D.java)) - Upscale CNN activations by repeating the row/column values
* **Upsampling1D** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/layers/convolution/upsampling/Upsampling1D.java)) - 1D version of the upsampling layer
* **Cropping2D** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/convolutional/Cropping2D.java)) - Cropping layer for 2D convolutional neural networks
* **DepthwiseConvolution2D** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/DepthwiseConvolution2D.java)) - 2d depthwise convolution layer
* **ZeroPaddingLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/layers/convolution/ZeroPaddingLayer.java)) - Very simple layer that adds the specified amount of zero padding to edges of the 4d input activations.
* **ZeroPadding1DLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/layers/convolution/ZeroPadding1DLayer.java)) - 1D version of ZeroPaddingLayer
* **SpaceToDepth** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/SpaceToDepthLayer.java)) - Takes a 4d input array and moves data from the spatial dimensions (HW) to the channels dimension (C), for the given block size.
* **SpaceToBatch** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/SpaceToBatchLayer.java)) - Moves data from the 2 spatial dimensions of a tensor into the batch dimension, according to the specified block size.

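For example, a minimal CNN configuration sketch (the 28x28 single-channel input and the layer sizes are hypothetical assumptions):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class CnnConfigSketch {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .list()
            // 5x5 convolution: 1 input channel -> 20 output channels
            .layer(new ConvolutionLayer.Builder(5, 5).nIn(1).nOut(20).stride(1, 1).activation(Activation.RELU).build())
            // 2x2 max pooling
            .layer(new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX).kernelSize(2, 2).stride(2, 2).build())
            .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT).nOut(10).activation(Activation.SOFTMAX).build())
            // Infers remaining nIn values and adds the CNN-to-dense preprocessor automatically
            .setInputType(InputType.convolutional(28, 28, 1))
            .build();
    }
}
```
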
#### <a name="layers-rnn">Recurrent Layers</a>
* **LSTM** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/LSTM.java)) - LSTM RNN without peephole connections. Supports CuDNN. See the configuration sketch after this list.
* **GravesLSTM** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/GravesLSTM.java)) - LSTM RNN with peephole connections. Does *not* support CuDNN (thus for GPUs, LSTM should be used in preference).
* **GravesBidirectionalLSTM** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/GravesBidirectionalLSTM.java)) - A bidirectional LSTM implementation with peephole connections. Equivalent to Bidirectional(ADD, GravesLSTM). Due to the addition of the Bidirectional wrapper (below), it has been deprecated on master.
* **Bidirectional** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/recurrent/Bidirectional.java)) - A 'wrapper' layer - converts any standard uni-directional RNN into a bidirectional RNN (doubles the number of parameters - the forward/backward nets have independent parameters). Activations from the forward/backward nets may be either added, multiplied, averaged or concatenated.
* **SimpleRnn** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/recurrent/SimpleRnn.java)) - A standard/'vanilla' RNN layer. Usually not effective in practice with long time series dependencies - LSTM is generally preferred.
* **LastTimeStep** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/recurrent/LastTimeStep.java)) - A 'wrapper' layer - extracts out the last time step of the (non-bidirectional) RNN layer it wraps. 3d input with shape [minibatch, size, timeSeriesLength], 2d output with shape [minibatch, size].
* **EmbeddingSequenceLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/EmbeddingSequenceLayer.java)) - A version of EmbeddingLayer that expects a fixed-length sequence (of length inputLength) of integer indices per example as input, each in the range 0 to numClasses - 1. The input thus has shape [numExamples, inputLength] or [numExamples, 1, inputLength]. The output of this layer is 3d (sequence/time series), with shape [numExamples, nOut, inputLength]. Can only be used as the first layer of a network.

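As a minimal sketch (the layer sizes are hypothetical), an LSTM can be wrapped in Bidirectional and followed by an RnnOutputLayer for per-time-step classification:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.conf.layers.recurrent.Bidirectional;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class RnnConfigSketch {
    public static void main(String[] args) {
        // Hypothetical sizes: 50 input features per time step, 6 output classes per time step
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .list()
            // Wrap a unidirectional LSTM to make it bidirectional; forward/backward activations are concatenated
            .layer(new Bidirectional(Bidirectional.Mode.CONCAT,
                    new LSTM.Builder().nIn(50).nOut(100).activation(Activation.TANH).build()))
            // CONCAT mode doubles the layer output size: 2 * 100 = 200
            .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                    .nIn(200).nOut(6).activation(Activation.SOFTMAX).build())
            .build();
    }
}
```
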
#### <a name="layers-unsupervised">Unsupervised Layers</a>
* **VariationalAutoencoder** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/variational/VariationalAutoencoder.java)) - A variational autoencoder implementation with MLP/dense layers for the encoder and decoder. Supports multiple different types of [reconstruction distributions](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/variational)
* **AutoEncoder** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/AutoEncoder.java)) - Standard denoising autoencoder layer

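A minimal VariationalAutoencoder configuration sketch, assuming 784 binary-valued input features and a 32-dimensional latent space (the encoder/decoder sizes and the Bernoulli reconstruction distribution are illustrative assumptions, not requirements):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.variational.BernoulliReconstructionDistribution;
import org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder;
import org.nd4j.linalg.activations.Activation;

public class VaeConfigSketch {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .list()
            .layer(new VariationalAutoencoder.Builder()
                .nIn(784).nOut(32)                   // nOut = size of the latent space
                .encoderLayerSizes(256, 256)         // hidden layer sizes of the encoder MLP
                .decoderLayerSizes(256, 256)         // hidden layer sizes of the decoder MLP
                .activation(Activation.LEAKYRELU)
                // Bernoulli reconstruction distribution suits binary/0-1 valued inputs
                .reconstructionDistribution(new BernoulliReconstructionDistribution(
                        Activation.SIGMOID.getActivationFunction()))
                .build())
            .build();
    }
}
```
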
#### <a name="layers-other">Other Layers</a>
* **GlobalPoolingLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/GlobalPoolingLayer.java)) - Implements both pooling over time (for RNNs/time series - input size [minibatch, size, timeSeriesLength], out [minibatch, size]) and global spatial pooling (for CNNs - input size [minibatch, depth, h, w], out [minibatch, depth]). Available pooling modes: sum, average, max and p-norm.
* **ActivationLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/ActivationLayer.java)) - Applies an activation function (only) to the input activations. Note that most DL4J layers have activation functions built in as a config option.
* **DropoutLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/DropoutLayer.java)) - Implements dropout as a separate/single layer. Note that most DL4J layers have a "built-in" dropout configuration option. See the sketch after this list.
* **BatchNormalization** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/BatchNormalization.java)) - Batch normalization for 2d (feedforward), 3d (time series) or 4d (CNN) activations. For time series, parameters are shared across time; for CNNs, parameters are shared across spatial locations (but not depth).
* **LocalResponseNormalization** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/LocalResponseNormalization.java)) - Local response normalization layer for CNNs. Not frequently used in modern CNN architectures.
* **FrozenLayer** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/misc/FrozenLayer.java)) - Usually not used directly by users - added as part of transfer learning, to freeze a layer's parameters such that they don't change during further training.
* **LocallyConnected2D** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/LocallyConnected2D.java)) - a 2d locally connected layer, assumes input is 4d data in NCHW ("channels first") format.
* **LocallyConnected1D** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/layers/LocallyConnected1D.java)) - a 1d locally connected layer, assumes input is 3d data in NCW ([minibatch, size, sequenceLength]) format

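As an illustration of the dropout and batch normalization options above (a minimal sketch; the sizes and dropout value are arbitrary), dropout can either be configured directly on a layer or added as a standalone DropoutLayer:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.BatchNormalization;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.DropoutLayer;
import org.nd4j.linalg.activations.Activation;

public class OtherLayersSketch {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .list()
            // Dropout configured directly on a layer via the built-in config option
            .layer(new DenseLayer.Builder().nOut(100).activation(Activation.RELU).dropOut(0.5).build())
            // ...or batch normalization / dropout added as standalone layers
            .layer(new BatchNormalization.Builder().build())
            .layer(new DropoutLayer.Builder(0.5).build())
            .layer(new DenseLayer.Builder().nOut(100).activation(Activation.RELU).build())
            .setInputType(InputType.feedForward(100))   // infers nIn values for all layers
            .build();
    }
}
```
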
#### <a name="layers-vertices">Graph Vertices</a>
Graph vertices are used with ComputationGraph. They fill a similar role to layers, but usually have no parameters, and may support multiple inputs.

* **ElementWiseVertex** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/graph/ElementWiseVertex.java)) - Performs an element-wise operation on the inputs - add, subtract, product, average, max
* **L2NormalizeVertex** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/graph/L2NormalizeVertex.java)) - normalizes the input activations by dividing by the L2 norm for each example. i.e., out <- out / l2Norm(out)
* **L2Vertex** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/graph/L2Vertex.java)) - calculates the L2 distance between the two input arrays, for each example separately. Output is a single value for each example.
* **MergeVertex** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/graph/MergeVertex.java)) - merges the input activations along dimension 1, to make a larger output array. For CNNs, this implements merging along the depth/channels dimension. See the sketch after this list.
* **PreprocessorVertex** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/graph/PreprocessorVertex.java)) - a simple GraphVertex that contains an InputPreProcessor only
* **ReshapeVertex** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/graph/ReshapeVertex.java)) - Performs arbitrary activation array reshaping. The preprocessors in the next section should usually be preferred.
* **ScaleVertex** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/graph/ScaleVertex.java)) - implements simple multiplicative scaling of the inputs - i.e., out = scalar * input
* **ShiftVertex** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/graph/ShiftVertex.java)) - implements simple scalar element-wise addition on the inputs - i.e., out = input + scalar
* **StackVertex** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/graph/StackVertex.java)) - used to stack all inputs along the minibatch dimension. Analogous to MergeVertex, but along dimension 0 (minibatch) instead of dimension 1 (nOut/channels)
* **SubsetVertex** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/graph/SubsetVertex.java)) - used to get a contiguous subset of the input activations along dimension 1. For example, two SubsetVertex instances could be used to split the activations from an input array into two separate activations. Essentially the opposite of MergeVertex.
* **UnstackVertex** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/graph/UnstackVertex.java)) - similar to SubsetVertex, but along dimension 0 (minibatch) instead of dimension 1 (nOut/channels). Opposite of StackVertex

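For instance, a minimal ComputationGraph sketch using MergeVertex to combine two branches (the vertex names and layer sizes are hypothetical):

```java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.graph.MergeVertex;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class GraphVertexSketch {
    public static void main(String[] args) {
        // Two inputs, each processed by its own dense layer, then merged along dimension 1
        ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
            .graphBuilder()
            .addInputs("inA", "inB")
            .addLayer("denseA", new DenseLayer.Builder().nIn(10).nOut(16).activation(Activation.RELU).build(), "inA")
            .addLayer("denseB", new DenseLayer.Builder().nIn(10).nOut(16).activation(Activation.RELU).build(), "inB")
            .addVertex("merge", new MergeVertex(), "denseA", "denseB")   // merged size: 16 + 16 = 32
            .addLayer("out", new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                    .nIn(32).nOut(3).activation(Activation.SOFTMAX).build(), "merge")
            .setOutputs("out")
            .build();

        ComputationGraph net = new ComputationGraph(conf);
        net.init();
    }
}
```
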
An InputPreProcessor is a simple class/interface that operates on the input to a layer.

Note that in many cases (such as the XtoYPreProcessor classes), users won't need to (and shouldn't) add these manually, and can instead just use ```.setInputType(InputType.feedForward(10))``` or similar, which will infer and add the preprocessors as required.
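For example, a minimal sketch (assuming a hypothetical 28x28 single-channel input): ```setInputType``` infers the nIn values and inserts the required CnnToFeedForwardPreProcessor automatically, while the manual alternative is shown as a comment (assuming the default stride of 1 and no padding):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class PreprocessorSketch {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .list()
            .layer(new ConvolutionLayer.Builder(3, 3).nOut(8).activation(Activation.RELU).build())
            .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                    .nOut(10).activation(Activation.SOFTMAX).build())
            // Manual alternative: .inputPreProcessor(1, new CnnToFeedForwardPreProcessor(26, 26, 8))
            .setInputType(InputType.convolutional(28, 28, 1))
            .build();
    }
}
```
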
* **CnnToFeedForwardPreProcessor** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor/CnnToFeedForwardPreProcessor.java)) - handles the activation reshaping necessary to transition from a CNN layer (ConvolutionLayer, SubsamplingLayer, etc) to DenseLayer/OutputLayer etc.
* **CnnToRnnPreProcessor** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor/CnnToRnnPreProcessor.java)) - handles the reshaping necessary to transition from an (effectively time distributed) CNN layer to an RNN layer.
* **ComposableInputPreProcessor** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor/ComposableInputPreProcessor.java)) - a simple class that allows multiple preprocessors to be chained and used on a single layer
* **FeedForwardToCnnPreProcessor** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor/FeedForwardToCnnPreProcessor.java)) - handles activation reshaping to transition from a row vector (per example) to a CNN layer. Note that this transition/preprocessor only makes sense if the activations are actually CNN activations, but have been 'flattened' to a row vector.
* **FeedForwardToRnnPreProcessor** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor/FeedForwardToRnnPreProcessor.java)) - handles the transition from a (time distributed) feed-forward layer to an RNN layer
* **RnnToCnnPreProcessor** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor/RnnToCnnPreProcessor.java)) - handles the transition from a sequence of CNN activations with shape ```[minibatch, depth*height*width, timeSeriesLength]``` to time-distributed ```[numExamples*timeSeriesLength, numChannels, inputWidth, inputHeight]``` format
* **RnnToFeedForwardPreProcessor** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/preprocessor/RnnToFeedForwardPreProcessor.java)) - handles the transition from time series activations (shape ```[minibatch,size,timeSeriesLength]```) to time-distributed feed-forward (shape ```[minibatch*tsLength,size]```) activations.

### <a name="listeners">Iteration/Training Listeners</a>
TrainingListener: extends IterationListener. Has a number of additional methods that are called at different stages of training (e.g., forward pass, gradient calculation, epoch start/end).

Neither type (iteration/training) is called outside of training (i.e., during output or feed-forward methods).

* **ScoreIterationListener** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/listeners/ScoreIterationListener.java), Javadoc) - Logs the loss function score every N training iterations
* **PerformanceListener** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/listeners/PerformanceListener.java), Javadoc) - Logs performance (examples per sec, minibatches per sec, ETL time), and optionally score, every N training iterations.
* **EvaluativeListener** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/listeners/EvaluativeListener.java), Javadoc) - Evaluates network performance on a test set every N iterations or epochs. Also has a system for callbacks, to (for example) save the evaluation results.
* **CheckpointListener** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/listeners/checkpoint/CheckpointListener.java), Javadoc) - Save network checkpoints periodically - based on epochs, iterations or time (or some combination of all three).
* **StatsListener** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-ui-parent/deeplearning4j-ui-model/src/main/java/org/deeplearning4j/ui/stats/StatsListener.java)) - Main listener for DL4J's web-based network training user interface. See [visualization page](https://deeplearning4j.org/visualization) for more details.
* **CollectScoresIterationListener** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/listeners/CollectScoresIterationListener.java), Javadoc) - Similar to ScoreIterationListener, but stores scores internally in a list (for later retrieval) instead of logging scores
* **TimeIterationListener** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/optimize/listeners/TimeIterationListener.java), Javadoc) - Attempts to estimate time until training completion, based on current speed and specified total number of iterations

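Listeners are attached to a network with ```setListeners(...)``` before training; a minimal sketch (the listener frequencies here are arbitrary):

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.optimize.listeners.PerformanceListener;
import org.deeplearning4j.optimize.listeners.ScoreIterationListener;

public class ListenerSketch {
    // Attach listeners to an already-initialized network before calling fit(...)
    public static void addListeners(MultiLayerNetwork net) {
        // Log the loss function score every 10 iterations,
        // and performance stats (examples/sec, ETL time) every 100 iterations
        net.setListeners(new ScoreIterationListener(10), new PerformanceListener(100));
    }
}
```
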
### <a name="evaluation">Evaluation</a>
ND4J has a number of classes for evaluating the performance of a network against a test set.

Note: in 1.0.0-beta3 (November 2018), all evaluation classes were moved from DL4J to ND4J; previously they were in DL4J.

* **Evaluation** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/evaluation/classification/Evaluation.java)) - Used for the evaluation of multi-class classifiers (assumes standard one-hot labels, and softmax probability distribution over N classes for predictions). Calculates a number of metrics - accuracy, precision, recall, F1, F-beta, Matthews correlation coefficient, confusion matrix. Optionally calculates top N accuracy, custom binary decision thresholds, and cost arrays (for non-binary case). Typically used for softmax + mcxent/negative-log-likelihood networks.
* **EvaluationBinary** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/evaluation/classification/EvaluationBinary.java)) - A multi-label binary version of the Evaluation class. Each network output is assumed to be a separate/independent binary class, with probability 0 to 1 independent of all other outputs. Typically used for sigmoid + binary cross entropy networks.
* **EvaluationCalibration** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/evaluation/classification/EvaluationCalibration.java)) - Used to evaluate the calibration of a binary or multi-class classifier. Produces reliability diagrams, residual plots, and histograms of probabilities. Export plots to HTML using the [EvaluationTools](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-core/src/main/java/org/deeplearning4j/evaluation/EvaluationTools.java) ```exportevaluationCalibrationToHtmlFile``` method
* **ROC** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/evaluation/classification/ROC.java)) - Used for single output binary classifiers only - i.e., networks with nOut(1) + sigmoid, or nOut(2) + softmax. Supports 2 modes: thresholded (approximate) or exact (the default). Calculates area under ROC curve, area under precision-recall curve. Plot ROC and P-R curves to HTML using [EvaluationTools](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-core/src/main/java/org/deeplearning4j/evaluation/EvaluationTools.java)
* **ROCBinary** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/evaluation/classification/ROCBinary.java)) - a version of ROC that is used for multi-label binary networks (i.e., sigmoid + binary cross entropy), where each network output is assumed to be an independent binary variable.
* **ROCMultiClass** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/evaluation/classification/ROCMultiClass.java)) - a version of ROC that is used for multi-class (non-binary) networks (i.e., softmax + mcxent/negative-log-likelihood networks). As ROC metrics are only defined for binary classification, this treats the multi-class output as a set of 'one-vs-all' binary classification problems.
* **RegressionEvaluation** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/evaluation/regression/RegressionEvaluation.java)) - An evaluation class used for regression models (including multi-output regression models). Reports metrics such as mean-squared error (MSE), mean-absolute error, etc for each output/column.

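As a minimal sketch, a trained classifier can be evaluated against a test set iterator as follows:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.evaluation.classification.Evaluation;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class EvaluationSketch {
    // Evaluate a trained classifier on a test set iterator
    public static void evaluate(MultiLayerNetwork net, DataSetIterator testData) {
        Evaluation eval = net.evaluate(testData);
        System.out.println(eval.stats());   // accuracy, precision, recall, F1, confusion matrix
    }
}
```
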
## <a name="saving">Network Saving and Loading</a>
```MultiLayerNetwork.save(File)``` and ```MultiLayerNetwork.load(File)``` methods can be used to save and load models. These use ModelSerializer internally. Similar save/load methods are also available for ComputationGraph.

MultiLayerNetwork and ComputationGraph can be saved using the [ModelSerializer](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/util/ModelSerializer.java) class - and specifically the ```writeModel```, ```restoreMultiLayerNetwork``` and ```restoreComputationGraph``` methods.

Examples: [Saving and loading network](https://github.com/eclipse/deeplearning4j-examples/tree/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/modelsaving)

Networks can be trained further after saving and loading: however, be sure to load the 'updater' (i.e., the historical state for updaters such as momentum). If no further training is required, the updater state can be omitted to save disk space and memory.
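A minimal save/reload sketch using ModelSerializer (the file name is hypothetical; pass ```true``` to save and load the updater state if further training is planned):

```java
import java.io.File;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;

public class SaveLoadSketch {
    public static void saveAndReload(MultiLayerNetwork net) throws Exception {
        File file = new File("trained-net.zip");     // hypothetical file name
        boolean saveUpdater = true;                  // keep updater state for further training
        ModelSerializer.writeModel(net, file, saveUpdater);

        MultiLayerNetwork restored = ModelSerializer.restoreMultiLayerNetwork(file, saveUpdater);
        // Equivalent convenience methods: net.save(file, saveUpdater) and MultiLayerNetwork.load(file, saveUpdater)
    }
}
```
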
This section lists the various configuration options that Deeplearning4j supports.

### <a name="config-afn">Activation Functions</a>

Activation functions can be defined in one of two ways:
(a) By passing an [Activation](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/Activation.java) enumeration value to the configuration - for example, ```.activation(Activation.TANH)```

(b) By passing an [IActivation](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/IActivation.java) instance - for example, ```.activation(new ActivationSigmoid())```

Note that Deeplearning4j supports custom activation functions, which can be defined by extending [BaseActivationFunction](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl)

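For example (a minimal sketch; the layer sizes and the leaky ReLU alpha are arbitrary), the same ```activation(...)``` builder method accepts either form:

```java
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.activations.impl.ActivationLReLU;

public class ActivationConfigSketch {
    public static void main(String[] args) {
        // (a) Using the Activation enumeration
        DenseLayer tanhLayer = new DenseLayer.Builder()
            .nIn(10).nOut(10)
            .activation(Activation.TANH)
            .build();

        // (b) Using an IActivation instance - here a leaky ReLU with a custom alpha
        DenseLayer leakyLayer = new DenseLayer.Builder()
            .nIn(10).nOut(10)
            .activation(new ActivationLReLU(0.1))
            .build();
    }
}
```
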
List of supported activation functions:
* **CUBE** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationCube.java)) - ```f(x) = x^3```
* **ELU** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationELU.java)) - Exponential linear unit ([Reference](https://arxiv.org/abs/1511.07289))
* **HARDSIGMOID** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationHardSigmoid.java)) - a piecewise linear version of the standard sigmoid activation function. ```f(x) = min(1, max(0, 0.2*x + 0.5))```
* **HARDTANH** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationHardTanH.java)) - a piecewise linear version of the standard tanh activation function.
* **IDENTITY** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationIdentity.java)) - a 'no op' activation function: ```f(x) = x```
* **LEAKYRELU** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationLReLU.java)) - leaky rectified linear unit. ```f(x) = max(0, x) + alpha * min(0, x)``` with ```alpha=0.01``` by default.
* **RATIONALTANH** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationRationalTanh.java)) - ```tanh(y) ~ sgn(y) * { 1 - 1/(1+|y|+y^2+1.41645*y^4)}``` which approximates ```f(x) = 1.7159 * tanh(2x/3)```, but should be faster to execute. ([Reference](https://arxiv.org/abs/1508.01292))
* **RELU** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationReLU.java)) - standard rectified linear unit: ```f(x) = x``` if ```x>0``` or ```f(x) = 0``` otherwise
* **RRELU** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationRReLU.java)) - randomized rectified linear unit. Deterministic during test time. ([Reference](http://arxiv.org/abs/1505.00853))
* **SIGMOID** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationSigmoid.java)) - standard sigmoid activation function, ```f(x) = 1 / (1 + exp(-x))```
* **SOFTMAX** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationSoftmax.java)) - standard softmax activation function
* **SOFTPLUS** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationSoftPlus.java)) - ```f(x) = log(1+e^x)``` - shape is similar to a smooth version of the RELU activation function
* **SOFTSIGN** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationSoftSign.java)) - ```f(x) = x / (1+|x|)``` - somewhat similar in shape to the standard tanh activation function (faster to calculate).
* **TANH** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationTanH.java)) - standard tanh (hyperbolic tangent) activation function
* **RECTIFIEDTANH** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationRectifiedTanh.java)) - ```f(x) = max(0, tanh(x))```
* **SELU** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationSELU.java)) - scaled exponential linear unit - used with [self normalizing neural networks](https://arxiv.org/abs/1706.02515)
* **SWISH** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/activations/impl/ActivationSwish.java)) - Swish activation function, ```f(x) = x * sigmoid(x)``` ([Reference](https://arxiv.org/abs/1710.05941))

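As a usage sketch (not part of the original cheatsheet; the layer sizes are placeholders), an activation function is normally selected per layer via the ```Activation``` enum:

```java
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.nd4j.linalg.activations.Activation;

// Minimal sketch: pick any of the activations listed above for a layer.
DenseLayer hidden = new DenseLayer.Builder()
        .nIn(784)
        .nOut(256)
        .activation(Activation.RELU)   // e.g. Activation.TANH, Activation.SWISH, ...
        .build();
```

The same ```.activation(...)``` setting can also be applied once on the ```NeuralNetConfiguration.Builder``` to act as the default for all layers that do not override it.
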
### <a name="config-init">Weight Initialization</a>

Weight initialization refers to the method by which the initial parameters for a new network should be set.

Weight initializations are usually defined using the [WeightInit](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/weights/WeightInit.java) enumeration.

Custom weight initializations can be specified using ```.weightInit(WeightInit.DISTRIBUTION).dist(new NormalDistribution(0, 1))```, for example. As of master (but not the 0.9.1 release), ```.weightInit(new NormalDistribution(0, 1))``` is also possible, which is equivalent to the previous approach.
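
For example (a sketch, not from the original doc; the layer sizes are placeholders), a per-layer weight initialization can be set via the enumeration:

```java
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.weights.WeightInit;

// Sketch: select a weight initialization scheme for a single layer.
DenseLayer layer = new DenseLayer.Builder()
        .nIn(784).nOut(256)
        .weightInit(WeightInit.XAVIER)   // or WeightInit.RELU, WeightInit.NORMAL, ...
        .build();
```
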

An 'updater' in DL4J is a class that takes raw gradients and modifies them to become the actual parameter updates that are applied to the network (for example by applying a learning rate, momentum, or adaptive scaling).

The [CS231n course notes](http://cs231n.github.io/neural-networks-3/#ada) have a good explanation of some of these updaters.

Supported updaters in Deeplearning4j:

* **AdaDelta** - ([Source](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/learning/config/AdaDelta.java)) - [Reference](https://arxiv.org/abs/1212.5701)
* **AdaGrad** - ([Source](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/learning/config/AdaGrad.java)) - [Reference](http://jmlr.org/papers/v12/duchi11a.html)
* **AdaMax** - ([Source](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/learning/config/AdaMax.java)) - A variant of the Adam updater - [Reference](http://arxiv.org/abs/1412.6980)
* **Adam** - ([Source](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/learning/config/Adam.java))
* **Nadam** - ([Source](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/learning/config/Nadam.java)) - A variant of the Adam updater, using the Nesterov momentum update rule - [Reference](https://arxiv.org/abs/1609.04747)
* **Nesterovs** - ([Source](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/learning/config/Nesterovs.java)) - Nesterov momentum updater
* **NoOp** - ([Source](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/learning/config/NoOp.java)) - A 'no operation' updater. That is, gradients are not modified at all by this updater. Mathematically equivalent to the SGD updater with a learning rate of 1.0
* **RmsProp** - ([Source](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/learning/config/RmsProp.java)) - [Reference - slide 29](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf)
* **Sgd** - ([Source](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/learning/config/Sgd.java)) - Standard stochastic gradient descent updater. This updater applies a learning rate only.

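A configuration sketch (assumed typical usage, not prescribed by this doc): an updater instance is passed to the network configuration builder.

```java
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.learning.config.Nesterovs;

// Sketch: updaters are configured on the NeuralNetConfiguration builder.
NeuralNetConfiguration.Builder builder = new NeuralNetConfiguration.Builder()
        .updater(new Adam(1e-3));               // Adam with a fixed learning rate of 1e-3

// Alternative: SGD with Nesterov momentum
// builder.updater(new Nesterovs(0.01, 0.9));   // learning rate 0.01, momentum 0.9
```
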
### <a name="config-schedules">Learning Rate Schedules</a>

You can plot/inspect the learning rate that will be used at any point during training.

Available schedules:

* **ExponentialSchedule** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/schedule/ExponentialSchedule.java)) - Implements ```value(i) = initialValue * gamma^i```
* **InverseSchedule** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/schedule/InverseSchedule.java)) - Implements ```value(i) = initialValue * (1 + gamma * i)^(-power)```
* **MapSchedule** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/schedule/MapSchedule.java)) - Learning rate schedule based on a user-provided map. Note that the provided map must have a value for iteration/epoch 0. Has a builder class to conveniently define a schedule.
* **PolySchedule** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/schedule/PolySchedule.java)) - Implements ```value(i) = initialValue * (1 + i/maxIter)^(-power)```
* **SigmoidSchedule** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/schedule/SigmoidSchedule.java)) - Implements ```value(i) = initialValue * 1.0 / (1 + exp(-gamma * (iter - stepSize)))```
* **StepSchedule** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/schedule/StepSchedule.java)) - Implements ```value(i) = initialValue * gamma^( floor(iter/step) )```

Note that custom schedules can be created by implementing the ISchedule interface.
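
For example (a sketch assuming the MapSchedule builder mentioned above; values are arbitrary), a schedule is passed to an updater in place of a fixed learning rate:

```java
import org.nd4j.linalg.learning.config.Sgd;
import org.nd4j.linalg.schedule.ISchedule;
import org.nd4j.linalg.schedule.MapSchedule;
import org.nd4j.linalg.schedule.ScheduleType;

// Sketch: a map-based learning rate schedule, keyed by iteration number.
ISchedule lrSchedule = new MapSchedule.Builder(ScheduleType.ITERATION)
        .add(0, 0.01)       // iterations 0..999: learning rate 0.01
        .add(1000, 0.005)   // iterations 1000..2999: 0.005
        .add(3000, 0.001)   // from iteration 3000 onwards: 0.001
        .build();

// The schedule replaces the fixed learning rate in the updater:
Sgd sgd = new Sgd(lrSchedule);

// The learning rate at a given point can be inspected via the schedule
// (valueAt(iteration, epoch) is the assumed ISchedule method here):
// double lr = lrSchedule.valueAt(2000, 0);
```
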
L1 and L2 regularization is applied by default on the weight parameters only.

All dropout types are applied at training time only. They are not applied at test time.

* **Dropout** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/dropout/Dropout.java)) - Each input activation x is independently set to (0, with probability 1-p) or (x/p with probability p)
* **GaussianDropout** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/dropout/GaussianDropout.java)) - This is a multiplicative Gaussian noise (mean 1) on the input activations. Each input activation x is independently set to: ```x * y```, where ```y ~ N(1, stdev = sqrt((1-rate)/rate))```
* **GaussianNoise** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/dropout/GaussianNoise.java)) - Applies additive, mean-zero Gaussian noise to the input - i.e., ```x = x + N(0,stddev)```
* **AlphaDropout** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/dropout/AlphaDropout.java)) - AlphaDropout is a dropout technique proposed by [Klambauer et al. 2017 - Self-Normalizing Neural Networks](https://arxiv.org/abs/1706.02515). Designed for self-normalizing neural networks (SELU activation, NORMAL weight init). Attempts to keep both the mean and variance of the post-dropout activations the same (in expectation) as before alpha dropout was applied

Note that (as of current master - but not 0.9.1) the dropout parameters can also be specified according to any of the schedule classes mentioned in the Learning Rate Schedules section.
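
A per-layer configuration sketch (assumed usage; the layer sizes are placeholders, and the argument is the retain probability p described above):

```java
import org.deeplearning4j.nn.conf.dropout.Dropout;
import org.deeplearning4j.nn.conf.layers.DenseLayer;

// Sketch: dropout configured on a single layer.
// Each activation is kept with probability 0.8 (and scaled by 1/0.8 when kept).
DenseLayer layer = new DenseLayer.Builder()
        .nIn(256).nOut(128)
        .dropOut(new Dropout(0.8))   // the shorthand .dropOut(0.8) is equivalent
        .build();
```
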

As with dropout, DropConnect / weight noise is applied at training time only.

* **DropConnect** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/weightnoise/DropConnect.java)) - DropConnect is similar to dropout, but applied to the parameters of a network (instead of the input activations). [Reference](https://cs.nyu.edu/~wanli/dropc/dropc.pdf)
* **WeightNoise** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/weightnoise/WeightNoise.java)) - Applies noise of the specified distribution to the weights at training time. Both additive and multiplicative modes are supported - when additive, noise should have mean 0; when multiplicative, noise should have mean 1

### <a name="config-constraints">Constraints</a>

Constraints are deterministic limitations that are placed on a model's parameters at the end of each iteration (after the parameter update has occurred). They can be thought of as a type of regularization.

* **MaxNormConstraint** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/constraint/MaxNormConstraint.java)) - Constrain the maximum L2 norm of the incoming weights for each unit to be less than or equal to the specified value. If the L2 norm exceeds the specified value, the weights will be scaled down to satisfy the constraint.
* **MinMaxNormConstraint** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/constraint/MinMaxNormConstraint.java)) - Constrain the minimum AND maximum L2 norm of the incoming weights for each unit to be between the specified values. Weights will be scaled up/down if required.
* **NonNegativeConstraint** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/constraint/NonNegativeConstraint.java)) - Constrain all parameters to be non-negative. Negative parameters will be replaced with 0.
* **UnitNormConstraint** - ([Source](https://github.com/eclipse/deeplearning4j/blob/b841c0f549194dbdf88b42836df662d9b8ea8c6d/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/constraint/UnitNormConstraint.java)) - Constrain the L2 norm of the incoming weights for each unit to be 1.0.

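A configuration sketch (assuming the builder's constraint setters; not prescribed by this doc) applying a constraint to all weight parameters:

```java
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.constraint.NonNegativeConstraint;

// Sketch: apply a constraint to all weight parameters in the network.
// Per-layer constraint setters also exist on the individual layer builders.
NeuralNetConfiguration.Builder builder = new NeuralNetConfiguration.Builder()
        .constrainWeights(new NonNegativeConstraint());
```
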
## <a name="data">Data Classes</a>

MultiDataSetIterator is similar to DataSetIterator, but returns MultiDataSet objects.

These iterators download their data as required. The actual datasets they return are not customizable.

* **MnistDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-datasets/src/main/java/org/deeplearning4j/datasets/iterator/impl/MnistDataSetIterator.java)) - DataSetIterator for the well-known MNIST digits dataset. By default, returns a row vector (1x784), with values normalized to 0 to 1 range. Use ```.setInputType(InputType.convolutionalFlat())``` to use with CNNs.
* **EmnistDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-datasets/src/main/java/org/deeplearning4j/datasets/iterator/impl/EmnistDataSetIterator.java)) - Similar to the MNIST digits dataset, but with more examples, and also letters. Includes multiple different splits (letters only, digits only, letters + digits, etc). Same 1x784 format as MNIST, hence (other than different number of labels for some splits) can be used as a drop-in replacement for MnistDataSetIterator. [Reference 1](https://www.nist.gov/itl/iad/image-group/emnist-dataset), [Reference 2](https://arxiv.org/abs/1702.05373)
* **IrisDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-datasets/src/main/java/org/deeplearning4j/datasets/iterator/impl/IrisDataSetIterator.java)) - An iterator for the well known Iris dataset. 4 features, 3 output classes.
* **CifarDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-datasets/src/main/java/org/deeplearning4j/datasets/iterator/impl/CifarDataSetIterator.java)) - An iterator for the CIFAR images dataset. 10 classes, 4d features/activations format for CNNs in DL4J: ```[minibatch,channels,height,width] = [minibatch,3,32,32]```. Features are *not* normalized - instead, are in the range 0 to 255.
* **LFWDataSetIterator** - ([Source]())
* **TinyImageNetDataSetIterator** ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-datasets/src/main/java/org/deeplearning4j/datasets/iterator/impl/TinyImageNetDataSetIterator.java)) - A subset of the standard imagenet dataset; 200 classes, 500 images per class
* **UciSequenceDataSetIterator** ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-datasets/src/main/java/org/deeplearning4j/datasets/iterator/impl/UciSequenceDataSetIterator.java)) - UCI synthetic control time series dataset

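For example (a sketch; the batch size and seed are arbitrary), the built-in MNIST iterators can be created directly:

```java
import java.io.IOException;

import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

// Sketch: built-in iterators download their data on first use.
public static void mnistExample() throws IOException {
    int batchSize = 64;
    DataSetIterator mnistTrain = new MnistDataSetIterator(batchSize, true, 12345);   // training split
    DataSetIterator mnistTest  = new MnistDataSetIterator(batchSize, false, 12345);  // test split
    // mnistTrain.next() returns a DataSet with [batchSize, 784] features and [batchSize, 10] one-hot labels
}
```
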
#### <a name="data-iter-user">Iterators - User Provided Data</a>

The iterators in this subsection are used with user-provided data.

* **RecordReaderDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-datavec-iterators/src/main/java/org/deeplearning4j/datasets/datavec/RecordReaderDataSetIterator.java)) - an iterator that takes a DataVec record reader (such as CsvRecordReader or ImageRecordReader) and handles conversion to DataSets, batching, masking, etc. One of the most commonly used iterators in DL4J. Handles non-sequence data only, as input (i.e., RecordReader, not SequenceRecordReader).
* **RecordReaderMultiDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-datavec-iterators/src/main/java/org/deeplearning4j/datasets/datavec/RecordReaderMultiDataSetIterator.java)) - the MultiDataSet version of RecordReaderDataSetIterator, that supports multiple readers. Has a builder pattern for creating more complex data pipelines (such as different subsets of a reader's output to different input/output arrays, conversion to one-hot, etc). Handles both sequence and non-sequence data as input.
* **SequenceRecordReaderDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-datavec-iterators/src/main/java/org/deeplearning4j/datasets/datavec/SequenceRecordReaderDataSetIterator.java)) - The sequence (SequenceRecordReader) version of RecordReaderDataSetIterator. Users may be better off using RecordReaderMultiDataSetIterator, in conjunction with
* **DoublesDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/DoublesDataSetIterator.java))
* **FloatsDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/FloatsDataSetIterator.java))
* **INDArrayDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/INDArrayDataSetIterator.java))

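A sketch of the most common pattern (the CSV path, column indices, and class count below are hypothetical placeholders, not from the original doc):

```java
import java.io.File;

import org.datavec.api.records.reader.RecordReader;
import org.datavec.api.records.reader.impl.csv.CSVRecordReader;
import org.datavec.api.split.FileSplit;
import org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

// Sketch: CSV file -> RecordReader -> DataSetIterator.
// Assumes a CSV with 4 feature columns followed by an integer class label in column index 4.
public static DataSetIterator csvIterator() throws Exception {
    RecordReader rr = new CSVRecordReader();                   // defaults: skip no lines, comma delimiter
    rr.initialize(new FileSplit(new File("data/iris.csv")));   // placeholder path
    int batchSize = 32;
    int labelIndex = 4;                                        // index of the label column
    int numClasses = 3;                                        // number of output classes
    return new RecordReaderDataSetIterator(rr, batchSize, labelIndex, numClasses);
}
```
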
#### <a name="data-iter-util">Iterators - Adapter and Utility Iterators</a>

* **MultiDataSetIteratorAdapter** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/datasets/iterator/impl/MultiDataSetIteratorAdapter.java)) - Wrap a DataSetIterator to convert it to a MultiDataSetIterator
* **SingletonMultiDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/impl/SingletonMultiDataSetIterator.java)) - Wrap a MultiDataSet into a MultiDataSetIterator that returns one MultiDataSet (i.e., the wrapped MultiDataSet is *not* split up)
* **AsyncDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/AsyncDataSetIterator.java)) - Used automatically by MultiLayerNetwork and ComputationGraph where appropriate. Implements asynchronous prefetching of datasets to improve performance.
* **AsyncMultiDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/AsyncMultiDataSetIterator.java)) - Used automatically by ComputationGraph where appropriate. Implements asynchronous prefetching of MultiDataSets to improve performance.
* **AsyncShieldDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/AsyncShieldDataSetIterator.java)) - Generally used only for debugging. Stops MultiLayerNetwork and ComputationGraph from using an AsyncDataSetIterator.
* **AsyncShieldMultiDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/AsyncShieldMultiDataSetIterator.java)) - The MultiDataSetIterator version of AsyncShieldDataSetIterator
* **EarlyTerminationDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/EarlyTerminationDataSetIterator.java)) - Wraps another DataSetIterator, ensuring that only a specified (maximum) number of minibatches (DataSet) objects are returned between resets. Can be used to 'cut short' an iterator, returning only the first N DataSets.
* **EarlyTerminationMultiDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/EarlyTerminationMultiDataSetIterator.java)) - The MultiDataSetIterator version of EarlyTerminationDataSetIterator
* **ExistingDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/ExistingDataSetIterator.java)) - Convert an ```Iterator<DataSet>``` or ```Iterable<DataSet>``` to a DataSetIterator. Does not split the underlying DataSet objects
* **FileDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/file/FileDataSetIterator.java)) - An iterator that iterates over DataSet files that have been previously saved with ```DataSet.save(File)```. Supports randomization, filtering, different output batch size vs. saved DataSet batch size, etc.
* **FileMultiDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/file/FileMultiDataSetIterator.java)) - A MultiDataSet version of FileDataSetIterator
* **IteratorDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/IteratorDataSetIterator.java)) - Convert an ```Iterator<DataSet>``` to a DataSetIterator. Unlike ExistingDataSetIterator, the underlying DataSet objects may be split/combined - i.e., the minibatch size may differ for the output, vs. the input iterator.
* **IteratorMultiDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/IteratorMultiDataSetIterator.java)) - The ```Iterator<MultiDataSet>``` version of IteratorDataSetIterator
* **MultiDataSetWrapperIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/MultiDataSetWrapperIterator.java)) - Convert a MultiDataSetIterator to a DataSetIterator. Note that this is only possible if the number of features and labels arrays is equal to 1.
* **MultipleEpochsIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/MultipleEpochsIterator.java)) - Treat multiple passes (epochs) of the underlying iterator as a single epoch, when training.
* **WorkspaceShieldDataSetIterator** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-data/deeplearning4j-utility-iterators/src/main/java/org/deeplearning4j/datasets/iterator/WorkspacesShieldDataSetIterator.java)) - Generally used only for debugging, and not usually by users. Detaches/migrates DataSets coming out of the underlying DataSetIterator.

### <a name="data-norm">Data Normalization</a>

In general, you should fit *only* on the training data, and do ```trainData.setPreProcessor(normalizer)``` (and the same for the test data) with the single fitted normalizer.

Note that where appropriate (NormalizerStandardize, NormalizerMinMaxScaler) statistics such as mean/standard-deviation/min/max are shared across time (for time series) and across image x/y locations (but not depth/channels - for image data).

Data normalization example: [link](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/dataexamples/PreprocessNormalizerExample.java)

**Available normalizers: DataSet / DataSetIterator**

* **ImagePreProcessingScaler** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/dataset/api/preprocessor/ImagePreProcessingScaler.java)) - Applies min-max scaling to image activations. Default settings do 0-255 input to 0-1 output (but is configurable). Note that unlike the other normalizers here, this one does not rely on statistics (mean/min/max etc) collected from the data, hence the ```normalizer.fit(trainData)``` step is unnecessary (is a no-op).
* **NormalizerStandardize** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/dataset/api/preprocessor/NormalizerStandardize.java)) - normalizes each feature value independently (and optionally label values) to have 0 mean and a standard deviation of 1
* **NormalizerMinMaxScaler** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/dataset/api/preprocessor/NormalizerMinMaxScaler.java)) - normalizes each feature value independently (and optionally label values) to lie between a minimum and maximum value (by default between 0 and 1)
* **VGG16ImagePreProcessor** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/dataset/api/preprocessor/VGG16ImagePreProcessor.java)) - This is a preprocessor specifically for VGG16. It subtracts the mean RGB value, computed on the training set, from each pixel as reported in [Link](https://arxiv.org/pdf/1409.1556.pdf)

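A typical usage sketch (trainIter/testIter stand in for DataSetIterators defined elsewhere; the method wrapper is only for illustration):

```java
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
import org.nd4j.linalg.dataset.api.preprocessor.DataNormalization;
import org.nd4j.linalg.dataset.api.preprocessor.NormalizerStandardize;

// Sketch: fit on the training data only, then attach the same normalizer to both iterators.
public static void normalize(DataSetIterator trainIter, DataSetIterator testIter) {
    DataNormalization normalizer = new NormalizerStandardize();
    normalizer.fit(trainIter);              // collect mean/stdev statistics from the training data
    trainIter.setPreProcessor(normalizer);  // normalize each DataSet as it is returned
    testIter.setPreProcessor(normalizer);   // apply the *training* statistics to the test data
}
```
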
**Available normalizers: MultiDataSet / MultiDataSetIterator**

* **ImageMultiPreProcessingScaler** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/dataset/api/preprocessor/ImageMultiPreProcessingScaler.java)) - A MultiDataSet/MultiDataSetIterator version of ImagePreProcessingScaler
* **MultiNormalizerStandardize** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/dataset/api/preprocessor/MultiNormalizerStandardize.java)) - MultiDataSet/MultiDataSetIterator version of NormalizerStandardize
* **MultiNormalizerMinMaxScaler** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/dataset/api/preprocessor/MultiNormalizerMinMaxScaler.java)) - MultiDataSet/MultiDataSetIterator version of NormalizerMinMaxScaler
* **MultiNormalizerHybrid** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/dataset/api/preprocessor/MultiNormalizerHybrid.java)) - A MultiDataSet normalizer that can combine different normalization types (standardize, min/max etc) for different input/feature and output/label arrays.

### <a name="transfer">Transfer Learning</a>

Deeplearning4j has classes/utilities for performing transfer learning - i.e., taking an existing network, and modifying some of the layers (optionally freezing others so their parameters don't change). For example, an image classifier could be trained on ImageNet, then applied to a new/different dataset. Both MultiLayerNetwork and ComputationGraph can be used with transfer learning - frequently starting from a pre-trained model from the model zoo (see next section), though any MultiLayerNetwork/ComputationGraph can be used.

Link: [Transfer learning examples](https://github.com/eclipse/deeplearning4j-examples/tree/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/transferlearning/vgg16)

The main class for transfer learning is [TransferLearning](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/transferlearning/TransferLearning.java). This class has a builder pattern that can be used to add/remove layers, freeze layers, etc.
[FineTuneConfiguration](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/transferlearning/FineTuneConfiguration.java) can be used here to specify the learning rate and other settings for the non-frozen layers.

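A builder sketch (assumed usage, not from the original doc; the layer names "fc2"/"predictions" and the nIn value are VGG16-style placeholders):

```java
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.transferlearning.FineTuneConfiguration;
import org.deeplearning4j.nn.transferlearning.TransferLearning;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

// Sketch: freeze the feature-extraction layers of a pretrained graph and replace the output layer.
public static ComputationGraph adaptForNewTask(ComputationGraph pretrained, int numClasses) {
    FineTuneConfiguration ftc = new FineTuneConfiguration.Builder()
            .updater(new Adam(1e-4))   // settings for the layers that remain trainable
            .build();

    return new TransferLearning.GraphBuilder(pretrained)
            .fineTuneConfiguration(ftc)
            .setFeatureExtractor("fc2")                  // freeze everything up to and including "fc2" (placeholder name)
            .removeVertexKeepConnections("predictions")  // drop the old output layer (placeholder name)
            .addLayer("predictions",
                    new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                            .nIn(4096).nOut(numClasses)  // nIn depends on the pretrained architecture
                            .activation(Activation.SOFTMAX)
                            .build(),
                    "fc2")
            .build();
}
```
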
### <a name="zoo">Trained Model Library - Model Zoo</a>

Link: [Deeplearning4j Model Zoo](https://deeplearning4j.org/model-zoo)

Models available in DL4J's model zoo:

* **AlexNet** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/AlexNet.java))
|
||||
* **Darknet19** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/Darknet19.java))
|
||||
* **FaceNetNN4Small2** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/FaceNetNN4Small2.java))
|
||||
* **InceptionResNetV1** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/InceptionResNetV1.java))
|
||||
* **LeNet** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/LeNet.java))
|
||||
* **ResNet50** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/ResNet50.java))
|
||||
* **SimpleCNN** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/SimpleCNN.java))
|
||||
* **TextGenerationLSTM** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/TextGenerationLSTM.java))
|
||||
* **TinyYOLO** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/TinyYOLO.java))
|
||||
* **VGG16** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/VGG16.java))
|
||||
* **VGG19** - ([Source](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-zoo/src/main/java/org/deeplearning4j/zoo/model/VGG19.java))
|
||||
|
||||
|
||||
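As an illustration of how these zoo models are typically used, here is a hedged sketch that downloads ImageNet weights for VGG16. It assumes a recent zoo API in which models expose a builder and pretrained weights via `initPretrained`; older releases construct the model classes directly instead.

```java
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.zoo.PretrainedType;
import org.deeplearning4j.zoo.ZooModel;
import org.deeplearning4j.zoo.model.VGG16;

public class ZooSketch {
    public static void main(String[] args) throws Exception {
        ZooModel zooModel = VGG16.builder().build();
        // Downloads (and caches) the ImageNet weights the first time it is called
        ComputationGraph vgg16 = (ComputationGraph) zooModel.initPretrained(PretrainedType.IMAGENET);
        System.out.println(vgg16.summary());
    }
}
```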
**Note**: Trained Keras models (not provided by DL4J) may also be imported, using Deeplearning4j's Keras model import functionality.

@ -635,7 +635,7 @@ TransformProcess tp = new TransformProcess.Builder(schema)

```java
JavaRDD<List<Writable>> processedData = SparkTransformExecutor.execute(parsedInputData, tp);
```

We recommend having a look at the [DataVec examples](https://github.com/eclipse/deeplearning4j-examples/tree/master/datavec-examples/src/main/java/org/datavec/transform) before creating more complex transformations.
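For readers who want to see the pieces above end to end, here is a small, hedged sketch of defining a `Schema` and a `TransformProcess` locally; the column names are made up for illustration, and the Spark execution step shown above would then run this `tp` over an RDD.

```java
import org.datavec.api.transform.TransformProcess;
import org.datavec.api.transform.schema.Schema;

public class TransformSketch {
    public static void main(String[] args) {
        // Describe the raw columns of the incoming records
        Schema inputSchema = new Schema.Builder()
                .addColumnString("name")
                .addColumnInteger("age")
                .addColumnDouble("income")
                .build();

        // Drop the free-text column so only numeric features remain
        TransformProcess tp = new TransformProcess.Builder(inputSchema)
                .removeColumns("name")
                .build();

        System.out.println(tp.getFinalSchema());
    }
}
```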
### Evaluation

@ -18,7 +18,7 @@ Unlike other machine learning or deep learning frameworks, DL4J treats the tasks

Before the algorithm can start learning, you have to prepare the data, even if you already have a trained model. Preparing data means loading it and putting it in the right shape and value range (e.g. normalization, zero-mean and unit variance). Building these processes from scratch is error prone, so use DataVec wherever possible.

Deeplearning4j works with a lot of different data types, such as images, CSV, plain text and, with [Apache Camel](https://camel.apache.org/) [integration](https://github.com/eclipse/deeplearning4j/tree/master/datavec/datavec-camel), pretty much any other data type you can think of.

To use DataVec, you will need one of the implementations of the [RecordReader](/api/{{page.version}}/org/datavec/api/records/reader/RecordReader.html) interface along with the [RecordReaderDataSetIterator](/api/{{page.version}}/org/deeplearning4j/datasets/datavec/RecordReaderDataSetIterator.html).
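A minimal sketch of that pairing, assuming a hypothetical `iris.csv` with the label in the fifth column; constructor details differ slightly between DataVec versions.

```java
import java.io.File;
import org.datavec.api.records.reader.RecordReader;
import org.datavec.api.records.reader.impl.csv.CSVRecordReader;
import org.datavec.api.split.FileSplit;
import org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class CsvIteratorSketch {
    public static void main(String[] args) throws Exception {
        RecordReader rr = new CSVRecordReader(0);               // skip 0 header lines, comma-delimited by default
        rr.initialize(new FileSplit(new File("iris.csv")));     // hypothetical input file

        int labelIndex = 4;                                     // column holding the class label
        int numClasses = 3;
        int batchSize = 50;

        DataSetIterator iter = new RecordReaderDataSetIterator(rr, batchSize, labelIndex, numClasses);
        while (iter.hasNext()) {
            System.out.println(iter.next().getFeatures().shapeInfoToString());
        }
    }
}
```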
@ -95,7 +95,7 @@ Also note that DL4J does not only support training just `MultiLayerNetworks`, bu

As you train your model, you will want to test how well it performs. For that test, you will need a dedicated data set that will not be used for training but instead will only be used for evaluating your model. This data should have the same distribution as the real-world data you want to make predictions about with your model. The reason you can't simply use your training data for evaluation is that machine learning methods are prone to overfitting (getting good at making predictions about the training set, but not performing well on data the model has not seen).

The [Evaluation](/api/{{page.version}}/org/deeplearning4j/eval/Evaluation.html)
class is used for evaluation. Slightly different methods apply to evaluating a normal feed-forward network or a recurrent network. For more details on using it, take a look at the corresponding [examples](https://github.com/eclipse/deeplearning4j-examples).
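A hedged sketch of the usual evaluation loop, assuming a trained `MultiLayerNetwork` and a held-out test iterator are already available:

```java
import org.deeplearning4j.eval.Evaluation;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class EvaluationSketch {
    static void evaluate(MultiLayerNetwork model, DataSetIterator testIter, int numClasses) {
        Evaluation eval = new Evaluation(numClasses);
        while (testIter.hasNext()) {
            DataSet batch = testIter.next();
            INDArray predictions = model.output(batch.getFeatures(), false); // false = inference mode
            eval.eval(batch.getLabels(), predictions);
        }
        System.out.println(eval.stats()); // accuracy, precision, recall, F1 and the confusion matrix
    }
}
```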
## Troubleshooting a Neural Net Model

@ -260,7 +260,7 @@ For inference, avoid using minibatch size of 1, as throughput will suffer. Unles

For training, you should never use a minibatch size of 1, as overall performance and hardware utilization will be reduced and network convergence may also suffer. Start with a minibatch size of 32-128, if memory allows.

For serving predictions in multi-threaded applications (such as a web server), [ParallelInference](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-scaleout/deeplearning4j-scaleout-parallelwrapper/src/main/java/org/deeplearning4j/parallelism/ParallelInference.java) should be used.
## Step 6: Ensure you are not using a single MultiLayerNetwork/ComputationGraph for inference from multiple threads

@ -270,7 +270,7 @@ That said, most operations such as fit, output, etc use synchronized blocks. The

In summary, using a single network from multiple threads should be avoided, as it is not thread safe and can be a performance bottleneck.

For inference from multiple threads, you should use one model per thread (as this avoids locks) or, for serving predictions in multi-threaded applications (such as a web server), use [ParallelInference](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-scaleout/deeplearning4j-scaleout-parallelwrapper/src/main/java/org/deeplearning4j/parallelism/ParallelInference.java).
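A hedged sketch of the ParallelInference setup described above; the builder options shown (inference mode, batch limit, worker count) are the commonly used ones, but exact names may differ between releases.

```java
import org.deeplearning4j.nn.api.Model;
import org.deeplearning4j.parallelism.ParallelInference;
import org.deeplearning4j.parallelism.inference.InferenceMode;
import org.nd4j.linalg.api.ndarray.INDArray;

public class ServingSketch {
    // Build one ParallelInference at startup and share it across request threads
    static ParallelInference buildInference(Model model) {
        return new ParallelInference.Builder(model)
                .inferenceMode(InferenceMode.BATCHED)   // transparently batches concurrent requests
                .batchLimit(32)
                .workers(2)                             // number of internal model copies
                .build();
    }

    static INDArray predict(ParallelInference pi, INDArray features) {
        return pi.output(features);                     // safe to call from many threads
    }
}
```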
## Step 7: Check Data Types

@ -386,7 +386,7 @@ In summary, in ND4J we use OpenMP parallelism at the c++ level to increase operati

This also applies if the CPU resources are shared with other computationally demanding processes.

In either case, you may see better overall throughput by reducing the number of OpenMP threads via the OMP_NUM_THREADS environment variable - see [ND4JEnvironmentVars](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-common/src/main/java/org/nd4j/config/ND4JEnvironmentVars.java) for details.

One reason reducing OMP_NUM_THREADS can improve overall performance is reduced [cache thrashing](https://en.wikipedia.org/wiki/Thrashing_(computer_science)).
@ -71,7 +71,7 @@ to

**Sample pom.xml using Snapshots**

A sample pom.xml is provided here: [sample pom.xml using snapshots](https://gist.github.com/AlexDBlack/28b0c9a72bce562c8782be326a6e2aaa)
This has been taken from the DL4J standalone sample project and modified using steps 1 and 2 above. The original (using the last release) can be found [here](https://github.com/eclipse/deeplearning4j-examples/blob/master/standalone-sample-project/pom.xml)
## <a name="Limitations">Limitations</a>

@ -129,4 +129,4 @@ dependencies {

should work in theory, but it does not. This is due to [a bug in Gradle](https://github.com/gradle/gradle/issues/2882). Gradle with snapshots *and* Maven classifiers appears to be a problem.

Of note when using the nd4j-native backend on Gradle (and SBT - but not Maven), you need to add openblas as a dependency. We do this for you in the -platform pom. Reference the -platform pom [here](https://github.com/eclipse/deeplearning4j/blob/master/nd4j/nd4j-backends/nd4j-backend-impls/nd4j-native-platform/pom.xml#L19) to double check your dependencies. Note that these are version properties. See the ```<properties>``` section of the pom for current versions of the openblas and javacpp presets required to run nd4j-native.
@ -8,14 +8,14 @@ weight: 10

## Prerequisites

Before contributing, make sure you know the structure of all of the Eclipse Deeplearning4j libraries. As of early 2018, all libraries now live in the Deeplearning4j [monorepo](https://github.com/eclipse/deeplearning4j). These include:

- DeepLearning4J: Contains all of the code for learning neural networks, both on a single machine and distributed.
- ND4J: “N-Dimensional Arrays for Java”. ND4J is the mathematical backend upon which DL4J is built. All of DL4J’s neural networks are built using the operations (matrix multiplications, vector operations, etc) in ND4J. ND4J is how DL4J supports both CPU and GPU training of networks, without any changes to the networks themselves. Without ND4J, there would be no DL4J.
- DataVec: DataVec handles the data import and conversion side of the pipeline. If you want to import images, video, audio or simply CSV data into DL4J: you probably want to use DataVec to do this.
- Arbiter: Arbiter is a package for (amongst other things) hyperparameter optimization of neural networks. Hyperparameter optimization refers to the process of automating the selection of network hyperparameters (learning rate, number of layers, etc) in order to obtain good performance.

We also have an extensive examples repository at [dl4j-examples](https://github.com/eclipse/deeplearning4j-examples).
## Ways to contribute

@ -34,8 +34,8 @@ There are numerous ways to contribute to DeepLearning4J (and related projects),

There are a number of different ways to find things to work on. These include:

- Looking at the issue trackers:
  https://github.com/eclipse/deeplearning4j/issues
  https://github.com/eclipse/deeplearning4j-examples/issues
- Reviewing our Roadmap
- Talking to the developers on Gitter, especially our early adopters channel
- Reviewing recent papers and blog posts on training features, network architectures and applications
@ -18,31 +18,31 @@ Most of the examples make use of DataVec, a toolkit for preprocessing and clearn

This example takes the canonical Iris dataset of the flower species of the same name, whose relevant measurements are sepal length, sepal width, petal length and petal width. It builds a Spark RDD from the relatively small dataset and runs an analysis against it.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/datavec-examples/src/main/java/org/datavec/transform/analysis/IrisAnalysis.java)

### BasicDataVecExample.java

This example loads data into a Spark RDD. All DataVec transform operations use Spark RDDs. Here, we use DataVec to filter data, apply time transformations and remove columns.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/datavec-examples/src/main/java/org/datavec/transform/basic/BasicDataVecExample.java)

### PrintSchemasAtEachStep.java

This example shows the print schema tools that are useful for visualizing transforms and ensuring that the code for the transform is behaving as expected.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/datavec-examples/src/main/java/org/datavec/transform/debugging/PrintSchemasAtEachStep.java)

### JoinExample.java

You may need to join datasets before passing them to a neural network. You can do that in DataVec, and this example shows you how.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/datavec-examples/src/main/java/org/datavec/transform/join/JoinExample.java)

### LogDataExample.java

This is an example of parsing log data using DataVec. The obvious use cases are cybersecurity and customer relationship management.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/datavec-examples/src/main/java/org/datavec/transform/logdata/LogDataExample.java)

### MnistImagePipelineExample.java
@ -50,19 +50,19 @@ This example is from the video below, which demonstrates the ParentPathLabelGene

<iframe width="560" height="315" src="http://www.youtube.com/embed/GLC8CIoHDnI" frameborder="0" allowfullscreen></iframe>

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/dataExamples/MnistImagePipelineExample.java)

### PreprocessNormalizerExample.java

This example demonstrates preprocessing features available in DataVec.

[Show me the code](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/dataexamples/PreprocessNormalizerExample.java)

### CSVExampleEvaluationMetaData.java

Meta data tracking - i.e. seeing where the data for each example comes from - is useful when tracking down malformed data that causes errors and other issues. This example demonstrates the functionality in the RecordMetaData class.

[Show me the code](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/dataexamples/CSVExampleEvaluationMetaData.java)

---
@ -78,13 +78,13 @@ MNIST is the "Hello World" of deep learning. Simple, straightforward, and focuss

This is a Single Layer Perceptron for recognizing digits. Note that this pulls the images from a binary package containing the dataset, a rather special case for data ingestion.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/feedforward/mnist/MLPMnistSingleLayerExample.java)

### MLPMnistTwoLayerExample.java

A two-layer perceptron for MNIST, showing there is more than one useful network for a given dataset.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/feedforward/mnist/MLPMnistTwoLayerExample.java)

### Feedforward Examples

@ -92,7 +92,7 @@ Data flows through feed-forward neural networks in a single pass from input via

These networks can be used for a wide range of tasks depending on how they are configured. Along with image classification over MNIST data, this directory has examples demonstrating regression, classification, and anomaly detection.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/tree/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/feedforward)

### Convolutional Neural Networks
@ -102,7 +102,7 @@ Convolutional Neural Networks are mainly used for image recognition, although th

This example can be run using either LeNet or AlexNet.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/convolution/AnimalsClassification.java)

---

@ -115,7 +115,7 @@ load the model for later training or inference.

This demonstrates saving and loading a network built using the class ComputationGraph.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/modelsaving/SaveLoadComputationGraph.java)
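As a quick reference for what these save/load examples demonstrate, here is a hedged round-trip sketch using `ModelSerializer`; the file name is hypothetical.

```java
import java.io.File;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.util.ModelSerializer;

public class SaveLoadSketch {
    static void roundTrip(ComputationGraph net) throws Exception {
        File f = new File("my-model.zip");            // hypothetical location
        ModelSerializer.writeModel(net, f, true);     // true: also save the updater state for further training
        ComputationGraph restored = ModelSerializer.restoreComputationGraph(f);
        System.out.println(restored.summary());
    }
}
```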
### SaveLoadMultiLayerNetwork.java

@ -135,11 +135,11 @@ Do you need to add a Loss Function that is not available or prebuilt yet? Check

### CustomLossExample.java

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/lossfunctions/CustomLossExample.java)

### CustomLossL1L2.java

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/lossfunctions/CustomLossL1L2.java)

### Custom Layer

@ -147,7 +147,7 @@ Do you need to add a layer with features that aren't available in DeepLearning4J

### CustomLayerExample.java

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/customlayers/CustomLayerExample.java)

---
@ -159,25 +159,25 @@ Neural Networks for NLP? We have those, too.

Global Vectors for Word Representation are useful for detecting relationships between words.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/glove/GloVeExample.java)

### Paragraph Vectors

A vectorized representation of pieces of text such as sentences and documents. Described [here](https://cs.stanford.edu/~quocle/paragraph_vector.pdf)

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/paragraphvectors/ParagraphVectorsClassifierExample.java)

### Sequence Vectors

One way to represent sentences is as a sequence of words.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/sequencevectors/SequenceVectorsTextExample.java)

### Word2Vec

Described [here](https://deeplearning4j.org/word2vec.html)

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/word2vec/Word2VecRawTextExample.java)
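To give a flavour of what the Word2Vec example does, here is a hedged sketch of the usual builder setup; the corpus file name is hypothetical and the hyperparameters are illustrative defaults only.

```java
import org.deeplearning4j.models.word2vec.Word2Vec;
import org.deeplearning4j.text.sentenceiterator.BasicLineIterator;
import org.deeplearning4j.text.sentenceiterator.SentenceIterator;
import org.deeplearning4j.text.tokenization.tokenizerfactory.DefaultTokenizerFactory;
import org.deeplearning4j.text.tokenization.tokenizerfactory.TokenizerFactory;

public class Word2VecSketch {
    public static void main(String[] args) throws Exception {
        SentenceIterator iter = new BasicLineIterator("raw_sentences.txt"); // hypothetical corpus file
        TokenizerFactory tokenizer = new DefaultTokenizerFactory();

        Word2Vec vec = new Word2Vec.Builder()
                .minWordFrequency(5)
                .layerSize(100)        // dimensionality of the word vectors
                .windowSize(5)
                .iterate(iter)
                .tokenizerFactory(tokenizer)
                .build();

        vec.fit();                     // train the embeddings
        System.out.println(vec.wordsNearest("day", 10));
    }
}
```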
---

@ -185,7 +185,7 @@ Described [here](https://deeplearning4j.org/word2vec.html)

t-Distributed Stochastic Neighbor Embedding (t-SNE) is useful for data visualization. We include an example in the NLP section since word similarity visualization is a common use.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/nlp/tsne/TSNEStandardExample.java)

---
@ -199,37 +199,37 @@ The examples folder for Recurrent Neural Networks has the following:

An RNN learns a string of characters.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/basic/BasicRNNExample.java)

### GravesLSTMCharModellingExample.java

Takes the complete works of Shakespeare as a sequence of characters and trains a neural net to generate "Shakespeare" one character at a time.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/character/GravesLSTMCharModellingExample.java)

### SingleTimestepRegressionExample.java

Regression with an LSTM (Long Short Term Memory) Recurrent Neural Network.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/regression/SingleTimestepRegressionExample.java)

### AdditionRNN.java

This example trains a neural network to do addition.

[Show me the code](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/seq2seq/AdditionRNN.java)

### RegressionMathFunctions.java

This example trains a neural network to perform various math operations.

[Show me the code](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/feedforward/regression/RegressionMathFunctions.java)

### UCISequenceClassificationExample.java

A publicly available dataset of time series data of six classes: cyclic, up-trending, etc. An example of an RNN learning to classify the time series.

[Show me the code](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/seqclassification/UCISequenceClassificationExample.java)

### VideoClassificationExample.java

@ -237,13 +237,13 @@ How do autonomous vehicles distinguish between a pedestrian, a stop sign and a g

This example is similar, but simplified. It combines convolutional, max pooling, dense (feed forward) and recurrent (LSTM) layers to classify frames in a video.

[Show me the code](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/video/VideoClassificationExample.java)

### SentimentExampleIterator.java

This sentiment analysis example classifies sentiment as positive or negative using word vectors and a Recurrent Neural Network.

[Show me the code](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/recurrent/word2vecsentiment/Word2VecSentimentRNN.java)

---
@ -254,13 +254,13 @@ DeepLearning4j supports using a Spark Cluster for network training. Here are the

### MnistMLPExample.java

This is an example of a Multi-Layer Perceptron training on the Mnist data set of handwritten digits.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-spark-examples/dl4j-spark/src/main/java/org/deeplearning4j/mlp/MnistMLPExample.java)

### SparkLSTMCharacterExample.java

An LSTM recurrent network in Spark.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-spark-examples/dl4j-spark/src/main/java/org/deeplearning4j/rnn/SparkLSTMCharacterExample.java)

---

@ -274,7 +274,7 @@ The learning algorithms and loss functions are executed as ND4J operations.

This is a directory with examples for creating and manipulating NDArrays.

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/tree/master/nd4j-examples/src/main/java/org/nd4j/examples)

---

@ -282,4 +282,4 @@ This is a directory with examples for creating and manipulating NDArrays.

Deep learning algorithms have learned to play Space Invaders and Doom using reinforcement learning. DeepLearning4J/RL4J examples of Reinforcement Learning are available here:

[Show me the code](http://github.com/eclipse/deeplearning4j-examples/tree/master/rl4j-examples)
@ -109,7 +109,7 @@ This can be fixed by running:

1. Use the command line to enter the following:

```shell
$ git clone https://github.com/eclipse/deeplearning4j-examples.git
$ cd deeplearning4j-examples/
$ mvn clean install
```
@ -135,9 +135,9 @@ To run DL4J in your own projects, we highly recommend using Maven for Java users

- `nd4j-native-platform`, the CPU version of the ND4J library that powers DL4J
- `datavec-api` - DataVec is our library for vectorizing and loading data

Every Maven project has a POM file. Here is [how the POM file should appear](https://github.com/eclipse/deeplearning4j-examples/blob/master/pom.xml) when you run your examples.

Within IntelliJ, you will need to choose the first Deeplearning4j example you're going to run. We suggest `MLPClassifierLinear`, as you will almost immediately see the network classify two groups of data in our UI. The file on [Github can be found here](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/feedforward/classification/MLPClassifierLinear.java).

To run the example, right click on it and select the green button in the drop-down menu. You will see, in IntelliJ's bottom window, a series of scores. The rightmost number is the error score for the network's classifications. If your network is learning, then that number will decrease over time with each batch it processes. At the end, this window will tell you how accurate your neural-network model has become:
@ -225,11 +225,11 @@ Deeplearning4j is a framework that lets you pick and choose with everything avai

If you'd like to deploy models to production, you might like our [model import from Keras](./keras-import-get-started).

Deeplearning4j has several submodules. These range from a visualization UI to distributed training on Spark. For an overview of these modules, please look at the [**Deeplearning4j examples on Github**](https://github.com/eclipse/deeplearning4j-examples).

To get started with a simple desktop app, you need two things: an [nd4j backend](http://nd4j.org/backend.html) and `deeplearning4j-core`. For more code, see the [simpler examples submodule](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/pom.xml#L64).

If you want a flexible deep-learning API, there are two ways to go: you can use ND4J standalone (see our [nd4j examples](https://github.com/eclipse/deeplearning4j-examples/tree/master/nd4j-examples)) or the [computation graph API](http://deeplearning4j.org/compgraph).

If you want distributed training on Spark, you can see our [Spark page](http://deeplearning4j.org/spark).
Keep in mind that we cannot set up Spark for you. If you want to set up distributed Spark and GPUs, that is largely up to you. Deeplearning4j simply deploys as a JAR file on an existing Spark cluster.

@ -239,13 +239,13 @@ If you want Spark with GPUs, we recommend [Spark with Mesos](https://spark.apach

If you want to deploy on mobile, you can see our [Android page](./deeplearning4j-android).

We deploy optimized code for various hardware architectures natively. We use C++ based for loops just like everybody else.
For that, please see our [C++ framework libnd4j](https://github.com/eclipse/deeplearning4j/tree/master/libnd4j).

Deeplearning4j has two other notable components:

* [Arbiter: hyperparameter optimization and model evaluation](./arbiter-overview)
* [DataVec: built-in ETL for machine-learning data pipelines](./datavec-overview)

Deeplearning4j is meant to be an end-to-end platform for building real applications, not just a tensor library with automatic differentiation. If you want a tensor library with autodiff, please see ND4J and [samediff](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/autodiff). Samediff is still in alpha, but if you want to contribute, please join our [live chat on Gitter](https://gitter.im/deeplearning4j/deeplearning4j).

Lastly, if you are benchmarking Deeplearning4j, please consider coming into our live chat and asking for tips. Deeplearning4j has [all the knobs](./deeplearning4j-config-gpu-cpu), but some may not work exactly the way the Python frameworks do. You have to build Deeplearning4j from source for some applications.
@ -61,7 +61,7 @@ For training neural networks in a distributed manner, you may need a different (

### Policies and Scheduling

You can optionally define a learning rate policy for your neural network. A policy will change the learning rate over time, which can achieve better results, since the learning rate can "slow down" to find closer local minima for convergence. A common policy is scheduling. See the [LeNet example](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/convolution/LenetMnistExample.java) for a learning rate schedule used in practice.

Note that if you're using multiple GPUs, this will affect your scheduling. For example, if you have 2x GPUs, then you will need to divide the iterations in your schedule by 2, since the throughput of your training process will be double, and the learning rate schedule is only applicable to the local GPU.
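A hedged sketch of the scheduling approach used in that example: the learning rate is dropped at fixed iteration counts via a `MapSchedule` passed to the updater. The breakpoints and rates below are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.nd4j.linalg.learning.config.Nesterovs;
import org.nd4j.linalg.schedule.MapSchedule;
import org.nd4j.linalg.schedule.ScheduleType;

public class ScheduleSketch {
    static NeuralNetConfiguration.Builder baseConfig() {
        Map<Integer, Double> lrSchedule = new HashMap<>();
        lrSchedule.put(0, 0.01);       // from iteration 0
        lrSchedule.put(1000, 0.005);   // after 1000 iterations
        lrSchedule.put(3000, 0.001);   // after 3000 iterations

        // The schedule replaces a fixed learning rate on the updater
        return new NeuralNetConfiguration.Builder()
                .updater(new Nesterovs(new MapSchedule(ScheduleType.ITERATION, lrSchedule)));
    }
}
```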
@ -109,7 +109,7 @@ A good default choice in most cases is to use the stochastic gradient descent op

## <a name="normalization">Gradient Normalization</a>

When training a neural network, it can sometimes be helpful to apply gradient normalization, to avoid the gradients being too large (the so-called exploding gradient problem, common in recurrent neural networks) or too small. This can be applied using the .gradientNormalization(GradientNormalization) and .gradientNormalizationThreshold(double) methods. For an example of gradient normalization, see [GradientNormalization.java](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/GradientNormalization.java). The test code for that example is [here](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-core/src/test/java/org/deeplearning4j/nn/updater/TestGradientNormalization.java).
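A hedged sketch of applying those two options to a single recurrent layer; the clipping threshold of 1.0 is purely illustrative.

```java
import org.deeplearning4j.nn.conf.GradientNormalization;
import org.deeplearning4j.nn.conf.layers.LSTM;

public class GradientClippingSketch {
    static LSTM clippedLstm(int nIn, int nOut) {
        // Clip the per-layer gradient L2 norm to guard against exploding gradients in RNNs
        return new LSTM.Builder()
                .nIn(nIn).nOut(nOut)
                .gradientNormalization(GradientNormalization.ClipL2PerLayer)
                .gradientNormalizationThreshold(1.0)
                .build();
    }
}
```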
## <a name="rnn">Recurrent Neural Networks: Truncated Backpropagation through Time</a>

@ -21,4 +21,4 @@ We support all [Keras activation functions](https://keras.io/activations), namel

* <i class="fa fa-check-square-o"></i> hard_sigmoid
* <i class="fa fa-check-square-o"></i> linear

The mapping of Keras to DL4J activation functions is defined in [KerasActivationUtils](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasActivationUtils.java)
@ -15,4 +15,4 @@ All [Keras constraints](https://keras.io/constraints) are supported:

* <i class="fa fa-check-square-o"></i> unit_norm
* <i class="fa fa-check-square-o"></i> min_max_norm

Mapping Keras to DL4J constraints happens in [KerasConstraintUtils](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasConstraintUtils.java).
@ -26,4 +26,4 @@ DL4J supports all available [Keras initializers](https://keras.io/initializers),

* <i class="fa fa-check-square-o"></i> he_normal
* <i class="fa fa-check-square-o"></i> he_uniform

The mapping of Keras to DL4J initializers can be found in [KerasInitilizationUtils](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasInitilizationUtils.java).
@ -25,4 +25,4 @@ DL4J supports all available [Keras losses](https://keras.io/losses) (except for

* <i class="fa fa-check-square-o"></i> poisson
* <i class="fa fa-check-square-o"></i> cosine_proximity

The mapping of Keras loss functions can be found in [KerasLossUtils](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasLossUtils.java).
@ -8,7 +8,7 @@ weight: 0

## Deeplearning4j: Keras model import

[Keras model import](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras)
provides routines for importing neural network models originally configured and trained
using [Keras](https://keras.io/), a popular Python deep learning library.
@ -61,7 +61,7 @@ Here's how you do training in DL4J for your imported model:

```java
model.fit(input, output);
```

The full example just shown can be found in our [DL4J examples](https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/modelimport/keras/basic/SimpleSequentialMlpImport.java).
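For completeness, here is a hedged sketch of the import step that precedes the `fit` call above; the HDF5 file name is hypothetical, and the method shown targets Keras Sequential models (functional models are imported with `importKerasModelAndWeights` into a ComputationGraph instead).

```java
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class KerasImportSketch {
    public static void main(String[] args) throws Exception {
        // Path to a model saved on the Keras side, e.g. with model.save("simple_mlp.h5")
        MultiLayerNetwork model = KerasModelImport.importKerasSequentialModelAndWeights("simple_mlp.h5");
        System.out.println(model.summary());
    }
}
```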
## Project setup

@ -78,12 +78,12 @@ dependency to your pom.xml.

If you need a project to get started in the first place, consider cloning
[DL4J examples](https://github.com/eclipse/deeplearning4j-examples) and following
the instructions in the repository to build the project.

## Popular models and applications

We support import for a growing number of applications; check [here](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/test/java/org/deeplearning4j/nn/modelimport/keras/e2e/KerasModelEndToEndTest.java)
for a full list of currently covered models. These applications include

- Deep convolutional and Wasserstein GANs
@ -105,7 +105,7 @@ Once you have imported your model, we recommend our own `ModelSerializer` class
saving and reloading of your model.

You can inquire further by visiting the [DL4J gitter channel](https://gitter.im/deeplearning4j/deeplearning4j). You might consider filing
a [feature request via Github](https://github.com/eclipse/deeplearning4j/issues)
so that this missing functionality can be placed on the DL4J development roadmap or even
sending us a pull request with the necessary changes!
@ -14,4 +14,4 @@ All [Keras regularizers] are supported by DL4J model import:

* <i class="fa fa-check-square-o"></i> l2
* <i class="fa fa-check-square-o"></i> l1_l2

Mapping of regularizers can be found in [KerasRegularizerUtils](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasRegularizerUtils.java).
@ -15,7 +15,7 @@ Chollet, who's at Google.

While not every concept in DL4J has an equivalent in Keras and vice versa, many of the
key concepts can be matched. Importing Keras models into DL4J is done in
our [deeplearning4j-modelimport](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras)
module. Below is a comprehensive list of currently supported features.

* [Layers](#layers)

@ -29,73 +29,73 @@ module. Below is a comprehensive list of currently supported features.

## <a name="layers">Layers</a>

Mapping Keras to DL4J layers is done in the [layers](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers) sub-module of model import. The structure of this project loosely reflects the structure of Keras.

### [Core Layers](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core)
* <i class="far fa-check-square" style="color:#008000"></i> [Dense](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasDense.java)
* <i class="far fa-check-square" style="color:#008000"></i> [Activation](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasActivation.java)
* <i class="far fa-check-square" style="color:#008000"></i> [Dropout](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasDropout.java)
* <i class="far fa-check-square" style="color:#008000"></i> [Flatten](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasFlatten.java)
* <i class="far fa-check-square" style="color:#008000"></i> [Reshape](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasReshape.java)
* <i class="far fa-check-square" style="color:#008000"></i> [Merge](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasMerge.java)
* <i class="far fa-check-square" style="color:#008000"></i> [Permute](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasPermute.java)
* <i class="far fa-check-square" style="color:#008000"></i> [RepeatVector](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasRepeatVector.java)
* <i class="far fa-check-square" style="color:#008000"></i> [Lambda](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasLambda.java)
* <i class="fas fa-times" style="color:#FF0000"></i> ActivityRegularization
* <i class="far fa-check-square" style="color:#008000"></i> [Masking](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasMasking.java)
* <i class="far fa-check-square" style="color:#008000"></i> [SpatialDropout1D](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasSpatialDropout.java)
* <i class="far fa-check-square" style="color:#008000"></i> [SpatialDropout2D](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasSpatialDropout.java)
* <i class="far fa-check-square" style="color:#008000"></i> [SpatialDropout3D](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasSpatialDropout.java)

### [Convolutional Layers](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Conv1D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasConvolution1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Conv2D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasConvolution2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Conv3D](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasConvolution3D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [AtrousConvolution1D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasAtrousConvolution1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [AtrousConvolution2D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasAtrousConvolution1D.java)
|
||||
### [Convolutional Layers](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Conv1D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasConvolution1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Conv2D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasConvolution2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Conv3D](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasConvolution3D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [AtrousConvolution1D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasAtrousConvolution1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [AtrousConvolution2D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasAtrousConvolution1D.java)
|
||||
* <i class="fas fa-times" style="color:#FF0000"></i> SeparableConv1D
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [SeparableConv2D](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasSeparableConvolution2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Conv2DTranspose](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasDeconvolution2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [SeparableConv2D](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasSeparableConvolution2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Conv2DTranspose](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasDeconvolution2D.java)
|
||||
* <i class="fas fa-times" style="color:#FF0000"></i> Conv3DTranspose
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Cropping1D](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasCropping1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Cropping2D](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasCropping2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Cropping3D](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasCropping3D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [UpSampling1D](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasUpsampling1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [UpSampling2D](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasUpsampling2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [UpSampling3D](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasUpsampling2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [ZeroPadding1D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasZeroPadding1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [ZeroPadding2D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasZeroPadding2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [ZeroPadding3D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasZeroPadding3D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Cropping1D](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasCropping1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Cropping2D](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasCropping2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Cropping3D](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasCropping3D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [UpSampling1D](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasUpsampling1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [UpSampling2D](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasUpsampling2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [UpSampling3D](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasUpsampling2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [ZeroPadding1D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasZeroPadding1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [ZeroPadding2D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasZeroPadding2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [ZeroPadding3D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/convolutional/KerasZeroPadding3D.java)
|
||||
|
||||
### [Pooling Layers](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [MaxPooling1D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasPooling1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [MaxPooling2D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasPooling2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [MaxPooling3D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasPooling3D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [AveragePooling1D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasPooling1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [AveragePooling2D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasPooling2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [AveragePooling3D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasPooling3D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GlobalMaxPooling1D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasGlobalPooling.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GlobalMaxPooling2D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasGlobalPooling.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GlobalMaxPooling3D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasGlobalPooling.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GlobalAveragePooling1D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasGlobalPooling.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GlobalAveragePooling2D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasGlobalPooling.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GlobalAveragePooling3D](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasGlobalPooling.java)
|
||||
### [Pooling Layers](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [MaxPooling1D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasPooling1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [MaxPooling2D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasPooling2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [MaxPooling3D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasPooling3D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [AveragePooling1D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasPooling1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [AveragePooling2D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasPooling2D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [AveragePooling3D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasPooling3D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GlobalMaxPooling1D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasGlobalPooling.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GlobalMaxPooling2D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasGlobalPooling.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GlobalMaxPooling3D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasGlobalPooling.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GlobalAveragePooling1D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasGlobalPooling.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GlobalAveragePooling2D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasGlobalPooling.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GlobalAveragePooling3D](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/pooling/KerasGlobalPooling.java)
|
||||
|
||||
### [Locally-connected Layers](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/local)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [LocallyConnected1D](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/local/KerasLocallyConnected1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [LocallyConnected2D](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/local/KerasLocallyConnected2D.java)
|
||||
### [Locally-connected Layers](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/local)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [LocallyConnected1D](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/local/KerasLocallyConnected1D.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [LocallyConnected2D](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/local/KerasLocallyConnected2D.java)
|
||||
|
||||
### [Recurrent Layers](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/recurrent)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [SimpleRNN](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/recurrent/KerasSimpleRnn.java)
|
||||
### [Recurrent Layers](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/recurrent)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [SimpleRNN](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/recurrent/KerasSimpleRnn.java)
|
||||
* <i class="fas fa-times" style="color:#FF0000"></i> GRU
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [LSTM](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/recurrent/KerasLstm.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [LSTM](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/recurrent/KerasLstm.java)
|
||||
* <i class="fas fa-times" style="color:#FF0000"></i> ConvLSTM2D
|
||||
|
||||
|
||||
### [Embedding Layers](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/embeddings)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Embedding](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/embeddings/KerasEmbedding.java)
|
||||
### [Embedding Layers](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/embeddings)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Embedding](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/embeddings/KerasEmbedding.java)
|
||||
|
||||
### [Merge Layers](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasMerge.java)
|
||||
### [Merge Layers](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/core/KerasMerge.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> Add / add
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> Multiply / multiply
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> Subtract / subtract
|
||||
|
@ -105,25 +105,25 @@ Mapping keras to DL4J layers is done in the [layers](https://github.com/deeplear
|
|||
* <i class="fas fa-times" style="color:#FF0000"></i> Dot / dot
|
||||
|
||||
|
||||
### [Advanced Activation Layers](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/advanced/activations)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [LeakyReLU](https://github.com/deeplearning4j/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/advanced/activations/KerasLeakyReLU.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [PReLU](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/advanced/activations/KerasPReLU.java)
|
||||
### [Advanced Activation Layers](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/advanced/activations)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [LeakyReLU](https://github.com/eclipse/deeplearning4j/tree/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/advanced/activations/KerasLeakyReLU.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [PReLU](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/advanced/activations/KerasPReLU.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> ELU
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [ThresholdedReLU](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/advanced/activations/KerasThresholdedReLU.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [ThresholdedReLU](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/advanced/activations/KerasThresholdedReLU.java)
|
||||
|
||||
### [Normalization Layers](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/normalization)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [BatchNormalization](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/normalization/KerasBatchNormalization.java)
|
||||
### [Normalization Layers](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/normalization)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [BatchNormalization](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/normalization/KerasBatchNormalization.java)
|
||||
|
||||
### Noise Layers
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GaussianNoise](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/noise/KerasGaussianNoise.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GaussianDropout](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/noise/KerasGaussianDropout.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [AlphaDropout](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/noise/KerasAlphaDropout.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GaussianNoise](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/noise/KerasGaussianNoise.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [GaussianDropout](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/noise/KerasGaussianDropout.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [AlphaDropout](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/noise/KerasAlphaDropout.java)
|
||||
|
||||
### Layer Wrappers
|
||||
* <i class="fas fa-times" style="color:#FF0000"></i> TimeDistributed
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Bidirectional](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/wrappers/KerasBidirectional.java)
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> [Bidirectional](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/layers/wrappers/KerasBidirectional.java)
|
||||
|
||||
## <a name="losses">[Losses](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasLossUtils.java)</a>
|
||||
## <a name="losses">[Losses](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasLossUtils.java)</a>
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> mean_squared_error
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> mean_absolute_error
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> mean_absolute_percentage_error
|
||||
|
@ -139,7 +139,7 @@ Mapping keras to DL4J layers is done in the [layers](https://github.com/deeplear
|
|||
* <i class="far fa-check-square" style="color:#008000"></i> poisson
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> cosine_proximity
|
||||
|
||||
## <a name="activations">[Activations](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasActivationUtils.java)</a>
|
||||
## <a name="activations">[Activations](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasActivationUtils.java)</a>
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> softmax
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> elu
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> selu
|
||||
|
@ -151,7 +151,7 @@ Mapping keras to DL4J layers is done in the [layers](https://github.com/deeplear
|
|||
* <i class="far fa-check-square" style="color:#008000"></i> hard_sigmoid
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> linear
|
||||
|
||||
## <a name="initializers">[Initializers](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasInitilizationUtils.java)</a>
|
||||
## <a name="initializers">[Initializers](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasInitilizationUtils.java)</a>
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> Zeros
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> Ones
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> Constant
|
||||
|
@ -168,18 +168,18 @@ Mapping keras to DL4J layers is done in the [layers](https://github.com/deeplear
|
|||
* <i class="far fa-check-square" style="color:#008000"></i> he_normal
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> he_uniform
|
||||
|
||||
## <a name="regularizers">[Regularizers](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasRegularizerUtils.java)</a>
|
||||
## <a name="regularizers">[Regularizers](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasRegularizerUtils.java)</a>
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> l1
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> l2
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> l1_l2
|
||||
|
||||
## <a name="constraints">[Constraints](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasConstraintUtils.java)</a>
|
||||
## <a name="constraints">[Constraints](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasConstraintUtils.java)</a>
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> max_norm
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> non_neg
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> unit_norm
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> min_max_norm
|
||||
|
||||
## <a name="optimizers">[Optimizers](https://github.com/deeplearning4j/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasOptimizerUtils.java)</a>
|
||||
## <a name="optimizers">[Optimizers](https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-modelimport/src/main/java/org/deeplearning4j/nn/modelimport/keras/utils/KerasOptimizerUtils.java)</a>
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> SGD
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> RMSprop
|
||||
* <i class="far fa-check-square" style="color:#008000"></i> Adagrad
|
||||
|
|
|
@ -58,7 +58,7 @@ Some concepts you should be familiar with:
|
|||
|
||||
In terms of indexing there are a few things to know. First, rows are dimension 0, and columns are dimension 1: thus `INDArray.size(0)` is the number of rows, and `INDArray.size(1)` is the number of columns. Like normal arrays in most programming languages, indexing is zero-based: thus rows have indexes `0` to `INDArray.size(0)-1`, and so on for the other dimensions.
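For example, a minimal sketch of size queries and zero-based element access (illustrative only, not part of the original guide; it assumes an ND4J backend such as nd4j-native is on the classpath):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

INDArray arr = Nd4j.create(3, 4);            // 3 rows (dimension 0), 4 columns (dimension 1)
System.out.println(arr.size(0));             // 3 - number of rows
System.out.println(arr.size(1));             // 4 - number of columns
arr.putScalar(new int[]{0, 1}, 5.0);         // set the element at row 0, column 1 (zero-based)
System.out.println(arr.getDouble(2, 3));     // read the last element: row index 2, column index 3
```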
|
||||
|
||||
Throughout this document, we'll use the term `NDArray` to refer to the general concept of an n-dimensional array; the term `INDArray` refers specifically to the [Java interface](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ndarray/INDArray.java) that ND4J defines. In practice, these two terms can be used interchangeably.
|
||||
Throughout this document, we'll use the term `NDArray` to refer to the general concept of an n-dimensional array; the term `INDArray` refers specifically to the [Java interface](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ndarray/INDArray.java) that ND4J defines. In practice, these two terms can be used interchangeably.
|
||||
|
||||
### <a name="inmemory">NDArrays: How Are They Stored in Memory?</a>
|
||||
|
||||
|
@ -470,7 +470,7 @@ Note that with the `x.add(y)` operation, the original array `x` is not modified.
|
|||
|
||||
### <a name="opsscalar">Scalar Ops</a>
|
||||
|
||||
[Scalar ops](https://github.com/deeplearning4j/nd4j/tree/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/scalar) are element-wise operations that also take a scalar (i.e., a number). Examples of scalar ops are add, max, multiply, set and divide operations (see the previous link for a full list).
|
||||
[Scalar ops](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/scalar) are element-wise operations that also take a scalar (i.e., a number). Examples of scalar ops are add, max, multiply, set and divide operations (see the previous link for a full list).
|
||||
|
||||
A number of the methods such as `INDArray.addi(Number)` and `INDArray.divi(Number)` actually execute scalar ops behind the scenes, so when available, it is more convenient to use these methods.
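As a hedged illustration (a small sketch, not taken from the guide itself), the copy and in-place variants might be used like this:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

INDArray x = Nd4j.ones(2, 2);   // [[1,1],[1,1]]
INDArray y = x.add(3.0);        // copy scalar op: y is a new array, x is unchanged
x.addi(3.0);                    // in-place scalar op: x itself becomes [[4,4],[4,4]]
x.divi(2.0);                    // in-place scalar divide: x becomes [[2,2],[2,2]]
```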
|
||||
|
||||
|
@ -491,7 +491,7 @@ To execute an element-wise tanh operation directly (on the full NDArray) you can
|
|||
`INDArray tanh = Nd4j.getExecutioner().execAndReturn(new Tanh(myArr))`
|
||||
As with scalar ops mentioned above, transform operations using the above method are *in-place* operations: that is, the NDArray myArr is modified, and the returned array `tanh` is actually the same object as the input `myArr`. Again, you can use `myArr.dup()` if you want a copy.
|
||||
|
||||
The [Transforms class](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/ops/transforms/Transforms.java) also defines some convenience methods, such as: `INDArray tanh = Transforms.tanh(INDArray in,boolean copy);` This is equivalent to the method using `Nd4j.getExecutioner()` above.
|
||||
The [Transforms class](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/ops/transforms/Transforms.java) also defines some convenience methods, such as: `INDArray tanh = Transforms.tanh(INDArray in,boolean copy);` This is equivalent to the method using `Nd4j.getExecutioner()` above.
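For instance, a small sketch contrasting the copy and in-place forms of `Transforms.tanh` (illustrative only):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.ops.transforms.Transforms;

INDArray in = Nd4j.rand(2, 3);
INDArray tanhCopy = Transforms.tanh(in, true);      // copy = true: 'in' is left unmodified
INDArray tanhInPlace = Transforms.tanh(in, false);  // copy = false: 'in' itself is overwritten
```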
|
||||
|
||||
### <a name="opsaccum">Accumulation (Reduction) Ops</a>
|
||||
|
||||
|
@ -524,7 +524,7 @@ Accumulations along dimensions also generalize to NDArrays with 3 or more dimens
|
|||
|
||||
### <a name="opsindexaccum">Index Accumulation Ops</a>
|
||||
|
||||
[Index accumulation ops](https://github.com/deeplearning4j/nd4j/tree/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/indexaccum) are very similar to accumulation ops. The difference is that they return an integer index, instead of a double values.
|
||||
[Index accumulation ops](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/indexaccum) are very similar to accumulation ops. The difference is that they return an integer index, instead of a double value.
|
||||
|
||||
Examples of index accumulation ops are IMax (argmax), IMin (argmin) and IAMax (argmax of absolute values).
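As a rough sketch (it assumes the `Nd4j.argMax` convenience method, which wraps the IMax op; the indices come back as an `INDArray`):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

INDArray arr = Nd4j.create(new double[][]{{1.0, 7.0, 3.0}, {9.0, 2.0, 4.0}});
INDArray rowArgMax = Nd4j.argMax(arr, 1);   // index accumulation along dimension 1 (per row)
System.out.println(rowArgMax);              // indices of the per-row maxima: [1, 0]
```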
|
||||
|
||||
|
@ -561,7 +561,7 @@ As with other ops, there are inplace and copy versions. There are also column co
|
|||
|
||||
[This section: Forthcoming.]
|
||||
|
||||
[Link: Boolean Indexing Unit Tests](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/linalg/indexing/BooleanIndexingTest.java)
|
||||
[Link: Boolean Indexing Unit Tests](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/linalg/indexing/BooleanIndexingTest.java)
|
||||
|
||||
|
||||
## <a name="workspaces">Workspaces</a>
|
||||
|
@ -571,7 +571,7 @@ Workspaces are a feature of ND4J used to improve performance, by means of more e
|
|||
For more details on workspaces, see the following links:
|
||||
|
||||
* <a href="https://deeplearning4j.org/workspaces">Deeplearning4j Guide to Workspaces</a>
|
||||
* <a href="https://github.com/deeplearning4j/dl4j-examples/blob/master/nd4j-examples/src/main/java/org/nd4j/examples/Nd4jEx15_Workspaces.java">Workspaces Examples</a>
|
||||
* <a href="https://github.com/eclipse/deeplearning4j-examples/blob/master/nd4j-examples/src/main/java/org/nd4j/examples/Nd4jEx15_Workspaces.java">Workspaces Examples</a>
|
||||
|
||||
### <a name="workspaces-panic">Workspaces: Scope Panic</a>
|
||||
|
||||
|
@ -722,7 +722,7 @@ arrRead = Nd4j.readTxt("tmp.txt");
|
|||
arrRead = Nd4j.readNumpy("tmp.csv", ", ");
|
||||
```
|
||||
|
||||
The [nd4j-serde](https://github.com/deeplearning4j/nd4j/tree/master/nd4j-serde) directory provides packages for Aeron, base64, camel-routes, gsom, jackson and kryo.
|
||||
The [nd4j-serde](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-serde) directory provides packages for Aeron, base64, camel-routes, gson, jackson and kryo.
|
||||
|
||||
|
||||
## <a name="quickref">Quick Reference: A Summary Overview of ND4J Methods</a>
|
||||
|
@ -804,7 +804,7 @@ Note: all of these methods return
|
|||
|
||||
**Element-Wise Transforms (Tanh, Sigmoid, Sin, Log etc)**:
|
||||
|
||||
* Using [Transforms](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/ops/transforms/Transforms.java): `Transforms.sin(INDArray)`, `Transforms.log(INDArray)`, `Transforms.sigmoid(INDArray)` etc
|
||||
* Using [Transforms](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/ops/transforms/Transforms.java): `Transforms.sin(INDArray)`, `Transforms.log(INDArray)`, `Transforms.sigmoid(INDArray)` etc
|
||||
* Directly (method 1): `Nd4j.getExecutioner().execAndReturn(new Tanh(INDArray))`
|
||||
* Directly (method 2): `Nd4j.getExecutioner().execAndReturn(Nd4j.getOpFactory().createTransform("tanh",INDArray))`
|
||||
|
||||
|
|
|
@ -9,7 +9,7 @@ weight: 10
|
|||
|
||||
For the complete nd4j-api index, please consult the [Javadoc](../doc).
|
||||
|
||||
There are three types of operations used in ND4J: scalars, transforms and accumulations. We’ll use the word op synonymously with operation. You can see the lists of those three kinds of [ND4J ops under the directories here]( https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl
|
||||
There are three types of operations used in ND4J: scalars, transforms and accumulations. We’ll use the word op synonymously with operation. You can see the lists of those three kinds of [ND4J ops under the directories here]( https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl
|
||||
). Each Java file in each list is an op.
|
||||
|
||||
Most of the ops just take [enums](https://docs.oracle.com/javase/tutorial/java/javaOO/enum.html), or a list of discrete values that you can autocomplete. Activation functions are the exception, because they take strings such as `"relu"` or `"tanh"`.
|
||||
|
@ -49,7 +49,7 @@ As you can see, there are three possible argument types with ND4J ops: inputs, o
|
|||
|Transforms.tanh(myArray)|Hyperbolic tangent: a sigmoidal function. This applies elementwise tanh inplace.|
|
||||
|Nd4j.getExecutioner().exec(Nd4j.getOpFactory().createTransform("tanh", myArray))|equivalent to the above|
|
||||
|
||||
For other transforms, [please see this page](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/ops/transforms/Transforms.java).
|
||||
For other transforms, [please see this page](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/ops/transforms/Transforms.java).
|
||||
|
||||
Here are two examples of performing `z = tanh(x)`, in which the original array `x` is unmodified.
|
||||
```java
|
||||
|
|
|
@ -10,7 +10,7 @@ weight: 2
|
|||
|
||||
### A quick SameDiff overview
|
||||
|
||||
To get started with SameDiff, familiarize yourself with the `autodiff` module of the ND4J API located [here on GitHub.](https://github.com/deeplearning4j/nd4j/tree/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/autodiff)
|
||||
To get started with SameDiff, familiarize yourself with the `autodiff` module of the ND4J API located [here on GitHub.](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/autodiff)
|
||||
|
||||
For better or worse, SameDiff code is organized in just a few key places. For basic usage and testing of SameDiff, the following modules matter most. We'll discuss some of them in more detail in just a bit.
|
||||
|
||||
|
@ -22,7 +22,7 @@ For better or worse, SameDiff code is organized in just a few key places. For ba
|
|||
|
||||
### Differential functions in the `functions` module
|
||||
|
||||
See the `functions` module on [GitHub.](https://github.com/deeplearning4j/nd4j/tree/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/autodiff/functions)
|
||||
See the `functions` module on [GitHub.](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/autodiff/functions)
|
||||
|
||||
The central abstraction of the `functions` module is `DifferentialFunction`, which underlies pretty much everything in SameDiff. Mathematically, what we're doing in SameDiff is building a directed acyclic graph whose nodes are differential functions, for which we can compute gradients. In that regard, `DifferentialFunction` makes up a SameDiff graph on a fundamental level.
|
||||
|
||||
|
@ -82,7 +82,7 @@ public List<SDVariable> doDiff(List<SDVariable> grad) {
|
|||
|
||||
### Building and executing graphs in `samediff`
|
||||
|
||||
See the `samediff` module on [GitHub.](https://github.com/deeplearning4j/nd4j/tree/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/autodiff/samediff)
|
||||
See the `samediff` module on [GitHub.](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/autodiff/samediff)
|
||||
|
||||
Not surprisingly, this is where the magic happens. This module has the core structures that SameDiff operates with. First, let's have a look at the variables that make up SameDiff operations.
|
||||
|
||||
|
@ -107,7 +107,7 @@ allows you to call `c = a.add(b)` on two SameDiff variables, the result of which
|
|||
|
||||
The `SameDiff` class is the main workhorse of the module and brings together most of the concepts discussed so far. Somewhat unfortunately, the inverse is also true: `SameDiff` instances are part of all other SameDiff module abstractions in some way or another (which is why you've seen it many times already). Generally speaking, `SameDiff` is the main entry point for automatic differentiation and you use it to define a symbolic graph that carries operations on `SDVariable`s. Once built, a SameDiff graph can be run in a few ways, for instance `exec()` and `execAndEndResult()`.
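As a minimal sketch of that workflow (not from the original text; it mirrors the style of the test snippet further below and assumes the beta-era `exec()` / `getArr()` API):

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

SameDiff sd = SameDiff.create();
SDVariable in = sd.var("in", Nd4j.ones(2, 2));  // variable backed by an ND4J array
SDVariable out = in.mul(2.0).add(1.0);          // symbolic ops on SDVariables build up the graph
sd.exec();                                      // run the graph forward
INDArray result = out.getArr();                 // fetch the resulting INDArray
```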
|
||||
|
||||
Convince yourself that invoking `SameDiff()` sets up a [million things!]( https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/autodiff/samediff/SameDiff.java#L817-L846) Essentially, `SameDiff` will collect and give you access (in terms of both getters and setters) to
|
||||
Convince yourself that invoking `SameDiff()` sets up a [million things!]( https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/autodiff/samediff/SameDiff.java#L817-L846) Essentially, `SameDiff` will collect and give you access (in terms of both getters and setters) to
|
||||
|
||||
- All differential functions for the graph, with all their properties, which can be accessed in various ways (e.g. name or id).
|
||||
- All input and output information for said functions.
|
||||
|
@ -142,54 +142,54 @@ SDVariable inMul2 = in.mul(2.0);
|
|||
sd.exec();
|
||||
```
|
||||
|
||||
This example is taken from [SameDiffTests](https://github.com/deeplearning4j/nd4j/blob/4c00b19ad4972399264233b6f0b0f5a22493235b/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/autodiff/samediff/SameDiffTests.java), one of the main test sources, in which you also find a few complete end-to-end examples.
|
||||
This example is taken from [SameDiffTests](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/autodiff/samediff/SameDiffTests.java), one of the main test sources, in which you also find a few complete end-to-end examples.
|
||||
|
||||
The second place you find tests is in [gradcheck](https://github.com/deeplearning4j/nd4j/tree/4c00b19ad4972399264233b6f0b0f5a22493235b/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/autodiff/gradcheck). Whenever you add a new operation to SameDiff, add tests for the forward pass and gradient checks as well.
|
||||
The second place you find tests is in [gradcheck](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/autodiff/gradcheck). Whenever you add a new operation to SameDiff, add tests for the forward pass and gradient checks as well.
|
||||
|
||||
The third set of relevant tests is stored in [imports](https://github.com/deeplearning4j/nd4j/tree/20e3d53dbcd56a14dd1b7572dd52d5e200e9a4ba/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/imports) and contains test for importing TensorFlow and ONNX graphs. On a side note, the resources for these import tests are generated in our [TFOpsTests](https://github.com/deeplearning4j/TFOpTests) project.
|
||||
The third set of relevant tests is stored in [imports](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/imports) and contains tests for importing TensorFlow and ONNX graphs. On a side note, the resources for these import tests are generated in our [TFOpTests](https://github.com/deeplearning4j/TFOpTests) project.
|
||||
|
||||
### Creating and exposing new SameDiff ops
|
||||
|
||||
We've seen how ND4J operations get picked up by `DifferentialFunctionFactory` and `SameDiff` to expose them to SameDiff at various levels. As for actually implementing these ops, you need to know a few things. In libnd4j you find two classes of operations, which are described [here](https://github.com/deeplearning4j/libnd4j/blob/5dea2d228c61cdec7535d1c0c6aa093a15fef9fa/AddingNewOps.md) in detail. We'll show how to implement both op types.
|
||||
We've seen how ND4J operations get picked up by `DifferentialFunctionFactory` and `SameDiff` to expose them to SameDiff at various levels. As for actually implementing these ops, you need to know a few things. In libnd4j you find two classes of operations, which are described [here](https://github.com/eclipse/deeplearning4j/blob/master/libnd4j/AddingNewOps.md) in detail. We'll show how to implement both op types.
|
||||
|
||||
All operations go [here](https://github.com/deeplearning4j/nd4j/tree/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl), and most of the time it's obvious where exactly to put the ops. Special attention goes to `layers`, which is reserved for deep learning layer implementations (like [`Conv2D`](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/layers/convolution/Conv2D.java)). These higher-level ops are based on the concept of [Modules](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/BaseModule.java), similar to modules in pytorch or layers in TensorFlow. These layer op implementation also provide a source of more involved op implementations.
|
||||
All operations go [here](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl), and most of the time it's obvious where exactly to put the ops. Special attention goes to `layers`, which is reserved for deep learning layer implementations (like [`Conv2D`](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/layers/convolution/Conv2D.java)). These higher-level ops are based on the concept of [Modules](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/BaseModule.java), similar to modules in PyTorch or layers in TensorFlow. These layer op implementations also provide a source of more involved op implementations.
|
||||
|
||||
#### Implementing legacy operations
|
||||
|
||||
Legacy (or XYZ) operations are the old breed of ND4J operations with a characteristic "xyz" signature. Here's how to implement cosine in ND4J by wrapping the `cos` legacy op from libn4j: [Cosine implementation](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Cos.java#L38-L72). When it comes to SameDiff, the good thing about legacy ops is that they're already available in ND4J, but need to be augmented by SameDiff specific functionality to pass the muster. Since the cosine function does not have any properties, this implementation is straightforward. The parts that make this op SameDiff compliant are:
|
||||
Legacy (or XYZ) operations are the old breed of ND4J operations with a characteristic "xyz" signature. Here's how to implement cosine in ND4J by wrapping the `cos` legacy op from libnd4j: [Cosine implementation](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Cos.java#L38-L72). When it comes to SameDiff, the good thing about legacy ops is that they're already available in ND4J, but they need to be augmented with SameDiff-specific functionality to pass muster. Since the cosine function does not have any properties, this implementation is straightforward. The parts that make this op SameDiff-compliant are:
|
||||
|
||||
- You specify SameDiff constructors [here](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Cos.java#L38-L51)
|
||||
- You implement `doDiff` [here] (https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Cos.java#L38-L51)
|
||||
- You specify a SameDiff `opName`, a TensorFlow `tensorflowName` and an ONNX `onnxName` [here](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Cos.java#L74-L93).
|
||||
- You specify SameDiff constructors [here](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Cos.java#L38-L51)
|
||||
- You implement `doDiff` [here](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Cos.java#L38-L51)
|
||||
- You specify a SameDiff `opName`, a TensorFlow `tensorflowName` and an ONNX `onnxName` [here](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Cos.java#L74-L93).
|
||||
|
||||
If you look closely, this is only part of the truth, since `Cos` extends `BaseTransformOp`, which implements other SameDiff functionality. (Note that `BaseTransformOp` is a [`BaseOp`](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/BaseOp.java), which extends `DifferentialFunction` from earlier.) For instance, `calculateOutputShape` is [implemented there](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/BaseTransformOp.java#L195-L207). If you want to implement a new transform, you can simply inherit from `BaseTransformOp`, too. For other op types like reductions etc. there are op base classes available as well, meaning you only need to address the three bullet points above.
|
||||
If you look closely, this is only part of the truth, since `Cos` extends `BaseTransformOp`, which implements other SameDiff functionality. (Note that `BaseTransformOp` is a [`BaseOp`](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/BaseOp.java), which extends `DifferentialFunction` from earlier.) For instance, `calculateOutputShape` is [implemented there](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/BaseTransformOp.java#L195-L207). If you want to implement a new transform, you can simply inherit from `BaseTransformOp`, too. For other op types like reductions etc. there are op base classes available as well, meaning you only need to address the three bullet points above.
|
||||
|
||||
In the rare case you need to write a legacy op from scratch, you'll have to look up the respective libnd4j op number, which is listed in `legacy_ops.h`.
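
To make those bullet points concrete, here is a minimal sketch of such a legacy-op wrapper, modeled on the `Cos` implementation linked above. The class name `MyCos` and the op number are made up for illustration; the base class, the `f()`/`arg()` accessors, and the override names follow the linked snippets, but exact signatures may differ between versions, so treat this as a sketch rather than copy-paste material.

```java
import java.util.Collections;
import java.util.List;

import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ops.BaseTransformOp;

// Illustrative legacy transform wrapper in the style of Cos.
public class MyCos extends BaseTransformOp {

    public MyCos(SameDiff sameDiff, SDVariable i_v, boolean inPlace) {
        super(sameDiff, i_v, inPlace);      // SameDiff constructor
    }

    public MyCos() { }                      // no-arg constructor, needed when the op is instantiated reflectively

    @Override
    public int opNum() {
        return 12;                          // illustrative value; the real number is listed in libnd4j's legacy_ops.h
    }

    @Override
    public String opName() {
        return "cos";                       // SameDiff op name
    }

    @Override
    public String onnxName() {
        return "Cos";                       // name used for ONNX import
    }

    @Override
    public String tensorflowName() {
        return "Cos";                       // name used for TensorFlow import
    }

    @Override
    public List<SDVariable> doDiff(List<SDVariable> grad) {
        // d/dx cos(x) = -sin(x); apply the chain rule with the incoming gradient
        SDVariable dLdx = f().neg(f().sin(arg())).mul(grad.get(0));
        return Collections.singletonList(dLdx);
    }
}
```

Since the op has no properties, nothing beyond the constructors, the names, and `doDiff` is needed; `calculateOutputShape` and the remaining plumbing come from `BaseTransformOp`, as described above.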
|
||||
|
||||
#### Implementing Dynamic Custom Operations
|
||||
|
||||
[`DynamicCustomOp`](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/DynamicCustomOp.java) is the new kind of operation from libnd4j and all recent additions are implemented as such. This operation type in ND4J directly extends `DifferentialFunction`.
|
||||
[`DynamicCustomOp`](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/DynamicCustomOp.java) is the new kind of operation from libnd4j and all recent additions are implemented as such. This operation type in ND4J directly extends `DifferentialFunction`.
|
||||
|
||||
[Here's](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/BatchToSpace.java) an example of the `BatchToSpace` operation, which inherits from `DynamicCustomOp`:
|
||||
[Here's](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/BatchToSpace.java) an example of the `BatchToSpace` operation, which inherits from `DynamicCustomOp`:
|
||||
|
||||
- BatchToSpace is [initialized](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/BatchToSpace.java#L49-L67) with two properties, `blocks` and `crops`. Note how `blocks` and `crops`, which are both of integer type, get added to _integer arguments_ for the operation by calling `addIArgument`. For floating-point arguments, use `addTArgument` instead.
|
||||
- The operation gets its own name and [names for import](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/BatchToSpace.java#L69-L82),
|
||||
- and `doDiff` is [implemented](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/BatchToSpace.java#L84-L89).
|
||||
- BatchToSpace is [initialized](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/BatchToSpace.java#L49-L67) with two properties, `blocks` and `crops`. Note how `blocks` and `crops`, which are both of integer type, get added to _integer arguments_ for the operation by calling `addIArgument`. For floating-point arguments, use `addTArgument` instead (a sketch follows below).
|
||||
- The operation gets its own name and [names for import](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/BatchToSpace.java#L69-L82),
|
||||
- and `doDiff` is [implemented](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/BatchToSpace.java#L84-L89).
|
||||
|
||||
The BatchToSpace operation is then integrated into `DifferentialFunctionFactory` [here](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/autodiff/functions/DifferentialFunctionFactory.java#L840-L844), exposed to `SameDiff` [here](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/autodiff/samediff/SameDiff.java#L2105-L2107) and tested [here](https://github.com/deeplearning4j/nd4j/blob/4c00b19ad4972399264233b6f0b0f5a22493235b/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/autodiff/gradcheck/GradCheckTransforms.java#L151-L191).
|
||||
The BatchToSpace operation is then integrated into `DifferentialFunctionFactory` [here](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/autodiff/functions/DifferentialFunctionFactory.java#L840-L844), exposed to `SameDiff` [here](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/autodiff/samediff/SameDiff.java#L2105-L2107) and tested [here](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-tests/src/test/java/org/nd4j/autodiff/gradcheck/GradCheckTransforms.java#L151-L191).
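
As a rough illustration of the bullet points above, here is a hedged sketch of a `DynamicCustomOp` subclass in the spirit of `BatchToSpace`. The class name `MyBatchToSpace` and the literal op names are illustrative; the constructor delegation and the `addIArgument`/`addTArgument` calls mirror the linked source but may differ between versions, and `doDiff` is deliberately left as a stub.

```java
import java.util.List;

import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ops.DynamicCustomOp;

// Illustrative DynamicCustomOp subclass in the spirit of BatchToSpace.
public class MyBatchToSpace extends DynamicCustomOp {

    private int[] blocks;
    private int[][] crops;

    public MyBatchToSpace() { }             // no-arg constructor for import/instantiation

    public MyBatchToSpace(SameDiff sameDiff, SDVariable[] args,
                          int[] blocks, int[][] crops, boolean inPlace) {
        super(null, sameDiff, args, inPlace);   // delegation pattern as in the linked BatchToSpace constructor
        this.blocks = blocks;
        this.crops = crops;

        // integer-typed properties become the op's integer arguments
        for (int b : blocks)
            addIArgument(b);
        for (int[] c : crops)
            addIArgument(c[0], c[1]);
        // floating-point values would be registered with addTArgument(...) instead
    }

    @Override
    public String opName() {
        return "batch_to_space";            // SameDiff/libnd4j op name (illustrative)
    }

    @Override
    public String tensorflowName() {
        return "BatchToSpaceND";            // TensorFlow import name (illustrative)
    }

    @Override
    public List<SDVariable> doDiff(List<SDVariable> grad) {
        // Schematic stub: the gradient of batch-to-space is space-to-batch applied to the
        // incoming gradient; see the linked BatchToSpace#doDiff for the actual delegation.
        throw new UnsupportedOperationException("doDiff omitted in this sketch");
    }
}
```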
|
||||
|
||||
The only thing BatchToSpace is currently missing is _property mapping_. We call the properties for this operation `blocks` and `crops`, but in ONNX or TensorFlow they might be called and stored quite differently. To look up these differences and map them correctly, see [`ops.proto`](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/resources/ops.proto) for TensorFlow and [`onnxops.json`](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/resources/onnxops.json) for ONNX.
|
||||
The only thing BatchToSpace is currently missing is _property mapping_. We call the properties for this operation `blocks` and `crops`, but in ONNX or TensorFlow they might be called and stored quite differently. To look up these differences and map them correctly, see [`ops.proto`](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/resources/ops.proto) for TensorFlow and [`onnxops.json`](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/resources/onnxops.json) for ONNX.
|
||||
|
||||
|
||||
Let's look at another operation that does property mapping right, namely [`DynamicPartition`](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/DynamicPartition.java). This op has precisely one property, called `numPartitions` in SameDiff. To map and use this property, you do the following:
|
||||
Let's look at another operation that does property mapping right, namely [`DynamicPartition`](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/DynamicPartition.java). This op has precisely one property, called `numPartitions` in SameDiff. To map and use this property, you do the following:
|
||||
|
||||
- Implement a little helper method called [`addArgs`](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/DynamicPartition.java#L59-L61) that is used in the constructor of the op and in an import helper one-liner that we'll discuss next. It's not strictly necessary, but it's encouraged to do this and to call the helper `addArgs` consistently, for clarity.
|
||||
- Override the [`initFromTensorFlow` method](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/DynamicPartition.java#L63-L67), which maps properties for us using a `TFGraphMapper` instance and adds arguments with `addArgs`. Note that since ONNX does not support dynamic partitioning at the time of this writing (hence no `onnxName`), there's also no `initFromOnnx` method; when present, it works pretty much the same way as `initFromTensorFlow`.
|
||||
- For the TensorFlow import to work, we also need to [override `mappingsForFunction`](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/DynamicPartition.java#L70-L83). This mapping example is very simple: all it does is map TensorFlow's property name `num_partitions` to our name `numPartitions`.
|
||||
- Implement a little helper method called [`addArgs`](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/DynamicPartition.java#L59-L61) that is used in the constructor of the op and in an import helper one-liner that we'll discuss next. It's not strictly necessary, but it's encouraged to do this and to call the helper `addArgs` consistently, for clarity.
|
||||
- Override the [`initFromTensorFlow` method](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/DynamicPartition.java#L63-L67), which maps properties for us using a `TFGraphMapper` instance and adds arguments with `addArgs`. Note that since ONNX does not support dynamic partitioning at the time of this writing (hence no `onnxName`), there's also no `initFromOnnx` method; when present, it works pretty much the same way as `initFromTensorFlow`.
|
||||
- For the TensorFlow import to work, we also need to [override `mappingsForFunction`](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/DynamicPartition.java#L70-L83). This mapping example is very simple: all it does is map TensorFlow's property name `num_partitions` to our name `numPartitions`.
|
||||
|
||||
Note that while `DynamicPartition` has proper property mapping, it currently does not have a working `doDiff` implementation.
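
Here is a hedged sketch of what this property-mapping machinery looks like, condensed from the `DynamicPartition` snippets linked above. The `PropertyMapping` builder calls, the package location, and the attribute name are assumptions taken from those links and may not match every version; `initFromTensorFlow` is omitted here and would call `addArgs` after the `TFGraphMapper`-driven mapping has populated `numPartitions`.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

import org.nd4j.imports.descriptors.properties.PropertyMapping;
import org.nd4j.linalg.api.ops.DynamicCustomOp;

// Illustrative excerpt in the spirit of DynamicPartition's property mapping.
public class MyDynamicPartition extends DynamicCustomOp {

    private int numPartitions;              // the SameDiff-side property

    // helper that pushes the mapped property into the op's integer arguments
    private void addArgs() {
        addIArgument(numPartitions);
    }

    @Override
    public Map<String, Map<String, PropertyMapping>> mappingsForFunction() {
        // map TensorFlow's attribute "num_partitions" onto our property "numPartitions"
        Map<String, PropertyMapping> attrs = new LinkedHashMap<>();
        attrs.put("numPartitions", PropertyMapping.builder()
                .tfAttrName("num_partitions")
                .propertyNames(new String[] {"numPartitions"})
                .build());

        Map<String, Map<String, PropertyMapping>> ret = new HashMap<>();
        ret.put(tensorflowName(), attrs);   // keyed by the import name
        return ret;
    }

    @Override
    public String opName() {
        return "dynamic_partition";         // illustrative
    }

    @Override
    public String tensorflowName() {
        return "DynamicPartition";          // illustrative
    }
}
```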
|
||||
|
||||
As a last example, we show one with a somewhat more interesting property-mapping setup, namely [`Dilation2D`](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Dilation2D.java). Not only does this op have far more properties to map, as you can see in [`mappingsForFunction`](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Dilation2D.java#L59-L104); its properties also come with _property values_, as defined in [`attributeAdaptersForFunction`](https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Dilation2D.java#L106-L132). We've chosen to show this op because it has property mapping, but is exposed to neither `DifferentialFunctionFactory` nor `SameDiff`.
|
||||
As a last example, we show one with a somewhat more interesting property-mapping setup, namely [`Dilation2D`](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Dilation2D.java). Not only does this op have far more properties to map, as you can see in [`mappingsForFunction`](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Dilation2D.java#L59-L104); its properties also come with _property values_, as defined in [`attributeAdaptersForFunction`](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl/transforms/Dilation2D.java#L106-L132). We've chosen to show this op because it has property mapping, but is exposed to neither `DifferentialFunctionFactory` nor `SameDiff`.
|
||||
|
||||
Hence, the three `DynamicCustomOp` examples shown each come with their own shortcomings and illustrate the work that still has to be done for SameDiff. To summarize, to add a new SameDiff op you need to provide SameDiff constructors, register the op name (plus a `tensorflowName` and `onnxName` for import where applicable), implement `doDiff`, and set up property mapping and attribute adapters for any properties the op has.
|
||||
|
||||
|
|
|
@ -1,15 +0,0 @@
|
|||
## Contribute
|
||||
|
||||
1. Check for open issues, or open a new issue to start a discussion around a feature idea or a bug.
|
||||
2. If you feel uncomfortable or uncertain about an issue or your changes, feel free to contact us on Gitter using the link above.
|
||||
3. Fork [the repository](https://github.com/deeplearning4j/gym-java-client.git) on GitHub to start making your changes to the **master** branch (or branch off of it).
|
||||
4. Write a test, which shows that the bug was fixed or that the feature works as expected.
|
||||
5. Note the repository follows
|
||||
the [Google Java style](https://google.github.io/styleguide/javaguide.html)
|
||||
with two modifications: 120-char column wrap and 4-spaces indentation. You
|
||||
can format your code to this format by typing `mvn formatter:format` in the
|
||||
subproject you work on, by using the `contrib/formatter.xml` at the root of
|
||||
the repository to configure the Eclipse formatter, or by [using the IntelliJ
|
||||
plugin](https://github.com/HPI-Information-Systems/Metanome/wiki/Installing-the-google-styleguide-settings-in-intellij-and-eclipse).
|
||||
|
||||
6. Send a pull request, and bug us on Gitter until it gets merged and published.
|
|
@ -1,19 +0,0 @@
|
|||
#### Issue Description
|
||||
|
||||
Please describe your issue, along with:
|
||||
- expected behavior
|
||||
- encountered behavior
|
||||
|
||||
#### Version Information
|
||||
|
||||
Please indicate relevant versions, including, if relevant:
|
||||
|
||||
* Deeplearning4j version
|
||||
* platform information (OS, etc)
|
||||
* CUDA version, if used
|
||||
* NVIDIA driver version, if in use
|
||||
|
||||
#### Contributing
|
||||
|
||||
If you'd like to help us fix the issue by contributing some code, but would
|
||||
like guidance or help in doing so, please mention it!
|
|
@ -1,10 +0,0 @@
|
|||
## What changes were proposed in this pull request?
|
||||
|
||||
(Please fill in changes proposed in this fix)
|
||||
|
||||
## How was this patch tested?
|
||||
|
||||
(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
|
||||
|
||||
Please review
|
||||
https://github.com/deeplearning4j/deeplearning4j/blob/master/CONTRIBUTING.md before opening a pull request.
|
|
@ -2,7 +2,7 @@ Jumpy: Python interface for [nd4j](https://nd4j.org)
|
|||
===========================================
|
||||
|
||||
[![Join the chat at https://gitter.im/deeplearning4j/deeplearning4j](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/deeplearning4j/deeplearning4j?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
|
||||
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/deeplearning4j/deeplearning4j/blob/master/jumpy/LICENSE)
|
||||
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/eclipse/deeplearning4j/blob/master/jumpy/LICENSE)
|
||||
[![PyPI version](https://badge.fury.io/py/jumpy.svg)](https://badge.fury.io/py/jumpy)
|
||||
|
||||
Jumpy allows you to use ND4J from Python _without any network communication_. Many other Python libraries bridging Java
|
||||
|
|
|
@ -16,7 +16,7 @@ Please follow following instructions to build nd4j for raspberry PI:
|
|||
|
||||
```
|
||||
$ cd $HOME
|
||||
$ git clone https://github.com/deeplearning4j/deeplearning4j.git
|
||||
$ git clone https://github.com/eclipse/deeplearning4j.git
|
||||
```
|
||||
|
||||
3. build libnd4j:
|
||||
|
|
|
@ -13,7 +13,7 @@ delete graph;
|
|||
```
|
||||
|
||||
### FlatBuffers schemas
|
||||
You can find the schema files [here](https://github.com/deeplearning4j/libnd4j/tree/master/include/graph/scheme).
|
||||
You can find the schema files [here](https://github.com/eclipse/deeplearning4j/tree/master/libnd4j/include/graph/scheme).
|
||||
|
||||
At the moment, the libnd4j repo contains compiled definitions for C++, Python, Java, and JSON, but the FlatBuffers schemas can also be compiled for PHP, C#, JavaScript, TypeScript, and Go. Please refer to the `flatc` instructions to do that.
|
||||
|
||||
|
|
|
@ -1770,7 +1770,7 @@ NDArray NDArray::operator()(const Nd4jLong i) const {
|
|||
} else {
|
||||
Nd4jLong idx[MAX_RANK];
|
||||
shape::ind2subC(rankOf(), shapeOf(), i, idx);
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), idx, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), idx);
|
||||
|
||||
auto cast = reinterpret_cast<int8_t *>(_buffer) + (xOffset * this->sizeOfT());
|
||||
NDArray result(cast, nd4j::ShapeBuilders::createScalarShapeInfo(this->dataType(), this->getWorkspace()));
|
||||
|
@ -1801,7 +1801,7 @@ NDArray& NDArray::operator()(const Nd4jLong i) {
|
|||
} else {
|
||||
Nd4jLong idx[MAX_RANK];
|
||||
shape::ind2subC(rankOf(), shapeOf(), i, idx);
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), idx, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), idx);
|
||||
|
||||
auto cast = reinterpret_cast<int8_t *>(_buffer) + (xOffset * this->sizeOfT());
|
||||
NDArray result(cast, nd4j::ShapeBuilders::createScalarShapeInfo(this->dataType(), this->getWorkspace()));
|
||||
|
@ -1818,7 +1818,7 @@ NDArray NDArray::operator()(const Nd4jLong i, const Nd4jLong j) const {
|
|||
throw std::invalid_argument("NDArray::operator(i,j): one of input indexes is out of array length or rank!=2 !");
|
||||
|
||||
Nd4jLong coords[2] = {i, j};
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), coords);
|
||||
|
||||
// TODO: do we really want a view here?
|
||||
auto cast = reinterpret_cast<int8_t *>(_buffer) + (xOffset * this->sizeOfT());
|
||||
|
@ -1834,7 +1834,7 @@ NDArray& NDArray::operator()(const Nd4jLong i, const Nd4jLong j) {
|
|||
throw std::invalid_argument("NDArray::operator(i,j): one of input indexes is out of array length or rank!=2 !");
|
||||
|
||||
Nd4jLong coords[2] = {i, j};
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), coords);
|
||||
|
||||
auto cast = reinterpret_cast<int8_t *>(_buffer) + (xOffset * this->sizeOfT());
|
||||
NDArray result(cast, nd4j::ShapeBuilders::createScalarShapeInfo(this->dataType(), this->getWorkspace()));
|
||||
|
@ -1853,7 +1853,7 @@ NDArray NDArray::operator()(const Nd4jLong i, const Nd4jLong j, const Nd4jLong k
|
|||
throw std::invalid_argument("NDArray::operator(i,j,k): one of input indexes is out of array length or rank!=3 !");
|
||||
|
||||
Nd4jLong coords[3] = {i, j, k};
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), coords);
|
||||
|
||||
auto cast = reinterpret_cast<int8_t *>(_buffer) + (xOffset * this->sizeOfT());
|
||||
NDArray result(cast, nd4j::ShapeBuilders::createScalarShapeInfo(this->dataType(), this->getWorkspace()));
|
||||
|
@ -1870,7 +1870,7 @@ NDArray& NDArray::operator()(const Nd4jLong i, const Nd4jLong j, const Nd4jLong
|
|||
throw std::invalid_argument("NDArray::operator(i,j,k): one of input indexes is out of array length or rank!=3 !");
|
||||
|
||||
Nd4jLong coords[3] = {i, j, k};
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), coords);
|
||||
|
||||
auto cast = reinterpret_cast<int8_t *>(_buffer) + (xOffset * this->sizeOfT());
|
||||
NDArray result(cast, nd4j::ShapeBuilders::createScalarShapeInfo(this->dataType(), this->getWorkspace()));
|
||||
|
@ -1886,7 +1886,7 @@ NDArray NDArray::operator()(const Nd4jLong t, const Nd4jLong u, const Nd4jLong v
|
|||
throw std::invalid_argument("NDArray::operator(t,u,v,w): one of input indexes is out of array length or rank!=4 !");
|
||||
|
||||
Nd4jLong coords[4] = {t, u, v, w};
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), coords);
|
||||
|
||||
auto cast = reinterpret_cast<int8_t *>(_buffer) + (xOffset * this->sizeOfT());
|
||||
NDArray result(cast, nd4j::ShapeBuilders::createScalarShapeInfo(this->dataType(), this->getWorkspace()));
|
||||
|
@ -1900,7 +1900,7 @@ NDArray& NDArray::operator()(const Nd4jLong t, const Nd4jLong u, const Nd4jLong
|
|||
throw std::invalid_argument("NDArray::operator(t,u,v,w): one of input indexes is out of array length or rank!=4 !");
|
||||
|
||||
Nd4jLong coords[4] = {t, u, v, w};
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), coords);
|
||||
|
||||
// FIXME
|
||||
auto cast = reinterpret_cast<int8_t *>(_buffer) + (xOffset * this->sizeOfT());
|
||||
|
@ -1916,7 +1916,7 @@ NDArray NDArray::operator()(const Nd4jLong* idx) const {
|
|||
if (idx[i] >= sizeAt(i))
|
||||
throw std::invalid_argument("NDArray::operator(const Nd4jLong* idx): input index is out of dimension length !");
|
||||
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), idx, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), idx);
|
||||
|
||||
auto cast = reinterpret_cast<int8_t *>(_buffer) + (xOffset * this->sizeOfT());
|
||||
NDArray result(cast, nd4j::ShapeBuilders::createScalarShapeInfo(this->dataType(), this->getWorkspace()));
|
||||
|
@ -1931,7 +1931,7 @@ NDArray& NDArray::operator()(const Nd4jLong* idx) {
|
|||
if (idx[i] >= sizeAt(i))
|
||||
throw std::invalid_argument("NDArray::operator(const Nd4jLong* idx): input index is out of dimension length !");
|
||||
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), idx, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), idx);
|
||||
|
||||
auto cast = reinterpret_cast<int8_t *>(_buffer) + (xOffset * this->sizeOfT());
|
||||
NDArray result(cast, nd4j::ShapeBuilders::createScalarShapeInfo(this->dataType(), this->getWorkspace()));
|
||||
|
@ -2067,7 +2067,7 @@ T& NDArray::t(const Nd4jLong i, const Nd4jLong j) {
|
|||
syncToHost();
|
||||
|
||||
Nd4jLong coords[2] = {i, j};
|
||||
auto offset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
auto offset = shape::getOffset(getShapeInfo(), coords);
|
||||
tickWriteHost();
|
||||
return *(reinterpret_cast<T*>(bufferWithOffset(offset)));
|
||||
}
|
||||
|
@ -2084,7 +2084,7 @@ T& NDArray::t(const Nd4jLong i, const Nd4jLong j, const Nd4jLong k) {
|
|||
syncToHost();
|
||||
|
||||
Nd4jLong coords[3] = {i, j, k};
|
||||
auto offset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
auto offset = shape::getOffset(getShapeInfo(), coords);
|
||||
tickWriteHost();
|
||||
return *(reinterpret_cast<T*>(bufferWithOffset(offset)));
|
||||
}
|
||||
|
@ -2118,7 +2118,7 @@ T NDArray::t(const Nd4jLong i, const Nd4jLong j) const {
|
|||
syncToHost();
|
||||
|
||||
Nd4jLong coords[2] = {i, j};
|
||||
auto offset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
auto offset = shape::getOffset(getShapeInfo(), coords);
|
||||
tickReadHost();
|
||||
return *(reinterpret_cast<T*>(bufferWithOffset(offset)));
|
||||
}
|
||||
|
@ -2135,7 +2135,7 @@ T NDArray::t(const Nd4jLong i, const Nd4jLong j) const {
|
|||
syncToHost();
|
||||
|
||||
Nd4jLong coords[3] = {i, j, k};
|
||||
auto offset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
auto offset = shape::getOffset(getShapeInfo(), coords);
|
||||
tickReadHost();
|
||||
return *(reinterpret_cast<T*>(bufferWithOffset(offset)));
|
||||
}
|
||||
|
|
|
@ -808,7 +808,7 @@ void NDArray::templatedSet(void *buffer, const Nd4jLong *indices, const void *va
|
|||
auto t = reinterpret_cast<T *>(buffer);
|
||||
const auto y = *(reinterpret_cast<const Y *>(value));
|
||||
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), indices, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), indices);
|
||||
t[xOffset] = static_cast<T>(y);
|
||||
}
|
||||
BUILD_DOUBLE_TEMPLATE(template void NDArray::templatedSet, (void *buffer, const Nd4jLong *indices, const void *value), LIBND4J_TYPES, LIBND4J_TYPES);
|
||||
|
@ -2462,14 +2462,13 @@ double NDArray::getTrace() const {
|
|||
|
||||
int rank = rankOf();
|
||||
auto shape = shapeOf();
|
||||
auto strides = stridesOf();
|
||||
int minDim = 100000000;
|
||||
|
||||
Nd4jLong indices[MAX_RANK];
|
||||
for(int j = 0; j < rank; ++j)
|
||||
indices[j] = 1;
|
||||
|
||||
auto offset = shape::getOffset(0, shape, strides, indices, rank);
|
||||
auto offset = shape::getOffset(getShapeInfo(), indices);
|
||||
|
||||
for(int i = 0; i < rank; ++i)
|
||||
if(minDim > shape[i])
|
||||
|
@ -3472,7 +3471,7 @@ T NDArray::e(const Nd4jLong i, const Nd4jLong j) const {
|
|||
throw std::invalid_argument("NDArray::e(i,j): one of input indexes is out of array length or rank!=2 !");
|
||||
|
||||
const Nd4jLong coords[2] = {i, j};
|
||||
const auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
const auto xOffset = shape::getOffset(getShapeInfo(), coords);
|
||||
|
||||
NDArray::preparePrimaryUse({}, {this});
|
||||
NDArray::registerPrimaryUse({}, {this});
|
||||
|
@ -3492,7 +3491,7 @@ T NDArray::e(const Nd4jLong i, const Nd4jLong j, const Nd4jLong k) const {
|
|||
throw std::invalid_argument("NDArray::e(i,j,k): one of input indexes is out of array length or rank!=3 !");
|
||||
|
||||
const Nd4jLong coords[3] = {i, j, k};
|
||||
const auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
const auto xOffset = shape::getOffset(getShapeInfo(), coords);
|
||||
|
||||
NDArray::preparePrimaryUse({}, {this});
|
||||
NDArray::registerPrimaryUse({}, {this});
|
||||
|
@ -3512,7 +3511,7 @@ T NDArray::e(const Nd4jLong i, const Nd4jLong j, const Nd4jLong k, const Nd4jLon
|
|||
throw std::invalid_argument("NDArray::e(i,j,k,l): one of input indexes is out of array length or rank!=4 !");
|
||||
|
||||
const Nd4jLong coords[4] = {i, j, k, l};
|
||||
const auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
const auto xOffset = shape::getOffset(getShapeInfo(), coords);
|
||||
|
||||
NDArray::preparePrimaryUse({}, {this});
|
||||
NDArray::registerPrimaryUse({}, {this});
|
||||
|
@ -4095,7 +4094,7 @@ void NDArray::p(const Nd4jLong i, const Nd4jLong j, const T value) {
|
|||
|
||||
void *p = reinterpret_cast<void *>(const_cast<T *>(&value));
|
||||
Nd4jLong coords[2] = {i, j};
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), coords);
|
||||
|
||||
NDArray::preparePrimaryUse({this}, {}, true);
|
||||
BUILD_SINGLE_PARTIAL_SELECTOR(dataType(), templatedSet<, T>(this->getBuffer(), xOffset, p), LIBND4J_TYPES);
|
||||
|
@ -4127,7 +4126,7 @@ void NDArray::p(const Nd4jLong i, const Nd4jLong j, const Nd4jLong k, const T va
|
|||
|
||||
void *p = reinterpret_cast<void *>(const_cast<T *>(&value));
|
||||
Nd4jLong coords[3] = {i, j, k};
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), coords);
|
||||
BUILD_SINGLE_PARTIAL_SELECTOR(dataType(), templatedSet<, T>(this->getBuffer(), xOffset, p), LIBND4J_TYPES);
|
||||
NDArray::registerPrimaryUse({this}, {});
|
||||
}
|
||||
|
@ -4154,7 +4153,7 @@ void NDArray::p(const Nd4jLong i, const Nd4jLong j, const Nd4jLong k, const Nd4j
|
|||
|
||||
void *p = reinterpret_cast<void *>(const_cast<T *>(&value));
|
||||
Nd4jLong coords[4] = {i, j, k, l};
|
||||
auto xOffset = shape::getOffset(0, shapeOf(), stridesOf(), coords, rankOf());
|
||||
auto xOffset = shape::getOffset(getShapeInfo(), coords);
|
||||
|
||||
NDArray::preparePrimaryUse({this}, {}, true);
|
||||
BUILD_SINGLE_PARTIAL_SELECTOR(dataType(), templatedSet<, T>(this->getBuffer(), xOffset, p), LIBND4J_TYPES);
|
||||
|
@ -4409,7 +4408,7 @@ Nd4jLong NDArray::getOffset(const Nd4jLong i) const {
|
|||
if (i >= lengthOf())
|
||||
throw std::invalid_argument("NDArray::getOffset: input index is out of array length !");
|
||||
|
||||
return shape::getIndexOffset(i, _shapeInfo, lengthOf());
|
||||
return shape::getIndexOffset(i, _shapeInfo);
|
||||
}
|
||||
|
||||
NDArray NDArray::like() {
|
||||
|
@ -4455,7 +4454,7 @@ NDArray* NDArray::diagonal(const char type) const {
|
|||
indices[i] = 1;
|
||||
}
|
||||
|
||||
auto step = shape::getOffset(0, shapeOf(), stridesOf(), indices, rank);
|
||||
auto step = shape::getOffset(getShapeInfo(), indices);
|
||||
|
||||
if(type == 'c') {
|
||||
outShapeInfo[1] = diagSize;
|
||||
|
|
|
@ -103,8 +103,8 @@ void NDArray::fillAsTriangular(const float val, int lower, int upper, const char
|
|||
PRAGMA_OMP_PARALLEL_FOR_ARGS(OMP_IF(zLen > Environment::getInstance()->elementwiseThreshold()) firstprivate(coords))
|
||||
for (Nd4jLong i = 0; i < zLen; ++i) {
|
||||
|
||||
shape::index2coords(zRank, target->shapeOf(), i, zLen, coords.data());
|
||||
const auto zOffset = shape::getOffset(0, target->shapeOf(), target->stridesOf(), coords.data(), zRank);
|
||||
shape::index2coords(i, target->getShapeInfo(), coords.data());
|
||||
const auto zOffset = shape::getOffset(target->getShapeInfo(), coords.data());
|
||||
|
||||
// if( (row + upper < col) || (row + lower > col) )
|
||||
if((coords[zRank - 2] + upper < coords[zRank - 1]) || (coords[zRank - 2] + lower > coords[zRank - 1]))
|
||||
|
@ -112,7 +112,7 @@ void NDArray::fillAsTriangular(const float val, int lower, int upper, const char
|
|||
else if(this != target) { // when this and target are different arrays
|
||||
if(xRank != zRank)
|
||||
coords[0] = coords[1];
|
||||
const auto xOffset = areSameOffsets ? zOffset : shape::getOffset(0, shapeOf(), stridesOf(), coords.data(), xRank);
|
||||
const auto xOffset = areSameOffsets ? zOffset : shape::getOffset(getShapeInfo(), coords.data());
|
||||
z[zOffset] = x[xOffset];
|
||||
}
|
||||
}
|
||||
|
@ -128,13 +128,12 @@ void NDArray::setIdentity() {
|
|||
|
||||
int rank = rankOf();
|
||||
auto shape = shapeOf();
|
||||
auto strides = stridesOf();
|
||||
int minDim = MAX_INT;
|
||||
Nd4jLong indices[MAX_RANK];
|
||||
for(int j = 0; j < rank; ++j)
|
||||
indices[j] = 1;
|
||||
|
||||
Nd4jLong offset = shape::getOffset(0, shape, strides, indices, rank);
|
||||
Nd4jLong offset = shape::getOffset(getShapeInfo(), indices);
|
||||
|
||||
for(int i = 0; i < rank; ++i)
|
||||
if(minDim > shape[i])
|
||||
|
@ -380,9 +379,9 @@ static void repeat_(const NDArray& input, NDArray& output, const std::vector<int
|
|||
PRAGMA_OMP_PARALLEL_FOR_ARGS(schedule(guided) firstprivate(coords))
|
||||
for (Nd4jLong i = 0; i < zLen; ++i) {
|
||||
|
||||
shape::index2coords(rank, output.shapeOf(), i, zLen, coords.data());
|
||||
shape::index2coords(i, output.getShapeInfo(), coords.data());
|
||||
|
||||
const auto zOffset = shape::getOffset(0, output.shapeOf(), output.stridesOf(), coords.data(), rank);
|
||||
const auto zOffset = shape::getOffset(output.getShapeInfo(), coords.data());
|
||||
|
||||
if(repSize > 1) {
|
||||
for (uint j = 0; j < repSize; ++j) {
|
||||
|
@ -396,7 +395,7 @@ static void repeat_(const NDArray& input, NDArray& output, const std::vector<int
|
|||
else
|
||||
coords[axis] /= repeats[0];
|
||||
|
||||
z[zOffset] = x[shape::getOffset(0, input.shapeOf(), input.stridesOf(), coords.data(), rank)];
|
||||
z[zOffset] = x[shape::getOffset(input.getShapeInfo(), coords.data())];
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -1389,8 +1389,8 @@ void pullRowsGeneric(void *vx,
|
|||
}
|
||||
else {
|
||||
for (int i = 0; i < tadLength; i++) {
|
||||
auto xOffset = xTadOffsetForBlock + shape::getIndexOffset(i, tadShapeInfo, tadLength);
|
||||
auto zOffset = zTadOffsetForBlock + shape::getIndexOffset(i, zTadShapeInfo, tadLength);
|
||||
auto xOffset = xTadOffsetForBlock + shape::getIndexOffset(i, tadShapeInfo);
|
||||
auto zOffset = zTadOffsetForBlock + shape::getIndexOffset(i, zTadShapeInfo);
|
||||
hZ[zOffset] = hX[xOffset];
|
||||
}
|
||||
}
|
||||
|
@ -1454,7 +1454,7 @@ void tearGeneric(void *vx,
|
|||
else {
|
||||
|
||||
for (Nd4jLong j = 0; j < tadLength; j++)
|
||||
hZ[shape::getIndexOffset(j, hZShapeInfo, tadLength)] = s[shape::getIndexOffset(j, tadShapeInfo, tadLength)];
|
||||
hZ[shape::getIndexOffset(j, hZShapeInfo)] = s[shape::getIndexOffset(j, tadShapeInfo)];
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -1601,7 +1601,7 @@ void shuffleGeneric(void **hX, Nd4jLong **hXShapeInfo, void **dz, Nd4jLong **hZS
|
|||
}
|
||||
} else {
|
||||
for (Nd4jLong i = 0; i < tadLength; i++) {
|
||||
auto offset = shape::getIndexOffset(i, tadOnlyShapeInfo[f], tadLength);
|
||||
auto offset = shape::getIndexOffset(i, tadOnlyShapeInfo[f]);
|
||||
nd4j::math::nd4j_swap<T>(hX[offset + oldOffset], hX[offset + newOffset]);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -106,8 +106,8 @@ __global__ static void fillAsTriangularCuda(const void* vx, const Nd4jLong* xSha
|
|||
|
||||
for (Nd4jLong i = tid; i < zLen; i += totalThreads) {
|
||||
|
||||
shape::index2coords(zRank, shape::shapeOf(const_cast<Nd4jLong*>(zShapeInfo)), i, zLen, coords);
|
||||
const auto zOffset = shape::getOffset(0, shape::shapeOf(const_cast<Nd4jLong*>(zShapeInfo)), shape::stride(const_cast<Nd4jLong*>(zShapeInfo)), coords, zRank);
|
||||
shape::index2coords(i, zShapeInfo, coords);
|
||||
const auto zOffset = shape::getOffset(zShapeInfo, coords);
|
||||
|
||||
// if( (row + upper < col) || (row + lower > col) )
|
||||
if((coords[zRank - 2] + upper < coords[zRank - 1]) || (coords[zRank - 2] + lower > coords[zRank - 1]))
|
||||
|
@ -115,7 +115,7 @@ __global__ static void fillAsTriangularCuda(const void* vx, const Nd4jLong* xSha
|
|||
else if(vx != vz) { // when x and z are different arrays
|
||||
if(xRank != zRank)
|
||||
coords[0] = coords[1];
|
||||
const auto xOffset = areSameOffsets ? zOffset : shape::getOffset(0, shape::shapeOf(const_cast<Nd4jLong*>(xShapeInfo)), shape::stride(const_cast<Nd4jLong*>(xShapeInfo)), coords, xRank);
|
||||
const auto xOffset = areSameOffsets ? zOffset : shape::getOffset(xShapeInfo, coords);
|
||||
z[zOffset] = x[xOffset];
|
||||
}
|
||||
}
|
||||
|
@ -177,8 +177,8 @@ __global__ static void identityMatrixCuda(void* vx, const Nd4jLong* xShapeInfo,
|
|||
|
||||
for (Nd4jLong i = tid; i < len; i += totalThreads) {
|
||||
|
||||
shape::index2coords(rank, shape::shapeOf(const_cast<Nd4jLong*>(xShapeInfo)), i, len, coords);
|
||||
const auto offset = shape::getOffset(0, shape::shapeOf(const_cast<Nd4jLong*>(xShapeInfo)), shape::stride(const_cast<Nd4jLong*>(xShapeInfo)), coords, rank);
|
||||
shape::index2coords(i, xShapeInfo, coords);
|
||||
const auto offset = shape::getOffset(xShapeInfo, coords);
|
||||
|
||||
if(coords[rank - 2] == coords[rank - 1]) // row == col -> on diagonal
|
||||
x[offset] = val;
|
||||
|
@ -424,9 +424,9 @@ __global__ static void repeatCuda(const void* vx, const Nd4jLong* xShapeInfo,
|
|||
|
||||
for (Nd4jLong i = tid; i < zLen; i += totalThreads) {
|
||||
|
||||
shape::index2coords(rank, zShapeInfo + 1, i, zLen, coords);
|
||||
shape::index2coords(i, zShapeInfo, coords);
|
||||
|
||||
const auto zOffset = shape::getOffset(0, zShapeInfo + 1, zShapeInfo + rank + 1, coords, rank);
|
||||
const auto zOffset = shape::getOffset(zShapeInfo, coords);
|
||||
|
||||
if(repSize > 1) {
|
||||
for (uint j = 0; j < repSize; ++j) {
|
||||
|
@ -440,7 +440,7 @@ __global__ static void repeatCuda(const void* vx, const Nd4jLong* xShapeInfo,
|
|||
else
|
||||
coords[axis] /= repeats[0];
|
||||
|
||||
z[zOffset] = x[shape::getOffset(0, xShapeInfo + 1, xShapeInfo + rank + 1, coords, rank)];
|
||||
z[zOffset] = x[shape::getOffset(xShapeInfo, coords)];
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -23,8 +23,8 @@
|
|||
#include <cuda.h>
|
||||
#include <cuda_runtime.h>
|
||||
|
||||
static Nd4jLong __device__ __noinline__ __getIndexOffset(Nd4jLong index, Nd4jLong *shapeInfo, Nd4jLong length) {
|
||||
return shape::getIndexOffset(index, shapeInfo, length);
|
||||
static Nd4jLong __device__ __noinline__ __getIndexOffset(Nd4jLong index, Nd4jLong *shapeInfo) {
|
||||
return shape::getIndexOffset(index, shapeInfo);
|
||||
}
|
||||
|
||||
static Nd4jLong __device__ __noinline__ __length(Nd4jLong *shapeInfo) {
|
||||
|
@ -103,8 +103,8 @@ static _CUDA_G void lambdaKernel(void* vx, Nd4jLong *xShapeInfo, void *vz, Nd4jL
|
|||
z[e * zEws] = lambda(x[e * xEws]);
|
||||
} else {
|
||||
for (uint e = tid; e < zLength; e += blockDim.x * gridDim.x) {
|
||||
auto xOffset = __getIndexOffset(e, xShapeInfo, zLength);
|
||||
auto zOffset = __getIndexOffset(e, zShapeInfo, zLength);
|
||||
auto xOffset = __getIndexOffset(e, xShapeInfo);
|
||||
auto zOffset = __getIndexOffset(e, zShapeInfo);
|
||||
|
||||
z[zOffset] = lambda(x[xOffset]);
|
||||
}
|
||||
|
@ -132,8 +132,8 @@ static _CUDA_G void lambdaIndexedKernel(void* vx, Nd4jLong *xShapeInfo, void *vz
|
|||
z[e * zEws] = lambda(e, x[e * xEws]);
|
||||
} else {
|
||||
for (uint e = tid; e < zLength; e += blockDim.x * gridDim.x) {
|
||||
auto xOffset = __getIndexOffset(e, xShapeInfo, zLength);
|
||||
auto zOffset = __getIndexOffset(e, zShapeInfo, zLength);
|
||||
auto xOffset = __getIndexOffset(e, xShapeInfo);
|
||||
auto zOffset = __getIndexOffset(e, zShapeInfo);
|
||||
|
||||
z[zOffset] = lambda(e, x[xOffset]);
|
||||
}
|
||||
|
@ -164,9 +164,9 @@ static _CUDA_G void lambdaIndexedPairwiseKernel(void* vx, Nd4jLong *xShapeInfo,
|
|||
z[e * zEws] = lambda(e, x[e * xEws], y[e * yEws]);
|
||||
} else {
|
||||
for (uint e = tid; e < zLength; e += blockDim.x * gridDim.x) {
|
||||
auto xOffset = __getIndexOffset(e, xShapeInfo, zLength);
|
||||
auto yOffset = __getIndexOffset(e, yShapeInfo, zLength);
|
||||
auto zOffset = __getIndexOffset(e, zShapeInfo, zLength);
|
||||
auto xOffset = __getIndexOffset(e, xShapeInfo);
|
||||
auto yOffset = __getIndexOffset(e, yShapeInfo);
|
||||
auto zOffset = __getIndexOffset(e, zShapeInfo);
|
||||
|
||||
z[zOffset] = lambda(e, x[xOffset], y[yOffset]);
|
||||
}
|
||||
|
@ -197,9 +197,9 @@ static _CUDA_G void lambdaPairwiseKernel(void* vx, Nd4jLong *xShapeInfo, void* v
|
|||
z[e * zEws] = lambda(x[e * xEws], y[e * yEws]);
|
||||
} else {
|
||||
for (uint e = tid; e < zLength; e += blockDim.x * gridDim.x) {
|
||||
auto xOffset = __getIndexOffset(e, xShapeInfo, zLength);
|
||||
auto yOffset = __getIndexOffset(e, yShapeInfo, zLength);
|
||||
auto zOffset = __getIndexOffset(e, zShapeInfo, zLength);
|
||||
auto xOffset = __getIndexOffset(e, xShapeInfo);
|
||||
auto yOffset = __getIndexOffset(e, yShapeInfo);
|
||||
auto zOffset = __getIndexOffset(e, zShapeInfo);
|
||||
|
||||
z[zOffset] = lambda(x[xOffset], y[yOffset]);
|
||||
}
|
||||
|
@ -233,10 +233,10 @@ static _CUDA_G void lambdaTriplewiseKernel(void* vw, Nd4jLong *wShapeInfo, void*
|
|||
z[e * zEws] = lambda(w[e * wEws], x[e * xEws], y[e * yEws]);
|
||||
} else {
|
||||
for (uint e = tid; e < zLength; e += blockDim.x * gridDim.x) {
|
||||
auto wOffset = __getIndexOffset(e, wShapeInfo, zLength);
|
||||
auto xOffset = __getIndexOffset(e, xShapeInfo, zLength);
|
||||
auto yOffset = __getIndexOffset(e, yShapeInfo, zLength);
|
||||
auto zOffset = __getIndexOffset(e, zShapeInfo, zLength);
|
||||
auto wOffset = __getIndexOffset(e, wShapeInfo);
|
||||
auto xOffset = __getIndexOffset(e, xShapeInfo);
|
||||
auto yOffset = __getIndexOffset(e, yShapeInfo);
|
||||
auto zOffset = __getIndexOffset(e, zShapeInfo);
|
||||
|
||||
z[zOffset] = lambda(w[wOffset], x[xOffset], y[yOffset]);
|
||||
}
|
||||
|
|
|
@ -3228,8 +3228,8 @@ __global__ static void scatterUpdateCuda(const int opCode, const int numOfSubArr
|
|||
|
||||
for (Nd4jLong i = threadIdx.x; i < arrLenX; i += blockDim.x) {
|
||||
|
||||
const auto xOffset = shape::getIndexOffset(i, xShapeInfo, arrLenX);
|
||||
const auto yOffset = shape::getIndexOffset(i, yShapeInfo, arrLenY);
|
||||
const auto xOffset = shape::getIndexOffset(i, xShapeInfo);
|
||||
const auto yOffset = shape::getIndexOffset(i, yShapeInfo);
|
||||
|
||||
switch (opCode) {
|
||||
case 0:
|
||||
|
|
|
@ -246,9 +246,9 @@ void Loops::loopXYZ(const X* x, const Nd4jLong* xShapeInfo,
|
|||
auto lenPerThread = static_cast<uint>(threadsInfo.getItersPerThread(threadNum));
|
||||
PRAGMA_OMP_SIMD
|
||||
for (uint i = 0; i < lenPerThread; i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, len, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, len, canCastY);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, len, canCastZ);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = op(x[xOffset], y[yOffset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -452,7 +452,7 @@ void Loops::loopXYZ(const X* x, const Nd4jLong* xShapeInfo,
|
|||
for (uint j = 0; j < tadLen; j++)
|
||||
start = OpType::update(start, OpType::op(tad[j * tadEws], extraParams), extraParams);
|
||||
|
||||
auto zOffset = shape::indexOffset(i, zShapeInfo, castZShapeInfo, zLen, canCastZ);
|
||||
auto zOffset = shape::indexOffset(i, zShapeInfo, castZShapeInfo, canCastZ);
|
||||
z[zOffset] = OpType::postProcess(start, tadLen, extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -469,7 +469,7 @@ void Loops::loopXYZ(const X* x, const Nd4jLong* xShapeInfo,
|
|||
auto start = OpType::startingValue(tad);
|
||||
|
||||
for (uint j = 0; j < tadLen; j++) {
|
||||
auto tadOffset = shape::indexOffset(j, tadShapeInfo, castTadShapeInfo, tadLen, canCastTad);
|
||||
auto tadOffset = shape::indexOffset(j, tadShapeInfo, castTadShapeInfo, canCastTad);
|
||||
start = OpType::update(start, OpType::op(tad[tadOffset], extraParams), extraParams);
|
||||
}
|
||||
|
||||
|
@ -491,11 +491,11 @@ void Loops::loopXYZ(const X* x, const Nd4jLong* xShapeInfo,
|
|||
// auto start = OpType::startingValue(tad);
|
||||
|
||||
// for (uint j = 0; j < tadLen; j++) {
|
||||
// auto tadOffset = shape::indexOffset(j, tadShapeInfo, castTadShapeInfo, tadLen, canCastTad);
|
||||
// auto tadOffset = shape::indexOffset(j, tadShapeInfo, castTadShapeInfo, canCastTad);
|
||||
// start = OpType::update(start, OpType::op(tad[tadOffset], extraParams), extraParams);
|
||||
// }
|
||||
|
||||
// auto zOffset = shape::indexOffset(i, zShapeInfo, castZShapeInfo, zLen, canCastZ);
|
||||
// auto zOffset = shape::indexOffset(i, zShapeInfo, castZShapeInfo, canCastZ);
|
||||
// z[zOffset] = OpType::postProcess(start, tadLen, extraParams);
|
||||
// }
|
||||
// }
|
||||
|
@ -517,7 +517,7 @@ void Loops::loopXYZ(const X* x, const Nd4jLong* xShapeInfo,
|
|||
for (uint j = 0; j < tadLen; j++)
|
||||
start = OpType::update(start, OpType::op(tad[innertadOffsets[j]], extraParams), extraParams);
|
||||
|
||||
auto zOffset = shape::indexOffset(i, zShapeInfo, castZShapeInfo, zLen, canCastZ);
|
||||
auto zOffset = shape::indexOffset(i, zShapeInfo, castZShapeInfo, canCastZ);
|
||||
z[zOffset] = OpType::postProcess(start, tadLen, extraParams);
|
||||
}
|
||||
|
||||
|
@ -658,13 +658,13 @@ void Loops::loopXYZ(const X* x, const Nd4jLong* xShapeInfo,
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (uint i = 0; i < lenPerThread; i++) {
|
||||
const auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, castXShapeInfo, len, canCastX);
|
||||
const auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, castXShapeInfo, canCastX);
|
||||
zi[i * zEws] = OpType::op(x[xOffset], extraParams);
|
||||
}
|
||||
} else {
|
||||
PRAGMA_OMP_SIMD
|
||||
for (uint i = 0; i < lenPerThread; i++) {
|
||||
const auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, castXShapeInfo, len, canCastX);
|
||||
const auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, castXShapeInfo, canCastX);
|
||||
zi[i] = OpType::op(x[xOffset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -782,8 +782,8 @@ void Loops::loopXYZ(const X* x, const Nd4jLong* xShapeInfo,
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (uint i = 0; i < lenPerThread; i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, len, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, len, canCastZ);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = OpType::op(x[xOffset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -1123,7 +1123,7 @@ void Loops::loopXYZ(const X* x, const Nd4jLong* xShapeInfo,
|
|||
auto start = OpType::startingValue(xTad);
|
||||
|
||||
for (uint j = 0; j < tadLen; ++j) {
|
||||
const auto tadOffset = shape::indexOffset(j, xTadShapeInfo, castXTadShapeInfo, tadLen, canCastXTad);
|
||||
const auto tadOffset = shape::indexOffset(j, xTadShapeInfo, castXTadShapeInfo, canCastXTad);
|
||||
start = OpType::update(start, OpType::op(xTad[tadOffset], yTad[tadOffset], extraParams), extraParams);
|
||||
}
|
||||
|
||||
|
@ -1147,8 +1147,8 @@ void Loops::loopXYZ(const X* x, const Nd4jLong* xShapeInfo,
|
|||
auto start = OpType::startingValue(xTad);
|
||||
|
||||
for (uint j = 0; j < tadLen; ++j) {
|
||||
const auto xTadOffset = shape::indexOffset(j, xTadShapeInfo, castXTadShapeInfo, tadLen, canCastXTad);
|
||||
const auto yTadOffset = shape::indexOffset(j, yTadShapeInfo, castYTadShapeInfo, tadLen, canCastYTad);
|
||||
const auto xTadOffset = shape::indexOffset(j, xTadShapeInfo, castXTadShapeInfo, canCastXTad);
|
||||
const auto yTadOffset = shape::indexOffset(j, yTadShapeInfo, castYTadShapeInfo, canCastYTad);
|
||||
start = OpType::update(start, OpType::op(xTad[xTadOffset], yTad[yTadOffset], extraParams), extraParams);
|
||||
}
|
||||
|
||||
|
@ -1423,7 +1423,7 @@ void Loops::loopXYZ(const X* x, const Nd4jLong* xShapeInfo,
|
|||
auto start = startVal;
|
||||
|
||||
for (uint j = 0; j < tadLen; ++j) {
|
||||
const auto tadOffset = shape::indexOffset(j, xTadShapeInfo, castXTadShapeInfo, tadLen, canCastXTad);
|
||||
const auto tadOffset = shape::indexOffset(j, xTadShapeInfo, castXTadShapeInfo, canCastXTad);
|
||||
start = OpType::update(start, OpType::op(xTad[tadOffset], yTad[tadOffset], extraParams), extraParams);
|
||||
}
|
||||
z[zInd * zEws] = OpType::postProcess(start, tadLen, extraParams);
|
||||
|
@ -1449,8 +1449,8 @@ void Loops::loopXYZ(const X* x, const Nd4jLong* xShapeInfo,
|
|||
auto start = startVal;
|
||||
|
||||
for (uint j = 0; j < tadLen; ++j) {
|
||||
const auto xTadOffset = shape::indexOffset(j, xTadShapeInfo, castXTadShapeInfo, tadLen, canCastXTad);
|
||||
const auto yTadOffset = shape::indexOffset(j, yTadShapeInfo, castYTadShapeInfo, tadLen, canCastYTad);
|
||||
const auto xTadOffset = shape::indexOffset(j, xTadShapeInfo, castXTadShapeInfo, canCastXTad);
|
||||
const auto yTadOffset = shape::indexOffset(j, yTadShapeInfo, castYTadShapeInfo, canCastYTad);
|
||||
start = OpType::update(start, OpType::op(xTad[xTadOffset], yTad[yTadOffset], extraParams), extraParams);
|
||||
}
|
||||
|
||||
|
|
|
@ -15,7 +15,7 @@
|
|||
******************************************************************************/
|
||||
|
||||
//
|
||||
// @author iuriish@yahoo.com
|
||||
// @author Yurii Shyrma (iuriish@yahoo.com)
|
||||
//
|
||||
|
||||
#ifndef LIBND4J_SHAPEUTILS_H
|
||||
|
|
|
@ -526,7 +526,7 @@ namespace shape {
|
|||
/* int *sub = new int[leftOverIndexLen];
|
||||
shape::ind2subOrder(tadShape,index,len,sub);
|
||||
*/
|
||||
shape::index2coords(leftOverIndexLen,tadShape, index,len, sub);
|
||||
shape::index2coords(index, leftOverIndexLen,tadShape, sub);
|
||||
|
||||
|
||||
for(int i = 0; i < leftOverIndexLen; i++) {
|
||||
|
@ -609,7 +609,7 @@ namespace shape {
|
|||
if(dimensionLength > 1) {
|
||||
Nd4jLong *tad2Sub = this->tad2Sub(index, ptrManager);
|
||||
|
||||
Nd4jLong ret = shape::getOffset(0,shape::shapeOf(shapeInfo),shape::stride(shapeInfo),tad2Sub,shape::rank(shapeInfo));
|
||||
Nd4jLong ret = shape::getOffset(shapeInfo, tad2Sub);
|
||||
|
||||
if(ret < 0) {
|
||||
if (ptrManager == nullptr)
|
||||
|
@ -625,7 +625,7 @@ namespace shape {
|
|||
else {
|
||||
Nd4jLong *tad2Sub = this->tad2Sub(index, ptrManager);
|
||||
|
||||
Nd4jLong ret = shape::getOffset(0,shape::shapeOf(shapeInfo),shape::stride(shapeInfo),tad2Sub,shape::rank(shapeInfo));
|
||||
Nd4jLong ret = shape::getOffset(shapeInfo, tad2Sub);
|
||||
|
||||
if (ptrManager == nullptr)
|
||||
delete[] tad2Sub;
|
||||
|
@ -703,7 +703,7 @@ namespace shape {
|
|||
/* int *sub = new int[leftOverIndexLen];
|
||||
shape::ind2subOrder(tadShape,index,len,sub);
|
||||
*/
|
||||
shape::index2coords(leftOverIndexLen,tadShape,index,len, sub);
|
||||
shape::index2coords(index, leftOverIndexLen,tadShape, sub);
|
||||
|
||||
for(int i = 0; i < leftOverIndexLen; i++) {
|
||||
ret[leftOverIndexes[i]] = sub[i];
|
||||
|
|
|
@ -64,7 +64,7 @@ namespace nd4j {
|
|||
|
||||
|
||||
for (int i = 0; i < totalIterations; i++) {
|
||||
shape::index2coords(xRank, xShape, i, totalIterations, xCoords);
|
||||
shape::index2coords(i, xRank, xShape, xCoords);
|
||||
|
||||
Parameters params;
|
||||
for (int j = 0; j < xRank; j++) {
|
||||
|
|
|
@ -226,7 +226,7 @@ void nd4j::IndexReductionLoops<X,Z>::loopIndexReduce(X* x, Nd4jLong* xShapeInfo,
|
|||
indexValue = OpType::update(indexValue, comp, extraParams);
|
||||
}
|
||||
|
||||
auto zOffset = shape::indexOffset(i, zShapeInfo, castZShapeInfo, zLen, canCastZ);
|
||||
auto zOffset = shape::indexOffset(i, zShapeInfo, castZShapeInfo, canCastZ);
|
||||
z[zOffset] = (Z) indexValue.index;
|
||||
}
|
||||
}
|
||||
|
@ -243,7 +243,7 @@ void nd4j::IndexReductionLoops<X,Z>::loopIndexReduce(X* x, Nd4jLong* xShapeInfo,
|
|||
auto indexValue = OpType::startingIndexValue(tad);
|
||||
|
||||
for (uint j = 0; j < tadLen; j++) {
|
||||
auto tadOffset = shape::indexOffset(j, tadShapeInfo, castTadShapeInfo, tadLen, canCastTad);
|
||||
auto tadOffset = shape::indexOffset(j, tadShapeInfo, castTadShapeInfo, canCastTad);
|
||||
functions::indexreduce::IndexValue<X> comp(tad[tadOffset], j);
|
||||
indexValue = OpType::update(indexValue, comp, extraParams);
|
||||
}
|
||||
|
@ -266,12 +266,12 @@ void nd4j::IndexReductionLoops<X,Z>::loopIndexReduce(X* x, Nd4jLong* xShapeInfo,
|
|||
auto indexValue = OpType::startingIndexValue(tad);
|
||||
|
||||
for (uint j = 0; j < tadLen; j++) {
|
||||
auto tadOffset = shape::indexOffset(j, tadShapeInfo, castTadShapeInfo, tadLen, canCastTad);
|
||||
auto tadOffset = shape::indexOffset(j, tadShapeInfo, castTadShapeInfo, canCastTad);
|
||||
functions::indexreduce::IndexValue<X> comp(tad[tadOffset], j);
|
||||
indexValue = OpType::update(indexValue, comp, extraParams);
|
||||
}
|
||||
|
||||
auto zOffset = shape::indexOffset(i, zShapeInfo, castZShapeInfo, zLen, canCastZ);
|
||||
auto zOffset = shape::indexOffset(i, zShapeInfo, castZShapeInfo, canCastZ);
|
||||
z[zOffset] = (Z) indexValue.index;
|
||||
}
|
||||
}
|
||||
|
|
|
@ -15,7 +15,7 @@
|
|||
******************************************************************************/
|
||||
|
||||
//
|
||||
// @author Yurii Shyrma
|
||||
// @author Yurii Shyrma (iuriish@yahoo.com)
|
||||
//
|
||||
|
||||
#include <algorithm>
|
||||
|
@ -931,7 +931,7 @@ void ShapeUtils::evalIdxRangesForSubArr(const Nd4jLong subArrIdx, const Nd4jLon
|
|||
for(int i = 0; i < subArrRank; ++i)
|
||||
shapeOfSubArr[i] = shapeInfo[dimsToExclude[i] + 1];
|
||||
|
||||
shape::index2coords(subArrRank, shapeOfSubArr.data(), subArrIdx, indexes.data());
|
||||
shape::index2coords(subArrIdx, subArrRank, shapeOfSubArr.data(), indexes.data());
|
||||
|
||||
memset(idxRanges, 0, 2 * rank * sizeof(Nd4jLong));
|
||||
|
||||
|
|
|
@ -887,7 +887,7 @@ namespace shape {
|
|||
* @param indices the indices to iterate over
|
||||
* @return the double at the specified index
|
||||
*/
|
||||
ND4J_EXPORT _CUDA_HD Nd4jLong getOffset(Nd4jLong baseOffset, const Nd4jLong *shape, const Nd4jLong *stride, const Nd4jLong *indices, const int rank);
|
||||
|
||||
ND4J_EXPORT _CUDA_HD Nd4jLong getOffset(const Nd4jLong *shapeInfo, const Nd4jLong *indices, Nd4jLong baseOffset = 0);
|
||||
ND4J_EXPORT Nd4jLong getOffset(const Nd4jLong *shapeInfo, const std::vector<uint>& indices);
|
||||
|
||||
|
@ -897,20 +897,19 @@ namespace shape {
|
|||
|
||||
/**
|
||||
* Convert a linear index to the corresponding coordinates
|
||||
* for example if shape is {2, 4}, then index 5 corresponds to following coordinates
|
||||
* -> [1, 1] in case of c order
|
||||
* -> [1, 2] in case of f order
|
||||
* for example if shape is {2, 4}, then index 5 corresponds to coordinates [1, 1]
|
||||
*/
|
||||
ND4J_EXPORT _CUDA_HD void index2coords(const int rank, const Nd4jLong *shape, Nd4jLong index, Nd4jLong arrLen, Nd4jLong *coords, const char order = 'c');
|
||||
ND4J_EXPORT _CUDA_HD void index2coords(const int rank, const Nd4jLong *shape, Nd4jLong index, Nd4jLong *coords, const char order = 'c');
|
||||
ND4J_EXPORT _CUDA_HD void index2coords(Nd4jLong index, const Nd4jLong *shapeInfo, Nd4jLong *coords);
|
||||
ND4J_EXPORT _CUDA_HD void index2coords(Nd4jLong index, const int rank, const Nd4jLong *shape, Nd4jLong *coords);
|
||||
|
||||
|
||||
|
||||
/**
|
||||
* Convert coordinates to the corresponding linear index (sequence number in other words)
|
||||
* for example if shape is {2, 4}, then:
|
||||
* in case of c order and coordinates [1, 1] index 5 is returned
|
||||
* in case of f order and coordinates [1, 2] index 5 is returned
|
||||
* for example if shape is {2, 4} and coordinates [1, 1] then index 5 is returned
|
||||
*/
|
||||
ND4J_EXPORT _CUDA_HD Nd4jLong coords2index(const int rank, const Nd4jLong *shape, const Nd4jLong *coords, const char order = 'c');
|
||||
ND4J_EXPORT _CUDA_HD Nd4jLong coords2index(const Nd4jLong *shapeInfo, const Nd4jLong *coords);
|
||||
ND4J_EXPORT _CUDA_HD Nd4jLong coords2index(const int rank, const Nd4jLong *shape, const Nd4jLong *coords);
|
||||
|
||||
/**
|
||||
* increment n-dimensional array by one iteration by changing coord appropriately
|
||||
|
@ -921,24 +920,10 @@ namespace shape {
|
|||
*/
|
||||
|
||||
/* calculates an array buffer offset for given "index" using following formula: offset = coord_0*stride_0 + coord_1*stride_1 + ... + coord_{rank-1}*stride_{rank-1}
|
||||
* arrLen - array length
|
||||
*/
|
||||
ND4J_EXPORT _CUDA_HD uint getIndexOffset(uint index, const uint *shapeInfo, uint arrLen);
|
||||
ND4J_EXPORT _CUDA_HD Nd4jLong getIndexOffset(Nd4jLong index, const Nd4jLong *shapeInfo, Nd4jLong arrLen);
|
||||
ND4J_EXPORT _CUDA_HD Nd4jLong getIndexOrderOffset(Nd4jLong index, const Nd4jLong *shapeInfo, Nd4jLong arrLen, const char order);
|
||||
ND4J_EXPORT _CUDA_HD Nd4jLong indexOffset(Nd4jLong index, const Nd4jLong* lShapeInfo, const uint* uShapeInfo, Nd4jLong arrLen, const bool useUnsigned);
|
||||
|
||||
/**
|
||||
* Compute the real linear indices for the given shape and stride
|
||||
*/
|
||||
ND4J_EXPORT _CUDA_HD Nd4jLong *computeIndices(int rank, Nd4jLong *shape, Nd4jLong *stride);
|
||||
|
||||
/**
|
||||
* Compute the real linear indices for the
|
||||
* given shape buffer. Shape,stride and rank are derived
|
||||
* from the buffer
|
||||
*/
|
||||
ND4J_EXPORT _CUDA_HD Nd4jLong *computeIndices( Nd4jLong *shapeBuffer);
|
||||
ND4J_EXPORT _CUDA_HD uint getIndexOffset(uint index, const uint *shapeInfo);
|
||||
ND4J_EXPORT _CUDA_HD Nd4jLong getIndexOffset(Nd4jLong index, const Nd4jLong *shapeInfo);
|
||||
ND4J_EXPORT _CUDA_HD Nd4jLong indexOffset(Nd4jLong index, const Nd4jLong* lShapeInfo, const uint* uShapeInfo, const bool useUnsigned);
|
||||
|
||||
ND4J_EXPORT _CUDA_HD void printShapeInfo(Nd4jLong *shapeInfo);
|
||||
|
||||
|
@ -1749,52 +1734,29 @@ __device__ INLINEDEF Nd4jLong *cuMalloc(Nd4jLong *buffer, long size) {
|
|||
return output;
|
||||
}
|
||||
|
||||
/**
|
||||
* Compute the real linear indices for the given shape and stride
|
||||
*/
|
||||
INLINEDEF _CUDA_HD Nd4jLong *computeIndices(int rank, Nd4jLong *shape, Nd4jLong *stride) {
|
||||
Nd4jLong length = shape::prodLong(shape,rank);
|
||||
|
||||
traceNew(13);
|
||||
|
||||
Nd4jLong *ret = new Nd4jLong[length];
|
||||
for(int i = 0; i < length; i++) {
|
||||
Nd4jLong *idx = new Nd4jLong[rank];
|
||||
shape::index2coords(rank, shape, i, idx, 'f');
|
||||
ret[i] = shape::getOffset(0, shape, stride, idx, rank);
|
||||
delete[] idx;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* Compute the real linear indices for the given shape and stride
|
||||
*/
|
||||
INLINEDEF _CUDA_HD Nd4jLong *computeIndices(Nd4jLong *shapeBuffer) {
|
||||
return computeIndices(shape::rank(shapeBuffer),shape::shapeOf(shapeBuffer),shape::stride(shapeBuffer));
|
||||
}
|
||||
|
||||
|
||||
//////////////////////////////////////////////////////////////////////
|
||||
INLINEDEF _CUDA_HD Nd4jLong coords2index(const int rank, const Nd4jLong *shape, const Nd4jLong *indices, const char order) {
|
||||
INLINEDEF _CUDA_HD Nd4jLong coords2index(const Nd4jLong *shapeInfo, const Nd4jLong *indices) {
|
||||
|
||||
Nd4jLong index, shift = 1;;
|
||||
|
||||
if(order == 'c') {
|
||||
index = indices[shapeInfo[0] - 1];
|
||||
for(uint i = shapeInfo[0]; i > 1; --i) {
|
||||
shift *= shapeInfo[i];
|
||||
index += shift * indices[i - 2];
|
||||
}
|
||||
|
||||
return index;
|
||||
}
|
||||
|
||||
//////////////////////////////////////////////////////////////////////
|
||||
INLINEDEF _CUDA_HD Nd4jLong coords2index(const int rank, const Nd4jLong *shape, const Nd4jLong *indices) {
|
||||
|
||||
Nd4jLong index, shift = 1;;
|
||||
|
||||
index = indices[rank - 1];
|
||||
for(int i = rank - 2; i >= 0; --i) {
|
||||
shift *= shape[i + 1];
|
||||
index += shift * indices[i];
|
||||
}
|
||||
}
|
||||
else {
|
||||
index = indices[0];
|
||||
for(int i = 1; i < rank; ++i) {
|
||||
shift *= shape[i - 1];
|
||||
index += shift * indices[i];
|
||||
}
|
||||
for(uint i = rank - 1; i >= 1; --i) {
|
||||
shift *= shape[i];
|
||||
index += shift * indices[i - 1];
|
||||
}
|
||||
|
||||
return index;
|
||||
|
@ -1809,83 +1771,108 @@ template <typename T>
|
|||
}
|
||||
|
||||
|
||||
// //////////////////////////////////////////////////////////////////////
|
||||
// INLINEDEF _CUDA_HD Nd4jLong getIndexOffset(Nd4jLong index, const Nd4jLong *shapeInfo, Nd4jLong arrLen) {
|
||||
|
||||
// const Nd4jLong ews = shapeInfo[shapeInfo[0] + shapeInfo[0] + 2];
|
||||
|
||||
// if(ews > 0 && order(shapeInfo) == 'c')
|
||||
// if (ews == 1)
|
||||
// return index;
|
||||
// else
|
||||
// return ews * index;
|
||||
|
||||
// Nd4jLong offset = 0;
|
||||
// Nd4jLong rank = shapeInfo[0];
|
||||
// for(int i = 1; i <= shapeInfo[0]; ++i) {
|
||||
// arrLen /= shapeInfo[i];
|
||||
// if(arrLen > 0 && shapeInfo[i] > 1) {
|
||||
// offset += (index / arrLen) * shapeInfo[i + rank];
|
||||
// index %= arrLen;
|
||||
// }
|
||||
// }
|
||||
// return offset;
|
||||
// }
|
||||
|
||||
// INLINEDEF _CUDA_HD uint getIndexOffset(uint index, const uint *shapeInfo, uint arrLen) {
|
||||
|
||||
// const uint rank = shapeInfo[0];
|
||||
// const uint ews = shapeInfo[rank + rank + 2];
|
||||
|
||||
// if(ews > 0 && shapeInfo[rank + rank + 3] == 99)
|
||||
// if (ews == 1)
|
||||
// return index;
|
||||
// else
|
||||
// return ews * index;
|
||||
|
||||
// uint offset = 0;
|
||||
|
||||
// for(uint i = 1; i <= rank; ++i) {
|
||||
// arrLen /= shapeInfo[i];
|
||||
// if(arrLen > 0 && shapeInfo[i] > 1) {
|
||||
// offset += (index / arrLen) * shapeInfo[i + rank];
|
||||
// index %= arrLen;
|
||||
// }
|
||||
// }
|
||||
// return offset;
|
||||
// }
|
||||
|
||||
//////////////////////////////////////////////////////////////////////
|
||||
INLINEDEF _CUDA_HD Nd4jLong getIndexOffset(Nd4jLong index, const Nd4jLong *shapeInfo, Nd4jLong arrLen) {
|
||||
INLINEDEF _CUDA_HD Nd4jLong getIndexOffset(Nd4jLong index, const Nd4jLong *shapeInfo) {
|
||||
|
||||
const Nd4jLong ews = shapeInfo[shapeInfo[0] + shapeInfo[0] + 2];
|
||||
if (shapeInfo[2 * shapeInfo[0] + 3] == 99) {
|
||||
|
||||
if(ews > 0 && order(shapeInfo) == 'c')
|
||||
const Nd4jLong ews = shapeInfo[2 * shapeInfo[0] + 2];
|
||||
if (ews == 1)
|
||||
return index;
|
||||
else
|
||||
else if(ews > 1)
|
||||
return ews * index;
|
||||
}
|
||||
|
||||
Nd4jLong offset = 0;
|
||||
Nd4jLong rank = shapeInfo[0];
|
||||
for(int i = 1; i <= shapeInfo[0]; ++i) {
|
||||
arrLen /= shapeInfo[i];
|
||||
if(arrLen > 0 && shapeInfo[i] > 1) {
|
||||
offset += (index / arrLen) * shapeInfo[i + rank];
|
||||
index %= arrLen;
|
||||
}
|
||||
|
||||
for(uint i = shapeInfo[0]; i > 1; --i) {
|
||||
offset += (index % shapeInfo[i]) * shapeInfo[i + shapeInfo[0]];
|
||||
index /= shapeInfo[i];
|
||||
}
|
||||
|
||||
offset += index * shapeInfo[1 + shapeInfo[0]]; // last iteration
|
||||
|
||||
return offset;
|
||||
}
|
||||
|
||||
INLINEDEF _CUDA_HD uint getIndexOffset(uint index, const uint *shapeInfo, uint arrLen) {
|
||||
//////////////////////////////////////////////////////////////////////
|
||||
INLINEDEF _CUDA_HD uint getIndexOffset(uint index, const uint *shapeInfo) {
|
||||
|
||||
const uint rank = shapeInfo[0];
|
||||
const uint ews = shapeInfo[rank + rank + 2];
|
||||
if (shapeInfo[2 * shapeInfo[0] + 3] == 99) {
|
||||
|
||||
if(ews > 0 && shapeInfo[rank + rank + 3] == 99)
|
||||
const Nd4jLong ews = shapeInfo[2 * shapeInfo[0] + 2];
|
||||
if (ews == 1)
|
||||
return index;
|
||||
else
|
||||
else if(ews > 1)
|
||||
return ews * index;
|
||||
}
|
||||
|
||||
uint offset = 0;
|
||||
|
||||
for(uint i = 1; i <= rank; ++i) {
|
||||
arrLen /= shapeInfo[i];
|
||||
if(arrLen > 0 && shapeInfo[i] > 1) {
|
||||
offset += (index / arrLen) * shapeInfo[i + rank];
|
||||
index %= arrLen;
|
||||
}
|
||||
for(uint i = shapeInfo[0]; i > 1; --i) {
|
||||
offset += (index % shapeInfo[i]) * shapeInfo[i + shapeInfo[0]];
|
||||
index /= shapeInfo[i];
|
||||
}
|
||||
|
||||
offset += index * shapeInfo[1 + shapeInfo[0]]; // last iteration
|
||||
|
||||
return offset;
|
||||
}
|
||||
|
||||
INLINEDEF _CUDA_HD Nd4jLong indexOffset(Nd4jLong index, const Nd4jLong* lShapeInfo, const uint* uShapeInfo, Nd4jLong arrLen, const bool useUnsigned) {
|
||||
|
||||
if(useUnsigned)
|
||||
return getIndexOffset(static_cast<uint>(index), uShapeInfo, static_cast<uint>(arrLen));
|
||||
|
||||
return getIndexOffset(index, lShapeInfo, arrLen);
|
||||
}
|
||||
|
||||
//////////////////////////////////////////////////////////////////////
|
||||
INLINEDEF _CUDA_HD Nd4jLong getIndexOrderOffset(Nd4jLong index, const Nd4jLong *shapeInfo, Nd4jLong arrLen, const char order) {
|
||||
INLINEDEF _CUDA_HD Nd4jLong indexOffset(Nd4jLong index, const Nd4jLong* lShapeInfo, const uint* uShapeInfo, const bool useUnsigned) {
|
||||
|
||||
Nd4jLong offset = 0;
|
||||
if(order == 'c') {
|
||||
for(int i = 1; i <= *shapeInfo; ++i) {
|
||||
arrLen /= shapeInfo[i];
|
||||
if(arrLen > 0 && shapeInfo[i] > 1) {
|
||||
offset += (index / arrLen) * shapeInfo[i + *shapeInfo];
|
||||
index %= arrLen;
|
||||
}
|
||||
}
|
||||
}
|
||||
else {
|
||||
for(int i = *shapeInfo; i >= 1 ; --i) {
|
||||
arrLen /= shapeInfo[i];
|
||||
if(arrLen > 0 && shapeInfo[i] > 1) {
|
||||
offset += (index / arrLen) * shapeInfo[i + *shapeInfo];
|
||||
index %= arrLen;
|
||||
}
|
||||
}
|
||||
}
|
||||
return offset;
|
||||
if(useUnsigned)
|
||||
return getIndexOffset(static_cast<uint>(index), uShapeInfo);
|
||||
|
||||
return getIndexOffset(index, lShapeInfo);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -2394,7 +2381,7 @@ template <typename T>
|
|||
auto indices = new Nd4jLong[rank];
|
||||
memset((void *) indices,0,rank * sizeof(Nd4jLong));
|
||||
indices[0] = sliceIdx;
|
||||
Nd4jLong offset = shape::getOffset(0,newShape,newStride,indices,rank);
|
||||
Nd4jLong offset = shape::getOffset(newShapeBuffer, indices);
|
||||
newShapeBuffer[shape::shapeInfoLength(newRank) - 3] = offset;
|
||||
|
||||
// set current order and ews
|
||||
|
@ -3201,27 +3188,27 @@ INLINEDEF _CUDA_HD bool haveSameShapeAndStrides(const Nd4jLong *shapeInfo1, cons
|
|||
* @param indices the indices to iterate over
|
||||
* @return the double at the specified index
|
||||
*/
|
||||
INLINEDEF _CUDA_HD Nd4jLong getOffset(Nd4jLong baseOffset, const Nd4jLong *shape, const Nd4jLong *stride, const Nd4jLong *indices, const int rank) {
|
||||
|
||||
//////////////////////////////////////////////////////////////////////////
|
||||
INLINEDEF _CUDA_HD Nd4jLong getOffset(const Nd4jLong *shapeInfo, const Nd4jLong *indices, Nd4jLong baseOffset) {
|
||||
|
||||
Nd4jLong offset = baseOffset;
|
||||
for(int i = 0; i < rank; i++) {
|
||||
if(shape[i] != 1)
|
||||
offset += indices[i] * stride[i];
|
||||
}
|
||||
|
||||
for(uint i = 1; i <= shapeInfo[0]; ++i)
|
||||
if(shapeInfo[i] != 1)
|
||||
offset += indices[i - 1] * shapeInfo[shapeInfo[0] + i];
|
||||
|
||||
return offset;
|
||||
}
|
||||
|
||||
INLINEDEF _CUDA_HD Nd4jLong getOffset(const Nd4jLong *shapeInfo, const Nd4jLong *indices, Nd4jLong baseOffset) {
|
||||
return shape::getOffset(baseOffset, shape::shapeOf(const_cast<Nd4jLong*>(shapeInfo)), shape::stride(const_cast<Nd4jLong*>(shapeInfo)), indices, shapeInfo[0]);
|
||||
}
|
||||
|
||||
//////////////////////////////////////////////////////////////////////////
|
||||
INLINEDEF Nd4jLong getOffset(const Nd4jLong *shapeInfo, const std::vector<uint>& indices) {
|
||||
|
||||
Nd4jLong offset = 0;
|
||||
|
||||
for(uint i = 0; i < shapeInfo[0]; ++i)
|
||||
if(shapeInfo[i + 1] != 1)
|
||||
offset += indices[i] * shapeInfo[shapeInfo[0] + i + 1];
|
||||
for(uint i = 1; i <= shapeInfo[0]; ++i)
|
||||
if(shapeInfo[i] != 1)
|
||||
offset += indices[i - 1] * shapeInfo[shapeInfo[0] + i];
|
||||
|
||||
return offset;
|
||||
}
|
||||
|
@ -4209,24 +4196,24 @@ INLINEDEF _CUDA_HD void maxIndToMinInd(Nd4jLong* maxIdxs, Nd4jLong* minIdxs, con
|
|||
INLINEDEF _CUDA_HD Nd4jLong subArrayIndex(const Nd4jLong maxIdx, const Nd4jLong* maxShapeInfo, const Nd4jLong* minShapeInfo, const int* dimsToExclude, const int dimsLen) {
|
||||
|
||||
Nd4jLong maxIdxs[MAX_RANK];
|
||||
shape::index2coords(shape::rank(maxShapeInfo), const_cast<Nd4jLong *>(maxShapeInfo)+1, const_cast<Nd4jLong&>(maxIdx), maxIdxs, shape::order(maxShapeInfo));
|
||||
shape::index2coords(const_cast<Nd4jLong&>(maxIdx), maxShapeInfo, maxIdxs);
|
||||
|
||||
Nd4jLong minIdxs[MAX_RANK];
|
||||
maxIndToMinInd(maxIdxs, minIdxs, maxShapeInfo, minShapeInfo, dimsToExclude, dimsLen);
|
||||
|
||||
return coords2index(shape::rank(minShapeInfo), minShapeInfo + 1, minIdxs);
|
||||
return shape::coords2index(minShapeInfo, minIdxs);
|
||||
}
|
||||
|
||||
//////////////////////////////////////////////////////////////////////
|
||||
INLINEDEF _CUDA_HD Nd4jLong subArrayOffset(const Nd4jLong maxIdx, const Nd4jLong* maxShapeInfo, const Nd4jLong* minShapeInfo, const int* dimsToExclude, const int dimsLen) {
|
||||
|
||||
Nd4jLong maxIdxs[MAX_RANK];
|
||||
shape::index2coords(shape::rank(maxShapeInfo), const_cast<Nd4jLong *>(maxShapeInfo)+1, const_cast<Nd4jLong&>(maxIdx), maxIdxs, shape::order(maxShapeInfo));
|
||||
shape::index2coords(const_cast<Nd4jLong&>(maxIdx), maxShapeInfo, maxIdxs);
|
||||
|
||||
Nd4jLong minIdxs[MAX_RANK];
|
||||
maxIndToMinInd(maxIdxs, minIdxs, maxShapeInfo, minShapeInfo, dimsToExclude, dimsLen);
|
||||
|
||||
return getOffset(0, minShapeInfo + 1, minShapeInfo + shape::rank(minShapeInfo) + 1, minIdxs, shape::rank(minShapeInfo));
|
||||
return getOffset(minShapeInfo, minIdxs);
|
||||
}
|
||||
|
||||
//////////////////////////////////////////////////////////////////////
|
||||
|
@ -4246,7 +4233,7 @@ INLINEDEF _CUDA_HD void maxIndToMinInd(Nd4jLong* maxIdxs, Nd4jLong* minIdxs, con
|
|||
int N, minI, maxI;
|
||||
|
||||
// calculate min per-dim-indices which corresponds to absolute minIdx index
|
||||
shape::index2coords(rankMin, minShapeInfo + 1, minIdx, indices, order(minShapeInfo));
|
||||
shape::index2coords(minIdx, minShapeInfo, indices);
|
||||
|
||||
// transform storage indices to contain per-dim max indices, purpose - memory saving
|
||||
// fill increment array as well
|
||||
|
@ -4277,7 +4264,7 @@ INLINEDEF _CUDA_HD void maxIndToMinInd(Nd4jLong* maxIdxs, Nd4jLong* minIdxs, con
|
|||
maxI = rankMax-1;
|
||||
N = 0;
|
||||
int step;
|
||||
maxOffsets[N++] = shape::getOffset(0, maxShapeInfo + 1, maxShapeInfo + rankMax + 1, indices, rankMax);
|
||||
maxOffsets[N++] = shape::getOffset(maxShapeInfo, indices);
|
||||
|
||||
// nested loops - producing of absolute indices for max array
|
||||
while(maxI >= 0) {
|
||||
|
@ -4290,7 +4277,7 @@ INLINEDEF _CUDA_HD void maxIndToMinInd(Nd4jLong* maxIdxs, Nd4jLong* minIdxs, con
|
|||
step = -1;
|
||||
}
|
||||
else {
|
||||
maxOffsets[N++] = shape::getOffset(0, maxShapeInfo + 1, maxShapeInfo + rankMax + 1, indices, rankMax);
|
||||
maxOffsets[N++] = shape::getOffset(maxShapeInfo, indices);
|
||||
step = rankMax - 1 - maxI;
|
||||
}
|
||||
}
|
||||
|
@ -4322,7 +4309,7 @@ INLINEDEF _CUDA_HD void maxIndToMinInd(Nd4jLong* maxIdxs, Nd4jLong* minIdxs, con
|
|||
int N, minI, maxI;
|
||||
|
||||
// calculate min per-dim-indices which corresponds to absolute minIdx index
|
||||
shape::index2coords(rankMin, minShapeInfo + 1, minIdx, indices, order(minShapeInfo));
|
||||
shape::index2coords(minIdx, minShapeInfo, indices);
|
||||
|
||||
// transform storage indices to contain per-dim max indices, purpose - memory saving
|
||||
// fill increment array as well
|
||||
|
@ -4353,7 +4340,7 @@ INLINEDEF _CUDA_HD void maxIndToMinInd(Nd4jLong* maxIdxs, Nd4jLong* minIdxs, con
|
|||
maxI = rankMax-1;
|
||||
N = 0;
|
||||
int step;
|
||||
maxIdxs[N++] = coords2index(rankMax, maxShapeInfo + 1, indices);
|
||||
maxIdxs[N++] = shape::coords2index(maxShapeInfo, indices);
|
||||
|
||||
// nested loops - producing of absolute indices for max array
|
||||
while(maxI >= 0) {
|
||||
|
@ -4366,7 +4353,7 @@ INLINEDEF _CUDA_HD void maxIndToMinInd(Nd4jLong* maxIdxs, Nd4jLong* minIdxs, con
|
|||
step = -1;
|
||||
}
|
||||
else {
|
||||
maxIdxs[N++] = coords2index(rankMax, maxShapeInfo + 1, indices);
|
||||
maxIdxs[N++] = shape::coords2index(maxShapeInfo, indices);
|
||||
step = rankMax - 1 - maxI;
|
||||
}
|
||||
}
|
||||
|
@ -4699,37 +4686,23 @@ INLINEDEF _CUDA_HD void calcSubArrShapeAndOffsets(const Nd4jLong* wholeShapeInfo
|
|||
}
|
||||
|
||||
//////////////////////////////////////////////////////////////////////
|
||||
INLINEDEF void _CUDA_HD index2coords(const int rank, const Nd4jLong *shape, Nd4jLong index, Nd4jLong *coords, const char order) {
|
||||
Nd4jLong arrLen = shape::prodLong(shape, rank);
|
||||
shape::index2coords(rank, shape, index, arrLen, coords, order);
|
||||
INLINEDEF void _CUDA_HD index2coords(Nd4jLong index, const Nd4jLong *shapeInfo, Nd4jLong *coords) {
|
||||
|
||||
for(uint i = shapeInfo[0]; i > 1; --i) {
|
||||
coords[i - 1] = index % shapeInfo[i];
|
||||
index /= shapeInfo[i];
|
||||
}
|
||||
coords[0] = index; // last iteration
|
||||
}
|
||||
|
||||
INLINEDEF void _CUDA_HD index2coords(const int rank, const Nd4jLong *shape, Nd4jLong index, Nd4jLong arrLen, Nd4jLong *coords, const char order) {
|
||||
//////////////////////////////////////////////////////////////////////
|
||||
INLINEDEF void _CUDA_HD index2coords(Nd4jLong index, const int rank, const Nd4jLong *shape, Nd4jLong *coords) {
|
||||
|
||||
if(order == 'c') {
|
||||
|
||||
for(int i = 0; i < rank; i++) {
|
||||
arrLen /= shape[i];
|
||||
if(arrLen > 0 && shape[i] > 1) {
|
||||
coords[i] = index / arrLen;
|
||||
index %= arrLen;
|
||||
}
|
||||
else
|
||||
coords[i] = 0;
|
||||
}
|
||||
}
|
||||
else {
|
||||
|
||||
for(int i = rank - 1; i >= 0; i--) {
|
||||
arrLen /= shape[i];
|
||||
if(arrLen > 0 && shape[i] > 1) {
|
||||
coords[i] = index / arrLen;
|
||||
index %= arrLen;
|
||||
}
|
||||
else
|
||||
coords[i] = 0;
|
||||
}
|
||||
for(uint i = rank - 1; i > 0; --i) {
|
||||
coords[i] = index % shape[i];
|
||||
index /= shape[i];
|
||||
}
|
||||
coords[0] = index; // last iteration
|
||||
}
|
||||
|
||||
//////////////////////////////////////////////////////////////////////
|
||||
|
|
|
@ -176,7 +176,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (unsigned int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
oZ[offset] = OpType::op(oX[offset], y[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -196,8 +196,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, lenZ, canCastZ);
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, canCastZ);
|
||||
oZ[zOffset] = OpType::op(oX[offset], y[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -217,8 +217,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, lenY, canCastY);
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
oZ[offset] = OpType::op(oX[offset], y[yOffset]);
|
||||
}
|
||||
}
|
||||
|
@ -238,8 +238,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto xOffset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto offset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, lenY, canCastY);
|
||||
auto xOffset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
auto offset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
oZ[offset] = OpType::op(oX[xOffset], y[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -261,9 +261,9 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto xOffset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, lenY, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, lenZ, canCastZ);
|
||||
auto xOffset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, canCastZ);
|
||||
oZ[zOffset] = OpType::op(oX[xOffset], y[yOffset]);
|
||||
}
|
||||
}
|
||||
|
@ -362,7 +362,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (unsigned int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
oZ[offset] = OpType::op(x[offset], oY[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -382,8 +382,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, lenZ, canCastZ);
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, canCastZ);
|
||||
oZ[zOffset] = OpType::op(x[offset], oY[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -403,8 +403,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto xOffset = shape::indexOffset(f, yShapeInfo, xShapeInfoCast, lenX, canCastX);
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
auto xOffset = shape::indexOffset(f, yShapeInfo, xShapeInfoCast, canCastX);
|
||||
oZ[offset] = OpType::op(x[xOffset], oY[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -424,8 +424,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto yOffset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto offset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, lenX, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
auto offset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
oZ[offset] = OpType::op(x[offset], oY[yOffset]);
|
||||
}
|
||||
}
|
||||
|
@ -447,9 +447,9 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto xOffset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, lenX, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, lenZ, canCastZ);
|
||||
auto xOffset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, canCastZ);
|
||||
oZ[zOffset] = OpType::op(x[xOffset], oY[yOffset]);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -178,7 +178,7 @@ namespace functions {
|
|||
// all this stuff already happens within thread
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
oZ[offset] = OpType::op(oX[offset], y[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -198,8 +198,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, lenZ, canCastZ);
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, canCastZ);
|
||||
oZ[zOffset] = OpType::op(oX[offset], y[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -219,8 +219,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, lenY, canCastY);
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
oZ[offset] = OpType::op(oX[offset], y[yOffset]);
|
||||
}
|
||||
}
|
||||
|
@ -240,8 +240,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto xOffset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto offset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, lenY, canCastY);
|
||||
auto xOffset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
auto offset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
oZ[offset] = OpType::op(oX[xOffset], y[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -263,9 +263,9 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto xOffset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, lenY, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, lenZ, canCastZ);
|
||||
auto xOffset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, canCastZ);
|
||||
oZ[zOffset] = OpType::op(oX[xOffset], y[yOffset]);
|
||||
}
|
||||
}
|
||||
|
@ -365,7 +365,7 @@ namespace functions {
|
|||
// all this stuff already happens within thread
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
oZ[offset] = OpType::op(x[offset], oY[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -385,8 +385,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, lenZ, canCastZ);
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, canCastZ);
|
||||
oZ[zOffset] = OpType::op(x[offset], oY[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -406,8 +406,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto xOffset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, lenX, canCastX);
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
auto xOffset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
oZ[offset] = OpType::op(x[xOffset], oY[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -427,8 +427,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto yOffset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto offset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, lenX, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
auto offset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
oZ[offset] = OpType::op(x[offset], oY[yOffset]);
|
||||
}
|
||||
}
|
||||
|
@ -450,9 +450,9 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto xOffset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, lenX, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, lenZ, canCastZ);
|
||||
auto xOffset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, canCastZ);
|
||||
oZ[zOffset] = OpType::op(x[xOffset], oY[yOffset]);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -178,7 +178,7 @@ namespace functions {
|
|||
// all this stuff already happens within thread
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
oZ[offset] = OpType::op(oX[offset], y[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -198,8 +198,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, lenZ, canCastZ);
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, canCastZ);
|
||||
oZ[zOffset] = OpType::op(oX[offset], y[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -219,8 +219,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, lenY, canCastY);
|
||||
auto offset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
oZ[offset] = OpType::op(oX[offset], y[yOffset]);
|
||||
}
|
||||
}
|
||||
|
@ -240,8 +240,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto xOffset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto offset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, lenY, canCastY);
|
||||
auto xOffset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
auto offset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
oZ[offset] = OpType::op(oX[xOffset], y[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -263,9 +263,9 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto xOffset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, lenY, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, lenZ, canCastZ);
|
||||
auto xOffset = shape::indexOffset(f, xTadShapeShapeInfo, tadShapeShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, canCastZ);
|
||||
oZ[zOffset] = OpType::op(oX[xOffset], y[yOffset]);
|
||||
}
|
||||
}
|
||||
|
@ -365,7 +365,7 @@ namespace functions {
|
|||
// all this stuff already happens within thread
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
oZ[offset] = OpType::op(x[offset], oY[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -385,8 +385,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, lenZ, canCastZ);
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, canCastZ);
|
||||
oZ[zOffset] = OpType::op(x[offset], oY[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -406,8 +406,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto xOffset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, lenX, canCastX);
|
||||
auto offset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
auto xOffset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
oZ[offset] = OpType::op(x[xOffset], oY[offset]);
|
||||
}
|
||||
}
|
||||
|
@ -427,8 +427,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto yOffset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto offset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, lenX, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
auto offset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
oZ[offset] = OpType::op(x[offset], oY[yOffset]);
|
||||
}
|
||||
}
|
||||
|
@ -450,9 +450,9 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (int f = 0; f < tadLength; f++) {
|
||||
auto xOffset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, lenX, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, lenZ, canCastZ);
|
||||
auto xOffset = shape::indexOffset(f, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(f, yTadShapeShapeInfo, tadShapeShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(f, zTadShapeInfo, tadShapeInfoZCast, canCastZ);
|
||||
oZ[zOffset] = OpType::op(x[xOffset], oY[yOffset]);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -92,7 +92,7 @@ Nd4jLong IndexReduce<X, Y>::execScalar(void *vx, Nd4jLong *xShapeInfo, void *vex
|
|||
auto ulen = info.getItersPerThread(threadNum);
|
||||
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(threadOffset + i, xShapeInfo, xShapeInfoCast, len, canCastX);
|
||||
auto offset = shape::indexOffset(threadOffset + i, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
IndexValue<X> curr(x[offset], threadOffset + i);
|
||||
local = OpType::update(local, curr, extraParams);
|
||||
}
|
||||
|
|
|
@ -166,7 +166,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for(unsigned int i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
z[offset] = OpType::op(x[offset], y[0], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -183,8 +183,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for(unsigned int i = 0; i < ulen; i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, n, canCastZ);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = OpType::op(x[xOffset], y[0], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -218,7 +218,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (unsigned int i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
z[offset] = OpType::op(x[offset], y[offset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -238,8 +238,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (unsigned int i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, n, canCastZ);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = OpType::op(x[offset], y[offset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -259,8 +259,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (unsigned int i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, n, canCastY);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
z[offset] = OpType::op(x[offset], y[yOffset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -280,8 +280,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (unsigned int i = 0; i < ulen; i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, n, canCastY);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
z[offset] = OpType::op(x[xOffset], y[offset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -303,9 +303,9 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (unsigned int i = 0; i < ulen; i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, n, canCastY);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, n, canCastZ);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = OpType::op(x[xOffset], y[yOffset], extraParams);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -158,7 +158,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for(Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
z[offset] = OpType::op(x[offset], y[0], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -176,8 +176,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for(Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, n, canCastZ);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = OpType::op(x[xOffset], y[0], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -209,7 +209,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
z[offset] = OpType::op(x[offset], y[offset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -229,8 +229,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, n, canCastZ);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = OpType::op(x[offset], y[offset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -250,8 +250,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, n, canCastY);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
z[offset] = OpType::op(x[offset], y[yOffset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -271,8 +271,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, n, canCastY);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
z[offset] = OpType::op(x[xOffset], y[offset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -294,9 +294,9 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, n, canCastY);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, n, canCastZ);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = OpType::op(x[xOffset], y[yOffset], extraParams);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -158,7 +158,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for(Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
z[offset] = OpType::op(x[offset], y[0], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -176,8 +176,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for(Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, n, canCastZ);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = OpType::op(x[xOffset], y[0], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -209,7 +209,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
z[offset] = OpType::op(x[offset], y[offset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -229,8 +229,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, n, canCastZ);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = OpType::op(x[offset], y[offset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -250,8 +250,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, n, canCastY);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
z[offset] = OpType::op(x[offset], y[yOffset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -271,8 +271,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, n, canCastY);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
z[offset] = OpType::op(x[xOffset], y[offset], extraParams);
|
||||
}
|
||||
}
|
||||
|
@ -294,9 +294,9 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, n, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, n, canCastY);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, n, canCastZ);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = OpType::op(x[xOffset], y[yOffset], extraParams);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -70,7 +70,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, length, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
z[offset] = OpClass::op(x[offset], y[offset], i, length, rng, extraArguments);
|
||||
}
|
||||
}
|
||||
|
@ -90,8 +90,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, length, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, length, canCastZ);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = OpClass::op(x[offset], y[offset], i, length, rng, extraArguments);
|
||||
}
|
||||
}
|
||||
|
@ -111,8 +111,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, length, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, length, canCastY);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
z[offset] = OpClass::op(x[offset], y[yOffset], i, length, rng, extraArguments);
|
||||
}
|
||||
}
|
||||
|
@ -132,8 +132,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < info.getItersPerThread(threadNum); i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, length, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, length, canCastY);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
z[offset] = OpClass::op(x[xOffset], y[offset], i, length, rng, extraArguments);
|
||||
}
|
||||
}
|
||||
|
@ -155,9 +155,9 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, length, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, length, canCastY);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, length, canCastZ);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto yOffset = shape::indexOffset(i + threadOffset, yShapeInfo, yShapeInfoCast, canCastY);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = OpClass::op(x[xOffset], y[yOffset], i, length, rng, extraArguments);
|
||||
}
|
||||
}
|
||||
|
@ -196,7 +196,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, length, canCastX);
|
||||
auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
z[offset] = OpClass::op(x[offset], i, length, rng, extraArguments);
|
||||
}
|
||||
}
|
||||
|
@ -214,8 +214,8 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, length, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, length, canCastZ);
|
||||
auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
|
||||
auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[zOffset] = OpClass::op(x[xOffset], i, length, rng, extraArguments);
|
||||
}
|
||||
}
|
||||
|
@ -247,7 +247,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_SIMD
|
||||
for (Nd4jLong i = 0; i < ulen; i++) {
|
||||
auto offset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, length, canCastZ);
|
||||
auto offset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
|
||||
z[offset] = OpClass::op(i+threadOffset, length, rng, extraArguments);
|
||||
}
|
||||
}
|
||||
|
|
|
@ -77,7 +77,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_PARALLEL_FOR_SIMD_THREADS(maxThreads)
|
||||
for(Nd4jLong i = 0; i < length; ++i)
|
||||
intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, length, canCastX)], extraParams), extraParams);
|
||||
intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, canCastX)], extraParams), extraParams);
|
||||
|
||||
|
||||
for (int e = 0; e < maxThreads; e++)
|
||||
|
@ -112,7 +112,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_PARALLEL_FOR_SIMD
|
||||
for(Nd4jLong i = 0; i < length; ++i)
|
||||
intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, length, canCastX)], extraParams), extraParams);
|
||||
intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, canCastX)], extraParams), extraParams);
|
||||
|
||||
for (int e = 0; e < omp_get_max_threads(); e++)
|
||||
start = OpType::update(start, intermediate[e], extraParams);
|
||||
|
|
|
@ -81,7 +81,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_PARALLEL_FOR_SIMD_THREADS(maxThreads)
|
||||
for(Nd4jLong i = 0; i < length; ++i)
|
||||
intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, length, canCastX)], extraParams), extraParams);
|
||||
intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, canCastX)], extraParams), extraParams);
|
||||
|
||||
|
||||
for (int e = 0; e < maxThreads; e++)
|
||||
|
@ -115,7 +115,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_PARALLEL_FOR_SIMD
|
||||
for(Nd4jLong i = 0; i < length; ++i)
|
||||
intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, length, canCastX)], extraParams), extraParams);
|
||||
intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, canCastX)], extraParams), extraParams);
|
||||
|
||||
for (int e = 0; e < omp_get_max_threads(); e++)
|
||||
start = OpType::update(start, intermediate[e], extraParams);
|
||||
|
|
|
@ -77,7 +77,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_PARALLEL_FOR_SIMD_THREADS(maxThreads)
|
||||
for(Nd4jLong i = 0; i < length; ++i)
|
||||
intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, length, canCastX)], extraParams), extraParams);
|
||||
intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, canCastX)], extraParams), extraParams);
|
||||
|
||||
|
||||
for (int e = 0; e < maxThreads; e++)
|
||||
|
@ -113,7 +113,7 @@ namespace functions {
|
|||
|
||||
PRAGMA_OMP_PARALLEL_FOR_SIMD
|
||||
for(Nd4jLong i = 0; i < length; ++i)
|
||||
intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, length, canCastX)], extraParams), extraParams);
|
||||
intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, canCastX)], extraParams), extraParams);
|
||||
|
||||
for (int e = 0; e < omp_get_max_threads(); e++)
|
||||
start = OpType::update(start, intermediate[e], extraParams);
|
||||
|
|
|
@@ -79,7 +79,7 @@ namespace functions {

 PRAGMA_OMP_PARALLEL_FOR_SIMD_THREADS(maxThreads)
 for(Nd4jLong i = 0; i < length; ++i)
-    intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, length, canCastX)], extraParams), extraParams);
+    intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, canCastX)], extraParams), extraParams);

 for (int e = 0; e < maxThreads; e++)

@@ -117,7 +117,7 @@ namespace functions {

 PRAGMA_OMP_PARALLEL_FOR_SIMD_THREADS(maxThreads)
 for(Nd4jLong i = 0; i < length; ++i)
-    intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, length, canCastX)], extraParams), extraParams);
+    intermediate[omp_get_thread_num()] = OpType::update(intermediate[omp_get_thread_num()], OpType::op(x[shape::indexOffset(i, xShapeInfo, xShapeInfoCast, canCastX)], extraParams), extraParams);

 for (int e = 0; e < maxThreads; e++)
     start = OpType::update(start, intermediate[e], extraParams);

@@ -95,7 +95,7 @@ void Reduce3<X,Z>::execScalar(void *vx, Nd4jLong *xShapeInfo,

 PRAGMA_OMP_PARALLEL_FOR_SIMD_THREADS(t._numThreads)
 for(unsigned int i = 0; i < length; i++) {
     const auto threadNum = omp_get_thread_num();
-    auto offset = shape::indexOffset(i, xShapeInfo, xShapeInfoCast, length, canCastX);
+    auto offset = shape::indexOffset(i, xShapeInfo, xShapeInfoCast, canCastX);
     intermediate[threadNum] = OpType::update(intermediate[threadNum], OpType::op(x[offset], y[offset], extraParamsLocal + 3 * threadNum), extraParamsLocal + 3 * threadNum);
 }
 } else {

@@ -105,8 +105,8 @@ void Reduce3<X,Z>::execScalar(void *vx, Nd4jLong *xShapeInfo,

 PRAGMA_OMP_PARALLEL_FOR_SIMD_THREADS(t._numThreads)
 for(unsigned int i = 0; i < length; i++) {
     const auto threadNum = omp_get_thread_num();
-    auto xOffset = shape::indexOffset(i, xShapeInfo, xShapeInfoCast, length, canCastX);
-    auto yOffset = shape::indexOffset(i, yShapeInfo, yShapeInfoCast, length, canCastY);
+    auto xOffset = shape::indexOffset(i, xShapeInfo, xShapeInfoCast, canCastX);
+    auto yOffset = shape::indexOffset(i, yShapeInfo, yShapeInfoCast, canCastY);
     intermediate[threadNum] = OpType::update(intermediate[threadNum], OpType::op(x[xOffset], y[yOffset], extraParamsLocal + 3 * threadNum), extraParamsLocal + 3 * threadNum);
 }
 }

@@ -165,7 +165,7 @@ void ScalarTransform<X, Y, Z>::transform(void *vx, Nd4jLong *xShapeInfo,

 PRAGMA_OMP_SIMD
 for (unsigned int i = 0; i < ulen; i++) {
-    auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, len, canCastX);
+    auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
     z[offset] = OpType::op(x[offset], scalar, extraParams);
 }
 }

@@ -183,8 +183,8 @@ void ScalarTransform<X, Y, Z>::transform(void *vx, Nd4jLong *xShapeInfo,

 PRAGMA_OMP_SIMD
 for (unsigned int i = 0; i < ulen; i++) {
-    auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, len, canCastX);
-    auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, len, canCastZ);
+    auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
+    auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
     z[zOffset] = OpType::op(x[xOffset], scalar, extraParams);
 }
 }

@@ -173,7 +173,7 @@ namespace functions {

 PRAGMA_OMP_SIMD
 for (unsigned int i = 0; i < ulen; i++) {
-    auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, len, canCastX);
+    auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
     z[offset] = OpType::op(x[offset], scalar, extraParams);
 }
 }

@@ -191,8 +191,8 @@ namespace functions {

 PRAGMA_OMP_SIMD
 for (unsigned int i = 0; i < ulen; i++) {
-    auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, len, canCastX);
-    auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, len, canCastZ);
+    auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
+    auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
     z[zOffset] = OpType::op(x[xOffset], scalar, extraParams);
 }
 }

@@ -173,7 +173,7 @@ namespace functions {

 PRAGMA_OMP_SIMD
 for (unsigned int i = 0; i < ulen; i++) {
-    auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, len, canCastX);
+    auto offset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
     z[offset] = OpType::op(x[offset], scalar, extraParams);
 }
 }

@@ -191,8 +191,8 @@ namespace functions {

 PRAGMA_OMP_SIMD
 for (unsigned int i = 0; i < ulen; i++) {
-    auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, len, canCastX);
-    auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, len, canCastZ);
+    auto xOffset = shape::indexOffset(i + threadOffset, xShapeInfo, xShapeInfoCast, canCastX);
+    auto zOffset = shape::indexOffset(i + threadOffset, zShapeInfo, zShapeInfoCast, canCastZ);
     z[zOffset] = OpType::op(x[xOffset], scalar, extraParams);
 }
 }

@@ -92,7 +92,7 @@ namespace functions {

 for (Nd4jLong i = 0; i < length; i++) {

-    auto xOffset = shape::indexOffset(i, xShapeInfo, xShapeInfoCast, length, canCast);
+    auto xOffset = shape::indexOffset(i, xShapeInfo, xShapeInfoCast, canCast);

     SummaryStatsData<X> curr;
     curr.initWithValue(x[xOffset]);

@@ -175,7 +175,7 @@ namespace functions {

 }
 else {
     for (int i = 1; i < tadLength; i ++) {
-        auto xOffset = shape::indexOffset(i, tadShapeShapeInfo, tadShapeShapeInfoCast, tadLength, canCast);
+        auto xOffset = shape::indexOffset(i, tadShapeShapeInfo, tadShapeShapeInfoCast, canCast);

         SummaryStatsData <X> indexVal2;
         indexVal2.initWithValue(tx[xOffset]);

@@ -64,8 +64,8 @@ static __global__ void broadcastInverseSimple(

 namespace functions {
 namespace broadcast {

-    static Nd4jLong __device__ __noinline__ _getIndexOffset(Nd4jLong index, Nd4jLong *shapeInfo, Nd4jLong length) {
-        return shape::getIndexOffset(index, shapeInfo, length);
+    static Nd4jLong __device__ __noinline__ _getIndexOffset(Nd4jLong index, Nd4jLong *shapeInfo) {
+        return shape::getIndexOffset(index, shapeInfo);
     }

     static Nd4jLong __device__ __noinline__ _length(Nd4jLong *shapeInfo) {

@@ -154,9 +154,9 @@ namespace functions {

 else {
     // it is expected that x and z tads and y array all have the same length
     for (Nd4jLong i = threadIdx.x; i < tadLength; i+= blockDim.x) {
-        auto xOffset = _getIndexOffset(i, xShapeInfo, tadLength);
-        auto yOffset = _getIndexOffset(i, tadOnlyShapeInfo, tadLength);
-        auto zOffset = _getIndexOffset(i, tadOnlyShapeInfoZ, tadLength);
+        auto xOffset = _getIndexOffset(i, xShapeInfo);
+        auto yOffset = _getIndexOffset(i, tadOnlyShapeInfo);
+        auto zOffset = _getIndexOffset(i, tadOnlyShapeInfoZ);
         rZ[zOffset] = OpType::op(x[xOffset], rY[yOffset]);
     }
 }

@@ -219,9 +219,9 @@ namespace functions {

 // it is expected that x and z tads and y array all have the same length
 for (Nd4jLong i = threadIdx.x; i < tadLength; i+= blockDim.x) {

-    auto xOffset = _getIndexOffset(i, tadOnlyShapeInfo, tadLength);
-    auto yOffset = _getIndexOffset(i, yShapeInfo, tadLength);
-    auto zOffset = _getIndexOffset(i, tadOnlyShapeInfoZ, tadLength);
+    auto xOffset = _getIndexOffset(i, tadOnlyShapeInfo);
+    auto yOffset = _getIndexOffset(i, yShapeInfo);
+    auto zOffset = _getIndexOffset(i, tadOnlyShapeInfoZ);
     rZ[zOffset] = OpType::op(rX[xOffset], y[yOffset]);
 }
 }

@@ -145,9 +145,9 @@ namespace functions {

 else {
     // it is expected that x and z tads and y array all have the same length
     for (Nd4jLong i = threadIdx.x; i < tadLength; i+= blockDim.x) {
-        auto xOffset = shape::getIndexOffset(i, xShapeInfo, tadLength);
-        auto yOffset = shape::getIndexOffset(i, tadOnlyShapeInfo, tadLength);
-        auto zOffset = shape::getIndexOffset(i, tadOnlyShapeInfoZ, tadLength);
+        auto xOffset = shape::getIndexOffset(i, xShapeInfo);
+        auto yOffset = shape::getIndexOffset(i, tadOnlyShapeInfo);
+        auto zOffset = shape::getIndexOffset(i, tadOnlyShapeInfoZ);

         rZ[zOffset] = OpType::op(x[xOffset], rY[yOffset]);
     }

@@ -213,9 +213,9 @@ namespace functions {

 else {
     // it is expected that x and z tads and y array all have the same length
     for (Nd4jLong i = threadIdx.x; i < tadLength; i+= blockDim.x) {
-        auto xOffset = shape::getIndexOffset(i, tadOnlyShapeInfo, tadLength);
-        auto yOffset = shape::getIndexOffset(i, yShapeInfo, tadLength);
-        auto zOffset = shape::getIndexOffset(i, tadOnlyShapeInfoZ, tadLength);
+        auto xOffset = shape::getIndexOffset(i, tadOnlyShapeInfo);
+        auto yOffset = shape::getIndexOffset(i, yShapeInfo);
+        auto zOffset = shape::getIndexOffset(i, tadOnlyShapeInfoZ);

         rZ[zOffset] = OpType::op(rX[xOffset], y[yOffset]);
     }

@@ -139,9 +139,9 @@ namespace functions {

 else {
     // it is expected that x and z tads and y array all have the same length
     for (Nd4jLong i = threadIdx.x; i < tadLength; i+= blockDim.x) {
-        auto xOffset = shape::getIndexOffset(i, xShapeInfo, tadLength);
-        auto yOffset = shape::getIndexOffset(i, tadOnlyShapeInfo, tadLength);
-        auto zOffset = shape::getIndexOffset(i, tadOnlyShapeInfoZ, tadLength);
+        auto xOffset = shape::getIndexOffset(i, xShapeInfo);
+        auto yOffset = shape::getIndexOffset(i, tadOnlyShapeInfo);
+        auto zOffset = shape::getIndexOffset(i, tadOnlyShapeInfoZ);

         rZ[zOffset] = OpType::op(x[xOffset], rY[yOffset]);
     }

@@ -207,9 +207,9 @@ namespace functions {

 else {
     // it is expected that x and z tads and y array all have the same length
     for (Nd4jLong i = threadIdx.x; i < tadLength; i+= blockDim.x) {
-        auto xOffset = shape::getIndexOffset(i, tadOnlyShapeInfo, tadLength);
-        auto yOffset = shape::getIndexOffset(i, yShapeInfo, tadLength);
-        auto zOffset = shape::getIndexOffset(i, tadOnlyShapeInfoZ, tadLength);
+        auto xOffset = shape::getIndexOffset(i, tadOnlyShapeInfo);
+        auto yOffset = shape::getIndexOffset(i, yShapeInfo);
+        auto zOffset = shape::getIndexOffset(i, tadOnlyShapeInfoZ);

         rZ[zOffset] = OpType::op(rX[xOffset], y[yOffset]);
     }

@@ -251,7 +251,7 @@ namespace functions {

 sPartials[threadIdx.x] = OpType::startingIndexValue(dx);

 for(int i = threadIdx.x;i < tadLength; i += blockDim.x) {
-    auto xOffset = tadOffsetForBlock + shape::getIndexOffset(i, tadOnlyShapeInfo, tadLength);
+    auto xOffset = tadOffsetForBlock + shape::getIndexOffset(i, tadOnlyShapeInfo);
     IndexValue<X> comp {dx[xOffset], i};
     sPartials[threadIdx.x] = OpType::update(sPartials[threadIdx.x], comp, extraParams);
 }

@@ -299,7 +299,7 @@ namespace functions {

 } else {

 for(Nd4jLong i = tid;i < n; i += blockDim.x * gridDim.x) {
-    auto offset = shape::getIndexOffset(i, xShapeInfo, n);
+    auto offset = shape::getIndexOffset(i, xShapeInfo);
     IndexValue<X> indexVal = {dx[offset], i};
     reduction = OpType::update(reduction, indexVal, extraParams);
 }

@@ -115,7 +115,7 @@ namespace functions {

     sPartials[threadIdx.x] = OpType::update(sPartials[threadIdx.x], OpType::op(x[i * xEws], extraParams), extraParams);
 else
     for (int i = tid; i < len; i += blockDim.x * gridDim.x)
-        sPartials[threadIdx.x] = OpType::update(sPartials[threadIdx.x], OpType::op(x[shape::getIndexOffset(i, xShapeInfo, len)], extraParams), extraParams);
+        sPartials[threadIdx.x] = OpType::update(sPartials[threadIdx.x], OpType::op(x[shape::getIndexOffset(i, xShapeInfo)], extraParams), extraParams);

 __syncthreads();
 aggregatePartials<OpType>(sPartials, threadIdx.x, nd4j::math::nd4j_min<int>(blockDim.x, len), extraParams);

@@ -73,7 +73,7 @@ namespace functions {

 for (Nd4jLong i = tid; i < length; i+= totalThreads) {
-    z[shape::getIndexOffset(i, zShapeInfo, length)] = OpType::op(y[shape::getIndexOffset(i, yShapeInfo, length)], scalar, params);
+    z[shape::getIndexOffset(i, zShapeInfo)] = OpType::op(y[shape::getIndexOffset(i, yShapeInfo)], scalar, params);
 }
 }
 }

@@ -72,8 +72,8 @@ namespace functions {

 for (Nd4jLong i = tid; i < length; i+= gridDim.x * blockDim.x) {
-    auto xOffset2 = shape::getIndexOffset(i, shapeInfo, length);
-    auto zOffset2 = shape::getIndexOffset(i, zShapeInfo, length);
+    auto xOffset2 = shape::getIndexOffset(i, shapeInfo);
+    auto zOffset2 = shape::getIndexOffset(i, zShapeInfo);
     result[zOffset2] = OpType::op(dy[xOffset2], params);
 }
 }
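The one substantive change repeated across these hunks is that shape::indexOffset and shape::getIndexOffset no longer take the element count as a trailing argument; every call site now passes only the linear index and the shape descriptor (plus, for indexOffset, the cast buffer and cast flag). Below is a minimal, standalone sketch of why the length argument is dispensable: a shapeInfo-style buffer already carries rank, shape, and strides, which is all that is needed to turn a linear index into a memory offset. The shapeInfo layout, the helper name, and the example shapes here are assumptions made for illustration only; this is not the libnd4j implementation.

// Hypothetical standalone analogue of the index-to-offset pattern above.
// Assumed (simplified) shapeInfo layout: [rank, shape..., stride...].
#include <cstdint>
#include <cstdio>

using Nd4jLong = int64_t;   // assumption: stand-in for the library's Nd4jLong

static Nd4jLong getIndexOffset(Nd4jLong index, const Nd4jLong* shapeInfo) {
    const Nd4jLong rank    = shapeInfo[0];
    const Nd4jLong* shape  = shapeInfo + 1;
    const Nd4jLong* stride = shapeInfo + 1 + rank;

    Nd4jLong offset = 0;
    // peel coordinates off the linear index from the last dimension inward
    for (Nd4jLong d = rank - 1; d >= 0; --d) {
        offset += (index % shape[d]) * stride[d];
        index  /= shape[d];
    }
    return offset;   // no element count required: rank/shape/strides suffice
}

int main() {
    // a 3x2 strided view (e.g. a transpose) over a 6-element buffer:
    // shape = {3, 2}, stride = {1, 3}
    const Nd4jLong shapeInfo[] = {2, /*shape*/ 3, 2, /*stride*/ 1, 3};
    const float buffer[6] = {0.f, 1.f, 2.f, 3.f, 4.f, 5.f};

    const Nd4jLong length = 3 * 2;
    for (Nd4jLong i = 0; i < length; ++i) {
        const auto offset = getIndexOffset(i, shapeInfo);   // no length argument
        std::printf("i=%lld -> offset=%lld value=%.0f\n",
                    (long long) i, (long long) offset, buffer[offset]);
    }
    return 0;
}

Presumably this is what made the API cleanup above mechanical: the dropped parameter was redundant with information already encoded in shapeInfo, so each caller could simply stop passing the loop bound (length, len, n, tadLength) through.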