# ND4S: Scala bindings for ND4J
ND4S provides open-source Scala bindings for ND4J, released under the Apache 2.0 license.
## Main Features
- Syntactic sugar for NDArray manipulation, with improved type safety (see the sketch after this list).
- NDArray slicing syntax similar to NumPy's.
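For instance, slicing and arithmetic read like NumPy while staying type-checked Scala. A minimal sketch (it assumes the `nd4j-native-platform` backend from the installation section below is on the classpath):

```scala
import org.nd4s.Implicits._

object FeaturesDemo extends App {
  // Syntax sugar: build a 3x3 INDArray straight from a Scala range.
  val arr = (1 to 9).asNDArray(3, 3)

  // NumPy-like slicing: rows 0..1 and columns 1..2.
  val sub = arr(0 -> 2, 1 -> 3)

  // Element-wise arithmetic reads like ordinary Scala.
  println(sub + sub)
}
```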
## Installation
### Install via Maven
ND4S is available in the official Maven repositories.

With IntelliJ, adding ND4S is easy: create a new Scala project, go to "Project Settings" > Libraries, choose "From Maven...", and search for nd4s.
Alternatively, add the lines below to your `build.sbt` and rebuild the project:
```scala
val nd4jVersion = "1.0.0-alpha"

libraryDependencies += "org.nd4j" % "nd4j-native-platform" % nd4jVersion
libraryDependencies += "org.nd4j" %% "nd4s" % nd4jVersion
```
Check our Maven repository page and replace `1.0.0-alpha` with the latest version.
No need for git-cloning & compiling!
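For reference, a complete minimal `build.sbt` might look like the following sketch (the project name and Scala version are illustrative; pick a Scala version that the nd4s artifacts are actually cross-published for):

```scala
name := "nd4s-example"

// Illustrative; check which Scala versions the nd4s artifacts cover.
scalaVersion := "2.11.12"

val nd4jVersion = "1.0.0-alpha"

libraryDependencies ++= Seq(
  "org.nd4j" %  "nd4j-native-platform" % nd4jVersion, // CPU backend
  "org.nd4j" %% "nd4s"                 % nd4jVersion
)
```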
### Clone from the GitHub Repo
ND4S is actively developed. You can clone the repository, compile it, and reference it in your project.
Clone the repository:

```bash
$ git clone https://github.com/eclipse/deeplearning4j.git
```

Compile the project:

```bash
$ cd deeplearning4j/nd4s
$ sbt +publish-local
```
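After `sbt +publish-local` completes, your own local sbt projects can depend on the freshly built artifact. A hypothetical sketch (the version string is a placeholder; use whatever version the nd4s build declares, visible in the publish output):

```scala
// In your project's build.sbt; replace the placeholder with the
// locally published version reported by `sbt +publish-local`.
libraryDependencies += "org.nd4j" %% "nd4s" % "<locally-published-version>"
```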
## Try ND4S in the REPL
The easiest way to play around with ND4S is to clone this repository and run the following commands:

```bash
$ cd deeplearning4j/nd4s
$ sbt test:console
```
This starts a REPL with `org.nd4s.Implicits._` and `org.nd4j.linalg.factory.Nd4j` imported automatically. It uses the jblas backend by default.
```scala
scala> val arr = (1 to 9).asNDArray(3,3)
arr: org.nd4j.linalg.api.ndarray.INDArray =
[[1.00,2.00,3.00]
 [4.00,5.00,6.00]
 [7.00,8.00,9.00]]

scala> val sub = arr(0->2,1->3)
sub: org.nd4j.linalg.api.ndarray.INDArray =
[[2.00,3.00]
 [5.00,6.00]]
```
## Cheat Sheet (WIP)
| ND4S syntax | Equivalent NumPy syntax | Result |
|---|---|---|
| `Array(Array(1,2,3),Array(4,5,6)).toNDArray` | `np.array([[1, 2, 3], [4, 5, 6]])` | `[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]` |
| `val arr = (1 to 9).asNDArray(3,3)` | `arr = np.arange(1,10).reshape(3,3)` | `[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]` |
| `arr(0,0)` | `arr[0,0]` | `1.0` |
| `arr(0,->)` | `arr[0,:]` | `[1.0, 2.0, 3.0]` |
| `arr(--->)` | `arr[...]` | `[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]` |
| `arr(0 -> 3 by 2, ->)` | `arr[0:3:2,:]` | `[[1.0, 2.0, 3.0], [7.0, 8.0, 9.0]]` |
| `arr(0 to 2 by 2, ->)` | `arr[0:3:2,:]` | `[[1.0, 2.0, 3.0], [7.0, 8.0, 9.0]]` |
| `arr.filter(_ > 3)` | `np.where(arr > 3, arr, 0)` | `[[0.0, 0.0, 0.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]` |
| `arr.map(_ % 3)` | | `[[1.0, 2.0, 0.0], [1.0, 2.0, 0.0], [1.0, 2.0, 0.0]]` |
| `arr.filterBit(_ < 4)` | | `[[1.0, 1.0, 1.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]` |
| `arr + arr` | `arr + arr` | `[[2.0, 4.0, 6.0], [8.0, 10.0, 12.0], [14.0, 16.0, 18.0]]` |
| `arr * arr` | `arr * arr` | `[[1.0, 4.0, 9.0], [16.0, 25.0, 36.0], [49.0, 64.0, 81.0]]` |
| `arr dot arr` | `np.dot(arr, arr)` | `[[30.0, 36.0, 42.0], [66.0, 81.0, 96.0], [102.0, 126.0, 150.0]]` |
| `arr.sumT` | `np.sum(arr)` | `45.0` (returns a `Double`) |
| `val comp = Array(1 + i, 1 + 2 * i).toNDArray` | `comp = np.array([1 + 1j, 1 + 2j])` | `[1.0 + 1.0i, 1.0 + 2.0i]` |
| `comp.sumT` | `np.sum(comp)` | `2.0 + 3.0i` (returns an `IComplexNumber`) |
| `for(row <- arr.rowP if row.get(0) > 1) yield row*2` | | `[[8.0, 10.0, 12.0], [14.0, 16.0, 18.0]]` |
| `val tensor = (1 to 8).asNDArray(2,2,2)` | `tensor = np.arange(1,9).reshape(2,2,2)` | `[[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]` |
| `for(slice <- tensor.sliceP if slice.get(0) > 1) yield slice*2` | | `[[10.0, 12.0], [14.0, 16.0]]` |
| `arr(0 -> 3 by 2, ->) = 0` | | `[[0.0, 0.0, 0.0], [4.0, 5.0, 6.0], [0.0, 0.0, 0.0]]` |
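Outside the REPL, the same cheat-sheet operations work in a compiled program. A minimal sketch (the object name is arbitrary; it assumes a backend such as `nd4j-native-platform` is on the classpath):

```scala
import org.nd4s.Implicits._

object CheatSheetDemo extends App {
  val arr = (1 to 9).asNDArray(3, 3) // like np.arange(1,10).reshape(3,3)

  println(arr(0, ->))        // first row: [1.0, 2.0, 3.0]
  println(arr.filter(_ > 3)) // entries <= 3 replaced with 0
  println(arr dot arr)       // matrix product
  println(arr.sumT)          // 45.0, returned as a Scala Double
}
```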