---
title: Elementwise Operations and Basic Usage
short_title: Basic Usage
description: How to use elementwise operations and other beginner concepts in ND4J.
category: ND4J
weight: 1
---

## Introduction

The basic operations of linear algebra are matrix creation, addition and multiplication. This guide will show you how to perform those operations with ND4J, as well as various more advanced transforms.

* [Syntax](./nd4j-syntax)
* [Elementwise](./nd4j-elementwise)
* [Reshape/Transpose Matrices](./nd4j-matrix-manipulation)
* [Tensor](./nd4j-tensor)
* [Swapping CPUs for GPUs](./deeplearning4j-config-gpu-cpu)

The Java code below will create a simple 2 x 2 matrix, populate it with integers, and place it in the nd-array variable nd:
```java
INDArray nd = Nd4j.create(new float[]{1,2,3,4},new int[]{2,2});
```
If you print out this array
```java
System.out.println(nd);
```
you'll see this:
```java
[[1.0 ,3.0]
 [2.0 ,4.0]
]
```
This is a matrix with two rows and two columns, which orders its elements by column; we'll call it matrix nd.

A matrix that ordered its elements by row would look like this:
```java
[[1.0 ,2.0]
 [3.0 ,4.0]
]
```

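If you want to control the ordering explicitly rather than rely on the default, the three-argument form of Nd4j.create (shown again on the tensor page) accepts an order character. Here is a minimal, self-contained sketch; the class and variable names are ours, and the exact print format may differ between ND4J versions:
```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class OrderingExample {
    public static void main(String[] args) {
        // Same data and shape, different explicit orderings:
        // 'f' (Fortran/column-major) fills columns first, 'c' (C/row-major) fills rows first.
        INDArray byColumn = Nd4j.create(new float[]{1, 2, 3, 4}, new int[]{2, 2}, 'f');
        INDArray byRow    = Nd4j.create(new float[]{1, 2, 3, 4}, new int[]{2, 2}, 'c');

        System.out.println(byColumn); // [[1.0, 3.0], [2.0, 4.0]] -- columns filled first
        System.out.println(byRow);    // [[1.0, 2.0], [3.0, 4.0]] -- rows filled first
    }
}
```
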
## Elementwise scalar operations

The simplest operations you can perform on a matrix are elementwise scalar operations: for example, adding the scalar 1 to each element of the matrix, or multiplying each element by the scalar 5. Let's try it.
```java
nd.add(1);
```
This line of code represents this operation:
```java
[[1.0 + 1 ,3.0 + 1]
 [2.0 + 1 ,4.0 + 1]
]
```
and here is the result:
```java
[[2.0 ,4.0]
 [3.0 ,5.0]
]
```
There are two ways to perform any operation in ND4J: destructive and nondestructive; that is, operations that change the underlying data, and operations that simply work with a copy of the data. Destructive operations have an "i" at the end -- addi, subi, muli, divi. The "i" means the operation is performed "in place," directly on the data rather than on a copy, while nd.add() leaves the original untouched.

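To make the copy-versus-in-place distinction concrete, here is a small sketch (the variable names are ours) contrasting add with addi:
```java
INDArray nd = Nd4j.create(new float[]{1, 2, 3, 4}, new int[]{2, 2});

INDArray plusOne = nd.add(1);  // nondestructive: returns a new array, nd is unchanged
nd.addi(1);                    // destructive: modifies nd in place

System.out.println(plusOne);   // both arrays now hold the incremented values
System.out.println(nd);
```
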
Elementwise scalar multiplication looks like this:
```java
nd.mul(5);
```
And produces this:
```java
[[10.0 ,20.0]
 [15.0 ,25.0]
]
```
Subtraction and division follow a similar pattern:
```java
nd.subi(3);
nd.divi(2);
```
If you perform all of these operations on your initial 2 x 2 matrix, carrying each result forward into the next operation, you should end up with this matrix:
```java
[[3.5 ,8.5]
 [6.0 ,11.0]
]
```

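One way to reproduce that final matrix is to chain the in-place forms, since each returns the modified array so the result feeds directly into the next step. A sketch, assuming the column-ordered layout shown at the top of this page:
```java
INDArray nd = Nd4j.create(new float[]{1, 2, 3, 4}, new int[]{2, 2});

// Each in-place op returns the (modified) array, so the calls can be chained.
nd.addi(1).muli(5).subi(3).divi(2);

System.out.println(nd);
// Prints roughly:
// [[3.50, 8.50]
//  [6.00, 11.00]]
```
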
## Elementwise vector operations

When performed on simple units like scalars, the operations of arithmetic are unambiguous. But once you work with matrices, addition and multiplication can mean several things. With vector-on-matrix operations, you have to know what kind of addition or multiplication you're performing in each case.

First, we'll create a 2 x 2 matrix, a column vector and a row vector.
```java
INDArray nd = Nd4j.create(new float[]{1,2,3,4},new int[]{2,2});
INDArray nd2 = Nd4j.create(new float[]{5,6},new int[]{2,1}); //vector as column
INDArray nd3 = Nd4j.create(new float[]{5,6},new int[]{2});   //vector as row
```
Notice that the shape of the two vectors is specified with their final parameters. {2,1} means the vector is vertical, with elements populating two rows and one column. A simple {2} means the vector populates a single row that spans two columns -- horizontal. Your first matrix will look like this:
```java
[[1.00, 2.00],
 [3.00, 4.00]]
```
Here's how you add a column vector to a matrix:
```java
nd.addColumnVector(nd2);
```
And here's the best way to visualize what's happening: the top element of the column vector combines with the top elements of each column in the matrix, and so forth. The sum matrix represents the march of that column vector across the matrix from left to right, adding itself along the way.
```java
[1.0 ,2.0]   [5.0]   [6.0 ,7.0]
[3.0 ,4.0] + [6.0] = [9.0 ,10.0]
```
But let's say you preserved the initial matrix and instead added a row vector.
```java
nd.addRowVector(nd3);
```
Then your equation is best visualized like this:
```java
[1.0 ,2.0]                [6.0 ,8.0]
[3.0 ,4.0] + [5.0 ,6.0] = [8.0 ,10.0]
```
In this case, the leftmost element of the row vector combines with the leftmost elements of each row in the matrix, and so forth. The sum matrix represents that row vector falling down the matrix from top to bottom, adding itself at each level.

So vector addition can lead to different results depending on the orientation of your vector. The same is true for multiplication, subtraction, division and every other vector operation.

In ND4J, row vectors and column vectors look the same when you print them out with
```java
System.out.println(nd);
```
They will both appear like this:
```java
[5.0 ,6.0]
```
Don't be fooled. Getting the shape parameters right at the beginning is crucial: addRowVector and addColumnVector do not change a vector's orientation as row or column, so the result is determined by the shape you specified when you created the vector, not by how it looks when printed.

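If you are unsure which orientation a vector actually has, inspecting its shape is more reliable than printing its values. A sketch for the two vectors created above (shape() returns the array's dimensions; the exact form of the row vector's shape may vary by ND4J version):
```java
System.out.println(java.util.Arrays.toString(nd2.shape())); // [2, 1] -- the column vector
System.out.println(java.util.Arrays.toString(nd3.shape())); // the row vector, e.g. [2] or [1, 2]
```
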
## Elementwise matrix operations

To carry out scalar and vector elementwise operations, we essentially pretend we have two matrices of equal shape. Elementwise scalar multiplication can be represented several ways:
```java
    [1.0 ,3.0]   [c , c]   [1.0 ,3.0]   [1c ,3c]
c * [2.0 ,4.0] = [c , c] * [2.0 ,4.0] = [2c ,4c]
```
So you see, elementwise operations match the elements of one matrix with their precise counterparts in another matrix. The element in row 1, column 1 of matrix nd is only ever combined with the element in row 1, column 1 of the other matrix.

This is clearer when we move on to elementwise vector operations. We imagine the vector, like the scalar, as populating a matrix of equal dimensions to matrix nd. Below, you can see why row and column vectors lead to different sums.

Column vector:
```java
[1.0 ,3.0]   [5.0]   [1.0 ,3.0]   [5.0 ,5.0]   [6.0 ,8.0]
[2.0 ,4.0] + [6.0] = [2.0 ,4.0] + [6.0 ,6.0] = [8.0 ,10.0]
```
Row vector:
```java
[1.0 ,3.0]                [1.0 ,3.0]   [5.0 ,6.0]   [6.0 ,9.0]
[2.0 ,4.0] + [5.0 ,6.0] = [2.0 ,4.0] + [5.0 ,6.0] = [7.0 ,10.0]
```
Now you can see why row vectors and column vectors produce different results: they are simply shorthand for different matrices.

Given that we've already been doing elementwise matrix operations implicitly with scalars and vectors, it's a short hop to do them with more varied matrices:
```java
INDArray nd4 = Nd4j.create(new float[]{5,6,7,8},new int[]{2,2});

nd.add(nd4);
```
Here's how you can visualize that command:
```java
[1.0 ,3.0]   [5.0 ,7.0]   [6.0 ,10.0]
[2.0 ,4.0] + [6.0 ,8.0] = [8.0 ,12.0]
```
Multiplying the initial matrix nd with matrix nd4 works the same way:
```java
nd.muli(nd4);

[1.0 ,3.0]   [5.0 ,7.0]   [5.0 ,21.0]
[2.0 ,4.0] * [6.0 ,8.0] = [12.0 ,32.0]
```
The term of art for this particular matrix manipulation is the [Hadamard product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)).

These toy matrices are a useful way to introduce the ND4J interface as well as basic ideas in linear algebra. The framework itself, however, is built to handle billions of parameters in n dimensions (and beyond...).

Next, we'll look at more complicated [matrix operations](../matrixwise.html).

---
title: Elementwise Operations
short_title: Elementwise Ops
description: Different elementwise ops in ND4J.
category: ND4J
weight: 10
---

Elementwise operations are more intuitive than vectorwise operations, because the elements of one matrix map clearly onto the elements of the other, and obtaining the result takes just one arithmetical operation per element. (The example code [lives here](https://github.com/SkymindIO/nd4j-examples/blob/master/src/main/java/org/nd4j/examples/MatrixOperationExample.java).)

With vectorwise matrix operations, you will have to build more intuition and perform multiple steps. There are two basic types of matrix multiplication: the inner (dot) product and the outer product. The inner product results in a matrix of reduced dimensions; the outer product results in one of expanded dimensions. A helpful mnemonic: expand outward, contract inward.

## Inner product

Unlike Hadamard products, which require that both matrices have the same number of rows and columns, inner products simply require that the number of columns of the first matrix equal the number of rows of the second. For example, this works:
```java
             [3.0]
[1.0 ,2.0] * [4.0] = (1.0 * 3.0) + (2.0 * 4.0) = 11
```
Notice that a 1 x 2 row times a 2 x 1 column produces a scalar: the operation reduces the dimensions to 1 x 1. You can imagine rotating the row vector [1.0 ,2.0] clockwise to stand on its end, placed against the column vector. The two top elements are then multiplied by each other, as are the bottom two, and the two products are added to consolidate into a single scalar.

In ND4J, you would create the two vectors like this:
```java
INDArray nd = Nd4j.create(new float[]{1,2},new int[]{2});     //row vector
INDArray nd2 = Nd4j.create(new float[]{3,4},new int[]{2, 1}); //column vector
```
And multiply them like this:
```java
nd.mmul(nd2);
```
Notice that the ND4J code mirrors the equation: nd * nd2 is row vector times column vector. The method is mmul, rather than the mul we used for elementwise operations; the extra "m" stands for "matrix."

Now let's take the same operation, but add an additional column by using a new array we'll call nd4.
```java
INDArray nd4 = Nd4j.create(new float[]{3,4,5,6},new int[]{2, 2});
nd.mmul(nd4);

             [3.0 ,5.0]
[1.0 ,2.0] * [4.0 ,6.0] = [(1.0 * 3.0) + (2.0 * 4.0), (1.0 * 5.0) + (2.0 * 6.0)] = [11, 17]
```

Now let's add an extra row to the first matrix, call it nd3, and multiply it by nd4:
```java
INDArray nd3 = Nd4j.create(new float[]{1,3,2,4},new int[]{2,2});
nd3.mmul(nd4);
```
The equation will look like this:
```java
[1.0 ,2.0]   [3.0 ,5.0]   [(1.0 * 3.0) + (2.0 * 4.0), (1.0 * 5.0) + (2.0 * 6.0)]   [11, 17]
[3.0 ,4.0] * [4.0 ,6.0] = [(3.0 * 3.0) + (4.0 * 4.0), (3.0 * 5.0) + (4.0 * 6.0)] = [25, 39]
```

## Outer product

Taking the outer product of the two vectors we first worked with is as simple as reversing their order:
```java
nd2.mmul(nd);

[3.0]                [(3.0 * 1.0), (3.0 * 2.0)]   [3.0 ,6.0]   [3.0]   [1.0 ,2.0]
[4.0] * [1.0 ,2.0] = [(4.0 * 1.0), (4.0 * 2.0)] = [4.0 ,8.0] = [4.0] * [1.0 ,2.0]
```
It turns out that multiplying nd2 by nd is the same as multiplying it against two copies of nd stacked on top of each other. That's an outer product. As you can see, outer products also require fewer operations, since they don't combine two products into one element of the final matrix.

A few aspects of the ND4J code should be noted here. First, the method mmul can take two parameters:
```java
nd.mmul(MATRIX TO MULTIPLY WITH, MATRIX TO WHICH THE PRODUCT SHOULD BE ASSIGNED);
```
which could be expressed like this
```java
nd.mmul(nd2, ndv);
```
and which is the same as this line:
```java
ndv = nd.mmul(nd2);
```
Using the second parameter to specify the nd-array to which the product should be assigned is a common convention in ND4J.

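For example, using the matrices from earlier on this page, the two-parameter form might be used like this (a sketch; we preallocate the result array ourselves):
```java
INDArray nd3 = Nd4j.create(new float[]{1, 3, 2, 4}, new int[]{2, 2});
INDArray nd4 = Nd4j.create(new float[]{3, 4, 5, 6}, new int[]{2, 2});

INDArray ndv = Nd4j.create(2, 2);  // preallocated result array
nd3.mmul(nd4, ndv);                // the product is written into ndv

System.out.println(ndv);           // same values as ndv = nd3.mmul(nd4)
```
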
To learn about transposing and reshaping matrices with ND4J, [click here](../reshapetranspose.html).

---
title: Matrix Manipulation
short_title: Matrix Manipulation
description: Operations for matrix manipulation, including transpose and reshape, in ND4J.
category: ND4J
weight: 10
---

There are several other basic matrix manipulations to highlight as you learn ND4J's workings. ([Example code](https://github.com/SkymindIO/nd4j-examples/blob/master/src/main/java/org/nd4j/examples/ReshapeOperationExample.java).)

### Transpose

The transpose of a matrix is its mirror image: an element located in row 1, column 2, in matrix A will be located in row 2, column 1, in the transpose of matrix A, whose mathematical notation is A to the T, or A^T. Notice that the elements along the diagonal of a square matrix do not move -- they are at the hinge of the reflection. In ND4J, you transpose a matrix like this:
```java
INDArray nd = Nd4j.create(new float[]{1, 2, 3, 4}, new int[]{2, 2});

[1.0 ,3.0]
[2.0 ,4.0]

nd.transpose();

[1.0 ,2.0]
[3.0 ,4.0]
```
And a long matrix like this
```java
[1.0 ,3.0 ,5.0 ,7.0 ,9.0 ,11.0]
[2.0 ,4.0 ,6.0 ,8.0 ,10.0 ,12.0]
```
looks like this when it is transposed:
```java
[1.0 ,2.0]
[3.0 ,4.0]
[5.0 ,6.0]
[7.0 ,8.0]
[9.0 ,10.0]
[11.0 ,12.0]
```
In fact, transpose is just an important special case of a more general operation: reshape.

### Reshape

Yes, matrices can be reshaped: you can change the number of rows and columns they have. The reshaped matrix has to fulfill one condition: the product of its rows and columns must equal the product of the rows and columns of the original matrix. For example, proceeding columnwise, you can reshape a 2 by 6 matrix into a 3 by 4 matrix:
```java
INDArray nd2 = Nd4j.create(new float[]{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}, new int[]{2, 6});
```
The array nd2 looks like this
```java
[1.0 ,3.0 ,5.0 ,7.0 ,9.0 ,11.0]
[2.0 ,4.0 ,6.0 ,8.0 ,10.0 ,12.0]
```
Reshaping it is easy, and follows the same convention by which we gave it shape to begin with:
```java
nd2.reshape(3,4);

[1.0 ,4.0 ,7.0 ,10.0]
[2.0 ,5.0 ,8.0 ,11.0]
[3.0 ,6.0 ,9.0 ,12.0]
```

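A small sketch of the same call with the constraint above made explicit. This is our reading of the API, so the details (whether the original is modified and which exception is thrown) may vary by ND4J version:
```java
// reshape must conserve the element count: 2 * 6 == 3 * 4.
// Capture the return value rather than relying on nd2 itself being changed.
INDArray reshaped = nd2.reshape(3, 4);
System.out.println(reshaped);

// Asking for an incompatible shape, such as (3, 5), will fail with an exception.
```
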
### Linear view

This is a straight view of an arbitrary nd-array: you can go through the nd-array like a vector, linearly, squashing it into one long line. Linear views allow you to do nondestructive operations (reshape and other operations can be destructive, because elements are changed within the nd-array). Linear views are only good for elementwise operations (rather than matrix operations), since the views do not preserve the order of the buffer.
```java
nd2.linearView();

[1.0 ,2.0 ,3.0 ,4.0 ,5.0 ,6.0 ,7.0 ,8.0 ,9.0 ,10.0 ,11.0 ,12.0]
```

### Broadcast

Broadcast is more advanced. It usually happens in the background, without having to be called explicitly. The simplest way to understand it is by working with one long row vector, like the one above.
```java
nd2 = Nd4j.create(new float[]{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12});
```
Broadcasting will take multiple copies of that row vector and put them together into a larger matrix. The first parameter is the number of copies you want "broadcast," which is also the number of rows involved. In order not to throw an error, make the second parameter of broadcast equal to the number of elements in your row vector.
```java
nd2.broadcast(new int[]{3,12});

[1.0 ,4.0 ,7.0 ,10.0 ,1.0 ,4.0 ,7.0 ,10.0 ,1.0 ,4.0 ,7.0 ,10.0]
[2.0 ,5.0 ,8.0 ,11.0 ,2.0 ,5.0 ,8.0 ,11.0 ,2.0 ,5.0 ,8.0 ,11.0]
[3.0 ,6.0 ,9.0 ,12.0 ,3.0 ,6.0 ,9.0 ,12.0 ,3.0 ,6.0 ,9.0 ,12.0]

nd2.broadcast(new int[]{6,12});

[1.0 ,7.0 ,1.0 ,7.0 ,1.0 ,7.0 ,1.0 ,7.0 ,1.0 ,7.0 ,1.0 ,7.0]
[2.0 ,8.0 ,2.0 ,8.0 ,2.0 ,8.0 ,2.0 ,8.0 ,2.0 ,8.0 ,2.0 ,8.0]
[3.0 ,9.0 ,3.0 ,9.0 ,3.0 ,9.0 ,3.0 ,9.0 ,3.0 ,9.0 ,3.0 ,9.0]
[4.0 ,10.0 ,4.0 ,10.0 ,4.0 ,10.0 ,4.0 ,10.0 ,4.0 ,10.0 ,4.0 ,10.0]
[5.0 ,11.0 ,5.0 ,11.0 ,5.0 ,11.0 ,5.0 ,11.0 ,5.0 ,11.0 ,5.0 ,11.0]
[6.0 ,12.0 ,6.0 ,12.0 ,6.0 ,12.0 ,6.0 ,12.0 ,6.0 ,12.0 ,6.0 ,12.0]
```

---
title: ND4J Syntax
short_title: Syntax
description: General syntax and structure of the ND4J API.
category: ND4J
weight: 10
---

For the complete nd4j-api index, please consult the [Javadoc](../doc).

There are three types of operations used in ND4J: scalars, transforms and accumulations. We'll use the word op synonymously with operation. You can see the lists of those three kinds of [ND4J ops under the directories here](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/api/ops/impl). Each Java file in each list is an op.

Most of the ops just take [enums](https://docs.oracle.com/javase/tutorial/java/javaOO/enum.html), or a list of discrete values that you can autocomplete. Activation functions are the exception, because they take strings such as `"relu"` or `"tanh"`.

Scalars, transforms and accumulations each have their own patterns. Transforms are the simplest, since they take a single argument and perform an operation on it. Absolute value is a transform that takes the argument `x`, as in `abs(IComplexNDArray ndarray)`, and produces the absolute value of x. Similarly, you would apply the sigmoid transform `sigmoid()` to produce the "sigmoid of x".

Scalars just take two arguments: the input and the scalar to be applied to that input. For example, `ScalarAdd()` takes two arguments: the input `INDArray x` and the scalar `Number num`; i.e. `ScalarAdd(INDArray x, Number num)`. The same format applies to every Scalar op.

Finally, we have accumulations, which are also known as reductions in GPU-land. Accumulations add the elements of arrays and vectors together and can *reduce* the dimensions of those arrays in the result, for example by adding elements in a rowwise op. We might run such an accumulation on the array
```java
[1 2
 3 4]
```
which would give us the vector
```
[3
 7]
```
reducing the columns (i.e. dimensions) from two to one.

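That rowwise accumulation can be written with the sum method, which is discussed further below. A sketch (we create the array in row-major order so it matches the layout shown):
```java
INDArray arr = Nd4j.create(new float[]{1, 2, 3, 4}, new int[]{2, 2}, 'c'); // [[1,2],[3,4]]

// Accumulate along dimension 1, i.e. across each row's columns.
INDArray rowSums = arr.sum(1);
System.out.println(rowSums); // [3.0, 7.0]
```
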
Accumulations can be either pairwise or scalar. In a pairwise reduction, we might be dealing with two arrays, x and y, which have the same shape. In that case, we could calculate the cosine similarity of x and y by taking their elements two by two:
```
cosineSim(x[i], y[i])
```
Or take `EuclideanDistance(arr, arr2)`, a reduction between one array `arr` and another `arr2`.

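For instance, the Transforms helper class linked below exposes cosine similarity as a convenience method. A sketch, assuming `org.nd4j.linalg.ops.transforms.Transforms` is on your imports:
```java
INDArray x = Nd4j.create(new float[]{1, 2, 3});
INDArray y = Nd4j.create(new float[]{4, 5, 6});

// Pairwise accumulation: both inputs must have the same shape.
double cosSim = Transforms.cosineSim(x, y);
System.out.println(cosSim);
```
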
Many ND4J ops are overloaded, meaning methods sharing a common name have different argument lists. Below we will explain only the simplest configurations.

There are three possible argument types with ND4J ops: inputs, optional arguments and outputs. The outputs are specified in the op's constructor. The inputs are specified in the parentheses following the method name, always in the first position; the optional arguments that transform the inputs (e.g. the scalar to add, or the coefficient to multiply by) always come in the second position.

|Method| What it does |
|:----|:-------------:|
|**Transforms**||
|ACos(INDArray x)|Trigonometric inverse cosine, elementwise. The inverse of cos such that, if `y = cos(x)`, then `x = ACos(y)`.|
|ASin(INDArray x)|Also known as arcsin. Inverse sine, elementwise.|
|ATan(INDArray x)|Trigonometric inverse tangent, elementwise. The inverse of tan, such that, if `y = tan(x)`, then `x = ATan(y)`.|
|Transforms.tanh(myArray)|Hyperbolic tangent: a sigmoidal function. This applies elementwise tanh in place.|
|Nd4j.getExecutioner().exec(Nd4j.getOpFactory().createTransform("tanh", myArray))|Equivalent to the line above.|

For other transforms, [please see this page](https://github.com/eclipse/deeplearning4j/tree/master/nd4j/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/ops/transforms/Transforms.java).

Here are two examples of performing `z = tanh(x)`, in which the original array `x` is unmodified.
```java
INDArray x = Nd4j.rand(3,2);   //input
INDArray z = Nd4j.create(3,2); //output
Nd4j.getExecutioner().exec(new Tanh(x,z));
Nd4j.getExecutioner().exec(Nd4j.getOpFactory().createTransform("tanh",x,z));
```
These two examples use ND4J's basic convention for all ops, in which we have three NDArrays: x, y and z.
```
x is input, always required
y is (optional) input, only used in some ops (like CosineSimilarity, AddOp etc)
z is output
```
Frequently, `z = x` (this is the default if you use a constructor with only one argument), but there are exceptions for situations like `x = x + y`. Another possibility is `z = x + y`, and so on.

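To make that default concrete, here is a sketch contrasting the one-argument and two-argument constructors of the Tanh op used above (we assume the one-argument constructor is available in your ND4J version):
```java
INDArray x = Nd4j.rand(3, 2);

// One-argument constructor: z defaults to x, so the result overwrites x.
Nd4j.getExecutioner().exec(new Tanh(x));

// Two-argument constructor: x is left untouched and the result is written to z.
INDArray x2 = Nd4j.rand(3, 2);
INDArray z = Nd4j.create(3, 2);
Nd4j.getExecutioner().exec(new Tanh(x2, z));
```
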
## Accumulations

Most accumulations are accessible directly via the INDArray interface.

For example, to add up all elements of an NDArray:
```java
double sum = myArray.sumNumber().doubleValue();
```

Accumulations can also be performed along a dimension. For example, to sum down each column (accumulating over dimension 0):
```java
INDArray tenBy3 = Nd4j.ones(10,3); //10 rows, 3 columns
INDArray sumRows = tenBy3.sum(0);  //sum over dimension 0, i.e. down each column
System.out.println(sumRows);       //Output: [ 10.00, 10.00, 10.00]
```
Accumulations along dimensions generalize, so you can sum along one or more dimensions of any array with two or more dimensions.

## Subset Operations on Arrays

A simple example:
```java
INDArray random = Nd4j.rand(3, 3);
System.out.println(random);
[[0.93,0.32,0.18]
 [0.20,0.57,0.60]
 [0.96,0.65,0.75]]

INDArray lastTwoRows = random.get(NDArrayIndex.interval(1,3),NDArrayIndex.all());
```
Interval is fromInclusive, toExclusive; note that you can equivalently use the inclusive version: NDArrayIndex.interval(1,2,true);
```java
System.out.println(lastTwoRows);
[[0.20,0.57,0.60]
 [0.96,0.65,0.75]]

INDArray twoValues = random.get(NDArrayIndex.point(1),NDArrayIndex.interval(0, 2));
System.out.println(twoValues);
[ 0.20, 0.57]
```
These are views of the underlying array, **not** copies, which provides greater flexibility and avoids the cost of copying.
```java
twoValues.addi(5.0);
System.out.println(twoValues);
[ 5.20, 5.57]

System.out.println(random);
[[0.93,0.32,0.18]
 [5.20,5.57,0.60]
 [0.96,0.65,0.75]]
```
To avoid this in-place behaviour, use random.get(...).dup() to make a copy.

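For example (a sketch continuing the snippet above; `twoValuesCopy` is our own name):
```java
// dup() detaches the data from the original array.
INDArray twoValuesCopy = random.get(NDArrayIndex.point(1), NDArrayIndex.interval(0, 2)).dup();

twoValuesCopy.addi(5.0);
System.out.println(random); // unchanged: the copy was modified, not a view
```
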
|Method| What it does |
|:----|:-------------:|
|**Scalar**||
|INDArray.add(number)|Returns the result of adding `number` to each entry of `INDArray x`; e.g. myArray.add(2.0)|
|INDArray.addi(number)|Returns the result of adding `number` to each entry of `INDArray x`, in place.|
|ScalarAdd(INDArray x, Number num)|Returns the result of adding `num` to each entry of `INDArray x`.|
|ScalarDivision(INDArray x, Number num)|Returns the result of dividing each entry of `INDArray x` by `num`.|
|ScalarMax(INDArray x, Number num)|Compares each entry of `INDArray x` to `num` and returns the higher quantity.|
|ScalarMultiplication(INDArray x, Number num)|Returns the result of multiplying each entry of `INDArray x` by `num`.|
|ScalarReverseDivision(INDArray x, Number num)|Returns the result of dividing `num` by each element of `INDArray x`.|
|ScalarReverseSubtraction(INDArray x, Number num)|Returns the result of subtracting each entry of `INDArray x` from `num`.|
|ScalarSet(INDArray x, Number num)|Sets the value of each entry of `INDArray x` to `num`.|
|ScalarSubtraction(INDArray x, Number num)|Returns the result of subtracting `num` from each entry of `INDArray x`.|

If you do not understand the explanation of ND4J's syntax, cannot find a definition for a method, or would like to request that a function be added, please let us know on the [Gitter live chat](https://gitter.im/deeplearning4j/deeplearning4j).

---
title: Tensors in ND4J
short_title: Tensors
description: Vectors, Scalars, and Tensors in ND4J.
category: ND4J
weight: 2
---

## Tensors & ND4J

A vector, that column of numbers we feed into neural nets, is simply a subclass of a more general mathematical structure called a *tensor*. A tensor is a multidimensional array.

You are already familiar with a matrix composed of rows and columns: the rows extend along the y axis and the columns along the x axis. Each axis is a dimension. Tensors have additional dimensions.

Tensors also have a so-called [*rank*](http://mathworld.wolfram.com/TensorRank.html): a scalar, or single number, is of rank 0; a vector is rank 1; a matrix is rank 2; and entities of rank 3 and above are all simply called tensors.

It may be helpful to think of a scalar as a point, a vector as a line, a matrix as a plane, and tensors as objects of three dimensions or more. A matrix has rows and columns, two dimensions, and therefore is of rank 2. A three-dimensional tensor, such as those we use to represent color images, has channels, rows and columns, and therefore counts as rank 3.

As mathematical objects with multiple dimensions, tensors have a shape, and we specify that shape by treating tensors as n-dimensional arrays.

With ND4J, we do that by creating a new nd-array and feeding it data, shape and order as its parameters. In pseudo code, this would be:
```java
nd4j.createArray(data, shape, order)
```
In real code, this line
```java
INDArray arr = Nd4j.create(new float[]{1,2,3,4},new int[]{2,2},'c');
```
creates an array with four elements, whose shape is 2 by 2, and whose order is "row major," or rows first, which is the default in C. (In contrast, Fortran uses "column major" ordering, which could be specified with an 'f' as the third parameter.) The distinction between the two orderings, for the array created above, is best illustrated with a table:

| Row-major (C) | Column-major (Fortran) |
| :-------------: |:-------------:|
| [1,2] | [1,3] |
| [3,4] | [2,4] |

Once we create an n-dimensional array, we may want to work with slices of it. Rather than copying the data, which is expensive, we can simply "view" multi-dimensional slices. A slice of array "a" could be defined like this:
```java
a[0:5,3:4,6:7]
```
which would give you the first five channels, rows 3 to 4 and columns 6 to 7, and so forth for *n* dimensions, with each individual dimension's slice starting before the colon and ending after it.

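In ND4J itself, that kind of slice is expressed with NDArrayIndex, as in the subset examples on the syntax page. A sketch (the array name and shape are ours; assumes `org.nd4j.linalg.indexing.NDArrayIndex` is imported):
```java
INDArray a = Nd4j.rand(new int[]{6, 5, 8}); // a 3-d array: 6 x 5 x 8

// Roughly the pseudo-code slice a[0:5, 3:4, 6:7]; intervals are fromInclusive, toExclusive.
INDArray slice = a.get(
        NDArrayIndex.interval(0, 5),
        NDArrayIndex.interval(3, 4),
        NDArrayIndex.interval(6, 7));
```
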
## Linear Buffer

Now, while it is useful to imagine matrices as two-dimensional planes and 3-D tensors as cubic volumes, we store all tensors as a linear buffer. That is, they are all flattened to one long row of numbers.

For that linear buffer, we specify something called *stride*. Stride tells the computation layer how to interpret the flattened representation: it is the number of elements you skip in the buffer to get to the next channel or row or column. There is a stride for each dimension.

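You can inspect both the shape and the stride of any nd-array directly. A sketch (the exact stride values depend on the ordering and on your ND4J version):
```java
INDArray arr = Nd4j.create(new float[]{1, 2, 3, 4, 5, 6}, new int[]{2, 3}, 'c');

System.out.println(java.util.Arrays.toString(arr.shape()));  // [2, 3]
System.out.println(java.util.Arrays.toString(arr.stride())); // e.g. [3, 1] for 'c' order
```
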
Here's a brief video summarizing how tensors are converted into linear byte buffers for ND4J.

<iframe width="420" height="315" src="https://www.youtube.com/embed/EHHtyRKQIJ0" frameborder="0" allowfullscreen></iframe>

## Additional Resources and Definitions

The word tensor derives from the Latin *tendere*, or "to stretch"; therefore, tensor relates to *that which stretches, the stretcher*. Tensor was introduced to English from the German in 1915, after being coined by Woldemar Voigt in 1898. The mathematical object is called a tensor because an early application of the idea was the study of materials stretching under tension.
```
Tensors are generalizations of scalars (that have no indices), vectors (that have exactly one index), and matrices (that have exactly two indices) to an arbitrary number of indices. - Mathworld

tensor, n. a mathematical object analogous to but more general than a vector, represented by an array of components that are functions of the coordinates of a space.
```
* [Multidimensional Arrays](https://www.mathworks.com/help/matlab/math/multidimensional-arrays.html?requestedDomain=www.mathworks.com)
* [Tensor on Wikipedia](https://en.wikipedia.org/wiki/Tensor)