The Eclipse Deeplearning4J (DL4J) ecosystem is a set of projects intended to support all the needs of a JVM-based deep learning application. This means starting with the raw data, loading and preprocessing it from whatever source and format it is in, and going on to build and tune a wide variety of simple and complex deep learning networks.

Because Deeplearning4J runs on the JVM, you can use it from a wide variety of JVM-based languages besides Java, such as Scala, Kotlin, Clojure and many more.

The DL4J stack comprises:

  • DL4J: High-level API to build MultiLayerNetworks and ComputationGraphs with a variety of layers, including custom ones. Supports importing Keras models from h5, including tf.keras models (as of 1.0.0-beta7), and also supports distributed training on Apache Spark
  • ND4J: General-purpose linear algebra library with over 500 mathematical, linear algebra and deep learning operations. ND4J is based on the highly optimized C++ codebase LibND4J, which provides CPU (AVX2/AVX-512) and GPU (CUDA) support and acceleration through libraries such as OpenBLAS, OneDNN (MKL-DNN), cuDNN, cuBLAS, etc. (see the short sketch after this list)
  • SameDiff: Part of the ND4J library, SameDiff is our automatic differentiation / deep learning framework. SameDiff uses a graph-based (define-then-run) approach, similar to TensorFlow graph mode. Eager execution (as in TensorFlow 2.x eager mode / PyTorch) is planned. SameDiff supports importing TensorFlow frozen models in the .pb (protobuf) format. Import for ONNX, TensorFlow SavedModel and Keras models is planned. Deeplearning4j also has full SameDiff support for easily writing custom layers and loss functions.
  • DataVec: ETL for machine learning data in a wide variety of formats and files (HDFS, Spark, Images, Video, Audio, CSV, Excel etc.)
  • Arbiter: Library for hyperparameter search
  • LibND4J: C++ library that underpins everything. For more information on how the JVM accesses native arrays and operations, refer to JavaCPP
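
To give a feel for ND4J, here is a minimal sketch of basic array operations. The class name Nd4jTaste is made up for this illustration; Nd4j and INDArray are the library's actual entry points.

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class Nd4jTaste {
    public static void main(String[] args) {
        INDArray a = Nd4j.rand(2, 3);      // 2x3 matrix of uniform random values
        INDArray b = Nd4j.ones(3, 2);      // 3x2 matrix of ones
        INDArray c = a.mmul(b);            // matrix multiplication, result is 2x2
        System.out.println(c.add(1.0));    // element-wise scalar addition
    }
}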

All projects in the DL4J ecosystem support Windows, Linux and macOS. Hardware support includes CUDA GPUs (versions 10.0, 10.1, 10.2 and 11.2; CUDA is not supported on macOS), x86 CPUs (x86_64, AVX2, AVX-512), ARM CPUs (arm, arm64, armhf) and PowerPC (ppc64le).

Using Eclipse Deeplearning4J in your project

Deeplearning4J has quite a few dependencies. For this reason, we only support usage with a build tool.

<dependencies>
  <dependency>
      <groupId>org.deeplearning4j</groupId>
      <artifactId>deeplearning4j-core</artifactId>
      <version>1.0.0-beta7</version>
  </dependency>
  <dependency>
      <groupId>org.nd4j</groupId>
      <artifactId>nd4j-native-platform</artifactId>
      <version>1.0.0-beta7</version>
  </dependency>
</dependencies>

Add these dependencies to your pom.xml file to use Deeplearning4J with the CPU backend. A full standalone project example is available in the example repository if you want to start a new Maven project from scratch.
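
If you use Gradle instead of Maven, the same coordinates can be declared as follows. This is a sketch assuming a standard Gradle Java project; the coordinates simply mirror the Maven ones above.

dependencies {
    // CPU backend, same artifacts as the Maven example
    implementation 'org.deeplearning4j:deeplearning4j-core:1.0.0-beta7'
    implementation 'org.nd4j:nd4j-native-platform:1.0.0-beta7'
}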

A taste of code

Deeplearning4J offers a very high-level API for defining even complex neural networks. The following example shows how LeNet, a convolutional neural network, is defined in DL4J.

NeuralNetConfiguration conf = NeuralNetConfiguration.builder()
                .seed(seed)                            // fixed RNG seed for reproducible runs
                .l2(0.0005)                            // L2 weight regularization
                .weightInit(WeightInit.XAVIER)
                .updater(new Adam(1e-3))               // Adam optimizer with learning rate 1e-3
                // first convolution: 5x5 kernel, 20 output channels
                .layer(new ConvolutionLayer.Builder(5, 5)
                        .stride(1,1)
                        .nOut(20)
                        .activation(Activation.IDENTITY)
                        .build())
                // 2x2 max pooling
                .layer(new SubsamplingLayer.Builder(PoolingType.MAX)
                        .kernelSize(2,2)
                        .stride(2,2)
                        .build())
                // second convolution: 5x5 kernel, 50 output channels
                .layer(new ConvolutionLayer.Builder(5, 5)
                        .stride(1,1)
                        .nOut(50)
                        .activation(Activation.IDENTITY)
                        .build())
                // 2x2 max pooling
                .layer(new SubsamplingLayer.Builder(PoolingType.MAX)
                        .kernelSize(2,2)
                        .stride(2,2)
                        .build())
                // fully connected layer with 500 units
                .layer(new DenseLayer.Builder().activation(Activation.RELU)
                        .nOut(500).build())
                // softmax classifier over outputNum classes
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nOut(outputNum)
                        .activation(Activation.SOFTMAX)
                        .build())
                // input: flattened 28x28 single-channel images (e.g. MNIST)
                .inputType(InputType.convolutionalFlat(28,28,1))
                .build();
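
Once the configuration is built, training follows the usual DL4J pattern. Below is a minimal sketch, assuming the MnistDataSetIterator from the DL4J datasets module and that a MultiLayerNetwork can be constructed directly from this configuration; the batch size, seed and epoch count are arbitrary example values.

MultiLayerNetwork model = new MultiLayerNetwork(conf);
model.init();

// MNIST training set: mini-batches of 64, shuffled with a fixed seed
DataSetIterator mnistTrain = new MnistDataSetIterator(64, true, 12345);

for (int epoch = 0; epoch < 5; epoch++) {
    model.fit(mnistTrain);   // one full pass over the training data
    mnistTrain.reset();
}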

Documentation, Guides and Tutorials

You can find the official documentation for Deeplearning4J and the other libraries of its ecosystem at http://deeplearning4j.konduit.ai/.

Want some examples?

We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples

Building from source

Using the official pre-compiled releases (see above) is preferred. But if you want to build from source, first take a look at the prerequisites here: https://deeplearning4j.konduit.ai/getting-started/build-from-source.

To build everything, you can use commands like:

./change-cuda-versions.sh x.x
./change-scala-versions.sh 2.xx
./change-spark-versions.sh x
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=x.x -Dlibnd4j.compute=xx

or

mvn -B -V -U clean install -pl '!jumpy,!pydatavec,!pydl4j' -Dlibnd4j.platform=linux-x86_64 -Dlibnd4j.chip=cuda -Dlibnd4j.cuda=9.2 -Dlibnd4j.compute=<your GPU CC> -Djavacpp.platform=linux-x86_64 -Dmaven.test.skip=true

An example of a GPU "CC", or compute capability, is 61 for a Titan X Pascal.
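
For instance, a build targeting CUDA 11.2 on a Titan X Pascal might look like the following; the CUDA version and compute capability are illustrative and should be adapted to your own GPU and toolkit:

./change-cuda-versions.sh 11.2
mvn clean install -Dmaven.test.skip -Dlibnd4j.cuda=11.2 -Dlibnd4j.compute=61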

License

Apache License 2.0