<?xml version="1.0" encoding="UTF-8"?>
<!--~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ Copyright (c) 2015-2018 Skymind, Inc.
~
~ This program and the accompanying materials are made available under the
~ terms of the Apache License, Version 2.0 which is available at
~ https://www.apache.org/licenses/LICENSE-2.0.
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
~ WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
~ License for the specific language governing permissions and limitations
~ under the License.
~
~ SPDX-License-Identifier: Apache-2.0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-->
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<groupId>org.deeplearning4j</groupId>
<artifactId>deeplearning4j</artifactId>
<version>1.0.0-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<groupId>org.nd4j</groupId>
<artifactId>libnd4j</artifactId>
<packaging>pom</packaging>
<name>libnd4j</name>
<description>The C++ engine that powers the scientific computing library ND4J - n-dimensional arrays for Java</description>
<url>http://nd4j.org/</url>
<licenses>
<license>
<name>Apache License, Version 2.0</name>
<url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
<distribution>repo</distribution>
</license>
</licenses>
<developers>
<developer>
<id>agibsonccc</id>
<name>Adam Gibson</name>
<email>adam@skymind.io</email>
</developer>
<developer>
<id>raver119</id>
<name>raver119</name>
</developer>
<developer>
<id>saudet</id>
<name>Samuel Audet</name>
</developer>
</developers>
<properties>
<!-- CUDA version is linked with the artifact name so cannot move to parent pom.xml -->
<cuda.version>10.1</cuda.version>
<cudnn.version>7.6</cudnn.version>
<libnd4j.build>release</libnd4j.build>
<libnd4j.chip>cpu</libnd4j.chip>
<libnd4j.platform>${javacpp.platform}</libnd4j.platform>
<libnd4j.extension></libnd4j.extension>
<libnd4j.cuda></libnd4j.cuda>
<libnd4j.compute></libnd4j.compute>
<libnd4j.classifier>${libnd4j.platform}</libnd4j.classifier>
<libnd4j.buildthreads></libnd4j.buildthreads>
<libnd4j.helper></libnd4j.helper>
</properties>
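<!-- Illustrative only: the properties above are handed to buildnativeoperations.sh by the
     javacpp plugin configured below, so a typical CPU build can be tuned from the command
     line, for example (values are examples, not defaults enforced here):
     mvn clean package -Dlibnd4j.build=debug -Dlibnd4j.platform=linux-x86_64 -->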
<build>
<extensions>
<extension>
<groupId>org.kuali.maven.wagons</groupId>
<artifactId>maven-s3-wagon</artifactId>
<version>1.2.1</version>
</extension>
</extensions>
<plugins>
<!-- Infer the number of CPU cores and put it in the cpu.core.count property -->
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<version>${maven-build-helper-plugin.version}</version>
<executions>
<execution>
<id>get-cpu-count</id>
<goals>
<goal>cpu-count</goal>
</goals>
<configuration>
<cpuCount>cpu.core.count</cpuCount>
</configuration>
</execution>
</executions>
</plugin>
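<!-- Sketch of the intended effect (assumed, not verified here): on an 8-core machine the
     cpu-count goal sets cpu.core.count=8, which the multi-thread profile further below
     forwards to the native build as "-j 8". -->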
<plugin>
<groupId>org.bytedeco</groupId>
<artifactId>javacpp</artifactId>
<version>${javacpp.version}</version>
<dependencies>
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>openblas-platform</artifactId>
<version>${openblas.version}-${javacpp-presets.version}</version>
</dependency>
</dependencies>
<configuration>
<properties>${libnd4j.platform}</properties>
<buildResources>
<buildResource>/${javacpp.platform.library.path}/</buildResource>
<buildResource>/org/bytedeco/openblas/${libnd4j.platform}/</buildResource>
</buildResources>
<includeResources>
<includeResource>/${javacpp.platform.library.path}/include/</includeResource>
<includeResource>/org/bytedeco/openblas/${libnd4j.platform}/include/</includeResource>
</includeResources>
<linkResources>
<linkResource>/${javacpp.platform.library.path}/</linkResource>
<linkResource>/${javacpp.platform.library.path}/lib/</linkResource>
<linkResource>/org/bytedeco/openblas/${libnd4j.platform}/</linkResource>
<linkResource>/org/bytedeco/openblas/${libnd4j.platform}/lib/</linkResource>
</linkResources>
</configuration>
<executions>
<execution>
<id>javacpp-cppbuild-validate</id>
<phase>validate</phase>
<goals>
<goal>build</goal>
</goals>
</execution>
<execution>
<id>javacpp-cppbuild-compile</id>
<phase>compile</phase>
<goals>
<goal>build</goal>
</goals>
<configuration>
<skip>${libnd4j.cpu.compile.skip}</skip>
<buildCommand>
<program>bash</program>
<argument>${project.basedir}/buildnativeoperations.sh</argument>
<argument>--build-type</argument>
<argument>${libnd4j.build}</argument>
<argument>--chip</argument>
<argument>${libnd4j.chip}</argument>
<argument>--platform</argument>
<argument>${libnd4j.platform}</argument>
<argument>--chip-extension</argument>
<argument>${libnd4j.extension}</argument>
<argument>--chip-version</argument>
<argument>${cuda.version}</argument>
<argument>--compute</argument>
<argument>${libnd4j.compute}</argument>
<argument>${libnd4j.tests}</argument>
<argument>-j</argument>
<argument>${libnd4j.buildthreads}</argument>
<argument>-h</argument>
<argument>${libnd4j.helper}</argument>
</buildCommand>
<workingDirectory>${project.basedir}</workingDirectory>
</configuration>
</execution>
<execution>
<id>libnd4j-test-run</id>
<phase>test</phase>
<goals>
<goal>build</goal>
</goals>
<configuration>
<skip>${libnd4j.test.skip}</skip>
<workingDirectory>${basedir}/tests_cpu</workingDirectory>
<buildCommand>
<program>bash</program>
<argument>run_tests.sh</argument>
</buildCommand>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-clean-plugin</artifactId>
<version>2.6</version>
<executions>
<execution>
<id>javacpp-cppbuild-clean</id>
<phase>clean</phase>
<goals>
<goal>clean</goal>
</goals>
<configuration>
<filesets>
<fileset>
<directory>blasbuild</directory>
</fileset>
</filesets>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<version>3.0.0</version>
<configuration>
<descriptors>
<descriptor>assembly.xml</descriptor>
</descriptors>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
<profiles>
<!-- Use -Dlibnd4j.singlethread to use single threaded build (multi-threaded by default) -->
<profile>
<id>libnd4j-single-thread</id>
<activation>
<property>
<name>libnd4j.singlethread</name>
</property>
</activation>
<properties>
<libnd4j.buildthreads>1</libnd4j.buildthreads>
</properties>
</profile>
<profile>
<id>libnd4j-multi-thread</id>
<activation>
<property>
<name>!libnd4j.singlethread</name>
</property>
</activation>
<properties>
<!-- Note: CPU core count is from build helper plugin -->
<libnd4j.buildthreads>${cpu.core.count}</libnd4j.buildthreads>
</properties>
</profile>
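<!-- Example invocations (illustrative): "mvn package -Dlibnd4j.singlethread" builds the
     native library with a single compile job; omitting the property uses all detected
     cores via ${cpu.core.count}. -->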
<profile>
<id>chip</id>
<activation>
<property>
<name>libnd4j.chip</name>
</property>
</activation>
<properties>
<libnd4j.classifier>${libnd4j.platform}-${libnd4j.chip}-${cuda.version}</libnd4j.classifier>
</properties>
</profile>
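<!-- Illustrative example: passing -Dlibnd4j.chip=cuda activates this profile and yields an
     artifact classifier such as linux-x86_64-cuda-10.1 (platform value assumed). -->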
<profile>
<id>extension</id>
<activation>
<property>
<name>libnd4j.extension</name>
</property>
</activation>
<properties>
<libnd4j.classifier>${libnd4j.platform}-${libnd4j.extension}</libnd4j.classifier>
</properties>
</profile>
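<!-- Illustrative example: -Dlibnd4j.extension=avx2 activates this profile and yields a
     classifier such as linux-x86_64-avx2 (platform value assumed); the same property is
     also passed to buildnativeoperations.sh as the chip extension. -->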
<profile>
<id>cuda</id>
<activation>
<property>
<name>libnd4j.cuda</name>
</property>
</activation>
<properties>
<!-- Leave libnd4j.chip and libnd4j.classifier as is to build CPU version as well. -->
<!-- <libnd4j.chip>cuda</libnd4j.chip> -->
<!-- <libnd4j.classifier>${libnd4j.platform}-cuda-${cuda.version}</libnd4j.classifier> -->
</properties>
<build>
<plugins>
<plugin>
<groupId>org.bytedeco</groupId>
<artifactId>javacpp</artifactId>
<version>${javacpp.version}</version>
<configuration>
<properties>${libnd4j.platform}</properties>
</configuration>
<executions>
<execution>
<id>javacpp-cppbuild-compile-cuda</id>
<phase>compile</phase>
<goals>
<goal>build</goal>
</goals>
<configuration>
<skip>${libnd4j.cuda.compile.skip}</skip>
<buildCommand>
<program>bash</program>
<argument>${project.basedir}/buildnativeoperations.sh</argument>
<argument>--build-type</argument>
<argument>${libnd4j.build}</argument>
<argument>--chip</argument>
<argument>cuda</argument>
<argument>--platform</argument>
<argument>${libnd4j.platform}</argument>
<argument>--chip-extension</argument>
<argument>${libnd4j.extension}</argument>
<argument>--chip-version</argument>
<argument>${cuda.version}</argument>
<argument>--compute</argument>
<argument>${libnd4j.compute}</argument>
<argument>${libnd4j.tests}</argument>
</buildCommand>
<workingDirectory>${project.basedir}</workingDirectory>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<version>3.0.0</version>
<executions>
<execution>
<id>libnd4j-package-cuda</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
<configuration>
<descriptors>
<descriptor>assembly-cuda.xml</descriptor>
</descriptors>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
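<!-- Illustrative example (property values assumed; only the presence of libnd4j.cuda matters
     for activation):
     mvn package -Dlibnd4j.cuda=true -Dlibnd4j.compute=61
     builds the CUDA backend in addition to the CPU build configured above. -->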
<profile>
<id>clean-tests</id>
<activation>
<file>
<exists>${basedir}/tests_cpu/Makefile</exists>
</file>
</activation>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.6.0</version>
<executions>
<execution>
<id>libnd4j-test-clean</id>
<phase>clean</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>make</executable>
<workingDirectory>${basedir}/tests_cpu</workingDirectory>
<arguments>
<argument>clean</argument>
</arguments>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
<!-- Profiles to set the default libnd4j.helper property, for example: mkldnn -->
<profile>
<id>libnd4j-helper-avx2</id>
<activation>
<property>
<name>libnd4j.extension</name>
<value>avx2</value>
</property>
</activation>
<properties>
<libnd4j.helper>mkldnn</libnd4j.helper>
</properties>
</profile>
<profile>
<id>libnd4j-helper-avx512</id>
<activation>
<property>
<name>libnd4j.extension</name>
<value>avx512</value>
</property>
</activation>
<properties>
<libnd4j.helper>mkldnn</libnd4j.helper>
</properties>
</profile>
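<!-- Illustrative example: -Dlibnd4j.extension=avx512 activates both the extension profile
     above and this helper profile, so the native build is invoked with "-h mkldnn"
     (see the -h argument in the compile execution). -->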
</profiles>
</project>