Commit b2a1273

* Upgrade presets for PyTorch 2.6.0

saudet committed Feb 3, 2025
1 parent 56229de commit b2a1273
Showing 1,536 changed files with 3,066 additions and 3,199 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -1,6 +1,6 @@

* Introduce `macosx-arm64` builds for ARPACK-NG, CMINPACK, FFTW, GSL, TensorFlow Lite, ONNX, ONNX Runtime ([issue #1069](https://github.com/bytedeco/javacpp-presets/issues/1069))
* Upgrade presets for OpenCV 4.11.0, DNNL 3.6.2, CPython 3.13.1, NumPy 2.2.1, SciPy 1.15.1, LLVM 19.1.6, ONNX Runtime 1.20.1
* Upgrade presets for OpenCV 4.11.0, DNNL 3.6.2, CPython 3.13.1, NumPy 2.2.1, SciPy 1.15.1, LLVM 19.1.6, PyTorch 2.6.0, ONNX Runtime 1.20.1

### November 16, 2024 version 1.5.11
* Enable distributed package using Gloo in presets for PyTorch ([pull #1510](https://github.com/bytedeco/javacpp-presets/pull/1510))
2 changes: 1 addition & 1 deletion README.md
@@ -223,7 +223,7 @@ Each child module in turn relies by default on the included [`cppbuild.sh` scrip
* NVIDIA Video Codec SDK 12.2.x https://developer.nvidia.com/nvidia-video-codec-sdk
* OpenCL 3.0.x https://github.com/KhronosGroup/OpenCL-ICD-Loader
* MXNet 1.9.x https://github.com/apache/incubator-mxnet
* PyTorch 2.5.x https://github.com/pytorch/pytorch
* PyTorch 2.6.x https://github.com/pytorch/pytorch
* SentencePiece 0.2.0 https://github.com/google/sentencepiece
* TensorFlow 1.15.x https://github.com/tensorflow/tensorflow
* TensorFlow Lite 2.18.x https://github.com/tensorflow/tensorflow
2 changes: 1 addition & 1 deletion platform/pom.xml
@@ -292,7 +292,7 @@
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
<version>2.5.1-${project.version}</version>
<version>2.6.0-${project.version}</version>
</dependency>
<dependency>
<groupId>org.bytedeco</groupId>
12 changes: 6 additions & 6 deletions pytorch/README.md
@@ -9,7 +9,7 @@ Introduction
------------
This directory contains the JavaCPP Presets module for:

* PyTorch 2.5.1 https://pytorch.org/
* PyTorch 2.6.0 https://pytorch.org/

Please refer to the parent README.md file for more detailed information about the JavaCPP Presets.

@@ -40,36 +40,36 @@ We can use [Maven 3](http://maven.apache.org/) to download and install automatic
<modelVersion>4.0.0</modelVersion>
<groupId>org.bytedeco.pytorch</groupId>
<artifactId>simplemnist</artifactId>
<version>1.5.11</version>
<version>1.5.12-SNAPSHOT</version>
<properties>
<exec.mainClass>SimpleMNIST</exec.mainClass>
</properties>
<dependencies>
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
<version>2.5.1-1.5.11</version>
<version>2.6.0-1.5.12-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies required to use CUDA, cuDNN, and NCCL -->
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform-gpu</artifactId>
<version>2.5.1-1.5.11</version>
<version>2.6.0-1.5.12-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies to use bundled CUDA, cuDNN, and NCCL -->
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>cuda-platform-redist</artifactId>
<version>12.6-9.5-1.5.11</version>
<version>12.6-9.5-1.5.12-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies to use bundled full version of MKL -->
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>mkl-platform-redist</artifactId>
<version>2025.0-1.5.11</version>
<version>2025.0-1.5.12-SNAPSHOT</version>
</dependency>
</dependencies>
<build>
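The pom above targets Maven; for Gradle users the same upgraded artifacts can be declared as below. This is a sketch, not taken from this repository: the coordinates simply mirror the versions in the pom, and the GPU/redist lines are optional.

```groovy
dependencies {
    // CPU build of the PyTorch presets (coordinates mirror the Maven pom above)
    implementation "org.bytedeco:pytorch-platform:2.6.0-1.5.12-SNAPSHOT"

    // Optional: GPU variant plus bundled CUDA, cuDNN, and NCCL
    implementation "org.bytedeco:pytorch-platform-gpu:2.6.0-1.5.12-SNAPSHOT"
    implementation "org.bytedeco:cuda-platform-redist:12.6-9.5-1.5.12-SNAPSHOT"

    // Optional: bundled full version of MKL
    implementation "org.bytedeco:mkl-platform-redist:2025.0-1.5.12-SNAPSHOT"
}
```

Note that `-SNAPSHOT` versions are typically resolved from the Sonatype OSS snapshots repository rather than Maven Central, so that repository would also need to be declared in the build.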
4 changes: 3 additions & 1 deletion pytorch/cppbuild.sh
@@ -8,6 +8,8 @@ if [[ -z "$PLATFORM" ]]; then
fi

export BUILD_TEST=0
#export CUDAHOSTCC="clang"
#export CUDAHOSTCXX="clang++"
export CUDACXX="/usr/local/cuda/bin/nvcc"
export CUDA_HOME="/usr/local/cuda"
export CUDNN_HOME="/usr/local/cuda"
@@ -38,7 +40,7 @@ if [[ $PLATFORM == windows* ]]; then
export PYTHON_BIN_PATH=$(which python.exe)
fi

PYTORCH_VERSION=2.5.1
PYTORCH_VERSION=2.6.0

export PYTORCH_BUILD_VERSION="$PYTORCH_VERSION"
export PYTORCH_BUILD_NUMBER=1
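The two new (commented-out) exports show where a different CUDA host compiler could be plugged in. A hypothetical invocation, assuming a local clang toolchain and the repository's usual `bash cppbuild.sh install pytorch` entry point:

```shell
# Hypothetical override: have nvcc use clang as the host compiler,
# mirroring the commented-out exports added to cppbuild.sh above.
export CUDAHOSTCC="clang"
export CUDAHOSTCXX="clang++"

# Then run the preset build as usual (from the repository root):
#   bash cppbuild.sh install pytorch
echo "host compilers: $CUDAHOSTCC / $CUDAHOSTCXX"
```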
2 changes: 1 addition & 1 deletion pytorch/platform/gpu/pom.xml
@@ -12,7 +12,7 @@

<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform-gpu</artifactId>
<version>2.5.1-${project.parent.version}</version>
<version>2.6.0-${project.parent.version}</version>
<name>JavaCPP Presets Platform GPU for PyTorch</name>

<properties>
2 changes: 1 addition & 1 deletion pytorch/platform/pom.xml
@@ -12,7 +12,7 @@

<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
<version>2.5.1-${project.parent.version}</version>
<version>2.6.0-${project.parent.version}</version>
<name>JavaCPP Presets Platform for PyTorch</name>

<properties>
2 changes: 1 addition & 1 deletion pytorch/pom.xml
@@ -11,7 +11,7 @@

<groupId>org.bytedeco</groupId>
<artifactId>pytorch</artifactId>
<version>2.5.1-${project.parent.version}</version>
<version>2.6.0-${project.parent.version}</version>
<name>JavaCPP Presets for PyTorch</name>

<dependencies>
4 changes: 2 additions & 2 deletions pytorch/samples/pom.xml
@@ -12,14 +12,14 @@
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform</artifactId>
<version>2.5.1-1.5.12-SNAPSHOT</version>
<version>2.6.0-1.5.12-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies required to use CUDA, cuDNN, and NCCL -->
<dependency>
<groupId>org.bytedeco</groupId>
<artifactId>pytorch-platform-gpu</artifactId>
<version>2.5.1-1.5.12-SNAPSHOT</version>
<version>2.6.0-1.5.12-SNAPSHOT</version>
</dependency>

<!-- Additional dependencies to use bundled CUDA, cuDNN, and NCCL -->
@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -32,11 +32,18 @@ public class AOTIModelContainerRunner extends Pointer {


public native @ByVal TensorVector run(
@ByRef TensorVector inputs);
@Const @ByRef TensorVector inputs,
Pointer stream_handle/*=nullptr*/);
public native @ByVal TensorVector run(
@Const @ByRef TensorVector inputs);

public native @ByVal ExtraFilesMap getConstantNamesToOriginalFQNs();
public native @ByVal StringIntMap getConstantNamesToDtypes();
public native void update_inactive_constant_buffer(@Cast("const torch::inductor::TensorConstantMap*") @ByRef SizeTStringMap const_map);
public native void update_constant_buffer(
@ByRef StringTensorUMap tensor_map,
@Cast("bool") boolean use_inactive,
@Cast("bool") boolean validate_full_updates);
public native void update_constant_buffer(
@Cast("const torch::inductor::TensorConstantMap*") @ByRef SizeTStringMap const_map,
@Cast("bool") boolean use_inactive,
@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -45,5 +45,9 @@ public AOTIModelContainerRunnerCpu(
private native void allocate(
@StdString String model_so_path);

public native @ByVal TensorVector run(@ByRef TensorVector inputs);
public native @ByVal TensorVector run(
@Const @ByRef TensorVector inputs,
Pointer stream_handle/*=nullptr*/);
public native @ByVal TensorVector run(
@Const @ByRef TensorVector inputs);
}
2 changes: 1 addition & 1 deletion pytorch/src/gen/java/org/bytedeco/pytorch/ASMoutput.java
@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -36,6 +36,8 @@ public class AcceleratorHooksInterface extends Pointer {
// Whether the device at device_index is fully initialized or not.
public native @Cast("bool") boolean hasPrimaryContext(@Cast("c10::DeviceIndex") byte device_index);

public native void init();

public native @Cast("c10::DeviceIndex") byte deviceCount();

public native void setCurrentDevice(@Cast("c10::DeviceIndex") byte device);
@@ -49,4 +51,14 @@ public class AcceleratorHooksInterface extends Pointer {
public native @Cast("bool") boolean isPinnedPtr(@Const Pointer data);

public native Allocator getPinnedMemoryAllocator();

public native @ByVal Device getDeviceFromPtr(Pointer data);

public native @Const @ByRef Generator getDefaultGenerator(
@Cast("c10::DeviceIndex") byte device_index/*=-1*/);
public native @Const @ByRef Generator getDefaultGenerator();

public native @ByVal Generator getNewGenerator(
@Cast("c10::DeviceIndex") byte device_index/*=-1*/);
public native @ByVal Generator getNewGenerator();
}
@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

10 changes: 5 additions & 5 deletions pytorch/src/gen/java/org/bytedeco/pytorch/Adagrad.java
@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -26,15 +26,15 @@ public class Adagrad extends Optimizer {
public Adagrad(Pointer p) { super(p); }

public Adagrad(
@ByVal OptimizerParamGroupVector param_groups,
@Const @ByRef OptimizerParamGroupVector param_groups,
@ByVal(nullValue = "torch::optim::AdagradOptions{}") AdagradOptions defaults) { super((Pointer)null); allocate(param_groups, defaults); }
private native void allocate(
@ByVal OptimizerParamGroupVector param_groups,
@Const @ByRef OptimizerParamGroupVector param_groups,
@ByVal(nullValue = "torch::optim::AdagradOptions{}") AdagradOptions defaults);
public Adagrad(
@ByVal OptimizerParamGroupVector param_groups) { super((Pointer)null); allocate(param_groups); }
@Const @ByRef OptimizerParamGroupVector param_groups) { super((Pointer)null); allocate(param_groups); }
private native void allocate(
@ByVal OptimizerParamGroupVector param_groups);
@Const @ByRef OptimizerParamGroupVector param_groups);

public Adagrad(@ByVal TensorVector params, @ByVal(nullValue = "torch::optim::AdagradOptions{}") AdagradOptions defaults) { super((Pointer)null); allocate(params, defaults); }
private native void allocate(@ByVal TensorVector params, @ByVal(nullValue = "torch::optim::AdagradOptions{}") AdagradOptions defaults);
4 changes: 2 additions & 2 deletions pytorch/src/gen/java/org/bytedeco/pytorch/AdagradOptions.java
@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -17,7 +17,7 @@
import static org.bytedeco.javacpp.global.chrono.*;

import static org.bytedeco.pytorch.global.torch.*;
// namespace torch
// namespace torch::serialize

@Namespace("torch::optim") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class AdagradOptions extends OptimizerCloneableAdagradOptions {
@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

10 changes: 5 additions & 5 deletions pytorch/src/gen/java/org/bytedeco/pytorch/Adam.java
@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -26,15 +26,15 @@ public class Adam extends Optimizer {
public Adam(Pointer p) { super(p); }

public Adam(
@ByVal OptimizerParamGroupVector param_groups,
@Const @ByRef OptimizerParamGroupVector param_groups,
@ByVal(nullValue = "torch::optim::AdamOptions{}") AdamOptions defaults) { super((Pointer)null); allocate(param_groups, defaults); }
private native void allocate(
@ByVal OptimizerParamGroupVector param_groups,
@Const @ByRef OptimizerParamGroupVector param_groups,
@ByVal(nullValue = "torch::optim::AdamOptions{}") AdamOptions defaults);
public Adam(
@ByVal OptimizerParamGroupVector param_groups) { super((Pointer)null); allocate(param_groups); }
@Const @ByRef OptimizerParamGroupVector param_groups) { super((Pointer)null); allocate(param_groups); }
private native void allocate(
@ByVal OptimizerParamGroupVector param_groups);
@Const @ByRef OptimizerParamGroupVector param_groups);
public Adam(@ByVal TensorVector params, @ByVal(nullValue = "torch::optim::AdamOptions{}") AdamOptions defaults) { super((Pointer)null); allocate(params, defaults); }
private native void allocate(@ByVal TensorVector params, @ByVal(nullValue = "torch::optim::AdamOptions{}") AdamOptions defaults);
public Adam(@ByVal TensorVector params) { super((Pointer)null); allocate(params); }
4 changes: 2 additions & 2 deletions pytorch/src/gen/java/org/bytedeco/pytorch/AdamOptions.java
@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -17,7 +17,7 @@
import static org.bytedeco.javacpp.global.chrono.*;

import static org.bytedeco.pytorch.global.torch.*;
// namespace torch
// namespace torch::serialize

@Namespace("torch::optim") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class AdamOptions extends OptimizerCloneableAdamOptions {
@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

10 changes: 5 additions & 5 deletions pytorch/src/gen/java/org/bytedeco/pytorch/AdamW.java
@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -26,15 +26,15 @@ public class AdamW extends Optimizer {
public AdamW(Pointer p) { super(p); }

public AdamW(
@ByVal OptimizerParamGroupVector param_groups,
@Const @ByRef OptimizerParamGroupVector param_groups,
@ByVal(nullValue = "torch::optim::AdamWOptions{}") AdamWOptions defaults) { super((Pointer)null); allocate(param_groups, defaults); }
private native void allocate(
@ByVal OptimizerParamGroupVector param_groups,
@Const @ByRef OptimizerParamGroupVector param_groups,
@ByVal(nullValue = "torch::optim::AdamWOptions{}") AdamWOptions defaults);
public AdamW(
@ByVal OptimizerParamGroupVector param_groups) { super((Pointer)null); allocate(param_groups); }
@Const @ByRef OptimizerParamGroupVector param_groups) { super((Pointer)null); allocate(param_groups); }
private native void allocate(
@ByVal OptimizerParamGroupVector param_groups);
@Const @ByRef OptimizerParamGroupVector param_groups);
public AdamW(@ByVal TensorVector params, @ByVal(nullValue = "torch::optim::AdamWOptions{}") AdamWOptions defaults) { super((Pointer)null); allocate(params, defaults); }
private native void allocate(@ByVal TensorVector params, @ByVal(nullValue = "torch::optim::AdamWOptions{}") AdamWOptions defaults);
public AdamW(@ByVal TensorVector params) { super((Pointer)null); allocate(params); }
4 changes: 2 additions & 2 deletions pytorch/src/gen/java/org/bytedeco/pytorch/AdamWOptions.java
@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -17,7 +17,7 @@
import static org.bytedeco.javacpp.global.chrono.*;

import static org.bytedeco.pytorch.global.torch.*;
// namespace torch
// namespace torch::serialize

@Namespace("torch::optim") @NoOffset @Properties(inherit = org.bytedeco.pytorch.presets.torch.class)
public class AdamWOptions extends OptimizerCloneableAdamWOptions {
@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;

@@ -1,4 +1,4 @@
// Targeted by JavaCPP version 1.5.11: DO NOT EDIT THIS FILE
// Targeted by JavaCPP version 1.5.12-SNAPSHOT: DO NOT EDIT THIS FILE

package org.bytedeco.pytorch;
