
ROCm Execution Provider

The ROCm Execution Provider enables hardware-accelerated computation on AMD ROCm-enabled GPUs.


Install

NOTE: Please make sure to install the proper version of PyTorch specified here: PyTorch Version.

For nightly PyTorch builds, please see the PyTorch home page and select ROCm as the Compute Platform.

Pre-built binaries of ONNX Runtime with ROCm EP are published for most language bindings. Please reference Install ORT.

Requirements

ONNX Runtime    ROCm
main            5.4
1.13            5.4
1.13            5.3.2
1.12            5.2.3
1.12            5.2

Build

For build instructions, please see the BUILD page.

Usage

C/C++

// Create the environment and session options.
Ort::Env env = Ort::Env{ORT_LOGGING_LEVEL_ERROR, "Default"};
Ort::SessionOptions so;

// Register the ROCm EP on GPU 0; unsupported ops fall back to the CPU EP.
int device_id = 0;
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_ROCm(so, device_id));

The C API details are here.

Python

The Python API details are here.

Performance Tuning

For performance tuning, please see guidance on this page: ONNX Runtime Perf Tuning

Samples

Python

import onnxruntime as ort

model_path = '<path to model>'

# Providers are tried in priority order: ROCm first, CPU as the fallback.
providers = [
    'ROCmExecutionProvider',
    'CPUExecutionProvider',
]

session = ort.InferenceSession(model_path, providers=providers)