MIGraphX Execution Provider

The MIGraphX execution provider uses AMD's Deep Learning graph optimization engine to accelerate ONNX models on AMD GPUs.



NOTE: Please make sure to install the proper version of PyTorch specified here: PyTorch Version.

For nightly PyTorch builds, please see the PyTorch home page and select ROCm as the Compute Platform.

Pre-built binaries of ONNX Runtime with the MIGraphX EP are published for most language bindings. Please reference Install ORT.


ONNX Runtime   MIGraphX
main           5.4
1.14           5.4
1.13           5.4
1.13           5.3.2
1.12           5.2.3
1.12           5.2


For build instructions, please see the BUILD page.



To use the MIGraphX execution provider from C/C++, register it on the session options before creating the session:

// Create the environment and session options, then register the
// MIGraphX execution provider on GPU device 0.
Ort::Env env = Ort::Env{ORT_LOGGING_LEVEL_ERROR, "Default"};
Ort::SessionOptions so;
int device_id = 0;
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_MIGraphX(so, device_id));

The C API details are here.


When using the Python wheel from an ONNX Runtime build that includes the MIGraphX execution provider, it will automatically be prioritized over the default GPU or CPU execution providers. There is no need to separately register the execution provider.

Python APIs details are here.

Note that the next release (ORT 1.10) will require explicitly setting the providers parameter when instantiating InferenceSession if you want to use an execution provider other than the default CPU provider.
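A quick way to confirm that the MIGraphX EP is present in your build and actually selected by a session is sketched below (get_available_providers and get_providers are standard ONNX Runtime APIs; '<path to model>' is a placeholder):

import onnxruntime as ort

# List the execution providers compiled into this ONNX Runtime build.
print(ort.get_available_providers())

# Explicitly request MIGraphX, falling back to CPU.
session = ort.InferenceSession('<path to model>',
                               providers=['MIGraphXExecutionProvider', 'CPUExecutionProvider'])

# Confirm which providers the session is actually using, in priority order.
print(session.get_providers())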

You can check here for a Python script to run a model on either the CPU or MIGraphX execution provider.

Configuration Options

MIGraphX provides an environment variable, ORT_MIGRAPHX_FP16_ENABLE, to enable FP16 mode.
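For example, a minimal sketch of enabling FP16 mode from Python (the variable can equally be exported in the shell; the value '1' is an assumption for "enabled"):

import os

# Must be set before the session, and with it the MIGraphX EP, is created.
# Assumption: a value of '1' enables FP16 mode.
os.environ['ORT_MIGRAPHX_FP16_ENABLE'] = '1'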



The following sample creates a session that prefers the MIGraphX execution provider and falls back to the CPU execution provider if MIGraphX is unavailable:

import onnxruntime as ort

model_path = '<path to model>'

# Prefer MIGraphX; fall back to CPU if it is unavailable.
providers = [
    'MIGraphXExecutionProvider',
    'CPUExecutionProvider',
]

session = ort.InferenceSession(model_path, providers=providers)
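To actually run the model with that session, a sketch along these lines works (input names, shapes, and dtypes depend on the model; the float32 input here is an assumption):

import numpy as np

# Query the model's first input; its name and shape come from the model itself.
inp = session.get_inputs()[0]

# Replace any symbolic (dynamic) dimensions with 1 for this sketch.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
x = np.random.rand(*shape).astype(np.float32)  # assumes a float32 input

# Run the model; passing None as the first argument returns all outputs.
outputs = session.run(None, {inp.name: x})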