Mirror of https://github.com/ultralytics/ultralytics.git, synced 2025-09-15 15:48:41 +08:00
Fix inaccurate example in Export docs (#17161)
parent 98aa4bbd43
commit f80d0d75c4
````diff
@@ -136,13 +136,13 @@ INT8 quantization is an excellent way to compress the model and speed up inference
         from ultralytics import YOLO
 
         model = YOLO("yolo11n.pt")  # Load a model
-        model.export(format="onnx", int8=True)
+        model.export(format="engine", int8=True)
         ```
 
     === "CLI"
 
         ```bash
-        yolo export model=yolo11n.pt format=onnx int8=True  # export model with INT8 quantization
+        yolo export model=yolo11n.pt format=engine int8=True  # export TensorRT model with INT8 quantization
         ```
 
 INT8 quantization can be applied to various formats, such as TensorRT and CoreML. More details can be found in the [Export section](../modes/export.md).
````
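For context, here is the corrected usage in full as a minimal runnable sketch of the Ultralytics Python export API shown in the diff. The `coreml` line is an extra illustration of the other INT8-capable format the docs mention, not part of this commit:

```python
from ultralytics import YOLO

# Load a pretrained YOLO11n model (weights are downloaded on first use)
model = YOLO("yolo11n.pt")

# INT8 quantization applies to formats that support it, such as
# TensorRT ("engine") and CoreML ("coreml"); passing int8=True to the
# plain ONNX export was the inaccuracy this commit fixes.
model.export(format="engine", int8=True)  # TensorRT engine with INT8 quantization
model.export(format="coreml", int8=True)  # CoreML model with INT8 quantization
```

Note that the `engine` export requires an NVIDIA GPU with TensorRT installed, since the engine is built for the local device.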