Compare commits


1 commit

Author | SHA1 | Message | Date
Istvan Kiss | d946fbeaee | Precision support page update | 2025-02-04 15:03:31 +01:00


@@ -1,19 +1,23 @@
.. meta::
:description: Supported data types in ROCm
:keywords: int8, float8, float8 (E4M3), float8 (E5M2), bfloat8, float16, half, bfloat16, tensorfloat32, float,
float32, float64, double, AMD, ROCm, AMDGPU
:description: Supported data types of AMD GPUs and libraries in ROCm.
:keywords: precision, data types, HIP types, int8, float8, float8 (E4M3),
float8 (E5M2), bfloat8, float16, half, bfloat16, tensorfloat32,
float, float32, float64, double, AMD data types, HIP data types,
ROCm precision, ROCm data types
*************************************************************
Precision support
Data types and precision support
*************************************************************
Use the following sections to identify data types and HIP types ROCm™ supports.
This topic lists the supported data types of AMD GPUs and ROCm libraries.
Corresponding :doc:`HIP <hip:index>` data types are also noted.
Integral types
==========================================
The signed and unsigned integral types that are supported by ROCm are listed in the following table,
together with their corresponding HIP type and a short description.
The signed and unsigned integral types supported by ROCm are listed in
the following table, along with their corresponding HIP type and a short
description.
.. list-table::
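For example, the fixed-width integer types from ``<cstdint>`` map directly onto these HIP types and can be used in device code as-is. The following is a minimal sketch (the kernel name and buffers are illustrative, not part of any ROCm library):

.. code-block:: cpp

   #include <hip/hip_runtime.h>
   #include <cstdint>

   // Each thread widens one 8-bit signed integer to 64 bits.
   __global__ void widen_i8_to_i64(const int8_t* in, int64_t* out, int n)
   {
       int i = blockIdx.x * blockDim.x + threadIdx.x;
       if (i < n) {
           out[i] = static_cast<int64_t>(in[i]);
       }
   }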
@@ -46,8 +50,8 @@ together with their corresponding HIP type and a short description.
Floating-point types
==========================================
The floating-point types that are supported by ROCm are listed in the following table, together with
their corresponding HIP type and a short description.
The floating-point types supported by ROCm are listed in the following
table, along with their corresponding HIP type and a short description.
.. image:: ../data/about/compatibility/floating-point-data-types.png
:alt: Supported floating-point types
@@ -63,43 +67,62 @@ their corresponding HIP type and a short description.
*
- float8 (E4M3)
- ``-``
- An 8-bit floating-point number that mostly follows IEEE-754 conventions and **S1E4M3** bit layout, as described in `8-bit Numerical Formats for Deep Neural Networks <https://arxiv.org/abs/2206.02915>`_ , with expanded range and with no infinity or signed zero. NaN is represented as negative zero.
- An 8-bit floating-point number that mostly follows IEEE 754 conventions
and **S1E4M3** bit layout, as described in `8-bit Numerical Formats for Deep Neural Networks <https://arxiv.org/abs/2206.02915>`_,
with expanded range and no infinity or signed zero. NaN is
represented as negative zero.
*
- float8 (E5M2)
- ``-``
- An 8-bit floating-point number mostly following IEEE-754 conventions and **S1E5M2** bit layout, as described in `8-bit Numerical Formats for Deep Neural Networks <https://arxiv.org/abs/2206.02915>`_ , with expanded range and with no infinity or signed zero. NaN is represented as negative zero.
- An 8-bit floating-point number that mostly follows IEEE 754 conventions and
**S1E5M2** bit layout, as described in `8-bit Numerical Formats for Deep Neural Networks <https://arxiv.org/abs/2206.02915>`_,
with expanded range and no infinity or signed zero. NaN is
represented as negative zero.
*
- float16
- ``half``
- A 16-bit floating-point number that conforms to the IEEE 754-2008 half-precision storage format.
- A 16-bit floating-point number that conforms to the IEEE 754-2008
half-precision storage format.
*
- bfloat16
- ``bfloat16``
- A shortened 16-bit version of the IEEE 754 single-precision storage format.
- A shortened 16-bit version of the IEEE 754 single-precision storage
format.
*
- tensorfloat32
- ``-``
- A floating-point number that occupies 32 bits or less of storage, providing improved range compared to half (16-bit) format, at (potentially) greater throughput than single-precision (32-bit) formats.
- A floating-point number that occupies 32 bits or less of storage,
providing improved range compared to half (16-bit) format, at
(potentially) greater throughput than single-precision (32-bit) formats.
*
- float32
- ``float``
- A 32-bit floating-point number that conforms to the IEEE 754 single-precision storage format.
- A 32-bit floating-point number that conforms to the IEEE 754
single-precision storage format.
*
- float64
- ``double``
- A 64-bit floating-point number that conforms to the IEEE 754 double-precision storage format.
- A 64-bit floating-point number that conforms to the IEEE 754
double-precision storage format.
.. note::
* The float8 and tensorfloat32 types are internal types used in calculations in Matrix Cores and can be stored in any type of the same size.
* The encodings for FP8 (E5M2) and FP8 (E4M3) that are natively supported by MI300 differ from the FP8 (E5M2) and FP8 (E4M3) encodings used in H100 (`FP8 Formats for Deep Learning <https://arxiv.org/abs/2209.05433>`_).
* The float8 and tensorfloat32 types are internal types used in calculations
in Matrix Cores and can be stored in any type of the same size.
* The encodings for FP8 (E5M2) and FP8 (E4M3) that the
MI300 series natively supports differ from the FP8 (E5M2) and FP8 (E4M3)
encodings used in the NVIDIA H100
(`FP8 Formats for Deep Learning <https://arxiv.org/abs/2209.05433>`_).
* In some AMD documents and articles, float8 (E5M2) is referred to as bfloat8.
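As a usage sketch for the 16-bit formats listed above, HIP exposes ``half`` through ``<hip/hip_fp16.h>`` and ``hip_bfloat16`` through ``<hip/hip_bfloat16.h>``. The kernel below round-trips a ``float`` through both formats, assuming the usual conversion intrinsics and the ``hip_bfloat16`` class interface (explicit construction from ``float`` and conversion back); the kernel name is illustrative:

.. code-block:: cpp

   #include <hip/hip_runtime.h>
   #include <hip/hip_fp16.h>      // __half, __float2half, __half2float
   #include <hip/hip_bfloat16.h>  // hip_bfloat16

   // Round-trips each float through half and bfloat16 storage, making
   // the precision loss of each 16-bit format observable on the host.
   __global__ void roundtrip16(const float* in, float* out_half,
                               float* out_bf16, int n)
   {
       int i = blockIdx.x * blockDim.x + threadIdx.x;
       if (i < n) {
           __half h = __float2half(in[i]);  // 1 sign, 5 exponent, 10 mantissa bits
           out_half[i] = __half2float(h);
           hip_bfloat16 b(in[i]);           // 1 sign, 8 exponent, 7 mantissa bits
           out_bf16[i] = static_cast<float>(b);
       }
   }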
ROCm support icons
==========================================
In the following sections, we use icons to represent the level of support. These icons, described in the
following table, are also used on the library data type support pages.
In the following sections, icons represent the level of support. These
icons, described in the following table, are also used on the library data
type support pages.
.. list-table::
:header-rows: 1
@@ -121,14 +144,27 @@ following table, are also used on the library data type support pages.
.. note::
* Full support means that the type is supported natively or with hardware emulation.
* Native support means that the operations for that type are implemented in hardware. Types that are not natively supported are emulated with the available hardware. The performance of non-natively supported types can differ from the full instruction throughput rate. For example, 16-bit integer operations can be performed on the 32-bit integer ALUs at full rate; however, 64-bit integer operations might need several instructions on the 32-bit integer ALUs.
* Any type can be emulated by software, but this page does not cover such cases.
* Full support means that the type is supported natively or with hardware
emulation.
* Native support means that the operations for that type are implemented in
hardware. Types that are not natively supported are emulated with the
available hardware. The performance of non-natively supported types can
differ from the full instruction throughput rate. For example, 16-bit
integer operations can be performed on the 32-bit integer ALUs at full rate;
however, 64-bit integer operations might need several instructions on the
32-bit integer ALUs.
* Any type can be emulated by software, but this page does not cover such
cases.
Hardware type support
Hardware data type support
==========================================
AMD GPU hardware support for data types is listed in the following tables.
The following tables provide information about AMD Instinct accelerators'
support for various data types. The MI200 series GPUs, which include the
MI210, MI250, and MI250X, are based on the CDNA2 architecture. The MI300
series GPUs, consisting of the MI300A, MI300X, and MI325X, are built on the
CDNA3 architecture.
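To check which of the following tables applies to a device at run time, you can query the architecture name through the HIP runtime, as in this minimal sketch:

.. code-block:: cpp

   #include <hip/hip_runtime.h>
   #include <cstdio>

   int main()
   {
       hipDeviceProp_t prop;
       if (hipGetDeviceProperties(&prop, 0) != hipSuccess) {
           return 1;
       }
       // Prints, for example, "gfx90a" on MI200-series accelerators.
       std::printf("Device 0 architecture: %s\n", prop.gcnArchName);
       return 0;
   }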
Compute units support
-------------------------------------------------------------------------------
@@ -375,21 +411,23 @@ The following table lists data type support for atomic operations.
.. note::
For cases that are not natively supported, you can emulate atomic operations using software.
Software-emulated atomic operations have high negative performance impact when they frequently
access the same memory address.
You can emulate atomic operations using software for cases that are not
natively supported. Software-emulated atomic operations have a high negative
performance impact when they frequently access the same memory address.
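As a sketch of such software emulation, an atomic add can be built from a compare-and-swap retry loop. The function name ``atomicAddEmulated`` is illustrative; HIP also provides a native ``atomicAdd`` overload for ``double`` on hardware that supports it:

.. code-block:: cpp

   #include <hip/hip_runtime.h>

   // Emulates atomicAdd for double with an atomicCAS retry loop.
   // Heavy contention on a single address makes this loop spin,
   // which is the performance hazard described above.
   __device__ double atomicAddEmulated(double* address, double val)
   {
       unsigned long long* addr =
           reinterpret_cast<unsigned long long*>(address);
       unsigned long long old = *addr;
       unsigned long long assumed;
       do {
           assumed = old;
           double updated = __longlong_as_double(assumed) + val;
           old = atomicCAS(addr, assumed, __double_as_longlong(updated));
       } while (assumed != old);  // retry if another thread changed *address
       return __longlong_as_double(old);
   }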
Data Type support in ROCm Libraries
Data type support in ROCm libraries
==========================================
ROCm library support for int8, float8 (E4M3), float8 (E5M2), int16, float16, bfloat16, int32,
tensorfloat32, float32, int64, and float64 is listed in the following tables.
ROCm library support for int8, float8 (E4M3), float8 (E5M2), int16, float16,
bfloat16, int32, tensorfloat32, float32, int64, and float64 is listed in the
following tables.
Libraries input/output type support
-------------------------------------------------------------------------------
The following tables list ROCm library support for specific input and output data types. For a detailed
description, refer to the corresponding library data type support page.
The following tables list ROCm library support for specific input and output
data types. Refer to the corresponding library data type support page for a
detailed description.
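As one concrete example before the tables, BLAS-style libraries take the input, output, and internal compute types as runtime parameters. The sketch below uses a ``hipblasGemmEx``-style call with float16 inputs, a float32 output, and a float32 internal compute type; enum and header spellings differ between hipBLAS versions, so treat this as an assumption-laden sketch rather than a definitive call sequence:

.. code-block:: cpp

   #include <hipblas/hipblas.h>

   // C (float32) = A (float16) * B (float16), accumulated in float32.
   // d_A, d_B, and d_C are device buffers; m, n, k are GEMM dimensions.
   hipblasStatus_t gemm_f16_in_f32_out(hipblasHandle_t handle,
                                       const void* d_A, const void* d_B,
                                       void* d_C, int m, int n, int k)
   {
       const float alpha = 1.0f;
       const float beta  = 0.0f;
       return hipblasGemmEx(handle, HIPBLAS_OP_N, HIPBLAS_OP_N,
                            m, n, k, &alpha,
                            d_A, HIPBLAS_R_16F, m,  // input type: float16
                            d_B, HIPBLAS_R_16F, k,
                            &beta,
                            d_C, HIPBLAS_R_32F, m,  // output type: float32
                            HIPBLAS_R_32F,          // internal compute type
                            HIPBLAS_GEMM_DEFAULT);
   }

The internal compute type parameter corresponds to the internal calculation support covered in the next subsection.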
.. tab-set::
@@ -516,8 +554,9 @@ description, refer to the corresponding library data type support page.
Libraries internal calculations type support
-------------------------------------------------------------------------------
The following tables list ROCm library support for specific internal data types. For a detailed
description, refer to the corresponding library data type support page.
The following tables list ROCm library support for specific internal data types.
Refer to the corresponding library data type support page for a detailed
description.
.. tab-set::