
AMDGPU API Reference

Indexing

AMDGPU.Device.gridItemDimFunction
gridItemDim()::ROCDim3

Returns the size of the grid in workitems. This behaviour is different from CUDA where gridDim gives the size of the grid in blocks.

source

Use these functions for compatibility with CUDA.jl.
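
As an illustration, a grid-stride loop combines these intrinsics so a fixed-size launch can cover an arbitrarily large array. This is a minimal sketch (the saxpy! kernel, the array sizes and the launch configuration are illustrative, not part of the original reference):

using AMDGPU

# Grid-stride loop: each workitem starts at its global index and then
# strides by the total number of workitems in the grid (gridItemDim).
function saxpy!(y, a, x)
    i = workitemIdx().x + (workgroupIdx().x - 1) * workgroupDim().x
    stride = AMDGPU.Device.gridItemDim().x
    while i <= length(y)
        @inbounds y[i] = a * x[i] + y[i]
        i += stride
    end
    return
end

x = AMDGPU.rand(Float32, 2^20)
y = AMDGPU.rand(Float32, 2^20)
@roc groupsize=256 gridsize=128 saxpy!(y, 2f0, x)
AMDGPU.synchronize()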

Synchronization

AMDGPU.Device.sync_workgroup_countFunction
sync_workgroup_count(predicate::Cint)::Cint

Identical to sync_workgroup, with the additional feature that it evaluates the predicate for all workitems in the workgroup and returns the number of workitems for which predicate evaluates to non-zero.

source
AMDGPU.Device.sync_workgroup_andFunction
sync_workgroup_and(predicate::Cint)::Cint

Identical to sync_workgroup, with the additional feature that it evaluates the predicate for all workitems in the workgroup and returns non-zero if and only if predicate evaluates to non-zero for all of them.

source
AMDGPU.Device.sync_workgroup_orFunction
sync_workgroup_or(predicate::Cint)::Cint

Identical to sync_workgroup, with the additional feature that it evaluates the predicate for all workitems in the workgroup and returns non-zero if and only if predicate evaluates to non-zero for any of them.

source
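
For illustration, a minimal sketch of counting, per workgroup, how many workitems satisfy a predicate with sync_workgroup_count (the kernel and array names are illustrative):

using AMDGPU

# Each workgroup counts how many of its workitems see a positive value;
# the first workitem of the group writes the count out.
function count_positive!(counts, x)
    i = workitemIdx().x + (workgroupIdx().x - 1) * workgroupDim().x
    pred = Cint(x[i] > 0f0)
    n = AMDGPU.Device.sync_workgroup_count(pred)
    if workitemIdx().x == 1
        counts[workgroupIdx().x] = n
    end
    return
end

x = AMDGPU.rand(Float32, 1024) .- 0.5f0
counts = AMDGPU.zeros(Cint, 4)
@roc groupsize=256 gridsize=4 count_positive!(counts, x)
AMDGPU.synchronize()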

Devices

In AMDGPU, all GPU devices are auto-detected by the runtime, if they're supported.

AMDGPU maintains a global default device. The default device is relevant for all kernel and GPUArray operations. If one is not specified via @roc or an equivalent interface, then the default device is used for those operations, which affects compilation and kernel launch.

The device bound to the current Julia task is accessible via the AMDGPU.device method. The list of available devices can be queried with the AMDGPU.devices method.

If you have a HIPDevice object, you can also switch the device with AMDGPU.device!. This will switch it only within the task it is called from.

xd1 = AMDGPU.ones(Float32, 16) # On `AMDGPU.device()` device.
 
 AMDGPU.device!(AMDGPU.devices()[2]) # Switch to second device.
xd2 = AMDGPU.ones(Float32, 16) # On second device.

Additionally, devices have an associated numeric ID. This value is bounded between 1 and length(AMDGPU.devices()), and device 1 is the default device when AMDGPU is first loaded. The ID of the device associated with the current task can be queried with AMDGPU.device_id and changed with AMDGPU.device_id!.

AMDGPU.devicesFunction
devices()

Get list of all devices.

source
AMDGPU.deviceFunction
device()::HIPDevice

Get currently active device. This device is used when launching kernels via @roc.

source
device(A::ROCArray) -> HIPDevice

Return the device associated with the array A.

source
AMDGPU.device!Function
device!(device::HIPDevice)

Switch the current device. The switch applies only to the task from which it is called.

source
AMDGPU.device_idFunction
device_id() -> Int
+device_id(device::HIPDevice) -> Int

Returns the numerical device ID for device or for the current AMDGPU.device().

source
AMDGPU.device_id!Function
device_id!(idx::Integer)

Sets the current device to AMDGPU.devices()[idx]. See device_id for details on the numbering semantics.

source

Device Properties

AMDGPU.HIP.nameFunction
name(dev::HIPDevice)::String

Get name of the device.

source
AMDGPU.HIP.wavefrontsizeFunction
wavefrontsize(d::HIPDevice)::Cint

Get size of the wavefront. AMD GPUs support either 32 or 64.

source
AMDGPU.HIP.gcn_archFunction
gcn_arch(d::HIPDevice)::String

Get GCN architecture for the device.

source
AMDGPU.HIP.device_idFunction
device_id(d::HIPDevice)

Zero-based device ID as expected by HIP functions. Differs from AMDGPU.device_id method by 1.

source
AMDGPU.HIP.propertiesFunction
properties(dev::HIPDevice)::hipDeviceProp_t

Get all properties for the device. See HIP documentation for hipDeviceProp_t for the meaning of each field.

source
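
A short sketch that queries the properties documented above for every visible device (the output formatting is illustrative):

using AMDGPU

for dev in AMDGPU.devices()
    println(AMDGPU.device_id(dev), ": ", AMDGPU.HIP.name(dev),
        " (", AMDGPU.HIP.gcn_arch(dev),
        ", wavefront size ", AMDGPU.HIP.wavefrontsize(dev), ")")
end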

Exceptions

When a kernel throws, the exception is reported on the host; the tail of such a report looks like:

 [4] synchronize()
   @ AMDGPU ~/.julia/dev/AMDGPU/src/highlevel.jl:154
 [5] top-level scope
   @ REPL[5]:1

Kernel-thrown exceptions are thrown during the host synchronization AMDGPU.synchronize or on the next kernel launch.

Kernels that hit an exception will write information about it into a pre-allocated host buffer. Once complete, the wavefront throwing the exception will lock the buffer to prevent other wavefronts from overwriting the exception and stop itself, but other wavefronts will continue executing.
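
As a minimal sketch (the kernel below is illustrative, not taken from the original page), an out-of-bounds access inside a kernel surfaces on the host at the next synchronization point:

using AMDGPU

function oob!(x)
    x[length(x) + 1] = 0f0  # Out-of-bounds write raises a device-side exception.
    return
end

x = AMDGPU.zeros(Float32, 8)
@roc oob!(x)
AMDGPU.synchronize()  # The kernel-thrown exception is reported here.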


Execution Control and Intrinsics

GPU execution is similar to CPU execution in some ways, although there are many differences. AMD GPUs have Compute Units (CUs), which can be thought of as roughly analogous to CPU cores. Those CUs have (on pre-Navi architectures) 64 "shader processors", which are essentially the same as CPU SIMD lanes. The lanes in a CU operate in lockstep just like CPU SIMD lanes, and have execution masks and various kinds of SIMD instructions available. CUs execute wavefronts, which are pieces of work split off from a single kernel launch. A single CU can run one of many wavefronts (one is chosen by the CU scheduler each cycle), which allows for very efficient parallel and concurrent execution on the device. Each wavefront runs independently of the other wavefronts, only stopping to synchronize with other wavefronts or terminate when specified by the program.

We can control wavefront execution through a variety of intrinsics provided by ROCm. For example, the endpgm() intrinsic stops the current wavefront's execution, and is also automatically inserted by the compiler at the end of each kernel (except in certain unique cases).

signal_completion(x) signals the "kernel doorbell" with the value x, which is the signal checked by the CPU wait call to determine when the kernel has completed. This doorbell is set to 0 automatically by GPU hardware once the kernel is complete.

sendmsg(x,y=0) and sendmsghalt(x,y=0) can be used to signal special conditions to the scheduler/hardware, such as making requests to stop wavefront generation, or halt all running wavefronts. Check the ISA manual for details!

Hostcall
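
The original example is only partially preserved in this extract; the sketch below is a hedged reconstruction based on the description that follows. The exact HostCallHolder and hostcall! call forms, as well as passing the holder straight to the kernel, are assumptions:

using AMDGPU

# Host-side function: receives a Float32 and returns it plus 42f0.
hc = AMDGPU.Device.HostCallHolder(x -> x + 42f0, Float32, Tuple{Float32})

# Assumption: the holder converts to the device-side HostCall when passed to a kernel.
function kernel!(y, hc)
    # Performed once per workgroup: send a value to the host and wait for the result.
    y[1] = AMDGPU.Device.hostcall!(hc, 0f0)
    return
end

y = ROCArray(zeros(Float32, 1))
@roc kernel!(y, hc)

AMDGPU.synchronize(; stop_hostcalls=true) # Stop hostcall.
AMDGPU.Device.free!(hc) # Free hostcall buffers.
@assert Array(y)[1] ≈ 42f0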

In this example, HostCallHolder is used to create and launch HostCall. HostCallHolder contains the HostCall structure itself that is passed to kernel, a task that is spawned on creation and some additional info for controlling the lifetime of the task.

The first argument is the function we want to execute when the hostcall is invoked. In this case we add 42f0 to the input argument x and return the result.

The second and third arguments are the return type Float32 and the tuple of input argument types Tuple{Float32}.

hostcall! is used to execute the function on the host, wait on the result, and obtain the return values. At the moment, it is performed once per workgroup.

Continuous Host-Call

By default, hostcalls can be used only once. After executing the function on the host, the task finishes and exits.

However, if you need your hostcall to live indefinitely, pass continuous=true keyword argument to HostCallHolder(...; continuous=true).

To then stop the hostcall, call Device.non_continuous!(hc) or Device.finish!(hc) on the HostCallHolder.

The difference between them is that non_continuous! will allow calling hostcall one more time before exiting, while finish! will exit immediately.

finish! can be used on any HostCallHolder to force-exit the running hostcall task.

Free hostcall buffers

For custom hostcalls it is important to call AMDGPU.Device.free! once the kernel has finished, to free the buffers that the hostcall used in the process.


Programming AMD GPUs with Julia

Julia support for programming AMD GPUs is currently provided by the AMDGPU.jl package. This package contains everything necessary to program for AMD GPUs in Julia, including:

  • An interface for compiling and running kernels written in Julia through LLVM's AMDGPU backend.
  • An interface for working with the HIP runtime API, necessary for launching compiled kernels and controlling the GPU.
  • An array type implementing the GPUArrays.jl interface, providing high-level array operations.

Installation

Simply add the AMDGPU.jl package to your Julia environment:

using Pkg
 Pkg.add("AMDGPU")

To ensure that everything works, you can run the test suite:

using AMDGPU
 using Pkg
Pkg.test("AMDGPU")

Requirements

  • Julia 1.9 or higher (Navi 3 requires Julia 1.10+).
  • 64-bit Linux or Windows.
  • Minimal supported ROCm version is 5.3.
  • Required software:
    Linux: ROCm
    Windows: ROCm, AMD Software: Adrenalin Edition

On Windows, AMD Software: Adrenalin Edition contains the HIP library itself, while ROCm provides support for other functionality.

Windows OS missing functionality

Windows does not yet support Hostcall, which means that some functionality does not work, such as:

  • device printing;
  • dynamic memory allocation (from kernels).

These hostcalls are sometimes launched when AMDGPU detects that a kernel might throw an exception, specifically during conversions, like: Int32(1f0).

To avoid this, use 'unsafe' conversion option: unsafe_trunc(Int32, 1f0).

ROCm system libraries

AMDGPU.jl looks into standard directories and uses Libdl.find_library to find ROCm libraries.

Standard path:

  • Linux: /opt/rocm
  • Windows: C:/Program Files/AMD/ROCm/<rocm-version>

If ROCm is installed in a non-standard location, set the ROCM_PATH=<path> environment variable before launching Julia.

ROCm artifacts

There is limited support for ROCm 5.4+ artifacts which can be enabled with AMDGPU.use_artifacts!.

Limited means not all libraries are available and some of the functionality may be disabled.

AMDGPU.ROCmDiscovery.use_artifacts!Function
use_artifacts!(flag::Bool = true)

Pass true to switch from a system-wide ROCm installation to artifacts. When using artifacts, a system-wide installation is not needed at all.

source

Extra Setup Details

Additional steps that may be needed to ensure everything is working:

  • Make sure your user is in the group that owns /dev/kfd (other than root).

    For example, it might be the render group:

    crw-rw---- 1 root render 234, 0 Aug 5 11:43 kfd

    In this case, you can add yourself to it:

    sudo usermod -aG render username

  • ROCm libraries should be in the standard library locations, or in your LD_LIBRARY_PATH.

  • If you get an error message along the lines of GLIB_CXX_... not found, it's possible that the C++ runtime used to build the ROCm stack and the one used by Julia are different. If you built the ROCm stack yourself this is very likely the case since Julia normally ships with its own C++ runtime.

    For more information, check out this GitHub issue. A quick fix is to use the LD_PRELOAD environment variable to make Julia use the system C++ runtime library, for example:

    LD_PRELOAD=/usr/lib/libstdc++.so julia

    Alternatively, you can build Julia from source as described here. To quickly debug this issue start Julia and try to load a ROCm library:

    using Libdl
    Libdl.dlopen("/opt/rocm/hsa/lib/libhsa-runtime64.so.1")

Once all of this is set up properly, you should be able to do using AMDGPU successfully.
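
As a quick smoke test from the REPL (AMDGPU.versioninfo is assumed to be available here, as advertised in the package README):

using AMDGPU

AMDGPU.versioninfo()        # Show ROCm/HIP libraries and visible devices.
a = AMDGPU.ones(Float32, 4) # Allocate a small array on the default GPU.
Array(a)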

See the Quick Start documentation for an introduction to using AMDGPU.jl.

Preferences

AMDGPU.jl supports setting preferences. Template of LocalPreferences.toml with all options:

[AMDGPU]
 # If `true` (default), eagerly run GC to keep the pool from growing too big.
 # GC is triggered during new allocations or synchronization points.
 eager_gc = false
 # Default is "none", which does not apply any limitation.
 hard_memory_limit = "none"
 # Notice a space between the value and percentage sign.
 # hard_memory_limit = "80 %"

Kernel Programming

Launch Configuration

While an almost arbitrarily large number of workitems can be executed per kernel launch, the hardware can only support executing a limited number of wavefronts at one time.

To alleviate this, the compiler calculates the "occupancy" of each compiled kernel (which is the number of wavefronts that can be simultaneously executing on the GPU), and passes this information to the hardware; the hardware then launches a limited number of wavefronts at once, based on the kernel's "occupancy" values.

The rest of the wavefronts are not launched until hardware resources become available, which means that a kernel with better occupancy will see more of its wavefronts executing simultaneously (which often leads to better performance). Suffice it to say, it's important to know the occupancy of kernels if you want the best performance.

Like CUDA.jl, AMDGPU.jl has the ability to calculate kernel occupancy, with the launch_configuration function:

kernel = @roc launch=false mykernel(args...)
 occupancy = AMDGPU.launch_configuration(kernel)
 @show occupancy.gridsize
@show occupancy.groupsize

Specifically, launch_configuration calculates the occupancy of mykernel(args...), and then calculates an optimal groupsize based on the occupancy. This value can then be used to select the groupsize for the kernel:

@roc groupsize=occupancy.groupsize mykernel(args...)
AMDGPU.@rocMacro
@roc [kwargs...] func(args...)

High-level interface for launching kernels on the GPU. Upon the first call the kernel is compiled; subsequent calls re-use the compiled object.

Several keyword arguments are supported:

  • launch::Bool = true: whether to launch the kernel. If false, then returns a compiled kernel which can be launched by calling it and passing arguments.
  • Arguments that influence kernel compilation, see AMDGPU.Compiler.hipfunction.
  • Arguments that influence kernel launch, see AMDGPU.Runtime.HIPKernel.
source
AMDGPU.Runtime.HIPKernelType
(ker::HIPKernel)(args::Vararg{Any, N}; kwargs...)

Launch compiled HIPKernel by passing arguments to it.

The following kwargs are supported:

  • gridsize::ROCDim = 1: Size of the grid.
  • groupsize::ROCDim = 1: Size of the workgroup.
  • shmem::Integer = 0: Amount of dynamically-allocated shared memory in bytes.
  • stream::HIP.HIPStream = AMDGPU.stream(): Stream on which to launch the kernel.
source
AMDGPU.Compiler.hipfunctionFunction
hipfunction(f::F, tt::TT = Tuple{}; kwargs...)

Compile Julia function f to a HIP kernel given a tuple of argument types tt that it accepts.

The following kwargs are supported:

  • name::Union{String, Nothing} = nothing: A unique name to give a compiled kernel.
  • unsafe_fp_atomics::Bool = true: Whether to use 'unsafe' floating-point atomics. AMD GPU devices support fast atomic read-modify-write (RMW) operations on floating-point values. On single- or double-precision floating-point values this may generate a hardware RMW instruction that is faster than emulating the atomic operation using an atomic compare-and-swap (CAS) loop.
source

Atomics

AMDGPU.jl relies on Atomix.jl for atomics.

Example of a kernel that computes atomic max:

using AMDGPU
 
 function ker_atomic_max!(target, source, indices)
     i = workitemIdx().x + (workgroupIdx().x - 0x1) * workgroupDim().x
     idx = indices[i]
     v = source[i]
     # Reconstructed body (assumes AMDGPU.@atomic): keep the maximum value per bin.
     AMDGPU.@atomic max(target[idx], v)
     return
 end

 n, bins = 1024, 32 # Illustrative sizes: 256 workitems * 4 workgroups below.
 source = ROCArray(rand(UInt32, n))
 indices = ROCArray(rand(1:bins, n))
 target = ROCArray(zeros(UInt32, bins))
@roc groupsize=256 gridsize=4 ker_atomic_max!(target, source, indices)

Device Intrinsics

Wavefront-Level Primitives

AMDGPU.Device.activelaneFunction
activelane()::Cuint

Get id of the current lane within a wavefront/warp.

julia> function ker!(x)
            i = AMDGPU.Device.activelane()
            x[i + 1] = i
            return
 
 julia> Array(x)
 1×8 Matrix{Int32}:
 0  1  2  3  4  5  6  7
source
AMDGPU.Device.ballotFunction
ballot(predicate::Bool)::UInt64

Return a value whose Nth bit is set if and only if predicate evaluates to true for the Nth lane and the lane is active.

julia> function ker!(x)
            x[1] = AMDGPU.Device.ballot(true)
            return
        end
 
 julia> x
 1-element ROCArray{UInt64, 1, AMDGPU.Runtime.Mem.HIPBuffer}:
 0x00000000ffffffff
source
AMDGPU.Device.ballot_syncFunction
ballot_sync(mask::UInt64, predicate::Bool)::UInt64

Evaluate predicate for all non-exited threads in mask and return an integer whose Nth bit is set if and only if predicate is true for the Nth thread of the wavefront and the Nth thread is active.

julia> function ker!(x)
            i = AMDGPU.Device.activelane()
            if i % 2 == 0
                mask = 0x0000000055555555 # Only even threads.
 julia> @roc groupsize=32 ker!(x);
 
 julia> bitstring(Array(x)[1])
-"0000000000000000000000000000000001010101010101010101010101010101"
source
AMDGPU.Device.bpermuteFunction
bpermute(addr::Integer, val::Cint)::Cint

Read data stored in val from the lane VGPR (vector general purpose register) given by addr.

The permute instruction moves data between lanes but still uses the notion of byte addressing, as do other LDS instructions. Hence, the value in the addr VGPR should be desired_lane_id * 4, since VGPR values are 4 bytes wide.

Example below shifts all values in the wavefront by 1 to the "left".

julia> function ker!(x)
+"0000000000000000000000000000000001010101010101010101010101010101"
source
AMDGPU.Device.bpermuteFunction
bpermute(addr::Integer, val::Cint)::Cint

Read data stored in val from the lane VGPR (vector general purpose register) given by addr.

The permute instruction moves data between lanes but still uses the notion of byte addressing, as do other LDS instructions. Hence, the value in the addr VGPR should be desired_lane_id * 4, since VGPR values are 4 bytes wide.

Example below shifts all values in the wavefront by 1 to the "left".

julia> function ker!(x)
            i::Cint = AMDGPU.Device.activelane()
            # `addr` points to the next immediate lane.
            addr = ((i + 1) % 8) * 4 # VGPRs are 4 bytes wide
 
 julia> x
 1×8 ROCArray{Int32, 2, AMDGPU.Runtime.Mem.HIPBuffer}:
 1  2  3  4  5  6  7  0
source
AMDGPU.Device.permuteFunction
permute(addr::Integer, val::Cint)::Cint

Put data stored in val to the lane VGPR (vector general purpose register) given by addr.

Example below shifts all values in the wavefront by 1 to the "right".

julia> function ker!(x)
            i::Cint = AMDGPU.Device.activelane()
            # `addr` points to the next immediate lane.
            addr = ((i + 1) % 8) * 4 # VGPRs are 4 bytes wide
 
 julia> x
 1×8 ROCArray{Int32, 2, AMDGPU.Runtime.Mem.HIPBuffer}:
 7  0  1  2  3  4  5  6
source
AMDGPU.Device.shflFunction
shfl(val, lane, width = wavefrontsize())

Read data stored in val from a lane (this is a higher-level op than bpermute).

If lane is outside the range [0:width - 1], the value returned corresponds to the value held by the lane modulo width (within the same subsection).

julia> function ker!(x)
            i::UInt32 = AMDGPU.Device.activelane()
            x[i + 1] = AMDGPU.Device.shfl(i, i + 1)
            return
 
 julia> Int.(x)
 1×8 ROCArray{Int64, 2, AMDGPU.Runtime.Mem.HIPBuffer}:
 1  2  3  0  5  6  7  4
source
AMDGPU.Device.shfl_syncFunction
shfl_sync(mask::UInt64, val, lane, width = wavefrontsize())

Synchronize threads according to a mask and read data stored in val from a lane ID.

source
AMDGPU.Device.shfl_upFunction
shfl_up(val, δ, width = wavefrontsize())

Same as shfl, but instead of specifying lane ID, accepts δ that is subtracted from the current lane ID. I.e. read from a lane with lower ID relative to the caller.

julia> function ker!(x)
            i = AMDGPU.Device.activelane()
            x[i + 1] = AMDGPU.Device.shfl_up(i, 1)
            return
 
 julia> x
 1×8 ROCArray{Int64, 2, AMDGPU.Runtime.Mem.HIPBuffer}:
 0  0  1  2  3  4  5  6
source
AMDGPU.Device.shfl_up_syncFunction
shfl_up_sync(mask::UInt64, val, δ, width = wavefrontsize())

Synchronize threads according to a mask and read data stored in val from a lane with lower ID relative to the caller.

source
AMDGPU.Device.shfl_downFunction
shfl_down(val, δ, width = wavefrontsize())

Same as shfl, but instead of specifying lane ID, accepts δ that is added to the current lane ID. I.e. read from a lane with higher ID relative to the caller.

julia> function ker!(x)
            i = AMDGPU.Device.activelane()
            x[i + 1] = AMDGPU.Device.shfl_down(i, 1, 8)
            return
 
 julia> x
 1×8 ROCArray{Int64, 2, AMDGPU.Runtime.Mem.HIPBuffer}:
 1  2  3  4  5  6  7  7
source
AMDGPU.Device.shfl_down_syncFunction
shfl_down_sync(mask::UInt64, val, δ, width = wavefrontsize())

Synchronize threads according to a mask and read data stored in val from a lane with higher ID relative to the caller.

source
AMDGPU.Device.shfl_xorFunction
shfl_xor(val, lane_mask, width = wavefrontsize())

Same as shfl, but instead of specifying lane ID, performs bitwise XOR of the caller's lane ID with the lane_mask.

julia> function ker!(x)
            i = AMDGPU.Device.activelane()
            x[i + 1] = AMDGPU.Device.shfl_xor(i, 1)
            return
 
 julia> x
 1×8 ROCArray{Int64, 2, AMDGPU.Runtime.Mem.HIPBuffer}:
 1  0  3  2  5  4  7  6
source
AMDGPU.Device.shfl_xor_syncFunction
shfl_xor_sync(mask::UInt64, val, lane_mask, width = wavefrontsize())

Synchronize threads according to a mask and read data stored in val from a lane according to a bitwise XOR of the caller's lane ID with the lane_mask.

source
AMDGPU.Device.any_syncFunction
any_sync(mask::UInt64, predicate::Bool)::Bool

Evaluate predicate for all non-exited threads in mask and return non-zero if and only if predicate evaluates to non-zero for any of them.

julia> function ker!(x)
            i = AMDGPU.Device.activelane()
            if i % 2 == 0
                mask = 0x0000000055555555 # Only even threads.
 
 julia> x
 1-element ROCArray{Bool, 1, AMDGPU.Runtime.Mem.HIPBuffer}:
 1
source
AMDGPU.Device.all_syncFunction
all_sync(mask::UInt64, predicate::Bool)::Bool

Evaluate predicate for all non-exited threads in mask and return non-zero if and only if predicate evaluates to non-zero for all of them.

julia> function ker!(x)
            i = AMDGPU.Device.activelane()
            if i % 2 == 0
                mask = 0x0000000055555555 # Only even threads.
 
 julia> x
 1-element ROCArray{Bool, 1, AMDGPU.Runtime.Mem.HIPBuffer}:
 1
source

Memory

xd * xd # Freeing is a no-op for `xd`, since `xd` does not own the underlying memory.
AMDGPU.unsafe_free!(xd) # No-op.

Notice the mandatory ; lock=false keyword: it is needed to be able to differentiate between host and device pointers.
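
A hedged sketch of the wrapping workflow this fragment belongs to; the exact unsafe_wrap signature used here is an assumption inferred from the note about ; lock=false:

using AMDGPU

xd = AMDGPU.rand(Float32, 16)

# Wrap the device pointer of `xd` in another ROCArray without copying.
# Assumption: `lock=false` marks this as a device pointer rather than a host pointer.
xd2 = unsafe_wrap(ROCArray, pointer(xd), size(xd); lock=false)

xd2 .*= 2f0              # Operates on the same underlying memory as `xd`.
AMDGPU.unsafe_free!(xd2) # No-op: `xd2` does not own the memory.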

Printing

Differences to @cuprintf

Similar to CUDA's @cuprintf, @rocprintf is a printf-compatible macro which takes a format string and arguments, and commands the host CPU to display it as formatted text. However, in contrast to @cuprintf, we use AMDGPU's hostcall and Julia's Printf stdlib to implement this. This means that anything Printf can print, @rocprintf can print as well (assuming such an object can be represented on the GPU). The macro is also handled as a regular hostcall, which means that argument types are checked at compile time (although currently, any errors while printing will be detected on the host, and will terminate the kernel).
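
A minimal sketch of calling @rocprintf from a kernel (the kernel name is illustrative; since the macro is backed by a hostcall, it is unavailable on Windows as noted earlier):

using AMDGPU

function hello!()
    # The format string is handled by Julia's Printf on the host.
    @rocprintf("Hello from workitem %d in workgroup %d\n",
        workitemIdx().x, workgroupIdx().x)
    return
end

@roc groupsize=4 hello!()
AMDGPU.synchronize()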

Profiling

    @roc groupsize=groupsize gridsize=gridsize mycopy!(dst, src)
end
AMDGPU.synchronize()
...

Running profiling again and visualizing results we now see that kernel launches are adjacent to each other and that the average wall duration is lower.

(Profiler timeline screenshots: zoomed out and zoomed in.)

Debugging

Use JULIA_AMDGPU_LAUNCH_BLOCKING=1 and HIP_LAUNCH_BLOCKING=1 to synchronize immediately after launching GPU kernels. This makes it possible to pinpoint the exact kernel that caused an exception.

Quick Start

julia> @roc groupsize=groupsize gridsize=gridsize vadd!(c_d, a_d, b_d);

julia> Array(c_d) ≈ c
true

The easiest way to launch a GPU kernel is with the @roc macro, specifying groupsize and gridsize to cover full array, and calling it like a regular function.

Keep in mind that kernel launches are asynchronous, meaning that you need to synchronize before you can use the result (e.g. with AMDGPU.synchronize). However, GPU <-> CPU transfers synchronize implicitly.

The grid is the domain over which the entire kernel executes. The grid will be split into multiple workgroups by hardware automatically, and the kernel does not complete until all workgroups complete.

Like OpenCL, AMDGPU has the concept of "workitems", "workgroups", and the "grid". A workitem is a single thread of execution, capable of performing arithmetic operations. Workitems are grouped into "wavefronts" ("warps" in CUDA) which share the same compute unit and execute the same instructions simultaneously. The workgroup is a logical unit of compute supported by hardware which comprises multiple wavefronts that share resources (specifically local memory) and can be efficiently synchronized. A workgroup may be executed by one or multiple hardware compute units, making it often the only dimension of importance for smaller kernel launches.

Notice how we explicitly specify that this function does not return a value by adding the return statement. This is necessary for all GPU kernels and we can enforce it by adding a return, return nothing, or even nothing at the end of the kernel. If this statement is omitted, Julia will attempt to return the value of the last evaluated expression, in this case a Float64, which will cause a compilation failure as kernels cannot return values.
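
Since only the tail of the quick-start listing survives above, here is a hedged reconstruction of the vadd! kernel and the setup it refers to (array sizes and groupsize are illustrative):

using AMDGPU

function vadd!(c, a, b)
    i = workitemIdx().x + (workgroupIdx().x - 1) * workgroupDim().x
    c[i] = a[i] + b[i]
    return
end

n = 1024
a = rand(Float32, n); b = rand(Float32, n); c = a .+ b
a_d = ROCArray(a); b_d = ROCArray(b); c_d = similar(a_d)

groupsize = 256
gridsize = cld(n, groupsize)
@roc groupsize=groupsize gridsize=gridsize vadd!(c_d, a_d, b_d)
Array(c_d) ≈ c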

Naming conventions

Throughout this example we use terms like "work group" and "work item". These terms are used by the Khronos consortium and their APIs including OpenCL and Vulkan, as well as the HSA foundation.

NVIDIA, on the other hand, uses some different terms in their CUDA API, which might be confusing to some users porting their kernels from CUDA to AMDGPU.

As a quick summary, here is a mapping of the most common terms:

AMDGPU          CUDA
workitemIdx     threadIdx
workgroupIdx    blockIdx
workgroupDim    blockDim
gridItemDim     No equivalent
gridGroupDim    gridDim
groupsize       threads
gridsize        blocks
stream          stream

Streams

x = AMDGPU.stream!(() -> AMDGPU.ones(Float32, 16), stream)
stream = AMDGPU.HIPStream()
 @roc stream=stream kernel(...)

Streams also have an inherent priority, which allows control of kernel submission latency and on-device scheduling preference with respect to kernels submitted on other streams. There are three priorities: normal (the default), low, and high.

Priority of the default stream can be set with AMDGPU.priority!. Alternatively, it can be set at stream creation time:

low_prio = HIPStream(:low)
 high_prio = HIPStream(:high)
normal_prio = HIPStream(:normal) # or just omit "priority"
AMDGPU.streamFunction
stream()::HIPStream

Get the HIP stream that should be used as the default one for the currently executing task.

source
AMDGPU.stream!Function
stream!(s::HIPStream)

Change the default stream to be used within the same Julia task.

source
stream!(f::Base.Callable, stream::HIPStream)

Change the default stream to be used within the same Julia task, execute f and revert to the original stream.

Returns:

Return value of the function f.

source
AMDGPU.priority!Function
priority!(p::Symbol)

Change the priority of the default stream. Accepted values are :normal (the default), :low and :high.

source
priority!(f::Base.Callable, priority::Symbol)

Change the priority of the default stream, execute f and revert to the original priority. Accepted values are :normal (the default), :low and :high.

Returns:

Return value of the function f.

source
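
A short usage sketch for the function-style forms documented above (the work done inside the blocks is illustrative):

using AMDGPU

# Run a block of work on a dedicated high-priority stream for this task,
# then automatically revert to the original stream.
hi_stream = AMDGPU.HIPStream(:high)
x = AMDGPU.stream!(hi_stream) do
    AMDGPU.ones(Float32, 16)
end

# Temporarily raise the priority of the default stream.
AMDGPU.priority!(:high) do
    AMDGPU.ones(Float32, 16)
end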
AMDGPU.HIP.HIPStreamType
HIPStream(priority::Symbol = :normal)

Arguments:

  • priority::Symbol: Priority of the stream: :normal, :high or :low.

Create HIPStream with given priority. Device is the default device that's currently in use.

source
HIPStream(stream::hipStream_t)

Create HIPStream from hipStream_t handle. Device is the default device that's currently in use.

source

Synchronization

AMDGPU.jl by default uses non-blocking stream synchronization with AMDGPU.synchronize to work correctly with TLS and Hostcall.

Users, however, can switch to a blocking synchronization globally with nonblocking_synchronization preference or with fine-grained AMDGPU.synchronize(; blocking=true). Blocking synchronization might offer slightly lower latency.

You can also synchronize an expression with the AMDGPU.@sync macro, which will execute the given expression and synchronize afterwards (using AMDGPU.synchronize under the hood).

AMDGPU.@sync begin
     @roc ...
end

Finally, you can perform full device synchronization with AMDGPU.device_synchronize.

AMDGPU.synchronizeFunction
synchronize(stream::HIPStream = stream(); blocking::Bool = false)

Wait until all kernels executing on stream have completed.

If there are running HostCalls, then blocking must be false. Additionally, if you want to stop host calls afterwards, provide the stop_hostcalls=true keyword argument.

source
AMDGPU.@syncMacro
@sync ex

Run expression ex on currently active stream and synchronize the GPU on that stream afterwards.

See also: synchronize.

source
AMDGPU.device_synchronizeFunction

Blocks until all kernels on all streams have completed. Uses currently active device.

source