Commit 1172ac0: Many more tweaks
lezcano committed Sep 15, 2023 (parent: 279b4dd)
Showing 1 changed file (blogpost/post.md) with 34 additions and 36 deletions.
[...] to make the most out of it.

## Compiling NumPy code into Parallel C++

We take as our running example one step in a K-Means algorithm.
This piece of code is borrowed from this [NumPy book](https://realpython.com/numpy-array-programming/#clustering-algorithms):

```python
import numpy as np
# ...

assert np.allclose(prediction, new_pred)
```
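
The collapsed hunk hides most of the snippet. As an illustrative sketch only (the function name and data below are ours, not necessarily the post's exact code), one such K-Means assignment step can be written as:

```python
import numpy as np

def get_labels(X, means):
    # Assign each point to its nearest centroid: broadcasting builds a
    # (k, n, d) array of differences, the norm reduces over coordinates,
    # and argmin reduces over centroids.
    return np.argmin(np.linalg.norm(X - means[:, None], axis=2), axis=0)

X = np.array([[0.0, 0.0], [9.0, 9.0], [1.0, 1.0]])
means = np.array([[0.0, 0.0], [10.0, 10.0]])
print(get_labels(X, means))  # -> [0 1 0]
```

A function of this shape is exactly what `torch.compile` handles well: one broadcast plus two reductions, with no Python-level loops.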

The compiled function yields a 9x speed-up when running it on 1 core. Even
better, as opposed to NumPy, our generated code does take advantage of all the
cores in a processor. As such, when we run it on 32 cores, we get a **57x speed-up**.
Note that PyTorch always uses all the available cores unless explicitly restricted,
so this is the default behavior you get when using `torch.compile`.

We may inspect the generated C++ code by running the script with the
environment variable `TORCH_LOGS=output_code`. When doing so, we can see that
`torch.compile` was able to compile the broadcasting and the two reductions
into a single for-loop, and to parallelize it using OpenMP:
```c++
extern "C" void kernel(const double* in_ptr0, const long* in_ptr1, long* out_ptr0) {
#pragma omp parallel num_threads(32)
    // ...
}
```

[...]

```python
def triton_(in_ptr0, in_ptr1, out_ptr0, xnumel, XBLOCK : tl.constexpr):
    ...
```

Running this small snippet on an RTX 2060 gives an **8x speed-up** over the
original NumPy code. This is something, but it is not particularly impressive,
given the speed-ups we have seen on CPU. Let's have a look at how to squeeze
the most out of our GPU via a couple of minor changes.

**`float64` vs `float32`**. Many GPUs, in particular consumer-grade ones, are
rather sluggish when running operations on `float64`. For this reason, changing
[...]

[...] the CPU numbers. This is caused by a small transformation that `torch.compile`
does behind the scenes. The code above takes NumPy arrays and returns NumPy
arrays. All of these arrays are on CPU, but the computations are performed on
the GPU. This means that every time the function is called, `torch.compile` has
to copy all these arrays from CPU to the GPU, and then copy the result
back to CPU to preserve the original semantics. There is no native
solution to this issue in NumPy, as NumPy does not have the notion of a
`device`. That being said, we can work around it by creating a wrapper for this
function so that it accepts PyTorch tensors and returns PyTorch tensors.
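
A minimal sketch of such a wrapper (assuming the running example is a function `kmeans_labels`; the post's exact code may differ):

```python
import numpy as np
import torch

def kmeans_labels(X, means):
    # The running example's NumPy-only assignment step.
    return np.argmin(np.linalg.norm(X - means[:, None], axis=2), axis=0)

def tensor_fn(X_t, means_t):
    # Tensor-in / tensor-out adaptor: convert at the boundary, so the NumPy
    # body can be traced without a CPU <-> GPU round-trip on every call.
    labels = kmeans_labels(X_t.numpy(), means_t.numpy())
    return torch.from_numpy(labels)

compiled_fn = torch.compile(tensor_fn)

# On a machine with a GPU, one would then run the compiled function under
# the CUDA device, e.g.:
# with torch.device("cuda"):
#     labels = compiled_fn(X_cuda, means_cuda)
```
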
[...]

[...] in CUDA and performing the computations in `float32`, we see a **200x speed-up**
over the initial NumPy implementation on `float32` arrays.

**Mixing NumPy and PyTorch**. In this example, we had to write a small adaptor
to convert tensors to ndarrays and then back to tensors. In programs that mix
PyTorch and NumPy, this conversion is usually done by calling
`x.detach().cpu().numpy()` (or simply `x.numpy(force=True)`). Since when
running under `torch.compile` we can run NumPy code in CUDA, we can simply
modify this code to call `x.numpy()`. When running under `device("cuda")`, as
we did above, this will generate efficient CUDA code from the original NumPy
calls without copying the data from CUDA to CPU at all. Note that the
resulting code does not run without `torch.compile`; for it to run in eager
mode, one would need to roll back to `x.numpy(force=True)`.
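
As a toy sketch of the two spellings (the function below is our own illustration, not code from the post):

```python
import torch

# Eager mode: NumPy only sees CPU memory, so a CUDA tensor must be copied
# across with force=True before the NumPy computation runs.
def eager_version(x):
    return (x.numpy(force=True) ** 2).sum()

# Under torch.compile, the plain x.numpy() call is traced, and when the
# function is run under device("cuda") the computation stays on the GPU.
# This variant relies on torch.compile: with a CUDA tensor it does not run
# in eager mode.
@torch.compile
def compiled_version(x):
    return (x.numpy() ** 2).sum()
```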

## Further Speed-up tricks

**General advice**. The CUDA code we have shown is already quite efficient, but
it is true that the running example is rather short. When dealing with larger
programs, we may need to tweak parts of it to make it more efficient. A good
place to start is the [`torch.compile` troubleshooting
page](https://pytorch.org/docs/stable/dynamo/troubleshooting.html#performance-profiling).
[...] identify problematic code that may cause slowdowns.

**Advice when compiling NumPy code**. NumPy, even if rather similar to
PyTorch, is often used very differently. It is rather common to perform
computations in NumPy and then do an if/else depending on values within the
array, or perform operations in-place, perhaps via boolean masks. These
constructions, while supported by `torch.compile`, hamper its performance.
Changes like moving from in-place indexing to using `np.where`, writing the
[...] ops can go a long way.

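
For instance, an in-place masked update can often be rewritten branchlessly with `np.where` (a toy example of ours):

```python
import numpy as np

x = np.array([-2.0, 3.0, -1.0, 4.0])

# In-place boolean-mask assignment: a data-dependent write that can hamper
# torch.compile's ability to fuse the surrounding operations.
clamped = x.copy()
clamped[clamped < 0] = 0.0

# Branchless equivalent: a single functional op, friendlier to the compiler.
clamped_where = np.where(x < 0, 0.0, x)

assert np.array_equal(clamped, clamped_where)
```
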
To write fast NumPy code, it is best to avoid loops, but sometimes they are
unavoidable. When tracing through a loop, `torch.compile` will try to fully
unroll it. This is sometimes desirable, but sometimes it may not even be
possible, for example when we have a dynamic stopping condition, as in a while loop.
In these cases, it may be best to just compile the body of the loop, perhaps
a few iterations at a time (loop unrolling).
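
As a sketch (the update rule here is our own placeholder), this keeps the data-dependent `while` in eager mode and hands only the body to the compiler:

```python
import numpy as np

def step(x):
    # The loop body: this is the part one would compile, e.g. with
    # step = torch.compile(step), instead of compiling the whole loop.
    return x - 0.1 * np.sign(x)

x = np.array([1.0, -1.0])
while np.abs(x).max() > 0.55:  # dynamic stopping condition stays in Python
    x = step(x)
```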

**Debugging NumPy code**. Debugging is rather tricky when a compiler is
involved. To figure out whether an error you are hitting is a `torch.compile`
error, or an error from the program, you can execute your NumPy program without
`torch.compile` by replacing the NumPy import by `import torch._numpy as np`.
This should only be used for **debugging purposes** and is in no way a
replacement for the PyTorch API, as it is **much slower** and, as a private API,
**may change without notice**.

## Differences between NumPy and `torch.compile`d NumPy

[...]

```python
>>> np.asarray([1], dtype=np.int8) + 128
array([129], dtype=int16)
```
NumPy 2.0 is changing these rules to follow others that are closer to those of
PyTorch. The relevant technical document is [NEP 50](https://numpy.org/neps/nep-0050-scalar-promotion.html).
`torch.compile` went ahead and implemented NEP 50 rather than the about-to-be-deprecated rules.

In general, `torch.compile` will match the semantics of the latest NumPy release.
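
The difference is easy to probe. A sketch, under our understanding of the two rule sets (NumPy 1.x silently widens the result, while NumPy 2.0 rejects Python scalars that do not fit the array's dtype):

```python
import numpy as np

a = np.asarray([1], dtype=np.int8)

try:
    # NumPy 1.x value-based promotion: 128 does not fit in int8, so the
    # result is silently widened to int16.
    r = a + 128
    print(r, r.dtype)
except OverflowError:
    # NEP 50 (NumPy 2.0): the array's dtype wins, and an out-of-range
    # Python scalar raises instead of widening the result.
    print("128 is out of bounds for int8")
```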

## Beyond NumPy: SciPy and scikit-learn

In parallel to this effort of making `torch.compile` understand NumPy code, other
Quansight engineers have designed and proposed a way to support PyTorch tensors
within scikit-learn and SciPy. This was received enthusiastically by other
maintainers from these libraries, as it was shown that using PyTorch as a
backend would often yield considerable speed-ups.
Both projects have now merged initial support for PyTorch tensors across a
number of APIs and submodules.

This sets a stepping stone to move towards a future where PyTorch tensors
can be used within other libraries in the Python data ecosystem. Even more,
this will enable running these other libraries on GPUs and even compiling code
mixing these libraries and PyTorch, similar to what we have discussed in
this post.

Note that this initial support is just restricted to a few algorithms in
scikit-learn and to `scipy.cluster` and `scipy.fft` in SciPy.

If you want to learn more about this effort, how to use it, or how to help
move it forward, see this post. [TODO link post]

[...]
PyTorch has committed since its inception to be a framework compatible with the
rest of the Python ecosystem. Enabling the compilation of NumPy programs, and
establishing the tools necessary to do the same for other prominent libraries,
are two more steps in this direction. Quansight and Meta continue working hand
in hand, improving the compatibility between PyTorch and the rest of the
ecosystem.

From Quansight, we would like to thank Meta for funding this project as well as
previous work on improving NumPy compatibility within PyTorch, and the project
that led to supporting PyTorch within scikit-learn and SciPy. These are giant
leaps towards consolidating PyTorch as the framework of choice within the open
source Python data ecosystem.
