Commit 26b4a95: Update
nv-rliu committed Oct 4, 2024 (1 parent: 469724e)
Showing 2 changed files with 32 additions and 28 deletions.

docs/cugraph/source/nx_cugraph/how-it-works.md

NetworkX has the ability to **dispatch function calls to separately-installed third-party backends**.

NetworkX backends let users experience improved performance and/or additional functionality without changing their NetworkX Python code. Examples include backends that provide algorithm acceleration using GPUs, parallel processing, graph database integration, and more.

While NetworkX is a pure-Python implementation with minimal to no dependencies, backends may be written in other languages and require specialized hardware and/or OS support, additional software dependencies, or even separate services. Installation instructions vary based on the backend, and additional information can be found from the individual backend project pages.


![nxcg-execution-flow](../_static/nxcg-execution-diagram.jpg)

## Enabling nx-cugraph

NetworkX will use `nx-cugraph` as the backend if any of the following are used:

### `NX_CUGRAPH_AUTOCONFIG` environment variable.

The `NX_CUGRAPH_AUTOCONFIG` environment variable can be used to configure NetworkX for full zero-code-change acceleration using `nx-cugraph`. If a NetworkX function that `nx-cugraph` supports is called, NetworkX will redirect the call to `nx-cugraph` automatically, or fall back to another enabled backend or to the default NetworkX implementation. See the [NetworkX documentation on backends](https://networkx.org/documentation/stable/reference/backends.html) for configuring NetworkX manually.

For example, this setting will have NetworkX use `nx-cugraph` for any supported function called by the script, and the default NetworkX implementation for all others:
```bash
bash> NX_CUGRAPH_AUTOCONFIG=True python my_networkx_script.py
```
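
NetworkX can also be configured manually instead; a minimal sketch using NetworkX's own `NETWORKX_BACKEND_PRIORITY` environment variable (a NetworkX feature, not specific to `nx-cugraph`):
```bash
# Try the cugraph backend first for supported functions, otherwise fall back to NetworkX
NETWORKX_BACKEND_PRIORITY=cugraph python my_networkx_script.py
```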

### `backend=` keyword argument

To explicitly specify a particular backend for an API, use the `backend=`
keyword argument. This argument takes precedence over the
`NX_CUGRAPH_AUTOCONFIG` environment variable. This requires anyone
running code that uses the `backend=` keyword argument to have the specified
backend installed.
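
A minimal sketch (the graph and algorithm here are illustrative, not taken from this guide):
```python
import networkx as nx

G = nx.karate_club_graph()  # any NetworkX graph works; this one is illustrative

# Explicitly request the cugraph backend for this single call
scores = nx.betweenness_centrality(G, backend="cugraph")
```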

### Type-based dispatching

NetworkX also supports automatically dispatching to backends associated with specific graph types. This
requires the user to write code for a specific backend, and therefore requires
the backend to be installed, but has the advantage of ensuring a particular
behavior without the potential for runtime conversions.

To use type-based dispatching with `nx-cugraph`, the user must import the backend
directly in their code to access the utilities provided to create a Graph
instance specifically for the `nx-cugraph` backend.

Example (a sketch using the `nx_cugraph.from_networkx` conversion utility):
```python
import networkx as nx
import nx_cugraph as nxcg

G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3)])  # illustrative edges

nxcg_G = nxcg.from_networkx(G)     # conversion happens once, here
nx.betweenness_centrality(nxcg_G)  # the nx-cugraph Graph type causes the cugraph
                                   # backend to be used; no runtime conversion needed
```

## Command Line Example

The same acceleration can be seen from the command line using a small demo script, `bc_demo.ipy`, that builds a citation graph with pandas and NetworkX and times `betweenness_centrality`.
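
A minimal sketch of what `bc_demo.ipy` might contain (the edge-list file name and the `k` sample size are placeholders, not prescribed by this guide):
```python
# bc_demo.ipy -- sketch; "cit-Patents.csv" stands in for a "src dst" edge-list file
import pandas as pd
import networkx as nx

df = pd.read_csv("cit-Patents.csv", sep=" ", names=["src", "dst"], dtype="int32")
G = nx.from_pandas_edgelist(df, source="src", target="dst")

# %time is an IPython magic, which is why the script is run with ipython below
%time result = nx.betweenness_centrality(G, k=10)
```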
Run the command:
```
user@machine:/# ipython bc_demo.ipy
CPU times: user 7min 36s, sys: 5.22 s, total: 7min 41s
Wall time: 7min 41s
```

You will observe a run time of approximately 7 minutes, more or less depending on your CPU.

Run the command again, this time specifying cugraph as the NetworkX backend.
```bash
user@machine:/# NX_CUGRAPH_AUTOCONFIG=True ipython bc_demo.ipy

CPU times: user 4.14 s, sys: 1.13 s, total: 5.27 s
Wall time: 5.32 s
```

This run will be much faster, typically around 4 seconds depending on your GPU.

There is also an option to cache the graph conversion to GPU. This can dramatically improve performance when running multiple algorithms on the same graph. Caching is enabled by default for NetworkX versions 3.4 and later, but if using an older version, set `NETWORKX_CACHE_CONVERTED_GRAPHS=True`.
```bash
user@machine:/# NX_CUGRAPH_AUTOCONFIG=True NETWORKX_CACHE_CONVERTED_GRAPHS=True ipython bc_demo.ipy
```
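
To see the benefit of caching interactively, run more than one supported algorithm on the same graph; the conversion cost is paid only on the first call. A minimal sketch (the graph and algorithm choices are illustrative):
```python
import networkx as nx

G = nx.erdos_renyi_graph(10_000, 0.001, seed=42)  # illustrative graph

# With caching enabled, the GPU conversion happens on the first call only
bc = nx.betweenness_centrality(G, k=10, backend="cugraph")
pr = nx.pagerank(G, backend="cugraph")  # reuses the cached converted graph
```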

When running Python interactively, the cugraph backend can be specified as an argument in the algorithm call.

For example:
```python
nx.betweenness_centrality(cit_patents_graph, k=k, backend="cugraph")
```

```{note}
The examples above were run using the following specs:

NetworkX 3.4
nx-cugraph 24.10
CPU: Intel(R) Xeon(R) Gold 6128 CPU @ 3.40GHz 45GB RAM
GPU: NVIDIA Quadro RTX 8000 80GB RAM
```

The latest list of algorithms supported by `nx-cugraph` can be found in the [cugraph documentation](https://github.com/rapidsai/cugraph/blob/HEAD/python/nx-cugraph/README.md#algorithms), or in the next section.

docs/cugraph/source/nx_cugraph/installation.md

# Installing nx-cugraph

This guide describes how to install ``nx-cugraph`` and use it in your workflows.


More details about system requirements can be found in the [RAPIDS System Requirements Documentation](https://docs.rapids.ai/install#system-req).

## Installing Packages

Read the [RAPIDS Quick Start Guide](https://docs.rapids.ai/install) to learn more about installing all RAPIDS libraries.
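
As one example, `nx-cugraph` can typically be installed with pip from the NVIDIA package index; treat the exact package name and index URL as assumptions that depend on your CUDA version and the RAPIDS release:
```bash
# Sketch: CUDA 12 wheel from the NVIDIA index (check the Quick Start Guide for specifics)
pip install nx-cugraph-cu12 --extra-index-url https://pypi.nvidia.com
```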
