chore(deps): update dependency lightning to v2.3.3 [security] #602

Open · wants to merge 1 commit into base: main
Conversation

@renovate renovate bot commented Aug 6, 2024

This PR contains the following updates:

Package: lightning
Change: 2.1.4 -> 2.3.3

GitHub Vulnerability Alerts

CVE-2024-5980

A vulnerability in the /v1/runs API endpoint of lightning-ai/pytorch-lightning v2.2.4 allows attackers to exploit path traversal when extracting tar.gz files. When the LightningApp is running with the plugin_server, attackers can deploy malicious tar.gz plugins that embed arbitrary files with path traversal vulnerabilities. This can result in arbitrary files being written to any directory in the victim's local file system, potentially leading to remote code execution.
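
For context, the classic tar path-traversal pattern the advisory describes can be guarded against by rejecting archive members that resolve outside the extraction directory. This is an illustrative sketch, not code from lightning:

import os
import tarfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    # Reject any member whose resolved path escapes dest_dir (e.g. "../../etc/cron.d/x")
    dest_dir = os.path.realpath(dest_dir)
    with tarfile.open(archive_path, "r:gz") as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest_dir, member.name))
            if not (target == dest_dir or target.startswith(dest_dir + os.sep)):
                raise ValueError(f"Blocked path traversal attempt: {member.name}")
        tar.extractall(dest_dir)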


Release Notes

Lightning-AI/lightning (lightning)

v2.3.3: Patch release v2.3.3

Compare Source

This release removes the code from the main lightning package that was reported in CVE-2024-5980.

v2.3.2: Patch release v2.3.2

Compare Source

Includes a minor bugfix that avoids a conflict between the entrypoint command and another package (#​20041).

v2.3.1: Patch release v2.3.1

Compare Source

Includes minor bugfixes and stability improvements.

Full Changelog: Lightning-AI/pytorch-lightning@2.3.0...2.3.1

v2.3.0: Lightning v2.3: Tensor Parallelism and 2D Parallelism

Compare Source

Lightning AI is excited to announce the release of Lightning 2.3 ⚡

Did you know? The Lightning philosophy extends beyond a boilerplate-free deep learning framework: We've been hard at work bringing you Lightning Studio. Code together, prototype, train, deploy, host AI web apps. All from your browser, with zero setup.

This release introduces experimental support for Tensor Parallelism and 2D Parallelism, PyTorch 2.3 support, and several bugfixes and stability improvements.

Highlights

Tensor Parallelism (beta)

Tensor parallelism (TP) is a technique that splits up the computation of selected layers across GPUs to save memory and speed up distributed models. To enable TP as well as other forms of parallelism, we introduce a ModelParallelStrategy for both Lightning Trainer and Fabric. Under the hood, TP is enabled through new experimental PyTorch APIs like DTensor and torch.distributed.tensor.parallel.

PyTorch Lightning

Enabling TP in a model with PyTorch Lightning requires you to implement the LightningModule.configure_model() method, where you convert selected layers of a model to parallelized layers. This is an advanced feature because it requires a deep understanding of the model architecture. Open the tutorial Studio to learn the basics of Tensor Parallelism.

Open In Studio

 

import lightning as L
from lightning.pytorch.strategies import ModelParallelStrategy
from torch.distributed.tensor.parallel import ColwiseParallel, RowwiseParallel
from torch.distributed.tensor.parallel import parallelize_module

# 1. Implement the `configure_model()` method in LightningModule
class LitModel(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = FeedForward(8192, 8192)

    def configure_model(self):
        # Lightning will set up a `self.device_mesh` for you
        tp_mesh = self.device_mesh["tensor_parallel"]

        # Use PyTorch's distributed tensor APIs to parallelize the model
        plan = {
            "w1": ColwiseParallel(),
            "w2": RowwiseParallel(),
            "w3": ColwiseParallel(),
        }
        parallelize_module(self.model, tp_mesh, plan)

    def training_step(self, batch):
        ...

# 2. Create the strategy
strategy = ModelParallelStrategy()

# 3. Configure devices and set the strategy in Trainer
trainer = L.Trainer(accelerator="cuda", devices=2, strategy=strategy)
trainer.fit(...)
Full training example (requires at least 2 GPUs).
import torch
import torch.nn as nn
import torch.nn.functional as F

from torch.distributed.tensor.parallel import ColwiseParallel, RowwiseParallel
from torch.distributed.tensor.parallel import parallelize_module

import lightning as L
from lightning.pytorch.demos.boring_classes import RandomDataset
from lightning.pytorch.strategies import ModelParallelStrategy

class FeedForward(nn.Module):
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden_dim, bias=False)
        self.w2 = nn.Linear(hidden_dim, dim, bias=False)
        self.w3 = nn.Linear(dim, hidden_dim, bias=False)

    def forward(self, x):
        return self.w2(F.silu(self.w1(x)) * self.w3(x))

class LitModel(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = FeedForward(8192, 8192)

    def configure_model(self):
        if self.device_mesh is None:
            return

        # Lightning will set up a `self.device_mesh` for you
        tp_mesh = self.device_mesh["tensor_parallel"]

        # Use PyTorch's distributed tensor APIs to parallelize the model
        plan = {
            "w1": ColwiseParallel(),
            "w2": RowwiseParallel(),
            "w3": ColwiseParallel(),
        }
        parallelize_module(self.model, tp_mesh, plan)

    def training_step(self, batch):
        output = self.model(batch)
        loss = output.sum()
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.model.parameters(), lr=3e-3)

    def train_dataloader(self):
        # Trainer configures the sampler automatically for you such that
        # all batches in a tensor-parallel group are identical
        dataset = RandomDataset(8192, 64)
        return torch.utils.data.DataLoader(dataset, batch_size=8, num_workers=2)

strategy = ModelParallelStrategy()
trainer = L.Trainer(
    accelerator="cuda",
    devices=2,
    strategy=strategy,
    max_epochs=1,
)

model = LitModel()
trainer.fit(model)

trainer.print(f"Peak memory usage: {torch.cuda.max_memory_allocated() / 1e9:.02f} GB")

Lightning Fabric

Applying TP in a model with Fabric requires you to implement a special function where you convert selected layers of a model to parallelized layers. This is an advanced feature because it requires a deep understanding of the model architecture. Open the tutorial Studio to learn the basics of Tensor Parallelism.

Open In Studio

 

import lightning as L
from lightning.fabric.strategies import ModelParallelStrategy
from torch.distributed.tensor.parallel import ColwiseParallel, RowwiseParallel
from torch.distributed.tensor.parallel import parallelize_module

# 1. Implement the parallelization function for your model
def parallelize_feedforward(model, device_mesh):
    # Lightning will set up a device mesh for you
    tp_mesh = device_mesh["tensor_parallel"]

    # Use PyTorch's distributed tensor APIs to parallelize the model
    plan = {
        "w1": ColwiseParallel(),
        "w2": RowwiseParallel(),
        "w3": ColwiseParallel(),
    }
    parallelize_module(model, tp_mesh, plan)
    return model

# 2. Pass the parallelization function to the strategy
strategy = ModelParallelStrategy(parallelize_fn=parallelize_feedforward)

# 3. Configure devices and set the strategy in Fabric
fabric = L.Fabric(accelerator="cuda", devices=2, strategy=strategy)
fabric.launch()
Full training example (requires at least 2 GPUs).
import torch
import torch.nn as nn
import torch.nn.functional as F

from torch.distributed.tensor.parallel import ColwiseParallel, RowwiseParallel
from torch.distributed.tensor.parallel import parallelize_module

import lightning as L
from lightning.pytorch.demos.boring_classes import RandomDataset
from lightning.fabric.strategies import ModelParallelStrategy

class FeedForward(nn.Module):
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden_dim, bias=False)
        self.w2 = nn.Linear(hidden_dim, dim, bias=False)
        self.w3 = nn.Linear(dim, hidden_dim, bias=False)

    def forward(self, x):
        return self.w2(F.silu(self.w1(x)) * self.w3(x))

def parallelize_feedforward(model, device_mesh):
    # Lightning will set up a device mesh for you
    tp_mesh = device_mesh["tensor_parallel"]

    # Use PyTorch's distributed tensor APIs to parallelize the model
    plan = {
        "w1": ColwiseParallel(),
        "w2": RowwiseParallel(),
        "w3": ColwiseParallel(),
    }
    parallelize_module(model, tp_mesh, plan)
    return model

strategy = ModelParallelStrategy(parallelize_fn=parallelize_feedforward)
fabric = L.Fabric(accelerator="cuda", devices=2, strategy=strategy)
fabric.launch()

# Initialize the model
model = FeedForward(8192, 8192)
model = fabric.setup(model)

# Define the optimizer
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-3)
optimizer = fabric.setup_optimizers(optimizer)

# Define dataset/dataloader
dataset = RandomDataset(8192, 64)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
dataloader = fabric.setup_dataloaders(dataloader)

# Simplified training loop
for i, batch in enumerate(dataloader):
    output = model(batch)
    loss = output.sum()
    fabric.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
    fabric.print(f"Iteration {i} complete")

fabric.print(f"Peak memory usage: {torch.cuda.max_memory_allocated() / 1e9:.02f} GB")

2D Parallelism (beta)

Tensor Parallelism by itself can be very effective for efficient inference of very large models. For training, TP is typically combined with other forms of parallelism, such as FSDP, to increase throughput and scalability on large clusters with 100s of GPUs. The new ModelParallelStrategy in this release supports the combination of TP + FSDP, which is referred to as 2D parallelism.

For an introduction to this feature, please also refer to the tutorial Studios (PyTorch Lightning, Lightning Fabric). At the moment, the PyTorch team is reimplementing FSDP under the name FSDP2 with the aim of making it compose well with other parallelisms such as TP. Therefore, for the experimental 2D parallelism support, you'll need to switch to using FSDP2 with the new ModelParallelStrategy. Please refer to our docs (PyTorch Lightning, Lightning Fabric) and stay tuned for future releases as these APIs mature.
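
As a rough orientation, here is a minimal sketch of configuring both parallelism dimensions with the new strategy. The data_parallel_size/tensor_parallel_size arguments and the "data_parallel" mesh key follow the docs linked above; treat this as illustrative rather than exhaustive:

import lightning as L
from lightning.pytorch.strategies import ModelParallelStrategy

# 2 x 2 device mesh on 4 GPUs: 2-way sharding (FSDP2) x 2-way tensor parallelism
strategy = ModelParallelStrategy(
    data_parallel_size=2,
    tensor_parallel_size=2,
)
trainer = L.Trainer(accelerator="cuda", devices=4, strategy=strategy)

# Inside LightningModule.configure_model(), both sub-meshes are then available:
#   dp_mesh = self.device_mesh["data_parallel"]    # shard parameters with FSDP2 APIs
#   tp_mesh = self.device_mesh["tensor_parallel"]  # parallelize_module(...) as shown above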

Training Mode in Model Summary

The model summary table that gets displayed when you run Trainer.fit() now contains a new column "Mode" that shows the training mode each layer is in (#​19468).

  | Name                 | Type            | Params | Mode 
-----------------------------------------------------------------
0 | model                | Sam             | 93.7 M | train
1 | model.image_encoder  | ImageEncoderViT | 89.7 M | eval 
2 | model.prompt_encoder | PromptEncoder   | 6.2 K  | train
3 | model.mask_decoder   | MaskDecoder     | 4.1 M  | train
-----------------------------------------------------------------
93.7 M    Trainable params
0         Non-trainable params
93.7 M    Total params
374.942   Total estimated model params size (MB)

A module in PyTorch is always either in train (default) or eval mode.
This improvement should give users more visibility into the state of their model and help debug issues, for example when you need to make sure certain layers of the model are frozen.
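
As a small illustration (module names are hypothetical), a submodule that is frozen and put into eval mode will now be reported as eval in the Mode column:

import lightning as L
import torch.nn as nn

class LitModel(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(128, 128)  # stand-in for a pretrained feature extractor
        self.head = nn.Linear(128, 10)       # trainable classifier head
        # Freeze the backbone: it shows up as "eval" in the summary's Mode column,
        # while the head is listed as "train"
        self.backbone.requires_grad_(False)
        self.backbone.eval()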

Special Forward Methods in Fabric

Until now, Lightning Fabric warned the user if the forward pass of the model, or of a subset of its modules, was conducted through methods other than the dedicated forward method of the PyTorch module. The reason is that PyTorch needs to run special hooks for DDP/FSDP and other strategies to function properly, and bypassing the real forward method would skip these hooks and lead to correctness issues.

In Lightning Fabric 2.3, we added a feature to explicitly mark alternative forward methods so that Fabric can add the necessary rerouting behind the scenes:

import lightning as L

fabric = L.Fabric(devices=2, strategy="ddp")
fabric.launch()

model = MyModel()
model = fabric.setup(model)

# OK: Calling the model directly
output = model(input)

# ERROR: Calling another method that calls forward indirectly
prediction = model.generate(input)

# New: Mark special forward methods explicitly before using them
model.mark_forward_method(model.generate)

# OK: Now can use `model.generate()` in DDP/FSDP without issues
prediction = model.generate(input)

Find the full example and more details in our docs.

Notable Changes

The 2.0 series of Lightning releases guarantees core API stability: no name changes, argument renaming, hook removals, etc. on core interfaces (Trainer, LightningModule, etc.) unless a feature is specifically marked experimental. Here we list a few behavioral changes we made where the change was justified because it significantly improves the user experience, improves performance, or fixes the correctness of a feature. These changes will likely not impact most users.

Skipping the training step in DDP

It is no longer allowed to skip training_step() by returning None in distributed training (#​19918). The following usage was previously possible but would result in unpredictable hangs and timeouts in distributed training:

def training_step(self, batch):
    loss = ...
    if loss.isnan():
        # No longer allowed in multi-GPU!
        # Raises error in Lightning >= 2.3
        return None
    return loss

We decided to raise an error if the user attempts to return None when running in a multi-GPU setting.

Miscellaneous Changes

  • Dropped support for PyTorch 1.13 (#​19300). With every new Lightning release, we add official support for the latest PyTorch stable version and drop the oldest version in our support window.
  • The prepare_data() hook in LightningModule and LightningDataModule is now subject to a barrier without timeout to avoid interrupting long-running tasks (#​19448). Similarly, in Fabric, the Fabric.rank_zero_first context manager now uses an infinite barrier (#​19448); see the sketch below.
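
For illustration, a minimal sketch of using the infinite barrier via Fabric.rank_zero_first (the dataset-preparation helper is hypothetical):

import lightning as L

fabric = L.Fabric(devices=2)
fabric.launch()

# Rank 0 runs the block first; the other ranks wait at a barrier without a
# timeout, so long-running work (e.g. a dataset download) is not interrupted.
with fabric.rank_zero_first():
    dataset = prepare_dataset()  # hypothetical long-running helper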

CHANGELOG

PyTorch Lightning

Added
  • The ModelSummary and RichModelSummary callbacks now display the training mode of each layer in the column "Mode" (#​19468)
  • Added load_from_checkpoint support for LightningCLI when using dependency injection (#​18105)
  • Added robust timer duration parsing with an informative error message when parsing fails (#​19513)
  • Added on_exception hook to LightningDataModule (#​19601)
  • Added support for PyTorch 2.3 (#​19708)
  • Added ModelParallelStrategy to support 2D parallelism (#​19878, #​19888)
  • Added a call to torch.distributed.destroy_process_group in atexit handler if process group needs destruction (#​19931)
  • Added support for configuring hybrid-sharding by passing a tuple for the FSDPStrategy(device_mesh=...) argument (#​19504)
Changed
  • The prepare_data() hook in LightningModule and LightningDataModule is now subject to a barrier without timeout to avoid interrupting long-running tasks (#​19448)
  • Relaxed the requirement for custom batch samplers to expose drop_last for prediction (#​19678)
  • It is no longer allowed to skip training_step() by returning None in distributed training (#​19918)
Removed
  • Removed the Bagua integration (Trainer(strategy="bagua")) (#​19445)
  • Removed support for PyTorch 1.13 (#​19706)
Fixed
  • Fixed a matrix shape mismatch issue when running a model loaded from a quantized checkpoint (bitsandbytes) (#​19886)
  • Fixed WandbLogger.log_hyperparameters() raising an error if hyperparameters are not JSON serializable (#​19769)
  • Fixed an issue with the LightningCLI not being able to set the ModelCheckpoint(save_last=...) argument (#​19808)
  • Fixed an issue causing ValueError for certain objects such as TorchMetrics when dumping hyperparameters to YAML (#​19804)
  • Fixed resetting epoch_loop.restarting to avoid full validation run after LearningRateFinder (#​19818)

Lightning Fabric

Added
  • Added sanitization for classes before logging them as hyperparameters (#​19771)
  • Enabled consolidating distributed checkpoints through fabric consolidate in the new CLI (#​19560)
  • Added the ability to explicitly mark forward methods in Fabric via _FabricModule.mark_forward_method() (#​19690)
  • Added support for PyTorch 2.3 (#​19708)
  • Added ModelParallelStrategy to support 2D parallelism (#​19846, #​19852, #​19870, #​19872)
  • Added a call to torch.distributed.destroy_process_group in atexit handler if process group needs destruction (#​19931)
  • Added support for configuring hybrid-sharding by passing a tuple for the FSDPStrategy(device_mesh=...) argument (#​19504)
Changed
  • Renamed lightning run model to fabric run (#​19442, #​19527)
  • The Fabric.rank_zero_first context manager now uses a barrier without timeout to avoid interrupting long-running tasks (#​19448)
  • Fabric now raises an error if you forget to call fabric.backward() when it is needed by the strategy or precision selection (#​19447, #​19493)
  • _BackwardSyncControl can now control what to do when gradient accumulation is disabled (#​19577)
Removed
  • Removed support for PyTorch 1.13 (#​19706)
Fixed
  • Fixed a matrix shape mismatch issue when running a model loaded from a quantized checkpoint (bitsandbytes) (#​19886)

Full commit list: 2.2.0 -> 2.3.0

Contributors

We thank all our contributors who submitted pull requests for features, bug fixes and documentation updates.

New Contributors
Did you know?

Chuck Norris is a big fan and daily user of Lightning Studio.

v2.2.5: Patch release v2.2.5

Compare Source

PyTorch Lightning + Fabric

Fixed
  • Fixed a matrix shape mismatch issue when running a model loaded from a quantized checkpoint (bitsandbytes) (#​19886)

Full Changelog: Lightning-AI/pytorch-lightning@2.2.4...2.2.5

v2.2.4: Patch release v2.2.4

Compare Source

App

Fixed
  • Fixed HTTPClient retry for flow/work queue (#​19837)

PyTorch

No Changes.

Fabric

No Changes.

Full Changelog: Lightning-AI/pytorch-lightning@2.2.3...2.2.4

v2.2.3: Patch release v2.2.3

Compare Source

PyTorch

Fixed
  • Fixed WandbLogger.log_hyperparameters() raising an error if hyperparameters are not JSON serializable (#​19769)

Fabric

No Changes.

Full Changelog: Lightning-AI/pytorch-lightning@2.2.2...2.2.3

v2.2.2: Patch release v2.2.2

Compare Source

PyTorch

Fixed
  • Fixed an issue causing a TypeError when using torch.compile as a decorator (#​19627)
  • Fixed a KeyError when saving a FSDP sharded checkpoint and setting save_weights_only=True (#​19524)

Fabric

Fixed
  • Fixed an issue causing a TypeError when using torch.compile as a decorator (#​19627)
  • Fixed issue where some model methods couldn't be monkeypatched after being Fabric wrapped (#​19705)
  • Fixed an issue causing weights to be reset in Fabric.setup() when using FSDP (#​19755)

Full Changelog: Lightning-AI/pytorch-lightning@2.2.1...2.2.2

Contributors

@​ankitgola005 @​awaelchli @​Borda @​carmocca @​dmitsf @​dvoytan-spark @​fnhirwa

v2.2.1: Patch release v2.2.1

Compare Source

PyTorch

Fixed
  • Fixed an issue with CSVLogger trying to append to file from a previous run when the version is set manually (#​19446)
  • Fixed the divisibility check for Trainer.accumulate_grad_batches and Trainer.log_every_n_steps in ThroughputMonitor (#​19470)
  • Fixed support for Remote Stop and Remote Abort with NeptuneLogger (#​19130)
  • Fixed infinite recursion error in precision plugin graveyard (#​19542)

Fabric

Fixed
  • Fixed an issue with CSVLogger trying to append to file from a previous run when the version is set manually (#​19446)

Full Changelog: Lightning-AI/pytorch-lightning@2.2.0post...2.2.1

Contributors

@​Raalsky @​awaelchli @​carmocca @​Borda

If we forgot someone due to not matching commit email with GitHub account, let us know :]

v2.2.0.post0: Minor release correction

Compare Source

Full Changelog: Lightning-AI/pytorch-lightning@2.2.0...2.2.0.post0

v2.2.0: Lightning v2.2

Compare Source

Lightning AI is excited to announce the release of Lightning 2.2 ⚡

Did you know? The Lightning philosophy extends beyond a boilerplate-free deep learning framework: We've been hard at work bringing you Lightning Studio. Code together, prototype, train, deploy, host AI web apps. All from your browser, with zero setup.

While our previous release was packed with many big new features, this time around we're rolling out mainly improvements based on feedback from the community. And of course, as the name implies, this release fully supports the latest PyTorch 2.2 🎉

Highlights

Monitoring Throughput

Lightning now has built-in utilities to measure throughput metrics such as batches/sec, samples/sec and Model FLOP Utilization (MFU) (#​18848).

Trainer:

For the Trainer, this comes in the form of a ThroughputMonitor callback. In order to track samples/sec, you need to provide a function to tell the monitor how to extract the batch dimension from your input. Furthermore, if you want to track MFU, you can provide a sample forward pass and the ThroughputMonitor will automatically estimate the utilization based on the hardware you are running on:

import lightning as L
from lightning.pytorch.callbacks import ThroughputMonitor
from lightning.fabric.utilities.throughput import measure_flops

class MyModel(L.LightningModule):
    def setup(self, stage):
        with torch.device("meta"):
            model = MyModel()

        def sample_forward():
            batch = torch.randn(..., device="meta")
            return model(batch)

        self.flops_per_batch = measure_flops(model, sample_forward, loss_fn=torch.Tensor.sum)

throughput = ThroughputMonitor(
    batch_size_fn=lambda batch: batch.size(0),
    # optional, if your samples have a length (like number of tokens)
    sample_fn=lambda batch: batch.size(1)
)
trainer = L.Trainer(log_every_n_steps=10, callbacks=throughput, logger=...)
model = MyModel()
trainer.fit(model)

The results get automatically sent to the logger if one is configured on the Trainer.

Fabric:

For Fabric, the ThroughputMonitor is a simple utility object on which you call .update() and .compute_and_log() during the training loop:

from time import time

import lightning as L
from lightning.fabric.utilities import ThroughputMonitor

fabric = L.Fabric(logger=...)
throughput = ThroughputMonitor(fabric)

t0 = time()
for batch_idx, batch in enumerate(train_dataloader):
    do_work()
    torch.cuda.synchronize()  # required or else time() won't be correct
    throughput.update(
        time=(time() - t0), 
        batches=batch_idx, 
        samples=(batch_idx * batch_size)
    )
    if batch_idx % 10 == 0:
        throughput.compute_and_log(step=batch_idx)

Check out our TinyLlama LLM pretraining script for a full example using Fabric's ThroughputMonitor.

The throughput utilities can report:

  • batches per second (per process and across processes)
  • samples per second (per process and across processes)
  • items per second (e.g. tokens) (per process and across processes)
  • flops per second (per process and across processes)
  • model flops utilization (MFU) (per process)
  • total time, total samples, total batches, and total items (per process)

Improved Handling of Evaluation Mode

When you train a model and have validation enabled, the Trainer automatically calls .eval() when transitioning to the validation loop, and .train() when validation ends. Until now, this had the unfortunate side effect that any submodules in your LightningModule that were in evaluation mode got reset to train mode. In Lightning 2.2, the Trainer now captures the mode of every submodule before switching to validation, and restores the mode the modules were in when validation ends (#​18951). This improvement helps users avoid silent correctness bugs and removes boilerplate code for managing frozen layers.

import lightning as L

class LitModel(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.trainable_module = ...

        # This will now stay in eval mode
        self.frozen_module = ...
        self.frozen_module.eval()
        
    def training_step(self, batch):
        # Previously, modules were all in train mode
        # Now: modules are in the mode they were set up with
        assert self.trainable_module.training
        assert not self.frozen_module.training
        ...
        
    def validation_step(self, batch):
        # All modules are in eval mode
        ...
    
    
model = LitModel()
trainer = L.Trainer()
trainer.fit(model)

If you have overridden any of the LightningModule.on_{validation,test,predict}_model_{eval,train} hooks, they will still get called and execute your custom logic, but they are no longer required if you added them to preserve the eval mode of frozen modules.

[!IMPORTANT]
In some libraries, for example HuggingFace, models are created in evaluation mode by default (e.g. HFModel.from_pretrained(...)). Starting from 2.2, you will have to set .train() on these models if you intend to train them.
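
For example, a minimal sketch with Hugging Face Transformers (AutoModel and the checkpoint name come from that library, not from Lightning):

import lightning as L
from transformers import AutoModel

class LitFineTuner(L.LightningModule):
    def __init__(self):
        super().__init__()
        # from_pretrained() returns the model in eval mode by default
        self.backbone = AutoModel.from_pretrained("bert-base-uncased")
        # Since Lightning 2.2 preserves submodule modes, switch to train mode
        # explicitly if you intend to fine-tune the backbone
        self.backbone.train()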

Converting FSDP Checkpoints

In the previous release, we introduced distributed checkpointing with FSDP to speed up saving and loading checkpoints for big models. These checkpoints are in a special format saved in a folder with shards from each GPU in a separate file. While these checkpoints can be loaded back with Lightning Trainer or Fabric very easily, they aren't easy to load or process externally. In Lightning 2.2, we introduced a CLI utility that lets you consolidate the checkpoint folder to a single file that can be loaded in raw PyTorch with torch.load() for example (#​19213).

Given you saved a distributed checkpoint, you can then convert it like so:

# For Trainer checkpoints:
python -m lightning.pytorch.utilities.consolidate_checkpoint path/to/my/checkpoint

# For Fabric checkpoints:
python -m lightning.fabric.utilities.consolidate_checkpoint path/to/my/checkpoint

Read more about distributed checkpointing in our documentation: Trainer, Fabric.

Improvements to Compiling DDP/FSDP in Fabric

PyTorch 2.0+ introduced torch.compile, a powerful tool to speed up your models without changing the code.
We have now added a comprehensive guide on how to use torch.compile correctly, with tips and tricks to help you troubleshoot common issues. On top of that, Fabric.setup() will now reapply torch.compile on top of DDP/FSDP if you enable these strategies (#​19280).

import torch
import lightning as L

# Select a distributed strategy (DDP, FSDP, ...)
fabric = L.Fabric(strategy="ddp", devices=8)

# Compile your model before `.setup()`
model = torch.compile(model)

# Now automatically handles compiling also over DDP/FSDP
model = fabric.setup(model)

# You can opt-out if it is causing trouble
model = fabric.setup(model, _reapply_compile=False)

You might see fewer graph breaks, but there won't be any significant speed-ups with this. We introduced this mainly to make Fabric ready for future PyTorch improvements to optimizing distributed operations.

Saving and Loading DataLoader State

If you use a dataloader/iterable that implements the .state_dict() and .load_state_dict() interface, the Trainer will now automatically save and load their state in the checkpoint (#​19361).

import lightning as L

class MyDataLoader:
    """A dataloader that implements the 'stateful' interface."""
    
    def state_dict(self):
        # Return a dictionary with state
        return {"batches_fetched": ...}

    def load_state_dict(self, state_dict):
        # Load the state from the checkpoint
        self.batches_fetched = state_dict["batches_fetched"]

model = ...
dataloader = MyDataLoader()
trainer = L.Trainer()

# Saves checkpoints that include the dataloader state
trainer.fit(model, dataloader)

# When you resume training, the dataloader can now load its state
trainer.fit(model, dataloader, ckpt_path="path/to/my/checkpoint")

Note that the standard PyTorch DataLoader does not support this stateful interface. This feature only works on loaders that implement these two methods. A dataloader that supports full fault-tolerance will be included in our upcoming release of Lightning Data - a library to optimize data preprocessing and streaming in the cloud. Stay tuned!

Non-strict Checkpoint Loading in Trainer

A feature that has been requested for a long time by the community is non-strict checkpoint loading. By default, a checkpoint in PyTorch is loaded with strict=True to ensure all keys in the saved checkpoint match what's in the model's state dict.
However, in some use cases it might make sense to exclude certain weights from being included in the checkpoint. When resuming training, the user would then be required to set strict=False, which wasn't configurable until now.

You can now set the attribute strict_loading=False on your LightningModule if you want to allow loading partial checkpoints (#​19404).

import lightning as L

class LitModel(L.LightningModule):
    def __init__(self):
        super().__init__()
        # This model only trains the decoder, we don't save the encoder
        self.encoder = from_pretrained(...).requires_grad_(False)
        self.decoder = Decoder()

        # Set to False because we only care about the decoder
        self.strict_loading = False

    def state_dict(self):
        # Don't save the encoder, it is not being trained
        return {k: v for k, v in super().state_dict().items() if "encoder" not in k}

...

trainer = L.Trainer()
model = LitModel()

# Will load weights with `.load_state_dict(strict=model.strict_loading)`
trainer.fit(model, ckpt_path="path/to/checkpoint")

Full documentation here.

Notable Changes

The 2.0 series of Lightning releases guarantees core API stability: no name changes, argument renaming, hook removals, etc. on core interfaces (Trainer, LightningModule, etc.) unless a feature is specifically marked experimental. Here we list a few behavioral changes we made where the change was justified because it significantly improves the user experience, improves performance, or fixes the correctness of a feature. These changes will likely not impact most users.

ModelCheckpoint's save-last Feature

In Lightning 2.1, we made the ModelCheckpoint(..., save_last=True) feature save a symbolic link to the last saved checkpoint instead of rewriting the checkpoint (#​18748). This time saver is especially useful for large models that take a while to save. However, many users were confused by the new behavior and wanted it turned off, saving a copy instead of a symbolic link like before. In Lightning 2.2, we are reverting this decision and making the linking opt-in (#​19191):

from lightning.pytorch.callbacks import ModelCheckpoint

# In 2.1 saves a symbolic link "last.ckpt" to the last checkpoint saved
# In 2.2 saves "last.ckpt" as a copy of the last checkpoint saved
checkpoint = ModelCheckpoint("./my_checkpoints", save_last=True)

# You can opt-in to save a symlink (if possible)
checkpoint = ModelCheckpoint("./my_checkpoints", save_last="link")

Removed Problematic Default Seeding

The seed_everything(x) utility function is useful for setting the seed for several libraries like PyTorch, NumPy and Python in a single line of code. However, until now you were allowed to omit passing a seed value, in which case the function picked a seed randomly. In certain cases, for example when processes are launched externally (e.g., SLURM, torchelastic etc.), this default behavior is dangerous because each process will independently choose a random seed. This can affect sampling, randomized validation splits, and other behaviors that rely on each process having the same seed. In 2.2, we removed this default behavior and now default to a seed value of 0 (#​18846):

from lightning.pytorch.utilities import seed_everything

# Set the random seed for PyTorch, NumPy, Python etc.
seed_everything(42)

# Not setting a value now defaults to 0
seed_everything()

In the unlikely event that you relied on the previous behavior, you now have to choose the seed randomly yourself:

import random

seed_everything(random.randint(0, 1000000))

Miscellaneous Changes

  • Dropped support for PyTorch 1.12 (#​19300)
  • The columns in the metrics.csv file produced by CSVLogger are now sorted alphabetically (#​19159)
  • Added support for meta-device initialization and materialization of 4-bit Bitsandbytes layers (#​19150)
  • Added TransformerEnginePrecision(fallback_compute_dtype=) to control the dtype of operations that don't support fp8 (#​19082)
  • We renamed the TransformerEnginePrecision(dtype=) argument to weights_dtype and made it required (#​19082)
  • The LightningModule.load_from_checkpoint() function now calls .configure_model() on the model if it is overridden, to ensure all layers can be loaded from the checkpoint (#​19036)

CHANGELOG

PyTorch Lightning

Added
  • Added lightning.pytorch.callbacks.ThroughputMonitor to track throughput and log it (#​18848)
  • The Trainer now restores the training mode set through .train() or .eval() on a submodule-level when switching from validation to training (#​18951)
  • Added support for meta-device initialization and materialization of 4-bit Bitsandbytes layers (#​19150)
  • Added TransformerEnginePrecision(fallback_compute_dtype=) to control the dtype of operations that don't support fp8 (#​19082)
  • Added the option ModelCheckpoint(save_last='link') to create a symbolic link for the 'last.ckpt' file (#​19191)
  • Added a utility function and CLI to consolidate FSDP sharded checkpoints into a single file (#​19213)
  • The TQDM progress bar now respects the env variable TQDM_MINITERS for setting the refresh rate (#​19381)
  • Added support for saving and loading stateful training DataLoaders (#​19361)
  • Added shortcut name strategy='deepspeed_stage_1_offload' to the strategy registry (#​19075)
  • Added support for non-strict state-dict loading in Trainer via the new LightningModule.strict_loading = True | False attribute (#​19404)
Changed
  • seed_everything() without passing in a seed no longer randomly selects a seed, and now defaults to 0 (#​18846)
  • The LightningModule.on_{validation,test,predict}_model_{eval,train} now only get called if they are overridden by the user (#​18951)
  • The Trainer.fit() loop no longer calls LightningModule.train() at the start; it now preserves the user's configuration of frozen layers (#​18951)
  • The LightningModule.load_from_checkpoint() function now calls .configure_model() on the model if it is overridden, to ensure all layers can be loaded from the checkpoint (#​19036)
  • Restored usage of step parameter when logging metrics with NeptuneLogger (#​19126)
  • Changed the TransformerEnginePrecision(dtype=) argument to weights_dtype and made it required (#​19082)
  • The columns in the metrics.csv file produced by CSVLogger are now sorted alphabetically (#​19159)
  • Reverted back to creating a checkpoint copy when ModelCheckpoint(save_last=True) instead of creating a symbolic link (#​19191)
Deprecated
  • Deprecated all precision plugin classes under lightning.pytorch.plugins with the suffix Plugin in the name (#​18840)
Removed
  • Removed support for PyTorch 1.12 (#​19300)
Fixed
  • Fixed issue where the precision="transformer-engine" argument would not replace layers by default (#​19082)
  • Fixed issue where layers created in LightningModule.setup or LightningModule.configure_model wouldn't get converted when using the Bitsandbytes or TransformerEngine plugins (#​19061)
  • Fixed the input validation logic in FSDPStrategy to accept a device_mesh (#​19392)

Lightning Fabric

Added
  • Added lightning.fabric.utilities.ThroughputMonitor and lightning.fabric.utilities.Throughput to track throughput and log it (#​18848)
  • Added lightning.fabric.utilities.AttributeDict for convenient dict-attribute access to represent state in script (#​18943)
  • Added support for meta-device initialization and materialization of 4-bit Bitsandbytes layers (#​19150)
  • Added TransformerEnginePrecision(fallback_compute_dtype=) to control the dtype of operations that don't support fp8 (#​19082)

Configuration

📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.
