
Releases: awslabs/Renate

Release 0.5.2

02 Jul 13:38
6e59d3c

Minor release that upgrades the sagemaker, requests, Pillow, and transformers dependencies to address known vulnerabilities.

Release 0.5.1

23 Jan 13:59
65b51a9

Minor release that upgrades the Pillow and transformers libraries to address an untrusted-data vulnerability in transformers<4.36.0 and an arbitrary code execution vulnerability in Pillow<10.2.0.

v0.5.0

05 Dec 11:21
02daeaf

🤩 Highlights

This release focuses on continual learning methods that do not require storing data in a memory buffer. In particular, we implemented methods that work in combination with pre-trained transformer models.
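Memory-free continual learning comes in several flavors; one simple family avoids a replay buffer by regularizing training toward the parameters learned on earlier tasks. The toy snippet below illustrates that general idea with an L2 penalty; it is our own sketch, not any specific method from this release:

```python
def regularized_step(weights, grads, old_weights, lr=0.1, reg=1.0):
    # One gradient step on the current task loss plus an L2 penalty that
    # pulls parameters toward their values after the previous update,
    # so no data from earlier tasks needs to be stored.
    return [w - lr * (g + reg * (w - wo))
            for w, g, wo in zip(weights, grads, old_weights)]

old_weights = [1.0, -2.0]        # parameters after the previous task
weights = list(old_weights)
for _ in range(100):
    # Toy current-task loss: sum_i (w_i - 3)^2, so grad_i = 2 * (w_i - 3).
    grads = [2 * (w - 3.0) for w in weights]
    weights = regularized_step(weights, grads, old_weights)

# Each weight settles between its old value and the new task optimum (3.0).
print([round(w, 3) for w in weights])  # [2.333, 1.333]
```

The penalty strength `reg` trades off stability (staying close to old parameters) against plasticity (fitting the new task); real methods weight this penalty per parameter.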

Full Changelog: v0.4.0...v0.5.0

Release 0.4.0

25 Sep 16:00
7198a92

🤩 Highlights

Renate 0.4.0 adds multi-GPU training via DeepSpeed, data shift detectors, L2P as a new updater, and several new datasets for benchmarking (WildTimeData, CLEAR, DomainNet, 4TextDataset).

📜 Documentation Updates

  • Add doc page and example for shift detection by @lballes in #244
  • Add example of using Renate in your own script by @lballes in #274
  • Describe Installation of Dependencies for Benchmarking by @wistuba in #313
  • Improve title for the NLP example by @610v4nn1 in #416

🏗️ Code Refactoring

  • Remove obsolete set_transforms from memory buffer by @lballes in #265
  • Missing dependency and problem with import by @wistuba in #272
  • Using HuggingFace ViT implementation (#219) by @prabhuteja12 in #303
  • Introduce RenateLightningModule by @wistuba in #301
  • Cleanup iCarl by @wistuba in #358
  • Abstracting prompting transformer for use in L2P and S-Prompt by @prabhuteja12 in #420
  • Adding flags to expose gradient clipping args in Trainer by @prabhuteja12 in #361
  • Wild Time Benchmarks and Small Memory Hack by @wistuba in #363
  • Clean Up Learner Checkpoint and Fix Model Loading by @wistuba in #365
  • Enable Custom Grouping for DataIncrementalScenario by @wistuba in #368
  • Masking of logits of irrelevant classes by @prabhuteja12 in #364
  • Modifies current text transformer implementation to a RenateBenchmarkingModule by @prabhuteja12 in #380
  • Replace memory batch size with a fraction of the total batch size by @wistuba in #359
  • Make offline ER use the total batch size in the first update by @lballes in #381
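The batch-splitting change in #359 can be illustrated with a small helper that derives the memory-replay and current-data batch sizes from one total batch size; the function name and exact behavior here are our own sketch, not Renate's API:

```python
def split_batch(total_batch_size, memory_fraction):
    # Derive per-part batch sizes from one total batch size, so the
    # effective batch size stays constant regardless of the memory share.
    if not 0.0 <= memory_fraction <= 1.0:
        raise ValueError("memory_fraction must be in [0, 1]")
    memory_batch_size = int(total_batch_size * memory_fraction)
    current_batch_size = total_batch_size - memory_batch_size
    return current_batch_size, memory_batch_size

print(split_batch(64, 0.5))   # (32, 32)
print(split_batch(64, 0.25))  # (48, 16)
```

Specifying a fraction rather than an absolute memory batch size keeps the total work per step fixed when the batch size changes.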

Full Changelog: v0.3.1...v0.4.0

Release v0.3.1

06 Jun 14:32
d50e0cd

What's Changed

  • Add a missing dependency and fix a case where a conditional requirement was unnecessarily enforced by @wistuba in #284

Full Changelog: v0.3.0...v0.3.1

Release v0.3.0

24 May 11:33

What's Changed

  • Covariate shift detection by @lballes (#237, #242, #244). Shift detection may help users decide when to update a model. We now provide methods for covariate shift detection in renate.shift.
  • Wild Time benchmarking by @wistuba in #187. Wild Time is a collection of datasets that exhibit temporal data distribution shifts. It is now available for benchmarking in Renate.
  • Improved NLP support by @wistuba (#213, #233). There is now a RenateModule for convenient use of Hugging Face Transformers. NLP datasets and models are now included in the benchmarking suite.
  • Bug fixes and minor improvements.
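The idea behind covariate shift detection can be sketched with a kernel two-sample test: compare new inputs against a reference sample and flag a shift when a distance statistic such as the maximum mean discrepancy (MMD) is large. The toy implementation below only illustrates that idea; the names, the fixed threshold, and the lack of calibration are our own simplifications, not the renate.shift API:

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between rows of x and y.
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # Biased estimate of the squared maximum mean discrepancy (MMD).
    return (rbf_kernel(x, x, bandwidth).mean()
            + rbf_kernel(y, y, bandwidth).mean()
            - 2.0 * rbf_kernel(x, y, bandwidth).mean())

def detect_shift(reference, query, threshold=0.1):
    # Flag covariate shift when the MMD statistic exceeds a threshold.
    # A real detector would calibrate this, e.g. via a permutation test.
    return bool(mmd2(reference, query) > threshold)

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(200, 2))      # "training" inputs
same = rng.normal(0.0, 1.0, size=(200, 2))     # new data, same distribution
shifted = rng.normal(3.0, 1.0, size=(200, 2))  # new data, shifted mean
print(detect_shift(ref, same))     # False
print(detect_shift(ref, shifted))  # True
```

In practice shift detectors operate on model features rather than raw inputs and calibrate the decision threshold on held-out data.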

Full Changelog: v0.2.1...v0.3.0

Release v0.2.1

11 May 10:20
aeae35c

What's Changed

  • Update README.rst with paper ref by @610v4nn1
  • Add doc page explaining NLP example by @lballes
  • Bugfix: removed the need to specify the chunk id, by @wistuba

Full Changelog: v0.2.0...v0.2.1

Release v0.2.0

24 Apr 16:17
f2c7d15

Renate v0.2 is finally here! 🌟
These 88 new commits bring a number of enhancements and fixes.
It has been a great team effort, and we are very happy to see that two more developers decided to contribute to Renate.

Highlights

  • Scalable data buffer (@lballes). Replay-based methods are used in many practical applications, and a larger memory buffer leads to better performance. Renate users can now use a replay memory larger than the physical memory available on their machines, which makes Renate practical in combination with large models and datasets.
  • Avalanche learning strategies are usable in Renate (@wistuba). Avalanche is a continual learning library that aims at making research reproducible. While Renate focuses on real-world applications, it can still be useful for users to compare with the training strategies implemented in Avalanche. Renate now allows the use of Avalanche training strategies, although not all functionality is available for them (see the documentation for details).
  • Simplified interfaces (@610v4nn1, @wistuba). We simplified attribute and method names to make the library more intuitive and easier to use. Usability is always among our priorities, and we will be happy to get more feedback after these changes.
  • Additional tests (@wesk). We increased the amount of testing done for every PR and now run a number of quick training jobs. This allows us to catch problems arising from the interaction between different components of the library, which are usually not caught by unit tests.
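The core trick behind a bounded replay buffer can be sketched with reservoir sampling, which keeps a uniform subsample of an arbitrarily long stream in fixed memory. This is a generic illustration of the concept (class and method names are our own), not Renate's buffer implementation, which additionally supports storage beyond physical memory:

```python
import random

class ReservoirBuffer:
    # Fixed-capacity replay buffer: reservoir sampling keeps a uniform
    # subsample of the stream, so memory use is bounded by `capacity`
    # no matter how many items pass through.
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.num_seen = 0
        self._rng = random.Random(seed)

    def add(self, item):
        self.num_seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Keep the new item with probability capacity / num_seen.
            slot = self._rng.randrange(self.num_seen)
            if slot < self.capacity:
                self.items[slot] = item

    def sample(self, batch_size):
        # Draw a replay mini-batch without replacement.
        return self._rng.sample(self.items, min(batch_size, len(self.items)))

buf = ReservoirBuffer(capacity=100)
for i in range(10_000):
    buf.add(i)
print(len(buf.items))       # 100
print(len(buf.sample(32)))  # 32
```

To scale past physical memory, the same logic applies if `items` holds references to samples serialized on disk rather than the samples themselves.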

There is much more to discover, from the examples using pre-trained text models (the nlp_finetuning folder in the examples) to the additional Scenario classes created to test the algorithms in different environments.

Full Changelog: v0.1.0...v0.2.0

Initial Release

28 Nov 14:15
f3e9302

First public release of Renate.
The library provides the ability to:

  • train and retrain neural network models
  • optimize the hyperparameters when training
  • run training jobs either locally or on Amazon SageMaker

The package also contains documentation, examples, and scripts for experimentation.
