Releases: Farama-Foundation/Arcade-Learning-Environment
Arcade Learning Environment 0.7.1
This release adds some niceties around Gym as well as expands upon some deprecation warnings which may have been confusing. The biggest change in this release is that the Gym environment is now installed to `gym.envs.atari:AtariEnv`, which is backwards compatible with the previous entry point. Furthermore, users no longer need to import the ALE when constructing a `*-v5` environment. We now use the new Gym environment plugin system for all environments, i.e., v0, v4, v5. Additionally, Gym adds new tools for downloading/installing ROMs. For more info, check out Gym's release notes.
Added
- Added `ale-import-roms --import-from-pkg {pkg}`
- Use `gym.envs.atari` as a namespace package to maintain backwards compatibility with the `AtariEnv` entry point.
- The ALE now uses Gym's environment plugin system in `gym>=0.21` (openai/gym#2383, openai/gym#2409, openai/gym#2411). Users are no longer required to import `ale_py` to use a `-v5` environment.
Changed
- Silence unsupported ROMs warning behind `ImportError`. To view these errors you should now supply the environment variable `PYTHONWARNINGS=default::ImportWarning:ale_py.roms`.
- Reworked ROM error messages to provide more helpful suggestions.
- General metadata changes to the Python package.
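Concretely, the silenced warnings can be re-enabled from the shell before launching Python (the script name below is a placeholder):

```shell
# Unsupported ROMs are now skipped quietly by default. Opt back in to
# the per-ROM warnings by widening the warning filter for ale_py.roms:
PYTHONWARNINGS=default::ImportWarning:ale_py.roms python my_agent.py
```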
Fixed
- Add missing `std::` name qualifier when enabling SDL (@anadrome)
- Fixed mandatory kwarg for `gym.envs.atari:AtariEnv.clone_state`.
Arcade Learning Environment 0.7.0
This release focuses on consolidating the ALE into a cohesive package to reduce fragmentation across the community. To this end, the ALE now distributes native Python wheels, replaces the legacy Atari wrapper in OpenAI Gym, and includes additional features like out-of-the-box SDL support for visualizing your agents.
For a full explainer see our release blog post: https://brosa.ca/blog/ale-release-v0.7.
Added
- Native support for OpenAI Gym
- Native Python interface using pybind11 which results in a speedup for Python workloads as well as proper support for objects like `ALEState`
- Python ROM management, e.g., `ale-import-roms`
- PyPi Python wheels published as `ale-py` + we distribute SDL2 for out-of-the-box visualization + audio support
- `isSupportedROM(path)` to check if a ROM file is supported by the ALE
- Added new games: Atlantis2, Backgammon, BasicMath, Blackjack, Casino, Crossbow, DarkChambers, Earthworld, Entombed, ET, FlagCapture, Hangman, HauntedHouse, HumanCannonball, Klax, MarioBros, MiniatureGolf, Othello, Pacman, Pitfall2, SpaceWar, Superman, Surround, TicTacToe3D, VideoCheckers, VideoChess, VideoCube, WordZapper (thanks @tkoeppe)
- Added (additional) mode/difficulty settings for: Lost Luggage, Turmoil, Tron Dead Discs, Pong, Mr. Do, King Kong, Frogger, Adventure (thanks @tkoeppe)
- Added `cloneState(include_rng)` which will eventually replace `cloneSystemState` (behind the scenes `cloneSystemState` is equivalent to `cloneState(include_rng=True)`).
- Added `setRAM` which can be useful for modifying the environment, e.g., learning a causal model over RAM transitions, altering game dynamics, etc.
Changed
- Rewrote SDL support using SDL2 primitives
  - SDL2 now renders every frame independent of frameskip
  - SDL2 renders at the proper ROM framerate (added benefit of audio sync support)
- Rewrote entire CMake infrastructure which now supports vcpkg natively
- C++ minimum version is now C++17
- Changed all relative imports to absolute imports
- Switched from Travis CI to Github Actions
- Allow for paddle controller's min/max setting to be configurable
- More robust version handling between C++ & Python distributions
- Updated Markdown documentation to replace TeX manual
Fixed
- Fixed bankswitching type for UA cartridges
- Fixed a SwapPort bug in Surround
- Fixed multiple bugs in handling invalid ROM files (thanks @tkoeppe)
- Fixed initialization of TIA static data to make it thread safe (thanks @tkoeppe)
- Fixed RNG initialization; this was one of the last barriers to making the ALE fully deterministic. The ALE is now fully deterministic.
Removed
- Removed FIFO interface
- Removed RL-GLUE support
- Removed ALE CLI interface
- Removed Java interface
- Removed `ALEInterface::load()`, `ALEInterface::save()`. If you require this stack functionality it's easy to implement on your own using `ALEInterface::cloneState(include_rng)`
- Removed os-dependent filesystem code in favour of C++17 `std::fs`
- Removed human control mode
- Removed old makefile build system in favour of CMake
- Removed bspf
- Removed unused controller types: Driving, Booster, Keyboard
- Removed AtariVox
- Removed Stella types (e.g., Array) in favour of STL types
- Remove Stella debugger
- Remove Stella CheatManager
- Lots of code cleanups conforming to best practices (thanks @tkoeppe)
Arcade Learning Environment 0.6.1
This collects a number of minor changes from 0.6.0, spanning about two years.
Changed
- Speedup of up to 30% by optimizing variable types (@qstanczyk)
Fixed
- Fixed switch fall-through with Gravitar lives detection (@lespeholt)
Arcade Learning Environment 0.6.0
This is the first release of a brand new version of the ALE, including modes, difficulties, and a dozen new games.
Added
- Support for modes and difficulties in Atari games (@mcmachado)
- Frame maxpooling as a post-processing option (@skylian)
- Added support for: Turmoil, Koolaid, Tron Deadly Discs, Mr. Do, Donkey Kong, Keystone Kapers, Frogger, Sir Lancelot, Laser Gates, Lost Luggage
- Added MD5 list of supported ROMs
Changed
- Disabled color averaging by default
- Replaced TinyMT with C++11 random
Fixed
- Fixed old color averaging scheme (PR #181)
- Fixed minimal action set in Pong
- Fixed termination issues in Q*Bert
Arcade Learning Environment 0.5.2
This is a minor release of ALE 0.5, meant to reflect a number of bug fixes and PRs that have been added over the last two years. Note that a new major release (0.6) should be released within the next three months.
Added
- Routines for ALEState serialization (@Jragonmiris).
Changed
- Enforce flags existence (@mcmachado).
Fixed
- Fix RNG issues introduced in 0.5.0.
- Additional bug fixes.
Arcade Learning Environment 0.5.1
This is the official release of the Arcade Learning Environment, version 0.5.1. This version sees bug fixes from 0.5.0, additions to the C++ and Python interfaces, and additional error checking. The interfaces should be considered mostly stable, but are likely to see a few tweaks before version 1.0.
Added
- Added RNG serialization capability.
Changed
- Refactored Python getScreenRGB to return unpacked RGB values (@spragunr).
- Sets the default value of the color_averaging flag to be true. It was true by default in previous versions but was changed in 0.5.0. Reverted for backward compatibility.
Fixed
- Bug fixes from ALE 0.5.0.
Arcade Learning Environment 0.5.0
This is the official release of the Arcade Learning Environment, version 0.5.0. This version sees a major code overhaul, including simpler installation, better interfaces, visualization, and optional controller stochasticity. The interfaces should be considered mostly stable, but may see a few tweaks before version 1.0.
Added
- Added action_repeat_stochasticity.
- Added sound playback, visualization.
- Added screen/sound recording ability.
- CMake now available.
- Incorporated Benjamin Goodrich's Python interface.
- Added examples for shared library, Python, fifo, RL-Glue interfaces.
- Incorporated Java agent into main repository.
Changed
- Better ALEInterface.
- Many other changes.
Fixed
- Some game fixes.
Removed
- Removed internal controller, now superseded by shared library interface.
- Removed the following command-line flags: 'output_file', 'system_reset_steps', 'use_environment_distribution', 'backward_compatible_save', internal agent flags
- The flag 'use_starting_actions' was removed and internally its value is always 'true'.
- The flag 'disable_color_averaging' was renamed to 'color_averaging' and FALSE is its default value.