
Releases: Farama-Foundation/Arcade-Learning-Environment

ALE v0.10.1

28 Sep 08:32
6a7e0ae

Reverts the requirement change that pinned numpy < 2.0; the requirement is now numpy > 1.20. Also adds support for building from the source distribution, tar.gz (though this is not recommended).
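For reference, pip can be forced to build from the source distribution rather than a wheel; a minimal sketch (standard pip flags, not specific to this release):

pip install ale-py --no-binary ale-py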

ALE v0.10

24 Sep 09:23
025b282

In v0.10, ALE gains its own dedicated website, https://ale.farama.org/, with the Atari documentation moved over from Gymnasium.

We have moved the project's main code from src into src/ale to make it easier to incorporate ALE into C++ projects. In the Python API, we have updated get_keys_to_action to work with gymnasium.utils.play by changing the no-op key from None to the e key.
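With that change, keyboard play should work out of the box; a minimal sketch using gymnasium.utils.play (the game choice is arbitrary):

import gymnasium as gym
from gymnasium.utils.play import play
import ale_py

gym.register_envs(ale_py)

# play() requires pygame and an rgb_array-rendering environment;
# the key bindings come from the env's get_keys_to_action()
play(gym.make("ALE/Breakout-v5", render_mode="rgb_array"))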

Furthermore, we have updated the API to support continuous actions, thanks to @jjshoots and @psc-g; see https://arxiv.org/pdf/2410.23810 for an analysis of the impact.

Previously, users could only interact with the ALE interface through discrete actions linked to joystick controls, i.e.:

  • All left actions (LEFTDOWN, LEFTUP, LEFT...) -> paddle left max
  • All right actions (RIGHTDOWN, RIGHTUP, RIGHT...) -> paddle right max
  • Up... etc.
  • Down... etc.

However, for games using paddles, this loses the ability to specify non-maximal values for moving left or right. Therefore, this release adds the ability to use continuous actions to both the Python and C++ interfaces (note that this only affects environments with paddles; all other environments ignore the change).

C++ interface changes

Old Discrete ALE interface

reward_t ALEInterface::act(Action action)

New Mixed Discrete-Continuous ALE interface

reward_t ALEInterface::act(Action action, float paddle_strength = 1.0)

Games that do not use the paddle simply ignore the paddle_strength parameter.
This mirrors the real-world scenario where a paddle is connected but the game doesn't react when it is turned.
As a result, backwards compatibility is maintained.

Python interface changes

Old Discrete ALE Python Interface

ale.act(action: int)

New Mixed Discrete-Continuous ALE Python Interface

ale.act(action: int, strength: float = 1.0)
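A minimal sketch of the new call (the ROM path is hypothetical):

from ale_py import ALEInterface

ale = ALEInterface()
ale.loadROM("breakout.bin")  # hypothetical path to a local ROM file

reward = ale.act(3, 1.0)  # a discrete action at full strength, the old behaviour
reward = ale.act(3, 0.5)  # the same action at half paddle strength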

The continuous action space is implemented at the Python level within the Gymnasium environment.

if self.continuous:
    # action is expected to be a [3,] array of floats: (radius, theta, fire)
    x, y = action[0] * np.cos(action[1]), action[0] * np.sin(action[1])
    action_idx = self.map_action_idx(
        left_center_right=(
            -int(x < -self.continuous_action_threshold)
            + int(x > self.continuous_action_threshold)
        ),
        down_center_up=(
            -int(y < -self.continuous_action_threshold)
            + int(y > self.continuous_action_threshold)
        ),
        fire=(action[-1] > self.continuous_action_threshold),
    )
    # the paddle strength is the polar magnitude, not the angle
    ale.act(action_idx, action[0])
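At the Gymnasium level this surfaces as a constructor option; a sketch, assuming the continuous keyword exposed by this release:

import gymnasium as gym
import ale_py

gym.register_envs(ale_py)

# the action space becomes a Box of (radius, theta, fire) instead of Discrete
env = gym.make("ALE/Breakout-v5", continuous=True)
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())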

Full Changelog: v0.9.1...v0.10.0

ALE v0.9.1

01 Aug 10:12
aff5939

This release adds support for NumPy 2.0 by updating the pybind11 version used to compile the wheels to 2.13.1; see #535 for the changes.

We have also added support for compiling against the user's own pybind11 version if one is already installed.

Full Changelog: v0.9.0...v0.9.1

ALE v0.9.0

20 May 14:53
750d7f9

Previously, ALE implemented only a Gym-based environment; however, Gym is no longer maintained (the last commit was 18 months ago). We have therefore updated ale-py to use Gymnasium >= 1.0.0a1 (a maintained fork of Gym) as the sole backend environment implementation. For more information on Gymnasium's API, see their introduction page.

import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # optional; importing ale_py registers the environments, this call just stops IDEs flagging the import as unused

env = gym.make("ALE/Pong-v5", render_mode="human")

obs, info = env.reset()
episode_over = False
while not episode_over:
    action = policy(obs)  # replace with an actual policy, e.g. env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    episode_over = terminated or truncated
env.close()

An important change in this update is that the Atari ROMs are now packaged within the PyPI installation, so users no longer require pip install "gym[accept-rom-license]" (AutoROM) or ale-import-roms to download or load ROMs. This should significantly simplify installing Atari for users. For users who wish to load ROMs from an alternative folder, use the ALE_ROM_DIR environment variable to specify the folder's path.
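For instance, the variable can be set before ale_py is imported; a sketch (the path is hypothetical):

import os

os.environ["ALE_ROM_DIR"] = "/path/to/roms"  # hypothetical folder of *.bin ROM files

import ale_py  # ROMs are now looked up in the folder above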

Importantly, Gymnasium 1.0.0 removes the registration plugin system that ale-py utilised to register the Atari environments behind the scenes. As a result, projects will need to import ale_py to register all the Atari environments before one can be created with gymnasium.make. We understand this will annoy some users; however, the previous method brought significant complexity behind the scenes that the development team believed caused more problems than it solved.

Other changes

  • Added Python 3.12 support.
  • Replaced interactive exit with sys.exit (#498)
  • Fixed C++ documentation example links (#501)
  • Added support for gcc 13 (#503)
  • Unpinned the cmake dependency and removed wheel from the build system (#493)
  • Added missing imports for cstdint (#486)
  • Allowed installing without git (#492)
  • Updated to require importlib-resources for Python < 3.9 (#491)

Full Changelog: v0.8.1...v0.9.0

Arcade Learning Environment 0.8.1

17 Feb 06:36
ba84c14

Added

  • Added type stubs for the native ALE Python module generated via pybind11. You'll now get type hints in your IDE.

Fixed

  • Fixed render_mode attribute on legacy Gym environment (@younik)
  • Fixed a bug where ROM names containing numbers, e.g., TicTacToe3D or Pitfall2, were parsed incorrectly
  • Changed the ROM identifier of VideoChess & VideoCube to match VideoCheckers & VideoPinball.
    Specifically, the environment ID changed from Videochess -> VideoChess and Videocube -> VideoCube.
    Most ROMs already had the correct IDs, video_chess.bin and video_cube.bin; for those that didn't,
    simply run ale-import-roms, which will automatically correct this for you.
  • Reverted back to manylinux2014 (glibc 2.17) to better support older operating systems.

Arcade Learning Environment 0.8.0

09 Sep 11:48
d59d006

Added

  • Added compliance with the Gym v26 API. This includes multiple breaking changes to the Gym API. See the Gym release for additional information.
  • Reworked the ROM plugin API resulting in reduced startup time when importing ale_py.roms.
  • Added a truncation API to the ALE interface to query whether an episode was truncated or terminated (ale.game_over(with_truncation=true/false) and ale.game_truncated()); see the sketch after this list.
  • Added proper Gym truncation on max episode frames. With the new truncation API in Gym v26, this no longer relies on the TimeLimit wrapper.
  • Added a setting for truncating on loss-of-life.
  • Added a setting for clamping rewards.
  • Added const keywords to attributes in ale::ALEInterface (#457) (@AlessioZanga).
  • Added explicit exports via __all__ in ale-py so linting tools can better detect exports.
  • Added builds for Python 3.11.
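A sketch of the truncation queries from the list above, via the Python interface (the ROM path is hypothetical):

from ale_py import ALEInterface

ale = ALEInterface()
ale.setInt("max_num_frames_per_episode", 10)  # force early truncation
ale.loadROM("breakout.bin")  # hypothetical path to a local ROM file

while not ale.game_over(with_truncation=True):  # terminated or truncated
    ale.act(0)  # no-op

print(ale.game_over(with_truncation=False))  # terminated only
print(ale.game_truncated())                  # truncated only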

Fixed

  • Moved the Gym environment entrypoint from gym.envs.atari:AtariEnv to ale_py.env.gym:AtariEnv. This resolves many issues with the namespace package but does break backwards compatibility for some Gym code that relied on the entry point being prefixed with gym.envs.atari.

Arcade Learning Environment 0.7.5

18 Apr 19:30
db37282

Added

  • Added validation for Gym's frameskip values.
  • Made ROM loading more robust with module-level __getattr__ and __dir__.
  • Added py.typed to the Python module's root directory to support type checkers.
  • Bumped SDL to v2.0.16.

Fixed

  • Fixed Gym render mode metadata. (@vwxyzjn)
  • Fixed Gym warnings about seeding.hash_seed and random.randint.
  • Fixed build infrastructure issues from the migration to setuptools>=61.

Removed

  • Removed Gym's .render(mode='human'). Gym now uses the render_mode keyword argument in the environment constructor, as sketched below.
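In other words, the render mode moves from the render call to the constructor; a brief sketch against Gym of that era:

import gym

# before: env.render(mode="human") on an already-constructed environment
# after: the render mode is fixed at construction time
env = gym.make("ALE/Pong-v5", render_mode="human")
env.reset()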

Arcade Learning Environment 0.7.4

17 Feb 04:12
069f8bd

Added

  • Proper C++ namespacing for ALE and Stella (@tuero)
  • vcpkg manifest. You can now install dependencies via cmake.
  • Support for the new Gym (0.22) reset API, i.e., the seed and return_info keyword arguments; see the sketch after this list.
  • Moved cibuildwheel config from GitHub Actions to pyproject.toml.
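A sketch of the Gym 0.22 reset API mentioned in the list above:

import gym

env = gym.make("ALE/Pong-v5")
obs = env.reset(seed=42)  # seeds the episode's RNG
obs, info = env.reset(seed=42, return_info=True)  # additionally returns the info dict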

Fixed

  • Fixed a bug with the terminal signal in ChopperCommand (#434)
  • Fixed warnings with importlib-metadata on Python < 3.9.
  • Reverted the Gym v5 defaults to align with the post-DQN literature. That is, moving from a frameskip of 5 -> 4, and full action set -> minimal action set.

Arcade Learning Environment 0.7.3

02 Nov 22:44
978d2ce

This update includes a minor addition that allows users to load ROMs from a directory specified by the environment variable ALE_PY_ROM_DIR.

Added

  • Added the environment variable ALE_PY_ROM_DIR which, if specified, will search for ROMs in ${ALE_PY_ROM_DIR}/*.bin. (@joshgreaves)

Arcade Learning Environment 0.7.2

07 Oct 23:43
a7a216c

This release includes a bug fix for Windows and Python 3.10 wheels. Note that we no longer build wheels for Python 3.6, which reached end of life in December 2021.

Added

  • Packaged Tetris by Colin Hughes. This ROM is made publicly available by the author, which is useful for other open-source packages to be able to unit test against the ALE. (@tfboyd)
  • Python 3.10 prebuilt wheels

Fixed

  • Fixed an issue with isSupportedROM on Windows which was causing incorrect ROM hashes.

Removed

  • Python 3.6 prebuilt wheels