# Cart Pole

`lerax.env.classic_control.CartPole`

Bases: `AbstractClassicControlEnv[CartPoleState, Int[Array, ''], Float[Array, '4']]`

CartPole environment matching the Gymnasium CartPole environment.

> **Note:** To achieve identical dynamics to Gymnasium, set `solver=diffrax.Euler()`.
## Action Space

The action space is discrete with two actions:

- `0`: Push the cart to the left
- `1`: Push the cart to the right

The action applies a fixed-magnitude force to the cart in the chosen direction for the duration of the time step.
## Observation Space

The observation space is a 4-dimensional continuous space representing the state of the cart and pole:

| Index | Observation | Min Value | Max Value |
|---|---|---|---|
| 0 | Cart Position | -4.8 | 4.8 |
| 1 | Cart Velocity | -Inf | Inf |
| 2 | Pole Angle | -24 deg (-0.418 rad) | 24 deg (0.418 rad) |
| 3 | Pole Angular Velocity | -Inf | Inf |

The cart-position and pole-angle bounds are double the corresponding termination thresholds, leaving some margin before an observation leaves the space. These limits can be modified via the `theta_threshold_radians` and `x_threshold` parameters.
## Reward

The reward is 1 for every step taken, including the termination step.
## Termination

The episode terminates when:

- The pole angle exceeds ±12 degrees from vertical.
- The cart position exceeds ±2.4 units from the center.

These values can be modified via the `theta_threshold_radians` and `x_threshold` parameters.
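The termination test above amounts to a simple predicate. A minimal sketch using the default thresholds; the function name and plain-float state are illustrative, not part of the lerax API:

```python
import math

# Default thresholds from the parameter list below.
THETA_THRESHOLD = 12 * 2 * math.pi / 360  # ~0.2094 rad
X_THRESHOLD = 2.4


def is_terminated(x: float, theta: float) -> bool:
    """Episode ends when the cart or pole leaves the allowed range."""
    return abs(x) > X_THRESHOLD or abs(theta) > THETA_THRESHOLD
```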
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `gravity` | `Float[ArrayLike, '']` | The gravity constant. | `9.8` |
| `cart_mass` | `Float[ArrayLike, '']` | The mass of the cart. | `1.0` |
| `pole_mass` | `Float[ArrayLike, '']` | The mass of the pole. | `0.1` |
| `half_length` | `Float[ArrayLike, '']` | The half-length of the pole. | `0.5` |
| `force_mag` | `Float[ArrayLike, '']` | The magnitude of the force applied to the cart. | `10.0` |
| `theta_threshold_radians` | `Float[ArrayLike, '']` | The angle threshold for terminating the episode. | `12 * 2 * jnp.pi / 360` |
| `x_threshold` | `Float[ArrayLike, '']` | The position threshold for terminating the episode. | `2.4` |
| `dt` | `Float[ArrayLike, '']` | The time step for the simulation. | `0.02` |
| `solver` | `diffrax.AbstractSolver \| None` | The differential equation solver used for simulating the dynamics. | `None` |
| `stepsize_controller` | `diffrax.AbstractStepSizeController \| None` | The step size controller for the solver. | `None` |
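With `solver=diffrax.Euler()` the dynamics reduce to the classic explicit-Euler update used by Gymnasium. A pure-Python sketch of one such step with the default parameters above; this re-derivation is for illustration only and is not the lerax implementation:

```python
import math


def euler_step(x, x_dot, theta, theta_dot, action,
               gravity=9.8, cart_mass=1.0, pole_mass=0.1,
               half_length=0.5, force_mag=10.0, dt=0.02):
    """One explicit-Euler step of the classic cart-pole dynamics."""
    force = force_mag if action == 1 else -force_mag  # action 1 pushes right
    total_mass = cart_mass + pole_mass
    polemass_length = pole_mass * half_length
    cos_t, sin_t = math.cos(theta), math.sin(theta)

    temp = (force + polemass_length * theta_dot**2 * sin_t) / total_mass
    theta_acc = (gravity * sin_t - cos_t * temp) / (
        half_length * (4.0 / 3.0 - pole_mass * cos_t**2 / total_mass)
    )
    x_acc = temp - polemass_length * theta_acc * cos_t / total_mass

    # Explicit Euler: positions advance with the pre-update velocities.
    return (x + dt * x_dot,
            x_dot + dt * x_acc,
            theta + dt * theta_dot,
            theta_dot + dt * theta_acc)
```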
## render_states

```python
render_states(
    states: Sequence[StateType],
    renderer: AbstractRenderer | Literal["auto"] = "auto",
    dt: float = 0.0,
)
```
Render a sequence of frames from multiple states.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `states` | `Sequence[StateType]` | A sequence of environment states to render. | *required* |
| `renderer` | `AbstractRenderer \| Literal['auto']` | The renderer to use for rendering. If `"auto"`, uses the default renderer. | `'auto'` |
| `dt` | `float` | The time delay between rendering each frame, in seconds. | `0.0` |
## render_stacked

```python
render_stacked(
    states: StateType,
    renderer: AbstractRenderer | Literal["auto"] = "auto",
    dt: float = 0.0,
)
```
Render multiple frames from stacked states.
Stacked states are typically batched states stored in a pytree structure.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `states` | `StateType` | A pytree of stacked environment states to render. | *required* |
| `renderer` | `AbstractRenderer \| Literal['auto']` | The renderer to use for rendering. If `"auto"`, uses the default renderer. | `'auto'` |
| `dt` | `float` | The time delay between rendering each frame, in seconds. | `0.0` |
## reset
Wrap the functional logic into a Gym API reset method.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `key` | `Key[Array, '']` | A JAX PRNG key for any stochasticity in the reset. | *required* |

Returns:

| Type | Description |
|---|---|
| `tuple[StateType, ObsType, dict]` | A tuple of the initial state, initial observation, and additional info. |
## step

```python
step(
    state: StateType,
    action: ActType,
    *,
    key: Key[Array, ""],
) -> tuple[
    StateType,
    ObsType,
    Float[Array, ""],
    Bool[Array, ""],
    Bool[Array, ""],
    dict,
]
```
Wrap the functional logic into a Gym API step method.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `state` | `StateType` | The current environment state. | *required* |
| `action` | `ActType` | The action to take. | *required* |
| `key` | `Key[Array, '']` | A JAX PRNG key for any stochasticity in the step. | *required* |

Returns:

| Type | Description |
|---|---|
| `tuple[StateType, ObsType, Float[Array, ''], Bool[Array, ''], Bool[Array, ''], dict]` | A tuple of the next state, observation, reward, terminal flag, truncate flag, and additional info. |
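The reward and terminal-flag semantics above (a reward of 1 per step, including the termination step, so the episode return equals the number of steps survived) can be illustrated with a self-contained toy loop. The small-angle pole dynamics and all names here are illustrative, not the lerax implementation:

```python
import math

THETA_THRESHOLD = 12 * 2 * math.pi / 360  # default angle threshold (rad)


def toy_step(theta, theta_dot, dt=0.02, gravity=9.8, half_length=0.5):
    """Uncontrolled pole falling under a small-angle approximation."""
    theta_acc = gravity / half_length * theta
    theta, theta_dot = theta + dt * theta_dot, theta_dot + dt * theta_acc
    reward = 1.0  # +1 every step, including the terminating one
    terminated = abs(theta) > THETA_THRESHOLD
    return theta, theta_dot, reward, terminated


theta, theta_dot = 0.05, 0.0  # start slightly off vertical
total_reward, steps, terminated = 0.0, 0, False
while not terminated and steps < 1000:
    theta, theta_dot, reward, terminated = toy_step(theta, theta_dot)
    total_reward += reward
    steps += 1
# With no control input the pole falls past the threshold after a few
# dozen steps, and total_reward equals the step count.
```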