Flatten Observation

lerax.wrapper.FlattenObservation

Bases: AbstractPureObservationWrapper[Float[Array, ' flat'], StateType, Float[Array, ' ...'], ObsType, MaskType]

Flatten the observation space into a 1-D array.
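For intuition: flattening concatenates every leaf of a structured observation into a single vector, which is what the wrapper's `flatten_sample` call does for the wrapped space. A minimal sketch with NumPy (lerax itself operates on JAX arrays; the observation values below are hypothetical):

```python
import numpy as np

# A structured observation: position (3,), velocity (3,), and a scalar flag.
obs = {
    "position": np.array([1.0, 2.0, 3.0]),
    "velocity": np.array([0.1, 0.2, 0.3]),
    "flag": np.array(1.0),
}

# Flatten every leaf to 1-D and concatenate, mirroring the effect of
# observation_space.flatten_sample on a structured sample.
flat = np.concatenate([np.ravel(v) for v in obs.values()])
print(flat.shape)  # (7,)
```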

Attributes:

    env (AbstractEnvLike): The environment to wrap.
    observation_space (Box): The observation space of the wrapper.

Parameters:

    env (AbstractEnvLike): The environment to wrap. Required.

name property

name: str

Return the name of the environment.

action_space property

action_space: AbstractSpace[ActType, MaskType]

unwrapped property

unwrapped: AbstractEnv

Return the wrapped environment.

env instance-attribute

env: AbstractEnvLike = env

func instance-attribute

func: Callable = self.env.observation_space.flatten_sample

observation_space instance-attribute

observation_space: Box = Box(
    -jnp.inf,
    jnp.inf,
    shape=(
        int(
            jnp.asarray(
                self.env.observation_space.flat_size
            )
        ),
    ),
)
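The definition above shows that the wrapper's observation space is an unbounded `Box` whose length is the wrapped space's `flat_size`. A hypothetical sketch of that computation, assuming `flat_size` is the total number of scalar entries across the leaves of the wrapped space (the leaf shapes below are illustrative):

```python
import math

# Hypothetical leaf shapes of a structured observation space:
# two length-3 vectors and one scalar.
leaf_shapes = [(3,), (3,), ()]

# flat_size: total number of scalar entries across all leaves.
# math.prod(()) == 1, so a scalar leaf contributes one entry.
flat_size = sum(math.prod(shape) for shape in leaf_shapes)
print(flat_size)  # 7
```

The flattened Box would then have `shape=(flat_size,)`, i.e. `(7,)` in this sketch.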

initial

initial(*, key: Key) -> PureObservationState[StateType]

action_mask

action_mask(
    state: PureObservationState[StateType], *, key: Key
) -> MaskType | None

transition

transition(
    state: PureObservationState[StateType],
    action: ActType,
    *,
    key: Key,
) -> PureObservationState[StateType]

observation

observation(
    state: PureObservationState[StateType], *, key: Key
) -> WrapperObsType

reward

reward(
    state: PureObservationState[StateType],
    action: ActType,
    next_state: PureObservationState[StateType],
    *,
    key: Key,
) -> Float[Array, ""]

terminal

terminal(
    state: PureObservationState[StateType], *, key: Key
) -> Bool[Array, ""]

truncate

truncate(
    state: PureObservationState[StateType],
) -> Bool[Array, ""]

state_info

state_info(state: PureObservationState[StateType]) -> dict

transition_info

transition_info(
    state: PureObservationState[StateType],
    action: ActType,
    next_state: PureObservationState[StateType],
) -> dict

default_renderer

default_renderer() -> AbstractRenderer

Return the default renderer for the wrapped environment.

render

render(state: WrapperStateType, renderer: AbstractRenderer)

Render a frame from a state.

render_states

render_states(
    states: Sequence[StateType],
    renderer: AbstractRenderer | Literal["auto"] = "auto",
    dt: float = 0.0,
)

Render a sequence of frames from multiple states.

Parameters:

    states (Sequence[StateType]): A sequence of environment states to render. Required.
    renderer (AbstractRenderer | Literal["auto"]): The renderer to use for rendering. If "auto", uses the default renderer. Default: "auto".
    dt (float): The time delay between rendering each frame, in seconds. Default: 0.0.

render_stacked

render_stacked(
    states: StateType,
    renderer: AbstractRenderer | Literal["auto"] = "auto",
    dt: float = 0.0,
)

Render multiple frames from stacked states.

Stacked states are typically batched states stored in a pytree structure.

Parameters:

    states (StateType): A pytree of stacked environment states to render. Required.
    renderer (AbstractRenderer | Literal["auto"]): The renderer to use for rendering. If "auto", uses the default renderer. Default: "auto".
    dt (float): The time delay between rendering each frame, in seconds. Default: 0.0.
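"Stacked" means each leaf of the state pytree carries a leading batch axis, as produced by e.g. a scanned rollout. A NumPy sketch of stacking individual states leaf-wise (in JAX code this is usually done with `jax.tree` utilities; the dict-of-arrays state below is illustrative):

```python
import numpy as np

# Three individual states, each a dict of arrays (a simple pytree).
states = [{"pos": np.array([float(i), 0.0])} for i in range(3)]

# Stack leaf-wise: every leaf gains a leading axis of length 3.
stacked = {k: np.stack([s[k] for s in states]) for k in states[0]}
print(stacked["pos"].shape)  # (3, 2)
```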

reset

reset(*, key: Key) -> tuple[StateType, ObsType, dict]

Wrap the functional logic into a Gym API reset method.

Parameters:

    key (Key): A JAX PRNG key for any stochasticity in the reset. Required.

Returns:

    tuple[StateType, ObsType, dict]: A tuple of the initial state, initial observation, and additional info.

step

step(
    state: StateType, action: ActType, *, key: Key
) -> tuple[
    StateType,
    ObsType,
    Float[Array, ""],
    Bool[Array, ""],
    Bool[Array, ""],
    dict,
]

Wrap the functional logic into a Gym API step method.

Parameters:

    state (StateType): The current environment state. Required.
    action (ActType): The action to take. Required.
    key (Key): A JAX PRNG key for any stochasticity in the step. Required.

Returns:

    tuple[StateType, ObsType, Float[Array, ""], Bool[Array, ""], Bool[Array, ""], dict]: A tuple of the next state, observation, reward, terminal flag, truncate flag, and additional info.
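The `reset`/`step` pair follows the usual functional Gym-style loop: `reset` yields the initial state, and each `step` threads the state through explicitly. A minimal sketch with a stand-in environment (`CountEnv`, its fixed rewards, and the terminal condition are illustrative, not part of lerax; real lerax envs also take JAX PRNG keys):

```python
class CountEnv:
    """Stand-in env: the state is an int counter; the episode ends after 3 steps."""

    def reset(self, *, key=None):
        state = 0
        return state, float(state), {}  # state, observation, info

    def step(self, state, action, *, key=None):
        next_state = state + 1
        obs = float(next_state)
        reward = 1.0
        terminal = next_state >= 3
        truncate = False
        return next_state, obs, reward, terminal, truncate, {}


env = CountEnv()
state, obs, info = env.reset()
total = 0.0
done = False
while not done:
    # The state is passed in and returned explicitly on every step.
    state, obs, reward, terminal, truncate, info = env.step(state, action=0)
    total += reward
    done = terminal or truncate
print(total)  # 3.0
```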

__init__

__init__(env: AbstractEnvLike)