flatland.trajectories.trajectories module#

class flatland.trajectories.trajectories.Trajectory(data_dir: Path, ep_id: str = NOTHING, trains_positions: DataFrame | None = None, actions: DataFrame | None = None, trains_arrived: DataFrame | None = None, trains_rewards_dones_infos: DataFrame | None = None)[source]#

Bases: object

Encapsulates episode data (actions, positions, etc.) for one or more episodes for further analysis/evaluation.

Also known as an episode or a recording.

In contrast to RLlib (ray-project/ray), we use a tabular, TSV-backed approach instead of `dict`s.

Directory structure:

  • event_logs

    ActionEvents.discrete_action – holds the set of actions to be replayed for the related episodes.

    TrainMovementEvents.trains_arrived – holds the success rate for the related episodes.

    TrainMovementEvents.trains_positions – holds the positions for the related episodes.

    TrainMovementEvents.trains_rewards_dones_infos – holds the rewards, dones and infos for the related episodes.

  • serialised_state

    <ep_id>.pkl – holds the pickled environment for the episode.

Indexing:
  • actions for step i are indexed i-1 (i.e. starting at 0)

  • positions before step i are indexed i-1 (i.e. starting at 0)

  • positions after step i are indexed i (i.e. starting at 1)
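A minimal loading sketch, assuming a data_dir laid out as above; the path and episode id are placeholders:

```python
from pathlib import Path

from flatland.trajectories.trajectories import Trajectory

# Hypothetical directory containing event_logs/ and serialised_state/.
trajectory = Trajectory(data_dir=Path("./episodes/run_0"), ep_id="0")
trajectory.load()  # reads the TSV event logs into data frames
```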

action_collect(env_time: int, agent_id: int, action: RailEnvActions)[source]#
action_lookup(env_time: int, agent_id: int) RailEnvActions[source]#

Method used to retrieve the stored action (if available). Defaults to RailEnvActions.MOVE_FORWARD (2).

Parameters#

env_time: int

action going into step env_time

agent_id: int

agent ID

Returns#

RailEnvActions

The action to step the env.
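A hedged usage sketch, continuing from the loading example above (the import path for RailEnvActions is assumed from recent flatland-rl versions):

```python
from flatland.envs.rail_env_action import RailEnvActions

# Action going into step 3 for agent 0; falls back to
# RailEnvActions.MOVE_FORWARD (2) if no action was recorded.
action = trajectory.action_lookup(env_time=3, agent_id=0)
assert isinstance(action, RailEnvActions)
```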

arrived_collect(env_time: int, success_rate: float)[source]#
compare_actions(other: Trajectory, start_step: int | None = None, end_step: int | None = None) DataFrame[source]#
compare_arrived(other: Trajectory, start_step: int | None = None, end_step: int | None = None) DataFrame[source]#
compare_positions(other: Trajectory, start_step: int | None = None, end_step: int | None = None) DataFrame[source]#
compare_rewards_dones_infos(other: Trajectory, start_step: int | None = None, end_step: int | None = None) DataFrame[source]#
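A sketch of diffing two recordings of the same episode; the second data_dir is a placeholder, and the exact shape of the returned DataFrame is an assumption:

```python
other = Trajectory(data_dir=Path("./episodes/run_1"), ep_id="0")
other.load()

# Restrict the comparison to steps 0..10; each compare_* method is
# assumed to return a DataFrame describing the differences.
position_diff = trajectory.compare_positions(other, start_step=0, end_step=10)
print(position_diff)
```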
load(episode_only: bool = False)[source]#
property outputs_dir: Path#
persist()[source]#
position_collect(env_time: int, agent_id: int, position: Tuple[Tuple[int, int], int])[source]#
position_lookup(env_time: int, agent_id: int) Tuple[Tuple[int, int], int][source]#

Method used to retrieve the stored position (if available).

Parameters#

env_time: int

position before (!) step env_time

agent_id: int

agent ID

Returns#

Tuple[Tuple[int, int], int]

The position in the format ((row, column), direction).
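A short usage sketch, continuing from the loading example above:

```python
# Position before (!) step 3 for agent 0, as ((row, column), direction).
(row, column), direction = trajectory.position_lookup(env_time=3, agent_id=0)
```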

restore_episode(start_step: int | None = None) RailEnv | None[source]#

Restore an episode.

Parameters#

start_step: Optional[int]

start from snapshot (if it exists)

Returns#

RailEnv | None

The rail env, or None if the snapshot at the step does not exist.
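A hedged restore-and-step sketch; whether a snapshot exists for a given step depends on how the episode was recorded:

```python
from flatland.envs.rail_env_action import RailEnvActions

# Try to restore from the snapshot at step 5; returns None if
# serialised_state holds no snapshot for that step.
env = trajectory.restore_episode(start_step=5)
if env is not None:
    # Step the restored env with a placeholder action for agent 0.
    obs, rewards, dones, infos = env.step({0: RailEnvActions.MOVE_FORWARD})
```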

rewards_dones_infos_collect(env_time: int, agent_id: int, reward: float, info: Any, done: bool)[source]#
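A sketch of the recording side, assuming an externally driven env loop (all values are placeholders); persist() is assumed to write the collected frames back to data_dir:

```python
from flatland.envs.rail_env_action import RailEnvActions

# Record one step's worth of data for agent 0, then flush to TSV.
trajectory.action_collect(env_time=0, agent_id=0, action=RailEnvActions.MOVE_FORWARD)
trajectory.position_collect(env_time=1, agent_id=0, position=((5, 7), 2))
trajectory.rewards_dones_infos_collect(env_time=1, agent_id=0, reward=-1.0, info={}, done=False)
trajectory.arrived_collect(env_time=42, success_rate=0.5)
trajectory.persist()
```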
trains_arrived_lookup() Series[source]#

Method used to retrieve the trains arrived data for the episode, backed by event_logs/TrainMovementEvents.trains_arrived.tsv.

Returns#

pd.Series

The trains arrived data.
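A short usage sketch, continuing from the loading example above:

```python
# Success-rate record for the episode, backed by
# event_logs/TrainMovementEvents.trains_arrived.tsv.
arrived = trajectory.trains_arrived_lookup()
print(arrived)
```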

trains_rewards_dones_infos_lookup(env_time: int, agent_id: int) Tuple[float, bool, Dict][source]#

Method used to retrieve the stored reward, done flag and info dict (if available) for an agent at a given step.

Parameters#

env_time: int

step for which to look up the reward, done and info

agent_id: int

agent ID

Returns#

Tuple[float, bool, Dict]

The reward, done flag and info dict for the given agent and step.
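A short usage sketch, continuing from the loading example above:

```python
# Reward, done flag and info dict for agent 0 at step 10.
reward, done, info = trajectory.trains_rewards_dones_infos_lookup(env_time=10, agent_id=0)
```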