Evaluation#

We use the terms from arc42 for the different views.

Building Block View Trajectory Generation and Evaluation#

This is a conceptual view and reflects the target state of the implementation. In the current Flatland implementation, we do not yet distinguish between configuration and state: they go together in RailEnvPersister, and a trajectory currently consists of full snapshots.
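The intended split between configuration (fixed at reset time) and state (evolving per step) can be sketched with plain dataclasses. These are illustrative stand-ins, not the actual Flatland classes; the field names follow the building block view below.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative stand-ins for the building blocks, NOT the real Flatland API.

@dataclass
class EnvConfiguration:
    # settings fixed at reset time
    max_episode_steps: int
    height: int
    width: int

@dataclass
class EnvState:
    # data that evolves as the episode runs (here: a toy rail grid)
    rail: List[List[int]]

@dataclass
class EnvSnapshot:
    # a snapshot pairs one configuration with one state
    config: EnvConfiguration
    state: EnvState

@dataclass
class TrajectoryRecord:
    # a trajectory shares one configuration across many recorded states
    config: EnvConfiguration
    states: List[EnvState] = field(default_factory=list)

config = EnvConfiguration(max_episode_steps=471, height=30, width=30)
snap = EnvSnapshot(config=config, state=EnvState(rail=[[0]]))
traj = TrajectoryRecord(config=config, states=[snap.state])
print(len(traj.states))  # 1
```

With this split, persisting a trajectory only needs the configuration once plus the per-step states, instead of a full snapshot per step.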

        classDiagram
    class Runner {
        +generate_trajectory_from_policy(Policy policy, ObservationBuilder obs_builder, int snapshot_interval)$ Trajectory
    }

    class Evaluator {
        +evaluate(Trajectory trajectory)
    }

    class Trajectory {
        +Path data_dir
        +UUID ep_id
        +run(int from_step, int to_step=-1)
    }

    class EnvSnapshot {
        +Path data_dir
        +UUID ep_id
    }

    class EnvConfiguration {
        +int max_episode_steps
        +int height
        +int width
        +Rewards reward_function
        +MalGen
        +RailGen etc. reset
    }

    class EnvState {
        +Grid rail
    }

    class EnvActions

    class EnvRewards

    EnvSnapshot --> "1" EnvConfiguration
    EnvSnapshot --> "1" EnvState
    Trajectory --> "1" EnvConfiguration
    Trajectory --> "1..*" EnvState
    Trajectory --> "1..*" EnvActions
    Trajectory --> "1..*" EnvRewards

    class Policy {
        +act(int handle, Observation observation)
    }

    class ObservationBuilder {
        +get()
        +get_many()
    }

    class Submission
    Submission --> "1" Policy
    Submission --> ObservationBuilder

Remarks:

  • A trajectory need not start at step 0.

  • A trajectory need not contain a state for every step. However, when starting the trajectory from an intermediate step, a snapshot for that step must exist.
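The second remark can be made concrete: with a snapshot interval of 15, snapshots exist only for steps 0, 15, 30, …, so evaluation can only resume at a multiple of the interval. The helpers below are hypothetical (not part of the Flatland API) and just illustrate the arithmetic.

```python
def snapshot_steps(total_steps: int, snapshot_interval: int):
    """Steps for which a snapshot is written (step 0 is always included)."""
    return list(range(0, total_steps, snapshot_interval))

def can_start_at(step: int, snapshot_interval: int) -> bool:
    """A trajectory can only be resumed at a step that has a snapshot."""
    return step % snapshot_interval == 0

print(snapshot_steps(60, 15))  # [0, 15, 30, 45]
print(can_start_at(15, 15))    # True
print(can_start_at(5, 15))     # False
```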

Runtime View Trajectory Generation#

        flowchart TD
    subgraph Trajectory.create_from_policy
        start((" ")) -->|data_dir| D0
        D0(RailEnvPersister.load_new) -->|env| E{env done?}
        E -->|no:<br/>observations| G{Agent loop:<br/> more agents?}
        G --->|observation| G1(policy.act)
        G1 -->|action| G
        G -->|no:<br/> actions| F3(env.step)
        F3 -->|observations,rewards,info| E
        E -->|yes:<br/> rewards| H((("&nbsp;")))
    end

    style Policy fill: #ffe, stroke: #333, stroke-width: 1px, color: black
    style G1 fill: #ffe, stroke: #333, stroke-width: 1px, color: black
    style Env fill: #fcc, stroke: #333, stroke-width: 1px, color: black
    style F3 fill: #fcc, stroke: #333, stroke-width: 1px, color: black
    subgraph legend
        Env(Environment)
        Policy(Policy)
        Trajectory(Trajectory)
    end

    Trajectory.create_from_policy~~~legend
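The control flow above amounts to a plain rollout loop: ask the policy for one action per agent, step the environment, repeat until the episode is done. The `DummyPolicy` and `DummyEnv` below are minimal stand-ins so the sketch is runnable on its own; the real loop uses `RailEnvPersister.load_new`, `policy.act`, and `env.step` as shown in the diagram.

```python
class DummyPolicy:
    """Stand-in for a Flatland Policy: maps an observation to an action."""
    def act(self, handle, observation):
        return 0  # always "do nothing"

class DummyEnv:
    """Stand-in for a Flatland env with 2 agents that ends after 3 steps."""
    def __init__(self):
        self.agents = [0, 1]
        self._t = 0

    def step(self, actions):
        self._t += 1
        observations = {h: None for h in self.agents}
        rewards = {h: 0.0 for h in self.agents}
        done = {"__all__": self._t >= 3}
        return observations, rewards, done, {}

def run_episode(env, policy, observations):
    """Mirror of the flowchart: agent loop -> env.step, until env is done."""
    total_steps = 0
    while True:
        # Agent loop: collect one action per agent from the policy.
        actions = {handle: policy.act(handle, observations[handle])
                   for handle in env.agents}
        observations, rewards, done, info = env.step(actions)
        total_steps += 1
        if done["__all__"]:
            return total_steps, rewards

env = DummyEnv()
steps, rewards = run_episode(env, DummyPolicy(), {h: None for h in env.agents})
print(steps)  # 3
```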
    

Trajectory Generation and Evaluation#

Create a trajectory from a random policy and inspect the output:

import tempfile
from pathlib import Path
from typing import Any, Optional

from flatland.env_generation.env_generator import env_generator
from flatland.trajectories.trajectories import Policy
from flatland.utils.seeding import np_random, random_state_to_hashablestate
from flatland.evaluators.trajectory_evaluator import TrajectoryEvaluator, evaluate_trajectory
from flatland.trajectories.trajectories import Trajectory
class RandomPolicy(Policy):
    def __init__(self, action_size: int = 5, seed=42):
        super(RandomPolicy, self).__init__()
        self.action_size = action_size
        self.np_random, _ = np_random(seed=seed)

    def act(self, handle: int, observation: Any, **kwargs):
        return self.np_random.choice(self.action_size)
with tempfile.TemporaryDirectory() as tmpdirname:
    data_dir = Path(tmpdirname)
    trajectory = Trajectory.create_from_policy(policy=RandomPolicy(), data_dir=data_dir, snapshot_interval=15)
    # the np_random state in the loaded episode is the same as if it came directly from env_generator (incl. reset())
    env = trajectory.restore_episode()
    gen, _, _ = env_generator()
    assert random_state_to_hashablestate(env.np_random) == random_state_to_hashablestate(gen.np_random)

    
    # inspect output
    for p in sorted(data_dir.rglob("**/*")):
        print(p)

    # inspect the actions taken by the policy
    print(trajectory.read_actions())

    # verify steps 15 to 25 - we can start at 15 as there is a snapshot for step 15
    TrajectoryEvaluator(trajectory).evaluate(start_step=15, end_step=25, tqdm_kwargs={"disable": True})
/opt/hostedtoolcache/Python/3.10.17/x64/lib/python3.10/site-packages/flatland/envs/rail_generators.py:321: UserWarning: Could not set all required cities! Created 1/2
  warnings.warn(city_warning)
/opt/hostedtoolcache/Python/3.10.17/x64/lib/python3.10/site-packages/flatland/envs/rail_generators.py:217: UserWarning: [WARNING] Changing to Grid mode to place at least 2 cities.
  warnings.warn("[WARNING] Changing to Grid mode to place at least 2 cities.")
/tmp/tmpsi4fnwm8/event_logs
/tmp/tmpsi4fnwm8/event_logs/ActionEvents.discrete_action.tsv
/tmp/tmpsi4fnwm8/event_logs/TrainMovementEvents.trains_arrived.tsv
/tmp/tmpsi4fnwm8/event_logs/TrainMovementEvents.trains_positions.tsv
/tmp/tmpsi4fnwm8/outputs
/tmp/tmpsi4fnwm8/serialised_state
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0000.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0015.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0030.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0045.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0060.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0075.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0090.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0105.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0120.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0135.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0150.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0165.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0180.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0195.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0210.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0225.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0240.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0255.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0270.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0285.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0300.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0315.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0330.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0345.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0360.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0375.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0390.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0405.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0420.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0435.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0450.pkl
/tmp/tmpsi4fnwm8/serialised_state/dcc73031-ac30-452e-89aa-d9af8a8a3176_step0465.pkl
                                episode_id  env_time  agent_id  action
0     dcc73031-ac30-452e-89aa-d9af8a8a3176         0         0       3
1     dcc73031-ac30-452e-89aa-d9af8a8a3176         0         1       1
2     dcc73031-ac30-452e-89aa-d9af8a8a3176         0         2       3
3     dcc73031-ac30-452e-89aa-d9af8a8a3176         0         3       2
4     dcc73031-ac30-452e-89aa-d9af8a8a3176         0         4       2
...                                    ...       ...       ...     ...
3292  dcc73031-ac30-452e-89aa-d9af8a8a3176       470         2       0
3293  dcc73031-ac30-452e-89aa-d9af8a8a3176       470         3       2
3294  dcc73031-ac30-452e-89aa-d9af8a8a3176       470         4       3
3295  dcc73031-ac30-452e-89aa-d9af8a8a3176       470         5       4
3296  dcc73031-ac30-452e-89aa-d9af8a8a3176       470         6       4

[3297 rows x 4 columns]
0.0% trains arrived. Expected 0.0%. 24 elapsed steps.

List of Policy implementations#

  • tests.trajectories.test_trajectories.RandomPolicy

  • flatland_baselines.deadlock_avoidance_heuristic.policy.deadlock_avoidance_policy.DeadLockAvoidancePolicy (see flatland-baselines)

  • flatland.ml.ray.wrappers.ray_policy_wrapper and flatland.ml.ray.wrappers.ray_checkpoint_policy_wrapper for wrapping RLlib RLModules.

Flatland Callbacks#

Flatland callbacks can be used for custom metrics and custom postprocessing.

import inspect
from flatland.envs.rail_env import RailEnv
from flatland.callbacks.callbacks import FlatlandCallbacks, make_multi_callbacks
lines, _ = inspect.getsourcelines(FlatlandCallbacks)
print("".join(lines))
class FlatlandCallbacks:
    """
    Abstract base class for Flatland callbacks similar to rllib, see https://github.com/ray-project/ray/blob/master/rllib/callbacks/callbacks.py.

    These callbacks can be used for custom metrics and custom postprocessing.

    By default, all of these callbacks are no-ops.
    """

    def on_episode_start(
        self,
        *,
        env: Optional[Environment] = None,
        data_dir: Path = None,
        **kwargs,
    ) -> None:
        """Callback run right after an Episode has been started.

        This method gets called after `env.reset()`.

        Parameters
        ---------
            env : Environment
                the env
            data_dir : Path
                trajectory data dir
            kwargs:
                Forward compatibility placeholder.
        """
        pass

    def on_episode_step(
        self,
        *,
        env: Optional[Environment] = None,
        data_dir: Path = None,
        **kwargs,
    ) -> None:
        """Called on each episode step (after the action(s) has/have been logged).

        This callback is also called after the final step of an episode,
        meaning when terminated/truncated are returned as True
        from the `env.step()` call.

        The exact time of the call of this callback is after `env.step([action])` and
        also after the results of this step (observation, reward, terminated, truncated,
        infos) have been logged to the given `episode` object.

        Parameters
        ---------
            env : Environment
                the env
            data_dir : Path
                trajectory data dir
            kwargs:
                Forward compatibility placeholder.
        """
        pass

    def on_episode_end(
        self,
        *,
        env: Optional[Environment] = None,
        data_dir: Path = None,
        **kwargs,
    ) -> None:
        """Called when an episode is done (after terminated/truncated have been logged).

        The exact time of the call of this callback is after `env.step([action])`

        Parameters
        ---------
            env : Environment
                the env
            data_dir : Path
                trajectory data dir
            kwargs:
                Forward compatibility placeholder.
        """
        pass
class DummyCallbacks(FlatlandCallbacks):
    def on_episode_step(
        self,
        *,
        env: Optional[RailEnv] = None,
        **kwargs,
    ) -> None:
        if (env._elapsed_steps - 1) % 10 == 0:
            print(f"step{env._elapsed_steps - 1}")
with tempfile.TemporaryDirectory() as tmpdirname:
    data_dir = Path(tmpdirname)
    trajectory = Trajectory.create_from_policy(policy=RandomPolicy(), data_dir=data_dir, snapshot_interval=15)
    TrajectoryEvaluator(trajectory, callbacks=make_multi_callbacks(DummyCallbacks())).evaluate(tqdm_kwargs={"disable": True})

step0
step10
step20
step30
step40
step50
step60
step70
step80
step90
step100
step110
step120
step130
step140
step150
step160
step170
step180
step190
step200
step210
step220
step230
step240
step250
step260
step270
step280
step290
step300
step310
step320
step330
step340
step350
step360
step370
step380
step390
step400
step410
step420
step430
step440
step450
step460
step470
0.0% trains arrived. Expected 0.0%. 470 elapsed steps.

List of FlatlandCallbacks#

  • flatland.callbacks.generate_movie_callbacks.GenerateMovieCallbacks

  • flatland.integrations.interactiveai.interactiveai.FlatlandInteractiveAICallbacks

  • flatland.trajectories.trajectory_snapshot_callbacks.TrajectorySnapshotCallbacks
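Writing your own callback means subclassing FlatlandCallbacks and overriding only the hooks you need, as DummyCallbacks does above. The sketch below collects a custom metric (a step count); it uses a minimal stand-in base class so it runs without Flatland installed — against the real library you would subclass flatland.callbacks.callbacks.FlatlandCallbacks instead.

```python
class CallbacksBase:
    """Minimal stand-in for FlatlandCallbacks (all hooks are no-ops)."""
    def on_episode_start(self, *, env=None, data_dir=None, **kwargs): pass
    def on_episode_step(self, *, env=None, data_dir=None, **kwargs): pass
    def on_episode_end(self, *, env=None, data_dir=None, **kwargs): pass

class StepCounter(CallbacksBase):
    """Custom-metric example: count how many steps the episode ran."""
    def __init__(self):
        self.steps = 0

    def on_episode_step(self, *, env=None, **kwargs):
        self.steps += 1

# Drive the callback the way an evaluator would.
cb = StepCounter()
cb.on_episode_start(env=None)
for _ in range(5):
    cb.on_episode_step(env=None)
cb.on_episode_end(env=None)
print(cb.steps)  # 5
```

To combine several callbacks, pass them through make_multi_callbacks as in the DummyCallbacks example above.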