A modular and transferable reinforcement learning framework for the fleet rebalancing problem

E Skordilis, Y Hou, C Tripp, M Moniot… - IEEE Transactions on Intelligent Transportation Systems, 2021 - ieeexplore.ieee.org
Mobility on demand (MoD) systems show great promise in realizing flexible and efficient urban transportation. However, significant technical challenges arise from the operational decision making associated with MoD vehicle dispatch and fleet rebalancing. For this reason, operators tend to employ simplified algorithms that have been demonstrated to work well in a particular setting. To help bridge the gap between novel and existing methods, we propose a modular framework for fleet rebalancing based on model-free reinforcement learning (RL) that can leverage an existing dispatch method to minimize system cost. In particular, by treating dispatch as part of the environment dynamics, a centralized agent can learn to intermittently direct the dispatcher to reposition free vehicles and mitigate fleet imbalance. We formulate the RL state and action spaces as distributions over a grid partitioning of the operating area, making the framework scalable and avoiding the complexities associated with multiagent RL. Numerical experiments, using real-world trip and network data, demonstrate that RL reduces waiting time by 28% to 38% for same-day evaluation, 17% to 44% for cross-day evaluation, and 22% to 25% for cross-season evaluation compared with the no-rebalancing scenario. This approach has several distinct advantages over baseline methods, including improved system cost, a high degree of adaptability to the selected dispatch method, and the ability to perform scale-invariant transfer learning between problem instances with similar vehicle and request distributions.
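To make the state formulation concrete, the sketch below illustrates one plausible reading of the abstract's "distributions over a grid partitioning of the operating area": free-vehicle positions are binned into grid cells and the counts are normalized into a distribution, so the representation is independent of fleet size (consistent with the scale-invariant transfer the paper claims). Function and parameter names here are illustrative, not taken from the paper.

```python
import numpy as np

def grid_state(positions, bounds, n_rows, n_cols):
    """Map vehicle (x, y) positions to a normalized distribution over grid cells.

    Hypothetical illustration: normalizing cell counts into a distribution
    makes the state invariant to the number of vehicles, which is what
    enables transfer between problem instances of different scales.
    """
    (x_min, x_max), (y_min, y_max) = bounds
    counts = np.zeros((n_rows, n_cols))
    for x, y in positions:
        # Clamp to the last cell so points on the upper boundary stay in range.
        r = min(int((y - y_min) / (y_max - y_min) * n_rows), n_rows - 1)
        c = min(int((x - x_min) / (x_max - x_min) * n_cols), n_cols - 1)
        counts[r, c] += 1
    total = counts.sum()
    return counts / total if total > 0 else counts

# Example: 4 vehicles over a 2x2 grid of the unit square.
state = grid_state([(0.1, 0.1), (0.2, 0.3), (0.9, 0.9), (0.8, 0.1)],
                   bounds=((0.0, 1.0), (0.0, 1.0)), n_rows=2, n_cols=2)
# state sums to 1.0 regardless of fleet size
```

An action in the same spirit would be another distribution over the grid (a target vehicle density) that the existing dispatcher is intermittently asked to move free vehicles toward.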