Reinforcement learning for angle-only intercept guidance of maneuvering targets
Abstract
We present a novel guidance law that uses observations consisting solely of seeker line-of-sight angle measurements and their rate of change. The policy is optimized using reinforcement meta-learning and demonstrated in a simulated terminal phase of a mid-course exo-atmospheric interception. Importantly, the guidance law does not require range estimation, making it particularly suitable for passive seekers. The optimized policy maps stabilized seeker line-of-sight angles and their rate of change directly to commanded thrust for the missile's divert thrusters. Optimization with reinforcement meta-learning allows the policy to adapt to target acceleration, and we demonstrate that it outperforms augmented zero-effort-miss guidance even when the latter is given perfect knowledge of the target's acceleration. The optimized policy is computationally efficient, requires minimal memory, and should be compatible with today's flight processors.
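To make the interface concrete, the following is a minimal sketch of the kind of observation-to-action mapping the abstract describes. Everything here is an illustrative assumption rather than the paper's actual policy: the observation and action dimensions, the feedforward architecture, and the thrust scaling are all hypothetical, and a meta-learned policy would typically carry recurrent state. The point of the sketch is only that no range or range-rate input appears anywhere.

```python
import numpy as np

# Illustrative sketch of an angle-only guidance policy: stabilized seeker
# line-of-sight angles and their rates map directly to divert-thruster
# commands. Architecture, dimensions, and scaling are assumptions, not
# taken from the paper.

rng = np.random.default_rng(0)

OBS_DIM = 4   # assumed: two stabilized LOS angles + their two rates
ACT_DIM = 4   # assumed: commanded thrust for four divert thrusters
HIDDEN = 32

# Randomly initialized weights stand in for trained parameters.
W1 = rng.normal(0.0, 0.1, (HIDDEN, OBS_DIM))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (ACT_DIM, HIDDEN))
b2 = np.zeros(ACT_DIM)

def policy(obs: np.ndarray) -> np.ndarray:
    """Map [theta_u, theta_v, theta_u_dot, theta_v_dot] to thrust commands.

    Note the absence of any range or range-rate input: this is the
    angle-only formulation the abstract emphasizes.
    """
    h = np.tanh(W1 @ obs + b1)
    # tanh bounds the output to [-1, 1]; scale to an assumed thrust limit.
    return 100.0 * np.tanh(W2 @ h + b2)   # newtons, illustrative

# Example observation: small LOS angles (rad) with slow drift rates (rad/s).
obs = np.array([0.01, -0.005, 0.002, 0.001])
print(policy(obs))
```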
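For reference, the augmented zero-effort-miss baseline mentioned above has a standard textbook form (reproduced here for context; the paper's exact baseline may differ in detail). With relative position r, relative velocity v, target acceleration a_T, and time-to-go t_go, the predicted miss and commanded acceleration are:

```latex
% Standard augmented zero-effort-miss (ZEM) guidance, textbook form:
\mathbf{Z} = \mathbf{r} + \mathbf{v}\,t_{\mathrm{go}}
           + \tfrac{1}{2}\,\mathbf{a}_T\,t_{\mathrm{go}}^{2},
\qquad
\mathbf{a}_c = N\,\frac{\mathbf{Z}_{\perp}}{t_{\mathrm{go}}^{2}}
```

where Z_⊥ is the component of Z perpendicular to the line of sight and N is the navigation gain. Note that r, v, and t_go all presuppose range information, which is precisely what the proposed angle-only policy dispenses with.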