Personal Web: https://www.fit.edu/faculty-profiles/g/weinan-gao/
Weinan Gao received the Ph.D. degree in Electrical Engineering from New York University, Brooklyn, NY, in 2017. Since 2020, he has been an Assistant Professor and Ph.D. advisor in Mechanical and Civil Engineering in the College of Engineering and Science at Florida Institute of Technology, Melbourne, FL. Previously, he was an Assistant Professor of Electrical and Computer Engineering in the Allen E. Paulson College of Engineering and Computing at Georgia Southern University, Statesboro, GA, from 2017 to 2020, and a Visiting Professor at Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, in 2018. His research interests include reinforcement learning, adaptive dynamic programming (ADP), optimal control, cooperative adaptive cruise control, intelligent transportation systems, sampled-data control systems, and output regulation theory. He is a recipient of the Best Paper Award at the IEEE International Conference on Real-time Computing and Robotics (RCAR) in 2018 and the David Goodman Research Award at New York University in 2019. Dr. Gao is currently an Associate Editor of the IEEE/CAA Journal of Automatica Sinica and Neurocomputing, an Early Career Advisory Board Member of Control Engineering Practice, a Guest Editor of Complex & Intelligent Systems, a member of the Editorial Board of Neural Computing and Applications, and a Technical Committee Member of the IEEE Control Systems Society Technical Committee on Nonlinear Systems and Control and of IFAC TC 1.2 on Adaptive and Learning Systems.
Learning-Based Adaptive Optimal Control and Applications to Connected and Autonomous Vehicles
Connected and autonomous vehicle (CAV) technology can prevent secondary crashes and thereby reduce property damage, injuries, congestion, and emissions. Among CAV studies, controller design has attracted considerable attention from researchers in control, optimization, and communication. In this talk, I will introduce several intelligent cruise control design strategies under the framework of reinforcement learning and adaptive dynamic programming to address the adaptive optimal control problem for CAVs. Approximate optimal control policies are learned from online data collected from vehicles, without relying on knowledge of either human-driver or vehicle models. Microscopic traffic simulation results show that these approaches can increase traffic throughput while reducing fuel consumption.
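As a rough illustration of the model-based backbone that ADP methods build on, the sketch below runs Kleinman's policy iteration for a continuous-time LQR problem: alternating policy evaluation (a Lyapunov equation) with policy improvement until the gain converges to the optimal one. The data-driven methods in the talk replace the model-based evaluation step with least-squares estimates from online vehicle data; here the dynamics matrices, the double-integrator model, and the initial gain are all hypothetical choices for demonstration, not the speaker's actual formulation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical double-integrator "vehicle" model (position, velocity);
# illustrative only, not the models used in the talk.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state cost weight
R = np.array([[1.0]])  # input cost weight

def policy_iteration(A, B, Q, R, K0, n_iter=20):
    """Kleinman policy iteration for continuous-time LQR.

    Starting from a stabilizing gain K0, repeatedly:
      1) evaluate the current policy by solving the Lyapunov equation
         (A - B K)' P + P (A - B K) + Q + K' R K = 0,
      2) improve the policy via K = R^{-1} B' P.
    Converges to the solution of the algebraic Riccati equation.
    """
    K = K0
    for _ in range(n_iter):
        Ac = A - B @ K
        # solve_continuous_lyapunov(a, q) solves a X + X a' = q,
        # so pass Ac.T and the negated cost term.
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)  # policy improvement
    return P, K

K0 = np.array([[1.0, 1.0]])  # stabilizing initial gain (assumed)
P, K = policy_iteration(A, B, Q, R, K0)
```

In the model-free ADP variants discussed in the talk, the matrices `A` and `B` are never used directly; the same `P` and `K` updates are instead identified from trajectory data.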