Deep Learning-based Eco-driving System for Battery Electric Vehicles


Book Description

Eco-driving strategies based on connected and automated vehicle (CAV) technology, such as Eco-Approach and Departure (EAD), have attracted significant worldwide interest due to their potential to save energy and reduce tailpipe emissions. In this project, the research team developed and tested a deep learning–based trajectory-planning algorithm (DLTPA) for EAD. The DLTPA has two processes: offline (training) and online (implementation), and it is composed of two major modules: 1) a solution feasibility checker that identifies whether there is a feasible trajectory subject to all the system constraints, e.g., maximum acceleration or deceleration; and 2) a regressor to predict the speed of the next time step. Preliminary simulations with the microscopic traffic modeling software PTV VISSIM showed that the proposed DLTPA can approach the optimal solution in terms of energy savings while offering a better balance between energy savings and computational effort, compared both to a baseline scenario where no EAD is implemented and to the energy-optimal solution provided by a graph-based trajectory-planning algorithm.
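The two-module online structure described above can be sketched as follows. This is a hypothetical illustration, not the book's trained deep models: the acceleration limits, time step, and the constant-rate "regressor" are all stand-in assumptions.

```python
# Illustrative sketch of the DLTPA's two online modules: a feasibility
# checker and a next-time-step speed regressor. Constants are assumed.

A_MAX = 2.5   # assumed maximum acceleration, m/s^2
A_MIN = -4.0  # assumed maximum deceleration, m/s^2
DT = 1.0      # planning time step, s

def feasible(v_now, v_target, horizon_s):
    """Module 1: check whether v_target is reachable from v_now within
    horizon_s without violating the acceleration/deceleration limits."""
    required_a = (v_target - v_now) / horizon_s
    return A_MIN <= required_a <= A_MAX

def plan_trajectory(v0, v_target, horizon_s):
    """Module 2 rolled forward: predict the next-step speed repeatedly.
    A constant-rate ramp stands in for the learned regressor."""
    if not feasible(v0, v_target, horizon_s):
        return None  # no trajectory satisfies the constraints
    traj, v, t_left = [v0], v0, horizon_s
    for _ in range(int(horizon_s / DT)):
        a = max(A_MIN, min(A_MAX, (v_target - v) / t_left))
        v += a * DT
        t_left -= DT
        traj.append(round(v, 3))
    return traj

print(plan_trajectory(8.0, 13.0, 5.0))  # feasible: ramps 8 -> 13 m/s
print(plan_trajectory(8.0, 30.0, 5.0))  # infeasible: needs a > A_MAX
```

In the book's offline phase both modules are trained networks; here the feasibility check reduces to a single kinematic inequality for clarity.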




Deep Learning and Its Applications for Vehicle Networks


Book Description

Deep Learning (DL) is an effective approach for AI-based vehicular networks and can deliver a powerful set of tools for handling vehicular network dynamics. In various domains of vehicular networks, DL can be used for learning-based channel estimation, traffic flow prediction, vehicle trajectory prediction, location-prediction-based scheduling and routing, intelligent network congestion control, smart load balancing and vertical handoff control, intelligent network security strategies, and smart, efficient virtual resource allocation as well as intelligent distributed resource allocation methods. This book is based on the work of world-famous experts on the application of DL to vehicle networks. It consists of the following five parts: (I) DL for vehicle safety and security: This part covers the use of DL algorithms for vehicle safety and security. (II) DL for effective vehicle communications: Vehicle networks consist of vehicle-to-vehicle and vehicle-to-roadside communications. This part covers how intelligent vehicle networks require flexible selection of the best path across all vehicles, adaptive sending-rate control based on bandwidth availability, and timely data downloads from a roadside base station. (III) DL for vehicle control: This part discusses the myriad operations that require intelligent control for each individual vehicle, including emission control based on the road traffic situation, charging-pile load prediction through DL, and vehicle speed adjustment based on camera-captured image analysis. (IV) DL for information management: This part covers intelligent information collection and understanding. DL can be used for energy-saving vehicle trajectory control based on the road traffic situation and a given destination; DL-based natural language processing can also support automatic Internet of Things (IoT) search during driving. (V) Other applications.
This part introduces the use of DL models for other vehicle controls. Autonomous vehicles are becoming more and more popular in society, and DL and its variants will play greater roles in cognitive vehicle communications and control. Other machine learning models, such as deep reinforcement learning, will also facilitate intelligent vehicle behavior understanding and adjustment. This book will serve as a valuable reference for understanding this critical field.




Reinforcement Learning in Eco-driving for Connected and Automated Vehicles


Book Description

Connected and Automated Vehicles (CAVs) can significantly improve transportation efficiency by taking advantage of advanced connectivity technologies. Meanwhile, the combination of CAVs and powertrain electrification, such as Hybrid Electric Vehicles (HEVs) and Plug-in Hybrid Electric Vehicles (PHEVs), offers greater potential to improve fuel economy due to the extra control flexibility compared to vehicles with a single power source. In this context, the eco-driving control optimization problem seeks to design the optimal speed and powertrain component usage profiles, based upon the information received via advanced mapping or Vehicle-to-Everything (V2X) communications, that minimize the energy consumed by the vehicle over a given itinerary. To overcome the real-time computational complexity and embrace the stochastic nature of the driving task, this dissertation studies the application and extension of state-of-the-art (SOTA) Deep Reinforcement Learning (Deep RL, DRL) algorithms to the eco-driving problem for a mild HEV. For better training and a more comprehensive evaluation, an RL environment is developed, consisting of a mild HEV powertrain and vehicle dynamics model and a large-scale microscopic traffic simulator. To benchmark the performance of the developed strategies, two causal controllers, namely a baseline strategy representing human drivers and a deterministic optimal-control-based strategy, as well as the non-causal wait-and-see solution, are implemented. In the first RL application, the eco-driving problem is formulated as a Partially Observable Markov Decision Process, and a SOTA model-free DRL (MFDRL) algorithm, Proximal Policy Optimization with Long Short-term Memory as function approximator, is used. Evaluated over 100 trips randomly generated in the city of Columbus, OH, the MFDRL agent shows a 17% fuel economy improvement over the baseline strategy while keeping the average travel time comparable.
While showing performance comparable to the optimal-control-based strategy, the actor of the MFDRL agent offers an explicit control policy that significantly reduces the onboard computation. Subsequently, a model-based DRL (MBDRL) algorithm, Safe Model-based Off-policy Reinforcement Learning (SMORL), is proposed. The algorithm addresses the following issues that emerged from the MFDRL development: a) the cumbersome process necessary to design the reward mechanism, b) the lack of constraint-satisfaction and feasibility guarantees, and c) the low sample efficiency. Specifically, SMORL consists of three key components: a massively parallelizable dynamic programming trajectory optimizer, a value function learned in an off-policy fashion, and a learned safe set as a generative model. Evaluated under the same conditions, the SMORL agent shows a 21% reduction in fuel consumption over the baseline and dominant performance over both the MFDRL agent and the deterministic optimal-control-based controller.
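The reward-design issue mentioned above can be made concrete with a minimal sketch of an eco-driving step reward: the agent is penalized for fuel use and for elapsed time, and a single weight sets the fuel-economy vs. travel-time trade-off. The quadratic fuel model and all constants below are illustrative assumptions, not the dissertation's powertrain model.

```python
# Toy eco-driving step reward: negative fuel use plus a time penalty.
# All constants and the fuel-rate model are illustrative assumptions.

BETA = 0.05  # assumed time-penalty weight ("fuel-equivalent" per second)

def fuel_rate(v, a):
    """Toy fuel-rate model: idle term + drag term + acceleration term."""
    return 0.1 + 0.002 * v**2 + max(0.0, 0.05 * a * v)

def reward(v, a, dt=1.0):
    """Step reward traded off between fuel consumed and time elapsed."""
    return -(fuel_rate(v, a) * dt + BETA * dt)

# Gentle cruising is rewarded over hard acceleration at the same speed:
print(reward(15.0, 0.0))  # cruise
print(reward(15.0, 2.0))  # hard acceleration, lower (more negative) reward
```

Tuning such hand-crafted terms for every scenario is exactly the cumbersome process that motivates the model-based SMORL formulation.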




Deep Reinforcement Learning-based Energy Management for Hybrid Electric Vehicles


Book Description

The urgent need for vehicle electrification and improvement in fuel efficiency has gained increasing attention worldwide. Regarding this concern, the solution of hybrid vehicle systems has proven its value in academic research and industry applications, where energy management plays a key role in taking full advantage of hybrid electric vehicles (HEVs). There are many well-established energy management approaches, ranging from rule-based strategies to optimization-based methods, that provide diverse options for achieving higher fuel economy. However, the research scope for energy management is still expanding with the development of intelligent transportation systems and the improvement in onboard sensing and computing resources. Owing to the boom in machine learning, especially deep learning and deep reinforcement learning (DRL), research on learning-based energy management strategies (EMSs) is gradually gaining momentum. They have shown great promise, not only in dealing with big data but also in generalizing previously learned rules to new scenarios without complex manual tuning. Focusing on learning-based energy management with DRL at its core, this book begins with an introduction to the background of DRL in HEV energy management. The strengths and limitations of typical DRL-based EMSs are identified according to the types of state space and action space in energy management. Accordingly, value-based, policy-gradient-based, and hybrid action space-oriented energy management methods via DRL are discussed, respectively. Finally, a general online integration scheme for DRL-based EMSs is described to bridge the gap between strategy learning in the simulator and strategy deployment on the vehicle controller.
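The state-space/action-space distinction drawn above can be sketched in miniature: a value-based EMS picks from a discrete set of power-split ratios, while a policy-gradient EMS outputs a continuous ratio directly. The convex loss model, the split grid, and the Gaussian policy below are all illustrative assumptions.

```python
# Discrete (value-based) vs. continuous (policy-gradient) action spaces
# for a toy HEV power-split decision. All constants are assumptions.

import random

DISCRETE_SPLITS = [0.0, 0.25, 0.5, 0.75, 1.0]  # engine share of demand

def powertrain_loss(split):
    """Toy convex loss with an assumed efficiency 'sweet spot' at 0.6."""
    return (split - 0.6) ** 2

def value_based_action(q_values):
    """Value-based EMS: argmax over the discrete split candidates."""
    return DISCRETE_SPLITS[q_values.index(max(q_values))]

def policy_gradient_action(mean, std, rng):
    """Policy-gradient EMS: sample a continuous split from a Gaussian
    policy, clipped to the physically valid range [0, 1]."""
    return min(1.0, max(0.0, rng.gauss(mean, std)))

q = [-powertrain_loss(s) for s in DISCRETE_SPLITS]
print(value_based_action(q))                  # closest grid point to 0.6
rng = random.Random(0)
print(policy_gradient_action(0.6, 0.05, rng)) # a continuous value near 0.6
```

The hybrid action spaces the book discusses combine both kinds of output, e.g. a discrete mode choice alongside a continuous torque split.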




Reinforcement Learning-Enabled Intelligent Energy Management for Hybrid Electric Vehicles


Book Description

Powertrain electrification, fuel decarbonization, and energy diversification are techniques spreading all over the world, leading to cleaner and more efficient vehicles. Hybrid electric vehicles (HEVs) are considered a promising technology today to address growing air pollution and energy scarcity. To realize these gains while still maintaining good performance, it is critical for HEVs to have sophisticated energy management systems. Supervised by such a system, HEVs can operate in different modes, such as full-electric mode and power-split mode. Hence, researching and constructing advanced energy management strategies (EMSs) is important for HEV performance. There are a few books about rule- and optimization-based approaches for formulating energy management systems. Most of them concern traditional techniques, and their efforts focus on searching for optimal control policies offline. There is still much room to introduce learning-enabled energy management systems founded in artificial intelligence, together with their real-time evaluation and application. In this book, a series hybrid electric vehicle is taken as the powertrain model to describe and analyze a reinforcement learning (RL)-enabled intelligent energy management system. The proposed system can not only integrate predictive road information but also achieve online learning and updating. Detailed powertrain modeling, predictive algorithms, and online updating technology are covered, and evaluation and verification of the presented energy management system are conducted.




Probabilistic Prediction of Energy Demand and Driving Range for Electric Vehicles with Federated Learning


Book Description

In this work, an extension of the federated averaging algorithm, FedAvg-Gaussian, is applied to train probabilistic neural networks. The performance advantage of probabilistic prediction models is demonstrated, and it is shown that federated learning can improve driving range prediction. Using probabilistic predictions, routing and charge planning based on destination attainability can be applied. Furthermore, it is shown that probabilistic predictions lead to reduced travel time.
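The federated averaging step underlying the approach above can be sketched as follows: each vehicle trains locally, and a server averages the parameters weighted by local data size. This is a minimal illustration of plain FedAvg; the probabilistic (Gaussian-output) head of FedAvg-Gaussian and the actual model architecture are omitted.

```python
# Minimal federated averaging (FedAvg) sketch: a weighted average of
# per-client parameter vectors, weights proportional to local data size.

def fed_avg(client_params, client_sizes):
    """Server aggregation: average each parameter across clients,
    weighted by the number of local training samples."""
    total = sum(client_sizes)
    n_params = len(client_params[0])
    return [
        sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
        for i in range(n_params)
    ]

# Three vehicles with different amounts of driving data (hypothetical):
params = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
print(fed_avg(params, sizes))  # -> [3.5, 4.5]
```

The client with twice the data pulls the average toward its parameters, which is the mechanism that lets vehicles with richer driving histories contribute more to the shared range-prediction model.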




Powertrain and Vehicle Longitudinal Motion Control for Personalized Eco-driving of P0+P4 Mild Hybrid Electric Vehicles


Book Description

Due to the increasing trend of greenhouse gas (GHG) emissions, the United States Environmental Protection Agency (EPA) has begun to publish strict emission regulations for different types of vehicles. Battery electric vehicles (BEVs) have drawn much attention in recent years because they potentially eliminate all tailpipe emissions. However, due to limitations in battery charging speed and capacity, BEV users currently face range anxiety and a lack of charging stations. Hybrid electric vehicles (HEVs), which possess the advantages of both conventional vehicles and BEVs, appear to be a viable solution for meeting such strict emission regulations while mitigating range anxiety. Among all types of hybrid electric powertrain systems, a P0+P4 system possesses distinct advantages: two electric motors located on the front and rear axles allow brake energy to be recovered from both axles. Moreover, the dual-motor configuration enables the driver to switch among front-drive, rear-drive, and all-wheel-drive modes. In particular, a 48V P0+P4 HEV requires less expensive wiring and electric shock protection, and hence is considered the most cost-effective HEV for reducing GHG emissions. This dissertation focuses on improving the energy efficiency, ride comfort, and safety of a 48V P0+P4 mild HEV (MHEV). To achieve these goals, it proposes a hierarchical control design spanning the domains of power-split and vehicle longitudinal motion control of the 48V P0+P4 MHEV. In the power-split domain, two real-time implementable controllers are proposed: (1) an optimization-based controller and (2) a learning-based controller. In the optimization-based control design, an approximated adaptive equivalent consumption minimization strategy (AA-ECMS) with a suboptimal braking distribution derived from dynamic programming (DP) analysis is proposed to capture the globally optimal trends of P0 motor operation and front/rear tire force distribution.
In the learning-based control design, twin delayed deep deterministic policy gradient with prioritized exploration and experience replay (TD3+PEER), a novel prioritized exploration approach, is proposed to encourage the deep reinforcement learning (DRL) agent to explore states with complex dynamics. Both proposed power-split controllers achieve better fuel economy on the test trips than state-of-the-art rule-based and learning-based controllers. In the vehicle longitudinal motion control design, two controllers have been developed using model predictive control (MPC): (1) defensive ecological adaptive cruise control (DEco-ACC) and (2) personalized one-pedal driving (POPD). DEco-ACC is a novel car-following algorithm that balances fuel economy, ride comfort, and avoidance of neighboring vehicles' blind spots. In DEco-ACC, a novel continuous and differentiable penalty function is proposed to describe the projection of several neighboring vehicles' blind spots onto the ego vehicle's traffic lane. The proposed MPC-based controller treats this blind-spot penalty function as a soft constraint within its prediction horizon and is able to decide whether to yield, pass, or stay within the blind spots based on the MPC's cost function and the traffic scenario. POPD is a novel personalized one-pedal-driving method that can learn an individual driver's preference during everyday driving. In POPD, two types of MPC constraints that represent distinct driver behaviors are identified by analyzing data from 450 real-world drivers. The POPD algorithm is then validated in both a simulation environment and a human-in-the-loop (HIL) traffic simulator. These algorithms ensure car-following safety and enhance driver comfort. The energy performance of DEco-ACC and POPD with the proposed power-split algorithms is evaluated in the corresponding chapters.
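One ingredient of the TD3+PEER controller named above, prioritized experience replay, can be sketched in a few lines: transitions are sampled with probability proportional to their TD error, so "surprising" states are revisited more often. The buffer contents, priorities, and the exponent below are illustrative assumptions, not the dissertation's implementation.

```python
# Prioritized experience replay sketch: sample transition indices with
# probability proportional to (|TD error| + eps)^alpha.

import random

def sample_prioritized(buffer, td_errors, k, rng, alpha=0.6, eps=1e-3):
    """Return k sampled indices; larger TD errors are sampled more often."""
    priorities = [(abs(e) + eps) ** alpha for e in td_errors]
    total = sum(priorities)
    weights = [p / total for p in priorities]
    return rng.choices(range(len(buffer)), weights=weights, k=k)

# Hypothetical 4-transition buffer; transition 2 has a large TD error:
buffer = ["t0", "t1", "t2", "t3"]
td_errors = [0.01, 0.02, 5.0, 0.03]
rng = random.Random(0)
idx = sample_prioritized(buffer, td_errors, k=100, rng=rng)
print(idx.count(2))  # the high-error transition dominates the batch
```

Uniform replay would draw each transition about 25 times in 100; the prioritized sampler concentrates on the transition the agent currently predicts worst.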




Assisted Eco-Driving


Book Description

This book discusses an integrative approach combining Human Factors expertise with Automotive Engineering, developing an in-depth case study of designing a fuel-efficient driving intervention. Assisted Eco-Driving: A Practical Guide to the Design and Testing of an Eco-Driving Assistance System offers an examination of an innovative study of feed-forward eco-driving advice based on current vehicle and road environment status. It presents lessons and insights, and uses a documented, scientific, research-led approach to designing novel speed advisory and fuel-use minimisation systems suitable for combustion vehicles, hybrids, and electric vehicles. The audience consists of system designers and those working with interfaces and interactions, UX, human factors and ergonomics, and systems engineering. Automotive academics, researchers, and practitioners will also find this book of interest.




Intelligent Transport Systems


Book Description

This book constitutes the proceedings of the 6th International Conference on Intelligent Transport Systems, INTSYS 2022, held in Lisbon, Portugal, on December 15-16, 2022. With the globalization of trade and transportation and the consequent multi-modal solutions in use, organizations and countries face additional challenges. Intelligent Transport Systems make transport safer, more efficient, and more sustainable by applying information and communication technologies to all transportation modes. The 15 revised full papers in this book were selected from 45 submissions and are organized in three thematic sessions: smart city; transportation modes and AI; and intelligent transportation and electric vehicles.




Fundamentals of Artificial Neural Networks


Book Description

A systematic account of artificial neural network paradigms that identifies fundamental concepts and major methodologies. Important results are integrated into the text in order to explain a wide range of existing empirical observations and commonly used heuristics.