Background: In most prior research, the autonomous driving problem is solved with either policy gradients or DQN. In this paper, we aimed to address the high variance of policy gradients and the overestimation of action values in DQN. We used DDQN because it has lower variance and mitigates DQN's overestimation problem.
Aim: The main aim of this paper is to propose a framework for an autonomous driving model that takes raw sensor information as input and predicts actions as output, which can then be used to drive the car in simulation.
Objective: The main objective of this paper is to use DDQN together with a discretization technique to solve the autonomous driving problem and obtain better results even with a continuous action space.
Methods: To bridge self-driving cars and reinforcement learning, we used Double Deep Q-Networks (DDQN), which curb the overestimation of values by decoupling action selection from action evaluation. To handle the continuous action space, we used a discretization technique in which continuous variables are grouped into bins, and each bin is assigned a value in such a way that the ordering relationship between the bins is preserved.
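The two ideas in the Methods can be sketched briefly. This is a minimal illustration, not the paper's implementation: the bin count, the use of steering as the discretized variable, and the function names are all assumptions made for the example.

```python
import numpy as np

# --- Discretization (illustrative): map continuous steering in [-1, 1]
# into evenly spaced bins; adjacent bins stay adjacent, so the ordering
# relationship between bin values is preserved. Bin count is hypothetical.
N_BINS = 9
bin_centers = np.linspace(-1.0, 1.0, N_BINS)

def discretize(steer):
    """Return the index of the bin whose center is closest to `steer`."""
    return int(np.argmin(np.abs(bin_centers - steer)))

def undiscretize(idx):
    """Map a bin index back to its continuous steering value."""
    return float(bin_centers[idx])

# --- Double DQN target (standard form): the online network SELECTS the
# next action, while the target network EVALUATES it. This decoupling is
# what reduces the overestimation seen in vanilla DQN.
def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    a_star = int(np.argmax(next_q_online))   # selection: online network
    bootstrap = next_q_target[a_star]        # evaluation: target network
    return reward if done else reward + gamma * float(bootstrap)
```

In vanilla DQN the same (target) network both selects and evaluates the next action, so noise in the Q-estimates is maximized over and the target is biased upward; splitting the two roles across networks removes that coupling.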
Result: The experimental results showed improved performance of the agent. The agent was tested under different conditions, such as curved roads and traffic, and was able to drive under these conditions as well. We also illustrated how DDQN outperformed policy gradients simply by adding a discretization technique to make the action space discrete and by overcoming the overestimation of Q-values.
Conclusion: The gym environment and reward function were designed for DDQN to work. We also used CARLA as a virtual simulator for training. Finally, we demonstrated that our agent performs well across different cases and conditions. As future work, the agent could be extended to obey traffic-light rules and other road-safety measures.