Developing an Autonomous Stack for Cooperative Automated Driving
- Aman Kumar Singh
- Oct 25, 2023
- 3 min read
Updated: Feb 27
The field of autonomous driving has witnessed remarkable advancements in recent years, thanks to the integration of cutting-edge technologies such as Machine Learning (ML), Deep Learning (DL), Computer Vision, and Robotics.
In this article, I share my journey of building an autonomous stack for cooperative automated driving, using Quanser's QCar as the platform. By leveraging various hardware and software components, including the Robot Operating System (ROS), the Intel RealSense D435i depth camera, an Inertial Measurement Unit (IMU), an RGB camera, and the NVIDIA Jetson TX2, I was able to develop a system capable of autonomously navigating dynamic environments while interacting with other vehicles.

The Quanser QCar
To start this project, I chose the Quanser QCar, a versatile robotic platform that provides a foundation for developing and testing autonomous systems and allows experimentation with different sensors and control strategies. Its modular design and compatibility with various hardware components made it an ideal choice for my project.
The Hardware Setup
Intel RealSense D435i: The Intel RealSense D435i camera provided essential depth-sensing capabilities, allowing the QCar to perceive its environment in 3D. This is critical for obstacle detection and collision avoidance.
Inertial Measurement Unit (IMU): The IMU provided precise data on the vehicle's orientation and movement. This data is essential for accurate control and localization, particularly when the vehicle is in motion.
RGB Camera: The RGB camera was used for capturing visual information from the QCar's perspective. This data was invaluable for tasks like lane detection, traffic sign recognition, and general scene understanding.
Jetson TX2: The Jetson TX2, a high-performance embedded computing platform, served as the brain of the autonomous stack. It was responsible for processing sensor data, running the control algorithms, and making real-time decisions.
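A common way to turn raw IMU readings into a usable orientation estimate is a complementary filter, which blends gyroscope integration (smooth but drifting) with the accelerometer's gravity-referenced angle (drift-free but noisy). The sketch below is illustrative rather than the exact filter used on the QCar, and the blend factor of 0.98 is an assumed tuning value:

```python
import math

def accel_to_pitch(ax, az):
    """Estimate pitch (rad) from accelerometer axes when the vehicle is near-static."""
    return math.atan2(ax, az)

def complementary_filter(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Blend integrated gyro rate (high-pass) with accelerometer pitch (low-pass)."""
    return alpha * (pitch_prev + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

# Example: vehicle tilted ~0.1 rad, gyro reading near zero, 100 Hz updates
pitch = 0.0
for _ in range(200):
    pitch = complementary_filter(pitch, gyro_rate=0.0,
                                 accel_pitch=accel_to_pitch(0.0998, 0.995),
                                 dt=0.01)
```

With a stationary gyro, the estimate converges toward the accelerometer's pitch over a couple of seconds, while transient accelerometer noise would be suppressed by the same blending.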
The Software Stack
ROS (Robot Operating System): ROS played a pivotal role in orchestrating the various software components of the autonomous stack. It enabled seamless communication between the sensors, control algorithms, and the QCar's actuators.
Machine Learning and Deep Learning: ML and DL models were used for several tasks, such as object detection, lane tracking, and decision making. These models were trained on large datasets to ensure the system's ability to handle diverse real-world scenarios.
Computer Vision: Computer vision techniques, including image processing and feature extraction, were applied to the RGB camera's feed. This allowed the QCar to understand its surroundings and make informed decisions based on visual cues.
General Robotics: General robotics principles were used to design and implement control algorithms. These algorithms enabled the QCar to follow desired trajectories, avoid obstacles, and react to dynamic traffic conditions.
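To make the trajectory-following idea concrete, here is a minimal pure-pursuit steering sketch, a standard geometric path tracker for car-like robots. This is an illustrative example, not the QCar's actual controller, and the 0.256 m wheelbase is an assumed small-scale value:

```python
import math

def pure_pursuit_steering(x, y, yaw, target_x, target_y, wheelbase):
    """Steering angle that arcs a car-like vehicle toward a lookahead point."""
    # Bearing to the target expressed in the vehicle frame
    alpha = math.atan2(target_y - y, target_x - x) - yaw
    lookahead = math.hypot(target_x - x, target_y - y)
    # Pure-pursuit curvature: kappa = 2*sin(alpha) / lookahead
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# Target straight ahead -> zero steering; target to the left -> positive steering
delta_straight = pure_pursuit_steering(0.0, 0.0, 0.0, 5.0, 0.0, wheelbase=0.256)
delta_left = pure_pursuit_steering(0.0, 0.0, 0.0, 5.0, 5.0, wheelbase=0.256)
```

In a full stack this would run inside a control loop, with the lookahead point sliding along the planned path as the vehicle advances.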
Cooperative Automated Driving
One of the most exciting aspects of this project was enabling cooperative automated driving. The QCar was designed to communicate with other vehicles and infrastructure using V2X (Vehicle-to-Everything) technology.
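Conceptually, V2X cooperation boils down to vehicles periodically broadcasting their state so neighbors can plan around it. Real deployments use standardized formats (e.g., CAM or BSM messages); the sketch below is only a toy JSON encoding with hypothetical field names to show the idea:

```python
import json
import time

def make_status_message(vehicle_id, x, y, speed, heading):
    """Serialize a minimal vehicle-state broadcast (illustrative fields only)."""
    return json.dumps({
        "id": vehicle_id,
        "timestamp": time.time(),
        "pose": {"x": x, "y": y, "heading": heading},
        "speed": speed,
    })

# A receiving vehicle decodes the broadcast and reads the sender's state
msg = make_status_message("qcar_1", 1.2, 3.4, speed=0.8, heading=0.0)
decoded = json.loads(msg)
```

In practice such messages would be sent over a wireless link (DSRC or C-V2X) at a fixed rate, and each receiver would fuse them into its local picture of surrounding traffic.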
During my internship, I explored various strategies to enhance the cooperative automated driving system. One of the key components I incorporated was the Constant Time Headway (CTH) policy. The CTH policy is a critical aspect of cooperative driving, as it helps maintain safe distances between vehicles and ensures smooth traffic flow.
The Constant Time Headway policy, often used in adaptive cruise control systems, calculates the appropriate following distance based on the speed of the lead vehicle. It allows the autonomous vehicle to maintain a safe gap that is proportional to the speed, reducing the risk of accidents and improving traffic efficiency. Integrating this policy into the QCar's control system was a significant step in ensuring safe and cooperative driving behavior.
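The CTH idea can be written down in a few lines: the desired gap grows linearly with ego speed (d_des = d0 + h·v), and the acceleration command corrects both the spacing error and the relative speed. The gains and headway below are assumed illustrative values, not the tuning used on the QCar:

```python
def cth_acceleration(gap, ego_speed, lead_speed,
                     headway=1.0, standstill_gap=0.3,
                     k_gap=0.5, k_speed=0.8):
    """Constant Time Headway spacing controller (illustrative gains).

    desired_gap = standstill_gap + headway * ego_speed, so the safe
    distance scales with how fast the ego vehicle is moving.
    """
    desired_gap = standstill_gap + headway * ego_speed
    spacing_error = gap - desired_gap          # positive -> too far behind
    relative_speed = lead_speed - ego_speed    # positive -> lead pulling away
    return k_gap * spacing_error + k_speed * relative_speed

# Ego is too close (0.5 m gap vs 1.3 m desired) and faster than the lead:
# the controller commands braking (negative acceleration)
a_cmd = cth_acceleration(gap=0.5, ego_speed=1.0, lead_speed=0.8)
```

Because the desired gap shrinks with speed, the same controller naturally allows tighter spacing in slow traffic and larger gaps at higher speed.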
Additionally, as my internship progressed, I delved into Reinforcement Learning (RL) approaches to further enhance the capabilities of the autonomous stack. RL offers the potential to train autonomous agents to make decisions based on interaction with their environment. While I made initial strides in this direction, my internship eventually came to an end, leaving ample room for further exploration and development.
RL-based approaches have the potential to optimize the QCar's decision-making process in complex, dynamic traffic scenarios. By training the vehicle through interactions with various traffic conditions, it can learn to adapt and make real-time decisions that are both safe and efficient.
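As a toy illustration of the RL direction, the sketch below runs tabular Q-learning on a heavily simplified gap-keeping task: three discretized gap states and three actions, with a reward for holding the "ok" gap. The dynamics, reward, and hyperparameters are all invented for the example and bear no relation to the real training setup:

```python
import random

def train_gap_policy(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy gap-keeping task.

    States: 0 = too close, 1 = ok, 2 = too far.
    Actions: 0 = brake, 1 = hold, 2 = accelerate.
    """
    rng = random.Random(seed)
    q = [[0.0] * 3 for _ in range(3)]
    for _ in range(episodes):
        s = rng.randrange(3)
        for _ in range(10):
            # Epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(3)
            else:
                a = max(range(3), key=lambda i: q[s][i])
            # Toy dynamics: braking opens the gap, accelerating closes it
            if a == 0:
                s_next = min(2, s + 1)
            elif a == 2:
                s_next = max(0, s - 1)
            else:
                s_next = s
            r = 1.0 if s_next == 1 else -1.0
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

q = train_gap_policy()
policy = [max(range(3), key=lambda i: q[s][i]) for s in range(3)]
```

Even this tiny example recovers the intuitive policy (brake when too close, hold when ok, accelerate when too far); scaling the same principle to continuous states and realistic traffic is where the open research lies.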
Cooperative automated driving is an evolving field that requires continuous research and development. The insights gained during my internship laid a solid foundation for future work, suggesting that the incorporation of advanced policies and machine learning techniques can lead to even more capable and sophisticated autonomous systems.

As I look back on this experience, it's clear that the pursuit of excellence in autonomous driving technology will continue to drive innovation in the automotive industry. The combination of hardware, software, and intelligent decision-making algorithms holds the key to creating safer, more efficient, and more collaborative autonomous vehicles.
Technical Presentation
For a more in-depth understanding of the project, including detailed technical specifications, code snippets, and results, I have prepared a presentation.