Developing an AI-Powered Autonomous Vehicle for Last Mile Delivery
Introduction
We developed a fully autonomous delivery vehicle (ADV) capable of navigating across the LUMS campus, from Visitor’s Parking to Free Parking, addressing last-mile delivery challenges with a sustainable, self-driving solution. The project focuses on:
- Hardware Setup: Ensuring the ADV has robust and reliable components.
- Localization and Mapping: Allowing the vehicle to navigate campus accurately.
- Semantic Segmentation and Obstacle Detection: Enabling real-time identification of pathways and obstacles.
- Sensor Fusion: Integrating data from various sensors for smooth and safe operation.
This ADV demonstrates how autonomous technology can streamline campus deliveries while contributing to eco-friendly, efficient logistics.
Fig. 1 : General Block Diagram
Hardware Setup
The hardware setup of our autonomous delivery vehicle (ADV) was designed to support self-navigation and data processing for on-campus deliveries. Initially, the vehicle featured a basic chassis with motorized rear wheels and free-turning front wheels, powered by a 76V battery and controlled through a Jetson Nano. The core components of our ADV include:
- NVIDIA Jetson Nano: This compact but powerful processor, with 128 CUDA cores and 4GB memory, handles the ADV’s data processing, enabling autonomous navigation.
- Intel RealSense Depth Camera D455: Captures 3D visual data within a range of 0.6m to 6m, providing essential depth perception for obstacle detection.
- Brainpower Motor Controller: Drives the wheels with forward and reverse capabilities, while also tracking speed through Hall-sensor feedback to the Jetson Nano (see the wheel-speed sketch at the end of this section).
- ESP (Electronic Stability Control): Acts as a master safety switch to halt operations when needed and generates the motor control signals for precise vehicle movement.
To improve stability, we introduced a printed circuit board (PCB) to secure connections and filter out signal noise, reduced vehicle weight by removing an unnecessary frame, upgraded the front wheels and added suspension, and replaced the power system with a 24V battery and a buck converter. Additional sensors, including a 9DoF IMU, a GPS module, a SLAMTEC Lidar, and an extra magnetometer, enhanced the ADV's environmental awareness and navigation accuracy, creating a robust platform for autonomous campus deliveries.
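As a simple illustration of how the Hall-sensor feedback mentioned above can be turned into a speed estimate, the sketch below converts a pulse count observed over a short time window into linear wheel speed. The pulses-per-revolution and wheel-diameter values are placeholder assumptions, not measured parameters of our ADV.

```python
# Hypothetical sketch: estimating wheel speed from Hall-sensor pulse counts.
# PULSES_PER_REV and WHEEL_DIAMETER_M are assumed values, not measured from our ADV.
import math

PULSES_PER_REV = 90        # assumed Hall pulses per wheel revolution
WHEEL_DIAMETER_M = 0.25    # assumed wheel diameter in metres

def wheel_speed(pulse_count: int, dt: float) -> float:
    """Return linear wheel speed in m/s from pulses counted over dt seconds."""
    revolutions = pulse_count / PULSES_PER_REV
    distance = revolutions * math.pi * WHEEL_DIAMETER_M
    return distance / dt

if __name__ == "__main__":
    # Example: 45 pulses observed in a 0.1 s window
    print(f"{wheel_speed(45, 0.1):.2f} m/s")
```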
Fig. 2 : Hardware Setup Block Diagram
Localization and Mapping
Fig. 3 : Map of LUMS Created Using GPS Coordinates
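To give a concrete sense of how GPS coordinates can be turned into a flat campus map like the one in Fig. 3, the sketch below projects latitude/longitude onto local east/north metres using an equirectangular approximation, which is accurate enough over campus-scale distances. The reference origin is a placeholder, not a surveyed point from our map.

```python
# Minimal sketch: projecting GPS waypoints onto a flat local frame for mapping.
import math

EARTH_RADIUS_M = 6371000.0
ORIGIN_LAT, ORIGIN_LON = 31.4700, 74.4100   # placeholder reference point, not a surveyed value

def gps_to_local(lat: float, lon: float) -> tuple[float, float]:
    """Convert (lat, lon) in degrees to (x, y) metres east/north of the origin."""
    d_lat = math.radians(lat - ORIGIN_LAT)
    d_lon = math.radians(lon - ORIGIN_LON)
    x = EARTH_RADIUS_M * d_lon * math.cos(math.radians(ORIGIN_LAT))  # east
    y = EARTH_RADIUS_M * d_lat                                       # north
    return x, y

# Example: two points roughly 100 m apart east-west (illustrative coordinates)
print(gps_to_local(31.4700, 74.4111))
```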
Sensor Fusion
Fig. 4 : Sensor Fusion Diagram
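As a minimal sketch of the fusion idea summarized in Fig. 4, the example below runs a one-dimensional Kalman filter in which the IMU's forward acceleration drives the prediction step and each GPS fix corrects the position estimate. The noise covariances and update rate are illustrative assumptions, not tuned values from our vehicle, and the full system fuses more sensors (magnetometer, Lidar) than shown here.

```python
# A 1-D Kalman-filter sketch of GPS + IMU fusion; all noise values are illustrative assumptions.
import numpy as np

dt = 0.1                                  # assumed update period (s)
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition for [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])       # how IMU acceleration enters the state
H = np.array([[1.0, 0.0]])                # GPS measures position only
Q = np.eye(2) * 0.01                      # assumed process noise
R = np.array([[4.0]])                     # assumed GPS variance (~2 m std dev)

x = np.zeros((2, 1))                      # state estimate [position; velocity]
P = np.eye(2)                             # state covariance

def predict(accel: float) -> None:
    """Propagate the state using IMU acceleration."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def update(gps_pos: float) -> None:
    """Correct the state with a GPS position fix."""
    global x, P
    y = np.array([[gps_pos]]) - H @ x      # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# Example: one IMU-driven prediction followed by a GPS correction at 12.3 m
predict(accel=0.5)
update(gps_pos=12.3)
print(x[0, 0], x[1, 0])                    # fused position and velocity estimates
```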
Navigation and Semantic Segmentation
Navigation in our ADV system relies on two key components: global and local navigation, which collectively enable safe and efficient movement through the environment.
Global Navigation
- Focuses on determining the overall route from start to destination.
- Provides a sequence of GPS waypoints that guide the ADV through the campus.
- Utilizes the ROS (Robot Operating System) navigation stack to find the shortest path.
- Implements algorithms such as Dijkstra, A*, BFS, and RRT to compute efficient routes (a minimal Dijkstra sketch follows this list).
- Ensures timely delivery and reduces unnecessary detours.
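A minimal sketch of the shortest-path computation is shown below, assuming the campus is represented as a small graph of named GPS waypoints with edge lengths in metres. The waypoint names and distances are illustrative, and the ROS navigation stack performs the equivalent search internally.

```python
# Dijkstra's algorithm over an illustrative campus waypoint graph.
import heapq

def dijkstra(graph: dict, start: str, goal: str) -> list[str]:
    """Return the shortest waypoint sequence from start to goal."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, dist in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + dist, neighbour, path + [neighbour]))
    return []

# Illustrative waypoint graph (edge weights in metres, assumed values)
campus = {
    "visitors_parking": {"main_gate": 120.0, "library": 300.0},
    "main_gate": {"visitors_parking": 120.0, "library": 150.0, "free_parking": 400.0},
    "library": {"visitors_parking": 300.0, "main_gate": 150.0, "free_parking": 200.0},
    "free_parking": {"main_gate": 400.0, "library": 200.0},
}

print(dijkstra(campus, "visitors_parking", "free_parking"))
```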
Local Navigation
- Manages immediate obstacles and real-time route adjustments.
- Uses semantic segmentation to identify road surfaces and obstacles.
- Incorporates obstacle detection to avoid collisions, ensuring safe navigation (a minimal stop-check sketch follows this list).
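The sketch below illustrates one simple form such an obstacle check can take, assuming a depth image in metres (e.g., obtained through the RealSense SDK): if any valid reading in a central corridor of the frame falls below a safety threshold, the vehicle is commanded to stop. The region bounds and threshold are illustrative assumptions.

```python
# Minimal stop-check over a depth image; thresholds and region bounds are assumptions.
import numpy as np

STOP_DISTANCE_M = 1.0   # assumed minimum safe clearance

def obstacle_ahead(depth_m: np.ndarray) -> bool:
    """Return True if any valid depth reading in the central corridor is too close."""
    h, w = depth_m.shape
    corridor = depth_m[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3]   # central region of the frame
    valid = corridor[corridor > 0.0]                               # zero means no depth data
    return valid.size > 0 and float(valid.min()) < STOP_DISTANCE_M

# Example with a synthetic depth frame (everything 5 m away, one patch at 0.8 m)
frame = np.full((480, 848), 5.0, dtype=np.float32)
frame[200:240, 400:440] = 0.8
print(obstacle_ahead(frame))   # True -> command the motors to stop
```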
Semantic Segmentation
- Divides images into distinct segments, assigning each segment a label that represents a specific object class (e.g., road, car, pedestrian).
- Employs Convolutional Neural Networks (CNNs) to classify each pixel, learning to identify patterns and features associated with different object classes during training (a minimal inference sketch follows this list).
- Initially aimed to use models pretrained on Cityscapes, a dataset with over 5,000 finely annotated frames spanning 30 classes.
- These pretrained models struggled on LUMS roads, which lack features such as lane markings and traffic lights.
- As a solution, we created a custom LUMS Road Dataset to improve model performance for our specific environment.
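The sketch below shows the per-pixel inference mechanics using an off-the-shelf torchvision DeepLabV3 model. Its pretrained weights target COCO/VOC classes rather than Cityscapes or our LUMS Road Dataset, so it only illustrates how a segmentation CNN is run; a model fine-tuned on the custom dataset would be used in the same way. The image path is a placeholder.

```python
# Per-pixel classification with a CNN segmentation model (illustrative, not our trained model).
import torch
from PIL import Image
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()                     # resize + normalize expected by the weights

image = Image.open("frame_from_realsense.jpg").convert("RGB")   # placeholder path
batch = preprocess(image).unsqueeze(0)                # shape (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)["out"]                      # shape (1, num_classes, H, W)
mask = logits.argmax(dim=1).squeeze(0)                # per-pixel class labels, shape (H, W)
print(mask.shape, mask.unique())
```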