Ali Rehman

Developing an AI-Powered Autonomous Vehicle for Last Mile Delivery

Introduction

We developed a fully autonomous delivery vehicle (ADV) capable of navigating across the LUMS campus, from Visitor’s Parking to Free Parking, addressing last-mile delivery challenges with a sustainable, self-driving solution. The project focuses on:

  • Hardware Setup: Ensuring the ADV has robust and reliable components.
  • Localization and Mapping: Allowing the vehicle to navigate campus accurately.
  • Semantic Segmentation and Obstacle Detection: Enabling real-time identification of pathways and obstacles.
  • Sensor Fusion: Integrating data from various sensors for smooth and safe operation.

This ADV demonstrates how autonomous technology can streamline campus deliveries while contributing to eco-friendly, efficient logistics.

Fig. 1: General Block Diagram

Hardware Setup

 

The hardware setup of our autonomous delivery vehicle (ADV) was designed to support self-navigation and data processing for on-campus deliveries. Initially, the vehicle featured a basic chassis with motorized rear wheels and free-turning front wheels, powered by a 76V battery and controlled through a Jetson Nano. The core components of our ADV include:

  • NVIDIA Jetson Nano: This compact but powerful processor, with 128 CUDA cores and 4GB memory, handles the ADV’s data processing, enabling autonomous navigation.
  • Intel RealSense Depth Camera D455: Captures 3D visual data within a range of 0.6m to 6m, providing essential depth perception for obstacle detection.
  • Brainpower Motor Controller: Drives the wheels with forward and reverse capabilities, while also tracking speed through Hall sensor feedback to the Jetson Nano.
  • ESP (Electronic Stability Control): Acts as a master safety switch to halt operations when needed, generating motor control signals for precise vehicle movement.

To improve stability, we introduced a printed circuit board (PCB) to secure connections and filter data noise, reduced the vehicle's weight by removing an unnecessary frame, upgraded the front wheels and added suspension, and replaced the power system with a 24V battery and a buck converter. Additional sensors, including a 9-DoF IMU, a GPS module, a SLAMTEC Lidar, and an extra magnetometer, enhanced the ADV's environmental awareness and navigation accuracy, creating a robust platform for autonomous campus deliveries.

Fig. 2: Hardware Setup Block Diagram
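
To illustrate how depth data from the D455 can be consumed on the Jetson Nano, the following is a minimal sketch using Intel's pyrealsense2 library. The stream resolution and frame rate are illustrative assumptions, not values taken from the project.

```python
import numpy as np
import pyrealsense2 as rs

# Configure and start a depth stream (640x480 @ 30 FPS is an assumed setting).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipeline.start(config)

# Scale factor that converts raw 16-bit depth values to metres.
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    if depth_frame:
        depth_m = np.asanyarray(depth_frame.get_data()) * depth_scale
        print(f"closest valid return: {depth_m[depth_m > 0].min():.2f} m")
finally:
    pipeline.stop()
```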

Localization and Mapping

For mapping and localization, we created a GPS-based global map of the LUMS campus, with nodes representing specific GPS coordinates and edges weighted by the distances between them. This map, covering a 5 km radius, allows the ADV to identify its location by matching its current GPS coordinates to the nearest node on the map, giving an approximate position on campus. From this point, the ADV calculates a path to the destination by selecting a series of GPS waypoints. These global coordinates are converted into a local reference frame, enabling precise navigation. A PID controller then directs the ADV along this calculated path, dynamically adjusting its movements to stay aligned with the designated route.

Fig. 3: Map of LUMS Created Using GPS Coordinates
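
As a concrete illustration of the nearest-node matching and the PID correction described above, the sketch below uses a few made-up node coordinates and placeholder gains; it is not the project's actual map or controller code.

```python
import math

# Hypothetical campus nodes: name -> (latitude, longitude). Values are illustrative only.
CAMPUS_NODES = {
    "visitors_parking": (31.4700, 74.4090),
    "library_junction": (31.4710, 74.4105),
    "free_parking":     (31.4722, 74.4120),
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_node(lat, lon):
    """Return the map node closest to the current GPS fix."""
    return min(CAMPUS_NODES, key=lambda n: haversine_m(lat, lon, *CAMPUS_NODES[n]))

class PID:
    """Simple PID controller, here used to drive the heading error towards zero."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

if __name__ == "__main__":
    print(nearest_node(31.4701, 74.4092))              # -> visitors_parking
    heading_pid = PID(kp=1.2, ki=0.0, kd=0.1)          # gains are placeholders
    steering = heading_pid.step(error=0.15, dt=0.05)   # heading error in radians
    print(f"steering command: {steering:.3f}")
```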

Sensor Fusion

To ensure precise localization and smooth navigation, we implemented sensor fusion to integrate GPS data into the ADV's local frame. Using a magnetometer, we aligned the GPS data with the robot's orientation by referencing its position to true north. This transformation requires the robot's initial and current GPS coordinates, along with the rotation of the robot's frame relative to true north. Since GPS data can be low-frequency and prone to positional jumps, we combined it with high-frequency data from the odometry and IMU sensors using an Extended Kalman Filter (EKF). This EKF integration yielded high-frequency, filtered position data. We then applied a second EKF that merged this estimate with the low-frequency GPS information, resulting in a high-frequency GPS position estimate. This multi-layered sensor fusion provided a steady, accurate localization system that allowed the ADV to track its position with high fidelity on campus pathways.
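
The frame transformation described above can be sketched with a flat-earth (equirectangular) approximation: the offset from the initial fix is converted to metres and rotated by the initial heading relative to true north. The function below illustrates that idea under those assumptions; it is not the exact transform used on the vehicle.

```python
import math

def gps_to_local(lat0, lon0, lat, lon, heading0_rad):
    """
    Convert a GPS fix (lat, lon) into the robot's starting frame.

    lat0, lon0   : initial GPS fix defining the local origin
    heading0_rad : initial heading relative to true north (e.g. from the
                   magnetometer), measured clockwise, in radians
    Returns (x, y) in metres, x forward and y to the left, using a
    flat-earth approximation that is adequate over campus-scale distances.
    """
    r = 6371000.0  # Earth radius in metres
    d_north = math.radians(lat - lat0) * r
    d_east = math.radians(lon - lon0) * r * math.cos(math.radians(lat0))
    # Rotate the north/east offset into the robot's initial frame.
    x = d_north * math.cos(heading0_rad) + d_east * math.sin(heading0_rad)
    y = d_north * math.sin(heading0_rad) - d_east * math.cos(heading0_rad)
    return x, y

if __name__ == "__main__":
    # Illustrative coordinates only.
    x, y = gps_to_local(31.4700, 74.4090, 31.4703, 74.4094, math.radians(20))
    print(f"local position: x = {x:.1f} m, y = {y:.1f} m")
```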

Fig. 4: Sensor Fusion Diagram
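
The vehicle's fusion relies on full EKFs; the toy one-dimensional linear filter below only illustrates the underlying blending idea, with a high-rate prediction step driven by odometry and a low-rate update step driven by GPS-derived positions. The noise values and rates are assumptions made for the example.

```python
class Simple1DFusion:
    """Toy 1-D position filter: odometry drives prediction, GPS drives updates."""
    def __init__(self, q=0.05, r=4.0):
        self.x = 0.0   # position estimate (m)
        self.p = 1.0   # estimate variance
        self.q = q     # process noise added per prediction step
        self.r = r     # GPS measurement noise variance

    def predict(self, velocity, dt):
        """High-frequency step driven by odometry/IMU velocity."""
        self.x += velocity * dt
        self.p += self.q

    def update_gps(self, z):
        """Low-frequency correction from a GPS-derived position."""
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

if __name__ == "__main__":
    f = Simple1DFusion()
    for step in range(60):                  # odometry at an assumed 20 Hz
        f.predict(velocity=1.0, dt=0.05)
        if step % 20 == 19:                 # GPS arriving at roughly 1 Hz
            f.update_gps(z=(step + 1) * 0.05 + 0.8)  # noisy GPS fix
    print(f"fused position: {f.x:.2f} m")
```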

Navigation and Semantic Segmentation

Navigation in our ADV system relies on two key components: global and local navigation, which collectively enable safe and efficient movement through the environment.

Global Navigation

  • Focuses on determining the overall route from start to destination.
  • Provides a sequence of GPS waypoints that guide the ADV through the campus.
  • Utilizes the ROS (Robot Operating System) navigation stack to find the shortest path.
  • Implements algorithms such as Dijkstra, A*, BFS, and RRT to compute efficient routes (a small Dijkstra sketch follows this list).
  • Ensures timely delivery and reduces unnecessary detours.
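
As a minimal illustration of the shortest-path computation, the sketch below runs Dijkstra's algorithm over a small hypothetical waypoint graph; the node names and edge distances are invented for the example and do not come from the campus map.

```python
import heapq

# Hypothetical waypoint graph: node -> {neighbour: distance in metres}.
GRAPH = {
    "visitors_parking": {"library_junction": 180.0, "sports_complex": 260.0},
    "library_junction": {"visitors_parking": 180.0, "free_parking": 220.0},
    "sports_complex":   {"visitors_parking": 260.0, "free_parking": 310.0},
    "free_parking":     {"library_junction": 220.0, "sports_complex": 310.0},
}

def dijkstra(graph, start, goal):
    """Return (total_distance, path) for the shortest route from start to goal."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, edge in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (dist + edge, neighbour, path + [neighbour]))
    return float("inf"), []

if __name__ == "__main__":
    dist, path = dijkstra(GRAPH, "visitors_parking", "free_parking")
    print(f"{dist:.0f} m via {' -> '.join(path)}")
```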

Local Navigation

  • Manages immediate obstacles and real-time route adjustments.
  • Uses semantic segmentation to identify road surfaces and obstacles.
  • Incorporates obstacle detection to avoid collisions, ensuring safe navigation (a minimal depth-based check is sketched below).
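
As a rough illustration of the obstacle check (not the project's actual pipeline), the sketch below scans the lower-central corridor of a depth image for anything closer than a stopping distance; the corridor size and threshold are assumptions.

```python
import numpy as np

def obstacle_ahead(depth_m, stop_distance_m=1.5, corridor_fraction=0.3):
    """
    depth_m: HxW array of depth values in metres (0 means no valid reading).
    Returns True if any valid reading in the lower-central corridor of the
    image is closer than stop_distance_m.
    """
    h, w = depth_m.shape
    half = int(w * corridor_fraction / 2)
    corridor = depth_m[h // 3 :, w // 2 - half : w // 2 + half]
    valid = corridor[corridor > 0]
    return valid.size > 0 and float(valid.min()) < stop_distance_m

if __name__ == "__main__":
    fake_depth = np.full((480, 640), 4.0)   # everything 4 m away
    fake_depth[300:330, 310:330] = 0.9      # simulated obstacle at 0.9 m
    print(obstacle_ahead(fake_depth))       # -> True
```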

Semantic Segmentation

  • Divides images into distinct segments, assigning each segment a label that represents a specific object class (e.g., road, car, pedestrian).
  • Employs Convolutional Neural Networks (CNNs) to classify each pixel, learning to identify patterns and features associated with different object classes during training (see the inference sketch after this list).
  • Initially aimed to use models pretrained on Cityscapes, a dataset with over 5000 annotated frames across 30 classes.
  • Faced challenges when these models were applied to LUMS roads, which lack features such as lane markings and traffic lights.
  • As a solution, we created a custom LUMS Road Dataset to improve model performance for our specific environment.
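
To make the per-pixel classification concrete, here is a minimal inference sketch using a torchvision DeepLabV3 model as a stand-in; it is not the project's Cityscapes-trained or LUMS-finetuned network, and the weights and preprocessing shown are generic assumptions.

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Stand-in model with generic pretrained weights, not the project's own network.
model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def segment(image_path):
    """Return an HxW tensor of per-pixel class indices for the given image."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)              # shape: [1, 3, H, W]
    with torch.no_grad():
        logits = model(batch)["out"]                    # shape: [1, classes, H, W]
    return logits.argmax(dim=1).squeeze(0)              # per-pixel class labels

if __name__ == "__main__":
    labels = segment("lums_road.jpg")   # hypothetical input image
    print(labels.shape, labels.unique())
```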

Fig. 5: Working of the Segmentation Model
