📄 Read the full preprint on arXiv:
Fully Autonomous Navigation with Real-Time Visual Intelligence and Emergency Handling
Meet Veg, a self-reliant quadcopter that flies without GPS, detects people and faces in real-time, and knows how to land safely even when a motor fails. Designed for autonomous surveillance, Veg combines SLAM-based localization, LQR flight control, and embedded AI vision modules—all running on a Raspberry Pi. This drone isn’t just a prototype; it’s a working, field-ready platform for GPS-denied environments like indoor layouts, urban alleys, or industrial zones.
Why Build Veg? Tackling Real-World Drone Challenges
Modern drones are expected to do more than just hover or follow GPS coordinates. Real deployment means flying through complex spaces, staying stable during failures, and understanding the scene—all without external help. Veg was created to solve three persistent challenges in autonomous drone deployment:
- Navigation without GPS: Indoor or signal-jammed zones require visual SLAM, not satellite guidance.
- Control During Failures: A single rotor failure shouldn’t crash the mission. Veg includes real-time fault detection and recovery.
- Onboard Visual Intelligence: Surveillance means detecting people and vehicles, and even identifying faces—onboard, not in the cloud.
Veg is powered by a dual-loop control system. A Linear Quadratic Regulator (LQR) maintains flight stability, while a PD-based outer loop handles trajectory. ORB-SLAM3 provides full 6-DoF visual-inertial odometry for map-based navigation. The drone also runs object detection and face recognition using lightweight neural and PCA models on a Raspberry Pi 4.
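To make the inner loop concrete, here is a minimal sketch of how a discrete-time LQR gain can be computed for a linearized single-axis attitude model. The A, B, Q, and R matrices below are illustrative placeholders, not the values used on Veg.

```python
# Minimal discrete-time LQR sketch (illustrative matrices, not Veg's actual model).
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy linearized pitch dynamics: state x = [angle, angular_rate], input u = torque.
dt = 0.01                      # 100 Hz inner loop, matching the attitude controller rate
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt]])

# Weighting matrices (placeholders): penalize angle error more than control effort.
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# Solve the discrete algebraic Riccati equation and form the feedback gain K.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)

# Control law: u = -K x, applied every control tick.
x = np.array([[0.2], [0.0]])   # e.g., 0.2 rad pitch error, zero rate
u = -K @ x
print("LQR gain K:", K, "control u:", u)
```

In practice the gain is computed offline and only the multiplication u = -Kx runs on the microcontroller, which is what keeps the 100 Hz loop cheap.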

Block diagram of Veg’s architecture showing SLAM, LQR control, object detection, face recognition, fault detection, and emergency handling modules integrated via ROS.
What Makes Veg Different? Core Features at a Glance
Veg isn’t just a testbed—it’s a working autonomous platform built with cost-efficiency, resilience, and real-world readiness in mind. Below are its standout features:
- LQR-Based Flight Control: Veg uses a discrete-time Linear Quadratic Regulator for fast and smooth stabilization. It significantly outperforms PID and Feedback Linearization controllers in terms of rise time and damping.
- GPS-Free Navigation with ORB-SLAM3: The drone estimates its position in 3D space using monocular-inertial SLAM, enabling real-time path tracking in GPS-denied zones.
- Onboard Object Detection and Face Recognition: It runs a CNN-based object detector and a PCA-based face recognition module on a Raspberry Pi 4, achieving over 90% precision—all without external computing.
- Rotor Failure Detection and Emergency Landing: A real-time FDI module detects rotor anomalies and reroutes the drone to a safe landing spot using a map-based planner.
- Modular ROS Architecture: Every subsystem—SLAM, control, planning, detection, and FDI—runs as a ROS node. The platform is open to upgrades, custom modules, and logging.
This mix of autonomy, safety, and vision in a self-contained platform makes Veg well-suited for applications in surveillance, inspection, or research.
| Controller | Rise Time (s) | Overshoot (%) | Settling Time (s) |
| --- | --- | --- | --- |
| PID | 2.8 | ~0 | 3 |
| FBL + PD | 1.1 | 5.8 | 1.5 |
| LQR | 0.06 | 0.5 | 0.1 |
Figure 2: Controller Performance Comparison Table
Performance comparison of PID, FBL+PD, and LQR controllers showing LQR’s superior response times and minimal overshoot for roll and pitch stabilization.
Inside the Build: Hardware that Runs Veg
Veg is built on a lightweight quadrotor frame using accessible, open-source hardware. The design balances performance with simplicity, allowing all navigation, control, and perception tasks to run onboard.
Main Components:
- Raspberry Pi 4 (4GB): Acts as the onboard brain, running SLAM, object detection, and face recognition.
- Arduino Nano: Handles low-level attitude control using LQR or PID, communicating with the Pi over UART.
- 5MP Pi Camera: Feeds both SLAM and vision modules with 640×480 live video.
- MPU-9250 IMU: Provides 9-DoF motion data fused with SLAM for pose estimation.
- ESCs and Brushless Motors: Provide stable thrust and directional control.
- Power: A 3S 2200mAh LiPo battery powers the entire system.
The airframe supports modular upgrades, such as a GSM module for telemetry or GPS for fallback. The camera features a zoom lens for better long-range detection.
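The Pi–Arduino UART link mentioned in the component list above can be illustrated with a short pyserial sketch that streams attitude setpoints to the flight controller. The port, baud rate, and packet format here are illustrative assumptions, not Veg's actual protocol.

```python
# Sketch of the Pi -> Arduino serial link: send attitude setpoints as framed lines.
# Port, baud rate, and packet format are illustrative assumptions, not Veg's protocol.
import serial  # pyserial
import time

ser = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=0.05)

def send_setpoint(roll, pitch, yaw_rate, thrust):
    """Send one comma-separated setpoint frame terminated by a newline."""
    frame = "SP,%.3f,%.3f,%.3f,%.3f\n" % (roll, pitch, yaw_rate, thrust)
    ser.write(frame.encode("ascii"))

while True:
    send_setpoint(0.0, 0.05, 0.0, 0.55)   # small forward pitch, hover thrust
    ack = ser.readline().decode("ascii", errors="ignore").strip()
    if ack:
        print("arduino:", ack)
    time.sleep(0.04)                       # ~25 Hz outer-loop rate
```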

Top-down view of Veg showing Raspberry Pi, ESCs, IMU, camera, and other key hardware components mounted on the quadrotor frame.
How It All Runs: ROS-Based Software Architecture
All of Veg’s intelligence runs onboard using a ROS (Robot Operating System) stack that manages data flow, module scheduling, and process communication in real-time.
Subsystem Breakdown:
- Attitude Control (100 Hz): Runs on Arduino using LQR, stabilizing roll, pitch, and yaw.
- Trajectory Planning (20–30 Hz): PD-based controller on Raspberry Pi computes desired pitch/roll angles from waypoint errors.
- SLAM (10 Hz): ORB-SLAM3 outputs a full 6-DoF pose by fusing camera and IMU data.
- Object Detection (~2 FPS): CNN-based detector scans live video for people, cars, and objects of interest.
- Face Recognition (On Trigger): PCA-based classifier activates when a face is detected, and matches it against known profiles.
- FDI Monitoring (10 Hz): Monitors PWM outputs and attitude feedback to detect motor faults in real-time.
ROS nodes are modular and synchronized using timestamps. Critical threads like SLAM and control are given priority CPU access, ensuring stability even when vision modules experience load spikes.
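As an illustration of how the PD outer loop from the subsystem breakdown could be wired into this ROS stack, here is a hedged sketch of a trajectory node in rospy. The topic names, message choices, and gains are assumptions for illustration, not Veg's actual interfaces, and frame transforms (e.g., yaw rotation of the error vector) are omitted.

```python
#!/usr/bin/env python3
# Sketch of a PD outer-loop node: waypoint error -> desired pitch/roll.
# Topic names, message types, and gains are illustrative assumptions.
import rospy
from geometry_msgs.msg import PoseStamped, Vector3

KP, KD = 0.8, 0.4          # placeholder PD gains
MAX_TILT = 0.35            # rad, clamp on commanded tilt

class PDTrajectory:
    def __init__(self):
        self.goal = (0.0, 0.0)
        self.prev_err = (0.0, 0.0)
        self.cmd_pub = rospy.Publisher("/veg/attitude_setpoint", Vector3, queue_size=1)
        rospy.Subscriber("/veg/slam_pose", PoseStamped, self.on_pose)
        rospy.Subscriber("/veg/waypoint", PoseStamped, self.on_goal)

    def on_goal(self, msg):
        self.goal = (msg.pose.position.x, msg.pose.position.y)

    def on_pose(self, msg):
        # Position error in the horizontal plane (yaw compensation omitted for brevity).
        ex = self.goal[0] - msg.pose.position.x
        ey = self.goal[1] - msg.pose.position.y
        dt = 1.0 / 25.0                          # ~25 Hz planner rate
        dex = (ex - self.prev_err[0]) / dt
        dey = (ey - self.prev_err[1]) / dt
        self.prev_err = (ex, ey)
        pitch = max(-MAX_TILT, min(MAX_TILT, KP * ex + KD * dex))
        roll = max(-MAX_TILT, min(MAX_TILT, -(KP * ey + KD * dey)))
        self.cmd_pub.publish(Vector3(x=roll, y=pitch, z=0.0))

if __name__ == "__main__":
    rospy.init_node("pd_trajectory")
    PDTrajectory()
    rospy.spin()
```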

Veg’s ROS-based software architecture shows data flow between core modules: IMU, Camera, SLAM, Object Detection, FDI, and Control.
Navigating Without GPS: SLAM and Map-Based Planning
One of Veg’s defining traits is its ability to fly in spaces where GPS simply doesn’t work—indoors, in basements, narrow alleys, or industrial zones. This is made possible through ORB-SLAM3, a visual-inertial SLAM system that runs directly on the Raspberry Pi. It estimates the drone’s full 6-DoF position by fusing live camera feed with IMU data, building a sparse map of the environment as it flies.
As Veg moves, it recognizes features, tracks motion, and refines its map continuously. If it re-enters a previously visited area, it corrects any accumulated drift using loop closure. Once enough of the surroundings are mapped, the drone projects this data into a 2D occupancy grid—essentially marking out what’s free and what’s blocked.
Using this grid, Veg plans its path with Dijkstra's algorithm. From there, a smooth sequence of waypoints is generated and passed to the trajectory controller, which guides the drone through the space while avoiding mapped obstacles.
In simple terms: Veg sees, understands, plans, and flies—all without any satellite help.
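As a rough sketch of the planning step, the snippet below runs Dijkstra's algorithm over a small binary occupancy grid. The grid, 4-connected neighborhood, and unit costs are toy assumptions, not Veg's actual map format.

```python
# Dijkstra over a toy 2D occupancy grid (0 = free, 1 = blocked).
import heapq

def dijkstra(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connected grid
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the waypoint sequence from goal back to start.
    path, cell = [], goal
    while cell in prev or cell == start:
        path.append(cell)
        if cell == start:
            break
        cell = prev[cell]
    return list(reversed(path))

grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(dijkstra(grid, (0, 0), (2, 3)))   # list of grid cells from start to goal
```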

SLAM-generated occupancy map with Dijkstra-planned path showing the drone’s trajectory through a maze-like indoor environment.
Built to Fail Gracefully: Real-Time Fault Detection and Emergency Landing
Drones aren’t perfect. A sudden motor fault mid-flight can turn a routine mission into a crash. Veg is built with that in mind. It includes a self-monitoring system that watches its own motors and flight behavior—constantly.
If a rotor stops working or loses thrust, the drone notices. It compares how much it’s asking from a motor (through PWM signals) to how the drone is actually moving. If the response doesn’t match, Veg assumes something’s wrong. A fault flag is raised.
At that moment, it doesn’t panic. The system stops following the original path and instead opens its onboard SLAM map to search for the nearest safe spot to land. It reroutes, adjusts the remaining motors to maintain as much stability as possible, and starts descending—controlled and steady. Even with one rotor completely offline, the drone can bring itself down with just a small offset from the target.
What’s running behind the scenes is a combination of careful controller tuning and preloaded map-based heuristics. No fancy AI, no remote override. Just good design.
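The core idea can be sketched as a residual check: compare the response implied by the commanded PWM with the measured response, and raise a fault flag when the mismatch persists. The expected-response model, thresholds, and signal names below are illustrative assumptions, not the actual FDI implementation.

```python
# Simplified rotor fault check: commanded effort vs. measured response.
# Thresholds and the expected-response model are illustrative assumptions.
FAULT_THRESHOLD = 0.35     # normalized mismatch that counts as anomalous
FAULT_TICKS = 5            # consecutive anomalous samples (at 10 Hz) before flagging

class RotorFDI:
    def __init__(self, n_motors=4):
        self.counters = [0] * n_motors

    def expected_response(self, pwm_cmd):
        # Crude static map from normalized PWM (0..1) to expected thrust contribution.
        return 2.0 * pwm_cmd

    def update(self, pwm_cmds, measured_responses):
        """Per-motor lists; returns the index of a faulty motor, or None."""
        for i, (cmd, meas) in enumerate(zip(pwm_cmds, measured_responses)):
            residual = abs(self.expected_response(cmd) - meas)
            self.counters[i] = self.counters[i] + 1 if residual > FAULT_THRESHOLD else 0
            if self.counters[i] >= FAULT_TICKS:
                return i          # trigger the emergency-landing reroute for this motor
        return None

fdi = RotorFDI()
for _ in range(6):                # motor 2 produces no thrust despite a full command
    fault = fdi.update([0.6, 0.6, 0.9, 0.6], [1.2, 1.2, 0.0, 1.2])
print("faulty motor:", fault)
```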

Simulated maze scenario where Veg detects a motor failure mid-flight and diverts from its original mission to land safely using onboard SLAM data.
Eyes in the Sky: Onboard Vision and Surveillance AI
Veg isn’t just about robust navigation—it’s also equipped with a real-time vision system that lets it “see” as it flies. Running entirely on the Raspberry Pi 4, the drone uses its camera to detect objects and recognize faces without any help from the cloud. In flight, Veg continuously scans its surroundings, using a lightweight deep-learning model to spot objects like people, cars, and backpacks. When it detects something of interest, the system draws bounding boxes around the objects and displays confidence scores, all in real-time.
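As an illustration of what such an onboard detection loop looks like, here is a hedged sketch using OpenCV's DNN module with an SSD-style Caffe model. The model files, class list, and thresholds are placeholders, not necessarily the network Veg runs.

```python
# Sketch of a lightweight onboard detection loop with OpenCV's DNN module.
# Model files, class names, and thresholds are placeholders, not Veg's exact setup.
import cv2

CLASSES = ["background", "person", "car", "backpack"]        # illustrative subset
net = cv2.dnn.readNetFromCaffe("detector.prototxt", "detector.caffemodel")
cap = cv2.VideoCapture(0)                                     # Pi camera via V4L2

while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 scalefactor=0.007843, size=(300, 300), mean=127.5)
    net.setInput(blob)
    detections = net.forward()                                # shape: [1, 1, N, 7]
    h, w = frame.shape[:2]
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence < 0.5:
            continue
        cls = int(detections[0, 0, i, 1])
        x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
        label = "%s: %.0f%%" % (CLASSES[cls % len(CLASSES)], confidence * 100)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("veg-detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```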
In parallel, the drone is ready to switch modes when it encounters a face. A dedicated face recognition routine kicks in, processing images to match faces against a known set of profiles. This module, built with classical techniques and fine-tuned on a modest dataset, achieves remarkable accuracy even on the limited resources of a Raspberry Pi.
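To give a feel for that classical approach, here is a minimal eigenfaces-style sketch using PCA plus nearest-neighbor matching. The dataset layout, image size, and match threshold are assumptions for illustration, not the exact pipeline on the drone.

```python
# Minimal eigenfaces-style sketch: PCA projection + nearest-neighbor matching.
# Dataset layout, image size, and the match threshold are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

IMG_SIZE = (64, 64)        # grayscale crops, flattened to 4096-dim vectors

def train(face_vectors, labels, n_components=20):
    """face_vectors: (n_samples, 4096) array of flattened grayscale face crops."""
    pca = PCA(n_components=n_components, whiten=True).fit(face_vectors)
    embeddings = pca.transform(face_vectors)
    return pca, embeddings, np.asarray(labels)

def recognize(pca, embeddings, labels, face_vector, max_dist=25.0):
    """Project a new face and return the closest known identity, or None."""
    query = pca.transform(face_vector.reshape(1, -1))
    dists = np.linalg.norm(embeddings - query, axis=1)
    best = int(np.argmin(dists))
    return labels[best] if dists[best] < max_dist else None

# Toy usage with random data standing in for real face crops.
rng = np.random.default_rng(0)
train_faces = rng.random((30, IMG_SIZE[0] * IMG_SIZE[1]))
train_labels = ["person_%d" % (i % 3) for i in range(30)]
pca, emb, lab = train(train_faces, train_labels)
print(recognize(pca, emb, lab, train_faces[0]))
```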
Thanks to these onboard capabilities, Veg can perform surveillance tasks independently. Whether it’s monitoring a crowded event or securing an industrial facility, the drone’s vision system provides timely and actionable data without needing external processing power.



Figure 7: Onboard Vision Output Sample
Live video feed from Veg during flight, showing detected objects and recognized faces with annotated bounding boxes and confidence scores.
Tested Where It Matters: Performance Under Pressure
Building a drone is one thing. Watching it succeed in real environments—maze-like layouts, motor fault conditions, and object-heavy scenes—is what really proves its design. Veg was tested through a mix of real-time simulations and actual onboard runs. It didn’t just fly—it adapted.
During indoor navigation trials, Veg used its SLAM system to build up a map and reach its destination without GPS. The planned path, generated using Dijkstra’s algorithm, closely matched the real trajectory, with average deviations under 35 centimeters. The drone handled tight corners and relocalized itself when visual tracking briefly dropped.
In fault injection tests, a rotor was intentionally disabled mid-flight. Within seconds, the onboard fault detection module triggered an emergency landing. The drone rerouted, found the closest safe zone, and descended with minimal lateral drift—landing within 0.65 meters of the intended point.
The vision modules were also stress-tested in varied lighting conditions and real-world scenes. People walking, vehicles in motion, even partial occlusions—Veg held its ground. Object detection ran at around 2 FPS, and face recognition clocked in at 120 milliseconds per face, keeping pace with the drone’s speed and field of view.

Veg’s path through a maze layout showing planned vs. actual trajectory, along with visual output from the onboard object and face detection systems under real conditions.
What’s Next for Veg?
Veg started as a student capstone project, but it’s grown into a fully autonomous surveillance drone with real-world features—GPS-free navigation, onboard vision, fault detection, and emergency landing. All of this runs on low-cost hardware with no dependency on remote computation. That’s not just efficient; it’s field-ready.
Still, this is just the beginning. Future versions of Veg could include stereo or depth cameras for real-time 3D obstacle avoidance. Adding a Coral TPU or Jetson Nano could push object detection speeds from 2 FPS to over 15, allowing faster flight and more detailed analysis. Control logic may also evolve toward adaptive or MPC-based strategies that redistribute control effort in mid-air when faults occur.
There’s even room to scale. Imagine a fleet of Veg drones, all mapping, coordinating, and communicating to patrol large areas without human input. That’s the roadmap.
For now, Veg stands as proof that meaningful autonomy doesn’t have to be expensive. It just has to be thoughtful.

The development timeline of Veg from initial prototype to future goals—highlighting SLAM navigation, embedded vision, fault recovery, and upcoming hardware upgrades.
🔗 GitHub References
- Project Repository: https://github.com/AbhishekTyagi404/veg-slam-drone
- Annotated Build Images and Diagrams: https://github.com/AbhishekTyagi404/veg-slam-drone/tree/main/images
- Codebase for SLAM, Control, and Vision Modules: https://github.com/AbhishekTyagi404/veg-slam-drone/tree/main/src
👤 About the Author
Abhishek Tyagi is a Data & AI Engineer at Deloitte with a background in Mechatronics Engineering and a passion for robotics, embedded systems, and autonomous AI. He is the founder of Kritrim Intelligence, an independent platform for research and innovation in robotics, machine learning, and real-time edge AI systems. Abhishek has worked with organizations like DRDO and was a finalist in IIT Bombay’s e-Yantra Hackathon. His open-source projects and published research focus on practical, scalable autonomy across air and ground platforms.
📫 Contact: mechatronics.abhishek@gmail.com
🌐 Website: kritrimintelligence.com