PROJECT OVERVIEW

AI & Software Published: 1/15/2025

The Story

In January 2025, the Palisades fire swept through Los Angeles and hit close to home. Friends and members of my community lost their homes, and during those chaotic days I noticed something that stuck with me: the information landscape was a mess. Official government alerts were slow and sometimes contradicted one another. Meanwhile, people on Reddit, local Facebook groups, and neighborhood text chains were sharing real-time updates about road closures, fire lines, and safe routes faster than any official channel could keep up.

The problem was obvious. Official sources were trusted but slow. Crowdsourced reports were fast but unverified. People were making life-or-death evacuation decisions based on incomplete or contradictory information, leading to traffic bottlenecks that blocked roads for emergency vehicles. I kept thinking: what if someone combined both? What if you could take the speed of user reports and the authority of official data, then use AI to bridge the trust gap?

That question became DisasterScope-EvacuationHub.

The Problem

During a disaster, information is fragmented across dozens of sources. FEMA, local fire departments, the National Weather Service, and CalFire all publish alerts, but they update at different intervals and sometimes contradict each other. At the same time, people on the ground are the first to see what is actually happening, but their reports lack verification. There is no single platform that merges these streams intelligently and helps people make informed decisions about when and how to evacuate.

The consequences are real. Confused residents delay evacuation. Traffic jams form on the wrong routes. Emergency services get blocked. People die not because they were not warned, but because they could not figure out which warning to trust.

My Solution

I designed and built DisasterScope-EvacuationHub as a full-stack AI-driven platform that integrates real-time hazard data from official sources with crowdsourced user reports, then applies AI verification to give every piece of information a transparent confidence score and reasoning.

The system tracks over 10,000 alerts at any given moment, pulling from government APIs, weather services, and our growing community of registered reporters. When a user submits a report — say, a road closure or a new fire line — the platform does not just display it raw. It runs the report through a multi-step verification pipeline: spatial corroboration with nearby official data, cross-referencing with other user reports in the area, and an AI model that evaluates consistency and plausibility. The result is a confidence score that users can see and evaluate for themselves.
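The platform's actual pipeline code isn't shown here, so as a rough illustration, here is a minimal Python sketch of how spatial corroboration and signal blending can yield a transparent confidence score. Every name and number in it (the `Report` shape, the 5 km radius, the 0.5/0.3/0.2 weights, the placeholder `plausibility` input standing in for the AI model) is an assumption of mine, not DisasterScope's real implementation:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Report:
    lat: float
    lon: float
    kind: str  # e.g. "road_closure", "fire_line"

def haversine_km(a: Report, b: Report) -> float:
    # Great-circle distance between two reports, in kilometres.
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def agreement(report: Report, sources: list[Report], radius_km: float = 5.0) -> float:
    # Fraction of sources within radius_km that describe the same hazard type.
    nearby = [s for s in sources if haversine_km(report, s) <= radius_km]
    if not nearby:
        return 0.0
    return sum(s.kind == report.kind for s in nearby) / len(nearby)

def verify(report, official, peers, plausibility, weights=(0.5, 0.3, 0.2)):
    # Blend the three verification signals into one score, and return the
    # per-signal breakdown so the reasoning stays visible to the user.
    signals = {
        "official_corroboration": agreement(report, official),
        "peer_corroboration": agreement(report, peers),
        "model_plausibility": plausibility,  # stand-in for the AI model's output in [0, 1]
    }
    score = sum(w * v for w, v in zip(weights, signals.values()))
    return round(score, 3), signals

# Illustrative use: one official source agrees, one of two nearby peers agrees.
report = Report(34.07, -118.54, "road_closure")
official = [Report(34.08, -118.55, "road_closure")]
peers = [Report(34.06, -118.53, "road_closure"), Report(34.07, -118.52, "fire_line")]
score, signals = verify(report, official, peers, plausibility=0.8)  # score comes out around 0.81
```

Returning the `signals` dictionary alongside the score is the point of the sketch: a user can see that a 0.81 came mostly from official corroboration rather than from the model alone.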

On top of the data layer, I built a smart evacuation routing feature. Once the system understands where hazards are and how confident we are in each data point, it can suggest safe evacuation routes that avoid danger zones and minimize congestion. The routing updates in real time as new information comes in.
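One standard way to implement hazard-aware routing is shortest-path search over a road graph whose edge costs are inflated where a hazard is believed to exist, with the penalty scaled by confidence. The sketch below uses Dijkstra's algorithm for that idea; the graph shape, penalty table, and function name are my own illustrative assumptions, not the platform's actual router:

```python
import heapq

def safest_route(graph, start, goal, hazard_penalty):
    # graph: {node: [(neighbor, base_minutes), ...]}
    # hazard_penalty: {(u, v): extra cost for edges crossing suspected hazard zones}
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, base in graph.get(u, []):
            nd = d + base + hazard_penalty.get((u, v), 0.0)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None  # no route avoids the hazards at any cost
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path))

# Illustrative road graph: A-B-D is faster, but a high-confidence hazard
# on the A-B segment pushes the route through C instead.
roads = {"A": [("B", 5.0), ("C", 6.0)], "B": [("D", 5.0)], "C": [("D", 6.0)]}
route = safest_route(roads, "A", "D", {("A", "B"): 100.0})  # detours via C
```

Re-running the search whenever the penalty table changes is what "updates in real time" amounts to in this framing: the graph is static, and only the hazard costs move.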

Technical Stack

The frontend is built with React for a responsive, real-time interface that can handle constantly updating map layers and alert streams. The backend manages data ingestion from multiple APIs, stores user reports, and runs the verification pipeline. The AI verification system uses a combination of rule-based spatial analysis and machine learning to assess report credibility. Everything is deployed on cloud infrastructure to handle traffic spikes during active disasters, when usage can surge unpredictably.

Real-time data processing was one of the toughest technical challenges. During an active emergency, the system needs to ingest, verify, score, and display new information within seconds. I built a pipeline that processes incoming data asynchronously while keeping the user-facing dashboard responsive. The confidence scoring algorithm balances multiple signals — proximity to official reports, reporter history, temporal consistency, and geographic plausibility — into a single transparent score.
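The asynchronous ingest-and-verify split described above can be sketched with a producer/consumer queue. This is a toy Python illustration of the pattern under my own assumptions (the trivial placeholder scoring function included), not the platform's actual pipeline:

```python
import asyncio

async def ingest(queue, raw_feed):
    # Producer: push raw alerts onto the queue as they arrive,
    # then a sentinel to signal end of stream.
    for item in raw_feed:
        await queue.put(item)
    await queue.put(None)

async def verify_worker(queue, scored):
    # Consumer: score items off the serving path, so a slow verification
    # step never blocks ingestion or the user-facing dashboard.
    while True:
        item = await queue.get()
        if item is None:
            break
        scored.append((item, min(1.0, 0.1 * len(item))))  # placeholder scoring

async def main(raw_feed):
    queue = asyncio.Queue(maxsize=100)  # bounded, so a burst applies backpressure
    scored = []
    await asyncio.gather(ingest(queue, raw_feed), verify_worker(queue, scored))
    return scored
```

The bounded queue is the key design choice: during a surge, ingestion slows down gracefully instead of exhausting memory, while already-verified items keep flowing to the dashboard.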

Key Features

  • Real-time alert aggregation: Over 10,000 alerts tracked at any moment from official and crowdsourced channels
  • AI-powered verification: Every user report receives a confidence score with transparent reasoning
  • Spatial corroboration: Cross-references reports with nearby official data and other user submissions
  • Smart evacuation routing: Suggests safe routes that update in real time as conditions change
  • Community reporting: Registered users can submit disaster reports that get verified and scored
  • Transparent trust system: Users see exactly why a report has a given confidence level

Impact and Results

DisasterScope has grown beyond what I initially imagined. The platform has received over 1,500 visits and built a community of 250+ registered users who actively report during emergencies.

The project earned recognition in three major competitions. At the World Artificial Intelligence Competition for Youth (WAICY), DisasterScope placed 5th globally out of 130,000 participants from over 105 countries. It won 2nd place in the Congressional App Challenge for California’s 26th district. And it ranked 4th nationally at the ACP MetroCode competition.

But the result I care about most is not the awards. It is the users who told me the platform helped them make better decisions during an emergency. That is the whole point.

What I Learned

Building DisasterScope taught me more than any classroom could. I learned how to architect systems that handle real-time data at scale, how to design AI verification pipelines that are transparent rather than black-box, and how to build products that people actually use during high-stress situations.

The hardest lesson was about trust. A disaster platform is useless if people do not trust it, so every design decision centered on transparency. Showing the confidence score is not enough; you have to show the reasoning behind it. Users need to make their own judgment, and the platform’s job is to give them the best possible information to do that.

I also learned the importance of building for resilience. When a disaster hits, your servers had better not go down. I spent significant time on infrastructure reliability, graceful degradation, and caching strategies that keep the platform functional even under extreme load.
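One such degradation strategy, serving the last known-good value when an upstream feed fails, can be sketched in a few lines of Python. This is a toy illustration under my own assumptions (class name, TTL, and return shape included), not DisasterScope's actual caching layer:

```python
import time

class StaleCache:
    """Serve fresh data when the fetch succeeds; fall back to the last
    good value (marked stale) when the upstream source fails under load."""

    def __init__(self, fetch, ttl_seconds=30.0, clock=time.monotonic):
        self.fetch = fetch
        self.ttl = ttl_seconds
        self.clock = clock
        self.value = None
        self.fetched_at = None

    def get(self):
        now = self.clock()
        if self.fetched_at is not None and now - self.fetched_at < self.ttl:
            return self.value, "fresh"  # within TTL: skip the upstream call entirely
        try:
            self.value = self.fetch()
            self.fetched_at = now
            return self.value, "fresh"
        except Exception:
            if self.value is not None:
                return self.value, "stale"  # degrade gracefully instead of erroring
            raise  # nothing cached yet: the failure has to surface
```

Labeling the result `"stale"` instead of hiding the failure keeps the transparency principle intact: the dashboard can show users that a layer is minutes old rather than silently presenting it as current.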

DisasterScope started as a response to a local tragedy, but the problem it solves is universal. Disasters happen everywhere, and everywhere the same information gap exists. I am continuing to develop the platform and expand its capabilities, because I believe technology should make people safer, not just more informed.
