Challenge Overview: No one understands the need for both thorough security screenings and shorter wait times more than the Transportation Security Administration (TSA). The agency is responsible for security at all U.S. airports, screening more than two million passengers daily.
As part of the Apex Screening at Speed Program, DHS has identified high false alarm rates as creating significant bottlenecks at airport checkpoints. Whenever TSA’s sensors and algorithms predict a potential threat, TSA staff must engage in a secondary, manual screening process that slows everything down. As the number of travelers increases every year and new threats develop, TSA’s screening algorithms need to continually improve to meet increased demand.
Currently, TSA purchases updated algorithms exclusively from the manufacturers of its scanning equipment. These algorithms are proprietary, expensive, and often take a long time to create. In this competition, TSA stepped outside its established procurement process and challenged the broader data science community to help improve the accuracy of its threat prediction algorithms. Using a dataset of images collected on the latest generation of scanners, participants were challenged to identify the presence of simulated threats under a variety of objects, clothing, and body types. Even a modest decrease in false alarms will help TSA significantly improve the passenger experience while maintaining high levels of security.
Algorithms developed through this challenge will complement existing systems funded under the DHS S&T Apex Screening at Speed Program. The competition supports DHS S&T and TSA’s goals of increased security effectiveness and rapidly responding to evolving threats.
1st Place: Jeremy Walthers of Rockville, MD, received the 1st Place prize of $500,000 for the top-scoring algorithm. Walthers’ approach used an array of deep learning models customized to process images from multiple views.
2nd Place: Sergei Fotin of Nashua, NH, received the second place prize of $300,000 with an approach that fuses 2D and 3D sources of data to make object and location predictions.
3rd Place: David Odaibo and Thomas Anthony of Alabaster, AL, won the $200,000 third place prize, presenting a solution that uses specialized image-level annotations to train their 2-stage identification models.
4th-8th Place: The following entrants rounded out the top eight teams and will each receive $100,000:
- Fourth Place: Zach Teed, Hudson, OH, for a solution that detects threats using a location-based model.
- Fifth Place: Oleg Trott, San Diego, CA, fused 2D and 3D data sources with automated image augmentation to improve model accuracy.
- Sixth Place: Halla Yang, Wilmette, IL, and Phillip Adkins, Chicago, IL, designed an approach that automatically segments the image before running models trained on specific cropped images.
- Seventh Place: Suchir Balaji, Sunnyvale, CA, used synthetic data and cross-image analysis to produce more robust predictions.
- Eighth Place: Michael Avendi, Irvine, CA, used separately trained models and random image augmentation to improve results.
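Random image augmentation, mentioned in several of the winning approaches above, means applying small random transformations (flips, crops, and the like) to training images so a model does not overfit to one pose or body type. As a minimal illustrative sketch only (none of the winners' actual code is public in this recap, and the function and parameters here are invented for the example), random flip-and-crop augmentation on a 2D scan slice might look like:

```python
import numpy as np

def augment(image, rng, crop=56):
    """Illustrative augmentation: random horizontal flip, then a random crop.

    `image` is a 2D array standing in for one scanner view;
    `crop` is the side length of the square patch returned.
    """
    if rng.random() < 0.5:
        image = image[:, ::-1]  # horizontal flip with probability 0.5
    h, w = image.shape
    top = rng.integers(0, h - crop + 1)   # random vertical offset
    left = rng.integers(0, w - crop + 1)  # random horizontal offset
    return image[top:top + crop, left:left + crop]

rng = np.random.default_rng(0)
scan = rng.random((64, 64))   # stand-in for one scanner image
patch = augment(scan, rng)
print(patch.shape)  # (56, 56)
```

Feeding a model many such randomly perturbed patches of the same image is what makes the resulting predictions more robust to the "variety of objects, clothing, and body types" the challenge describes.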