Facial recognition in aviation: The insight

Security systems based on facial-recognition technology promise to improve the airport experience—but the aviation industry needs to carefully consider public opinion amid vocal criticism of these systems by privacy and data-security advocates, writes LeClairRyan aviation attorney Mark A. Dombroff for Airport Business magazine.

“As our society adjusts to what can seem, at least to some, like an invasive change, the aviation industry will need to handle the onboarding of this technology with care and sensitivity,” writes Dombroff, an Alexandria, Va.-based Member of the national law firm and co-leader of its aviation industry practice.

In the piece (“Forward-Facing: Could Facial Recognition Turn Back the Clock on the U.S. Airport Experience?”), Dombroff notes that airports across the globe are investing in facial-recognition systems that promise to, in essence, transform the entire terminal into an always-on security checkpoint.

Meanwhile, more airports and airlines are sending digital images of passengers’ faces for cross-checking against biometric profiles in a database maintained by the Department of Homeland Security (DHS). In a best-case scenario, the attorney contends, the travel experience could dramatically improve as slow-moving security lines give way to fast-and-easy boarding. “An optimist might even wonder whether a trip to the airport in 2029 will feel a bit like a throwback to 1999,” he writes.

But over the past few months, vocal critics of such systems appear to have gained ground, Dombroff cautions, citing bans or proposed bans on government use of facial-recognition technology in San Francisco; Somerville, Mass.; and other communities.

Another headline-grabber was the $1 billion lawsuit filed this past April by a New York college student, who alleged that Apple had used facial recognition to falsely accuse him of shoplifting at several Apple stores around the Northeast. That same month, Dombroff continues, a JetBlue passenger’s outraged Twitter post went viral after she described being asked to peer into a camera prior to boarding a flight at JFK.

Against this backdrop, privacy advocates have expressed alarm at U.S. government plans to roll out facial recognition for all international passengers at the top 20 American airports by 2021. Should airport operators be concerned about being named in lawsuits over misidentifications, racial profiling and the like? One key protection to keep in mind, Dombroff writes, is the SAFETY Act of 2002. Passed in the aftermath of 9/11, it was designed to safeguard businesses offering products or services that stand to protect Americans from terrorism.

These protections flow from manufacturers to end-users, and so airports and airlines should make sure that the creators of any security-related products and services, including facial-recognition software and systems, have secured SAFETY Act registration, Dombroff advises.

The JetBlue incident highlights how important it is for airlines and airports to be proactive about countering misinformation and making sure passengers understand how these systems work, Dombroff advises. “Through signage, social media messaging and other means, the industry needs to make abundantly clear when and how people can opt-out of the scans (and if they cannot, as with whole-terminal scanning, airports need to be upfront about it),” he writes.

“Given that this technology is relatively young and is bound to have the expected bugs and errors, screeners also need to be trained to anticipate misidentifications,” Dombroff continues. “When they get a ‘hit,’ they should respond professionally, take the passenger to the side and engage in a standard ID check.”

After all, aggressive “red alert” responses to misidentified passengers are a PR nightmare in waiting, the attorney concludes. “No doubt about it—they will be filmed and posted on social media within seconds of occurring, if not in real time.”