Will Privacy Laws Thwart Facial Recognition Tech in Healthcare?

June 10, 2019 / Brandon Tasset

San Francisco recently became the first U.S. city to ban the use of facial recognition technology (FRT) by city agencies, as part of a broader ordinance covering privacy and surveillance. Under the new rules, FRT is now off-limits to city healthcare agencies and clinics, and private healthcare facilities may soon follow suit to avoid conflict with the city.

Facial recognition technology uses biometrics to create the equivalent of a facial “thumbprint.” Increasingly it’s being used in technology to identify approved users and prevent fraud; one of the most popular examples of facial recognition tech in the consumer space is Apple’s “Face ID” for iPhone and iPad Pro. In the healthcare space, facial recognition has been proposed for a variety of security and delivery purposes, including:

  • as a more secure way to authenticate access to electronic health record (EHR) systems;
  • to assess patient pain levels and dispense the appropriate medication dose;
  • to diagnose certain genetic conditions;
  • and to provide enhanced security for healthcare facilities by more quickly identifying trespassers or drug-seekers, especially in overcrowded facilities.

Opposition to facial recognition technology in healthcare stems from two root concerns: inaccuracy and privacy. Currently, a number of factors could result in inaccurate diagnoses or medication distribution: the inability of AI to accurately interpret facial expressions, particularly in people with pre-existing conditions; inherent cultural, ethnic, racial, gender, or age biases in the AI; the difficulty of assessing patients with facial injuries; and the general limitations of existing AI, to name a few. A facial recognition mistake may not be caught quickly, if at all, creating possible patient risks and increased liability. As the technology evolves, though, facial recognition will become more nuanced, and many of these concerns will likely be resolved.

Privacy concerns, however, are thornier. Beyond the general privacy concerns raised by any recognition technology, the photos used to create a patient's "facial thumbprint" — and any other images or video that identify a patient — are presumably subject to HIPAA privacy regulations. Because patient images must be stored in order to be useful, facilities would need to add potentially thousands, or hundreds of thousands, of high-resolution images to their existing EHR systems. The AMA Journal of Ethics has identified additional privacy and informed consent concerns that will also need to be addressed, including:

  • Does the patient understand what their image will be used for, where it will be kept, and for how long?
  • What level of access do third-party tech developers have to patient biometric data, and what can they do with it?
  • Will facial recognition data be made available to insurance companies, and for what purposes?

These privacy concerns are similar to those raised each time a new technology becomes available. It's vital that healthcare providers and developers work together to create privacy protections that limit the improper use of patient data while still allowing patients and providers to benefit from better healthcare delivery. It would be unfortunate if people were denied the potential benefits of facial recognition technology because its use was banned in its nascent stages.