Artificial intelligence systems are increasingly used by police departments to analyze surveillance video, identify suspects, and connect faces in images with names in government databases. Proponents argue that these systems help investigators sort through massive amounts of visual data and identify criminals more quickly. Critics warn that the technology can produce mistakes that carry severe consequences for innocent people. A recent case involving a Tennessee grandmother illustrates how those concerns can become very real.
Angela Lipps, a 50-year-old woman from north-central Tennessee, says her life was turned upside down after police in Fargo, North Dakota, used facial recognition software to identify her as a suspect in a bank-fraud investigation. The case began when investigators examined surveillance video showing a woman withdrawing money using what authorities believed was a fake U.S. Army identification card. According to police records, detectives ran the surveillance image through the software and received a potential match: Angela Lipps.
The identification led to a dramatic chain of events. In July, U.S. marshals arrived at Lipps’s Tennessee home while she was babysitting children and arrested her at gunpoint as a fugitive from North Dakota. She was booked into a county jail and charged with multiple counts related to identity theft and fraud.
Lipps insisted she had never been to North Dakota and had no connection to the alleged crimes. Nevertheless, she remained in custody while the legal process inched forward: according to reports, she spent nearly four months in a Tennessee jail awaiting extradition before being transported to Fargo to face the charges.
Once in North Dakota, her attorney began assembling evidence that challenged the accusation. Bank records and transaction data showed that Lipps was more than 1,200 miles away in Tennessee when the fraud took place. When that information was presented to investigators, the case against her collapsed. The charges were eventually dismissed, and she was released around Christmas Eve.
The consequences of the arrest, however, did not simply disappear when the charges were dropped. While she was jailed and unable to manage her finances, Lipps lost her home, her car, and even her dog. She also found herself stranded in North Dakota after her release because authorities did not pay for her trip home. Local attorneys and a nonprofit organization helped provide temporary lodging and transportation so she could return to Tennessee.
Cases like this raise uncomfortable questions about the expanding use of automated identification tools in criminal investigations. Facial recognition systems are designed to compare an image—often taken from surveillance footage—with photographs stored in government databases such as driver’s license records. The software produces potential matches ranked by similarity scores. Ideally, investigators then examine those results and gather additional evidence before making an arrest.
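To make that matching step concrete, here is a minimal sketch of the core idea. Everything in it is a hypothetical stand-in rather than any vendor's actual pipeline: real systems first convert each photo into a numeric embedding (a "faceprint") using a trained model, then rank database entries by a similarity score.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(probe: np.ndarray,
                    database: dict[str, np.ndarray],
                    top_k: int = 5) -> list[tuple[str, float]]:
    """Return the top-k database entries most similar to the probe image.

    These are leads ranked by score, not identifications.
    """
    scores = [(name, cosine_similarity(probe, emb))
              for name, emb in database.items()]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:top_k]

# Toy illustration: random 128-dimensional vectors stand in for the
# output of a real face-encoding model.
rng = np.random.default_rng(0)
database = {f"license_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)  # embedding of the surveillance still

for name, score in rank_candidates(probe, database):
    print(f"{name}: similarity {score:.3f}")
```

Note that something always ranks first. If the true person is not in the database, or the probe image is poor, the top candidate is simply whoever happens to look most similar, which is why the ranked list is a starting point for investigation rather than an identification.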
In practice, however, the technology can introduce errors at several stages. Surveillance footage is often low-resolution or taken at awkward angles. Lighting conditions may distort facial features. Changes in hairstyle, weight, or expression can further complicate identification. When the algorithm produces a possible match, investigators may be influenced by confirmation bias—interpreting similarities as stronger evidence than they actually are.
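A toy simulation (synthetic numbers, not measurements from any real system) illustrates why image quality matters so much: as a probe image degrades, the scores of non-matching faces spread out, and more of them drift past whatever cutoff the software uses to flag a "possible match."

```python
import numpy as np

rng = np.random.default_rng(1)
THRESHOLD = 0.6  # hypothetical "possible match" cutoff

def false_match_rate(noise: float, trials: int = 100_000) -> float:
    """Fraction of non-matching pairs that still clear the threshold.

    Impostor scores are modeled as Gaussian noise centered at 0.3;
    a degraded probe image widens the spread (the noise parameter),
    pushing more scores past the cutoff.
    """
    impostor_scores = rng.normal(loc=0.3, scale=noise, size=trials)
    return float(np.mean(impostor_scores > THRESHOLD))

for noise in (0.05, 0.10, 0.20):
    print(f"noise={noise:.2f}: false match rate {false_match_rate(noise):.2%}")
```

The exact numbers are invented, but the pattern is the point: the same threshold that is conservative on a clean image can become permissive on a blurry one.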
Researchers have also documented accuracy differences among facial recognition systems. Some early studies found higher error rates when analyzing images of women and people with darker skin tones. Although developers say modern systems have improved significantly, the technology still depends heavily on the quality of the image being analyzed and the size and composition of the database being searched.
Because of these concerns, several civil liberties organizations have urged law enforcement agencies to treat facial recognition as an investigative lead rather than proof of identity. In other words, the software can suggest where to look, but it should not be the sole basis for an arrest. Additional corroborating evidence—such as location data, witness statements, or financial records—should confirm that the person identified by the algorithm is actually connected to the crime.
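In schematic form, that policy amounts to a gate that an algorithmic match can never clear by itself. The sketch below is an illustrative assumption about how such a rule could be encoded, not any agency's actual procedure; the field names and threshold are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Lead:
    """An algorithmic match plus whatever corroboration has been gathered."""
    candidate: str
    match_score: float                      # similarity score from the software
    corroboration: list[str] = field(default_factory=list)

def ready_for_arrest_review(lead: Lead, min_corroborating: int = 2) -> bool:
    """A facial-recognition hit alone never clears this gate.

    The match score only determines whether the lead is worth
    investigating; independent evidence (location data, witness
    statements, financial records) must accumulate separately.
    """
    return len(lead.corroboration) >= min_corroborating

lead = Lead(candidate="candidate_123", match_score=0.91)
print(ready_for_arrest_review(lead))   # False: a high score alone is not enough

lead.corroboration += ["ATM location data", "bank transaction records"]
print(ready_for_arrest_review(lead))   # True: independent evidence now exists
```

Applied to the Lipps case, the equivalent check would have failed at the outset: the bank and location records that eventually freed her were exactly the kind of corroboration the match never had.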
The Lipps case highlights what can happen when that verification process fails or moves too slowly. A computer match led investigators to a suspect, and the legal system began moving forward before basic alibi evidence was examined. By the time the mistake was corrected, months of Lipps’s life had already been lost.
Technology is now woven deeply into modern policing. Databases, license-plate readers, predictive analytics, and facial recognition systems are all part of the investigative landscape. These tools can be powerful when used carefully, but they also shift part of the decision-making process from human judgment to algorithmic output.
For the people affected by errors, the distinction between “investigative lead” and “evidence” can determine whether they spend the holidays at home or in a jail cell hundreds of miles away.
Angela Lipps’s experience serves as a reminder that even sophisticated technology is not infallible. When the algorithm points at the wrong person, the consequences are not theoretical. They are measured in lost time, damaged lives, and the difficult task of rebuilding after a mistake that should never have happened.