A California father who took naked photos of his toddler for the doctor had his life upended when an automated tool used by Google flagged his account and triggered a 10-month criminal investigation.
As reported by The New York Times, the San Francisco father noticed unusual swelling on his toddler in February 2021, when many physicians were postponing in-person office visits due to the pandemic or conducting care through telemedicine, and reached out to a doctor. He was asked to send photos prior to their emergency consultation via videoconferencing.
The father uploaded photos of his son’s genitals, which were also backed up on his Google cloud, to the health care provider’s messaging system as requested. But because he used an Android phone, artificial intelligence (AI) programmed by Google flagged the photos as potential child abuse material.
Two days later, Google notified the father that the images had been flagged and that he was locked out of his account. He appealed, explaining that the photos had been taken at the request of his child’s physician for medical purposes, and that they ultimately allowed doctors to prescribe antibiotics that cleared his toddler’s swelling. Google nonetheless maintained that the presence of “harmful content” was a “severe violation of Google’s policies and might be illegal.” After the denial, he was unable to access any of his data and was cut off by his mobile provider, Google Fi.
It wasn’t until several months later that the father, who had lost access to his phone number, learned that he had been the subject of an investigation by the San Francisco police department. In December 2021, he received an envelope from law enforcement with documents about the investigation and copies of search warrants served on Google and his ISP.
And even though police noted that his case was closed and that “no crime had occurred,” Google continued to deny his request for access and informed him the account would be permanently deleted.
How Google AI Alerts Authorities to Abusive Material
This isn’t the first time Google AI has flagged parents for taking photos of their children. In 2021, a Texas father was investigated and later cleared by Houston Police after he was flagged for taking photos of his son on an old Android and sending them to his wife on Google’s chat service. Like the California father, he too never regained access to his decade-old Google account.
These stories shine a spotlight on the controversial use of AI technology to identify abusive material, which in Google’s case involves its Content Safety API AI toolkit.
Released in 2018, the toolkit scans images and videos uploaded to Google Photos for “hashes” or unique digital fingerprints of child sexual abuse material. In addition to matching known hashes to those on a database, the technology is also used to identify previously unseen imagery, prioritize those most likely to be deemed harmful, and flag them for a human moderator. Illegal material is also reported to the National Center for Missing and Exploited Children (NCMEC), which works with law enforcement agencies to investigate and remove abusive images from the platform.
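The hash-matching step described above can be illustrated with a short sketch. Note that Google’s actual system is proprietary and uses perceptual hashing that tolerates resizing and re-encoding; the cryptographic SHA-256 comparison below only matches byte-identical files, and the hash database here is purely hypothetical, but the overall flow (hash the upload, check it against a database of known fingerprints, escalate matches for human review) is the same idea.

```python
import hashlib

# Hypothetical database of known-bad image fingerprints (illustrative only).
# Real systems like Google's use perceptual hashes, not SHA-256.
KNOWN_HASHES = {
    # SHA-256 of an empty byte string, used here as a stand-in entry.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a digest of the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_material(image_bytes: bytes) -> bool:
    """Flag an upload only if its exact fingerprint is in the database.

    In a production pipeline, a match would be routed to a human
    moderator rather than acted on automatically.
    """
    return fingerprint(image_bytes) in KNOWN_HASHES

# The empty byte string hashes to the entry seeded above, so it "matches";
# any other content does not.
print(matches_known_material(b""))       # True
print(matches_known_material(b"photo"))  # False
```

The limitation this sketch makes visible is also the policy problem: exact-match hashing can only catch previously catalogued files, which is why systems that additionally try to classify never-before-seen imagery (as Google’s does) inevitably produce false positives like the cases described in this article.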
In 2021, Google reported over 621,000 cases of abusive material to the NCMEC, alerting authorities to more than 4,260 potential new victims. Google told the Times that it only scans personal images after users take an “affirmative action,” such as backing them up to Google Photos.
For many advocates, the latest incident involving Google’s Content Safety toolkit is yet another example of why monitoring data on personal devices or clouds constitutes an invasion of privacy. As one industry expert told the Times: “This is precisely the nightmare that we are all concerned about. They're going to scan my family album, and then I'm going to get into trouble.”
As a criminal defense practice that routinely represents clients facing sex crime investigations, our team at KBN knows how these monitoring technologies can put innocent people and parents in positions where they have to defend themselves against serious criminal allegations.
As we see law enforcement increasingly turn to digital forensics when investigating sex crimes, we know that reviewing data stored on personal devices and the cloud can be a big part of the prosecution’s case. As a result, we’ve dedicated ourselves to understanding these technologies and their limitations in order to provide the best possible defense for our clients. If you have questions about a criminal investigation or case and how we can help, contact us for a free and confidential consultation.
Source: The New York Times, “A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal.”