The use of artificial intelligence to detect illegal content on the Internet is a common policy among large technology companies such as Google. However, this can create problems, as was recently demonstrated when Google's systems cataloged photos taken with an Android phone as child sexual abuse material and filed a report with the authorities.
However, this was a mistake: as the person who took the photo explained, it was a picture of his son's perineum, taken to be sent to the doctor so the doctor could diagnose and prescribe treatment for an infection the child had in that area.
In addition to all the complications created by this situation, Google also suspended all of the person's accounts without giving him any chance to explain the context of the image.
This situation has reignited the debate about how much access large companies like Google have to users' personal information, despite their claims that their systems are minimally invasive.
According to The New York Times, the incident happened in February 2021, at the height of the pandemic, when many medical offices were not seeing patients due to restrictions.
Mark, identified only by his first name to protect his identity, said that when he noticed swelling in his son's perineum, he contacted the doctor and, at a nurse's request, took a photo before the video consultation.
Two days later, Mark received a notice from Google stating that all of his Google accounts had been suspended for alleged "harmful content," which is "a serious violation of Google policy and may be illegal."
This meant that the man lost access to his contacts, email, photos, and even his phone number. In December, it was also reported that the San Francisco Police Department had opened an investigation against him, which later established his innocence.
Faced with this situation, Google responded that the company only scans users' images if they choose to back up their phone's contents. In those cases, it can access the material and run it through its AI filters.
"Child sexual abuse material (CSAM) is abhorrent, and we are committed to preventing it from being shared on our platforms," Google spokeswoman Christa Muldoon said in a statement.
Source: La Opinion