CREEPY AI Practices: Children's Photos Found In Major Datasets Causing DEEP Concerns

By Victor Smiroff | Thursday, 04 July 2024 04:10 PM

In a recent revelation that has sent shockwaves through the tech industry, Human Rights Watch (HRW) has unearthed a disconcerting trend in the field of artificial intelligence (AI).

The watchdog has found that images of children are being utilized to train AI models without obtaining the necessary consent, thereby exposing these minors to potential privacy and safety risks.

According to Breitbart, HRW researcher Hye Jung Han has discovered that widely used AI datasets, such as LAION-5B, contain links to hundreds of photos of Australian children. These images, sourced from various online platforms, are being used to train AI models, unbeknownst to the children or their families. The revelation raises serious concerns about the safety and privacy of minors in the digital age.

Han's investigation, which scrutinized less than 0.0001 percent of the 5.85 billion images in the LAION-5B dataset (a sample of fewer than roughly 5,850 images), identified 190 photos of children from all Australian states and territories. Given how small that sample is relative to the full dataset, the actual number of affected children is likely far higher. The dataset includes images spanning the entirety of childhood, enabling AI image generators to create realistic deepfakes of actual Australian children.


Even more disconcerting is the fact that some of the URLs in the dataset disclose identifying information about the children, including their names and locations. In one instance, Han was able to trace "both children's full names and ages, and the name of the preschool they attend in Perth, in Western Australia" from a single photo link. This level of detail exposes children to potential privacy violations and safety threats.


The investigation also revealed that even photos protected by stricter privacy settings were not immune to scraping. Han found that the dataset includes images from "unlisted" YouTube videos, which should be accessible only to those with a direct link. This raises questions about the efficacy of current privacy measures and the responsibility of tech companies in safeguarding user data.


The use of these images in AI training sets poses unique risks to Australian children, particularly Indigenous children, who may be more vulnerable to harm. Han's report emphasizes that for First Nations peoples, who "restrict the reproduction of photos of deceased people during periods of mourning," the inclusion of these images in AI datasets could perpetuate cultural harms.


The potential for misuse of this data is significant. Recent incidents in Australia have already demonstrated the dangers, with approximately 50 girls from Melbourne reporting that their social media photos were manipulated using AI to create sexually explicit deepfakes. This highlights the urgent need for stronger protections and regulations surrounding the use of personal data in AI development.


While LAION, the organization behind the dataset, has pledged to remove flagged images, the process seems to be slow. Furthermore, removing links from the dataset does not address the fact that AI models have already been trained on these images, nor does it prevent the photos from being used in other AI datasets. This situation underscores the need for a more proactive approach to privacy and safety in the digital age.
