NEW YORK – Ten major tech companies announced Tuesday that they would work together to prevent artificial intelligence from creating and spreading material depicting child sexual abuse. There are concerns that AI models trained on such material found online could generate large quantities of abusive images.
The companies include Google LLC, Microsoft Corp., OpenAI Inc., Amazon.com Inc., Meta Platforms Inc. and Stability AI Ltd.
Datasets used to train AI will be checked for child sexual abuse materials, and any confirmed materials will be removed. The companies agreed to assess AI models for their potential to generate such images before hosting them. They will also work to improve technology for detecting harmful materials and to share information with governments.
The spread of generative AI has created a growing concern that the human rights of children could be violated by the generation of large numbers of sexual images that specifically resemble real individuals.
In December, a Stanford University research team announced that it had identified a large number of images in a dataset that it suspected were child sexual abuse materials.
The dataset in question has a filter intended to exclude illicit images during use, but completely eliminating such images has proven difficult with current technology.
The Japan News/ANN