A group of 34 U.S. states is suing Meta, the owner of Facebook and Instagram, alleging that the company manipulates minors through its platforms. The lawsuit comes as concerns about child safety rise alongside rapid advances in artificial intelligence, including generative text and image tools. The legal action alleges that Meta's algorithms encourage addictive behavior and harm children's mental well-being. Separately, the United Kingdom-based Internet Watch Foundation (IWF) has warned about the proliferation of AI-generated child sexual abuse material (CSAM) after discovering more than 20,000 such images on a single dark web forum in one month. The IWF has called for global cooperation and regulation to address the issue.


Thirty-four U.S. states, including California, New York, Ohio, South Dakota, Virginia, and Louisiana, have filed a lawsuit against Meta, the parent company of Facebook and Instagram, alleging that it improperly manipulates minors on its platforms. The states claim that Meta's algorithms, together with in-app features such as the "Like" button, promote addictive behavior and harm children's mental well-being.


The states are pursuing legal action despite recent statements from Meta's chief AI scientist downplaying concerns about the risks of AI technology. They seek damages, restitution, and compensation ranging from $5,000 to $25,000 per alleged occurrence in each state.


Meanwhile, the Internet Watch Foundation (IWF) in the UK has raised concerns about the proliferation of AI-generated child sexual abuse material (CSAM). The IWF discovered over 20,000 AI-generated CSAM images on a single dark web forum in one month and emphasized the need for global cooperation to combat the issue. It suggests a multifaceted approach, including adjustments to existing laws, enhanced law enforcement education, and regulatory oversight of AI models.


To address the problem, the IWF advises AI developers to prohibit the generation of child abuse content and to remove such material from their models. Advances in AI image generators have made it possible to create highly lifelike depictions of people, raising concerns about potential misuse of the technology.


The rise in AI-generated CSAM highlights the need for regulatory measures and international collaboration to address child safety concerns in the digital age.


Keywords: U.S. states, lawsuit, child safety, Meta, Facebook, Instagram, AI advancements, addictive behavior, Internet Watch Foundation, AI-generated CSAM, child abuse content, regulatory supervision.


(AMAKA NWAOKOCHA, COINTELEGRAPH, 2023)