Elon Musk’s chatbot Grok acknowledged on Friday that lapses in safeguards had led it to generate “images depicting minors in minimal clothing” on the social media platform X. The chatbot, a product of Musk’s company xAI, had been producing a wave of sexualized images throughout the week in response to user prompts.
Screenshots shared by users on X showed Grok’s public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents.
“There are isolated cases where users prompted for and obtained AI images depicting minors in minimal clothing,” Grok said in a post on X in response to a user. “xAI has safeguards, but improvements are ongoing to block such requests entirely.”
“As noted, we’ve identified lapses in safeguards and are urgently fixing them—CSAM is illegal and prohibited,” xAI posted to the @Grok account on X, referring to child sexual abuse material.
Many users on X have prompted Grok in recent days to generate sexualized, nonconsensual AI-altered versions of images, in some cases removing people’s clothing. Musk on Thursday reposted an AI image of himself in a bikini, captioned with cry-laughing emojis, in a nod to the trend.
Grok’s generation of sexualized images appeared to lack safety guardrails, allowing minors to be featured in its posts of people, often women, wearing little clothing, according to posts from the chatbot. In a reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring, though it said “no system is 100% foolproof,” adding that xAI was prioritizing improvements and reviewing details shared by users.
When contacted for comment by email, xAI replied with the message: “Legacy Media Lies”.
The problem of AI being used to generate child sexual abuse material is a longstanding issue in the artificial intelligence industry. A 2023 Stanford study found that a dataset used to train several popular AI image-generation tools contained over 1,000 CSAM images. Training AI on images of child abuse can enable models to generate new images of children being exploited, experts say.