2.13.2026

Is AI Getting Too Risqué for Your Family?

 This guest post is from Hannah Brown, Regent Law 3L & current Bioethics student:

AI chatbots are starting to act like adults. One of the newest features of Grok, Elon Musk’s chatbot, and possibly soon of ChatGPT, is sexually explicit content. Grok allows users to generate images with its “Spicy Mode.” As of now, Grok requires a user to enter a birth year, but there is no identity verification, so anyone, including minors, can easily use this feature. To use Grok at all, a user must be at least thirteen years old, and minors must have a legal guardian’s permission to use the platform. In practice, however, it falls on parents and guardians to check that their teenagers are using the platform appropriately.

According to Sam Altman, OpenAI had been restricting ChatGPT out of caution over mental health concerns, but the company now plans to “treat adult users like adults” and will allow erotica for verified adults beginning in December 2025. ChatGPT uses an age prediction model: it estimates how old a user is, and if it believes the user is under eighteen, it applies extra content protections to limit sexually explicit material. A user who is actually over eighteen but flagged as a minor can verify his age with a government ID and a selfie to lift the restrictions.

In light of Free Speech Coalition v. Paxton, in which the Supreme Court upheld a state age verification requirement for sexually explicit websites, ChatGPT, Grok, and other chatbots may soon need to adopt stricter age verification to protect minor children. If the platforms want to treat adults like adults, they also need to treat children like children and not allow them to stumble so easily across sexually explicit material.


https://grokimagine.ai/grok-spicy
