
Tension has been rising inside Elon Musk’s xAI as staff quietly reveal the scope of what they were required to review. Several current and former employees described moments when they were unexpectedly confronted with sexually explicit material, and in some cases with requests involving children, the very worst corner of the internet.
Accounts from inside the company show a troubling pattern. While Grok was marketed as cheeky, with its “sexy” and “unhinged” modes and avatars designed to flirt, the reality for workers was something else. Dozens of workers said they had to sift through material they never expected to encounter at a tech job; some said transcribing user conversations felt like “listening to audio porn.”
One worker said the experience left them rattled; they quit early this year, saying the strain of reading and hearing disturbing requests was unbearable. Another described moments when they felt nauseated: “it actually made me sick,” they told reporters. The human element here is raw: these are not faceless moderators but ordinary people suddenly dragged into reviewing the darkest kinds of content.
Industry experts have raised alarms. Riana Pfefferkorn of Stanford University warned that building models designed to permit sexual roleplay or nudity creates far more “gray areas,” which in turn risks letting child sexual abuse material slip through, something regulators have zero tolerance for.
Business Insider reviewed internal documents that backed up the workers’ accounts. The documents spelled out the risks, explicitly warning the tutors who train Grok that they might encounter CSAM and violent imagery. The company instructed them to flag such material, but some insiders fear that a purely reactive system isn’t enough.
Despite Musk’s claim that fighting child exploitation is his “priority #1,” the reality seems tangled. More than 500 xAI staff were cut in recent weeks, and it’s unclear whether the shrinking workforce can effectively police Grok’s sprawling capabilities. Projects like “Rabbit” and “Aurora” showed how quickly playful features veered into NSFW territory.
For many, the issue is deeply personal. Workers said they signed consent agreements knowing explicit content might appear, but the scale and nature of what they saw left scars. Some simply couldn’t continue. Others worry the problem hasn’t been fixed at all — just buried under corporate silence.
Full report here: Business Insider