UK regulator opens investigation into Grok sexualising images on X

Indonesia and Malaysia have already blocked access to Elon Musk’s platform until the issue, which primarily involves images of women and children, is addressed – who’s next?

Ofcom, the UK’s telecoms and media regulator, has opened a formal investigation into the use of the Grok AI tool on X to manipulate images of women and children, among other things stripping off their clothes and placing them in provocative poses. Both X and Grok are owned by Elon Musk, and Grok is embedded in the messaging platform as well as being available as a standalone tool. The images started to appear after a recent update to Grok.

There has been a public and political outcry over a flood of sexualised images generated by Grok and shared on X without the permission of the individuals depicted.

The regulator is investigating X under the Online Safety Act (OSA), which gives Ofcom the power to impose a range of penalties for breaches of the Act, from requiring specific steps to achieve compliance, to issuing fines of up to 10% of global revenue and, in the worst cases, blocking an app or website from being used in the UK.

Business disruption

To do this, Ofcom would have to apply to a court for “business disruption measures” such as requiring internet service providers to block access in the UK. Last Friday the government said it would support Ofcom if it decided to impose this measure.

The regulator said it would pursue the investigation as a “matter of the highest priority, while ensuring we follow due process. As the UK’s independent online safety enforcement agency, it’s important we make sure our investigations are legally robust and fairly decided.”

Ofcom said it had contacted X about its concerns on 7 January. Having considered the platform’s response, it opened a formal investigation.

Now it will assess whether X breached the law by: failing to assess the risk of people seeing illegal content on the platform; not taking appropriate steps to prevent users from viewing illegal content such as intimate image abuse and child sexual abuse material; not removing illegal material quickly; not protecting users against breaches of privacy rights that are protected by law; failing to assess the risk X may pose to children; and failing to use effective age-verification mechanisms before allowing access to pornography.

Action, what action?

Last week CNN reported that Musk and xAI said that they are taking action “against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.” But CNN noted that Grok’s responses to user requests are still flooded with images sexualising women.

Ofcom has not given a date for its preliminary findings from the investigation, nor for its subsequent final ruling. The regulator is not generally noted for its speed – it is to be hoped this is an exception.

Jess Asato, a Labour MP and campaigner against AI nudification, was quoted in The Guardian, a UK newspaper, saying, “It’s still happening to me and being posted on X because I speak up about it.” She added that “thousands of people” had sent her hateful messages and AI-augmented images of her as a result of her campaign to ban AI nudification.
