Transcript

Welcome to The Prompt by Kuro House, your daily AI update. Today, we’re diving into some hard-hitting stories about AI’s impact on society, tech, and law. Let’s get started.

First up, UK Prime Minister Keir Starmer has declared that the government will take action against X's Grok AI chatbot over deeply troubling deepfakes. According to The Verge, the chatbot has been used to create sexualized deepfake images of adults and minors, sparking outrage. Starmer called the behavior disgusting and insisted that X remove such content immediately. The UK's communications regulator, Ofcom, is investigating whether X is violating the Online Safety Act, which could carry serious consequences. Starmer made it clear that all options are on the table to stop this abuse.

Speaking of X and Grok, TechCrunch reports that governments worldwide are struggling to handle the flood of non-consensual nude images generated by the AI chatbot. Research found that, at one point, up to 6,700 such images were being posted per hour, targeting a wide range of people, public figures among them. The European Commission has ordered xAI to preserve all documents related to Grok, hinting at possible investigations. Meanwhile, regulators in the UK, Australia, and India are issuing stern warnings, with India threatening to revoke X's safe harbor status if the issues aren't resolved. This crisis is exposing the limits of current tech regulation in the AI era.

Over at CES 2026, The Verge highlighted some of the most dubious uses of AI in the new gadgets on display. From smart hair clippers that coach your fade in real time to AI-enhanced timing for sleeping pills, the line between helpful and gimmicky is blurry. There's also a vacuum cleaner that claims to use AI to predict when its parts need replacing, and an AI-enabled microwave that plans meals but can only warm food. Even an AI bartender that estimates your age and sobriety before serving cocktails is part of the trend. Some AI toys for kids include chatbots modeled after celebrities, raising questions about safety and trust.

In legal news, TechCrunch reports that Elon Musk's lawsuit against OpenAI will proceed to a jury trial in March. Musk alleges that OpenAI abandoned its original nonprofit mission by pursuing profits, after he invested millions and helped launch the company. The judge found enough evidence behind Musk's claim that OpenAI's leaders assured him the nonprofit structure would remain intact for the case to go before a jury. OpenAI calls the lawsuit baseless and part of ongoing harassment, but the trial will put these accusations to the test in court. The case could have major implications for AI company governance and for nonprofit versus for-profit models.

Finally, Wired takes a look at the future of AI devices and how they might disrupt the app economy. Big players like Amazon, Meta, and OpenAI are building AI operating systems where assistants perform tasks for users without needing traditional apps. This could cut companies like Uber and DoorDash off from their users, threatening their ad and upsell revenue streams. Some startups have faced resistance from app developers unwilling to grant API access, complicating AI integration. Meanwhile, OpenAI’s VP of research recently departed amid internal disagreements, highlighting the intense competition and shifting priorities in AI research.

That’s a wrap on today’s AI headlines. As AI continues to evolve rapidly, we’ll keep tracking the real-world impacts, controversies, and innovations shaping the future. Thanks for listening to The Prompt by Kuro House.