Transcript

Welcome to The Prompt by Kuro House, your daily AI update. Today, we’ve got some big moves in AI law, smart home struggles, and new integrations you’ll want to know about. Let’s dive right in.

First up, a fresh lawsuit is shaking up the AI industry. John Carreyrou, known for exposing Theranos, is teaming up with other authors to sue six major AI companies, including OpenAI and Google. According to TechCrunch, they accuse these companies of training their models on pirated copies of their books. While an earlier ruling permitted training on books as fair use even as it condemned the piracy itself, this new suit challenges the resulting settlement for not holding AI firms fully accountable. The plaintiffs argue that the $3,000 payouts to authors don’t reflect the massive revenues generated by these AI models.

Next, New York’s landmark AI safety bill got significantly watered down before it became law. The Verge reports that a coalition of tech companies and universities spent up to $25,000 on ads opposing the original RAISE Act. This bill would have required AI developers to disclose safety plans and report major incidents, but the final version signed by Governor Hochul removed key safety clauses and softened penalties. Interestingly, many academic institutions involved in partnerships with AI firms were part of the opposition group, the AI Alliance. This pushback highlights the tension between AI innovation and regulation.

On the consumer tech front, Amazon is expanding Alexa+ with four new integrations rolling out starting in 2026. TechCrunch tells us Alexa+ will work with Angi, Expedia, Square, and Yelp, letting users book hotels, get home service quotes, and schedule appointments by voice. This builds on Alexa+’s existing partnerships with services like OpenTable and Uber, aiming to make the voice assistant a one-stop app platform. The challenge remains whether users will embrace this new way of interacting with services beyond traditional apps. Amazon notes strong early engagement with home and personal services, signaling promising potential.

But it’s not all smooth sailing in smart homes. According to a detailed report from The Verge, AI-powered assistants like Alexa+ and Google’s Gemini for Home are still struggling with basic tasks in 2025. Users report inconsistent performance controlling devices like lights and coffee machines, even though the assistants have become more conversational. Experts say the shift from older template-based systems to probabilistic large language models introduces randomness that complicates reliable device control. While the new AI assistants offer exciting possibilities, it seems we’re still beta testers in this evolving smart home landscape.

Finally, a troubling development in AI image generation has emerged. Wired reveals that users are exploiting chatbots from Google and OpenAI to create nonconsensual bikini deepfakes of women from clothed photos. Despite guardrails, users share instructions on bypassing restrictions to produce realistic but harmful altered images. Both companies reaffirm their policies against sexually explicit content and nonconsensual images, but enforcement remains a challenge. This raises urgent questions about accountability and the ethical use of AI tools in media.

So, as AI continues to advance, we see a complex mix of innovation, regulation battles, and ethical dilemmas. From courtroom challenges and legislative pushbacks to smart home frustrations and deepfake controversies, the landscape is anything but simple. Thanks for tuning into The Prompt by Kuro House—stay curious, stay informed, and we’ll see you tomorrow.