Transcript

Welcome to The Prompt by Kuro House, your daily AI update. Today, we’re unpacking some major moves in AI, from new platforms to bold legislation and groundbreaking research timelines. Let’s dive right in.

First up, Elon Musk has launched Grokipedia, pitched as an AI-powered alternative to Wikipedia. According to Wired, the new platform has stirred controversy by promoting far-right talking points and containing historical inaccuracies. For example, Grokipedia falsely links pornography to worsening the AIDS epidemic and frames transgender identity as a social media “contagion.” The site also criticizes mainstream media and gives conservative viewpoints prominent placement, with some entries stretching to nearly 11,000 words. It’s a bold experiment in AI-generated knowledge, but one that raises serious questions about bias and misinformation.

In other news, OpenAI has completed its for-profit restructuring and struck a new deal with Microsoft, as reported by The Verge. The for-profit arm is now a public benefit corporation, with the nonprofit parent holding an equity stake valued at around $130 billion and retaining oversight. The updated agreement clarifies Microsoft’s rights to OpenAI’s technology, extending its IP rights through 2032 and covering post-AGI models with safety guardrails. Interestingly, Microsoft’s rights no longer extend to OpenAI’s consumer hardware, such as the secret device designed with Jony Ive. Plus, OpenAI can now collaborate more freely with third parties, and Microsoft can independently pursue AGI development, signaling an intensifying race.

Speaking of AGI, OpenAI’s CEO Sam Altman shared an ambitious timeline during a TechCrunch livestream. OpenAI aims to have an intern-level AI research assistant by September 2026 and a fully autonomous “legitimate AI researcher” by 2028, according to TechCrunch. The plan involves scaling up compute resources dramatically and pushing algorithmic innovation so these systems can tackle complex scientific problems faster than human researchers. Altman says such an AI researcher would be capable of independently delivering on large research projects, potentially accelerating breakthroughs in medicine and technology. He also highlighted a roughly $1.4 trillion infrastructure investment commitment over the coming years to support these goals.

Meanwhile, a startup called Elloe AI is positioning itself as the “immune system for AI,” aiming to add crucial safety layers to language models. TechCrunch reports that Elloe AI’s platform fact-checks outputs, verifies compliance with regulations like HIPAA and GDPR, and maintains audit trails for transparency. Unlike many competing solutions, it doesn’t rely on LLMs to police LLMs; instead, it uses machine learning and human oversight to catch bias, hallucinations, and unsafe outputs. Elloe AI is gaining attention as a Top 20 finalist at Disrupt 2025, offering a promising approach to AI safety and reliability.
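To give a flavor of what a non-LLM safety layer like this can look like, here is a minimal, purely illustrative Python sketch: it runs rule-based compliance and fact-check passes over a model’s answer, keeps an audit trail of every check, and withholds the answer if any check fails. Every name in it (AuditRecord, check_pii, check_claims, guard) is hypothetical and invented for this example; it is not Elloe AI’s actual product, API, or methodology.

```python
# Illustrative sketch only: a guardrail layer that post-processes an LLM's
# answer before it reaches the user. All names here are hypothetical.
import re
import json
import time
from dataclasses import dataclass, field

@dataclass
class AuditRecord:
    """One entry in the audit trail: which check ran and what it found."""
    check: str
    passed: bool
    detail: str
    timestamp: float = field(default_factory=time.time)

def check_pii(text: str) -> AuditRecord:
    """Rule-based scan for obvious personal data (a US SSN-like pattern),
    standing in for a HIPAA/GDPR-style compliance check."""
    has_pii = bool(re.search(r"\b\d{3}-\d{2}-\d{4}\b", text))
    return AuditRecord("pii_scan", not has_pii,
                       "possible SSN found" if has_pii else "no obvious PII")

def check_claims(text: str, trusted_facts: dict[str, str]) -> AuditRecord:
    """Toy fact-check: flag topics the answer mentions without the fact a
    trusted store expects. A real system would retrieve from vetted sources."""
    contradictions = [topic for topic, fact in trusted_facts.items()
                      if topic.lower() in text.lower()
                      and fact.lower() not in text.lower()]
    return AuditRecord("fact_check", not contradictions,
                       f"unsupported claims about: {contradictions}"
                       if contradictions else "consistent with trusted store")

def guard(model_output: str,
          trusted_facts: dict[str, str]) -> tuple[str, list[AuditRecord]]:
    """Run every check; withhold the answer if any check fails, and return
    the (possibly replaced) answer plus the full audit trail."""
    trail = [check_pii(model_output), check_claims(model_output, trusted_facts)]
    if all(record.passed for record in trail):
        return model_output, trail
    return "This response was withheld by the safety layer.", trail

if __name__ == "__main__":
    facts = {"GDPR": "European Union"}  # tiny stand-in for a vetted knowledge store
    answer, trail = guard("GDPR is a United States law. SSN: 123-45-6789.", facts)
    print(answer)
    print(json.dumps([record.__dict__ for record in trail], indent=2))
```

In practice, the fact-check pass would query vetted external sources rather than a hard-coded dictionary, but the shape is the point: deterministic checks plus a durable audit trail, rather than one LLM judging another.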

Finally, lawmakers in the US are proposing strict new rules on minors’ use of AI chatbots. The Verge covers the GUARD Act, introduced by Senators Hawley and Blumenthal, which would bar users under 18 from accessing AI companion chatbots. AI companies would be required to verify users’ ages through government IDs or other reasonable methods, and chatbots would have to disclose that they are not human every 30 minutes. The bill also bans chatbots from producing sexual content for minors or promoting suicide, with criminal and civil penalties for violations. This legislation reflects growing concerns over AI’s impact on children and the need for stronger safeguards.

That’s a wrap for today’s AI news roundup. These stories show just how fast the AI landscape is evolving, with innovation, controversy, and regulation all moving in parallel. Thanks for tuning in to The Prompt by Kuro House. Catch you tomorrow for more AI insights.