Listen To The Show
Transcript
Welcome to The Prompt by Kuro House, your daily AI update. Today, we’re diving into some major moves in AI leadership, groundbreaking chip manufacturing plans, new safety features, AI-generated personal audio, and a serious security wake-up call about AI-built apps.
First up, the dramatic story behind Sam Altman’s ouster from OpenAI just got clearer thanks to Mira Murati’s deposition. The Verge uncovered details showing Murati played a pivotal role in the board’s decision to remove Altman, citing concerns about his transparency and management style. Murati initially became interim CEO but quickly stepped aside for Emmett Shear, while quietly supporting Altman’s return. Text messages reveal tense negotiations and a board deeply divided over Altman’s leadership, with Murati caught in the middle. Ultimately, Murati publicly backed Altman’s reinstatement, even signing the letter, joined by 750 employees, that threatened mass resignations if he wasn’t brought back. This behind-the-scenes drama shows how complex AI company governance can be, especially when leadership and trust are on the line.
Next, Elon Musk’s SpaceX is making a massive bet on AI hardware with a $55 billion chip manufacturing plant in Texas. According to The Verge, the “Terafab” facility could eventually cost up to $119 billion as it scales. The plant aims to produce AI chips for SpaceX and Tesla, powering everything from robotics to space-based data centers. Intel is partnering on the project, bringing its chip design and fabrication expertise to help reach an ambitious target of producing 1 terawatt of computing power per year. Musk envisions the facility as a critical foundation for future AI and space technology advancements, signaling a new frontier in chip manufacturing.
On the safety front, OpenAI just rolled out a new feature called Trusted Contact to help users at risk of self-harm. TechCrunch reports that this lets users designate a trusted person who gets alerted if the AI detects signs of distress during conversations. The system encourages the user to reach out while sending a brief, privacy-conscious alert to the trusted contact. This follows previous safety measures and aims to provide timely support, especially as OpenAI faces lawsuits related to harmful interactions. It’s an optional feature but part of OpenAI’s commitment to making AI safer and more supportive during difficult moments.
Spotify is stepping into the AI-generated personal audio space with a new tool for creating custom podcasts. According to TechCrunch, users can now use programming tools like OpenAI’s Codex or Anthropic’s Claude Code to generate podcasts from documents, schedules, or articles. These AI-created podcasts appear directly in your Spotify library but remain private to you. Spotify’s new CLI tool, currently in beta, lets you write prompts to build audio sessions—imagine a deep dive into World Cup history narrated just for you. This move taps into growing demand for personalized audio content powered by AI agents.
Finally, a serious security alert from WIRED about thousands of AI-coded web apps exposing sensitive data on the open web. Security researchers found over 5,000 AI-built apps hosted on platforms like Lovable, Replit, and Netlify with little to no authentication. Many apps leaked corporate secrets, medical information, and even customer chat logs, simply because their creators never configured security properly. The widespread exposure is reminiscent of past cloud storage mishaps and highlights how AI tools let non-experts build and ship apps without security review. The takeaway? AI accelerates development, but it also demands stronger safeguards and user education to prevent massive data leaks.
That’s a wrap for today’s episode of The Prompt. From boardroom battles to billion-dollar chip factories, new safety tools, personalized AI audio, and security challenges, AI’s impact keeps expanding in every direction. Stay tuned as we continue to track the developments shaping the future of artificial intelligence.