Listen To The Show
Transcript
Welcome to The Prompt by Kuro House, your daily AI update. Today, we’ve got some exciting moves in AI hardware, software, and user controls. Let’s dive into the latest breakthroughs and partnerships shaping the AI landscape.
First up, AMD is making a bold play to challenge Nvidia’s grip on AI chips. According to The Verge, AMD has partnered with OpenAI to supply six gigawatts of processors for AI data centers over five years. This deal includes deploying AMD Instinct MI450 GPUs starting in the second half of 2026, aiming to meet OpenAI’s growing computational needs. AMD’s CEO Lisa Su called it a “win-win” that will accelerate AI progress and boost the ecosystem. Interestingly, OpenAI is working with both AMD as a core strategic partner and Nvidia as a preferred strategic partner, signaling a new era of multi-vendor AI infrastructure.
Next, OpenAI is giving developers a powerful new toolkit called AgentKit to build and deploy AI agents. TechCrunch reports that Sam Altman unveiled AgentKit at OpenAI’s Dev Day as a complete set of building blocks for agent workflows, from prototype to production. AgentKit features Agent Builder, a visual design tool for creating agent logic, and ChatKit, an embeddable chat interface for apps. It also includes evaluation tools for measuring agent performance and a connector registry to securely link agents with internal and third-party systems. This toolkit aims to reduce friction and speed up the creation of autonomous AI applications across industries.
Speaking of apps, OpenAI is now launching interactive applications directly inside ChatGPT. Users can access apps from companies like Booking.com, Spotify, and Zillow right within their ChatGPT conversations, as reported by TechCrunch. For example, you can ask ChatGPT to find apartments using Zillow or create playlists with Spotify without leaving the chat. This new system builds on the Model Context Protocol, an open standard for connecting AI models to external data sources, to link apps and render interactive UIs in responses. OpenAI plans to expand this ecosystem with apps like DoorDash and Uber, aiming to make ChatGPT a hub for personalized, adaptive experiences.
On the control front, OpenAI’s Sora app has introduced new features that give users more say over AI-generated deepfake videos of themselves. The Verge explains that users can now restrict where and how their AI doubles appear, including blocking appearances in political content or around certain topics. Users can even customize preferences for their virtual selves, like having a specific hat in every video. While these safeguards are a step forward, OpenAI acknowledges ongoing challenges and plans to strengthen restrictions and watermarking to prevent misuse. This reflects growing concern about misinformation and privacy in AI-generated media.
Finally, at OpenAI’s Dev Day, former Apple designer Jony Ive revealed work on a new family of AI-powered devices with OpenAI. Wired reports that these devices aim to redefine our relationship with technology by making us happier, more fulfilled, and less anxious. Unlike traditional phones or laptops, these hardware products may be screenless and context-aware, using cameras and microphones to interact with users. Ive emphasized the goal is not just efficiency but social good, with a launch possibly targeted for late 2026. This ambitious project highlights OpenAI’s push beyond software into transformative hardware experiences.
So, from chip partnerships to new toolkits, interactive apps, user controls, and groundbreaking hardware, AI is moving fast and wide. These developments show a clear focus on building powerful, responsible, and human-centered AI technologies. Thanks for tuning in to The Prompt by Kuro House—stay curious, and we’ll see you tomorrow.