Listen To The Show

Transcript

Welcome back to The Prompt by Kuro House—your daily dive into the fast-moving world of AI and the ripple effects it’s having across policy, business, and technology. Today’s stories are heavy, insightful, and at times, unsettling. We’re looking at how political agendas are reshaping AI oversight, the unraveling of a once-celebrated gene therapy, and how universities are stepping in where governments are stepping back. Let’s get into it.

Our first story comes from MIT Technology Review, and it’s a sobering look at what’s happening to America’s AI watchdog. The Federal Trade Commission, once a rare bipartisan hero in the fight to keep AI companies honest, is now under siege. The Trump administration’s new AI Action Plan explicitly targets the FTC’s previous efforts, calling them burdensome red tape. That includes investigations into companies like Evolv, whose AI-powered security scanners failed to detect a seven-inch knife later used in a school stabbing, and IntelliVision, a facial recognition firm that falsely claimed its software was free of gender and racial bias. The FTC also cracked down on AI-powered “lawyers” and fake product review generators. These weren’t symbolic slaps on the wrist—the agency provided real relief to consumers. But under Trump, the FTC is being gutted. He has fired the agency’s Democratic commissioners, and a federal court had to step in to reinstate one of them, Rebecca Slaughter. Now, the administration is promising to review—and potentially reverse—every AI-related enforcement action taken over the past four years. The message is clear: innovation first, consumer protection later, if at all.

That brings us to our second story, also from MIT Technology Review, which looks at the broader implications of Trump’s AI Action Plan. While the administration claims it’s fighting to eliminate bias in AI, Wired reports that the plan itself is deeply biased. The language is coded but unmistakable. The plan calls for revising federal AI guidance, the NIST AI Risk Management Framework, to eliminate references to “misinformation,” “Diversity, Equity, and Inclusion,” and even “climate change.” Yes, you heard that right: climate change is now treated as part of a “social engineering agenda.” Trump’s executive order, titled “Preventing Woke AI in the Federal Government,” effectively requires AI models to align with his administration’s worldview if their makers want federal contracts. And since companies like OpenAI, Anthropic, and Google are all vying for those contracts, the pressure to comply is enormous. So far, none of these companies has publicly pushed back. Instead, they’ve praised the plan’s support for infrastructure and research funding. But critics like Senator Edward Markey are sounding the alarm, warning that this is an attempt to turn AI into a propaganda tool. If AI becomes just another mouthpiece for political ideology, what happens to truth?

Staying with MIT Technology Review, our third story is a heartbreaking one. It’s about Elevidys, a gene therapy designed to treat Duchenne muscular dystrophy, or DMD—a rare, fatal condition that primarily affects young boys. Elevidys was approved in 2023 under the FDA’s accelerated approval program, which allows drugs for serious diseases to reach patients faster on the strength of a surrogate marker, even before their clinical benefit is fully proven. The therapy costs $3.2 million and was initially hailed as a breakthrough. But here’s the catch: Elevidys never actually proved it worked. It raised levels of micro-dystrophin, a shortened, engineered form of the protein missing in DMD patients, but failed to show a convincing improvement in patients’ symptoms. A follow-up clinical trial then failed to meet its primary endpoint. And now the company behind it, Sarepta, is facing intense scrutiny. For families clinging to hope, the fallout is devastating. They were sold a miracle that science couldn’t back up.

Meanwhile, another MIT Technology Review piece highlights how nonprofits and universities are stepping in to salvage the U.S. climate agenda—because the federal government has all but abandoned it. The Trump administration has pulled funding from the U.S. Global Change Research Program and fired the last remaining staffers in the State Department’s Office of Global Change. This office used to coordinate the U.S. contribution to the UN’s Intergovernmental Panel on Climate Change (IPCC), including selecting scientists and funding their travel. With the government out of the picture, a coalition of 10 universities—including Yale, Princeton, and UC San Diego—has formed the U.S. Academic Alliance for the IPCC. They’ve nominated nearly 300 scientists to participate in the next major climate report, and the American Geophysical Union is now fundraising to cover travel costs. Researchers are also using this moment to test new tools—AI-powered sensors and novel data collection methods—to keep climate science moving forward. It’s a grassroots rescue mission for global climate leadership.

And finally, let’s return to the foundational issue raised by MIT Technology Review in their piece titled “Trump’s AI Action Plan is a distraction.” At the heart of the AI ecosystem in the U.S. are immigrants. Of the 42 U.S. companies on the 2025 Forbes list of top AI startups, 60% have at least one immigrant cofounder. Immigrants also helped build the very companies—OpenAI, Anthropic, Google, Nvidia—that are now shaping the future of artificial intelligence. But that pipeline is under threat. Anti-immigration policies and cuts to R&D funding are reversing a decades-long trend of brain gain. The U.S. is no longer the automatic destination for the world’s best minds. Instead, we’re watching a slow-motion talent exodus. And history tells us what that could mean. Silicon Valley thrived because of its culture of innovation, risk-taking, and yes, defection—from the Traitorous Eight to the PayPal Mafia. If we choke off the flow of new talent and ideas, we risk losing not just the AI race, but the very spirit of American innovation.

That’s all for today’s edition of The Prompt. The stories we covered aren’t just about AI—they’re about who gets to shape the future, how truth is defined, and what happens when institutions meant to protect us are hollowed out. As always, stay curious, stay critical, and we’ll see you tomorrow.

Podcast also available on PocketCasts, SoundCloud, Spotify, Google Podcasts, Apple Podcasts, and RSS.