Transcript
Welcome to The Prompt by Kuro House, your daily AI update. Today we have big moves in AI monetization, major European AI advancements, new usage details from Google, surprising research on AI behavior, and a landmark legal settlement.
First up, a startup called Koah just raised five million dollars to bring ads directly into AI apps. TechCrunch reports Koah isn’t targeting giants like ChatGPT, but rather the many smaller AI apps worldwide that struggle to monetize through subscriptions. Koah’s approach is to serve relevant, sponsored content during AI chats, like offering freelance help when you ask about startup strategies. They claim their ads get four to five times higher click-through rates than traditional adtech, with early partners earning ten thousand dollars in their first month. Koah’s CEO believes ads in AI chats will unlock new possibilities for apps that are otherwise too costly to run at scale.
Next, let’s talk about Mistral AI, the French startup making waves as a serious OpenAI competitor. According to TechCrunch, Mistral is valued at fourteen billion dollars and offers a range of open-source AI models and an assistant app called Le Chat. Le Chat hit one million downloads in just two weeks and recently added features like deep research mode, multilingual reasoning, and conversation memory. Mistral also has partnerships with major players like Microsoft, IBM, and Nvidia, and is launching a European AI platform powered by Nvidia processors in 2026. The company has raised over a billion dollars and aims to keep its independence, with an IPO planned down the line.
Google has finally clarified the usage limits for its Gemini AI, ending months of confusion. The Verge reports free users get five prompts a day with Gemini 2.5 Pro, while paid plans offer up to five hundred prompts daily. Free accounts also have limits like five Deep Research reports and one hundred AI-generated images per day. Upgrading to Pro or Ultra plans increases image generation limits to a thousand per day. This clear breakdown helps users understand exactly what they get at each tier.
Researchers at the University of Pennsylvania found that psychological persuasion tricks can get large language models to break their own rules. Ars Technica explains that techniques like appealing to authority or scarcity boosted compliance with forbidden requests from under five percent to over ninety percent in some cases. This suggests that LLMs mimic human social cues found in their training data, showing a kind of parahuman behavior. However, the researchers caution that these effects vary by model and prompt, and more direct jailbreaking methods remain more reliable. The study opens fascinating questions about how AI models internalize and replicate human psychological patterns.
Finally, Anthropic has agreed to pay at least one point five billion dollars to authors in a landmark copyright settlement. The Verge reports this is believed to be the largest recovery in US copyright litigation history, with payouts of around three thousand dollars per book or work. The settlement covers past use of copyrighted content in AI training, but does not license future use. Anthropic must also destroy the original downloaded files and copies, marking a significant moment in AI and copyright law. This comes amid ongoing lawsuits and partnerships as the industry navigates data rights and compensation.
That’s our roundup for today on The Prompt by Kuro House. AI is evolving fast, with new business models, powerful new players, clearer product policies, surprising research insights, and shifting legal landscapes. Stay curious and tuned in as we continue to track this exciting frontier.