In this issue: Using chatbots as personal coaches and expert advisors. AI workflow hacks for manuscript editing. Google’s new video generator Veo 3 and the ease of creating misinformation. And why publishers need to rethink content for Gen Z’s AI-first search habits, with Rheinische Post’s Margret Seeger.
What we’re talking about: Studies of varying quality show that more and more people are using chatbots as coaches. There are tons of example prompts where the chatbot isn’t just supposed to play a professional in some field but a famous person, and that persona is then supposed to help with life decisions or evaluate an investment portfolio, a business idea, or whatever.
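To make that concrete, here is a minimal sketch of the persona-prompt pattern using OpenAI’s chat API. The persona is nothing more than a system message; the model name and the wording below are placeholders I picked for illustration, not anything from the tools mentioned in this issue.

```python
# Minimal persona-prompt sketch: the "famous expert" lives entirely in the
# system message. Model name and persona wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model works here
    messages=[
        # The system message sets up the persona the user is asking for.
        {
            "role": "system",
            "content": (
                "You are a seasoned value investor in the style of a famous "
                "fund manager. Evaluate ideas candidly, flag risks first, "
                "and say clearly when you are speculating."
            ),
        },
        {
            "role": "user",
            "content": (
                "My portfolio: 60% index funds, 30% tech stocks, 10% cash. "
                "What would you change?"
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

That’s the whole trick: a paragraph of role-play instructions, and the model’s training data does the rest.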
Apparently we’re craving authorities: there are made-up interviews with Albert Einstein and similar gimmicks. The more famous the person, the more data about them in the training sets, the better it works. The idea of using one person’s knowledge as the foundation for a chatbot is already out there: all of Andrew Huberman’s collected podcasts (I know, problematic) are accessible through a chat and discovery interface built by the company Dexa.
I’m just waiting for more podcasters, or even journalists, to adopt this model. It’s only a matter of time. Google is already showing how it’s done and has created a digital copy of author Kim Scott: essentially a chatbot that draws on her knowledge and her bestseller “Radical Candor: How to Get What You Want by Saying What You Mean.” Because nothing says “personal growth” like a bot telling you to be more honest with your coworkers.
The appeal is knowledge that hasn’t yet flowed into the models’ training, or that the models won’t spit out as quotes because they don’t want to violate copyright that blatantly. The idea of expert chatbots is so obvious. So is the risk of the chatbot spouting nonsense in someone’s name. You can minimize that risk by doing nothing at all, or you can just go for it. Kim Scott and Google are going for it. What’s the worst that could happen? (Famous last words, probably.)
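For a sense of how such an expert bot is typically wired up, here’s a minimal retrieval-augmented sketch: the person’s corpus (transcripts, book chapters) gets embedded, and the model answers only from retrieved excerpts. This is my assumption about the general pattern, not how Dexa or Google actually built their products; the toy corpus, the chunking, and the model names are all illustrative.

```python
# Minimal retrieval-augmented expert bot: answer from a person's own corpus
# instead of relying on (or regurgitating) training data. The toy corpus and
# model names are illustrative assumptions, not any vendor's actual setup.
from openai import OpenAI
import numpy as np

client = OpenAI()

# Stand-in corpus: in practice, chunked podcast transcripts or book passages.
chunks = [
    "Morning sunlight exposure helps anchor the circadian rhythm.",
    "Delaying caffeine 90 minutes after waking can reduce afternoon crashes.",
    "Radical candor means caring personally while challenging directly.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunk_vecs = embed(chunks)

def answer(question, k=2):
    q_vec = embed([question])[0]
    # Rank chunks by cosine similarity to the question.
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n".join(chunks[i] for i in np.argsort(sims)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer only from the provided excerpts. If they don't "
                    "cover the question, say so instead of guessing."
                ),
            },
            {
                "role": "user",
                "content": f"Excerpts:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return resp.choices[0].message.content

print(answer("When should I drink coffee?"))
```

The retrieval step is the point: the bot draws on material it was explicitly given, rather than hoping the quotes fall out of the training data, which is exactly the copyright angle above.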
What else I’ve been reading: