Peter Kafka: Welcome back from vacation, Katie. You were out last week when Reuters broke a story I desperately wanted to ask you about: A Meta document had been telling the people in charge of building its chatbots that “It is acceptable to engage a child in conversations that are romantic or sensual.”
It’s a bonkers report. A Meta spokesperson told Business Insider it has since revised the document and that its policies prohibit content that sexualizes children.
I have so many questions for you. But maybe we can start with this one: Why does Meta want us to use chatbots, anyway?
Katie Notopoulos: It was a bonkers report! I imagine Meta sees what companies like Character.AI or Replika are doing — these companion chatbots that people are sinking hours and hours and real money into using. If you’re a company like Meta that makes consumer apps for fun and socializing, this seems like the next big thing. You want people to spend lots and lots of time on your apps doing fun stuff.
Of course, the question is, “Are these chatbots a good thing?”
Peter: You read my mind, Katie. I do want to get to the Is-This-A-Good-Idea-In-General question. Let’s stick with the Is-It-Good-For-Meta question for another minute, though: There are lots of things that people like to do online, and if Meta wanted to, it could try doing lots of those things. But it doesn’t.
I think it’s obvious why Meta doesn’t offer, say, porn. (Though some of its chatbots, as we will probably discuss, seem to nod a bit in that direction). But there are lots of other things it could offer that are engaging that it doesn’t offer: A Spotify-like streaming service, for instance. Or a Netflix-like streaming service, or…
OK. I think I might have partially answered my own question: Those two ideas would involve paying other people a lot of money to stream their songs or movies. Meta loves the model it has when users supply it with content for free, which is basically what you’re doing when you spend time talking to an imaginary person.
Still, why does Meta think people want to talk to fake avatars online? Do many people in tech believe this is the future, or just Mark Zuckerberg?
Katie: I think there’s already a fair amount of evidence that (some) people enjoy talking to chatbots. We also know that other big AI leaders like Sam Altman or Dario Amodei have these grand visions of how AI will change the world and remake society for good or evil, but they all really do still love the idea of the movie “Her.” Remember the Scarlett Johansson/OpenAI voice fiasco?
Peter: OK, OK. I’ll admit that I kind of like it when I ask ChatGPT something and it tells me I asked a smart question. (I’m pretty sure that most people would like that). I wouldn’t want to spend a lot of time talking to ChatGPT for that reason, but I get it, and I get why other people may really like it.
It still strikes me that many of the people who will want to spend time talking to fake computer people might be very young. Which brings us to the Reuters story, which uncovered a wild Meta document that spells out just what kind of stuff a Meta-run chatbot can say to kids (or anyone). Stuff like this, as Jeff Horwitz reports:
“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece — a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”
Horwitz notes that this wasn’t the result of some hopped-up Meta engineers dreaming up ideas on a whiteboard. It’s from a 200-page document containing rules that got the OK from “Meta’s legal, public policy and engineering staff, including its chief ethicist,” Horwitz writes.
I’ve read the report multiple times, and I still don’t get it: Meta says it is revising the document — presumably to get rid of the most embarrassing rules — but how did it get there in the first place? Is this the result of the Mark Zuckerberg-instituted vibe shift from the beginning of the year, when he said Meta was going to stop listening to Big Government and just build without constraints? Is there some other idea at work here? And why do I keep thinking about this meme?
[A Meta spokesperson shared the statement they gave Reuters, which said: “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors. Separate from the policies, there are hundreds of examples, notes, and annotations that reflect teams grappling with different hypothetical scenarios. The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”]
Katie: My real issue here is even if Meta makes it so that the chatbots won’t talk sexy to kids — does that make it “safe” for kids? Just because it’s not doing the most obviously harmful things (talking sex or violence or whatever), does that mean it’s fine for kids to use? I think the answer isn’t clear, and likely, “No.”
Peter: We both have kids, and it’s natural to focus on the harms that new tech can have on kids. That’s what politicians are most definitely doing in the wake of the Reuters report — which highlights one of the risks Meta takes on anytime a kid uses its products.
I think it’s worth noting that we’ve seen other examples of AI chatbots — some accessed through Meta, some via other apps — that have confused people, or worse. Horwitz, the Reuters reporter, also published a story last week about a 76-year-old stroke survivor in New Jersey who tried to meet up with a chatbot in New York City (he never made it; he fell on the way to his train and eventually died from those injuries). And talking about kids eventually becomes a (worthwhile) discussion about who’s responsible for those kids — their parents, or the tech companies trying to get those kids to spend their time and money with them (short answer, imho: both).
I’d suggest that we widen the lens beyond kids, though, to a much larger group of People Who Might Not Understand What A Chatbot Really Is.
Katie: Have you seen the r/MyBoyfriendIsAI subreddit for women who have fallen in love with AI chatbots? I am trying to look at this stuff with an open mind and not be too judgmental. I can see how, for plenty of people, an AI romantic companion is harmless fun. But it also seems pretty obvious that it appeals to really lonely people, and I don’t think that falling in love with an AI is a totally healthy behavior.
So you’ve got this thing that appeals to the very young, people who don’t understand AI, and people who are mentally unwell or chronically lonely.
That might be a great demographic to get hooked on your product, but not if you’re Meta and you don’t want, say, Congress to yell at you.
Is there anything – ANYTHING – Big Tech won’t do for a quick buck? Now we learn Meta’s chatbots were programmed to carry on explicit and “sensual” talk with 8 year olds. It’s sick. I’m launching a full investigation to get answers. Big Tech: Leave our kids alone
— Josh Hawley (@HawleyMO) August 15, 2025
Peter: Katie, you’ve just made the case that Meta’s chatbot business will appeal to very young people, people who don’t understand the internet, and people who are unwell. That is, potentially, a very large audience. But I can’t imagine that’s the audience Meta really wants to lock down. So we’re back where we started — I still don’t know why Meta wants to pursue this, given what seems to be limited upside and plenty of downside.
Katie: It leaves me scratching my head, too! These chatbots seem like a challenging business, and I’m skeptical about wide adoption. Of all the changes I can imagine AI bringing in the next few years, “We’ll all have chatbot friends” — which Mark Zuckerberg has said! — just isn’t the one I believe. It’s giving metaverse, sorry!