I’m struck by how many posts in my Facebook feed these days appear to be AI-generated, both the text and the accompanying photos. Many are clearly pushing falsehoods, and some attribute statements to political figures and celebrities who never said them.
I admit I use an LLM (large language model) to help with my posts, but my process is to write the post entirely myself, based on the truth as I know it, then run it past the AI and ask it to fact-check what I’ve said to keep me honest. Often the LLM wants to rewrite the post, supposedly to improve the grammar and flow. I draw the line there. I may edit my post with some of its suggestions, especially on grammar and facts, but I won’t let AI write for me.
AI-generated narrative text often has telltale signs once you’ve read enough of it, but even experienced readers can’t always be sure. Detection tools can help, though they’re not perfect. Clearly, social media platforms could use their own AI to detect synthetic content. I stop short of saying platforms should ban it—because that risks a slippery slope toward limiting free speech—but they could at least flag posts with an “AI probability” rating to promote transparency.
On YouTube it’s a different story. Many channels use AI narration and AI-generated images in their videos. In most of those cases, I see it as a practical choice. People want to get their ideas out there; they write their own scripts, but they may not have a good speaking voice, good audio equipment, a budget for voiceovers and stock photos, or even enough command of English to narrate their own videos. YouTube now asks creators to disclose when realistic content is AI-generated, which seems like a fair and responsible approach.
I ran this post past an LLM and let it fold in factual corrections—I left in one telltale sign that AI touched the post. Can you find it?