In March 2023, a synthetic image of Pope Francis wearing a white Balenciaga puffer jacket went viral, racking up 28 million views on Twitter in 48 hours. The image was generated by Pablo Xavier, a Chicago-area construction worker, using Midjourney v5 in roughly 90 seconds. It cost him nothing. Debunking it cost significantly more: AFP, Reuters, and Bellingcat collectively spent an estimated 40 analyst-hours verifying it was fake, tracing the metadata, and publishing corrections that reached perhaps 3 million people, roughly a tenth of the original audience. The asymmetry has only widened since. By late 2024, the AI-detection startup Sensity reported that synthetic media online had grown 550% year-over-year, reaching 500,000 new deepfake videos per month. Platforms like X and TikTok were flooded. A single person with a ...
Popular framing: Deepfakes are a misinformation problem caused by bad actors creating deceptive content and platforms failing to remove it fast enough. Better detection tools, stronger moderation, and media literacy education will contain the threat. The 'Pope Jacket' wasn't malicious, but its success proved the infrastructure for malice is fully operational and free.
Structural analysis: Deepfakes instantiate a fundamental cost asymmetry: synthetic content creation has been commoditized to near-zero marginal cost, while verification remains labor-intensive and expensive. This is not a moderation lag but a structural Gresham's Law dynamic in which cheap synthetic signals systematically crowd out expensive authentic ones in attention markets, regardless of any individual platform's policy. The externalities (journalist hours, market volatility, institutional trust erosion) fall entirely on parties with no relationship to the creator, which makes chronic overproduction of harmful content the rational equilibrium. The 'Liar's Dividend' compounds the damage: the mere existence of deepfakes lets anyone dismiss *real* evidence, such as a politician's actual scandal, as fake.
The popular frame treats each viral deepfake as a discrete enforcement problem to be solved at the content layer, which misses that the system's incentives make overproduction rational. As long as creation costs approach zero while verification costs stay fixed or rise, no content moderation regime can close the gap; the effective interventions target the cost structure itself (liability assignment, provenance standards with economic teeth, public investment in verification as a common good) rather than individual pieces of content.
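To make the asymmetry concrete, here is a back-of-envelope sketch in Python built from the figures cited above (roughly 90 seconds to generate an image, roughly 40 analyst-hours to debunk one, 500,000 new deepfake videos per month). The 1% audit fraction, the 1,000-analyst workforce, and the 160 working hours per analyst per month are illustrative assumptions, not numbers reported in this piece.

```python
# Back-of-envelope model of the creation/verification cost asymmetry.
# Figures marked (cited) come from the text above; everything else is
# an illustrative assumption.

CREATION_HOURS = 90 / 3600      # ~0.025 analyst-hours to generate one image (cited: ~90 seconds)
VERIFICATION_HOURS = 40         # analyst-hours to rigorously debunk one item (cited estimate)
MONTHLY_VOLUME = 500_000        # new deepfake videos per month (cited: Sensity, late 2024)


def verification_backlog(fraction_checked: float, analysts: int,
                         hours_per_analyst: float = 160.0) -> float:
    """Monthly shortfall in analyst-hours; negative means capacity to spare."""
    demand = MONTHLY_VOLUME * fraction_checked * VERIFICATION_HOURS
    supply = analysts * hours_per_analyst
    return demand - supply


if __name__ == "__main__":
    ratio = VERIFICATION_HOURS / CREATION_HOURS
    print(f"Verification is ~{ratio:,.0f}x more expensive per item than creation.")
    # Hypothetical scenario: audit only 1% of monthly volume with 1,000 full-time analysts.
    shortfall = verification_backlog(fraction_checked=0.01, analysts=1_000)
    print(f"Monthly shortfall at 1% coverage with 1,000 analysts: {shortfall:,.0f} analyst-hours")
```

Under these assumptions verification runs roughly 1,600 times more expensive per item than creation, and auditing even 1% of the monthly volume outstrips a thousand full-time analysts, which is the structural point: no plausible scaling of the verification side closes a gap of that shape.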