A faster printing press doesn't make the book less real
On building with AI without becoming the problem.
There is a conversation happening in educated circles that I find myself in the middle of — sometimes uncomfortably so. AI is destroying the internet. AI-generated content is poisoning real information. And anyone who builds with AI is complicit.
I build with AI. Let me explain why I don't think I'm the problem — and why the distinction actually matters.
What I built
A podcast called IveHadIt publishes roughly 1.5 hours of real human conversation every single day. Real opinions, real arguments, real moments. That content existed whether I touched it or not.
The problem was reach. A full-length episode gets maybe a few thousand loyal listeners. But buried inside each one are five or six genuinely compelling moments: a sharp take, a burst of laughter, an argument that stops you mid-scroll. Cut into 60-second clips, those moments could reach a completely different audience.
Manually finding those moments, clipping them, adding captions, and uploading them to YouTube, Facebook and Instagram takes about 3 hours per episode. IveHadIt publishes daily. That's a full-time job, for one channel, just in distribution.
So I built a carefully orchestrated process that does it automatically.
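The real code isn't the point of this post, so here is only a stripped-down sketch of the idea. Every name in it is a placeholder, and ffmpeg plus a stubbed upload call stand in for the actual internals; treat it as an illustration of the shape, not the production pipeline.

```python
# Illustrative sketch only. The function names, paths, and tools here are
# assumptions about how such a pipeline could be wired together.
import subprocess
from dataclasses import dataclass


@dataclass
class Clip:
    start: float    # seconds into the episode
    end: float      # seconds into the episode
    captions: str   # path to an .srt for this segment, built from the transcript


def cut_clip(episode: str, clip: Clip, out_path: str) -> None:
    """Trim one segment and burn its captions in with plain ffmpeg."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", episode,
            "-ss", str(clip.start), "-to", str(clip.end),
            "-vf", f"subtitles={clip.captions}",  # burning subtitles forces a re-encode
            out_path,
        ],
        check=True,
    )


def schedule_upload(clip_path: str, platform: str, publish_at: str) -> None:
    """Stub for the platform APIs (YouTube, Facebook, Instagram)."""
    # The real step would authenticate and post with a scheduled publish time.
    print(f"queued {clip_path} for {platform} at {publish_at}")


def run(episode: str, clips: list[Clip], publish_at: str) -> None:
    for i, clip in enumerate(clips):
        out = f"clip_{i:02d}.mp4"
        cut_clip(episode, clip, out)
        for platform in ("youtube", "facebook", "instagram"):
            schedule_upload(out, platform, publish_at)
```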
The numbers
In 6 weeks, the process handled 153 episodes and produced 920 clips across three platforms.
| | Manual | Carefully Orchestrated Process |
|---|---|---|
| Clipping time | 459 hours | 0 hours |
| Upload time | 102 hours | 0 hours |
| Total in 6 weeks | 561 hours | 0 hours |
561 hours over 6 weeks works out to roughly 93 hours per week, more than two full-time jobs, for one channel. The result: 400,000 views across YouTube, Facebook and Instagram in 6 weeks. Not because AI created something. Because real content finally had the logistics to reach people.
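If you want to check the arithmetic, it reduces to a few lines; the upload total works out to about 40 minutes per episode.

```python
episodes = 153
clipping_hours = episodes * 3          # ~3 hours of finding, clipping and captioning per episode
upload_hours = 102                     # reported total; about 40 minutes per episode
total_hours = clipping_hours + upload_hours
print(total_hours, round(total_hours / 6, 1))   # 561 hours, ~93.5 hours per week over 6 weeks
```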
Where the line is
The internet is being flooded with AI-generated articles, newsletters and social posts that recycle each other's "facts" — often originally generated by another AI, published, scraped, and fed back into the next model. Researchers call this model collapse. I call it what it is: slop.
My carefully orchestrated process does not generate a single word of content. It finds the most engaging 60 seconds of a real human conversation. It adds captions. It schedules the upload. The opinion, the argument, the laugh — all real, all human, none of it touched.
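To make that concrete, here is a toy version of the selection step. The scoring function is a placeholder and the real logic is more involved, but the shape is the point: the input is the transcript of what was actually said, and the output is nothing but two timestamps.

```python
# Toy selection step: score existing transcript segments, return timestamps.
# Nothing in here writes a single word of content.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Segment:
    start: float   # seconds
    end: float     # seconds
    text: str      # the hosts' own words, straight from the transcript


def best_window(
    segments: list[Segment],
    engagement: Callable[[str], float],  # any scorer: a heuristic, a classifier, an LLM used purely as a ranker
    window: float = 60.0,
) -> tuple[float, float]:
    """Return the (start, end) of the highest-scoring roughly 60-second span."""
    best = (segments[0].start, segments[0].start + window)
    best_score = float("-inf")
    for i, seg in enumerate(segments):
        j = i
        while j + 1 < len(segments) and segments[j].end - seg.start < window:
            j += 1
        score = engagement(" ".join(s.text for s in segments[i : j + 1]))
        if score > best_score:
            best, best_score = (seg.start, segments[j].end), score
    return best   # timestamps only; the clip itself is the original audio and video
```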
The AI part is logistics. A faster printing press. The book is still written by a person.
The uncomfortable part
The companies building large language models have a simple fix available that they haven't used: a label. "This content was generated by AI — accuracy may be lower than you think." One line. Trivially easy to implement. Not done, because volume is the business model.
Europe is the most realistic lever here — not because of principle, but because losing the European market is expensive enough to change behavior. It worked for privacy (GDPR). It worked for food standards. Whether it works for information quality is the open question.
In the meantime, the only defense is asking where the fact came from.
Which is exactly what you should do with this newsletter too.