Kant wept, buried under toxic AI sludge
Immanuel Kant, born in 1724, developed the Categorical Imperative, which we can understand as: If a choice I make were to become a universal law, would I accept that outcome?
For instance, if I decide to lie to gain an advantage, what if everyone were to follow that example? The result if they did: Promises could no longer be trusted, and lying itself would become impossible, since a lie only works where truth is expected.
Kant expressed the concept in several formulations, the first two of which are most relevant:
- Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.
- Act in such a way that you treat humanity, whether in your own person or in that of another, always as an end and never merely as a means.
Which brings us to AI.
This AI-Generated Podcast Network Publishes 11,000 Episodes a Day. It Also Ripped Off Media Outlets.
Durham News Today is one of at least 433 shows by The Daily News Now! (DNN), a podcast network that claims to reach ‘millions of listeners monthly across 150+ U.S. cities and 50+ global cities, with dozens of charting shows across Sports, News, Tech, and Entertainment.’
Indicator couldn’t verify the claims about DNN’s audience. But I found that DNN has published more than 350,000 episodes since January 23. That’s roughly 11,000 episodes a day — or more than a year of content if you played it back-to-back.
Using technology to automate the “rip-and-read” practice of local radio newscasters, a developer is scraping content from news outlets in 200+ cities and generating thousands of hours of podcasts each day.
If Kant were that developer, he might ask of his own actions: Is it acceptable to extract others’ labor and repurpose it at scale without consent if technology makes it possible?
He would then universalize the question: If everyone did this, original reporting would become economically unsustainable. The very profession I am extracting from would collapse. So, the system would undermine its own conditions of possibility.
On his second formulation, Kant might then ask, If AI-generated podcasts monetize or appropriate local reporting without consent, compensation, or partnership, am I then treating these local journalists as merely a tool, the raw material in my quest for profit?
For Kant, that would be morally impermissible regardless of scale or efficiency gained.
Kant was a deontologist, whereas later philosophers were consequentialists who would take a somewhat different approach to the question. So if the AI developer were John Stuart Mill, he might ask whether the public benefit outweighs the public harm:
Do my automated podcasts significantly expand access to local information and overall civic welfare? If so, and if any direct harm I may cause to local journalists is minimal, then my 11,000 daily podcasts could be justified.
But he would consider: What if my practice degrades the economic viability of local reporting overall, and ultimately reduces the production of trustworthy information? In that case, it is morally wrong — not because I am using journalists as a means to an end, but because my service will lead inevitably to worse outcomes for society.
Two key takeaways here:
- Scale matters. An act limited in scope and impact by the technologies of the industrial age is different from the same act performed at the frictionless, borderless scale of the digital age.
- This is why technology CEOs don’t think colleges should teach the humanities.