Tech Utopianism as Dystopia
I am at a journalism AI conference this week, and the opening keynote / provocation (my description) came from an R&D executive at a Fortune 100 business.
The vision of the future he delivered was bleak. To wit: businesses that invest in AI are going to make a lot of money. Humans? Well, they will be around.
The future, apparently, is a society made great by the optimization of data, technological innovation, and the automation of almost everything: driving, thinking, living. Or as the theology is couched: “supported by.”
But it is a philosophy built completely the wrong way around, one in which technology and its evangelists are always good, correct, and successful in their aims. Of course, whole books have been written documenting how often these promises have proven false.
Systems that cut humans out of the loop? Well, cars without steering wheels are fragile infrastructure. They are great until the power goes out.
Proof of corporate strategic success measured primarily by valuations? Great until the bubble bursts.
High-density surveillance of employees and customers to drive optimization? Actually, this has never been great, but it gets worse when the tools are hijacked by authoritarians.
Apparently, the future Silicon Valley is planning for is not entirely devoid of humanity, but one where the needs of people are served as a lucky byproduct of corporate optimization.
And they are not asking; they are selling.