Artificial Intelligence is likely a bad news editor
I was struck by a minor existential dread recently while testing an AI-driven news aggregation app in my quixotic 10-year search for a half-decent Zite replacement. The new app in question surveys hundreds of publications and articles, taking measures of quality and political leaning into account.
- The summaries in the app were fine and typical of many similar efforts.
- But it quickly became apparent that “news” cannot be accurately understood as an average of publicly available reports.
- And so the dread: a news ecosystem dominated by AI-generated summarization is going to be an absolute nightmare.
On that first point: there is a vast difference between acquaintance with a fact and knowledge about it. I am acquainted with the existence of Integral Calculus. I can neither teach, explain, nor practice it. A bullet-point summary confers a mere awareness of complex news events. And the “voice from nowhere” of an AI-authored briefing further attenuates the chance of actual knowledge creation.
That is generally true of this not-exactly-new product category of news aggregation, curation and summarization. (Which everyone, most recently including Apple, is chasing.)
On the second, more important point:
News also cannot be understood when received simply as a summarization of the available facts, assertions, opinions and arguments from different reporters, newsrooms and political perspectives.
News in and of itself is not a neutral report of observed events. Rather, it is the end result of a process of inquiry that inevitably prioritizes certain topics, perspectives, sources and narratives. To be a “journalist” is to endlessly obsess over pursuing that work in the most fair, accurate and transparent manner possible. And because not all newsrooms are the same, MSNBC and OANN (for example) are not equivalently reliable guardians of that process or its outcomes.
An AI summary confined to the New York Times’ (or any other single newsroom’s) coverage of a topic may be an imperfect briefing, but it at least represents an internally coherent worldview and editorial voice. Imagine synthesizing coverage from National Geographic and the Flat Earth Digest. To what end? The truth is not always easy to determine, but it is not the average of collected assertions.
Making sense of the news is about more than reading a single story on a topic, and it is definitely more than reading an AI-generated summary of a dozen articles written by different journalists with different publications, audiences and perspectives. This trend toward summarization is an attempt to optimize for readers’ time, but done poorly, it will come at the expense of understanding.