2 min read

The ethics of journalism tech

On the lecture circuit, I almost never repeat the same slide deck twice. Usually, events move too quickly for the exact lesson to be replayed.

The one exception: a discussion about the ethics of journalism technology, which I have probably delivered 10 times, in class, online, in the U.S. and abroad, as a solo act, on panels, and with different individual partners. (Here is one version of the talk, from 2025.) Despite the changes in format and audience, the message has been unusually consistent. Why? Because, as an industry, we have not learned what it means to be ‘tech literate.’ We still too often accept tech on the terms it has been offered, and too often those terms are ill-suited to our needs or are changed unilaterally, harming our business and communities.

It is not Silicon Valley’s responsibility to protect us from our own bad decisions, so we need to rethink the role of technology in our organizations. AI is just the latest, but very relevant, example.

Some highlights of the argument:

Journalism is different. It is designed to inform, is expensive to create, exists to serve a community, and must both minimize harm and maximize benefits.

We have too often adopted the industrial logics of technology. Big tech makes small margins on large volumes. It requires scale to be profitable, serving a global audience and chasing profits, not the public good. Journalism serves a community and depends on understanding and trust—neither of which scale effectively.

AI is just the latest tech, and also a unique threat. AI imposes a third wave of disruption on journalism: information is no longer expensive to distribute (the web), access (the mobile web), or create (GenAI). We must rebuild the business and readership without those three defensive moats.

We must ask questions about technology. What is the technology (AI, in this case) good for? What are the costs and benefits to our staff, the community, the business, our values and ethics, and society? How does it work; who built it; who is profiting from it? Why, how, and where should we use it? And we must keep in mind that “no” is often a rational and appropriate answer.

We must focus on our core ethics and values. The digital era didn’t rewrite our values, but it reshaped the fault lines along which they surface. For example, “the right to be forgotten” did not exist in a world where the morning newspaper and the evening broadcast were entirely ephemeral. We need to understand how to manage and mitigate these sometimes abstract and often ignored risks.

Harm is caused when we act without intention. Tech becomes embedded in our systems and processes, acts as an intermediary between publisher, reader, and advertiser, and, when its rules are changed unilaterally, imposes significant switching costs or strategic damage.

Change first requires belief. Organizations must sense the danger in not changing and recognize the opportunity to change. They must develop a path forward and believe the path is achievable. And individuals must believe it will benefit them.

And we must remember: “Technology is neither good nor bad; nor is it neutral.” — Kranzberg’s first law.