
Measuring and explaining reader engagement

From the Columbia Journalism Review on July 30, 2019

We need to know the “what” and the “why” to make sense of the data

UPDATE: Not claiming any credit here as Chartbeat often does explainer blog posts — but here are some details on the Chartbeat data from Su Hang, the lead data scientist on the climate coverage study: Taking the temperature of climate change engagement: Our analysis

That is an interesting and engaging headline. Great if true, and I don’t dispute that it is. Read the full story here.

But I would like to see the footnotes:

https://twitter.com/dkiesow/status/1156506094556065792

Josh Schwartz from Chartbeat quickly and helpfully responded that as the data is all owned by the respective publishers, they can’t share too many details.

Totally fair. But the finding that “time spent” on climate stories almost doubled between 2017 and 2019 is an extraordinary claim, and it requires further supporting detail if the industry is going to make sense of the findings. That matters not just for these specific climate stories but for the use of similar metrics in the future.

And the responsibility here is really not Chartbeat’s. They have access to a great mass of data but there are limits to how much they can share publicly.

I would really like to hear more from CJR about how they understood and contextualized the data in the story. I trust the measurements and math Chartbeat provided. But we don’t know enough to trust the interpretation.

A few immediate questions I had that I would like to know more about:

  1. Is the “time spent” metric total time rather than an average per story? If so, more coverage would result in more total time spent even if engagement on individual stories remained relatively low.
  2. The NYT and Guardian (referenced in the story) both increased their coverage. Did a few large properties skew the metric or was it evenly distributed? Josh (above) indicates they felt it was consistent.
  3. How were stories coded as “climate-related”? If news organizations are mentioning the climate crisis more often in weather stories, sports stories, or political stories, is that an increase in “time spent” on “climate,” or an increase in the media’s contextualizing of the issue? (Which would also be great.)
  4. Were climate-related stories promoted differently between 2017 and 2019, or was climate simply bigger news recently (fires, heat, flooding, as in question #3)? Does that mean readers are engaging more with the topic, or that they are being exposed to it more by how the media covers it?
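Question 1 can be made concrete with a toy calculation. The numbers below are entirely made up for illustration; they are not from the Chartbeat data. The point is only that a total-time metric can nearly double from volume alone, with per-story engagement flat:

```python
# Toy illustration (hypothetical numbers, not Chartbeat's actual data):
# total "time spent" can rise sharply even when per-story engagement is
# unchanged, simply because the number of stories grew.

def total_and_average(num_stories: int, seconds_per_story: float):
    """Return (total time spent, average time per story) in seconds."""
    total = num_stories * seconds_per_story
    return total, total / num_stories

# Hypothetical 2017: 100 climate stories, 60 s average engagement each
total_2017, avg_2017 = total_and_average(100, 60)

# Hypothetical 2019: 190 climate stories, same 60 s average engagement
total_2019, avg_2019 = total_and_average(190, 60)

print(total_2019 / total_2017)   # 1.9 — total time nearly doubles
print(avg_2019 == avg_2017)      # True — per-story engagement unchanged
```

Under these assumed numbers, a headline claim of “time spent almost doubled” would be true as stated while telling us nothing about whether readers engaged more with any individual story, which is why the total-vs-average distinction matters.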

That’s it. The story here is really interesting — people are paying more attention to an important topic. But I would love to hear more that explains the “why,” not just the “what.”

Also published on Medium.