
My calculator also had no comment

From Reuters today:

Grok says safeguard lapses led to images of ‘minors in minimal clothing’ on X

Jan 2 (Reuters) – Elon Musk’s xAI artificial intelligence chatbot Grok said on Friday that lapses in safeguards had resulted in “images depicting minors in minimal clothing” on social media platform X, and that improvements were being made to prevent this. (Link)

GenAI is not sentient; it did not say anything, and technically it did not do anything. But AI outputs sentences that resemble human speech, so we are prone to fall into the trap of narrative pareidolia.

Journalism is prone to this sort of middle voice: technically accurate, but leaning into the passive and ambiguous and exonerating. The technological determinism we often see in reporting on AI is similar — anthropomorphization that abstracts responsibility away from corporations.

I look forward to more from this genre:

Jan 2 (Reuters) – A 1973 Fiat Pinto said on Friday that certain design choices occasionally resulted in fuel system behavior that produced unintended thermal events.

Jan 2 (Reuters) – A Boeing 787 Dreamliner said on Friday that flight control system behaviors, including unexpected autopilot and control mode interactions documented in recent malfunctions, reflected certification-era assumptions that did not account for all combinations of inputs.

Jan 2 (Reuters) – A switching station in El Paso said on Friday that long-standing market structure and infrastructure assumptions, including isolation from interstate connections and variable generation capacity, had amplified the impact of recent supply and demand disturbances.