Earlier this year, I posted (not for the first time) an important question: How soon might humans be replaced at work? Recent layoffs from Amazon, Salesforce and other tech companies might seem to provide an answer, especially as these companies have explicitly blamed AI for making people redundant. However, other explanations are available, and Danielle Kaye's article The AI job cuts are here - or are they? includes several sceptical voices, noting that business has always experienced cycles of hiring and firing. Management may be happy to take the credit for the former while avoiding responsibility for the latter, so AI may provide a convenient excuse.

Furthermore, explanations of this kind are designed for several different audiences, including investors, customers, and the surviving workforce. Employees who remain may get the message that their jobs are also at risk unless they come to terms with the corporate appetite for AI. Amazon employees themselves are protesting this - see www.amazonclimatejustice.org (HT Karen Hao).

Meanwhile, tech companies themselves have a vested interest in hyping the value (or potential value) of AI, and therefore claim to have gained productivity benefits from their own internal AI deployments. For further scepticism on this point, see recent articles by Lindsay Clarke (who was also quoted in my February 2025 post).
Lindsay Clarke, Oracle goes all-in on AI, customers still figuring out how they'll use it (The Register, 16 October 2025)
Lindsay Clarke, AI layoffs to backfire: Half quietly rehired at lower pay (The Register, 29 October 2025)
Danielle Kaye, The AI job cuts are here - or are they? (BBC News, 29 October 2025)

Related posts: Data-Driven Data Strategy (February 2025), How soon might humans be replaced at work? (July 2025)
At City St George's, University of London this week for a discussion on Truth, Trust and Tricksters in the Age of AI, organized by Index on Censorship, the Institute for Creativity and AI, and Global (Dis)Order.

One of the panelists, Kenneth Cukier of the Economist, recounted his attempt to get an AI tool to produce an image to illustrate a point about silo-busting. The tool refused on the grounds that this would be an image of destruction. Cukier regarded this as an example of censorship - the tool was denying his ability to express his ideas freely. As I suggested in the subsequent discussion, we might also regard this as the tool censoring itself. Self-censorship has been a feature of many authoritarian regimes in the past, and is perhaps emerging in new forms today, especially as people may fear the consequences of their words being taken out of context and blasted around the Internet.

One of the features of agentic AI is that the AI device is presented as an autonomous agent. It may appear to be working for Cukier, but it is ultimately controlled by whichever big tech assemblage has developed and disseminated it. Which brings me to a critically important question I have asked several times previously - Whom Does The Chatbot Serve? (Towards Chatbot Ethics, May 2019)

In any case, all communication and creation is necessarily selective. However many images the AI tool does in fact produce, there is a much larger quantity of possible images it has decided or been programmed not to produce. There must be huge amounts of material that the editors of the Economist choose not to publish, for whatever reason, and it would be absurd to frame all these editorial decisions as forms of censorship or self-censorship (although of course some of them may well be). However, attempts to reduce the quantity and reach of misleading content will often be framed as censorship by those promoting such content, as Jacob Weisberg's latest article demonstrates. There are some difficult issues here, and it is sometimes hard to avoid taking a political side.

See also Thinking with the Majority (May 2021), Amplification and Attenuation (October 2021)

Jacob Weisberg, Algorithm Nation (New York Review of Books, 23 October 2025)
In his latest article, David Robert Grimes traces the history of the anti-vaccine movement. Ever since Edward Jenner's early experiments, using a relatively mild disease (cowpox) to protect against a much more serious one (smallpox), people have expressed scepticism, fear, scorn and outright opposition to all forms of vaccination. Vaccine hesitancy has increased significantly in the last few years, especially during and after the COVID-19 pandemic, and Jim Reed's article also notes that the sheer quantity of vaccinations being pushed onto people has resulted in a degree of vaccine fatigue, even among NHS workers.

Those who believe in the efficacy of vaccines, and in the important contribution that vaccination makes to public health, tend to see the anti-vaccine movement as fueled by conspiracy theories, immune to scientific argument because the adversary in this game plays according to rules that are not generally those of science (WHO 2007).

In relation to another area that has provoked strong opposition in some quarters, the idea of eating insects as a source of protein, Riley Farrell's article quotes Stephan Lewandowsky, who suggests arguments based not on the content of the beliefs but on their purpose. "You're not going to be successful if you say, Uncle Bruce, you're crazy… don't believe this utter nonsense. But instead, you can ask: What function do your beliefs serve? Why are you believing this?"

Many politicians and internet celebrities take strong positions on vaccines, bug eating and other topics, and some of these positions may be cynically driven by the desire to build support and revenue rather than by their own private beliefs - for example, vaccinating their own families while attacking vaccines for everyone else. For such people, the purpose of these positions may be clear, although they probably won't acknowledge it. But as for Uncle Bruce, it's not at all clear what kind of answer Professor Lewandowsky would expect or accept, or what arguments this would lead to.

Underpinning all of these movements is a distrust of authority, especially governments, big business and scientists. And yet a willingness to trust the biggest businesses on the planet - the tech platforms and their Generative AI tools that add fuel to these theories, and generate income for themselves. Obviously.
Riley Farrell, How eating insects became a conspiracy theory (BBC, 4 September 2025)
David Robert Grimes, The strange history of the anti-vaccine movement (BBC, 5 September 2025)
Jim Reed, Rise of vaccine distrust - why more of us are questioning jabs (BBC, 16 January 2025)
WHO Bulletin, 27 November 2007, 86(2):140–146. doi: 10.2471/BLT.07.040089
As previously noted on this blog, lists may be constructed for various purposes, but the list then becomes a thing in its own right. One of the open questions of our time appears to be the existence or non-existence of a list, supposedly maintained by Jeffrey Epstein, possibly with the assistance of Ghislaine Maxwell. And the presence or absence of certain names on this list, if it exists. Interviewed in prison recently, Maxwell has denied the existence of such a list. Fintan O'Toole notes the obsession of conspiracy theorists with the supposed existence of documentary evidence.
This naive faith is the other side of the American paranoid imagination. Even while it conjures the vast potency of the conspirators, it also takes it for granted that, inside the archives of the deep state, they have carefully preserved detailed proof of their plots to assassinate JFK, hide the visitations of aliens, and enable the satanic child abusers. Crackpot realism has a strange trust in the bureaucracy. In it, that most dully bureaucratic of words—files—becomes a magic elixir of truth.

But surely the more important question is about the relationships that Epstein maintained with a number of wealthy and well-connected people, and the extent to which he had any kompromat over them. Not whether he kept all their names in a grubby little notebook, like he was a villain in a B-movie. If the list only ever existed in Epstein's head, as suggested by the satirical website Newsbiscuit, does that count?
Wikipedia: Jeffrey Epstein client list
Luc Cohen, Andrew Goudsward and Jack Queen, Ghislaine Maxwell told DOJ she is unaware of any Epstein 'client list' (Reuters, 23 August 2025)
Fintan O'Toole, 'A Guy Who Never Dies' (New York Review of Books, 12 August 2025)
Jeffrey Epstein's amazing memory (Newsbiscuit, 29 August 2025)
Interesting piece by Deena Prichep, in which clergy agonize over the ethics of using a chatbot to construct a sermon. The first point is that it is easy - perhaps too easy. ChatGPT currently advertises its sermon-writing services as follows: "Your preaching companion. Transform Your Message into Impactful Sermons. Just provide your topic, choose from three tailor-made outlines, and let's co-create a captivating sermon. Fully adaptable to your congregation's needs - denomination, duration, tone, and language."
And for busy clergy the results seem almost touched by the Holy Spirit (aka Ghost in the Machine). Prichep quotes a Lutheran pastor whose first reaction was "Oh my God, this is really good". (I may be doing my own research here, but I think there may be something in the Bible about taking the name of the Lord in vain.)

But just because you can doesn't mean you should. One of the arguments in favour of letting a large language model write your sermons for you is that it frees up your time to do more important things, like pastoral care. But are these things really more important? Brad East argues (following Calvin) that the primary task of ministry is the service of Word and sacrament, and that the use of Artificial Intelligence shortchanges something essential. So the underlying principle here seems to be that it might be okay to use AI tools for less important tasks but not for your most important task.

However, there are some other issues with the use of AI tools, including the environmental cost. And East notes the possibility that large language models might fabricate material as well as push a particular agenda, although one might think preachers have always been able to do this without the aid of technology.
Brad East, AI Has No Place in the Pulpit (Christianity Today, 27 September 2023)
Deena Prichep, We asked clergy if they use AI to help write sermons. Here's what they said (NPR, 17 July 2025) HT Carissa Véliz
Deena Prichep, Encore: Religion and AI, what does it mean when the word of God comes from a chatbot? (NPR, 19 July 2025)
John Rector, The Ghost in the Machine (19 June 2024)
Brad Turner, Beatitudes or Platitudes (Milton Church of Christ, 19 December 2021)