Dictators: why heroes slide into villainy

What do Vladimir Putin, Fidel Castro, and Kim Il-sung have in common? All took power promising change for the better: a fairer distribution of wealth, an end to internal corruption, limited foreign influence, and future peace and prosperity. To an extent, they all achieved this, but then things began to go wrong. Corruption raged in Putin’s Russia; millions fled grinding poverty and political persecution in Castro’s Cuba; and Kim’s Korea launched a disastrous war that, to this day, makes reunification with the now-wealthy South a distant dream. Rather than admitting failure and handing power to the next generation, each autocrat hung on, long outstaying his welcome. And we can add Joseph Stalin, Benito Mussolini, Hugo Chavez, Zine El Abidine Ben Ali, Muammar Gaddafi, and Daniel Ortega to the list. So, why does this happen so often?

From triumph to tyrant

A new model, developed by Professor Kaushik Basu of Cornell University and published in Oxford Open Economics, simulates the decisions national leaders face. It reveals positive-feedback conditions that lead to escalating acts serving the autocrat’s self-interest and preservation of power rather than the best interests of the country.

“My paper was provoked by a personal encounter with Daniel Ortega of Nicaragua,” Professor Basu, former chief economist of the World Bank, recalls. “When I met him in September 2013 he still had the aura of a progressive leader, but subsequently morphed into a kind of tyrant that I would not have predicted. I wrote up the algebra to explain this transformation, and realized I had hit upon an argument that explains the behaviour of a large number of authoritarian leaders around the world.”

It can take many years of struggle for a dictator to reach office and achieve their political ambitions. Reluctance to risk this work in progress with a popular vote after a mere four- or five-year term leads to actions such as intimidating, imprisoning, or assassinating political rivals; silencing the press; financial corruption and tax evasion; and influencing or corrupting the judiciary. Such tactics work, in the short term at least, but opponents can’t be silenced forever: truth finds a way, often supported by domestic or foreign antagonists. As these controlling behaviours escalate, it becomes clear that relinquishing power would lead to legal persecution or imprisonment: just ask Chilean autocrat General Pinochet, arrested after leaving office, or Uganda’s Idi Amin, forced into exile in Saudi Arabia.

Basu’s algebraic model shows that, after a threshold of bad behaviour is passed, no amount of do-gooding can undo the damage: the only choice left to a dictator is to tighten his grip on power, especially if, beyond national justice, lie international courts and tribunals. For example, Sudan’s former leader Omar al-Bashir has been wanted by the International Criminal Court since 2009.

Democratic policy matters

The model explains why two-term limits have evolved as a popular way to curb such damage: they provide a limited time for unsavoury actions to accumulate, a single opportunity for re-election, and better options for leaving office peaceably. So, could this common system be more widely employed to prevent dictatorships? “A globally-enforced term limit is an important step, but may not be enough,” says Basu, who half-jokingly suggests that creating an easy exit for dictators, such as offering them a castle on a Pacific island, could also be effective.
Joking aside, the paper has important implications for current US politics. Former US president Donald Trump faces 91 felony counts across four criminal cases. Is the barrage of legal indictments boxing him into a corner where he has no option but to regain office, and then use its powers to protect himself, just as the model predicts? At a time when authoritarian regimes are on the rise—the 2024 Economist Intelligence Unit report shows that 39.4% of the world’s population is under authoritarian rule, up from 36.9% in 2022—the need is greater than ever to promote policies that could halt the slide.

Feature image by Peterzikas, via Pixabay. Public domain.
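A footnote for readers who want to see the feedback mechanism spelled out: the sketch below is a deliberately crude illustration of the “point of no return” logic described above, not Basu’s actual model. Every payoff, the indictment threshold, and the time horizon are invented for the example; the only point is that once accumulated misdeeds pass the level at which leaving office means prosecution, no amount of later good governance restores a safe exit, so tightening one’s grip dominates.

# Toy illustration of the "point of no return" logic, with invented payoffs.
EXIT_REWARD = 10.0       # value of a peaceful, safe exit from office
PROSECUTION_COST = 50.0  # expected cost of trials or imprisonment after leaving
INDICTMENT_LEVEL = 3.0   # accumulated misdeeds beyond which prosecution follows any exit
GOOD_GOVERNANCE = 4.0    # goodwill earned by a spell of "do-gooding" before exit
STAY_RENT = 2.0          # per-period benefit of clinging to power
HORIZON = 5              # periods of rents an entrenched leader expects to extract

def exit_payoff(misdeeds):
    """Leaving is safe only while the record stays below the indictment level."""
    penalty = PROSECUTION_COST if misdeeds > INDICTMENT_LEVEL else 0.0
    return EXIT_REWARD - penalty

def best_option(misdeeds):
    options = {
        "leave now": exit_payoff(misdeeds),
        # Do-gooding earns goodwill but cannot erase the record, so any penalty still applies.
        "govern well, then leave": GOOD_GOVERNANCE + exit_payoff(misdeeds),
        # Entrenching postpones the reckoning; valued here as a run of future rents.
        "tighten grip": STAY_RENT * HORIZON,
    }
    return max(options, key=options.get), options

for misdeeds in (0, 2, 4, 6):
    choice, options = best_option(misdeeds)
    summary = ", ".join(f"{name}={value:.0f}" for name, value in options.items())
    print(f"misdeeds={misdeeds}: {summary} -> {choice}")

With these made-up numbers, a leader with a modest record does best to govern well and step down, while one past the hypothetical indictment level finds that only tightening his grip pays, which is the feedback loop the paper formalizes algebraically.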
State-supported Covid-19 nudges only really worked on the young

Who says young people never listen? A study in Sweden examining responses to state-backed nudges to get Covid-19 vaccination appointments booked has found that 16- to 17-year-olds responded much more strongly to prompts by letter, text, or email than 50- to 59-year-olds. At first glance, the result, published in an article in Oxford Open Economics, is paradoxical. Older people, who are much more vulnerable to the virus, should have signed up more readily than younger people, whose lives are much less at stake, but the reverse was the case. In fact, the study is consistent with the theory that nudges are more effective for decisions that don’t really matter to the person being nudged.

Politics, populations, and pandemic policies

‘Nudge theory’ has interested researchers and politicians for decades as a means to influence population-level behaviour, such as how to get people to sign up for a donor card or take out a pension earlier in life. And it works. Just ask any marketeer or social media influencer what the impact of a well-timed and skilfully worded notification can be. But the nudge effect is limited and often misses the target population.

Niklas Jakobsson from Karlstad University and colleagues saw an opportunity to empirically test nudge theories using data collected by the 21 regions of Sweden during the Covid-19 pandemic. While 20 regions sent out letters directing people to book their jabs by phone or online, one—Uppsala—nudged its citizens by summoning them to pre-booked appointments. The researchers matched vaccination rates in each age group in Uppsala against a synthetic control devised from a weighted combination of all the other Swedish regions. The results showed that Uppsala’s nudged, pre-booked appointments had only a small (if any) effect on older people, but boosted vaccination uptake among younger people by 10%. “It makes sense that people who benefit a lot from vaccinations will take them even without a nudge,” Jakobsson explains. “For younger people, vaccinations were not as important and thus they were more affected by the nudge.”

When should a nudge become a push?

What is the behavioural mechanism behind such a counterintuitive finding? One line of reasoning suggests that you can’t really nudge people into doing something they don’t want to do when their volition—that is, the power of free will—is high. However, you can change behaviour when people don’t really care either way, that is, when volition is low. According to Kahneman’s theory of fast and slow thinking, once the fast, unconscious ‘System 1’ cognitive processes are passed, nudges get held up in the slower, deliberative ‘System 2’ thoughts, and are less effective. “People do actually think through their decisions and do not act randomly,” says Jakobsson. “The nudges that are most likely to work are changes of defaults that clearly decrease the costs of making a decision.”

Policymakers could take a closer look at this study to work out what they are doing right, as well as wrong, when developing future nudges. “This nudge worked, but more so for younger people,” says Jakobsson. “So, it could be a way to somewhat increase vaccinations at a low cost.”

Feature image by tortensimon via Pixabay, public domain.
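The synthetic-control step mentioned above can be illustrated with a short sketch. This is not the authors’ code or data: the regional outcomes below are randomly generated, and a 10-point effect is baked in by assumption purely to show the mechanics. The weights on the untreated regions are chosen so that their weighted average tracks the treated region’s pre-intervention uptake as closely as possible, and the post-intervention gap between the real and synthetic series is then read as the effect of the pre-booked appointments.

# Sketch of a synthetic-control comparison with made-up data (not the study's data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Pre-period uptake: rows = weeks, columns = untreated regions (hypothetical numbers).
pre_controls = rng.uniform(0.3, 0.6, size=(8, 5))
true_weights = np.array([0.4, 0.3, 0.2, 0.1, 0.0])
pre_treated = pre_controls @ true_weights + rng.normal(0.0, 0.005, size=8)

# Post-period uptake, with an assumed 10-point boost for the treated region.
post_controls = rng.uniform(0.5, 0.8, size=(4, 5))
post_treated = post_controls @ true_weights + 0.10

def pre_period_mismatch(w):
    """Squared gap between the treated region and the weighted control average."""
    return np.sum((pre_treated - pre_controls @ w) ** 2)

n_regions = pre_controls.shape[1]
result = minimize(
    pre_period_mismatch,
    x0=np.full(n_regions, 1.0 / n_regions),
    bounds=[(0.0, 1.0)] * n_regions,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
weights = result.x

effect = np.mean(post_treated - post_controls @ weights)
print("estimated control weights:", np.round(weights, 2))
print(f"estimated effect of the nudge: {effect:.3f}")

Restricting the weights to be non-negative and to sum to one keeps the synthetic region an average of real regions rather than an extrapolation beyond them, which is the usual rationale for this kind of design.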
Unlocking the Moon’s secrets: from Galileo to giant impact

It is a curious fact that some of the most obvious questions about our planet have been the hardest for scientists to explain. Surely the most conspicuous mystery in paleontology was “what killed the dinosaurs.” Scores of hypotheses were proposed, ranging from the sublime to the ridiculous and including such delusions as “senescent overspecialization, lack of standing room in Noah’s Ark, and paleoweltschmerz.” Not until 1980 did a hypothesis catch on—the Alvarez theory of dinosaur extinction by meteorite impact—and it is still not fully accepted today, 182 years after Richard Owen named the terrible lizards.

In the nineteenth century, scientists were preoccupied with explaining the ice ages, when vast sheets of ice marched from the poles, obliterating everything in their path, only to retreat—and repeat. Not until the 1970s did scientists discover that cyclical changes in Earth’s orientation in space and distance from the Sun caused the great glaciers to wax and wane.

The first maps of the Atlantic Ocean in the sixteenth century showed that the facing coastlines of Africa and South America fit together like two pieces of a jigsaw puzzle. Geologists rejected Alfred Wegener’s 1912 explanation that continental drift had ripped a giant protocontinent asunder. It took until the mid-1960s and the discovery of plate tectonics to explain the jigsaw fit. This led to the realization that the continents and ocean basins do not stay in one place, but move constantly, albeit at the minuscule rate at which our fingernails grow.

Or consider one of the most conspicuous features of our planet, America’s Grand Canyon, plainly visible from space. In the nineteenth century, its cause seemed obvious: the land underneath the Colorado River had risen, causing the muddy stream to incise itself ever deeper in its channel in order to keep up with the uplift. But by the mid-twentieth century, newly discovered facts about the age of the canyon led scientists to finally reject the old hypothesis.

Every field of science has puzzles that, despite being obvious, are exceedingly difficult to solve. Witness the Moon, the most viewed object in the sky, whose largest features we can see with the naked eye. Where did it come from, and why does it hang there in space, the same face always turned toward us? The very first person to view the Moon through a telescope, the great Italian father of science, Galileo, saw that its surface was not smooth and regular, as the Greeks had supposed, but is marked by pits that came to be called craters. Another great scientist, Robert Hooke, unaware of the existence of meteorites, conducted experiments indicating that the forces that had created the craters had come from below, from volcanism, a view that persisted for centuries. Isaac Newton was able to answer the question of why the Moon neither crashes into Earth nor flies off into space: the Moon moves just fast enough that Earth’s gravity bends its path into an orbit, so it perpetually falls around the planet rather than onto it. Another great of the Enlightenment, the German philosopher Immanuel Kant, pointed out that Earth’s gravity, acting through tides, would slow the Moon’s rotation until eventually it kept the same face turned to the Earth.

Sometimes the answer to a scientific question cannot even be conceived because it depends on some fact yet to be discovered or some research method yet to be invented.
Hooke realized that an object descending from above could have created lunar craters, but as far as he knew, the sky contained no such objects. Another reason scientists are slow to adopt a new idea is that they have become wedded to a particular dogma or theory and are unwilling to change their minds. Meteorite impact on the Moon was proposed in the 1870s, but it took nearly a century and spacecraft voyages to the Moon before scientists were willing to entertain it. Both meteorite impact and continental drift were resisted in part because they violated uniformitarianism: the belief that geological processes are gradual rather than catastrophic. Most geologists were unwilling to give up this fundamental principle, learned at their professor’s knee.

I have always been fascinated by how scientists have handled these great questions, both because I would like to know the answers myself and because they reveal so much about how science really works, as opposed to the idealized scientific method that we learned about in high school. I hope that readers will be interested in my latest effort to make science accessible: Unlocking the Moon’s Secrets: From Galileo to Giant Impact. You will learn how our seemingly placid and unchanging heavenly companion was born in the most colossal act of violence in the history of the solar system.

Feature image by Helen Field, via iStock.
Is humanity a passing phase in the evolution of intelligence and civilisation?

“The History of every major Galactic Civilization tends to pass through three distinct and recognizable phases, those of Survival, Inquiry and Sophistication…” Douglas Adams, The Hitchhiker’s Guide to the Galaxy (1979)

“I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.” Geoffrey Hinton (2023)

In light of the recent spectacular developments in artificial intelligence (AI), questions are now being asked about whether AI could present a danger to humanity. Can AI take over from us? Is humanity a passing phase in the evolution of intelligence and civilisation? Let’s look at these questions from the long-term evolutionary perspective.

Life has existed on Earth for more than three billion years, humanity for less than 0.01% of this time, and civilisation for even less. A billion years from now, our Sun will start expanding and the Earth will soon become too hot for life. Thus, evolutionarily, life on our planet is already reaching old age, while human civilisation has just been born. Can AI help our civilisation to outlast the habitable Solar system and, possibly, life itself as we know it presently?

Defining life is not easy, but few will disagree that an essential feature of life is its ability to process information. Every animal brain does this, every living cell does this, and, even more fundamentally, evolution is continuously processing information residing in the entire collection of genomes on Earth, via the genetic algorithm of Darwin’s survival of the fittest. There is no life without information.

It can be argued that until very recently on the evolutionary timescale, i.e. until human language evolved, most information that existed on Earth and was durable enough to last for more than a generation was recorded in DNA or in some other polymer molecules. The emergence of human language changed this; with language, information started accumulating in other media, such as clay tablets, paper, or computer memory chips. Most likely, information is now growing faster in the world’s libraries and computer clouds than in the DNA of all genomes of all species. We can refer to this “new” information as cultural information, as opposed to the genetic information of DNA.

Cultural information is the basis of a civilisation; genetic information is the basis of the life underpinning it. Thus, if genetic information got too damaged, life, cultural information, and civilisation itself would soon disappear. But could this change in the future? There is no civilisation without cultural information, but can there be a civilisation without genetic information? Can our civilisation outlast the Solar system in the form of AI? Or will genetic information always be needed to underpin any civilisation?

For now, AI exists only as information in computer hardware, built and maintained by humans. For AI to exist autonomously, it would need to “break out” of the “information world” of bits and bytes into the physical world of atoms and molecules. AI would need robots maintaining and repairing the hardware on which it is run, recycling the materials from which this hardware is built, and mining for replacement ones. Moreover, this artificial robot/computer “ecosystem” would not only have to maintain itself but, as the environment changes, would also have to change and adapt. Life, as we know it, has been evolving for billions of years.
It has evolved to process information and materials by zillions of nano-scale molecular “machines” all working in parallel, competing as well as backing each other up, maintaining themselves and the ecosystem supporting them. The total complexity of this machinery, also called the biosphere, is mind-boggling. In DNA, one bit of information takes less than 50 atoms. Given the atomic nature of physical matter, every part in life’s machinery is about as miniature as is possible in principle. Can AI achieve such complexity, robustness, and adaptability by alternative means and without DNA?

Although this is hard to imagine, cultural evolution has produced tools not known to biological evolution. We can now record information as an electron density distribution in a silicon crystal at the 3 nm scale. Information can be processed much faster in a computer chip than in a living cell. Human brains contain about 10¹¹ neurons each, which is probably close to the limit of how many neurons a single biological brain can contain. Though this is more than computer hardware currently offers to AI, for future AI systems this is not a limit. Moreover, humans have to communicate information with each other via the bottleneck of language; computers do not have such a limitation.

Where does this all leave us? Will the first two phases in the evolution of life (information mostly confined to DNA, and then information “breaking out” of the DNA harness but still underpinned by information in DNA) be followed by a third phase? Will information and its processing outside living organisms become robust enough to survive and thrive without the underpinning DNA? Will our civilisation be able to outlast the Solar system, and if so, will this happen with or without DNA?

To get to that point, our civilisation first needs to survive its infancy. For now, AI cannot exist without humans. For now, AI can only take over from us if we help it to do so. And indeed, among all the envisioned threats of AI, the most realistic one seems to be deception and the spread of misinformation. In other words, corrupting information. Stopping this trend is our biggest near-term challenge.

Feature image by Daniel Falcão via Unsplash.
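As a rough footnote to the figures quoted above, the back-of-envelope sketch below compares atoms per bit in DNA with a silicon memory cell. The DNA figure and the 3 nm scale come from the text; the silicon atom density and the cell height are my own assumptions, so the result is only an order-of-magnitude illustration.

# Back-of-envelope comparison of information density; all numbers are approximate.
ATOMS_PER_BIT_DNA = 50       # from the text: one bit takes fewer than ~50 atoms in DNA
SILICON_ATOMS_PER_NM3 = 50   # assumption: crystalline silicon has ~5e22 atoms per cm^3
CELL_SIDE_NM = 3             # from the text: features recorded at the 3 nm scale
CELL_HEIGHT_NM = 30          # assumption: a memory cell is much taller than it is wide

atoms_per_bit_silicon = SILICON_ATOMS_PER_NM3 * CELL_SIDE_NM ** 2 * CELL_HEIGHT_NM
print(f"DNA:     ~{ATOMS_PER_BIT_DNA} atoms per bit")
print(f"Silicon: ~{atoms_per_bit_silicon:,} atoms per bit (under the stated assumptions)")
print(f"DNA is roughly {atoms_per_bit_silicon // ATOMS_PER_BIT_DNA}x more atom-efficient")

Even with generous assumptions for the chip, DNA comes out hundreds of times more compact per atom, which is the point the paragraph above is making.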
Five books to celebrate British Science Week 2023

British Science Week is a ten-day celebration of science, technology, engineering, and maths, taking place between 10 and 19 March 2023. To celebrate, join in the conversation, and keep abreast of the latest in science, delve into our reading list. It contains five of our latest books on plant forensics, the magic of mathematics, women in science, and more.

1. Planting Clues: How Plants Solve Crimes

Discover the extraordinary role of plants in modern forensics, from their use as evidence in the trials of high-profile murderers such as Ted Bundy to high-value botanical trafficking and poaching. In Planting Clues, David Gibson explores how plants can help to solve crimes, as well as how plant crimes are themselves solved. He discusses the botanical evidence that proved important in bringing a number of high-profile murderers such as Ian Huntley (the 2002 Soham murders) and Bruno Hauptmann (the 1932 Lindbergh baby kidnapping) to trial, from leaf fragments and wood anatomy to pollen and spores. Throughout, he traces the evolution of forensic botany and shares the fascinating stories that advanced its progress.

Buy Planting Clues: How Plants Solve Crimes

Take a look at Gibson’s blog on Environmental DNA, as well as John Parrington’s (author of ‘Mind Shift’) blog on what neuroscience can tell us about the mind of a serial killer.

2. The Spirit of Mathematics: Algebra and all that

What makes mathematics so special? Whether you have anxious memories of the subject from school, or solve quadratic equations for fun, David Acheson’s book will make you look at mathematics afresh. Following on from his previous bestsellers, The Calculus Story and The Wonder Book of Geometry, here Acheson highlights the power of algebra, combining it with arithmetic and geometry to capture the spirit of mathematics. This short book encompasses an astonishing array of ideas and concepts, from number tricks and magic squares to infinite series and imaginary numbers. Acheson’s enthusiasm is infectious, and, as ever, a sense of quirkiness and fun pervades the book.

Buy The Spirit of Mathematics: Algebra and all that

To learn more, discover our Very Short Introductions series, including editions about Geometry, Algebra, Symmetry, and Numbers.

3. Not Just for the Boys: Why We Need More Women in Science

Why are girls discouraged from doing science? Why do so many promising women leave science in early and mid-career? Why do women not prosper in the scientific workforce? Not Just for the Boys looks back at how society has historically excluded women from the scientific sphere and discourse, what progress has been made, and how more is still needed. Athene Donald, herself a distinguished physicist, explores societal expectations during both childhood and working life using evidence of the systemic disadvantages women operate under, from the developing science of how our brains are—and more importantly aren’t—gendered, to social science evidence around attitudes towards girls and women doing science.

Buy Not Just for the Boys: Why We Need More Women in Science

Make sure not to miss Athene Donald’s limited 4-part podcast series featuring Donald in conversation with fellow female scientists and allies about the issues women face in the scientific world.
4. Distrust: Big Data, Data-Torturing, and the Assault on Science

Using a wide range of entertaining examples, this fascinating book examines the impacts of society’s growing distrust of science, and ultimately provides constructive suggestions for restoring the credibility of the scientific community. This thought-provoking book argues that, ironically, science’s credibility is being undermined by tools created by scientists themselves. Scientific disinformation and damaging conspiracy theories are rife because of the internet that science created; the scientific demand for empirical evidence and statistical significance leads to data torturing and confirmation bias; and data mining is fueled by technological advances in Big Data and the development of increasingly powerful computers.

Buy Distrust: Big Data, Data-Torturing, and the Assault on Science

Check out Gary Smith’s previous titles, including The Phantom Pattern Problem, The 9 Pitfalls of Data Science, and The AI Delusion.

5. Sentience: The Invention of Consciousness

What is consciousness and why has it evolved? Conscious sensations are essential to our idea of ourselves, but is it only humans who feel this way? Do animals? Will future machines? To answer these questions we need a scientific understanding of consciousness: what it is and why it has evolved. Nicholas Humphrey has been researching these issues for fifty years. In this extraordinary book, weaving together intellectual adventure, cutting-edge science, and his own breakthrough experiences, he tells the story of his quest to uncover the evolutionary history of consciousness: from his discovery of blindsight after brain damage in monkeys, to hanging out with mountain gorillas in Rwanda, to becoming a leading philosopher of mind. Out of this, he has come up with an explanation of conscious feeling—“phenomenal consciousness”—that he presents here in full for the first time.

Buy Sentience: The Invention of Consciousness (UK Only)
As an added bonus, you can also read more on the topics of evolutionary biology, the magic of mathematics, and artificial intelligence with the Oxford Landmark Science series. Including “must-read” modern science and big ideas that have shaped the way we think, here is a selection of titles from the series to get you started. You can also explore more titles in our extended reading list on Bookshop UK.