


Is humanity a passing phase in evolution of intelligence and civilisation?


“The History of every major Galactic Civilization tends to pass through three distinct and recognizable phases, those of Survival, Inquiry and Sophistication…”

Douglas Adams, The Hitchhiker’s Guide to The Galaxy (1979)

“I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”

Geoffrey Hinton (2023)

In light of the recent spectacular developments in artificial intelligence (AI), questions are now being asked about whether AI could present a danger to humanity. Can AI take over from us? Is humanity a passing phase in the evolution of intelligence and civilisation? Let’s look at these questions from the long-term evolutionary perspective.

Life has existed on Earth for more than three billion years, humanity for less than 0.01% of this time, and civilisation for even less. A billion years from now, our Sun will start expanding and the Earth will soon become too hot for life. Thus, evolutionarily, life on our planet is already reaching old age, while human civilisation has just been born. Can AI help our civilisation outlast the habitable Solar system, and possibly even life itself as we know it?

Defining life is not easy, but few will disagree that an essential feature of life is its ability to process information. Every animal brain does this, every living cell does this, and even more fundamentally, evolution is continuously processing information residing in the entire collection of genomes on Earth, via the genetic algorithm of Darwin’s survival of the fittest. There is no life without information.
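To make the information-processing framing concrete, here is a minimal genetic-algorithm sketch in Python. It is only an illustration of the general idea invoked here, not a model of real biology: the bit-string “genomes”, the fitness function, and all parameters are invented for the example.

```python
# Toy genetic algorithm (OneMax): evolution as iterative information processing.
# The fitness function and every parameter are illustrative, not biological values.
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 100, 0.01

def fitness(genome):
    # "Survival of the fittest" stand-in: count the 1-bits in the genome.
    return sum(genome)

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Selection: only the fitter half of the population reproduces.
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print("best fitness:", max(map(fitness, population)), "of", GENOME_LEN)
```

Over successive generations the population accumulates information about what “fits” the environment defined by the fitness function, which is the sense in which evolution processes information.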

It can be argued that until very recently on the evolutionary timescale, i.e. until human language evolved, most information that existed on Earth and was durable enough to last for more than a generation was recorded in DNA or in other polymer molecules. The emergence of human language changed this; with language, information started accumulating in other media, such as clay tablets, paper, or computer memory chips. Most likely, information is now growing faster in the world’s libraries and computer clouds than in the DNA of all genomes of all species.

We can refer to this “new” information as cultural information, as opposed to the genetic information of DNA. Cultural information is the basis of a civilisation; genetic information is the basis of the life underpinning it. Thus, if genetic information became too damaged, life, cultural information, and civilisation itself would soon disappear. But could this change in the future? There is no civilisation without cultural information, but can there be a civilisation without genetic information? Can our civilisation outlast the Solar system in the form of AI? Or will genetic information always be needed to underpin any civilisation?

For now, AI exists only as information in computer hardware, built and maintained by humans. For AI to exist autonomously, it would need to “break out” of the “information world” of bits and bytes into the physical world of atoms and molecules. AI would need robots maintaining and repairing the hardware on which it is run, recycling the materials from which this hardware is built, and mining for replacement ones. Moreover, this artificial robot/computer “ecosystem” would not only have to maintain itself, but as the environment changes, would also have to change and adapt.

Life, as we know it, has been evolving for billions of years. It has evolved to process information and materials by zillions of nano-scale molecular “machines” all working in parallel, competing as well as backing each other up, maintaining themselves and the ecosystem supporting them. The total complexity of this machinery, also called the biosphere, is mindboggling. In DNA, one bit of information takes less than 50 atoms. Given the atomic nature of physical matter, every part in life’s machinery is as miniature as possible in principle. Can AI achieve such a complexity, robustness, and adaptability by alternative means and without DNA?
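The figure of fewer than 50 atoms per bit can be sanity-checked with rough arithmetic: a DNA base pair contains on the order of 60–70 atoms (base, sugar, and phosphate on each strand) and specifies one of four bases, i.e. two bits. The snippet below runs that back-of-the-envelope calculation; the atom count is an assumed round number, not exact chemistry.

```python
# Back-of-the-envelope check of the "fewer than 50 atoms per bit" claim for DNA.
# Atom counts are rough, assumed values for a nucleotide within the polymer.
atoms_per_nucleotide = 32          # base + sugar + phosphate, approximate
atoms_per_base_pair = 2 * atoms_per_nucleotide
bits_per_base_pair = 2             # one of four possible bases -> 2 bits

atoms_per_bit = atoms_per_base_pair / bits_per_base_pair
print(f"~{atoms_per_bit:.0f} atoms per bit")   # ~32, comfortably under 50
```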

Although this is hard to imagine, cultural evolution has produced tools not known to biological evolution. We can now record information as electron density distribution in a silicon crystal at the 3 nm scale. Information can be processed much faster in a computer chip than in a living cell. Human brains contain about 10^11 neurons each, which is probably close to the limit of how many neurons a single biological brain can contain. Though this is more than computer hardware currently offers to AI, future AI systems face no such limit. Moreover, humans have to communicate information with each other through the bottleneck of language; computers have no such limitation.

Where does this all leave us? Will the first two phases in the evolution of life—information mostly confined to DNA, and then information “breaking out” of the DNA harness but still underpinned by information in DNA, be followed by the third phase? Will information and its processing outside living organisms become robust enough to survive and thrive without the underpinning DNA? Will our civilisation be able to outlast the Solar system, and if so, will this happen with or without DNA?

To get to that point, our civilisation first needs to survive its infancy. For now, AI cannot exist without humans. For now, AI can only take over from us if we help it to do so. And indeed, among all the envisioned threats of AI, the most realistic one seems to be deception and spread of misinformation. In other words, corrupting information. Stopping this trend is our biggest near-term challenge.

Feature image by Daniel Falcão via Unsplash.


The hidden toll of war


During war, the news media often focus on civilian injuries and deaths due to explosive weapons. But the indirect health impacts of war among civilians occur more frequently—often out of sight and out of mind.

These indirect impacts include communicable diseases, malnutrition, exacerbations of chronic noncommunicable diseases, maternal and infant disorders, and mental health problems. They are caused primarily by forced displacement of populations and by damage to civilian infrastructure, including farms and food supply systems, water treatment plants, healthcare and public health facilities, and networks for electric power, communication, and transportation.

Increasingly, damage to civilian infrastructure is caused by targeted attacks—as a strategy of war, resulting in reduced access to food, safe drinking water, healthcare, and shelter. When water treatment plants and supply lines are damaged during war, people often have no choice but to drink water from sources that may be contaminated with microorganisms or toxic substances. Healthcare facilities have been increasingly targeted during war; for example, during the first 18 months of the war in Ukraine, there were 1,014 attacks on healthcare facilities, injuring and killing many patients and healthcare workers and causing damage that reduced access to healthcare for many people.

Globally, there are now more than 108 million people who have been displaced from their homes, many as a result of war. Most of these displaced people have been internally displaced within their own countries, often facing greater health and security risks than refugees, who have fled to other countries. And during war, many more people live in continual fear that they may be forcibly displaced.

Major categories of communicable diseases during war include diarrheal diseases and respiratory disorders. These diarrheal diseases result mainly from decreased access to safe drinking water and reduced levels of sanitation and hygiene, leading to increased fecal-oral transmission of bacterial and viral agents. Among respiratory disorders, measles is of great concern because it is highly contagious and associated with high mortality rates among unimmunized children. Another major concern is tuberculosis, which can spread easily among war-affected populations and is difficult to treat without continuity of care. Crowding in bomb shelters, refugee camps, and other locations during war facilitates the spread of both diarrheal diseases and respiratory disorders. Disruption of public health services leads to reduced access to immunizations and reduced resources to investigate and control outbreaks of communicable disease. During war, bacterial resistance to antibiotics increases because people have decreased access to antibiotics and therefore take inappropriate antibiotics or shortened courses of treatment.

Malnutrition often increases during war, thereby increasing the risks of acquiring and dying from many communicable diseases. Infants and children are at greatest risk of becoming malnourished and suffering from its adverse health consequences. Micronutrient deficiencies during pregnancy can lead to birth defects. And severe malnutrition during war can increase the risk of hypertension, coronary artery disease, and diabetes in later life.

During war, exacerbations of preexisting cases of noncommunicable disease increase, mainly because of reduced access to medical care and medications for treating common chronic diseases. For example, a survey by the World Health Organization in Ukraine in 2022 found that about half of the respondents experienced reduced access to medical care and almost one-fourth could not obtain the medications they needed. Without these medications, people with hypertension were at increased risk of myocardial infarction and stroke, people with asthma were at increased risk of life-threatening attacks, people with diabetes were at increased risk of serious complications, and people with epilepsy were at increased risk of seizures.

War exerts adverse effects on reproductive health. Access to prenatal care, postpartum and neonatal care, and reproductive health services is frequently decreased. As a result, complications of pregnancy, including maternal deaths, occur more frequently, and there are increased rates of infant deaths and of infants being born prematurely or with low birthweight.

Mental and behavioral disorders occur more frequently during war, including posttraumatic stress disorder (PTSD), depression and anxiety, alcoholism and drug abuse, and suicide. Many factors contribute to the increased risk of these disorders, including physical and sexual trauma, witnessing of atrocities, forced displacement, family separation, deaths of loved ones, loss of employment and education, and uncertainty about the future.

Violations of human rights and international humanitarian law occur frequently during war. In addition to those already mentioned, these violations include gender-based violence, summary executions, kidnapping, denial of humanitarian aid, and use of indiscriminate weapons, such as antipersonnel landmines.

The possible use of nuclear weapons represents a profound threat whenever nuclear powers are engaged in war, partly because these weapons could be launched by accident or because of misinterpretation or miscommunication. Even a small nuclear war could cause huge numbers of deaths and severe injuries and could lower temperatures globally, leading to widespread famine.

Environmental damage during war can result from chemical contamination of air, water, and soil; presence of landmines and unexploded ordnance; release of ionizing radiation from nuclear power plants or conventional weapons containing radioactive materials (“dirty bombs”); destruction of the built environment; and damage to animal habitats and ecosystems. In addition, war and the preparation for war consume large amounts of fossil fuels, which generate greenhouse gases, which, in turn, cause global warming.

Protection of civilians and civilian infrastructure during war and improved humanitarian assistance can reduce the indirect health impacts of war. But the only way to eliminate these impacts is to eliminate war. The risk of war can be reduced by resolving disputes before they turn violent; by reducing the root causes of war, such as socioeconomic inequities, militarism, ethnic and religious hatred, poor governance, and environmental stress; and by strengthening the infrastructure for peace. Peace can be achieved and sustained by rehabilitating nations and reintegrating people after war has ended, strengthening civil society, promoting the rule of law, ensuring citizen participation, and holding aggressors accountable.

Barry S. Levy is the author of From Horror to Hope: Recognizing and Preventing the Health Impacts of War (Oxford University Press, 2022). He is an Adjunct Professor of Public Health at Tufts University School of Medicine and a past president of the American Public Health Association.

Featured image: Markus Spiske via Unsplash, public domain.


Who do you think you are? Genetics and identity


Ethnicity and ethnic identity have recently been brought to the fore in the Western world. One important reason is that immigration and globalization have resulted in a variety of clashes among different groups in very different contexts. However, there is another reason: DNA ancestry testing. Margo Georgiadis, president and chief executive officer of the major company in the field, Ancestry.com, has estimated that in early 2020, 30 million people had taken a DNA test, of which over 16 million were with her company. These companies tell you that by simply spitting into a tube or swabbing the inside of your cheek, you can find out a lot about your origins and your ancestors through DNA. Indeed, the way these tests are sometimes marketed may make people think that ethnicity is something “written” in their DNA. In many cases, people have to deal with surprising revelations that make them reconsider their ethnic identity, and in some cases reveal that the person whom they called father is not their biological one.

Identity matters a lot to people, because it affects both how we perceive ourselves and how we are perceived by others. There are two big issues with how people tend to think about ethnic identity. On the one hand, it is assumed that people of the same ethnicity are a lot more similar than they actually are. On the other hand, it is assumed that people of different ethnicities are much more different from one another than they actually are. Therefore, once people are considered members of particular ethnic groups, they are no longer seen as individuals but as representatives of particular ethnic types. This has an important consequence: people are not considered on the basis of what they really are, but rather on the basis of what they are expected to be given the ethnic group to which they belong. And this is where false stereotypes can easily prevail. Here DNA ancestry companies enter the scene by arguing that their tests can indicate to which ethnic group one belongs. Thus, these tests privilege notions of ethnicity based on genetics, contributing to the myth of genetic ethnicities.

Research in psychology has supported the conclusion that people believe they have internal, immutable essences that influence who they are. This kind of thinking is called psychological essentialism; when genes, and DNA more generally, are taken to be these internal and immutable essences, the view is described as genetic essentialism. This is an intuitive view that makes people find it natural that they belong to one or another group, as well as that these groups are internally homogeneous and entirely discrete from one another. Therefore, if people intuitively tend to think of ethnic groups in genetic essentialist terms, it might seem natural to them that there exist discrete ethnic groups that are both genetically homogeneous and genetically distinct from one another.

Ethnic groups are real, but they are socially and culturally constructed. More often than not, these groups have not had historical, linguistic, cultural, or, of course, biological continuity across time. However, people intuitively tend to essentialize these groups, and DNA often serves as the placeholder for this. Population genetics provides an objective means for distinguishing among human groups; however, even though there are many different ways to do this, people (and researchers themselves) often tend to privilege those groupings that align with previously perceived, extant categories, such as continental and racial groupings. People living on the same continent are indeed more likely to have recent common ancestors among themselves than with people living on other continents. But what really exists at the genetic level are gradients of genetic variation, not distinct groupings. Human genetic variation is continuous, and the genetic differences among people are overall very minor. For this reason, ethnic groups, nations, or races are not biological entities.

As a result, any ethnically, nationally, or racially distinctive genetic markers exist only in a probabilistic sense, and what ancestry tests provide are just probabilistic estimations of similarities between the test-takers and particular reference populations consisting of people living today. But being genetically related to people living somewhere today does not necessarily mean that one’s ancestors came from that place. Furthermore, as more people take such tests, these reference groups change, and as a result the ethnicity estimates for the same person can change across time. DNA provides partial information about our ancestors, which is the outcome of a process of interpretation. Therefore, DNA cannot reveal our true ethnic identity, and the genetic ethnicities to which test-takers are assigned are imagined. However, this does not devalue these tests, as their results can indeed provide valuable insights and information to people who may not know much about their ancestors. Indeed, the tests are very good for finding close relatives, which is perhaps why the industry should be rebranded as DNA family testing.
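For readers curious what “probabilistic estimations of similarities” to reference populations can look like in practice, here is a deliberately tiny sketch. The populations, markers, and allele frequencies are invented for illustration; commercial tests use hundreds of thousands of markers and admixture models, but the underlying logic of comparing a genotype against present-day reference panels is similar.

```python
# Toy illustration of ancestry estimation as probabilistic comparison with
# reference populations. Allele frequencies and labels are invented; real tests
# use vastly more markers and model admixture rather than a single best match.
import math

# Frequency of the "1" allele at three markers in two hypothetical reference groups.
reference = {
    "Population A": [0.80, 0.10, 0.60],
    "Population B": [0.30, 0.70, 0.20],
}
genotype = [1, 0, 1]  # the test-taker's alleles at the same three markers

def log_likelihood(alleles, freqs):
    # Probability of observing these alleles if drawn from this reference group.
    return sum(math.log(f if a == 1 else 1 - f) for a, f in zip(alleles, freqs))

scores = {pop: log_likelihood(genotype, freqs) for pop, freqs in reference.items()}
total = sum(math.exp(s) for s in scores.values())
for pop, s in scores.items():
    print(f"{pop}: {math.exp(s) / total:.1%} relative likelihood")
```

Note that the output depends entirely on which reference groups are in the panel and on their sampled frequencies, which is why estimates shift as reference databases grow and change.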

Feature image by Shutter2U via iStock.


Beyond God and atheism


What are we doing here? What’s the point of existence?

Traditionally, the West has been dominated by two very different answers to these big questions. On the one hand, there is belief in the traditional God of the Abrahamic faiths, a supreme being who created the universe for a good purpose. On the other hand, there is the meaningless, purposeless universe of secular atheism. However, I’ve come to think both views are inadequate, as both have things they can’t explain about reality. In my view, the evidence we currently have points to the universe having purpose but one that exists in the absence of the traditional God.

The theistic worldview struggles to explain suffering, particularly in the natural world. Why would a loving, all-powerful God choose to create the North American long-tailed shrew that paralyses its prey and then slowly eats it alive over several days before it dies from its wounds? Theologians have tried to argue that there are certain good things that exist in our world that couldn’t exist in a world with less suffering, such as serious moral choices, or opportunities to show courage or compassion. But even if that’s right, it’s not clear that our creator has the right to kill and maim—by choosing to create hurricanes and disease, for example—in order, say, to provide the opportunity to show courage. A classic objection to crude forms of utilitarianism considers the possibility of a doctor who has the option of kidnapping and killing one healthy patient in order to save the lives of five other patients: giving the heart to one, the kidneys to another, and so on. Perhaps this doctor could increase the amount of well-being in the world through this action: saving five lives at the cost of one. Even so, many feel that the doctor doesn’t have the right to take the life of the healthy person, even for a good purpose. Likewise, I think it would be wrong for a cosmic creator to infringe on the right to life and security of so many by creating earthquakes, tsunamis, and other natural disasters.

Looking at the other side of the coin, the secular atheist belief in a meaningless, purposeless universe struggles to explain the fine-tuning of physics for life. This is the recent discovery that for life to be possible, certain numbers in physics had to fall in a certain, very narrow range. If the strength of dark energy—the force that powers the expansion of the universe—had been a little bit stronger, no two particles would have ever met, meaning no stars, no planets, no structural complexity at all. If, on the other hand, it had been significantly weaker, it would not have counteracted gravity, and the universe would have collapsed back on itself a split second after the big bang. For life to be possible, the strength of dark energy had to be—like Goldilocks’ porridge—just right.

For a long time, I thought the multiverse was the best explanation of the fine-tuning of physics for life. If enough people play the lottery, it becomes likely that someone’s going to get the right numbers to win. Likewise, if there are enough universes, with enough variety in the numbers in their ‘local physics,’ then statistically it becomes highly probable that one of them is going to fluke the right numbers for life to exist.
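The lottery analogy can be put into numbers. With an assumed chance p that any single universe gets life-permitting constants, the chance that at least one of N universes does is 1 - (1 - p)^N, which approaches certainty for large N, while the chance for any one particular, pre-specified universe remains p. The values below are purely illustrative stand-ins, not physically derived probabilities.

```python
# Illustrative only: the per-universe probability p and the ensemble size N are
# arbitrary stand-ins, not physically derived values.
p = 1e-12          # assumed chance that any one universe gets life-permitting constants
N = 10**15         # assumed number of universes in the ensemble

prob_some_universe = 1 - (1 - p) ** N   # chance that at least one universe is fine-tuned
prob_this_universe = p                  # chance that this particular universe is

print(f"some universe fine-tuned: {prob_some_universe:.4f}")   # effectively 1
print(f"this universe fine-tuned: {prob_this_universe:.0e}")   # still 1e-12
```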

However, I have been persuaded by philosophers of probability that the attempt to explain fine-tuning in terms of a multiverse violates a very important principle in probabilistic reasoning, known as the “Total Evidence Requirement.” This is the principle that you should always work with the most specific evidence you have. If the prosecution tells the jury that Jack always carries a knife around with him, when they know full well that he always carries a butter knife around with him, then they have misled the jury—not by lying, but by giving them less specific evidence than is available.

The multiverse theorist violates this principle by working with the evidence that a universe is fine-tuned, rather than the more specific evidence we have available, namely that this universe is fine-tuned. According to the standard account of the multiverse, the numbers in our physics were determined by probabilistic processes very early in its existence. These probabilistic processes make it highly unlikely that any particular universe will be fine-tuned, even though if there are enough universes one of them will probably end up fine-tuned. However, we are obliged by the Total Evidence Requirement to work with the evidence that this universe in particular is fine-tuned, and the multiverse theory fails to explain this data.

This is all a bit abstract, so let’s take a concrete example. Suppose you walk into a forest and happen upon a monkey typing in perfect English. This needs explaining. Maybe it’s a trained monkey. Maybe it’s a robot. Maybe you’re hallucinating. What would not explain the data is postulating millions of other monkeys on other planets elsewhere in the universe, who are mostly typing nonsense. Why not? Because, in line with the Requirement of Total Evidence, your evidence is not that some monkey is typing English but that this monkey is typing in English.

In my view, we face a stark choice. Either it is an incredible fluke that these numbers in our physics are just right for life, or these numbers are as they are because they are the right numbers for life, in other words, that there is some kind of “cosmic purpose” or goal-directedness towards life at the fundamental level of reality. The former option is too improbable to take seriously. The only rational option remaining is to embrace cosmic purpose.

Theism cannot explain suffering. Atheism cannot explain fine-tuning. Only cosmic purpose in the absence of God can accommodate both of these data-points.


Of language, brain health, and global inequities


One of the greatest public health challenges of our century lies in the growth of neurodegenerative disorders. Conditions such as Alzheimer’s disease, Parkinson’s disease, and frontotemporal dementia stand as major contributors to disability and mortality in affluent and under-resourced nations alike. Currently affecting over 55 million individuals, their prevalence is expected to increase significantly by 2050—especially in less developed countries, where risk factors are most impactful and mainstream clinical approaches least developed.

Language research in the fight against neurodegeneration

Against this background, researchers from various fields are searching for new, affordable, and scalable digital innovations to facilitate diagnosis and other clinical tasks across the globe. Speech and language assessments have emerged as crucial tools, offering robust insights for detecting, characterizing, and monitoring these diseases. For instance, individuals with Alzheimer’s often struggle with word retrieval, experience difficulties in constructing grammatically complex sentences, and exhibit challenges in understanding or expressing figurative language. These linguistic deficits appear in early and preclinical disease stages, differentiate Alzheimer’s from other forms of dementia, make it possible to predict the onset of core symptoms, and even capture brain anomalies that typify the disorder.

These clinical applications can be boosted through artificial intelligence tools. New digital technologies make it possible to capture specific alterations in recorded or written language samples in a non-invasive, patient-friendly, and cost-effective way. Such is the type of solution required to reduce clinical disparities across low-, middle-, and high-income countries. Multicentric research initiatives, large grants from leading funding agencies, and science-based companies are spearheading exciting projects to validate and expand this novel framework. However, a critical challenge looms large: the lack of linguistic diversity in the field threatens its scalability and undermines its potential for more equitable testing worldwide.

Disorders of language vs. disorders of languages

The field is marked by inequities. Less than 0.5% of the world’s 7,000 languages have received any attention in this research field. Also, although English is spoken by roughly 17% of the world’s population, it accounts for nearly 70% of all published studies on speech and language in neurodegeneration. Moreover, large language models and feature extraction tools are available for only a handful of languages. Of course, none of this would be a major issue if links between language anomalies and brain dysfunctions were universal across the world’s languages—if that were the case, we could rely on the abundant findings from English and apply them to patients worldwide, irrespective of their language. Unfortunately, the reality is much more complicated.

As it happens, cross-linguistic differences deeply influence the presentation of speech and language symptoms, challenging the universality of existing diagnostic criteria and candidate disease markers. For instance, a sentence production study showed that Italian-speaking persons with Alzheimer’s could be identified by their tendency to omit subjects, a phenomenon notably absent in their English-speaking counterparts. The distinction lies in the inherent structure of the languages. Unlike English, Italian allows sentence subjects to be deduced from verb conjugations (the Italian verb ‘camminiamo’ inherently implies a first-person plural subject, whereas the English verb ‘walk’ requires a preceding ‘we’ to convey the same meaning). More strikingly, linguistic anomalies may be diametrically opposed between languages. For example, research on Alzheimer’s shows that pronouns (words like ‘I’, ‘their’, ‘ours’) tend to be overused among English-speaking patients and underused among Bengali-speaking patients—relative to healthy speakers of the same languages. This, too, likely reflects differences between the two languages, as Bengali grammar includes many more (and morphologically more complex) pronouns than English. Succinctly, the linguistic markers that signal a given disease among speakers of one language may not be relevant among speakers of another.
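A marker like pronoun use relative to healthy speakers can be extracted with very simple code. The sketch below computes a pronoun rate per 100 words from an English transcript using a hand-written pronoun list; it is a toy illustration only. Real pipelines rely on part-of-speech taggers and normative data, and, as the examples above show, both the word lists and the direction of the effect are language-specific.

```python
# Minimal sketch: pronoun rate per 100 words from an English transcript.
# The pronoun list and the sample text are illustrative; clinical pipelines use
# POS taggers, richer feature sets, and language-specific norms.
import re

ENGLISH_PRONOUNS = {"i", "me", "my", "mine", "we", "us", "our", "ours",
                    "you", "your", "yours", "he", "him", "his", "she", "her",
                    "hers", "it", "its", "they", "them", "their", "theirs"}

def pronoun_rate(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    pronouns = sum(w in ENGLISH_PRONOUNS for w in words)
    return 100.0 * pronouns / len(words)

sample = "I went to the store with my sister and she helped me carry it home."
print(f"pronoun rate: {pronoun_rate(sample):.1f} per 100 words")
```

A rate like this only becomes a candidate marker when compared against norms from healthy speakers of the same language, which is precisely why monolingual (mostly English) evidence cannot simply be exported to other languages.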

Taking action

These findings underscore the need to consider language diversity when examining the linguistic impact of neurodegenerative conditions. Such is the call we raised in our recent article in Brain (García et al., 2023). Researchers must broaden the representation of languages, incorporating diverse linguistic communities to identify shared and distinguishing properties. Multicentric collaborations, harmonized protocols, and cross-linguistic tools must be forged for a more inclusive and comprehensive understanding of neurodegeneration across regions and cultures. The path forward requires overcoming core challenges, such as establishing robust pipelines for comparing outcomes across languages, disentangling linguistic and non-linguistic sources of heterogeneity, and securing funds for language research across underrepresented regions. Ideally, local-global connections should be prioritized to integrate country-specific needs and resources with leading worldwide trends.

Promisingly, strategic efforts are being made in this direction. Consider, for example, the International Network for Cross-Linguistic Research on Brain Health (Include). Supported with initial funds from the Global Brain Health Institute, the Alzheimer’s Association, and the Alzheimer’s Society, Include aims to foster trans-regionally equitable approaches to language-based neurodegeneration research. The network has grown continually since its launch in November 2022. It now has over 140 members spanning 80 sites across 30 countries. Five network-wide projects are being run, targeting diverse phenomena across multiple languages in large cohorts of persons with Alzheimer’s, Parkinson’s, and frontotemporal dementia variants. Include is also leading awareness-raising actions, such as the Language Diversity and Brain Health webinar series, hosted in collaboration with the Bilingualism, Languages, and Literacy Special Interest Group of the Alzheimer’s Association’s Diversity and Disparities Professional Interest Area. Initiatives like these can make a difference towards fairer language-based research on brain dysfunctions.

The bottom line

Speech and language assessments hold a valuable key to unlocking generalizable insights on neurodegeneration. To harness their full potential, however, we must bridge the linguistic gap in research, embracing more diverse samples and more inclusive practices. These actions are vital to ensure that valuable tools for equitable brain health assessments do not turn into a new source of global inequity.

Feature image by Studioroman via Canva.
