
Searching DNA databases: cold hits and hot-button issues

Many criminal investigations, including “cold cases,” do not have a suspect but do have DNA evidence. In these cases, a genetic profile can be obtained from the forensic specimens at the crime scene and electronically compared to profiles listed in criminal DNA databases. If the genetic profile of a forensic specimen matches the profile of someone in the database, depending on other kinds of evidence, that individual may become the prime suspect in what was heretofore a suspect-less crime.

Searching DNA databases to identify potential suspects has become a critical part of criminal investigations ever since the FBI reported its first “cold hit” in July 1999, linking six sexual assault cases in Washington, D.C., with three sexual assault cases in Jacksonville, Florida. The match of the genetic profiles from the evidence samples with an individual in the national criminal database ultimately led to the identification and conviction of Leon Dundas.

How the statistical significance of a match obtained with a database search is presented to the jury should, in my view, be straightforward but, given the adversarial nature of our criminal justice system, remains contentious. One view is that if the profiles of the evidence and a suspect who had been identified by the database search match, then the estimated population frequency of that particular genetic profile (equivalent to the Random Match Probability in a non-database search case) is still the relevant statistic to be presented to the jury. The Random Match Probability (RMP) is an estimate of the probability that a randomly chosen individual in a given population would also match the evidence profile. The RMP is estimated as the population frequency of the specific genetic profile, which is calculated by multiplying the probabilities of a match at each individual genetic marker (the “Product Rule”).
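To make the Product Rule concrete, here is a minimal sketch in Python (the marker names are real STR loci, but the genotype frequencies are invented for illustration and are not taken from any population database):

    # Minimal sketch of the Product Rule for a Random Match Probability (RMP).
    # Genotype frequencies below are invented for illustration only.
    def random_match_probability(per_marker_freqs):
        """Multiply the matching genotype's population frequency at each
        independent marker to estimate the overall profile frequency (RMP)."""
        rmp = 1.0
        for freq in per_marker_freqs.values():
            rmp *= freq
        return rmp

    profile_freqs = {"D8S1179": 0.06, "D21S11": 0.04, "TH01": 0.08, "FGA": 0.03}
    rmp = random_match_probability(profile_freqs)
    print(f"RMP = {rmp:.2e}, i.e. about 1 in {1 / rmp:,.0f}")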

An alternative view, often invoked by the defense, is that the RMP should be multiplied by the size of the database. For example, if the RMP is 1/100 million and the database that was searched contains 1 million profiles, this perspective argues that the number 1/100 is the one that should be presented to the jury. This calculation, however, represents the probability of getting a “hit” (match) somewhere in the database, not the probability of a coincidental match between the evidence and the suspect (1/100 million), which is the more relevant metric for interpreting the probative significance of a DNA match. Although these arguments may seem arcane, the estimates that result from these different statistical metrics could be the difference between conviction and acquittal.
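The gap between the two figures is easy to reproduce with the article's own numbers; the sketch below is illustrative only and simply applies the standard n × p approximation for a database search:

    # Contrast the Random Match Probability with the probability of at least one
    # coincidental hit when an entire database is searched.
    rmp = 1 / 100_000_000        # estimated profile frequency (RMP)
    database_size = 1_000_000    # number of profiles searched

    # Probability of at least one coincidental hit somewhere in the database;
    # approximately database_size * rmp when rmp is small.
    p_hit_in_database = 1 - (1 - rmp) ** database_size

    print(f"Coincidental match with one named individual: 1 in {1 / rmp:,.0f}")
    print(f"At least one coincidental hit in the database: about 1 in {1 / p_hit_in_database:,.0f}")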

There are many different kinds of DNA databases. Ethnically defined population databases are used to calculate genotype frequencies and, thus, to estimate RMPs, but are not useful for searching. The first DNA searches were of databases of convicted felons. In some jurisdictions, databases of arrestees have also been established and searched. These searches have recently been expanded to include “partial matches,” potentially implicating relatives of the individuals in the database. This strategy, known as “familial searching,” has been very effective but contentious, with discussions typically focused on the “trade-offs” between civil liberties and law enforcement. In some jurisdictions, the “trade-off” has been between two different controversial criminal database programs. In Maryland, for example, an arrestee database (albeit one specifying arraignment) was allowed but familial searching was outlawed. Familial searching has been critiqued as turning relatives of people in the database into “suspects.” A more accurate description is that the partial matches revealed by familial searching identify “persons of interest” and provide potential leads for investigation.

Recently, searching for partial matches in the investigation of suspect-less crimes has expanded from criminal databases to genealogy databases, as applied in the Golden State Killer case in 2018. These databases consist of genetic profiles from people seeking information about their ancestry or trying to find relatives. Genetic genealogy involves constructing a large family tree going back several generations based on the individuals identified in the database search and on genealogical records. Identifying several different individuals in the database whose profile shares a region of DNA with the evidence profile allows a family tree to be constructed. The shorter the shared region between two individuals or between the evidence and someone in the database, the more distant the relationship. This is because genetic recombination, the shuffling of DNA regions that occurs in each generation, reduces the length of shared DNA segments over time. So, in the construction of a family tree, the length of the shared region indicates how far back in time you have to go to locate the common ancestor. Tracing the descendants in this family tree who were in the area when the crime was committed identifies a set of potential suspects.
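As a very rough illustration of why shorter shared segments point to more distant relatives, the toy model below assumes about one crossover per 100 centimorgans (cM) per meiosis; it is a simplification for exposition, not a tool used in actual casework:

    # Toy model: a segment shared through a common ancestor g generations back has
    # survived roughly 2*g meioses, so its expected length is on the order of
    # 100 / (2*g) centimorgans. Illustration only, not forensic genealogy software.
    def expected_shared_segment_cm(generations_back):
        meioses = 2 * generations_back
        return 100.0 / meioses

    for g in (1, 3, 5, 8):
        print(f"Common ancestor {g} generation(s) back: "
              f"~{expected_shared_segment_cm(g):.0f} cM segments expected")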

The DNA technologies used in investigative genetic genealogy (IGG) are different from those typically used in analyzing the evidence samples or the criminal database samples, which are based on around 25 short tandem repeat markers (STRs). The genotyping technology used to generate profiles in genealogy databases is based on analyzing thousands of single nucleotide polymorphisms (SNPs). With the recent implementation of Next Generation Sequencing technology to sequence the whole genome, even more informative searching for shared DNA regions can be accomplished. (Next Generation Sequencing of the whole genome is so powerful that it can now distinguish identical (monozygotic) twins!)

Investigative genetic genealogy has completely upended the trade-offs and guidelines proposed for familial searching, as well as many of the arguments. Many of the rationales justifying familial searching of criminal databases, such as the recidivism rate and the presumption that convicts relinquish certain rights, do not apply to genealogy databases. Also, the concerns about racial disparities in criminal databases don’t apply to these non-criminal databases either. In general, it’s very hard to draw lines in the sand when the sands are shifting so rapidly and the technology is evolving so quickly. And it is particularly difficult when dramatic successes in identifying the perpetrators of truly heinous unsolved crimes are lauded in the media, making celebrities of the forensic scientists who carried out the complex genealogical analyses that finally led to the arrest of the Golden State Killer and, shortly thereafter, of many others.

It’s still possible and desirable to set some guidelines for IGG, a complex and expensive procedure. It should be restricted to serious crimes. The profiles in the database should be restricted to those individuals who have consented to have their personal genomic data searched for law enforcement purposes. With the appropriate guidelines, the promise of DNA database searching to solve suspect-less crimes can truly transform our criminal justice system.

Featured image by TanyaJoy via iStock.


Artificial Intelligence? I think not!

“The machine demands that Man assume its image; but man, created to the image and likeness of God, cannot become such an image, for to do so would be equivalent to his extermination”

(Nicolai Berdyaev, “Man and Machine” 1934)

These days, the first thing people discuss when the question of technology comes up is AI. Equally predictable is that conversations about AI often focus on the “rise of the machines,” that is, on how computers might become sentient, or at least possess an intelligence that will outthink, outlearn, and thus ultimately outlast humanity.

Some computer scientists deny the very possibility of so-called Artificial General Intelligence (AGI). They argue that only Artificial Narrow Intelligence (ANI) is achievable. ANI focuses on accomplishing specific, well-defined tasks set by a human programmer within changing environments, and it makes no claim to genuinely independent or human-like intelligence. Self-driving cars, for example, rely on ANI.

Yet as AI researcher and historian Nils J. Nilsson makes clear, the real ‘prize’ for AI researchers is to develop artifacts that can do most of the things that humans can do—specifically those things thought to require ‘intelligence.’ Thus the real impetus of AI research remains AGI, or what some now call “Human Level Artificial Intelligence” (HLAI).

To achieve this goal, AI researchers attempt to replicate the human brain on digital platforms, so that computers will mimic its functions. With increasing computational power, it will then be possible first to build machines that have the object-recognition, language capabilities, manual dexterity, and social understanding of small children, and then, second, to achieve adult levels of intelligence through machine learning. Once such intelligence is achieved, many fear the nightmare scenario of 2001: A Space Odyssey’s self-preserving computer HAL 9000, who eliminates human beings because of their inefficiency. What if these putative superintelligent machines disdain humans for their much inferior intellect and enslave or even eliminate them? This vision has been put forward by the likes of Max Tegmark (not to mention the posthuman sensationalist Yuval Harari), and has enlivened the mushrooming discipline of machine ethics, which is dedicated to exploring how humans will deal with sentient machines, how we will integrate them into the human economy, and so on. Machine ethics researchers ask questions like: “Will HLAI machines have rights, own property, and thus acquire legal powers? Will they have emotions, create art, or write literature and thus need copyrights?”

The central problem with such discussions about AI, however, is the simple fact that Artificial Intelligence does not exist. There is an essential misunderstanding of human intelligence that undergirds all of these concerns and questions—a misunderstanding not of degree but of kind, for no machine is or ever will be “intelligent.”

Before the advent of modernity, human intelligence and understanding (deriving from the Latin intellectus, itself rooted in the ancient Greek concepts of nous and logos) indicated the human mind’s participation in an invisible spiritual order which permeated reality. Tellingly, the Greek term logos denotes law, an ordering principle and also language or discourse. Originally, human intelligence did not imply mere logic, or mathematical calculus, but the kind of wisdom that comes only from the experiential knowledge of embodied spirits. Human beings, as premodern philosophers insisted, are ensouled or living organisms, or animals, that also possess the distinguishing gift of logos. Logos, translated as ratio or reason, is the capacity for objectifying, self-reflexive thought.

Moreover, as rooted in a universal logos, human intelligence was intrinsically connected to language. In this pre-modern world, symbols are not arbitrary cyphers assigned to things, as AI researchers have always assumed; rather, language derives from and remains inseparably linked to the human experience of a meaningful world. As the German philosopher Hans-Georg Gadamer explains, “we are always already at home in language, just as much as we are in the world.” We live, think, move, and have our being in language. As the very matrix that renders the world intelligible to us, language is not merely an instrument by which a detached mind masters the world. Instead, we only think and speak on the basis of the linguistic traditions that make human experience intelligible. And let’s not forget that human experience is embodied.

No wonder, then, that human understanding, to use the English equivalent of the Latinate ‘intellect,’ has a far deeper meaning than what computer scientists usually attribute to the term. Intelligence is not shuffling around symbols, recognizing patterns, or conveying bytes of information. Rather, human intelligence refers to the intuitive grasp of meaningful relations within the world, an activity that relies on embodied experience and language-dependent thought. The AI critic Hubert Dreyfus summed up this meaning of intelligence as “knowing one’s way around in the world.” Algorithms, however, have no body, have no world, and therefore have no intelligence or understanding.

The only way we can even conceive of computers attaining human understanding is a radical redefinition of this term in functionalist terms. As Erik Larson has shown, we owe this redefinition in part to Alan Turing, who, after initial hesitations, reduced intelligence to mathematical problem solving. Turing and the AI researchers after him thus aided a fundamental mechanization of nature and human nature. We first turn reality into a gigantic biological-material mechanism, then reconceive human persons as complex machines powered by a computer-like brain, and thus find it relatively easy to envision machines with human intelligence. In short, we dehumanize the person in order to humanize the machine. We have in fact, as Berdyaev prophesied, exterminated the human in order to create machines in the image of our de-spirited, mechanized corpses.

In sum, the problem for a proper assessment of so-called AI is not an imminent threat of actual machine intelligence, but our misguided imagination, which wrongly invests computing processes with a human quality like intelligence. Not the machines, but we are to blame for this. Algorithms are code, and the increasing speed and complexity of computation certainly harbor potential dangers. But these dangers arise from neither sentience nor intelligence. To attribute human thought or understanding to computational programs is simply a category mistake. Increasing computational power makes no difference: no amount of computing power can jump the ontological barrier from computational code to intelligence. Machines cannot be intelligent, have no language, won’t “learn” in a human educational sense, and do not think.

As computer scientist Jaron Lanier pithily sums up the reality of AI: “there is no A.I.” The computing industry should return to the common sense of those AI researchers who initially disliked the label AI and called their work “complex information processing.” As Berdyaev reminds us with the epigram above, the true danger of AI is not that machines might become like us, but that we might become like machines and thereby forfeit our true birthright.

Featured image by Geralt (Gerd Altmann) from Pixabay.


Is humanity a passing phase in evolution of intelligence and civilisation?

“The History of every major Galactic Civilization tends to pass through three distinct and recognizable phases, those of Survival, Inquiry and Sophistication…”

Douglas Adams, The Hitchhiker’s Guide to The Galaxy (1979)

“I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”

Geoffrey Hinton (2023)

In light of the recent spectacular developments in artificial intelligence (AI), questions are now being asked about whether AI could present a danger to humanity. Can AI take over from us? Is humanity a passing phase in the evolution of intelligence and civilisation? Let’s look at these questions from the long-term evolutionary perspective.

Life has existed on Earth for more than three billion years, humanity for less than 0.01% of this time, and civilisation for even less. A billion years from now, our Sun will start expanding and the Earth will soon become too hot for life. Thus, evolutionarily, life on our planet is already reaching old age, while human civilisation has just been born. Can AI help our civilisation to outlast the habitable Solar system and, possibly, life itself, as we know it presently?

Defining life is not easy, but few will disagree that an essential feature of life is its ability to process information. Every animal brain does this, every living cell does this, and even more fundamentally, evolution is continuously processing information residing in the entire collection of genomes on Earth, via the genetic algorithm of Darwin’s survival of the fittest. There is no life without information.

It can be argued that until very recently on the evolutionary timescale, i.e. until human language evolved, most information that existed on Earth and was durable enough to last for more than a generation, was recorded in DNA or in some other polymer molecules. The emergence of human language changed this; with language, information started accumulating in other media, such as clay tablets, paper, or computer memory chips. Most likely, information is now growing faster in the world’s libraries and computer clouds than in the DNA of all genomes of all species.

We can refer to this “new” information as cultural information, as opposed to the genetic information of DNA. Cultural information is the basis of a civilisation; genetic information is the basis of the life underpinning it. Thus, if genetic information were too badly damaged, life, and with it cultural information and civilisation itself, would soon disappear. But could this change in the future? There is no civilisation without cultural information, but can there be a civilisation without genetic information? Can our civilisation outlast the Solar system in the form of AI? Or will genetic information always be needed to underpin any civilisation?

For now, AI exists only as information in computer hardware, built and maintained by humans. For AI to exist autonomously, it would need to “break out” of the “information world” of bits and bytes into the physical world of atoms and molecules. AI would need robots maintaining and repairing the hardware on which it is run, recycling the materials from which this hardware is built, and mining for replacement ones. Moreover, this artificial robot/computer “ecosystem” would not only have to maintain itself, but as the environment changes, would also have to change and adapt.

Life, as we know it, has been evolving for billions of years. It has evolved to process information and materials by zillions of nano-scale molecular “machines” all working in parallel, competing as well as backing each other up, maintaining themselves and the ecosystem supporting them. The total complexity of this machinery, also called the biosphere, is mind-boggling. In DNA, one bit of information takes less than 50 atoms. Given the atomic nature of physical matter, every part of life’s machinery is about as miniature as physically possible. Can AI achieve such complexity, robustness, and adaptability by alternative means, without DNA?
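A back-of-the-envelope check of that figure (the atom count below is an assumed, rough average for a nucleotide pair, not a measured value):

    # Rough check of the "less than 50 atoms per bit" claim for DNA.
    atoms_per_base_pair = 68   # assumed rough average: two nucleotides plus backbone
    bits_per_base_pair = 2     # four possible bases per position = log2(4) bits
    print(f"~{atoms_per_base_pair / bits_per_base_pair:.0f} atoms per bit of DNA information")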

Although this is hard to imagine, cultural evolution has produced tools not known to biological evolution. We can now record information as an electron density distribution in a silicon crystal at the 3 nm scale. Information can be processed much faster in a computer chip than in a living cell. Human brains contain about 10¹¹ neurons each, which is probably close to the limit of how many neurons a single biological brain can contain. Though this is more than current computer hardware offers to AI, it is not a limit for future AI systems. Moreover, humans have to communicate information to each other via the bottleneck of language; computers do not have such a limitation.

Where does this all leave us? Will the first two phases in the evolution of life—information mostly confined to DNA, and then information “breaking out” of the DNA harness but still underpinned by information in DNA—be followed by a third phase? Will information and its processing outside living organisms become robust enough to survive and thrive without the underpinning DNA? Will our civilisation be able to outlast the Solar system, and if so, will this happen with or without DNA?

To get to that point, our civilisation first needs to survive its infancy. For now, AI cannot exist without humans. For now, AI can only take over from us if we help it to do so. And indeed, among all the envisioned threats of AI, the most realistic one seems to be deception and spread of misinformation. In other words, corrupting information. Stopping this trend is our biggest near-term challenge.

Feature image by Daniel Falcão via Unsplash.


Aleph-AI: an organizing force or creative destruction in the artificial era?

The Aleph is a blazing space of about an inch in diameter containing the cosmos, Jorge Luis Borges told us in 1945, after being invited to see it in the basement of a house. The Aleph deeply unsettled him, revealing millions of delightful and awful scenes simultaneously. He clearly saw everything from all points of the universe, including the sea, the day and the night, every single ant and grain of sand, animals, symbols, sickness and war, the earth, the universe, the stunning alteration of death, scrutinizing eyes, mirrors, precise letters from his beloved, the mechanisms of love, his blood, my face, yours, and apparently that of every single person. Borges felt endless veneration and pity. He feared that nothing else would surprise him after this experience.

Amelia Valcárcel proposes that technological progress has given each of us the equivalent of an Aleph, with a display of about six inches. She compares Borges’ Aleph with mobile phones, which are accessible to most people. Now, I think we have at our fingertips the third generation of the Aleph, what we could call Aleph-AI. We use this new Aleph not only to see everything, but to work from everywhere, communicate whenever we want, get travel directions, play games, search for specific information, trade without borders, conduct banking, take pictures and notes, get instant translations or suggestions for completing a message, get recommendations for products we are likely to buy, listen to a personalized list of music, and perform an endless list of activities powered by AI. Has the six-inch Aleph-AI become a force of creative destruction, or an organizing force, in the artificial era?

Some workers feel threatened with being replaced by AI applications and losing their jobs. Fear has knocked on the doors of several unions, representing operators, customer service representatives, artists, manual workers, and office workers. For example, Hollywood artists protest because they fear losing control of their image to AI simulation and being replaced by computational creativity. Customer service representatives fear being replaced by chatbots, manual workers by robots, and office workers by a variety of AI innovations.

On the other hand, economists argue that AI can make us about 40 percent more productive and drive economic growth through the use of a virtual workforce. AI innovations could even generate new revenue streams. In recent years, startups have grown in record time, thanks to internet connectivity that almost everyone can get on highly accessible devices. Trading products and services has never been as seamless as it is now, thanks to AI innovations in the palm of our hands. Governments can potentially apply regulations and taxes with relative ease through the mixed bag of applications that serve millions over their mobile phones, says Thomas Lembong, who optimistically views this as more of an organizing force than a force of creative destruction.

But we cannot deny that some people will suffer from the impacts of this unstoppable force. History shows that every time innovations occur, some people face negative effects and react with fear. When Jacquard looms appeared at the beginning of the 1800s, artisans united under the name of Luddites. They protested, and destroyed and burned textile mills and machinery. Skilled textile artisans were frightened by the introduction of a technology that allowed workers, easily operating the Jacquard looms, to finish the job at a much faster speed. Some intellectuals supported the Luddite protests. Among them was a well-known poet of the time, Lord Byron (George Gordon Byron), Ada Lovelace’s father. Today, Ada Lovelace is recognized as the world’s first programmer and an AI forerunner who, Hannah Fry suggests, would have been fascinated by the Jacquard loom’s binary punch-card technology. Indeed, Ada Lovelace was among the first to notice the great potential of computers to perform complex tasks beyond calculation, such as composing music. The Luddite protests lasted two years and died out after the English government enacted the death penalty for those caught destroying machinery.

Technological progress will follow its path, and we are going to adapt. Did painters disappear when photography appeared? No, they re-invented themselves. Many techniques appeared: impressionism, expressionism, cubism, and abstract art, to mention a few. Previously, painters made a living from portraits. Today, we take selfies. How many of you have hired a painter for a portrait? With Aleph-AIs in our hands, we retouch pictures taken on our own. Will actors be entirely replaced by AI? Not happening. The changes in the film industry will be led by a group of professionals, including artists, computer scientists, entrepreneurs, lawyers, and policymakers. This will be the case in every sector and industry. Will there be suffering? Change is suffering. Change is new possibilities. Change could be re-thinking what we want for the future during the artificial era.

Feature image by Anna Bliokh via iStock.


A librarian’s reflections on 2023

What did 2023 hold for academic libraries? What progress have we seen in the library sector? What challenges have academic libraries faced?

At OUP, we’re eager to hear about the experience of academic librarians and to foster conversation and reflection within the academic librarian community. We’re conscious of the changing landscape in which academic libraries operate. So, as 2024 gets underway, we took the opportunity to ask Anna França, Head of Collections and Archives at Edge Hill University, to share her impressions of the library sector and her experiences throughout the past year.

Tell us about one thing you’ve been surprised by in the library sector this year.

I’m continually surprised and impressed by how quick the library sector is to respond and adapt to wider trends and challenges. The sector’s response to developments in generative AI is one obvious example from the past year (and one that I’ll speak more on later), but I think academic libraries have navigated some difficult years remarkably well and are continuing to demonstrate their role as a vital cornerstone of their academic institutions. I have worked in academic libraries for over 18 years and in that time I have seen the library’s role shift from being primarily a support service to becoming an active partner across a range of important areas.

What have you found most challenging in your role over the past year?

In recent years, libraries have been at the forefront of conversations on wide-ranging and complex topics, including generative AI and machine learning, learner analytics and Open Research, while also placing an emphasis on support for Equality, Diversity, and Inclusion initiatives and the role of libraries in promoting social justice. These developments make libraries interesting spaces in which to work and provide opportunities for innovation and collaboration, but keeping up to speed as a professional with the most current information on a topic can be challenging. There is always something new to read or learn about!

There has been a lot of debate this year about the place of AI in academia. How has the progression of AI affected your library or role thus far?

Supporting students to develop digital literacy skills has always been an integral role of the library at Edge Hill, but with advancements in generative AI and the increased risk that students will be exposed to erroneous or biased information, we know this is more important than ever.

As a library we have recently established a group tasked with looking at the potential impacts of AI on our services. I’m excited by the opportunities that AI might offer to deliver enhanced services for our users—for example, supporting intelligent resource discovery, improving the accessibility of content, and enabling our users to carry out their research more efficiently. I certainly think that libraries are uniquely positioned within their institutions to help drive and influence the conversation around AI.

What’s an initiative your library took in 2023 that you’re proud of?

I am very proud of the work that has taken place around our archive service. Our archive is at the center of a new research group, Research Catalyst, which brings together library and archive professionals, academic staff, and students who are interested in how we can use innovative and interactive methods to research items in our collections. Research Catalyst has a focus on engagement and using the archive to connect with new audiences. One initiative involved us developing an online Archive Showcase and an associated competition which asked local school students and adults to create an original work inspired by the archive. This work led us to be shortlisted for a 2023 Times Higher Education Outstanding Library Team award—it was wonderful to have our initiative recognized nationally in this way.

Feature image by Mathias Reding via Unsplash, public domain.
