
OUPblog » Technology


Is humanity a passing phase in evolution of intelligence and civilisation?


“The History of every major Galactic Civilization tends to pass through three distinct and recognizable phases, those of Survival, Inquiry and Sophistication…”

Douglas Adams, The Hitchhiker’s Guide to The Galaxy (1979)

“I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”

Geoffrey Hinton (2023)

In light of the recent spectacular developments in artificial intelligence (AI), questions are now being asked about whether AI could present a danger to humanity. Can AI take over from us? Is humanity a passing phase in the evolution of intelligence and civilisation? Let’s look at these questions from the long-term evolutionary perspective.

Life has existed on Earth for more than three billion years, humanity for less than 0.01% of this time, and civilisation for even less. A billion years from now, our Sun will start expanding and the Earth will soon become too hot for life. Thus, evolutionarily, life on our planet is already reaching old age, while human civilisation has just been born. Can AI help our civilisation to outlast the habitable Solar system and, possibly, life itself, as we know it presently?

Defining life is not easy, but few will disagree that an essential feature of life is its ability to process information. Every animal brain does this, every living cell does this, and even more fundamentally, evolution is continuously processing information residing in the entire collection of genomes on Earth, via the genetic algorithm of Darwin’s survival of the fittest. There is no life without information.
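The genetic algorithm invoked above can be sketched in a few lines of Python. Everything here is illustrative: real evolution has no fixed target, and the "OneMax" fitness function (count the 1s in a bitstring) is just the simplest possible stand-in for survival of the fittest.

```python
import random

def evolve(genome_len=20, pop_size=30, generations=100, seed=42):
    """Minimal genetic algorithm: evolve random bitstrings toward all 1s.

    Fitness is simply the number of 1s ("OneMax"); each generation the
    fitter half survives, and the rest are replaced by mutated crossovers
    of surviving parents -- Darwinian selection in miniature.
    """
    rng = random.Random(seed)
    fitness = sum  # number of 1s in a genome
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)      # single-point crossover
            child = a[:cut] + b[cut:]
            children.append([bit ^ (rng.random() < 0.02)  # rare mutation
                             for bit in child])
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Selection, crossover, and mutation are the whole trick: information about what "works" accumulates in the population of genomes without any individual "knowing" it.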

It can be argued that until very recently on the evolutionary timescale, i.e. until human language evolved, most information that existed on Earth and was durable enough to last for more than a generation was recorded in DNA or in some other polymer molecules. The emergence of human language changed this: with language, information started accumulating in other media, such as clay tablets, paper, or computer memory chips. Most likely, information is now growing faster in the world’s libraries and computer clouds than in the DNA of all genomes of all species.

We can refer to this “new” information as cultural information, as opposed to the genetic information of DNA. Cultural information is the basis of a civilisation; genetic information is the basis of the life underpinning it. Thus, if genetic information became too damaged, then life, and with it cultural information and civilisation, would soon disappear. But could this change in the future? There is no civilisation without cultural information, but can there be a civilisation without genetic information? Can our civilisation outlast the Solar system in the form of AI? Or will genetic information always be needed to underpin any civilisation?

For now, AI exists only as information in computer hardware built and maintained by humans. For AI to exist autonomously, it would need to “break out” of the “information world” of bits and bytes into the physical world of atoms and molecules. AI would need robots maintaining and repairing the hardware on which it runs, recycling the materials from which this hardware is built, and mining for replacements. Moreover, this artificial robot/computer “ecosystem” would not only have to maintain itself but, as the environment changes, would also have to change and adapt.

Life, as we know it, has been evolving for billions of years. It has evolved to process information and materials with zillions of nano-scale molecular “machines,” all working in parallel, competing with as well as backing each other up, maintaining themselves and the ecosystem that supports them. The total complexity of this machinery, also called the biosphere, is mindboggling. In DNA, one bit of information takes fewer than 50 atoms. Given the atomic nature of physical matter, every part of life’s machinery is about as miniature as physically possible. Can AI achieve such complexity, robustness, and adaptability by alternative means, without DNA?
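The "fewer than 50 atoms per bit" figure can be sanity-checked with simple arithmetic. This is a back-of-envelope sketch, not a precise accounting: it uses the atomic formula of one free nucleotide (deoxyadenosine monophosphate, C10H14N5O6P) and ignores the water lost when nucleotides polymerise into a strand.

```python
# Back-of-envelope check of the "fewer than 50 atoms per bit" figure.
# Deoxyadenosine monophosphate (C10H14N5O6P) is one DNA nucleotide;
# the other three nucleotides have similar atom counts.
atoms_per_nucleotide = 10 + 14 + 5 + 6 + 1   # C + H + N + O + P = 36

bits_per_base_pair = 2         # four possible bases: log2(4) = 2 bits
nucleotides_per_base_pair = 2  # the complementary strand adds no new information

atoms_per_bit = atoms_per_nucleotide * nucleotides_per_base_pair / bits_per_base_pair
print(atoms_per_bit)  # 36.0 -- comfortably under 50
```

By comparison, even a 3 nm silicon memory cell spans thousands of atoms, which is why DNA remains the densest known durable information store.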

Although this is hard to imagine, cultural evolution has produced tools not known to biological evolution. We can now record information as an electron density distribution in a silicon crystal at the 3 nm scale. Information can be processed much faster in a computer chip than in a living cell. Human brains contain about 10¹¹ neurons each, which is probably close to the limit of how many neurons a single biological brain can contain. Though this is more than computer hardware currently offers to AI, for future AI systems this is not a limit. Moreover, humans have to communicate information to each other through the bottleneck of language; computers do not have such a limitation.

Where does this all leave us? Will the first two phases in the evolution of life—information mostly confined to DNA, and then information “breaking out” of the DNA harness but still underpinned by it—be followed by a third phase? Will information and its processing outside living organisms become robust enough to survive and thrive without the underpinning DNA? Will our civilisation be able to outlast the Solar system, and if so, will this happen with or without DNA?

To get to that point, our civilisation first needs to survive its infancy. For now, AI cannot exist without humans. For now, AI can only take over from us if we help it to do so. And indeed, among all the envisioned threats of AI, the most realistic one seems to be deception and the spread of misinformation. In other words, corrupting information. Stopping this trend is our biggest near-term challenge.

Feature image by Daniel Falcão via Unsplash.

OUPblog - Academic insights for the thinking world.

 

Aleph-AI: an organizing force or creative destruction in the artificial era?


The Aleph is a blazing sphere of about an inch in diameter containing the cosmos, Jorge Luis Borges told us in 1945, after being invited to see it in the basement of a house. The Aleph deeply disturbed him, revealing millions of delightful and awful scenes simultaneously. He clearly saw everything from all points of the universe, including the sea, the day and the night, every single ant and grain of sand, animals, symbols, sickness and war, the earth, the universe, the stunning alteration of death, scrutinizing eyes, mirrors, precise letters from his beloved, the mechanisms of love, his blood, my face, yours, and apparently that of every single person. Borges felt endless veneration and pity. He feared that nothing else would surprise him after this experience.

Amelia Valcárcel proposes that technological progress has equipped each of us with an Aleph with a display of about six inches: she compares Borges’ Aleph with mobile phones, which are accessible to most people. Now, I think we have at our fingertips the third generation of Aleph, which we could call Aleph-AI. We use this new Aleph not only to see everything, but to work from everywhere, communicate whenever we want, get travel directions, play games, search for specific information, trade without borders, conduct banking, take pictures and notes, get instant translations or suggestions for completing a message, get recommendations for products we are likely to buy, listen to a personalized list of music, and perform an endless list of activities powered by AI. Has the six-inch Aleph-AI become creative destruction, or an organizing force in the artificial era?

Some workers feel threatened with being replaced by AI applications and losing their jobs. Fear has knocked on the doors of several professions: machine operators, customer service representatives, artists, manual workers, and office workers. For example, Hollywood artists protest because they fear losing control of their image by being simulated by AI and replaced by computational creativity. Customer service representatives fear being replaced by chatbots, manual workers by robots, and office workers by a variety of AI innovations.

On the other hand, economists argue that AI can make us about 40 percent more productive and drive economic growth through the use of a virtual workforce. AI innovations could even generate new revenue streams. In recent years, startups have been growing in record time, thanks to internet connectivity that almost everyone can get on highly accessible devices. Trading products and services has never been as seamless as it is now, thanks to AI innovations in the palm of our hands. Governments can potentially apply regulations and taxes easily through a mixed bag of applications that serve millions over their mobile phones, says Thomas Lembong, who optimistically views this as more of an organizing force than a force of creative destruction.

But we cannot deny that some people will suffer from the impacts of this unstoppable force. History shows that every time innovations occur, some people face negative effects and react with fear. When Jacquard looms appeared at the beginning of the 1800s, artisans united under the name of Luddites. They protested, destroying machinery and burning mills. Skilled textile artisans were frightened by the introduction of a technology that let less-skilled workers, easily operating Jacquard looms, finish the job at a much faster speed. Some intellectuals supported the Luddite protests. Among them was a well-known poet of the time, Lord George Gordon Byron, Ada Lovelace’s father. Today, Ada Lovelace is recognized as the world’s first programmer and an AI forerunner, who could have been fascinated by the Jacquard loom’s binary technology, suggests Hannah Fry. Indeed, Ada Lovelace was among the first to notice the great potential of computers to perform complex tasks, like composing music. The Luddite protests lasted two years and died out after the English government enacted the death penalty for those caught destroying machinery.

Technological progress will follow its path, and we are going to adapt. Did painters disappear when photography appeared? No, they re-invented themselves. Many techniques appeared: impressionism, expressionism, cubism, and abstract art, to mention a few. Previously, painters made a living from portraits. Today, we take selfies. How many of you have hired a painter for a portrait? With Aleph-AIs in our hands, we retouch the pictures we take ourselves. Will actors be entirely replaced by AI? Not happening. The changes in the film industry will be led by a group of professionals, including artists, computer scientists, entrepreneurs, lawyers, and policymakers. This will be the case in every sector and industry. Will there be suffering? Change is suffering. Change is new possibilities. Change could be re-thinking what we want for the future during the artificial era.

Feature image by Anna Bliokh via iStock.


 

A librarian’s reflections on 2023


What did 2023 hold for academic libraries? What progress have we seen in the library sector? What challenges have academic libraries faced?

At OUP, we’re eager to hear about the experience of academic librarians and to foster conversation and reflection within the academic librarian community. We’re conscious of the changing landscape in which academic libraries operate. So, as 2024 gets underway, we took the opportunity to ask Anna França, Head of Collections and Archives at Edge Hill University, to share her impressions of the library sector and her experiences throughout the past year.

Tell us about one thing you’ve been surprised by in the library sector this year.

I’m continually surprised and impressed by how quick the library sector is to respond and adapt to wider trends and challenges. The sector’s response to developments in generative AI is one obvious example from the past year (and one that I’ll speak more on later), but I think academic libraries have navigated some difficult years remarkably well and are continuing to demonstrate their role as a vital cornerstone of their academic institutions. I have worked in academic libraries for over 18 years and in that time I have seen the library’s role shift from being primarily a support service to becoming an active partner across a range of important areas.

What have you found most challenging in your role over the past year?

In recent years, libraries have been at the forefront of conversations on wide-ranging and complex topics, including generative AI and machine learning, learner analytics and Open Research, while also placing an emphasis on support for Equality, Diversity, and Inclusion initiatives and the role of libraries in promoting social justice. These developments make libraries interesting spaces in which to work and provide opportunities for innovation and collaboration, but keeping up to speed as a professional with the most current information on a topic can be challenging. There is always something new to read or learn about!

There has been a lot of debate this year about the place of AI in academia. How has the progression of AI affected your library or role thus far?

Supporting students to develop digital literacy skills has always been an integral role of the library at Edge Hill, but with advancements in generative AI and the increased risk that students will be exposed to erroneous or biased information, we know this is more important than ever.

As a library we have recently established a group tasked with looking at the potential impacts of AI on our services. I’m excited by the opportunities that AI might offer to deliver enhanced services for our users—for example, supporting intelligent resource discovery, improving the accessibility of content, and enabling our users to carry out their research more efficiently. I certainly think that libraries are uniquely positioned within their institutions to help drive and influence the conversation around AI.

What’s an initiative your library took in 2023 that you’re proud of?

I am very proud of the work that has taken place around our archive service. Our archive is at the center of a new research group, Research Catalyst, which brings together library and archive professionals, academic staff, and students who are interested in how we can use innovative and interactive methods to research items in our collections. Research Catalyst has a focus on engagement and using the archive to connect with new audiences. One initiative involved us developing an online Archive Showcase and an associated competition which asked local school students and adults to create an original work inspired by the archive. This work led us to be shortlisted for a 2023 Times Higher Education Outstanding Library Team award—it was wonderful to have our initiative recognized nationally in this way.

Feature image by Mathias Reding via Unsplash, public domain.


 

How can business leaders add value with intuition in the age of AI? [Long Read]


In a speech to the Economic Club of Washington in 2018, Jeff Bezos described how Amazon made sense of the challenge of whether and how to design and implement a loyalty scheme for its customers. This was a highly consequential decision for the business; for some time, Amazon had been searching for an answer to the question: “what would a loyalty program for Amazon look like?”

A junior software engineer came up with the idea of fast, free shipping. But a big problem was that shipping is expensive. Customers, meanwhile, like free shipping, so much so that the big eaters at Amazon’s “buffet” would take advantage by getting free shipping on low-cost items, which would not be good for Amazon’s bottom line. When the Amazon finance team modelled the idea of fast, free shipping, the results “didn’t look pretty.” In fact, they were nothing short of “horrifying.”

But Bezos was experienced enough to know that some of his best decisions had been made with “guts… not analysis.” In deciding whether to go with Amazon Prime, the analysts’ data could only take the problem so far towards being solved. Bezos decided to go with his gut. Prime was launched in 2005. It has become one of the world’s most popular subscription services, with over 100 million members who spend on average $1,400 per year, compared to $600 for non-Prime members.

As a seasoned executive and experienced entrepreneur Bezos sensed that the Prime idea could work. And in his speech he reminded his audience that “if you can make a decision with analysis, you should do so. But it turns out in life that your most important decisions are always made with instinct and intuition, taste, heart.”

The launch of Amazon Prime is a prime example of a CEO’s informed and intelligent use of intuition paying off in decision-making under uncertainty (where outcomes are unknown and their likelihood of occurrence cannot be estimated) rather than under risk (where outcomes are known and probabilities can be estimated). The customer loyalty problem for Amazon was uncertain because probabilities and consequences could not be known at the time. No amount of analysis could reduce the fast, free shipping solution to the odds of success or failure.

Under these uncertain circumstances Bezos chose to go with his gut. This is not an uncommon CEO predicament or response. In business, decision-makers often have to act “instinctively” even though they have no way of knowing what the outcome is likely to be. The world is becoming more, not less, uncertain, and “radical uncertainty” seems to have become the norm for strategic decision-making both in business and in politics. The informed and intelligent use of intuition on the part of those who have the nous and experience to be able to go with their gut is one way forward.

Human intuition meets AI

Turning to the uncertainties posed by artificial intelligence, and winding the clock back more than half a century, the psychologist Paul Meehl in his book Clinical Versus Statistical Prediction (1954) compared how well the subjective predictions of trained clinicians such as physicians, psychologists, and counsellors fared against predictions based on simple statistical algorithms. To many people’s surprise, Meehl found that the experts’ accuracy of prediction, for example trained counsellors’ predictions of college grades, was either matched or exceeded by the algorithm.

The decision-making landscape that Meehl studied all those years ago has been transformed radically by the technological revolutions of the “Information Age” (see Jay Liebowitz, Bursting the Big Data Bubble, 2014). Computers have exceeded immeasurably the human brain’s computational capacity. Big data, data analytics, machine learning, and artificial intelligence (AI) have been described as “the new oil” (see Eugene Sadler-Smith, “Researching Intuition: A Curious Passion” in Bursting the Big Data Bubble, 2014). They have opened up possibilities for outsourcing to machines many of the tasks that were until recently the exclusive preserve of humans. The influence of AI and machine learning is extending beyond relatively routine and sometimes mundane tasks such as cashiering in supermarkets. AI now figures prominently behind the scenes in things as diverse as social media feeds, the design of smart cars, and online advertising. It has extended its reach into complex professional areas such as medical diagnosis, investment banking, business consulting, script writing for advertisements, and management education (see Marcus du Sautoy, The Creativity Code, 2019).

There is nothing new in machines replacing humans: they did so in the mechanisations of the agricultural and industrial revolutions, when they replaced dirty and dangerous work; dull work and decision-making work might be next. Daniel Susskind, author of A World Without Work, thinks the current technological revolution is on a scale hitherto unheard of. The speed, accuracy, and scale with which robots and computers can now perform tasks are orders of magnitude greater than those of any human or previous technology. This is one reason this revolution is different, and why it has been called nothing less than the “biggest event in human history” by Stuart Russell, founder of the Centre for Human-Compatible Artificial Intelligence at the University of California, Berkeley.

The widespread availability of data, along with cheap, scalable computational power and rapid, ongoing developments of new AI techniques such as machine learning and deep learning, has meant that AI has become a powerful tool in business management (see Gijs Overgoor, et al). For example, the financial services industry deals with high-stakes, complex problems involving large numbers of interacting variables. It has developed AI that can be used to identify cybercrime schemes such as money laundering, fraud, and ATM hacking. By using complex algorithms, the latest generation of AI can uncover fraudulent activity that is hidden amongst millions of innocent transactions and alert human analysts with easily digestible, traceable, and logged data, helping them to decide, using human intuition based on their “feet on the ground” experience, whether activity is suspicious or not and take the appropriate action. This is just one example, and there are very few areas of business which are likely to be exempt from AI’s influence. Taking this to its ultimate conclusion, Elon Musk said at the recent UK “AI Safety Summit” held at Bletchley Park (where Alan Turing worked as a code breaker in World War II): “There will come a point where no job is needed—you can have a job if you want one for personal satisfaction but AI will do everything. It’s both good and bad—one of the challenges in the future will be how do we find meaning in life.”

Creativity and AI

Creativity is increasingly and vitally important in many aspects of business management. It is perhaps one area in which we might assume that humans will always have the edge. However, creative industries, such as advertising, are using AI for idea generation. The car manufacturer Lexus used IBM’s Watson AI to write the “world’s most intuitive car ad” for a new model, the strap line for which is “The new Lexus ES. Driven by intuition.” The aim was to use a computer to write the ad script for what Lexus claimed to be “the most intuitive car in the world.” To do so, Watson was programmed to analyse 15 years’ worth of award-winning footage from the prestigious Cannes Lions international award for creativity using its “visual recognition” (which uses deep learning to analyse images of scenes, objects, faces, and other visual content), “tone analyser” (which interprets emotions and communication style in text), and “personality insights” (which uses data to make inferences about consumers’ personalities) applications. Watson AI helped to “re-write car advertising” by identifying the core elements of award-winning content that was both “emotionally intelligent” and “entertaining.” Watson literally wrote the script outline, which was then used by the creative agency, producers, and directors to build an emotionally gripping advertisement.

Even though the Lexus-IBM collaboration reflects a breakthrough application of AI in the creative industries, IBM’s stated aim is not to attempt to “recreate the human mind but to inspire creativity and free up time to spend thinking about the creative process.” Whether Watson’s advertisement is truly creative, in the sense of being both novel and useful, is open to question (it was based on rules derived from human works that were judged to be outstandingly creative by human judges at the Cannes festival). In a recent collaborative study between Harvard Business School and Boston Consulting Group, “humans plus AI” were found to produce superior results compared to “humans without AI” when generating ideas by following rules created by humans. However, “creativity makes new rules, rules do not make creativity” (to paraphrase the French composer Claude Debussy). Generative AI, which is rule-following rather than rule-making, is likely to produce “creative” outputs that are homogeneous and that may ultimately fail the test of true creativity, i.e. being both novel (in the actual sense of the word) and useful. Human creative intuition, on the other hand, adds value by:

  1. going beyond conventional design processes and rules
  2. drawing on human beings’ ability to think outside the box and produce innovative solutions
  3. sensing what will or won’t work
  4. yielding products and services that stand out in the market, capture the attention of consumers, and drive business success.

—as based on suggestions offered by ChatGPT in response to the question: “how does creative intuition add value to organizations?”

Emotional intelligence and AI

Another area in which fourth-generation AI is making inroads is the emotional and inter-personal domain. The US-based start-up Luka has developed the artificially intelligent journaling chatbot “Replika,” which is designed to encourage people to “open up and talk about their day.” Whilst Siri and Alexa are emotionally “cold” digital assistants, Replika is designed to be more like your “best friend.” It injects emotion into conversations and learns from the user’s questions and answers. It’s early days, and despite the hype, rigorous research is required to evaluate the claims being made on behalf of such applications.

“The fact that computers are making inroads into areas that were once considered uniquely human is nothing new.”

The fact that computers are making inroads into areas that were once considered uniquely human is nothing new. Perhaps intuition is next. The roots of modern intuition research are in chess, an area of human expertise in which grand masters intuit “the good move straight away.” Nobel laureate and one of the founding figures of AI, Herbert Simon, based his classic definition of intuition (“analyses frozen into habit and the capacity for rapid response through recognition”) on his research into expertise in chess. He estimated that grandmasters have stored on the order of 50,000 “familiar patterns” in their long-term memories, the recognition and recall of which enables them to play chess intuitively at the board.
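Simon’s “recognition” account of intuition can be caricatured in a few lines of code: intuition as a lookup into a store of familiar patterns, with no search at all. The patterns, responses, and feature labels below are purely illustrative; a real grandmaster’s 50,000 chunks are of course not string labels.

```python
# Intuition as recognition (after Simon): a toy store of familiar
# patterns, each paired with a promising response, recalled without search.
familiar_patterns = {
    "open h-file near castled king": "double rooks on the h-file",
    "isolated queen's pawn": "blockade the square in front of it",
}

def intuitive_move(position_features):
    """Return the stored response for the first recognized pattern."""
    for pattern, response in familiar_patterns.items():
        if pattern in position_features:
            return response
    return None  # nothing recognized: fall back on slow, deliberate analysis

move = intuitive_move({"open h-file near castled king"})
```

The contrast with brute-force search is the point: recognition retrieves an answer in one step, whereas Deep Blue, as described below, evaluated millions of candidate moves instead.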

In 1997, the chess establishment was astonished when IBM’s Deep Blue beat the Russian chess grand master and world champion Garry Kasparov. Does this mean that IBM’s AI was able to out-intuit a human chess master? Kasparov thinks not. The strategy that Deep Blue used to beat Kasparov was fundamentally different from how another human being might have attempted to do so. Deep Blue did not beat Kasparov by replicating or mimicking his thinking processes. In Kasparov’s own words:

“instead of a computer that thought and played like a chess champion, with human creativity and intuition, they [the ‘AI crowd’] got one that played like a machine, systematically, evaluating 200 million chess moves on the chess board per second and winning with brute number-crunching force.”

Nobel laureate in physics Richard Feynman commented presciently in 1985 that it should be possible to develop a machine that surpasses nature’s abilities without imitating nature. If a computer ever becomes capable of out-intuiting a human, the rules that the computer relies on are likely to be fundamentally different from those used by humans, and its mode of reasoning will be very different from that which evolved in the human organism over many hundreds of millennia (see Gerd Gigerenzer, Gut Feelings, 2007).

AI’s limitations

In spite of the current hype, AI can also be surprisingly ineffective. Significant problems with autonomous vehicles have been encountered and are well documented, as in the recent case which came to court in Arizona involving a fatality allegedly caused by an Uber self-driving car. In medical diagnosis, even though the freckle-analysing system developed at Stanford University does not replicate how doctors exercise their intuitive judgement through “gut feel” for skin diseases, it can nonetheless, through its prodigious number-crunching power, diagnose skin cancer without knowing anything at all about dermatology (see Daniel Susskind, A World Without Work, 2020). But as the eminent computer scientist Stuart Russell remarked, the deep learning that such AI systems rely on can be quite difficult to get right; for example, some of the “algorithms that have learned to recognise cancerous skin lesions, turn out to completely fail if you rotate the photograph by 45 degrees [which] doesn’t instil a lot of confidence in this technology.”

Is the balance of how we comprehend situations and take business decisions shifting inexorably away from humans and in favour of machines? Is “artificial intuition” inevitable, and will it herald the demise of “human intuition”? If an artificial intuition is eventually realized that can match that of a human, it will be one of the pivotal outcomes of the fourth industrial revolution―perhaps the ultimate form of AI.

ChatGPT appears to be “aware” of its own limitations in this regard. In response to the question “Dear ChatGPT: What happens when you intuit?” it replied:

“As a language model I don’t have the ability to intuit. I am a machine learning based algorithm that is designed to understand and generate human language. I can understand and process information provided to me, but I don’t have the ability to have intuition or feelings.”

More apocalyptically, could the creation of an artificial intuition be the “canary in the coalmine,” signalling the emergence of Vernor Vinge’s “technological singularity,” in which large computer networks and their users suddenly “wake up” as “superhumanly intelligent entities,” as Musk and others warn? Could such a development turn out to be a Frankenstein’s monster, with unknown but potentially negative, unintended consequences for its makers? The potential and the pitfalls of AI are firmly in the domain of the radically uncertain, and identifying the potential outcomes and how to manage them is likely to involve a judicious mix of rational analysis and informed intuition on the part of political and business leaders.

“The potential and the pitfalls of AI are firmly in the domain of the radically uncertain.”

Human intuition, AI, and business management

Making any predictions about what computers will or will not be able to do in the future is a hostage to fortune. For the foreseeable future, most managers will continue to rely on their own rather than a computer’s intuitive judgements when taking both day-to-day and strategic decisions. Therefore, until a viable “artificial intuition” arrives that is capable of out-intuiting a human, the more pressing and practical question is: “what value does human intuition add in business?” The technological advancements of the information age have endowed machines with the hard skill of “solving,” which far outstrips this capability in the human mind. The evolved capacities of the intuitive mind have endowed managers with the arguably hard-to-automate, or perhaps even impossible-to-automate, soft skill of “sensing.” This is the essence of human intuition.

Perhaps the answer lies in an “Augmented Intelligence Model (AIM),” which marries gut instinct with data and analytics. Such a model might combine three elements:

  1. human analytical intelligence, which is adept in communicating, diagnosing, evaluating, interpreting, etc.
  2. human intuitive intelligence, which is adept in creating, empathising, feeling, judging, relating, sensing, etc.
  3. artificial intelligence, which is adept in analysing, correlating, optimising, predicting, recognizing, text-mining, etc.

The most interesting spaces in this model lie in the overlaps between the three intelligences: for example, human analytical intelligence augmenting artificial intelligence in a chatbot with human intervention. Similar overlaps exist between human analytical and human intuitive intelligence, and between human intuitive intelligence and artificial intelligence. The richest space of all is where the three intersect; it is here that the most value stands to be added, by leveraging the combined strengths of human intuitive intelligence, human analytical intelligence, and artificial intelligence in an Augmented Intelligence Model that can drive success.

This blog post is adapted from Chapter 1 of Intuition in Business by Eugene Sadler-Smith.

Feature image by Tara Winstead via Pexels.

OUPblog - Academic insights for the thinking world.

 

Science in the time of war: voices from Ukraine

On 23 February 2022, I drove back to Michigan after giving a talk at the University of Kentucky on genome diversity in Ukraine. My niece Zlata Bilanin, a recent college graduate from Ukraine, was with me. She was calling her friends in Kyiv, worried. A single question was on everyone’s mind: will there be a war tomorrow? The thought of invasion, though, seemed unimaginable, illogical, even absurd.

At 2am, Zlata woke me up. "They are coming," she said. I remember the color of her face: pale green. The world would never be the same again.

Indeed, the war has changed everything; priorities are no longer the same. Many researchers enlisted and went to fight. Others, their homes destroyed, fled. Many packed and crossed the border in the hope of a better life in the West.

Nearly 600 days later, the war continues, each day amplifying the human tragedy of lives and futures lost, lives that could otherwise have been dedicated to better and more meaningful purposes.

As a researcher, my colleagues and I could not help but think about the crushing blow the war delivered to the vibrant Ukrainian scientific community. Ukraine is a country with incredible resources, unique human genetics, given that the land once served as a crossroads of human migration, and a large, dedicated community of researchers working on numerous and varied projects. Now, however, research centers have been destroyed, and universities have few new students, as young people go to study abroad, where there are opportunities and they cannot be drafted.

Through all this, although my laboratory is at Oakland University, I continue to work with my colleagues back home, building a research program in genomics at my alma mater, Uzhhorod National University (UzhNU). Several years ago, my colleagues and I dreamed up a project to sequence a hundred Ukrainian genomes, giving researchers the tools to study the history of migration, admixture, and the distribution of medically relevant variation in the local population. This collaboration started with the President of UzhNU, Prof Volodymyr Smolanka, a neurosurgeon by training, an effective administrator, and an active scientist.

Given his work and his position, I wanted a comment from him for this blog post on the state of Ukrainian science since the start of the war. I called and asked, simply: "Is it harder or easier?" His reply matches the current thinking of those involved in retaining and rebuilding Ukrainian scientific programs: "One thing I can say is that there is a lot less government funding. That's clearly a negative. On the other hand, there seem to be more grant opportunities from international sources, and this helps us to stay afloat."

"What about the people?" I asked. "How do they feel about science?"

“I would not say that they were optimistic. I am not sure that pessimistic would be the right word either. You know, those scientists that did not leave, they are working, they really want to work in science.”

Thinking about those who are not leaving, I contacted an old colleague who has stayed: Dr Serghey Gashchak, a legendary field biologist, who, among many things, worked in the Chornobyl Exclusion Zone and knew everything there was to know about animals in Chornobyl. We used to call him “Stalker” in reference to a 1979 Soviet science fiction art film about a post-apocalyptic wasteland called “The Zone.”

Given his research background and work in a disaster zone, I emailed Serghey about his thoughts on the current situation. “It’s impossible to work in the Zone these days,” he said. “The barbarians are not at the gates anymore, but there are no research projects, and if there were, there’s no one to work on them. Many of the research staff are fighting in the war. Perhaps it is time to close.”

I was stunned to hear that. Knowing Serghey's inquisitive nature, it was hard for me to believe he would just stop doing research. Worse, I realized, this sentiment was likely shared by many. While my head said it might be true, my heart felt there must be a way forward. But with the war's destruction of institutions and funding mechanisms, any way forward could not rely on expensive infrastructure and top-down government funding schemes; those would take decades to rebuild. What was needed was a way to integrate Ukrainian research into the worldwide research community: to bring opportunity and virtual infrastructure to Ukraine. In fact, the basic mechanisms for bringing research to places all around the world have been in place for decades, in the form of international courses and conferences, remote learning, and worldwide collaboration. Quite simply, we could take the current international infrastructure and modify it to empower researchers in disaster zones.

A case in point is a summer research program developed in 2022—during the war—that takes place at Uzhhorod National University, which, although it is in Ukraine, is a safe distance from the war zone. This research program is led by an international team: Drs Fyodor Kondrashov (OIST, Japan), Roderic Guigo (CRG, Spain), Serghei Mangul (USC), and Wolfgang Huber (EMBL, Germany). Here, international faculty come to Ukrainian students and continue to train them and engage them in work around the globe.

I called Dr Kondrashov at his home in Okinawa and asked what research area he thought would be most useful to bring to a devastated Ukraine. He replied immediately: “Bioinformatics is a good choice because you could accomplish a lot more with the same amounts of resources than in other disciplines, such as molecular biology.”

He was right. The hybrid nature of bioinformatics—combining biology, computer science, mathematics, and statistics—encourages the cross-disciplinary collaborations essential for solving complex biological problems, collaborations that can easily be carried out across borders. Moreover, skills in these areas are highly transferable, lend themselves to remote work, and can serve as a catalyst for revitalizing war-affected regions.

This is just one example of how international infrastructure already in place can be brought to Ukrainian research, and it is now one of many ongoing projects allowing Ukrainian researchers to continue their work. Many more examples are presented in the recent review, Scientists without Borders, in GigaScience. In fact, we have come to realize, and have described in the review, that these mechanisms can be expanded: taking suitable, already existing international mechanisms and infrastructure to areas anywhere in the world that have been devastated by political strife or natural disasters.

As for my own involvement, I teach and train Ukrainian students remotely. It is well worth it. One example of the passion of young researchers to continue their training and embrace new opportunities is Valerii Pokrytiuk. He was admitted to my graduate program in bioinformatics at Oakland University in Michigan, but before he could come, the war broke out. Valerii volunteered to fight and is doing so somewhere in Eastern Ukraine. Periodically, when conditions allow, he still joins us online for book club discussions, lab meetings, and the courses I teach.

The war continues. And so does our fight.

Featured image: “Bucha, Ukraine, June 2022” by U.S. Embassy Kyiv Ukraine, Wikimedia Commons (public domain)


 
