

Policy meets politics on the frontiers of world urbanization

At a time when funding for urban infrastructure and the promotion of an overarching global goal—the hard-won SDG 11—have catapulted cities up the international policy agenda, it’s hard to believe that urban issues could ever have been considered marginal.

In relation to much of Africa, however, until about 15 years ago urban development challenges were a fringe concern in both policy and academic research. Many governments on the continent—particularly in Eastern Africa, where a suite of countries were governed by regimes whose rebel origins lay deep in the rural peripheries—viewed city-dwellers with either indifference or active hostility. International donor agencies ploughed funds into rural development, and later into aspatial ‘social’ sectors such as education and health. Meanwhile, China’s renewed interest in Africa in the early 2000s was widely seen as focused on natural resource extraction.

This all changed from around 2010. The intense refocusing of international attention on African cities has sometimes been taken for granted as a natural consequence of the continent’s urbanisation, and of evolving international aid and investment priorities. Yet the focus on external and demographic drivers can obscure how this reorientation has been shaped in strikingly diverse ways by different political contexts.

Against the background of this ‘urban turn’ in twenty-first century Africa, my book Politics and the Urban Frontier explores how domestic politics and power struggles harness the demographic force of urban growth, alongside diversifying flows of international finance, to produce very different kinds of cities. I explore this in a set of countries in East Africa (specifically, Ethiopia, Rwanda and Uganda), which I argue is the global urban frontier: the region with the lowest proportion of people living in urban areas, but also on average the fastest rates of urbanisation. Globally-sanctioned ideals of urban progress have been central to urban development in this region, yet their fate in any given city is partly determined by how they become entangled with other political and economic agendas.

What’s missing in ‘global’ urban policy debates?

The kinds of blueprints that have often been promoted for cities in low-income countries (think of the vogue for ‘self-help’ housing schemes in the 1970s, the prescriptive urban ‘good governance’ reforms dominating the 1990s, or the supposed panacea of land titling since the early 2000s) rest on certain generic assumptions about how cities work. These policy approaches are steeped in ideas of private property, infrastructure planning, and capitalist social relations that are largely rooted in industrialised countries elsewhere. African cities are often seen by foreign donors and external investors as substandard versions of the urban norm: low-income, dysfunctional nearly-cities that can be ‘lifted’ with the application of enough funds and the right regulatory frameworks.

Yet cities are not just bundles of land, regulations, and economic resources. They are also fundamentally political arrangements of people and things, and in much of the world in the twenty-first century they are shaped by three intersecting variables, about which conventional urban debates say little.

First, cities are conditioned by shifting geopolitics, both in terms of regional dynamics of trade, movement, and diplomacy, and in terms of the diversifying forms of international finance on offer from donors and creditors. These financial flows, which wax and wane in response to geopolitical conditions, provide scope for bargaining and deal-making over big investments in and around capital cities, where national government priorities jostle with urban ones.

Second, cities are increasingly subject to dynamics of resurgent authoritarianism—but the nature and impact of this vary by country, given different levels of prior democratic institutional development and very different distributions of power among urban social groups. These differences affect how easily a governing regime can crush urban social opposition and repress city-level institutions.

Third, and related, cities are sites of intensifying quests for political legitimacy by governments seeking to build and hold urban power in the context of competing socioeconomic demands. In East Africa, postcolonial legitimacy to govern has often been sought among the rural majority. This is no longer adequate in the face of rapid urban growth. Urban legitimacy is now a central concern, even for authoritarian regimes—but precisely which urban groups matter most, and what needs to be done to court their support, will differ substantially by context.

Cities, then, are geopolitical hubs in which leaders and governing coalitions draw international flows into localised bargaining processes, in pursuit of (often authoritarian) urban power and legitimacy—not just globalising sites of ‘travelling’ urban planning visions and ideals of entrepreneurialism. The latter aspects of cities—which are no doubt important—have received much greater attention than the former. Politics and the Urban Frontier is part of an attempted rebalancing.

Urban analysis as political work

Focusing on East Africa, the book develops a detailed analytical framework to explain differences in urban trajectories between three cities that face many similar socioeconomic and demographic pressures, and similar flows of ideas and finance. It aims to explain why a sustained commitment to top-down master planning in Kigali, accompanied by a drive towards high-end, service sector-led development, contrasts so starkly with a long-term ‘anti-planning’ political culture in Kampala, where major urban infrastructures are fragmented and targeted as sources of private gain. Meanwhile, Addis Ababa has seen huge investments in the kinds of large-scale mass transport and housing projects that never get off the ground in the other two cities (though these have taken on a life of their own, due to widespread use by social groups for which they were not initially designed). These differences between the cities are above all rooted in power relations, rather than economic might or depoliticised notions of ‘state capacity’.

Why does all this matter? Aside from academic rebalancing, the book’s argument aims to enhance understanding of which kinds of urban investments will be taken seriously in a given context, and which may collide with political dynamics that cause them to founder or twist in radically new directions during implementation. Having the analytical tools to better understand the political agendas and conflicts that underpin urban policy in a given context is essential if the international push towards more inclusive urban futures is to produce results in actual urban places. But attention to the politics of urban development is not just about tailoring urban interventions to political contexts to enhance implementation. It’s also about recognizing how such interventions can either bolster or reconfigure existing power relations, and therefore influence politics itself in a particular place—for better or worse.

There is a lot at stake in Africa’s urban century. We have seen the forms of radical socio-economic polarisation, spatial segregation, and authoritarianism that are possible in cities across the world. Indeed, many African cities were created this way during colonialism, and have struggled to escape this shadow. As the continent becomes increasingly urban, it is vitally important that investors, donors, analysts, and advisors, who often see themselves as providing technical assistance, realise they are doing fundamentally political work—and not always in the ways they intend.

Featured image by Daggy J Ali via Unsplash (public domain).

Less-than-universal basic income

Ten years ago, almost no one in the United States had heard of Universal Basic Income (UBI). Today, chances are that the average college graduate has not only heard of the idea but holds a strong opinion about it. From Silicon Valley elites to futurists to policy wonks, UBI is igniting passions and dividing opinions across the political spectrum.

Much of the credit for this is due to Andrew Yang, whose 2020 presidential campaign took a centuries-old academic idea and transformed it into a focal point for conversations about poverty, inequality, and the future of work in an age of increasing technological automation.

Since then, the idea of UBI has taken off. The organization Mayors for a Guaranteed Income reports that it has sponsored almost 60 pilot programs in various cities across the United States, and the results of these pilots have been largely encouraging. In Stockton, California, a guaranteed income program not only reduced income volatility and mental anxiety, but significantly boosted full-time employment among recipients—by 12 percentage points, compared to a mere 5-point increase in the control group over the same period. A more recent experiment in St. Paul, Minnesota, showed similar increases in employment as well as improvements in housing and psychological wellbeing.

And yet, for all its popularity, the idea of UBI seems stuck at the level of a temporary experiment. No government has yet implemented UBI as a permanent, large-scale policy, and none seems likely to do so in the near future.

The UBI That Almost Was: Nixon’s Family Assistance Plan

After all, we’ve been here before. Back in the 1960s, a similar wave of enthusiasm for UBI (or “guaranteed income,” as it was called at the time) swept the United States. Milton Friedman popularized the idea in 1962 with his proposal for a Negative Income Tax. In 1968, a letter supporting a guaranteed income garnered over 1,000 signatures from academic economists and received front-page coverage in the New York Times. Finally, in 1969, Richard Nixon proposed his “Family Assistance Plan,” which would have provided a federally guaranteed income to families with children. Between growing bipartisan support and extreme public dissatisfaction with traditional welfare, it looked like the timing might be just right for the idea to actually become a reality.

Except, it didn’t. After months of struggle and compromises that left no one happy, the Family Assistance Plan ultimately failed to make it through Congress. The full story of its defeat is a complicated one, well-documented in Brian Steensland’s masterful book, The Failed Welfare Revolution. But, in essence, its failure came down to the same two objections that always bedevil guaranteed income programs: cost and fairness.

The Two Main Objections to UBI: Cost and Fairness

The issue of cost is a serious challenge for advocates of UBI. A grant of $1,000 per month would be close to enough to bring a single individual with no other income up to the poverty level. But a fully universal grant of $1,000 per month to all of the roughly 330 million people living in the United States would cost almost $4 trillion per year, more than half the entire federal budget for 2024! A smaller grant would cost less money, but the smaller the grant, the less effective it will be at lifting people out of poverty. Fiscal constraints thus create a dilemma that is difficult to escape.
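
The arithmetic behind that figure is simple enough to check. Here is a minimal back-of-envelope sketch in Python, using the rounded population figure from the text:

    # Back-of-envelope cost of a fully universal $1,000/month grant.
    MONTHLY_GRANT = 1_000          # dollars per person per month
    US_POPULATION = 330_000_000    # roughly 330 million people

    annual_cost = MONTHLY_GRANT * 12 * US_POPULATION
    print(f"Full UBI: ${annual_cost / 1e12:.2f} trillion per year")
    # Prints: Full UBI: $3.96 trillion per year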

The other problem is, if anything, even more difficult to manage. One of the defining features of UBI is its universality, meaning, in this context, that everyone is eligible to receive the grant, whether they are working or not. But it is precisely this universality that strikes many people as morally unfair. It’s one thing, the argument goes, to help people who are trying to support themselves but can’t. It’s quite another thing to declare that everybody is entitled to live off the federal dole, whether they’re able and willing to work or not. The old Victorian distinction between the deserving and the undeserving poor resonates deeply with a sizable majority of the American public, liberals and conservatives alike.

Of course, UBI advocates have responses to both objections. The cost of a UBI can be mitigated by imposing modest new taxes, consolidating existing welfare programs, or both. And claims of unfairness can be met by pointing out that just because full-time parents, artists, and caretakers aren’t working, this doesn’t make them free-riders. There are other ways of making a positive contribution to one’s community beyond participation in the paid labor market.

These responses are serious enough to merit more attention than I can devote to them here. But so far, at least, they have failed to persuade a majority of the American public. This doesn’t necessarily mean that they should give up. But it does suggest, perhaps, that it might be time to consider an alternative approach to UBI—one that avoids the main objections to the policy while retaining much of what makes it so attractive in the first place.

The Child Tax Credit as an Alternative to the UBI

We don’t have to stretch our imaginations to conceive of what such an alternative might look like. We’ve already tried it—at least briefly. In 2021, responding to the economic crisis caused by COVID-19, the United States made its Child Tax Credit (CTC) fully refundable. This meant that families whose income was too low to owe any taxes received cash payments from the government, the size of which depended on how many children they had. The results of this experiment were impressive. Child poverty fell to its lowest level on record—5.2%. When the expansion ended in 2022, child poverty more than doubled almost immediately, skyrocketing to 12.4%.

So far, efforts to make the expansion permanent have been unsuccessful. But its demonstrated success and relative popularity suggest that it might be a viable path forward for enacting a policy of large-scale, no-strings-attached cash transfers.

First, because the CTC is limited to families with dependent children, its scope is far narrower than a fully universal UBI. Only about 40% of US households have children under the age of 18, so even keeping the size of the grant constant, a CTC cuts the cost of UBI by more than half.
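
A rough sketch of that cost reduction, under the assumption (not made explicit in the post) that the same-sized grant is paid per child and that roughly 73 million Americans are under 18:

    # Same grant, paid only per child rather than per person.
    # US_CHILDREN is an illustrative assumption, not a figure from the post.
    MONTHLY_GRANT = 1_000
    US_POPULATION = 330_000_000
    US_CHILDREN = 73_000_000

    ubi_cost = MONTHLY_GRANT * 12 * US_POPULATION
    ctc_cost = MONTHLY_GRANT * 12 * US_CHILDREN
    print(f"CTC-style grant: ${ctc_cost / 1e12:.2f} trillion per year, "
          f"about {ctc_cost / ubi_cost:.0%} of the full UBI cost")
    # Prints: CTC-style grant: $0.88 trillion per year, about 22% of the full UBI cost

On these assumptions, the saving is considerably more than half.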

Second, and perhaps more importantly, because the CTC is focused on families with children, it is much less vulnerable to the kind of worries about unfairness that plague UBI. Even if you think that there’s something morally objectionable about able-bodied adults being dependent on government support, surely that objection doesn’t apply to children. Children aren’t responsible for putting themselves in poverty. And they aren’t capable of getting themselves out of it. If anyone deserves a helping hand, it is children.

As Josh McCabe has recently noted, other countries such as Canada, Australia, New Zealand, and the United Kingdom all have child tax credits that are at least partially refundable. The United States not only lacks such a policy, but spends less on cash transfers to children than any other country in the Organization for Economic Cooperation and Development (OECD). No surprise, then, that the US also has the highest post-tax, post-transfer child poverty rate of any country in the developed world.

For many of UBI’s supporters, its universality is one of its strongest appeals. And yet the objections about cost and fairness show that it might also be one of its greatest political liabilities. A permanent expansion of the Child Tax Credit has the potential to realize much of the promise of permanent, broad-based, unconditional cash transfers, while simultaneously avoiding the biggest pitfalls of UBI. In bridging ambition with practicality, expanding the Child Tax Credit could be the key to transforming the ideal of universal financial support into a sustainable reality.

Feature image by Andre Taissin via Unsplash.

How can business leaders add value with intuition in the age of AI? [Long Read]

In a speech to the Economic Club of Washington in 2018, Jeff Bezos described how Amazon made sense of the challenge of whether and how to design and implement a loyalty scheme for its customers. This was a highly consequential decision for the business; for some time, Amazon had been searching for an answer to the question: “what would a loyalty program for Amazon look like?”

A junior software engineer came up with the idea of fast, free shipping. But a big problem was that shipping is expensive. Customers also like free shipping, so much so that the big eaters at Amazon’s “buffet” would take advantage of it by ordering low-cost items with free shipping, which would not be good for Amazon’s bottom line. When the Amazon finance team modelled the idea of fast, free shipping, the results “didn’t look pretty.” In fact, they were nothing short of “horrifying.”

But Bezos was experienced enough to know that some of his best decisions had been made with “guts… not analysis.” In deciding whether to go with Amazon Prime, the analysts’ data could take the problem only so far towards being solved. Bezos decided to go with his gut. Prime was launched in 2005. It has become one of the world’s most popular subscription services, with over 100 million members who spend on average $1,400 per year, compared to $600 for non-Prime members.

As a seasoned executive and experienced entrepreneur Bezos sensed that the Prime idea could work. And in his speech he reminded his audience that “if you can make a decision with analysis, you should do so. But it turns out in life that your most important decisions are always made with instinct and intuition, taste, heart.”

The launch of Amazon Prime is a prime example of a CEO’s informed and intelligent use of intuition paying off in decision-making under uncertainty (where outcomes are unknown and their likelihood of occurrence cannot be estimated) rather than under risk (where outcomes are known and probabilities can be estimated). The customer loyalty problem for Amazon was uncertain because probabilities and consequences could not be known at the time. No amount of analysis could attach odds of success or failure to the fast, free shipping solution.

Under these uncertain circumstances Bezos chose to go with his gut. This is not an uncommon CEO predicament or response. In business, decision-makers often have to act “instinctively” even though they have no way of knowing what the outcome is likely to be. The world is becoming more, not less, uncertain, and “radical uncertainty” seems to have become the norm for strategic decision-making both in business and in politics. The informed and intelligent use of intuition on the part of those who have the nous and experience to be able to go with their gut is one way forward.

Human intuition meets AI

Turning to the uncertainties posed by artificial intelligence, and winding the clock back over half a century: in his book Clinical Versus Statistical Prediction (1954), the psychologist Paul Meehl compared how well the subjective predictions of trained clinicians such as physicians, psychologists, and counsellors fared against predictions based on simple statistical algorithms. To many people’s surprise, Meehl found that experts’ accuracy of prediction, for example trained counsellors’ predictions of college grades, was either matched or exceeded by the algorithm.

The decision-making landscape that Meehl studied all those years ago has been transformed radically by the technological revolutions of the “Information Age” (see Jay Liebowitz, Bursting the Big Data Bubble, 2014). Computers have exceeded immeasurably the human brain’s computational capacity. Big data, data analytics, machine learning, and artificial intelligence (AI) have been described as “the new oil” (see Eugene Sadler-Smith, “Researching Intuition: A Curious Passion” in Bursting the Big Data Bubble, 2014). They have opened up possibilities for outsourcing to machines many of the tasks that were until recently the exclusive preserve of humans. The influence of AI and machine learning is extending beyond relatively routine and sometimes mundane tasks such as cashiering in supermarkets. AI now figures prominently behind the scenes in things as diverse as social media feeds, the design of smart cars, and online advertising. It has extended its reach into complex professional areas such as medical diagnosis, investment banking, business consulting, script writing for advertisements, and management education (see Marcus du Sautoy, The Creativity Code, 2019).

There is nothing new in machines replacing humans: they did so in the mechanisations of the agricultural and industrial revolutions, when they replaced dirty and dangerous work; dull work and decision-making work might be next. Daniel Susskind, author of A World Without Work, thinks the current technological revolution is on a hitherto unheard-of scale. Robots and computers can now perform tasks at high speed, with high accuracy, and at a scale that is orders of magnitude beyond the capabilities of any human or previous technology. This is one reason this revolution is different, and why it has been referred to as nothing less than the “biggest event in human history” by Stuart Russell, founder of the Centre for Human-Compatible Artificial Intelligence at the University of California, Berkeley.

The widespread availability of data, along with cheap, scalable computational power and rapid, ongoing developments of new AI techniques such as machine learning and deep learning, has meant that AI has become a powerful tool in business management (see Gijs Overgoor, et al). For example, the financial services industry deals with high-stakes, complex problems involving large numbers of interacting variables. It has developed AI that can be used to identify cybercrime schemes such as money laundering, fraud, and ATM hacking. By using complex algorithms, the latest generation of AI can uncover fraudulent activity that is hidden amongst millions of innocent transactions and alert human analysts with easily digestible, traceable, and logged data, helping them decide, using intuition grounded in their “feet on the ground” experience, whether activity is suspicious and what action to take. This is just one example, and there are very few areas of business which are likely to be exempt from AI’s influence. Taking this to its ultimate conclusion, Elon Musk said at the recent UK “AI Safety Summit” held at Bletchley Park (where Alan Turing worked as a code breaker in World War II): “There will come a point where no job is needed—you can have a job if you want one for personal satisfaction but AI will do everything. It’s both good and bad—one of the challenges in the future will be how do we find meaning in life.”

Creativity and AI

Creativity is increasingly and vitally important in many aspects of business management. It is perhaps one area in which we might assume that humans will always have the edge. However, creative industries, such as advertising, are using AI for idea generation. The car manufacturer Lexus used IBM’s Watson AI to write the “world’s most intuitive car ad” for a new model, the strap line for which is “The new Lexus ES. Driven by intuition.” The aim was to use a computer to write the ad script for what Lexus claimed to be “the most intuitive car in the world.” To do so, Watson was programmed to analyse 15 years’ worth of award-winning footage from the prestigious Cannes Lions international award for creativity using its “visual recognition” (which uses deep learning to analyse images of scenes, objects, faces, and other visual content), “tone analyser” (which interprets emotions and communication style in text), and “personality insights” (which uses data to make inferences about consumers’ personalities) applications. Watson AI helped to “re-write car advertising” by identifying the core elements of award-winning content that was both “emotionally intelligent” and “entertaining.” Watson literally wrote the script outline. It was then used by the creative agency, producers, and directors to build an emotionally gripping advertisement.

Even though the Lexus-IBM collaboration reflects a breakthrough application of AI in the creative industries, IBM’s stated aim is not to attempt to “recreate the human mind but to inspire creativity and free-up time to spend thinking about the creative process.” The question of whether Watson’s advertisement is truly creative in the sense of being both novel and useful is open to question (it was based on rules derived from human works that were judged to be outstandingly creative by human judges at the Cannes festival). In a recent collaborative study between Harvard Business School and Boston Consulting Group, “humans plus AI” was found to produce superior results compared to “humans without AI” when used to generate ideas by following rules created by humans. However, “creativity makes new rules, rules do not make creativity” (to paraphrase the French composer Claude Debussy). The use of generative AI, which is rule-following rather than rule-making, is likely to result in “creative” outputs which are homogeneous and which may ultimately fail the test of true creativity, i.e. being both novel (in the actual sense of the word) and useful. Human creative intuition, on the other hand, adds value by:

  1. going beyond conventional design processes and rules
  2. drawing on human beings’ ability to think outside the box, produce innovative solutions
  3. sensing what will or won’t work
  4. yielding products and services that stand out in the market, capture the attention of consumers, and drive business success.

This list is based on suggestions offered by ChatGPT in response to the question: “how does creative intuition add value to organizations?”

Emotional intelligence and AI

Another area in which fourth-generation AI is making inroads is the emotional and interpersonal domain. The US-based start-up Luka has developed the artificially intelligent journaling chatbot “Replika,” which is designed to encourage people to “open-up and talk about their day.” Whilst Siri and Alexa are emotionally “cold” digital assistants, Replika is designed to be more like your “best friend.” It injects emotion into conversations and learns from the user’s questions and answers. It’s early days, and despite the hype, rigorous research is required to evaluate the claims being made on behalf of such applications.

The fact that computers are making inroads into areas that were once considered uniquely human is nothing new. Perhaps intuition is next. The roots of modern intuition research are in chess, an area of human expertise in which grandmasters intuit “the good move straight away.” Nobel laureate and one of the founding figures of AI, Herbert Simon, based his classic definition of intuition (“analyses frozen into habit and the capacity for rapid response through recognition”) on his research into expertise in chess. He estimated that grandmasters have stored on the order of 50,000 “familiar patterns” in their long-term memories, the recognition and recall of which enables them to play chess intuitively at the chess board.

In 1997, the chess establishment was astonished when IBM’s Deep Blue beat Russian chess grandmaster and world champion Garry Kasparov. Does this mean that IBM’s AI was able to out-intuit a human chess master? Kasparov thinks not. The strategy that Deep Blue used to beat Kasparov was fundamentally different from how another human being might have attempted to do so. Deep Blue did not beat Kasparov by replicating or mimicking his thinking processes. In Kasparov’s own words:

“instead of a computer that thought and played like a chess champion, with human creativity and intuition, they [the ‘AI crowd’] got one that played like a machine, systematically, evaluating 200 million chess moves on the chess board per second and winning with brute number-crunching force.”

Nobel laureate in physics Richard Feynman commented presciently in 1985 that it would be possible to develop a machine which can surpass nature’s abilities without imitating nature. If a computer ever becomes capable of out-intuiting a human, it is likely that the rules the computer relies on will be fundamentally different to those used by humans, and its mode of reasoning will be very different to that which evolved in the human organism over many hundreds of millennia (see Gerd Gigerenzer, Gut Feelings, 2007).

AI’s limitations

In spite of the current hype, AI can also be surprisingly ineffective. Significant problems with autonomous vehicles have been encountered and are well documented, as in the recent case which came to court in Arizona involving a fatality allegedly caused by an Uber self-driving car. In medical diagnosis, even though the freckle-analysing system developed at Stanford University does not replicate how doctors exercise their intuitive judgement through “gut feel” for skin diseases, it can nonetheless, through its prodigious number-crunching power, diagnose skin cancer without knowing anything at all about dermatology (see Daniel Susskind, A World Without Work, 2020). But as the eminent computer scientist Stuart Russell remarked, the deep learning that such AI systems rely on can be quite difficult to get right; for example, some of the “algorithms that have learned to recognise cancerous skin lesions, turn out to completely fail if you rotate the photograph by 45 degrees [which] doesn’t instil a lot of confidence in this technology.”

Is the balance of how we comprehend situations and take business decisions shifting inexorably away from humans and in favour of machines? Is “artificial intuition” inevitable, and will it herald the demise of “human intuition”? If an artificial intuition that can match a human’s is eventually realized, it will be one of the pivotal outcomes of the fourth industrial revolution―perhaps the ultimate form of AI.

ChatGPT appears to be “aware” of its own limitations in this regard. In response to the question “Dear ChatGPT: What happens when you intuit?” it replied:

“As a language model I don’t have the ability to intuit. I am a machine learning based algorithm that is designed to understand and generate human language. I can understand and process information provided to me, but I don’t have the ability to have intuition or feelings.”

More apocalyptically, could the creation of an artificial intuition be the “canary in the coalmine,” signalling the emergence of Vernor Vinge’s “technological singularity,” in which large computer networks and their users suddenly “wake up” as “superhumanly intelligent entities,” as Musk and others warn? Could such a development turn out to be a Frankenstein’s monster, with unknown but potentially negative, unintended consequences for its makers? The potential and the pitfalls of AI are firmly in the domain of the radically uncertain, and identifying the potential outcomes and how to manage them is likely to involve a judicious mix of rational analysis and informed intuition on the part of political and business leaders.

Human intuition, AI, and business management

Making any predictions about what computers will or will not be able to do in the future is a hostage to fortune. For the foreseeable future most managers will continue to rely on their own, rather than a computer’s, intuitive judgements when taking both day-to-day and strategic decisions. Therefore, until a viable “artificial intuition” arrives that is capable of out-intuiting a human, the more pressing and practical question is “what value does human intuition add in business?” The technological advancements of the information age have endowed machines with the hard skill of “solving,” which far outstrips this capability in the human mind. The evolved capacities of the intuitive mind have endowed managers with the arguably hard-to-automate, or perhaps even impossible-to-automate, soft skill of “sensing.” This is the essence of human intuition.

Perhaps the answer lies in an “Augmented Intelligence Model (AIM),” which marries gut instinct with data and analytics. Such a model might combine three elements:

  1. human analytical intelligence, which is adept in communicating, diagnosing, evaluating, interpreting, etc.
  2. human intuitive intelligence, which is adept in creating, empathising, feeling, judging, relating, sensing, etc.
  3. artificial intelligence, which is adept in analysing, correlating, optimising, predicting, recognizing, text-mining, etc.

The most interesting spaces in this model are in the overlaps between the three intelligences, for example when human analytical intelligence augments artificial intelligence in a chatbot with human intervention. Similar overlaps exist between human analytical and human intuitive intelligences, and between human intuitive intelligence and artificial intelligence. The richest space is where all three overlap: it is here that the most value stands to be added, by leveraging the combined strengths of human intuitive intelligence, human analytical intelligence, and artificial intelligence in an Augmented Intelligence Model which can drive success.

This blog post is adapted from Chapter 1 of Intuition in Business by Eugene Sadler-Smith.

Feature image by Tara Winstead via Pexels.

10 things direct reports must do to get the most out of their 1:1 meetings

1:1s are crucial in promoting positive outcomes such as increased employee engagement, higher retention rates, more innovation, and overall success for the team member, manager, and organization. A lot of focus is placed on the manager’s role in orchestrating 1:1s, where they are responsible for addressing direct reports’ practical and personal needs. However, it is also important to recognize that direct reports have agency in 1:1s and should play an active, not passive, role in the effectiveness of these meetings. When direct reports feel empowered to seek help, there are benefits to both the individual and organization.

As an employee, you need to take an active role in your 1:1s to get the most out of them. These 10 key behaviors are critical in making sure you are receiving the help that you need to grow in your career:

  1. Know what you need: be ready to discuss your own needs, hopes, and goals, not just what you think you should say to your manager.
  2. Be curious: do not just have a curious mindset, but also engage in curious behaviors such as asking questions, listening, and challenging yourself to discover new things.
  3. Build rapport: get to know your manager on a personal and professional level by learning about their interests.
  4. Actively engage: get the most out of your meeting by doing things like asking questions, expressing yourself, taking notes, and paying attention to non-verbal communication like maintaining good eye contact.
  5. Communicate well: strive to be clear, concise, focused, and honest, and pay attention to voice inflection and tone. For difficult conversations, consider practicing before bringing them to your manager.
  6. Problem solve: come to your 1:1 not only with your problems but also possible solutions. Be ready to constructively discuss counterarguments and differing viewpoints.
  7. Ask for help (constructively): seek assistance from your manager that encourages independent problem solving. This includes asking for recommendations or the help of others when your manager cannot assist you.
  8. Ask for feedback: ask specific questions that focus on receiving suggestions about future behaviors, such as “I want to improve at X, do you have any suggestions on how to get better at this?”
  9. Receive feedback well: show that you are appreciative of the feedback by thanking your manager and asking further questions about issues that were raised.
  10. Express gratitude: let your manager know you are grateful for their time and feedback.

Finally, as you proceed with these behaviors, it is important to keep in mind the science around asking for help. Namely, help-seeking behaviors have been categorized by social psychologists into two main types: autonomous help-seeking and dependent help-seeking.

Autonomous help-seeking can be understood as seeking information that enables individuals to be independent, accomplish tasks, and solve problems on their own. This tends to promote long-term independence—similar to the adage, “Give a person a fish and they’ll eat for a day; teach them to fish and they’ll eat for a lifetime.”

Dependent help-seeking, on the other hand, refers to searching for a “quick fix” and an “answer” from someone else. This style of help-seeking conserves time and effort and leads to immediate gratification, but typically doesn’t yield long-term self-sufficiency. Interestingly, job performance ratings have been shown to have a positive relationship with autonomous help-seeking, but a negative relationship with its counterpart—dependent help-seeking.

Bottom line: do your part in the 1:1 to maximize its value to you, and approach it as an opportunity to learn to be the best you can be, seeking meaningful insights that enable you to thrive and grow both short-term and long-term.

Featured image via Unsplash (public domain)

Five unexpected things about medical debt

Debt is a subject that so many of us dread. It is a drain not only on our wallets but also on our minds, leaving us with the sense that our lives and our freedom are being slowly drained away. The consequences of medical debt, which is held by 100 million Americans, are even worse. Medical debt causes people to forgo (or be denied) necessary medical care, harms people’s credit, and leaves their property (including their bank accounts and even their homes) vulnerable to foreclosure.

In Your Money or Your Life: Debt Collection in American Medicine, I detailed some odd particularities about medical debt. Here are five:

1. Your medical debt can land you in jail

Debtors’ prisons have been illegal in the United States for centuries, but Americans are still jailed because of medical debt. If hospitals and their debt collectors decide to sue a patient for medical debt and win a judgment, the patient might be summoned to court for a hearing to “discover” where their assets are located. If a patient fails to appear in court for an oral examination as part of this process of discovery, the creditor can request that the court issue a “body attachment” directing the sheriff to arrest the debtor. Technically, the debtor is being arrested for contempt of court, not the debt itself, but the debt is the fundamental cause of the arrest.

The American Civil Liberties Union found that in 2020, 44 states still allowed the arrest of a debtor for failing to appear in court or failing to provide information to creditors after a judgment against them. Medical debtors are frequent victims of this practice. In Idaho, for instance, Medical Recovery Services LLC sought and obtained the arrest of more debtors than any other collector in the state between 2010 and 2016. In one case, a judge set bail at more than twice the amount of the debt. In Utah, after failing to appear for a hearing over unpaid debt for an ambulance ride, Rex Iverson was arrested and brought to the county jail. Later that day, when the police went to check the holding cell, they found Iverson dead; a subsequent investigation determined that Iverson had committed suicide by strychnine poisoning.

2. Hospitals can wipe away your bill before you see the doctor

To receive a tax exemption, every nonprofit hospital in the United States must have a financial assistance policy. These policies specify income qualifications for free and discounted care. But even if patients qualify for this “charity care,” they are usually made to complete onerous and detailed applications. There is, however, an easier way. Hospitals can use existing “presumptive eligibility” software to determine, at the point of care, whether a patient is likely to qualify for financial assistance. This is relatively easy to establish; for instance, if a patient is enrolled in food stamps, they meet eligibility criteria at most hospitals.
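
In code terms, the screening logic behind such software can be as simple as the following hypothetical sketch (the program names and income threshold are illustrative assumptions, not the rules of any actual hospital or vendor):

    # Hypothetical sketch of a point-of-care presumptive-eligibility screen.
    MEANS_TESTED_PROGRAMS = {"SNAP", "Medicaid", "TANF", "SSI"}

    def presumptively_eligible(enrolled_programs, annual_income, poverty_line,
                               multiplier=2.0):
        """Flag a patient for charity care before any bill is issued."""
        # Enrollment in a means-tested program (e.g. food stamps/SNAP) is
        # treated as a strong proxy for meeting financial-assistance criteria.
        if MEANS_TESTED_PROGRAMS & set(enrolled_programs):
            return True
        # Otherwise compare income to a multiple of the poverty line, a
        # common structure for hospital financial-assistance policies.
        return annual_income <= multiplier * poverty_line

    print(presumptively_eligible({"SNAP"}, 28_000, 15_000))  # True
    print(presumptively_eligible(set(), 50_000, 15_000))     # False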

At Oregon Health & Science University in Portland, Oregon, the billing department started using this software in 2017. Since instituting this process, patients have been offered free care earlier, and the billing department no longer has to deal with as many incomplete applications.

Still, this effort to bring some efficiency and equity to billing is not yet common practice. Hospitals must pay every time they use proprietary software to screen a patient. Most hospitals would rather not bother with the expense, so they leave the onus on patients to apply for financial assistance.

3. Medical debt is bought, sold, and collected by some of America’s wealthiest and most powerful people

The medical debt collection industry includes some of the world’s richest and most powerful people. In 2014, medical debt collector Transworld Systems was sold to a private equity firm called Platinum Equity, headquartered in Beverly Hills and owned by billionaire Tom Gores. As of July 2022, Gores was the 424th richest person in the world, just behind Twitter founder Jack Dorsey and Star Wars creator George Lucas. Gores is best known as owner of the NBA’s Detroit Pistons and as a philanthropist. He is a member of the Board of Directors of the UCLA Medical Center, and a donor to Children’s Hospital Los Angeles and to various causes in Detroit and Flint, Michigan, where he grew up. He is a giant of industry, and a fixture in the civic life of two great American cities.

Under Gores’ stewardship, Transworld Systems Inc. was not nearly as charitable as his public image. By 2017, it was the company with the most medical debt collection complaints in the CFPB’s database. One person in Georgia claimed that TSI had called a friend to find him (which is allowed), but during that call had said it was in regard to a medical debt that he owed (which is not). A resident of Illinois complained that a negative action had been filed on his credit report by TSI for a medical debt that he had never heard of and was sure he did not owe. He said he had tried to call TSI numerous times to settle the matter but could never get anyone on the other end of the line. Another complainant, in Missouri, claimed that TSI called his work cell phone so often, despite his pleas to be contacted at home instead, that his supervisor became annoyed and passed him over for a pay raise.

4. Aggressive debt collection earns hospitals very little money

Not all hospitals are in financial trouble. In 2019, America’s hospitals recorded their highest average profit margin ever, at 6.7%. And while many hospitals struggled during the early days of the COVID pandemic, massive federal support led them to finish the year with similar profit margins as in 2019. Of course, there are many hospitals that do not operate with such comfortable margins; many struggle to stay afloat, and each year, some close.

But hounding patients who cannot afford to pay does precious little to help. TransUnion Healthcare reports that in 2016, 68% of hospital bills under $500 were not paid in full. Heftier bills were even less likely to be paid; 99% of hospital bills over $3,000 were not paid in full in 2016. This meager repayment is the reason why, when hospitals sell their debt to outside buyers, they receive mere cents on the dollar. Most patients in arrears just don’t have the money to pay without risking their financial health, a problem that has given rise to a saying long in use among hospital administrators: “self-pay equals no pay.”

Suing patients does not meaningfully contribute to a hospital’s financial well-being. A study of Virginia hospitals that garnished the wages of patients found that they collected, on average, 0.1% of hospital revenue through this practice. Even the hospital that sued the most patients in the state, Mary Washington Hospital in Fredericksburg, gained only 0.2% of its revenue from wage garnishments. And most often it is not the struggling safety-net or rural access hospital filling the court dockets. Institutions that pursue patients as aggressively as possible frequently have comfortable operating margins and very well-paid executives.

5. Widows and widowers are being held responsible for repaying the debts of their late spouses

This strange fact is the result of a tangled legal history. The law of every state includes a “doctrine of necessaries,” which makes parents liable for the support of their children; they must, in other words, provide what is “necessary for the health and well-being” of their children. In some states, however, this doctrine is also held to make spouses liable for the financial support of one another. This does not exist everywhere: in some states, such as Georgia, the spousal doctrine of necessaries has been repealed by the legislature, while in others, such as Florida, it has been ruled unconstitutional by the courts. But in states where the spousal doctrine remains, hospitals have used it to sue spouses when the patient has died before paying their bills. In fact, medical debt is the predominant kind of debt for which the doctrine of necessaries is invoked.

The doctrine of necessaries is a relic of early English common law in which women had no right to own property or assume debts independent of their husbands, so husbands were deemed to have an obligation to pay the necessary expenses of their wives. In an ironic interpretive move, given the burden it places on widows and widowers, the very fact that medical care is so necessary is what makes it so easy for hospitals to invoke the doctrine in court.

Widows have even been made to do manual labor to pay off the debts of their late spouses. Grieving the recent death of her husband after a prolonged hospitalization, Ms. Wilson faced medical debt she could never hope to repay on her fixed income. In 1995, Danville Regional Medical Center in southern Virginia gave her the option of entering a “Service-Credit” program, in which patients owing between $300 and $20,000 were put to work typing, filing, housekeeping, doing lawn care, and working in the print shop. For their labor, they earned $5 per hour toward settling their debts. “Net pay is applied directly to the bill, so no cash changes hands,” explained a rather upbeat front-page article in the Richmond Times-Dispatch. Nowhere in the article was the possibility mentioned that the medical center, a non-profit, might simply write off the unpaid debts of low-income patients as charity care. The journalist called the program a “working cure,” without a hint of irony.

The road ahead

The history recounted in Your Money or Your Life shows how medical debt transformed from a personal negotiation between doctor and patient into an impersonal financial instrument, bought and sold by people with no role in patient care and no social bonds to patients. Hospitals and their debt collectors have become aggressive in pursuing debts, threatening patients’ physical and financial health. There have long been people working to alleviate the suffering caused by medical debt, as well as abolitionists who aim for an America where such debt is a thing of the past. By better understanding how medical debt came to be so pervasive and so harmful to patients, we might help that day come sooner.

Featured image by camilo jimenez on Unsplash (public domain)
