Showing posts with label innovation.

Saturday, February 18, 2023

Hedgehog Innovation

According to Archilochus, the fox knows many things, but a hedgehog knows one big thing.

In his article on AI and the threat to middle-class jobs, Larry Elliott focuses on machine learning and robotics.

AI stands to be to the fourth industrial revolution what the spinning jenny and the steam engine were to the first in the 18th century: a transformative technology that will fundamentally reshape economies.

When people write about earlier waves of technological innovation, they often focus on one technology in particular - for example a cluster of innovations associated with the adoption of electrification in a wide range of industrial contexts.

While AI may be an important component of the fourth industrial revolution, it is usually framed as an enabler rather than the primary source of transformation. Furthermore, much of the Industry 4.0 agenda is directed at physical processes in agriculture, manufacturing and logistics, rather than clerical and knowledge work. It tends to be framed as many intersecting innovations rather than one big thing.

There is also a question about the pace of technological change. Elliott notes a large increase in the number of AI patents, but as I've noted previously, I don't regard patent activity as a reliable indicator of innovation. The primary purpose of a patent is not to enable the inventor to exploit something; it is to prevent anyone else from freely exploiting it. And Ezrachi and Stucke provide evidence of other ways in which tech companies stifle innovation.

However, the AI Index Report does contain other measures of AI innovation that are more convincing.


 AI Index Report (Stanford University, March 2022)

Larry Elliott, The AI industrial revolution puts middle-class workers under threat this time (Guardian, 18 February 2023)

Ariel Ezrachi and Maurice Stucke, How Big-Tech Barons Smash Innovation and How to Strike Back (New York: Harper, 2022)

Wikipedia: Fourth Industrial Revolution, The Hedgehog and the Fox

Related Posts: Evolution or Revolution (May 2006), It's Not All About (July 2008), Hedgehog Politics (October 2008), The New Economics of Manufacturing (November 2015), What does a patent say? (February 2023)

Monday, June 29, 2020

Bold, Restless Experimentation

In his latest speech, invoking the spirit of Franklin Delano Roosevelt, Michael Gove calls for bold, restless experimentation.

Although one of Gove's best known pronouncements was his statement during the Brexit campaign that people in this country have had enough of experts ..., Fraser Nelson suggests he never intended this to refer to all experts: he was interrupted before he could specify which experts he meant.

Many of those who share Gove's enthusiasm for disruptive innovation also share his ambivalence about expertise. Joe McKendrick quotes Vala Afshar of DisrupTV: If the problem is unsolved, it means there are no experts.

Joe also quotes Michael Sikorsky of Robots and Pencils, who links talent, speed of decision and judgement, and talks about pushing as much of the decision rights as possible right to the edge of the organization. Meanwhile, Michael Gove also talks about diversifying the talent pool - not only a diversity of views but also a diversity of skills.

In some quarters, expertise means centralized intelligence - for example, clever people in Head Office. The problems with this model were identified by Harold Wilensky in his 1967 book on Organizational Intelligence, and explored more rigorously by David Alberts and his colleagues in CCRP, especially under the Power To The Edge banner.

Expertise also implies authority and permission; so rebellion against expertise can also take the form of permissionless innovation. Adam Thierer talks about the tinkering and continuous exploration that takes place at multiple levels, while Bernard Stiegler talks about disinhibition - a relaxation of constraints leading to systematic risk-taking.
 
Elevating individual talent over collective expertise is a risky enterprise. Malcolm Gladwell calls this the Talent Myth, while Stiegler calls it Madness. For further discussion and links, see my post Explaining Enron.




Michael Gove, The Privilege of Public Service (Ditchley Annual Lecture, 27 June 2020)


Henry Mance, Britain has had enough of experts, says Gove (Financial Times, 3 June 2016)

Fraser Nelson, Don't ask the experts (Spectator, 14 January 2017)

Bernard Stiegler, The Age of Disruption: Technology and Madness in Computational Capitalism (Polity Press, 2019). Review by John Reader (Postdigital Science and Education, 2019).

Adam Thierer, Permissionless Innovation (Mercatus Center, 2014/2016)


Related posts: Demise of the Superstar (August 2004), Power to the Edge (December 2005), Explaining Enron (January 2010), Enemies of Intelligence (May 2010), The Ethics of Disruption (August 2019)

Wednesday, October 30, 2019

What Difference Does Technology Make?

In his book on policy-making, Geoffrey Vickers talks about three related types of judgment – reality judgment (what is going on, also called appreciation or sense-making), value judgment and action judgment.

In his book on technology ethics, Hans Jonas notes "the excess of our power to act over our power to foresee and our power to evaluate and to judge" (p22). In other words, technology disrupts the balance between the three types of judgment identified by Vickers.

Jonas (p23) identifies some critical differences between technological action and earlier forms of action:
  • novelty of its methods
  • unprecedented nature of some of its objects
  • sheer magnitude of most of its enterprises
  • indefinitely cumulative propagation of its effects
In short, this amounts to action at a distance - the effects of one's actions and decisions reach further and deeper, affecting remote areas more quickly, and lasting long into the future. Which means that accepting responsibility only for the immediate and local effects of one's actions can no longer be justified.

Jonas also notes that the speed of technologically fed developments does not leave itself the time for self-correction (p32). An essential ethical difference between natural selection, selective breeding and genetic engineering is not just that they involve different mechanisms, but that they operate on different timescales.

(Of course humans have often foolishly disrupted natural ecosystems without recourse to technologies more sophisticated than boats. For example, the introduction of rabbits into Australia or starlings into North America. But technology creates many new opportunities for large-scale disruption.)

Another disruptive effect of technology is that it affects our reality judgments. Our knowledge and understanding of what is going on (WIGO) is rarely direct, but is mediated (screened) by technology and systems. We get an increasing amount of our information about our social world through technical media - information systems and dashboards, email, telephone, television, internet, social media - and these systems in turn rely on data collected by a wide range of monitoring instruments, including IoT. These technologies screen information for us, and screen information from us.

The screen here is both literal and metaphorical. It is a surface on which the data are presented, and also a filter that controls what the user sees. The screen is a two-sided device: it both reveals information and hides information.

Heidegger thought that technology tends to constrain or impoverish the human experience of reality in specific ways. Albert Borgmann argued that technological progress tends to increase the availability of a commodity or service, and at the same time pushes the actual device or mechanism into the background. Thus technology is either seen as a cluster of devices, or it isn't seen at all. Borgmann calls this the Device Paradigm.

But there is a paradox here. On the one hand, the device encourages us to pay attention to the immediate affordance of the device, and to ignore the systems that support the device. So we happily consume recommendations from media and technology giants, without looking too closely at the surveillance systems and vast quantities of personal data that feed into these recommendations. But on the other hand, technology (big data, IoT, wearables) gives us the power to pay attention to vast areas of life that were previously hidden.

In agriculture for example, technology allows the farmer to have an incredibly detailed map of each field, showing how the yield varies from one square metre to the next. Or to monitor every animal electronically for physical and mental wellbeing.

And not only farm animals, also ourselves. As I said in my post on the Internet of Underthings, we are now encouraged to account for everything we do: footsteps, heartbeats, posture. (Until recently this kind of micro-attention to oneself was regarded as slightly obsessional; nowadays it seems to be perfectly normal.)

Technology also allows much more fine-grained action. A farmer no longer has to give the same feed to all the cows every day, but can adjust the composition of the feed for each individual cow, to maximize her general well-being as well as her milk production.

In the 1980s when Borgmann and Jonas were writing, there was a growing gap between the power to act and the power to foresee. We now have technologies that may go some way towards closing this gap. Although these technologies are far from perfect, as well as introducing other ethical issues, they should at least make it easier for the effects of new technologies to be predicted, monitored and controlled, and for feedback and learning loops to be faster and more effective. And responsible innovation should take advantage of this.




Albert Borgmann, Technology and the Character of Everyday Life (University of Chicago Press, 1984)

Hans Jonas, The Imperative of Responsibility (University of Chicago Press, 1984)

Geoffrey Vickers, The Art of Judgment: A Study in Policy-Making (Sage 1965)


Wikipedia: Rabbits in Australia, Starlings in North America

Saturday, August 31, 2019

The Ethics of Disruption

In a recent commentary on #Brexit, Simon Jenkins notes that
"disruption theory is much in vogue in management schools, so long as someone else suffers".

Here is Bruno Latour making the same point.
"Don't be fooled for a second by those who preach the call of wide-open spaces, of  'risk-taking', those who abandon all protection and continue to point at the infinite horizon of modernization for all. Those good apostles take risks only if their own comfort is guaranteed. Instead of listening to what they are saying about what lies ahead, look instead at what lies behind them: you'll see the gleam of the carefully folded golden parachutes, of everything that ensures them against the random hazards of existence." (Down to Earth, p 11)

Anyone who advocates "moving fast and breaking things" is taking an ethical position: namely that anything fragile enough to break deserves to be broken. This position is similar to the economic view that companies and industries that can't compete should be allowed to fail.

This position may be based on a combination of specific perceptions and general observations. The specific perception is that when something is weak or fragile, protecting and preserving it consumes effort and resources that could otherwise be devoted to more worthwhile purposes, and makes other things less efficient and effective. The general observation is that when something is failing, efforts to protect and preserve it may merely delay the inevitable collapse.

These perceptions and observations rely on a particular worldview or lens, in which things can be perceived as successful or otherwise, independent of other things. As Gregory Bateson once remarked (via Tim Parks),
"There are times when I catch myself believing there is something which is separate from something else."
Perceptions of success and failure are also dependent on timescale and time horizon. The dinosaurs ruled the Earth for 140 million years.

There may also be strong opinions about which things get protection and which don't. For example, some people may think it is more important to support agriculture or to rescue failing banks than to protect manufacturers. On the other hand, there will always be people who disagree with the choices made by governments on such matters, and who will conclude that the whole project of protecting some industry sectors (and not others) is morally compromised.

Furthermore, the idea that some things are "too big to fail" may also be problematic, because it implies that small things don't matter so much.

A common agenda of the disruptors is to tear down perceived barriers, such as regulations. This is subject to a fallacy known as Chesterton's Fence: assuming that anything whose purpose is not immediately obvious must be redundant.




Simon Jenkins, Boris Johnson and Jeremy Hunt will have to ditch no deal – or face an election (Guardian, 28 June 2019)

Bruno Latour, Down to Earth: Politics in the New Climatic Regime (Polity Press, 2018)

Tim Parks, Impossible Choices (Aeon, 15 July 2019)

Rory Sutherland, Chesterton’s fence – and the idiots who rip it out (Spectator, 10 September 2016)


Related posts: Shifting Paradigms and Disruptive Technology (September 2008), Arguments from Nature (December 2010), Low-Hanging Fruit (August 2019)

Thursday, August 22, 2019

Low-Hanging Fruit

August comes around again, and there are ripe blackberries in the hedgerows. One of the things I was taught at an early age was to avoid picking berries that were low enough to be urinated on by animals. (Or humans for that matter.) So I have always regarded the "low hanging fruit" metaphor with some distaste.

In business, "low hanging fruit" sometimes refers to an easy and quick improvement that nobody has previously spotted.

Which is of course perfectly possible. A new perspective can often reveal new opportunities.

But often the so-called low hanging fruit were already obvious, so pointing them out just makes you sound as if you think you are smarter than everyone else. And if they haven't already been harvested, there may be something you don't know about. (The fallacy of eliminating things whose purpose you don't understand is known as Chesterton's Fence.)

And another thing about picking soft fruit. Fruit are not placed at random: each plant has a characteristic pattern. Many plants place the leaves above the fruit, so you can often see more fruit when you look upwards from below. If you get into the habit of looking downwards for the low-hanging stuff, you will simply not see how much more bounty the plant has to offer.

A lot of best practices and checklists are based on the simple and obvious. Which is fine as far as it goes, but not very innovative, and won't take you from Best Practice to Next Practice.

So as I pointed out in my post on the Wisdom of the Iron Age, nobody should ever be satisfied with the low hanging fruit. Even if the low-hanging fruit hasn't already been pissed upon, its only value should be to get us started, to feed us and motivate us as we build ladders, so we can reach the high-hanging fruit.





Rory Sutherland, Chesterton’s fence – and the idiots who rip it out (Spectator, 10 September 2016)

Wikipedia: Chesterton's Fence

Updated 1 September 2019

Sunday, March 3, 2019

Ethics and Uncertainty

How much knowledge is required, in order to make a proper ethical judgement?

Assuming that consequences matter, it would obviously be useful to be able to reason about the consequences. This is typically a combination of inductive reasoning (what has happened when people have done this kind of thing in the past) and predictive reasoning (what is likely to happen when I do this in the future).

There are several difficulties here. The first is the problem of induction - to what extent can we expect the past to be a guide to the future, and how relevant is the available evidence to the current problem? The evidence doesn't speak for itself; it has to be interpreted.

For example, when Stephen Jay Gould was informed that he had a rare cancer of the abdomen, the medical literature indicated that the median survival for this type of cancer was only eight months. However, his statistical analysis of the range of possible outcomes led him to the conclusion that he had a good chance of finding himself at the favourable end of the range, and in fact he lived for another twenty years until an unrelated cancer got him.
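Gould's reasoning can be illustrated with a toy simulation - the figures below are invented for illustration, not taken from his case. A right-skewed survival distribution can have a median of eight months while still leaving a realistic chance of surviving for many years, which is why the median alone supports no fatalistic conclusion.

```python
import numpy as np

# Minimal sketch with invented numbers: a lognormal survival distribution
# whose median is eight months, but whose right tail stretches out for years.
rng = np.random.default_rng(42)

median_months = 8.0
sigma = 1.2                     # assumed spread of the right tail
mu = np.log(median_months)      # lognormal median = exp(mu)

survival = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

print(f"median survival: {np.median(survival):5.1f} months")
print(f"mean survival:   {survival.mean():5.1f} months")
print(f"90th percentile: {np.percentile(survival, 90):5.1f} months")
print(f"P(survive > 5 years) = {(survival > 60).mean():.2%}")
# The median is unchanged, yet a non-trivial fraction of patients
# sit far out on the favourable end of the distribution.
```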

The second difficulty is that we don't know enough. We are innovating faster than we can research the effects. And longer term consequences are harder to predict than short-term consequences: even if we assume an unchanging environment, we usually don't have as much hard data about longer-term consequences.

For example, a clinical trial of a drug may tell us what happens when people take the drug for six months. But it will take a lot longer before we have a clear picture of what happens when people continue to take the drug for the rest of their lives. Especially when taken alongside other drugs.

This might suggest that we should be more cautious about actions with long-term consequences. But that is certainly not an excuse for inaction or procrastination. One tactic of Climate Sceptics is to argue that the smallest inaccuracy in any scientific projection of climate change invalidates both the truth of climate science and the need for action. But that's not the point. Gould's abdominal cancer didn't kill him - but only because he took action to improve his prognosis. Alexandria Ocasio-Cortez has recently started using the term Climate Delayers for those who find excuses for delaying action on climate change.

The third difficulty is that knowledge itself comes packaged in various disciplines or discourses. Medical ethics is dependent upon specialist medical knowledge, and technology ethics is dependent upon specialist technical knowledge. However, it would be wrong to judge ethical issues exclusively on the basis of this technical knowledge, and other kinds of knowledge (social, cultural or whatever) must also be given a voice. This probably entails some degree of cognitive diversity. Will Crouch also points out the uncertainty of predicting the values and preferences of future stakeholders.

The fourth difficulty is that there could always be more knowledge. This raises the question as to whether it is responsible to go ahead on the basis of our current knowledge, and how we can build in mechanisms to make future changes when more knowledge becomes available. Research may sometimes be a moral duty, as Tannert et al argue, but it cannot be an infinite duty.

The question of adequacy of knowledge is itself an ethical question. One of the classic examples in Moral Philosophy concerns a ship owner who sends a ship to sea without bothering to check whether the ship was sea-worthy. Some might argue that the ship owner cannot be held responsible for the deaths of the sailors, because he didn't actually know that the ship would sink. However, most people would see the ship owner as having a moral duty of diligence, and would regard him as accountable for neglecting this duty.

But how can we know if we have enough knowledge? This raises the question of the "known unknowns" and "unknown unknowns", which is sometimes used with a shrug to imply that no one can be held responsible for the unknown unknowns.

(And who is we? J. Nathan Matias argues that the obligation to experiment is not limited to the creators of an artefact, but may extend to other interested parties.)

The French psychoanalyst Jacques Lacan was interested in the opposition between impulsiveness and procrastination, and talks about three phases of decision-making: the instant of seeing (recognizing that some situation exists that calls for a decision), the time for understanding (assembling and analysing the options), and the moment to conclude (the final choice).

The purpose of Responsibility by Design is not just to prevent bad or dangerous consequences, but to promote good and socially useful consequences. The result of applying Responsibility by Design should not be reduced innovation, but better and more responsible innovation. The time for understanding should not drag on forever; there should always be a moment to conclude.




Matthew Cantor, Could 'climate delayer' become the political epithet of our times? (The Guardian, 1 March 2019)

Will Crouch, Practical Ethics Given Moral Uncertainty (Oxford University, 30 January 2012)

Stephen Jay Gould, The Median Isn't the Message (Discover 6, June 1985) pp 40–42

J. Nathan Matias, The Obligation To Experiment (Medium, 12 December 2016)

Alex Matthews-King, Humanity producing potentially harmful chemicals faster than they can test their effects, experts warn (Independent, 27 February 2019)

Christof Tannert, Horst-Dietrich Elvers and Burkhard Jandrig, The ethics of uncertainty. In the light of possible dangers, research becomes a moral duty (EMBO Rep. 8(10) October 2007) pp 892–896

Stanford Encyclopedia of Philosophy: Consequentialism, The Problem of Induction

Wikipedia: There are known knowns 

The ship-owner example can be found in an essay called "The Ethics of Belief" (1877) by W.K. Clifford, in which he states that "it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence".

I describe Lacan's model of time in my book on Organizational Intelligence (Leanpub 2012)

Related posts: Ethics and Intelligence (April 2010), Practical Ethics (June 2018), Big Data and Organizational Intelligence (November 2018)

updated 11 March 2019

Friday, March 9, 2018

Fail Fast - Burger Robotics

As @jjvincent observes, integrating robots into human jobs is tougher than it looks. Four days after it was installed in a Pasadena CA burger joint, Flippy the robot has been taken out of service for an upgrade. Turns out it wasn't fast enough to handle the demand. Does this count as Fail Fast?

Flippy's human minders have put a positive spin on the failure, crediting the presence of the robot for an unexpected increase in demand. As Vincent wryly suggests, Flippy is primarily earning its keep as a visitor attraction.

If this is a failure at all, what kind of failure is it? Drawing on earlier work by James Reason, Phil Boxer distinguishes between errors of intention, planning and execution.

If the intention for the robot is to improve productivity and throughput at peak periods, then the designers have got more work to do. And the productivity-throughput problem may be broader than just burger flipping: making Flippy faster may simply expose a bottleneck somewhere else in the system. But if the intention for the robot is to attract customers, this is of greatest value at off-peak periods. In which case, perhaps the robot already works perfectly.



Philip Boxer, ‘Unintentional’ errors and unconscious valencies (Asymmetric Leadership, 1 May 2008)

John Donohue, Fail Fast, Fail Often, Fail Everywhere (New Yorker, 31 May 2015)

Lora Kolodny, Meet Flippy, a burger-grilling robot from Miso Robotics and CaliBurger (TechCrunch 7 Mar 2017)

Brian Heater, Flippy, the robot hamburger chef, goes to work (TechCrunch, 5 March 2018)

James Vincent, Burger-flipping robot takes four-day break immediately after landing new job (Verge, 8 March 2018)





Related post Fail Fast - Why did the chicken cross the road? (March 2018)

Friday, July 15, 2016

Boughing to the Inevitable

What is the best time to plant a tree?

A popular answer to this question is that the best time to plant a tree is twenty years ago, and the second-best time is now.

This is often claimed to be an ancient Chinese proverb. Or an African proverb. It is unlikely to be either of these.

And obviously we are not supposed to take this proverb literally. Because if the best time was twenty years ago, the second-best time would be nineteen years ago.

But instead of interpreting this logically, we are presumably supposed to interpret it as a motivational statement. Don't waste time regretting that you didn't plant a tree twenty years ago, act now to make sure you don't have similar regrets in twenty years' time. (Do real Chinese proverbs do motivational statements? I suspect not.)

In his new book, The Inevitable, Kevin Kelly talks about the opportunities for internet entrepreneurs thirty years ago. "Can you imagine how awesome it would have been to be an ambitious entrepreneur back in 1985 at the dawn of the internet?"

He then looks forward to the middle of the century. "If we could climb into a time machine, journey 30 years into the future, and from that vantage look back to today, we’d realize that most of the greatest products running the lives of citizens in 2050 were not invented until after 2016."

In other words, for an internet start-up the second-best time is now.



By the way, I'm not the first person to use the pun about 'boughing' to the inevitable. For example, @rcolvile used it in the context of ash dieback. "Half the trees in the country were going to be torn down. He'd already had to veto a particularly insensitive press release describing him as 'ashen-faced' about the situation, but 'boughing to the inevitable'." Meanwhile, Google is asking me if I meant 'coughing to the inevitable'. Thanks Google, it's always useful to spot something you haven't yet mastered.


KK.org, The Inevitable
Kevin Kelly, The Internet Is Still at the Beginning of Its Beginning (Huffington Post, 6 June 2016)

On The Best Time to Plant a Tree (Reddit)

Robert Colvile, Friends: The One with the Guy in a Yellow Tie (Telegraph, 3 November 2012)



Wednesday, January 30, 2013

Real Criticism, The Subject Supposed to Know

"Goodbye, Anecdotes", says @Butterworthy, "The Age Of Big Data Demands Real Criticism" (AWL, January 2013). Thanks to @milouness, who comments "Important concepts here about what is knowable!".  The article tries to link Big Data with Big Questions about the Big Picture, and what @Butterworthy calls The Big Criticism. From this perspective, Bill Franks' advice, To Succeed with Big Data, Start Small (HBR Oct 2012), is downright paradoxical.

But why would we expect Big Data to help us answer the Big Questions? Big Data is rather a misnomer: it mostly comprises very large quantities of very small data and very weak signals. Retailers wade through Big Data in order to fine-tune their pricing strategies; pharma researchers wade through Big Data in order to find chemicals with a marginal advantage over some other chemicals; intelligence analysts wade through Big Data to detect terrorist plots. Doubtless these are useful and sometimes profitable exercises, but they are hardly giving us much of a Big Picture. Big Data may give us important clues about what the terrorists are up to, but it doesn't tell us why.

A few years ago, Chris Anderson promoted The End of Theory, and published an article claiming that The Data Deluge Makes the Scientific Method Obsolete (Wired June 2008), although this may have only been an ironic reference to Fukuyama's earlier idea of The End of History. Claiming obsolescence seems like hyperbole, although scientific method has always been modified by technological progress. Even in mathematics, computer power and human brilliance have combined to crack some previously unsolved problems. See for example, Proof and Beauty (The Economist, March 2005).

Although @Butterworthy claims to have identified some critical ("Big Critical") questions, there seems to be only one real question - the dialectical question of quantity becoming quality. Are we on the cusp of aggregating utilitarianism into new tyrannies of scale? Are the numbers so big that they leave interpretation behind and acquire their own agency? How much information and of what kind would you need to conclude something - for example, something like gender bias in the media?

A recent academic study looked at 2.4 million pages of newspaper and came to the conclusion that there was some gender bias. That's a lot of newspaper. It's like examining every single grain of sand in the forest for traces of ursine faeces. (In other words, looking for microscopic proof that bears defecate in the woods.) From a technophile perspective, Big Data seems to be raising the bar for scientific methodology: following this impressive piece of research, those who don't understand the concept of statistical significance can dismiss any smaller study - for example, one that merely studied thousands of pages - as unscientific anecdote. At a stroke, decades of careful analysis by feminists can be discredited because their sample sizes were too small by modern Big Data standards, and so there is now less scientifically credible evidence of gender bias than there was before.
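To see why the sheer page count adds so little, here is a rough sketch (the numbers are made up for illustration, not taken from the actual study): even a modest sample showing a modest imbalance already rejects the hypothesis of no bias with overwhelming confidence.

```python
from math import sqrt, erfc

# Invented illustration: suppose a sample of 3,000 people quoted in news
# stories contains 1,650 men (55%). How strongly does that alone contradict
# the null hypothesis of a 50/50 split?
n, k = 3000, 1650
p_hat, p0 = k / n, 0.5

# one-sample z-test for a proportion
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
p_value = 0.5 * erfc(z / sqrt(2))   # one-sided tail probability

print(f"observed share of men: {p_hat:.0%}")
print(f"z = {z:.1f}, one-sided p-value ≈ {p_value:.1e}")
# z is already above 5 with a few thousand observations, so 2.4 million pages
# add statistical overkill rather than new insight.
```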

Seriously, how many pages of newspaper do you have to read to convince yourself of gender bias? Clearly this is an example of Big Data getting in the way of the Big Picture. @Butterworthy clearly understands this danger, and sees the redemptive possibility of Big Crit (whatever that is) revitalizing the notion of critical authority and restoring some balance to the universe. I'm not sure I follow how he thinks that is going to happen. 


 Related post: Big Data and Organizational Intelligence (November 2018)

Sunday, January 27, 2013

Expert Generalists and Innovative Organizations

What do the great innovators have in common? Looking at examples from Picasso to Kepler, Art Markman calls these men expert generalists. They seem to know a lot about a wide variety of topics, and their wide knowledge base supports their creativity.

Markman identifies two personality traits that are key for expert generalists: Openness to Experience and Need for Cognition. Can we also expect to find these traits in innovative organizations?

Openness to Experience entails a willingness to explore new ideas and opportunities. Obviously many organizations prefer to stick with familiar ideas and activities, and have built-in ways of maintaining the status quo.

Need for Cognition entails a joy of learning, and a willingness to devote the time and effort necessary to master new things. 

In his post on the origins of modern science, Tim Johnson compares the rival claims of magic and commerce. He points out that good science is open whereas magic is hidden and secretive; he traces the foundations of modern science to European financial practice, on the grounds that markets are social, collaborative, open, forums. But perhaps it makes more sense to see modern science as having two parents: from magic it inherits its Need for Cognition, a deep and passionate interest in explaining how things work; while from commerce it inherits its Openness to Experience, a broad fascination with the unknown. Obviously there have been individual scientists who have had more of one than the other, and some outstanding individual scientists who have excelled at both, but the collective project of science has relied on an effective combination of these two qualities.


Saturday, December 1, 2012

Challenge-Led Innovation

#oipsrv One view of innovation is that it is motivated by a series of challenges. Once upon a time, we would have used the word "problems", and called this the "problem-solving" approach to innovation. But the word "problem" is now taboo in the business world, and we have to find various euphemisms such as "opportunity" or "challenge". Necessity is the mother of invention.

At a seminar at the British Library yesterday (Open Innovation in Public Services), I heard several ways of managing innovation in these terms.

  • Challenge Prizes - Offering cash prizes to the first person or team that can solve a well-defined problem. This approach has been used for centuries, although the history of technology is littered with unfortunate inventors who have produced something brilliant only to have the prize taken by a rival, or unfairly denied for various spurious reasons. Furthermore, a poorly designed prize can discourage collaboration and thus inhibit innovation instead of encouraging it. However, as Vicki Purewal explained, prize schemes do not have to follow the winner-takes-all, loser-gets-nothing rule, and are often designed to distribute the rewards more fairly and in stages. See Centre for Challenge Prizes
  • Hack Days - Bringing volunteers together for a day to build quick and dirty solutions to a broad range of problems. This approach is most commonly seen in the software arena, and the example presented was NHS Hackdays.
  • Challenge Platform - Creating a social network and/or funding for collective problem-solving. Contrasting examples from Barking and Dagenham, Camden, and York.


I think these are all good and useful initiatives. One of the benefits is that they open up the organization or ecosystem to ideas from a much larger community of people. This can be both more democratic and a lot more cost-effective than hiring one of the large consultancies, which seems to be the default method in some organizations. One way of putting this is that it changes the available scope of Organizational Intelligence.

However, problem-solving may be necessary for innovation, but is not sufficient. These initiatives concentrate on invention, which tends to be the sexy part of innovation. @davidtownson from the Design Council showed two slides that placed invention into a broader context. The first of these slides showed the Design Council's design process, drawn as a Double Diamond.  The first diamond is devoted to clarifying the problem or requirement, and the second diamond is devoted to solving a well-defined problem. If the challenge-led approach starts from a well-defined problem, then it is just doing the second diamond. 

The second of David's slides showed a spiral model of innovation, culminating in Systemic Change. (I can't find a version of this spiral on the Design Council website.) This might suggest extending the Double Diamond into a Triple Diamond, where the third diamond tackled the difficult and unglamorous end of the innovation process - rolling out the solution, integrating it with systems and working practices, and embedding it into the target organization or ecosystem.

This triple diamond faintly echoes the three-phase innovation model proposed (in a somewhat different context) by Abernathy and Utterback, which combined product innovation, process innovation, competitive environment and organizational structure: 
  • Fluid phase (exploratory)
  • Transitional phase (convergence on solution)
  • Specific phase (focus on costs and performance)
Within the public sector, there may be broad demand for innovations (individual challenges), but there is also extremely strong demand for innovation as such (focus on costs and performance). So a suitably modified version of the Abernathy and Utterback model would be extremely relevant to the public sector.

Let us return to the question of Open Innovation. In her presentation, Heather Niven contrasted a large tanker with a flotilla of small boats. In the specific area of NHS information systems, Heather's metaphor applies very well to the contrast between the NPfIT - a grossly expensive centralized white elephant - and a large number of small but useful apps developed at the NHS Hackdays that Carl Reynolds has organized. The "bottom-up" approach may be more promising than the "top-down" approach, as well as more exciting, but there probably needs to be a stronger element of coordination and integration before we can see this innovation as anything more than a load of well-meaning but marginal efforts by a bunch of extremely clever geeks.

Finally, there was some discussion about the word "innovation", and resistance to this concept within the public sector in particular. Perhaps we need to go back to talking about problem-solving?



Abernathy, W.J. and Utterback, J.M. Patterns of Innovation in Technology (Technology Review 1978) via Innovation Zen

For @LucyInnovation 's report of the British Library seminar, see Because not all the smart people work for you ...

Saturday, May 5, 2012

Daoism and Rocket Science

Who is to say whether a scientific or technical discovery is accidental or planned? Historians of science often point out that there was some luck involved in Fleming's "accidental" discovery of penicillin. But Fleming and his assistants were already actively searching for anti-bacterial agents, and the discovery of penicillin followed a similar path to his earlier discovery of the anti-bacterial properties of egg-white (lysozyme), so it is misleading to describe the discovery of penicillin as a complete accident.

Some historians of science now suggest that the Chinese invention of rockets was an accident. They argue that Daoist thinkers would have understood explosion as a violent response to the combination of Yin and Yang, and that they would therefore have been unable to think systematically about a reaction involving three ingredients instead of two. In other words, a given mental model or frame constrains investigation. (Unlike the Fleming example.)

Of course we must be cautious about interpreting historical Daoist thought against either a modern understanding of the chemistry of gunpowder, or even against a modern interpretation of Daoist thought. Perhaps the ancient Chinese did not see any contradiction between a three-way chemical reaction and Daoism, and this apparent contradiction is merely a modern projection. (In other words, the modern historians perceive the past using their own mental models or frames. None of us can escape this.)

However, it is still true that mental models can constrain what we perceive, as well as how we make sense of our perceptions and act upon them, and this has important implications for innovation and organizational intelligence.


Frank H. Winter, Michael J. Neufeld, Kerrie Dougherty, Was the rocket invented or accidentally discovered? Some new observations on its origins (Acta Astronautica, Volume 77, August–September 2012, Pages 131–137) http://dx.doi.org/10.1016/j.actaastro.2012.03.014

Corrinne Burns, Oops, I invented the rocket! The explosive history of serendipity (Guardian, 4 May 2012)

Monday, October 24, 2011

There is always another story 2

James Allworth (and apparently Clay Christensen) believe that Steve Jobs solved the innovator's dilemma (HBR October 2011).

Allworth's simplified version of the Innovator's Dilemma (as explained with greater precision in Christensen's book) is that successful innovators are led astray by the pursuit of profit. Jobs supposedly disdained profit, along with any number of other business school best practices, and produced "a company that looks entirely different to almost any other modern Fortune 500 company". In an unrelated article on Steve Jobs and the purpose of the corporation (HBR October 2011), Ben Heineman has asserted that "Apple existed to delight customers first — benefits to other stakeholders, including shareholders, followed".

Jobs' original expulsion from Apple may well have been partly caused by his failure to respect the traditional gods of management. On his return, he characterized the difference between Sculley and himself in terms of profitability versus passion. Jobs later told Walter Isaacson, his official biographer: "My passion has been to build an enduring company where people were motivated to make great products. The products, not the profits, were the motivation. Sculley flipped these priorities to where the goal was to make money. It's a subtle difference, but it ends up meaning everything." [via Huffington Post October 2011]

But as I indicated in my previous blogpost (There is always another story), Jobs was outstandingly good at constructing simple either-or narratives of this kind, and persuading everyone to believe them. His former colleague Bud Tribble called this a Reality Distortion Field. We sometimes have to work hard to avoid taking such narratives at face value. Like many wealthy rock stars or religious gurus, Jobs may have enjoyed creating the impression that he didn't care about wealth. But we don't have to believe it.


James Allworth, Steve Jobs solved the innovator's dilemma (HBR 24 October 2011)

Note: Professor Christensen tweeted James Allworth's HBR article without further comment, so I take that as indicating broad agreement with the article's main premise. See also Evgeny Morozov, Form and Fortune - Steve Jobs’s pursuit of perfection and the consequences (The New Republic, February 22, 2012).

Thursday, March 24, 2011

The Wisdom of the Iron Age

Interesting BBC programme In Our Time this morning about The Dawn of the Iron Age. Why and how did people start making ornaments and tools and weapons from copper and tin and lead? Because the ores were shiny, and it was easy to see how they could be melted and purified and worked. Gradually, people discovered that certain combinations of these materials (what we now call alloys) were stronger, or more malleable - hence the development of bronze.

Although iron ore was much more abundant than any of the others, it was much less attractive, and primitive people were unaware of its potential value. Even when melted, it didn't look like much. Producing useful iron from this stuff was a more complicated procedure, and those tribes that first discovered the secret wisely kept it to themselves. Egyptian tombs had a few iron items, but these were probably obtained by trade or capture - the evidence suggests that the Egyptians themselves did not know how to produce iron.

Could people ever have worked out how to produce iron if they didn't already have the experience of working with other metals? Would people ever have thought it worth the extra hassle of producing iron if they weren't aware of the limitations of using other metals? Is it conceivable that we could ever have had an Iron Age without having a Bronze Age first?

There is an important lesson here for innovation. Nobody should ever be satisfied with the "low hanging fruit". The only purpose of the low-hanging fruit is to get us started, to feed us and motivate us as we build ladders, so we can reach the high-hanging fruit.


See also

Venkatesh Rao, The Disruption of Bronze (2 February 2011)

Paula Hay, Cognitive Archeology of the West (17 March 2011)

Related post: Low-Hanging Fruit

The Wisdom of the Tomato

Various people have tweeted the following aphorism.

Knowledge is knowing a tomato is a fruit. Wisdom is knowing not to put it in a fruit salad.

Please permit me to quibble with this aphorism. Classifying tomatoes as fruit is merely information. This classification is supported by data, such as the observation that the tomato contains its own seeds. Knowing not to put it into a fruit salad is a culinary best practice, based on a series of social conventions about the proper constitution of fruit salad and its place within a meal. So this is knowledge, or what is sometimes called received wisdom. However, innovation often involves disobeying social conventions and surprising those who rely excessively upon received wisdom. For example, how did chefs discover that it was okay to put flower petals into salads (next practice)? So courage is knowing that you are not supposed to put tomatoes into fruit salad, but doing it anyway. And real wisdom is not inflicting such gross culinary experiments on the wrong people at the wrong time in the wrong way.



Wikipedia attributes this aphorism to the Irish rugby captain Brian O'Driscoll. Various interpretations can be found in the comments to Brendan Cole's blog What did BOD mean? (Feb 2009)

On the Unbelievable Truth (Series 10 Episode 5), @RealDMitchell rants about whether a tomato is a fruit or a vegetable. He claims that the US Government taxes tomatoes as vegetables, and regards this as more authoritative than mere science.

See also my post Co-Production of Data and Knowledge (Nov 2012)

Updated 29 January 2013

Wednesday, June 30, 2010

Visible Problems

@jchyip tweets "Just because a problem is visible doesn't mean that it's the most important one to deal with first."

In hierarchical organizations, the most important problem to deal with first is the one visible to your boss - or his boss. So much the worse for hierarchical organizations of course.

@flowchainsensei agrees, but says he would prefer the word "analytic", and posts a chart onto Twitter to clarify his use of the word "analytic".

Which types of organization are good at dealing with invisible problems? I searched for "invisible problems" on the Internet and found a few random examples: alcohol, presenteeism, cell-phone antennas, gambling, racism, violence against girls in school, vulnerable customers, water. One important point about any invisible problem is that problem-solving contains at least two additional steps - firstly persuading yourself that there is a problem at all, and secondly motivating others to help you solve it.

Sometimes you have to prepare a solution, but you cannot get enough resources or political support to implement your solution - until something happens to make the problem visible. Which means that whenever something dreadful happens, there are always opportunists who try to use this event as a pretext for introducing some innovation or other, which (they optimistically argue) would prevent such an event ever occurring again. Indeed, sometimes a dreadful event is so politically convenient for certain parties or interest groups that they may even be accused (by their opponents or by conspiracy theorists) of having engineered the event themselves (POSIWID).

An intelligent organization should have the capacity to identify and deal with invisible problems - making potential problems visible, and motivating people to anticipate and prevent problems before they occur. This is not the only defining characteristic of an intelligent organization, but it is one of the things that we should expect more intelligent organizations to be better at than less intelligent organizations.


Update: Bob's original picture is no longer available. For a more recent (and more detailed) account of his model, see Bob Marshall, Rightshifting in a Nutshell (October 2015)

Related post: Visible Problems and Best Practice (June 2010)

Monday, June 28, 2010

Reinventing the Wheel

@jschwa1 via @hlsdk and @JohnIMM on not reinventing the wheel, recommending examples and techniques to avoid (being accused of) wasting effort and resources.

There are two issues here. Firstly, the trade-off between design time and search time. In the short term, it doesn't make sense to spend half a day searching for something that you could build in an hour, even if the lifetime consequences of unnecessary duplication and complication may cost a lot more.

Secondly, there is a trade-off between using an existing wheel (which might be okay but not perfectly designed for this particular task) and designing a better wheel. I'm guessing that there are engineers at any major car manufacturer dedicated to re-inventing the wheels - otherwise we'd still be using Henry Ford's design.

Thus the management challenge here is twofold. Firstly, making sure that there is sufficient access to existing knowledge and ideas to allow engineers to build on what went before. And secondly making sure that there is enough management information and intelligence to achieve a reasonable balance between innovation and reuse.
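The first trade-off can be made concrete with a back-of-the-envelope sketch; all the figures below are assumptions for illustration, not measured data. The short-term comparison favours building, but the lifetime comparison can easily go the other way.

```python
# All figures are invented for illustration.
HOURLY_RATE = 80.0               # assumed cost of an engineer-hour

build_hours = 1.0                # quick to build a new "wheel"...
search_hours = 4.0               # ...slower to find and evaluate an existing one
annual_maintenance_hours = 3.0   # assumed upkeep of yet another duplicate
years_in_service = 5

short_term_build = build_hours * HOURLY_RATE
short_term_reuse = search_hours * HOURLY_RATE
lifetime_build = (build_hours + annual_maintenance_hours * years_in_service) * HOURLY_RATE
lifetime_reuse = search_hours * HOURLY_RATE

print(f"short term:  build {short_term_build:7,.0f}  vs  reuse {short_term_reuse:7,.0f}")
print(f"lifetime:    build {lifetime_build:7,.0f}  vs  reuse {lifetime_reuse:7,.0f}")
# Building looks four times cheaper today, but several times more expensive
# once the duplicate has to be maintained for five years.
```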


Great summary from @j4ngis: when you re-invent wheels, re-use knowledge about existing wheels.

Sunday, April 25, 2010

Innovation by Committee

The historian and filmmaker Laurence Rees is setting up a subscription-based website providing coverage of World War Two, called WW2History.com. In an article in today's paper No, children: Hitler came after 1066 (Sunday Times, 25 April 2010), he describes some of the financial and organizational challenges, and the risks of the subscription model.

If the subscription model of internet funding doesn’t work, I can’t see how truly authoritative educational material on the web has a future. Unless, of course, it’s assembled by a state-funded or charitable institution.

I was particularly interested in his comment about the failure of universities to pursue this kind of innovation.

From the first day I started making WW2History.com I was curious as to why no university had created something similar to this before me, especially since all the academics I asked to contribute to the site could see the value of the work instantly. Partly it’s because of money — universities can scarcely expand into new areas when they face cuts elsewhere. But, according to one distinguished academic I talked to, there’s also another reason. “We could never do this,” he said. “It isn’t just because we don’t have the media expertise or the cash, it’s because we would set up a committee to oversee production and no one would ever agree on anything.”

Does this mean that innovation by committee can never work, or merely that universities typically lack the capacity to operate the kind of collective intelligence that would make it work?

Friday, March 19, 2010

Where is the fear?

@hnauheimer recommended a discussion in the Linked-In Organizational Change Practitioners group. Rauf Aslam Butt had asked why people FEAR to change, and this prompted a number of responses about resistance to change being caused by anxiety, ego, dislike of effort, fear of the unknown, and so on.

I thought it was interesting that we often describe other people as fearful, anxious, reactive, and so on, but never ourselves. WE are rational and THEY are emotional.

Sometimes resistance to change is a perfectly rational response to a flawed or ill-conceived initiative, as my friend Linda Levine pointed out many years ago. Many change programmes in large organizations are not properly thought through, and many large organizations are trying to run several incompatible change programmes at the same time. As Christina Buchanan said in the discussion, we should all fear badly-planned change.

The second good reason to fear change is that the change may get half-way through and then run out of money or trust. (People losing faith when the change gets to that inevitable dip in the middle.) Or there'll suddenly be a new person at the top with a different agenda.

People learn to be apprehensive about change because of accurate observation of what has happened in their organizations and elsewhere. Art Kleiner suggests that "resistance to change occurs not because people fear change, but because they fear the consequences of contradicting the perceived priorities of the core group" (Strategy+Business, April 2010). Rather than complain about "fear", maybe change practitioners should look at addressing the causes of fear. (Perhaps this was the purpose of Rauf's original question.)

When change agents perceive ordinary busy people as fearful or ego-driven, simply because they don't leap with enthusiasm and energy for every half-baked scheme that is put to them, or because they ask awkward questions, maybe there's a certain amount of projection involved. (Psychologists call this transference.) Perhaps the people who are really most anxious about this change and its immediate prospects are the change agents themselves. But that's not the whole truth either.

Following my post Where is the intelligence?, we might ask a similar question about the true location of the fear. Even if the change agents authentically acknowledge their own feelings about the change, and deal with these feelings in a healthy and mature manner, we might wonder if the change agents really owned these feelings, or whether they were sensitively picking up the anxiety embedded in the organization itself. (Psychologists call this counter-transference.)

Unless we are completely emotionally cut off, our feelings inside organizations are strongly connected to the emotional state of the organization as a whole. This applies to feelings such as motivation as well as anxiety, and applies whether we are change agents ourselves or the recipients of change led by other people - often we may be both at the same time.

Change agents should also acknowledge their own contribution to the sum total of fear and anxiety in an organization. One common tactic for change is to create something called a compelling event - a story that attaches fear to the status quo, prodding the organization reluctantly into the future like a herd of cattle. As a consequence of this tactic, many organizations are almost paralysed by the accumulation of would-be compelling events.

But in my opinion, one of the biggest errors of a change agent is to focus attention on the people who are most vocal in expressing anxiety about a given initiative - treating them as if they were the instigators of these feelings rather than merely witnesses to them, sometimes even trying to exclude or avoid them as if this will cause the bad feelings to disappear. But anyone who brings their concerns out into the open is doing you a favour, because these concerns can then be addressed, and the change programme will be better for it. It is the silent ones you really have to worry about.


Linda Levine, An Ecology of Resistance. In Tom McMaster et al, Facilitating Technology Transfer through Partnership (Springer 1997) pp 163-174

 

Related posts: Passive Adoption (December 2009), Why new systems don't work (December 2009), The Role of the Sceptic (April 2010)

Wednesday, January 6, 2010

Notes on the Value of Culture

Following my previous post Meeting of Minds, about the cost of meetings and of the "meeting culture", @j4ngis asked "Would you also consider value of meeting culture?"

There is a more general discussion of culture to be had here. Culture is often blamed when things don't go according to a rational managerial ethos. As Oscar Berg blogged yesterday, Did you ever hear anyone shout "culture failure!"?

But an organization without culture - well, it just wouldn't be an organization at all. Culture is what gives an organization its identity - it is a kind of deeper structure that protects the organization from incoherence, instability and inconsequentiality. Culture tells us how an abstract business model is embodied in a particular organization.

Arjo Klamer identifies several ways of talking about the value of culture. In an anthropological sense,

"‘culture’ ... refers to the shared values, stories and aspirations that distinguish one group of people from another (think of a community, an organization, an ethnic group, a nation or a continent). The economic value of culture would be the economic contribution that those shared values make. As the sociologist Max Weber famously argued, the culture of Calvinism may have contributed to the rise of capitalism and the economic growth that came with it. A particular culture may improve economic performance or hinder it. A culture of distrust can seriously hamper the market process. A culture of consensus, such as exists in Japan and the Netherlands, can stifle entrepreneurship but may also be responsible for stability in the event of crisis." [Value of Culture (pdf)]

Edgar Schein identifies three levels of organizational culture - behaviour and artefacts, values, and assumptions and beliefs. [See Wikipedia. See also notes by Ted Nellen.] We can use the VPEC-T lens to unpack and identify these different elements.


An organization has various mechanisms to prevent random changes to the way-we-do-things. Much of the time, these mechanisms are accepted uncritically as part of normal management control - like an immune system that prevents the organization being taken over by destructive memes. @AndreaMeyer calls these mechanisms corporate antibodies. However, when managers themselves want to change things, these mechanisms turn out to be inconvenient obstacles, whose aggregate effect is to suppress innovation.

Vincent Kenny and Philip Boxer describe culture as follows.
"Anyone who works in businesses will have encountered the notion of culture, and the incredible extent to which a culture lives on in a way which defies anyone's attempts to bring about change. It is not only a question of dealing with the issue of anxiety as an individual issue - the whole fabric of the organisation seems to be caught up in the conservation of identity however much change individuals may make." [Economy of Discourses, 1990]
Kenny and Boxer go on to talk about "the levels of extreme inflexibility and 'stuckness' which we witness in large companies" and ask "how can we explain the increasing degrees of rigidity and loss of power for self-transformation evident in the invariant identities and cultures of organisations?"

The reason why leaders struggle with culture is because there is a creative tension or asymmetry between culture and identity on the one hand, and viability (or effectiveness or survival) on the other hand. This is a critical element of the Asymmetric Design lens, which Philip Boxer describes in When is a stratification not a universal hierarchy? (See also the sociological distinction between structure and agency.) This provides a rigorous framework for reasoning about complex structural change.

Coming back to @j4ngis's question - what is the value of the meeting culture - I guess the key question here is what other cultures (more flexible, less bureaucratic, or whatever) we'd be comparing it with, in what context. What function does this culture serve in the context of this particular organization, and how does it affect the strategic outcomes and energy profile of the organization?



Related posts: Vaccination (September 2004), Meeting of Minds (January 2010), Easier Seddon Done (June 2008)