  1. Data and Intelligence Principles From Major Players
  2. Responsibility by Design
  3. Outdated Assumptions - Connectivity Hunger
  4. Be the Change
  5. Making the World More Open and Connected

Data and Intelligence Principles From Major Players

The purpose of this blog post is to enumerate the declared ethical positions of major players in the data world. This is a work in progress.




Google

In June 2018, Sundar Pichai (Google CEO) announced a set of AI principles for Google. The announcement covers seven principles, four application areas that Google will avoid (including weapons), references to international law and human rights, and a commitment to a long-term sustainable perspective.

https://www.blog.google/topics/ai/ai-principles/


Also worth noting is the statement on AI ethics and social impact published by DeepMind last year. (DeepMind was acquired by Google in 2014 and is now a subsidiary of Google's parent company, Alphabet.)

https://deepmind.com/applied/deepmind-ethics-society/research/



IBM

In January 2017, Ginni Rometty (IBM CEO) announced a set of Principles for the Cognitive Era.

https://www.ibm.com/blogs/think/2017/01/ibm-cognitive-principles/

This was followed up in October 2017 with a more detailed ethics statement for data and intelligence, entitled Data Responsibility @IBM.

https://www.ibm.com/blogs/policy/dataresponsibility-at-ibm/



Microsoft

In January 2018, Brad Smith (Microsoft President and Chief Legal Officer) announced a book entitled The Future Computed: Artificial Intelligence and its Role in Society, to which he had contributed a foreword.

https://blogs.microsoft.com/blog/2018/01/17/future-computed-artificial-intelligence-role-society/



Twitter


@jack (Jack Dorsey, Twitter CEO) asked the Twitterverse whether Google's AI principles were something the tech industry as a whole could get behind (via The Register, 9 June 2018).



Selected comments

These comments are mostly directed at the Google principles, because these are the most recent. However, many of them apply equally to the others. Commentators have also remarked on the absence of ethical declarations from Amazon.


Many commentators have welcomed Google's position on military AI, and congratulated those Google employees who lobbied for discontinuing the company's work analysing drone footage for the US Department of Defense, known as Project Maven. See @kateconger, Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program (Gizmodo, 1 June 2018) and Google Backtracks, Says Its AI Will Not Be Used for Weapons or Surveillance (Gizmodo, 7 June 2018).

Interesting thread from former Googler @tbreisacher on the new principles (HT @kateconger)

@EricNewcomer talks about What Google's AI Principles Left Out (Bloomberg 8 June 2018). He reckons we're in a "golden age for hollow corporate statements sold as high-minded ethical treatises", complains that the Google principles are "peppered with lawyerly hedging and vague commitments", and asks about governance - "who decides if Google has fulfilled its commitments".

@katecrawford(Twitter 8 June 2018) also asks about governance. "How are they implemented? Who decides? There's no mention of process, or people, or how they'll evaluate if a tool is 'beneficial'. Are they... autonomous ethics?" And @mer__edith (Twitter 8 June 2018) calls for "strong governance, independent external oversight and clarity".

Andrew McStay (Twitter 8 June 2018) asks about Google's business model. "Please tell me if you spot any reference to advertising, or how Google actually makes money. Also, I’d be interested in knowing if Government “work” dents reliance on ads."

Earlier, in relation to DeepMind's ethics and social impact statement, @riptari (Natasha Lomas) suggested that "it really shouldn’t need a roster of learned academics and institutions to point out the gigantic conflict of interest in a commercial AI giant researching the ethics of its own technology’s societal impacts" (TechCrunch October 2017). See also my post on Conflict of Interest (March 2018).

@rachelcoldicutt asserts that "ethical declarations like these need to have subjects. ... If they are to be useful, and can be taken seriously, we need to know both who they will be good for and who they will harm." She complains that the Google principles fail on these counts. (Tech ethics, who are they good for? Medium 8 June 2018)


Updated 11 June 2018
    

Responsibility by Design

Over the past twelve months or so, we have seen a big shift in the public attitude towards new technology. More people are becoming aware of the potential abuses of data and other cool stuff. Scandals involving Facebook and other companies have been headline news.

Security professionals have been pushing the idea of security by design for ages, and the push to comply with GDPR has made a lot of people aware of privacy by design. Responsibility by design (RbD) represents a logical extension of these ideas to include a range of ethical issues around new technology.

Here are some examples of the technologies that might be covered by this, together with their benefits, their dangers, and the principles that might address them.

  Technologies such as   | Benefits such as  | Dangers such as              | Principles such as
  Big Data               | Personalization   | Invasion of Privacy          | Consent
  Algorithms             | Optimization      | Algorithmic Bias             | Fairness
  Automation             | Productivity      | Fragmentation of Work        | Human-Centred Design
  Internet of Things     | Cool Devices      | Weak Security                | Ecosystem Resilience
  User Experience        | Convenience       | Dark Patterns, Manipulation  | Accessibility, Transparency


Ethics is not just a question of bad intentions; it also includes bad outcomes arising from misguided action. Here are some of the things we need to look at.
  • Unintended outcomes - including longer-term or social consequences. For example, platforms like Facebook and YouTube are designed to maximize engagement. The effect of this is to push people into progressively more extreme content in order to keep them on the platform for longer.
  • Excluded users - this may be either deliberate (we don't have time to include everyone, so let's get something out that works for most people) or unwitting (well, it works for people like me, so what's the problem?)
  • Neglected stakeholders - people or communities that may be indirectly disadvantaged - for example, a healthy politics that may be undermined by the extremism promoted by platforms such as Facebook and YouTube.
  • Outdated assumptions - we used to think that data was scarce, so we grabbed as much as we could and kept it forever. We now recognize that data is a liability as well as an asset, and we now prefer data minimization - only collect and store data for a specific and valid purpose. A similar consideration applies to connectivity. We are starting to see the dangers of a proliferation of "always on" devices, especially given the weak security of the IoT world. So perhaps we need to replace the connectivity-maximization assumption with a connectivity-minimization principle. There are doubtless other similar assumptions that need to be surfaced and challenged.
  • Responsibility break - potential for systems being taken over and controlled by less responsible stakeholders, or the chain of accountability being broken. This occurs when the original controls are not robust enough.
  • Irreversible change - systems that cannot be switched off when they are no longer providing the benefits and safeguards originally conceived.


Wikipedia: Algorithmic Bias (2017), Dark Pattern (2017), Privacy by Design (2011), Secure by Design (2005), Weapons of Math Destruction (2017). The date after each page shows when it first appeared on Wikipedia.

TED Talks: Cathy O'Neil, Zeynep Tufekci, Sherry Turkle

Related Posts: Pax Technica (November 2017), Risk and Security (November 2017), Outdated Assumptions - Connectivity Hunger (June 2018)



Updated 12 June 2018
    

Outdated Assumptions - Connectivity Hunger

Behaviours developed in a state of scarcity may cease to be appropriate in a state of abundance. Our stone age ancestors struggled to get enough energy-rich food, so they acquired a taste for food with a strong energy hit. We inherited a greed for sweet and fatty foods, and can now stuff our faces on delicacies our stone age ancestors never knew, such as ice-cream and cheesecake.

***

So let's talk about data. Once upon a time, data processing systems struggled to get enough data, and long-term data storage was expensive, so we were told to regard data as an asset. People learned to grab as much data as they could, and keep it until the data storage was full. But the greed for data was always moderated by the cost of collection, storage and retrieval, as well as the limited choice of data that was available in the first place.

Take away the assumption of data scarcity and cost, and our greed for data becomes problematic. We now recognize that data (especially personal data) can be a liability as much as an asset, and have become wedded to the principle of data minimization - only collecting the data you need, and only keeping it as long as you need.
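
By way of illustration, here is a minimal sketch in Python of what a data-minimization policy might look like in code. The field names and the retention period are made up for the example, not drawn from any real system: strip everything not needed for the declared purpose, and delete records once the retention period has passed.

    from datetime import datetime, timedelta

    # Hypothetical example: collect only the fields needed for a declared purpose.
    ALLOWED_FIELDS = {"order_id", "delivery_address"}   # assumed purpose: order fulfilment
    RETENTION = timedelta(days=90)                      # assumed retention period

    def minimize(record: dict) -> dict:
        """Strip any field not required for the declared purpose."""
        return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

    def expired(stored_at: datetime, now: datetime = None) -> bool:
        """A record past its retention period should be deleted, not hoarded."""
        return (now or datetime.utcnow()) - stored_at > RETENTION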

***

But data scarcity is not the only outdated assumption that still influences our behaviour. Let's also talk about connectivity. Once upon a time, connectivity was intermittent, slow, unreliable. Hungry for greater connectivity, computer scientists dreamed of a world where everything was always on. More recently, Facebook has argued that Connectivity is a Human Right. (But you can only read this document if you have a Facebook account!)

But as with an overabundance of data, we may experience an overabundance of connectivity. Thus we are starting to realize the downside of "always on", not just in the highly insecure world of the Internet of Things (Rainie and Anderson) but also in corporate computing (Ben-Meir, Hill).

Increasingly, products and services are being designed for "always on" operation. Ben-Meir notes Apple’s assertion that constant connectivity is essential for features such as AirDrop and AirPlay, and only today a colleague was grumbling to me about the downgrading of offline functionality in Microsoft Outlook.

Perhaps therefore, similar to the data minimization principle, there needs to be a network minimization principle. The wider the network, the larger the scope of responsibility. Or as Bruce Schneier puts it, "the more we network things together, the more vulnerabilities on one thing will affect other things". So don’t just connect because you can. Connect for a reason, disconnect by default, support offline functionality and disruption-tolerance, prefer secure hubs to insecure peer-to-peer.
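
To make "disconnect by default" concrete, here is a minimal sketch in Python; the class, the hub endpoint and the reason parameter are my own invention, not any real device API. The device holds no standing connection: it opens one only for a declared purpose, and closes it as soon as the task is done.

    import socket
    from contextlib import contextmanager

    class MinimalConnector:
        """Disconnected by default: a connection exists only for the
        duration of a named task, and is closed again afterwards."""

        def __init__(self, host: str, port: int):
            self.host, self.port = host, port   # hypothetical secure hub endpoint

        @contextmanager
        def connect_for(self, reason: str, timeout: float = 5.0):
            # Connect for a reason: every connection is tied to a declared purpose.
            print(f"connecting to {self.host} for: {reason}")
            sock = socket.create_connection((self.host, self.port), timeout=timeout)
            try:
                yield sock
            finally:
                sock.close()   # disconnect by default once the task is done

    # Usage: work offline, and go online only for the one task that needs it.
    # connector = MinimalConnector("hub.example.com", 443)
    # with connector.connect_for(reason="upload daily telemetry") as s:
    #     s.sendall(b"...")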

Bruce Schneier again: "We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized. If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much."



Elad Ben-Meir, How an 'Always-On' Culture Compromises Corporate Security (Info Security, 2 November 2017)

Paul Hill, Always-on Access Brings Always-Threatening Security Risks (System Experts, 25 June 2015)

Lee Rainie and Janna Anderson, The Internet of Things Connectivity Binge: What Are the Implications? (Pew Research Center, 6 June 2017)

Bruce Schneier, Click Here to Kill Everyone (New York Magazine, 27 January 2017)

Maeve Shearlaw, Mark Zuckerberg says connectivity is a basic human right – do you agree? (Guardian 3 Jan 2014)


Thanks to @futureidentity for useful discussion
    

Be the Change

Anyone fancy a job as Head of Infrastructure? Here is the job description, posted to LinkedIn earlier this week.

We're responsible for "IT Change", including the end to end architecture, deployment and maintenance of IT infrastructure technologies across [organization]. We’re the first technical point of contact for people in [organization] who want to speak to the CIO function. We take business requirements and architect solutions, then work with [group IT] to input the solution into our data centres.
We provide direction, thought leadership, guidance and subject matter expertise on our IT estate to make sure we get the maximum value from our investment in our IT. We do this by defining our IT strategy and aligning it with Group IT, producing technology roadmaps and identifying and recommending IT solution opportunities, supporting business initiatives and ideas, and documenting and managing our architecture assets.
The Head of Infrastructure is a key leadership role in the CIO and critical to the delivery of both customer and partner facing technology. Working closely with our technology supplier, group IT, CISO and Service Management teams, this leader will be accountable for the end to ends analysis, design, build, test and implementation of; Platforms and Middleware, Network and Communications, Cloud Services, Data Warehouse and End User Services.
https://www.linkedin.com/jobs/view/633114556/

The job description contains a number of key words and phrases that architects should be comfortable with - direction, strategy, alignment, thought leadership, roadmaps, architecture assets.

But perhaps the first clue that there may be something amiss with this position is the fact that "IT Change" is in quotes. (As if to say that in IT, nothing really changes.)

The Register has contacted the person who (according to LinkedIn) currently holds this position. Is he moving on, moving up? Could this vacancy be connected in any way with the recent IT difficulties facing the organization? (No answer reported. Curious.)

The recent IT difficulties facing this particular organization have come to the attention of politicians and the media. Since the chair of the Treasury Select Committee described the situation as having "all the hallmarks of an IT meltdown", the word "meltdown" has become the descriptor of choice for journalists covering the story.

But help is at hand: IBM has kindly volunteered to help sort out the mess. So we can guess what "working closely with our technology supplier" might look like.




Karl Flinders, TSB IT meltdown has the makings of an epic (Computer Weekly, 25 April 2018)

Samuel Gibbs, Warning signs for TSB's IT meltdown were clear a year ago – insider (The Guardian, 28 April 2018)

Kat Hall, Newsworthy Brit bank TSB is looking for a head of infrastructure (The Register, 27 April 2018)

Stuart Sumner, TSB brings in IBM in attempt to resolve IT crisis (Computing, 26 April 2018)
    

Making the World More Open and Connected

Last year, Facebook changed its mission statement, from "Making The World More Open And Connected" to "Bringing The World Closer Together".

As I said in September 2005, interoperability is not just a technical question but a sociotechnical question (involving people, processes and organizations). (Some of us were writing about "open and connected" before Facebook existed.) But geeks often start with the technical interface, or what is sometimes called an API.

For many years, Facebook had an API that allowed developers to snoop on friends' data: this was shut down in April 2015. As Constine reported at the time, this was not just because the API was "kind of shady" but also to "deny developers the ability to build apps ... that could compete with Facebook’s own products". Sandy Parakilas (himself a former Facebook insider) made a similar point (as reported by Paul Lewis): Facebook executives were nervous about the commercial value of data being passed to other companies, and worried that the large app developers could be building their own social graphs.

In other words, the decision was not motivated by concern for user privacy but by the preservation of Facebook's hegemony.

When Tim Berners-Lee first talked about the Giant Global Graph in 2007, it seemed such a good idea. When Facebook launched the Open Graph in 2010, this was billed as "a taste of the future where everything can be more personalized". Like!




Philip Boxer and Richard Veryard, Taking Governance to the Edge (Microsoft Architecture Journal, August 2006)

Josh Constine, Facebook Is Shutting Down Its API For Giving Your Friends’ Data To Apps (TechCrunch, 28 April 2015)

Josh Constine and Frederic Lardinois, Everything Facebook Launched At f8 And Why (TechCrunch, 2 May 2014)

John Lanchester, You Are the Product (London Review of Books, 17 August 2017)

Paul Lewis, 'Utterly horrifying': ex-Facebook insider says covert data harvesting was routine (Guardian, 20 March 2018)

Caroline McCarthy, Facebook F8: One graph to rule them all (CNet, 21 April 2010)

Sandy Parakilas, We Can’t Trust Facebook to Regulate Itself (New York Times, 19 November 2017)

Wikipedia: Giant Global Graph, Open API


Related Posts: SOA Stupidity (September 2005), Social Networking as Reuse (November 2007), Security is Downstream from Strategy (March 2018), Connectivity Hunger (June 2018)
    
