

  1. As Shepherds Watched
  2. Dynamic Pricing Update
  3. Documenting Business Requirements
  4. Netflix and Algorithms
  5. Trusting the Schema

As Shepherds Watched

We can find a useful metaphor for data ethics in Tolkien's Lord of the Rings. Palantíri are indestructible stones or crystal balls that enable events to be seen from afar. They also allow communication between two stones. The word comes from Tolkien's invented language Quenya: palan means far, tir means to watch over.

The stones are powerful but dangerously unreliable. Even in the hands of a dark lord such as Sauron, the stones cannot present a completely false image, but they can conceal enough to mislead, and at one point in the story Sauron himself is deceived.

This links to my oft-repeated point about data and dashboards: along with the illusion that what the data tells you is true, there are two further illusions: that what the data tells you is important, and that what the data doesn't tell you is not important. (See my eBook How To Do Things With Data.)

Joseph Pearce notes the parallel between palantíri and another device whose name also denotes watching from afar: television. "The palantir stones, the seeing stones employed by Sauron, the Dark Lord, to broadcast propaganda and sow the seeds of despair among his enemies, are uncannily similar in their mode of employment to the latest technology in mass communication media" (Pearce, p. 244).

The big data company Palantir Technologies was named after Tolkien's stones. It has pitched itself as "providing the power to see the world, without becoming corrupted by that power" (Maus). Not everyone is convinced.


Denis Campbell, NHS England gives key role in handling patient data to US spy tech firm Palantir (Guardian, 20 November 2023)

Maus Strategic Consulting, A (Pretty) Complete History of Palantir (27 April 2014)

Joseph Pearce, Catholic Literary Giants: A Field Guide to the Catholic Literary Landscape (Ignatius Press 2014)

Wikipedia: Palantír, Palantir Technologies



Dynamic Pricing Update

As I pointed out back in 2006, the concept of Dynamic Pricing has been around for at least 25 years. I first encountered it in Kevin Kelly's 1998 book, New Rules for the New Economy.

Five years ago, when I was consulting to a large UK supermarket, this concept was starting to be taken seriously. The old system of sending people around the store putting yellow stickers on items that were reaching their sell-by date was seen as labour-intensive and error-prone. There were also some trials with electronic shelf-edge labels to address the related challenge of managing special offers and discounts for a specific stock-keeping unit (SKU). At the time, however, the supermarket was not ready to invest in implementing these technologies across the whole business.

The BBC reports that these systems are now widely used in other European countries, and there are further trials in the UK. This is being promoted as a way of reducing food waste. According to Matthias Guffler of EY Germany, around 400,000 tonnes of food is wasted every year, costing German retailers over 2 billion euros. Obviously no system will completely eliminate waste, but even reducing this by 10% would represent a significant saving.

Saving for whom? Clearly some consumers will benefit from a more efficient system of marking down items for quick sale, but there are concerns that other consumers will be disadvantaged by the potential uncertainty and lack of transparency, especially if retailers also start using this technology to increase prices in response to high demand.
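To make the trade-off concrete, here is a minimal sketch of the kind of markdown rule such a system might apply, discounting more aggressively as the sell-by date approaches and as excess stock builds up. The function name, weights and thresholds are all hypothetical, not taken from any retailer's actual system.

```python
from datetime import date

def markdown_price(base_price: float, sell_by: date, today: date,
                   stock_on_hand: int, expected_daily_sales: int) -> float:
    """Illustrative markdown rule: discount grows with time pressure
    (approaching sell-by date) and with excess stock."""
    days_left = (sell_by - today).days
    if days_left < 0:
        return 0.0  # past sell-by: remove from sale
    # Excess-stock ratio: how much stock exceeds what we expect to sell
    expected_sales = expected_daily_sales * max(days_left, 1)
    excess = max(stock_on_hand - expected_sales, 0) / max(stock_on_hand, 1)
    # Time pressure: near 0 with plenty of shelf life, 1 on the last day
    time_pressure = 1.0 if days_left == 0 else min(1.0, 1.0 / days_left)
    discount = min(0.75, 0.5 * time_pressure + 0.25 * excess)
    return round(base_price * (1 - discount), 2)
```

The point of the sketch is that every parameter here is a policy choice, invisible to the shopper, which is exactly where the transparency concerns arise.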

MaryLou Costa, Why food discount stickers may be a thing of the past (BBC News, 30 November 2023)

Matthias Guffler, Wie der Handel das Problem der Lebensmittelverschwendung lösen kann [How retail can solve the problem of food waste] (EY-Parthenon, 28 March 2023)

See also The Price of Everything (May 2017)


Documenting Business Requirements

A couple of questions came in on my phone, and I thought I'd share my answers here.

When is a Business Requirements Document good enough?


Here are a few considerations to start with.

Fit for purpose - What is the document going to be used for - planning, estimating, vendor selection, design or whatever? Does it contain just enough for this purpose, or does it dive (prematurely) into solution design or technical detail?

SMART - are the requirements specific, measurable, and so on, or are they just handwaving? How will you know when (if) the requirements have been satisfied?

Transparency and governance - is it clear whose requirements are being prioritized (FOR WHOM)? Who wrote it and what is their agenda? (Documents written by a vendor or consultancy may suit their interests more than yours.)

Future-proof - are these requirements merely short-term and tactical, or will they remain relevant? And are all the urgent requirements equally important?

Complete - ah, but what does that mean?

When is a Business Requirements Document complete?

Scope completeness - covering a well-defined area of the business in terms of capability, process, org structure, ...

Viewpoint completeness - showing requirements for people, process, data, technology, ...

Constraint completeness - identifying requirements relating to privacy, security, compliance, risk, ...

Transition plan - not just data migration but also process change, big bang versus phased, ...

Quality and support plan - verification, validation and testing, monitoring and improvement. What is critical before going live versus what can be refined later.

RAID - risks, assumptions, issues and dependencies 

Important note - this is not to say that a business requirements document must always be complete in all respects, as it's usually okay for requirements to develop and evolve over time. But it is important to be reasonably clear about those aspects of the requirements that are assumed to be more or less complete and stable within a time horizon appropriate for the stated purpose, and to make these assumptions explicit. For example, if you are using this document to select a COTS product, and that product needs to be a good fit for requirements for at least n years, then you would want to have a set of requirements that is likely to be broadly valid for this length of time. (And you would want to avoid including design decisions tied to today's technology.)


Netflix and Algorithms

Following my previous posts on Netflix, I have been reading a detailed analysis in Ed Finn's book, What Algorithms Want (2017).

Finn's answer to my question Does Big Data Drive Netflix Content? is no, at least not directly. Although Netflix had used data to commission new content as well as to recommend existing content (Finn's example was House of Cards), it had apparently left the content itself to the producers, and then used data and algorithms to promote it.

After making the initial decision to invest in House of Cards, Netflix was using algorithms to micromanage distribution, not production. (Finn, p. 99)

Obviously something written in 2017 doesn't say anything about what Netflix has been doing more recently, but Finn seems to have been looking at the same examples as the other pundits I referenced in my previous post.

Finn also makes some interesting points about the transition from the original Cinematch algorithm to what he calls Algorithm 2.0.

The 1.0 model gave way to a more nuanced, ambiguity-laden analytical environment, a more reflexive attempt to algorithmically comprehend Netflix as a culture machine. ... Netflix is no longer constructing a model of abstract relationships between movies based on ratings, but a model of live user behavior in their various apps (Finn, pp. 90-91)

The coding system relies on a large but hidden human workforce, hidden to reinforce the illusion of pure algorithmic recommendations (p. 96) and perfect personalization (p. 107). As Finn sees it, algorithm 1.0 had a lot of data but no meaning, and was not able to go from data to desire (p. 93). Algorithm 2.0 has vastly more data, thanks to this coding system - but even the model of user behaviour still relies on abstraction. So exactly where is the data decoded and meaning reinserted (p. 96)?

As Netflix executives acknowledge, so-called ghosts can emerge (p. 95), revealing a fundamental incompleteness (lack) in symbolic agency (p. 96).

Ed Finn, What Algorithms Want: Imagination in the Age of Computing (MIT Press, 2017)

Alexis C. Madrigal, How Netflix Reverse-Engineered Hollywood (Atlantic, 2 January 2014)

Previous posts: Rhyme or Reason - The Logic of Netflix (June 2017), Does Big Data Drive Netflix Content? (January 2021)


Trusting the Schema

A long time ago, I did some work for a client that had an out-of-date and inflexible billing system. The software would send invoices and monthly statements to the customers, who were then expected to remit payment to clear the balance on their account.

The business had recently introduced a new direct debit system. Customers who had signed a direct debit mandate no longer needed to send payments.

But faced with the challenge of introducing this change into an old and inflexible software system, the accounts department came up with an ingenious and elaborate workaround. The address on the customer record was changed to the address of the internal accounts department. The computer system would print and mail the statement, but instead of going straight to the customer it arrived back at the accounts department. An accounts clerk stamped it PAID BY DIRECT DEBIT, and then mailed the statement to the real customer address, which was stored in the Notes field on the customer record.

Although this may be an extreme example, there are several important lessons that follow from this story.

Firstly, business can't always wait for software systems to be redeveloped, and can often show high levels of ingenuity in bypassing the constraints imposed by an unimaginative design.

Secondly, the users were able to take advantage of a Notes field that had been deliberately underdetermined to allow for future expansion.

Furthermore, users may find clever ways of using and extending a system that were not considered by the original designers of the system. So there is a divergence between technology-as-designed and technology-in-use.

Now let's think what happens when the IT people finally get around to replacing the old billing system. They will want to migrate customer data into the new system. But if they simply follow the official documentation of the legacy system (schema etc), there will be lots of data quality problems.

And by documentation, I don't just mean human-generated material but also schemas automatically extracted from program code and data stores. Just because a field is called CUSTADDR doesn't mean we can guess what it actually contains.
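A migration sketch makes the point, assuming the workaround described above: the code distrusts the CUSTADDR field and checks whether the real address has been stashed in the Notes field. The field names, the accounts department address and the "Real address:" convention are all hypothetical.

```python
# Sketch of a migration check that distrusts the schema: CUSTADDR may hold
# the internal accounts department address, with the real customer address
# tucked into the free-text NOTES field. All values are illustrative.

ACCOUNTS_DEPT_ADDR = "Accounts Dept, 1 Head Office Way"  # hypothetical

def resolve_customer_address(record: dict) -> tuple[str, str]:
    """Return (address, provenance) for migration into the new system."""
    custaddr = record.get("CUSTADDR", "").strip()
    notes = record.get("NOTES", "")
    if custaddr == ACCOUNTS_DEPT_ADDR:
        # Workaround detected: the real address lives in the Notes field
        for line in notes.splitlines():
            if line.startswith("Real address:"):  # assumed convention
                return line.removeprefix("Real address:").strip(), "NOTES"
        return custaddr, "UNRESOLVED"  # flag for manual review
    return custaddr, "CUSTADDR"
```

Even this sketch only works because someone remembered the workaround; a migration driven purely by the schema would happily load the accounts department's address for every direct debit customer.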

Here's another example of an underdetermined data element, which I presented at a DAMA conference in 2008. SOA Brings New Opportunities to Data Management.

In this example, we have a sales system containing a Business Type called SALES PROSPECT. But the content of the sales system depends on the way it is used - the way SALES PROSPECT is interpreted by different sales teams.

  • Sales Executive 1 records only the primary decision-maker in the prospective organization. The decision-maker’s assistant is recorded as extra information in the NOTES field. 
  • Sales Executive 2 records the assistant as a separate instance of SALES PROSPECT. There is a cross-reference between the assistant and the boss.

Now both Sales Executives can use the system perfectly well - in isolation. But we get interoperability problems under various conditions.

  • When we want to compare data between executives
  • When we want to reuse the data for other purposes
  • When we want to migrate to a new sales system

(And problems like these can occur with packaged software and software as a service just as easily as with bespoke software.)
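The interoperability problem can be sketched in code. Assuming hypothetical field names for each team's convention, one way to reconcile the two usages into a single person-per-row model before comparison or migration:

```python
# Sketch: reconciling two teams' uses of SALES PROSPECT. Executive 1 hides
# the assistant in the NOTES field; Executive 2 records the assistant as a
# separate prospect with a cross-reference ("xref") to the boss. All field
# names and conventions are illustrative.

def normalise_prospects(records: list[dict]) -> list[dict]:
    """Flatten both conventions into one person per row."""
    people = []
    for rec in records:
        if rec.get("xref"):  # Convention 2: assistant as its own record
            people.append({"name": rec["name"], "role": "assistant",
                           "boss": rec["xref"]})
            continue
        people.append({"name": rec["name"], "role": "decision-maker",
                       "boss": None})
        notes = rec.get("notes", "")
        if notes.startswith("Assistant:"):  # Convention 1: assistant in NOTES
            people.append({"name": notes.removeprefix("Assistant:").strip(),
                           "role": "assistant", "boss": rec["name"]})
    return people
```

Notice that the reconciliation logic encodes knowledge about how each team used the system - knowledge that appears nowhere in the schema.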


So how did this mess happen? Obviously the original designer / implementer never thought about assistants, or never had the time to implement or document them properly. Is that so unusual? 

And this again shows the persistent ingenuity of users - finding ways to enrich the data - to get the system to do more than the original designers had anticipated. 


And there are various other complications. Sometimes not all the data in a system was created there, some of it was brought in from an even earlier system with a significantly different schema. And sometimes there are major data quality issues, perhaps linked to a post before processing paradigm.


Both data migration and data integration are plagued by such issues. Since the data content diverges from the designed schemas, you can't rely on the schemas of the source data; you have to inspect the actual data content. Or undertake a massive data reconstruction exercise, often misleadingly labelled "data cleansing".

There are several tools nowadays that can automatically populate your data dictionary or data catalogue from the physical schemas in your data store. This can be really useful, provided you understand the limitations of what this is telling you. So there are a few important questions to ask before you trust the physical schema as providing a complete and accurate picture of the actual contents of your legacy data store.

  • Was all the data created here, or was some of it mapped or translated from elsewhere? 
  • Is the business using the system in ways that were not anticipated by the original designers of the system? 
  • What does the business do when something is more complex than the system was designed for, or when it needs to capture additional parties or other details?
  • Are classification types and categories used consistently across the business? For example, if some records are marked as "external partner" does this always mean the same thing? 
  • Do all stakeholders have the same view on data quality - what "good data" looks like?
  • And more generally, is there (and has there been through the history of the system) a consistent understanding across the business as to what the data elements mean and how to use them?
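Several of these questions can only be answered by profiling the actual data content rather than reading the schema. A minimal sketch, with illustrative field names and record structure:

```python
# Sketch: simple profiling checks on actual data content, rather than
# trusting the extracted schema. Record structure is illustrative.
from collections import Counter

def profile_field(records: list[dict], field: str) -> dict:
    """Summarise what a field actually contains."""
    values = [r.get(field) for r in records]
    non_null = [v for v in values if v not in (None, "")]
    return {
        "fill_rate": len(non_null) / len(values) if values else 0.0,
        "distinct": len(set(non_null)),
        "top_values": Counter(non_null).most_common(5),
    }
```

For example, profiling a hypothetical partner_type field would quickly show whether "external partner" is used consistently, or whether different teams have coded the same idea in different ways.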

Related posts: Post Before Processing (November 2008), Ecosystem SOA 2 (June 2010), Technology in Use (March 2023)

