I heard this story last week, from the mother of a toddler.
The kid is home, playing with Grandpa. Kid is just past the peek-a-boo stage, now experimenting with hide-and-go-seek. Like most kids of this age, hiding generally involves standing on the other side of a chair or putting a piece of paper on her head. Not really hidden. But Grandpa didn't get the message. When it's his turn to hide he goes into the other room. Toddler takes hands off eyes, looks around. Doesn't see Grandpa. Looks a little bit worried but doesn't move. Waits another minute. Shouts out, "Alexa, where's Grandpa?"
I'm going to let you sit with that.
It led us to all kinds of questions. Including about advertising on these devices. Others in the group said it's small, but growing. This article says it's already here and that we (the people) like it. All of us agreed it seems inevitable.
Question for nonprofits: you ready to pay whatever it will cost to make sure you are the one (and only) response when someone starts asking, "Alexa (or others), who should I donate money to?"
Question for the rest of us: You really want some engineered algorithm (no doubt based on who paid the most) telling you where to give your money?
I’ve participated in a lot of conferences, panels, discussions, etc. about “nonprofits and AI,” “foundations and AI,” “AI for good”* and so on. The vast majority of them miss the point altogether.
It’s not really a question of these organizations using artificial intelligence, which is how every one of these panels approaches it. Most civil society organizations may buy software that applies algorithmic analysis and some AI to a large dataset, perhaps through their vendors of fund development data or software. And then, yes, there are legitimate questions to be asked about the inner workings, the ethical implications, the effects on staff and board, and so on. Those are important considerations when choosing software vendors, and it is important for all organizations to understand how these tools work - but they are hardly worth a conference panel (IMHO), and they are not the “black magic” or “sector-transforming phenomenon” a conference organizer would want you to think.
The REAL issue is how large datasets (with all the legitimate questions raised about bias, consent and purpose) are being interrogated by proprietary algorithms (non-explainable, opaque, discriminatory) to feed decision making in the public and private sectors in ways that FUNDAMENTALLY shift how the people and communities served by nonprofits/philanthropy are being treated.
This essay on “The Automated Administrative State” is worth a read.
- Biased policing algorithms cause harm that nonprofits need to understand, advocate against, deal with, and mitigate.
- AI driven educational programs shift the nature of learning environments and outcomes in ways that nonprofit after-school programs need to understand and (at worst) remediate, (at best) improve upon.
- The use of AI driven decision making to provide public benefits leaves people without clear paths of recourse to receive programs for which they qualify (read Virginia Eubanks’s Automating Inequality).
- Algorithmically-optimized job placement practices mean job training programs and economic development efforts need to understand how online applications are screened, as much as they help people actually add skills to their applications.
The real question for nonprofits and foundations is not HOW they will use AI, but how AI is being used in the domains in which they work - and how they must respond.
* I try to avoid any conversation structured as “_[blank]_ for (social) good,” where the [blank] is the name of a company or a specific type of technology.
Yep, deliberately trying to provoke you with the headline. Here's what provoked me:
The news that two airplane crashes killed a total of 346 people, in part due to a software upgrade that was "optional." (read: cost more)
This story about electronic health records (software) and deaths that ensued from resultant poor medical care.
What does this have to do with philanthropy and civil society?
Philanthropic and civil society organizations are as dependent on software as are businesses and governments. Do you know how your software works? What its vulnerabilities are?
Your work may not involve the difference between life and death, but if you're collecting information on lots of people and not respecting their rights in collecting it, not protecting it once you have it, or managing it (and the software you use to hold and analyze it) in line with your mission, how much good are you really doing? Are you making the people your organization serves, or the nonprofits you fund, more vulnerable with your data practices even as you try to do good with your dollars?
The World Food Programme recently announced a partnership with Palantir. There’s a lot to be concerned about here - in this specific instance and in the broadest possible sense for understanding civil society’s role regarding digital data use. Please read this open letter.
Does your organization know how the tech companies you partner with use the data you have on your constituents? It’s not just about “ownership,” but the insights and algorithms and predictive tools that may be built on that data. Are you “locked in” to a platform - if you wanted to switch vendors for your CRM or mailing lists or social media - can your organization get your data back?
How is your organization managing data? With any more respect for the individual people from whom it comes than these big software/analytics/insight companies? If not, why should anyone trust you with their data?
These are pivotal questions - and we need answers and processes and protocols and regulation. Civil society is not meant to be a poor cousin of big business, an outsourced arm of government, or the “data washing” arm of either.
The tenth annual Blueprint - Blueprint 2019 - went live in December. You can find it and the entire decade's archive here.
On January 23rd we'll be hosting our annual "prediction-palooza" (free, online) discussion about philanthropy predictions. Information on that is available here.
In the meantime, I've just come off a conversation with a research group preparing a report for a big foundation on the future of philanthropy. I get asked to do a lot of these. I only agree to these conversations if there is going to be a public version of the report. I'm told that's the case - this report should be available in April.
Some thoughts as I was asked to reflect on the last 5 years and then look ahead 10 years.
All the new products, platforms, and ways to give (DAFs, impact investing, crowdfunding platforms, text donations, cause marketing, etc.) are not adding up to more giving by more people. As Indiana University notes, since 2000 we've lost 20 million givers - at least as recorded by data on charitable donations to tax-exempt nonprofits. This is over the same 19-year time frame that brought us online/mobile donations, online data resources about nonprofits, and more.
-> Perhaps we can stop assuming product innovation equals growth?
Where have the "missing millions" gone? I doubt people have given up on giving; after all, we've been doing it for thousands of years. I think we have an "instrumentation" problem. Which is to say, we're measuring the wrong things. Changes in the tax code are expected to result in changes in giving to nonprofits (and in how useful itemized tax forms will be for measuring giving behavior).
--> Perhaps we can ask whether we're measuring the right things?
We need new instruments to measure how Americans give and to whom. It should include measurements of person-to-person giving (e.g., as happens on crowdfunding platforms), political donations and contributions of time, investments intended to produce social/environmental outcomes, and money raised via product purchases (BOGO or cause marketing). I've been calling for this since at least 2015 - see here, and had intimations about it back in 2008 (see here).
Commercial platforms and nonprofit data:
Does anyone really think Facebook is going to be the same thing in 2029 that it is in 2019? Not even the folks at FB would say that. Every nonprofit and philanthropist that is managing their giving, their outreach, their donors, etc. on this single platform should beware. The rules are going to change, the company (or companies) will push back, and there will be all kinds of ups and downs between now and ten years from now - but in no imaginable future is doing right by nonprofits (in terms of their data and long-term relationships with donors) in the growth plans of that company. If you can imagine either a different FB in 10 years or no FB 10 years from now, then it seems like a good idea not to put all your data eggs in a FB basket. (Or in any commercial platform driven by a business model unrelated to how well it serves the social sector.)
Charity and Politics:
The legal line between these two domains is man-made. It's changed over time. It's thin and it's fragile. The current presidential administration is determined to destroy it (see the Johnson Amendment fight, as well as court cases on donor disclosure in politics). There's not enough manpower at the IRS to defend any boundary that might exist. Digital data and systems make defending the line even more difficult than it was in the analog age. Many advocates and advocacy organizations would like to see the line gone. Individual people may not care as much about separating political and charitable action as policy makers and nonprofit leaders want them to. Assuming the old boundaries between these domains function as intended is fooling oneself. We should put our attention into writing rules that protect our (often conflicting) rights (association, expression, privacy), shed light on political corruption and the excessive influence of wealth, and assume digital data collection and exposure, rather than nostalgically assuming that protecting a legal distinction established in the 1950s is the best (or only) way forward.
Anyone but me noticing that the marketing hype about blockchain is starting to quiet down, just as people building digital systems that focus on digital security and encryption are growing in actual use? This is a good thing - stop gawking at the new packaging and let's focus on the values that particular activities require. In some cases, permanent records of transactions are a good thing (supply chain verification of objects, possibly). In other cases, distributed, immutable records may not be such a good idea (human ID, for example).
Artificial intelligence (AI), predictive algorithms, and machine learning are three more shiny objects. Most people ask me "How will nonprofits and/or philanthropists change by using AI?" I think this question has the subject and object in the wrong order. The more relevant question for most donors and social sector organizations is "How will the use of AI (etc.) change what the organizations/donors are doing?" Government bodies and commercial companies are already using these tools - they shape what you see online, what benefits you qualify for, your chances of being audited by the tax authority, your chances of getting a speeding ticket, the keywords you need to enter in your job application, etc., etc. They are changing the nature of the problem space in which social sector organizations and philanthropists do their work. This is not the future, this is now. This is not the edge-case exception of a few organizations with some good data and data scientists, this is the great mass of organizations and donors. I'd love to see some real discussion of how philanthropy and social sector organizations can and should change to be effective in a world already being shaped by AI (etc.). Then, for dessert, we can talk about the exceptions to this rule.
It's the nature of the game that we'll chatter about the latest shiny object. What's much more interesting is how we embody shared values in new ways.