Unintended consequences: Why good intentions go bad, and how to make positive change in an unpredictable world
An interview with Kirsten Moy
By Laurie Mazur with Kirsten Moy
If you’ve ever tried to bring about positive change in the world, you know how easily the best-laid plans can go awry.
Kirsten Moy knows this all too well. A long career in investment and community development has afforded Moy an eagle-eye view of the rare interventions that succeeded, as well as those that failed. Over time, she became fascinated by the myriad factors that separate the two — and with the complex systems in which all human actions are embedded.
Today, Moy is a practitioner member of the Waterloo Institute for Complexity and Innovation at the University of Waterloo, a senior fellow with the Aspen Institute, and a board member of Island Press. She served as the first director of the Community Development Financial Institutions (CDFI) Fund, a program created under the Clinton Administration to provide capital to financial institutions offering services to communities underserved by traditional banks. Before that, Moy helped pension funds and other institutions invest in affordable housing and other community facilities. At the Aspen Institute, Moy created platforms for asset-building in lower-income communities, in pursuit of bringing greater scale to that growing nonprofit field.
In this interview with Island Press editor Laurie Mazur, Moy explores the enduring mystery of unintended consequences, or UICs. She breaks down the many different kinds of UICs and finds that some are not even unintended. Importantly, Moy offers advice and insights for those seeking to help — and heal — a complex world that is full of surprises.
Q: What, exactly, is an unintended consequence?
A: Well, there’s no actual scientific definition for unintended consequences. So, let’s just go with the most basic: it’s something that happens directly or indirectly as a result of something you did that you didn’t plan for or consider. Sometimes unintended consequences are small and sometimes they’re large. But while unintended consequences can be positive, far more often they seem to be negative.
Here’s one example. Back in the 90s, millions of trees were planted to restore forests across the U.S. and Canada. But the new forests were monocultures of highly flammable tree species, and now they are contributing to catastrophic wildfires. Certainly not what the tree planters intended.
Q: So, you’ve done a deep dive into unintended consequences — UICs — and found that they fall into several distinct categories.
A: Yes, and the categories matter because you need to deal with each category differently. Let’s call the first three categories the “Coulda Shouldas,” the “Bad Actors,” and the “Real Deals.”
The Coulda Shouldas, as the name suggests, are UICs that we could or should have known would happen. This category is particularly distressing because many so-called UICs could have been predicted and even avoided if someone had cared or bothered to ask a few questions at the start. Like the tree plantings: even in the 90s we knew that monocultures are more vulnerable than a healthy, diverse forest. But somebody wasn’t paying attention, or didn’t care, or didn’t communicate. And sometimes it’s a case of bad actors.
Q: Let’s talk about the bad actors. Because some UICs are not even unintended. Right?
A: Sadly, yes. For many so-called UICs, the negative consequences were not just known; they were allowed, encouraged, or even driven to happen to realize desired outcomes for other parties — like government entities or private corporations.
Think of all the cases where corporations dump toxic waste that harms people and communities. To the corporation, the waste is an “exogenous factor” — it’s outside the system, so their claim is that they don’t have to deal with it — not their responsibility. While their goal is not necessarily to cause harm, their business model — the quest for profitability and shareholder value — means that they won’t spend the necessary resources to take care of the problem and, indeed, may try to prove the harm was in no way due to their actions. The problem here is really the imbalance of power.
Q: So, after you separate out the situations where we could or should have known and the bad actors, what are the UICs that truly deserve the name, your “Real Deals”?
A: These are consequences that are very hard or even impossible to predict, even when we bring in sophisticated technology and awesome computational power. Part of that is because there’s a lot science still doesn’t know about both the natural world and the consequences of human activity. But part of it is the existence of a phenomenon called “emergence,” a characteristic of complex systems.
Complex systems — like a forest, say, or the global economy — are composed of multitudinous entities, each doing their own thing with no central control. Those entities interact and produce surprising results that couldn’t have been predicted by looking at the individual pieces; we call this emergent behavior. We see emergence in ant colonies, flocks of birds, the human brain, cities. All complex systems display emergent behavior, where the whole is different from and greater than the sum of its parts.
Q: And sometimes that emergent behavior produces positive surprises?
A: Yes, sometimes hugely so, such as in the return of endangered animal and plant species to the Korean Peninsula’s demilitarized zone since its creation in 1953. I didn’t find nearly as many examples of positive UICs as negative ones, but I wonder if that’s simply a matter of us humans not noticing when things go right, or there being no story to report when they do.
Q: Are there ways to avoid UICs? Or to cause less harm when we unwittingly set UICs in motion?
A: Well, the first thing we have to do is understand any situation or problem we’re dealing with before jumping in with a solution. It may be an apocryphal story, but Einstein is quoted as having said, “If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions.” In other words, our first goal should not be to solve the problem but to understand it to the best of our ability, and to do that we have to recognize that we and our actions are part of an interconnected system. We are never acting in isolation.
There are helpful mental tools and ways of thinking that broaden our inquiry and make us think more deeply about a problem with a systems perspective. For example, we should think about not only first-order effects, but second-order effects. You’re doing this, but what else could happen? Who around you could be affected? It’s like playing chess: Think not only about the first move, but the subsequent moves.
Environmentalists are familiar with the precautionary principle, which aims to protect human health and the environment. The principle asks us to really think about what could go wrong before acting, and to involve those affected in the decision making. It also puts the onus on whoever is proposing an activity to prove that it won’t have harmful consequences.
We can learn a lot from the past, to avoid making the same mistakes over again. So, in doing your research, don’t forget to look at history because it does repeat itself.
When you think about dealing with human beings and decision making, motivational research can help. A lot of plans go counter to our expectations because we think people are going to be motivated by X but they really want Y. For example, there’s research that shows volunteers become less enthusiastic once you start paying them for their work. That’s because they really did it to help someone and giving them money just cheapened the activity.
In the search for answers, proximity is key: if you’re not close to the problem you’re trying to investigate, it is critical to get close to someone who is. Not surprisingly, some of the most successful ideas for addressing homelessness come from individuals who themselves have experienced homelessness, just as some of the most constructive ideas for addressing counterproductive aspects of our criminal justice system come from previously incarcerated people and their families.
And in your probing, go deeper. Keep asking why something happens or goes wrong until you get to an answer that points to a root cause or multiple contributing causes. Don’t be satisfied by superficial explanations or the obvious answer. And watch out for how your own past life experiences and biases (we all have them) may color your investigations.
Q: Are there tools that can help us understand complex systems?
A: Yes. One of the most useful is system mapping. There are now so many mapping techniques and tools that can help. (I’ve included a few links in my Bibliography.) A further benefit of mapping a complex system is that you may find opportunities for solving other problems or creating other positive results, a phenomenon called “multisolving.”
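To make the idea concrete, here is a minimal sketch of a system map as a signed directed graph, written in Python with the networkx library. It is not one of the tools linked in the Bibliography; the factor names and signs are invented for illustration, loosely following the tree-planting example from earlier. Listing the map’s feedback loops, and everything downstream of a proposed intervention, is a quick way to surface second-order effects before acting.

```python
# A minimal system-map sketch: factors as nodes, signed causal links as
# edges. The factors and signs below are illustrative, loosely based on
# the tree-planting example; a real map would come from research and
# stakeholder input.
import networkx as nx

G = nx.DiGraph()
# Each edge reads "more of A leads to more (+) or less (-) of B."
links = [
    ("tree planting", "forest cover", "+"),
    ("forest cover", "fuel load", "+"),      # monoculture stands add flammable fuel
    ("fuel load", "wildfire risk", "+"),
    ("wildfire risk", "forest cover", "-"),  # fires destroy the new forest
    ("species diversity", "fuel load", "-"),
]
for cause, effect, sign in links:
    G.add_edge(cause, effect, sign=sign)

# Feedback loops are where surprises hide: list every cycle in the map.
for loop in nx.simple_cycles(G):
    print("feedback loop:", " -> ".join(loop))

# Everything downstream of an intervention is a candidate second-order
# effect worth thinking through before acting.
print("downstream of tree planting:", nx.descendants(G, "tree planting"))
```

Even a toy map like this makes the wildfire loop visible at a glance, which is exactly the kind of question the Coulda Shouldas never asked.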
And as I mentioned before, we can harness the power of technology and sophisticated computational tools to assist our limited human brains when dealing with all this complexity. There have been huge advances in the use of techniques like agent-based modeling over the last 20 years.
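For a feel of what agent-based modeling can reveal, the toy model below is a stripped-down version of Thomas Schelling’s classic segregation model, perhaps the best-known agent-based demonstration of an unintended consequence: agents with only a mild preference for similar neighbors still sort themselves into starkly segregated clusters that no individual intended. The grid size, threshold, and round count here are illustrative choices, not taken from any particular study.

```python
# A toy Schelling segregation model in plain Python. Agents of two
# types sit on a grid; an agent moves to a random empty cell whenever
# fewer than 30% of its neighbors share its type. All parameters are
# illustrative.
import random

SIZE, THRESHOLD, ROUNDS = 20, 0.3, 50
random.seed(1)
# 0 = empty cell; 1 and 2 are the two agent types.
grid = [[random.choice([0, 1, 2]) for _ in range(SIZE)] for _ in range(SIZE)]

def like_fraction(r, c):
    """Share of an agent's occupied neighbors that match its type."""
    me, same, occupied = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                neighbor = grid[(r + dr) % SIZE][(c + dc) % SIZE]
                if neighbor:
                    occupied += 1
                    same += neighbor == me
    return same / occupied if occupied else 1.0

def segregation():
    """Average like-neighbor share across all agents."""
    agents = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c]]
    return sum(like_fraction(r, c) for r, c in agents) / len(agents)

print(f"before: average like-neighbor share {segregation():.0%}")
for _ in range(ROUNDS):
    # Each round, every unhappy agent relocates to a random empty cell.
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if not grid[r][c]]
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] and like_fraction(r, c) < THRESHOLD]
    for r, c in movers:
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], 0
        empties.append((r, c))
print(f"after:  average like-neighbor share {segregation():.0%}")
```

Nothing in the rules mentions segregation; it emerges from the interactions. That is the kind of surprise these models let you rehearse cheaply before acting in the real world.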
Q: Okay, so you’ve done your homework, and you have more understanding of the problem and the system it’s embedded in. Does this mean we can now predict when UICs are likely to happen?
A: Well, as you can imagine, it’s not that simple. Not all events or phenomena are equally predictable. A complexity scientist and friend of mine, Eric Bonabeau, gave me a way to think about when and how we could improve our ability to predict UICs by dividing them into four categories representing four levels of predictability.
The first level aligns with the Coulda Shouldas. For these UICs, there’s a relatively direct, linear relationship between cause and effect, or what you do and what you get. These are very predictable if you are paying attention and doing your homework.
The second level involves trying to predict what an individual human being will do. You never can know with complete certainty what someone is going to do. For example, raising a child is, by most people’s definition, a complex problem.
Q: I can vouch for that.
A: You may have all the latest parenting books and lots of motivational studies. You’ve raised 10 other kids, but the 11th one comes out differently. You can never be completely sure how any kid will react to something.
The third level involves trying to predict what groups of human beings will do. If it’s hard to predict what one person will do, you can imagine that trying to predict the actions of groups of individuals, each marching to their own drum, is even harder. There are well-developed disciplines and practices for studying and analyzing group behavior — market research, network theory — which can help but never perfectly predict how people will act, as demonstrated by the racks of unsold clothing left at the end of each retail season.
The fourth level involves trying to predict what will happen when groups of human beings interact with large systems. Now we’re at a level of complexity with so many actors, so many factors and so many potential interactions and feedback loops that it’s very difficult to predict anything. It’s no surprise that we have frequent failures in public policy. This is where we humans need help from technology and maybe artificial intelligence. But that’s another story.
Q: We’ve got eight billion unpredictable humans and the systems we’ve built to sustain us interacting with natural systems and human-made systems that we barely understand. What could possibly go wrong?
Seriously, this is a little discouraging. You can do all that background research and homework and still fail to correctly predict what will happen. So what now?
A: All that background research and homework can get you to a truer, fuller, systemic understanding of the problem. Armed with that deeper and more comprehensive understanding, you can move in a much more informed fashion toward finding a solution.
But remember that finding or developing a solution and trying to implement it is itself an exploratory process, something that grows from on-the-ground experimentation. Critically, you have to learn from what happens. Find ways to get feedback. And don’t ever assume that if you do one thing in one place, you’ll get the same result the next time in a different place. In complex systems, context matters.
You can begin by trying to stop the harms that are causing the problems. The sooner you stop the harm, the less mitigation you’ll have to do, and the lower the price you’ll pay. Stopping an action is generally faster and cheaper than creating a new program to deal with harm after the fact.
In developing solutions, you can look to nature for inspiration. “Biomimicry” is about solutions that copy natural forms and processes. Nature has had millennia to get things right: everything we see in nature today reflects some level of adaptation — experiments in nature that survived because they evolved to meet the conditions needed for their survival. We are learning from nature how to prevent flooding, for example. There’s an architect in Zimbabwe who copies the design of termite mounds to keep buildings cool. We have so much to learn from Indigenous peoples who see themselves as part of nature and view themselves as stewards of the environment, not a dominating force to tame it. Perhaps that’s why in thousands of years they didn’t destroy their environments, something we’ve managed to do in just a few centuries or even decades.
Next, start small. Often, governments and foundations will address a problem by building a big multi-year program. It’s expensive and time-consuming, and if you’re wrong, you won’t know it for a long time. Meanwhile, you’ve expended a lot of resources and possibly hurt people, or not helped them.
There is a movement to do “lighter, quicker, cheaper” interventions in the built environment — like homemade, community-created bike lanes or public plazas. There are some great books on this, including Tactical Urbanism and Dream, Play, Build. The idea is to get the people involved, get ’em together, and try something. If it doesn’t work, maybe you’ve lost a month and a thousand dollars.
Or start in the most economical fashion possible: not by thinking of what new program you can add or new initiative you can undertake, but by thinking about what you can take away. It’s a concept called “subtractive change,” which recognizes that any time you intervene in a complex system with so many interconnected parts, many of which you may be unaware of, you open yourself up to a lot of potential unintended consequences.
Q: At the same time, we have a lot of really big problems that won’t be solved by small-scale solutions. And some big, top-down programs have been very effective — like the vaccination programs of the 20th century, or the creation of the COVID vaccine a couple of years ago. How do you reconcile the need for both small, incremental actions and large top-down solutions?
A: In complexity science, the term “scale” is not used to refer to the idea of growing an intervention or an organization, but the idea that an issue or problem manifests across different levels of an ecosystem. There are appropriate things to do at each level, and the levels need to be coordinated and connected. They can’t be fighting each other. So, there’s a need for both small and large actions, properly coordinated, with the parties involved each doing their part and hopefully what they each do best. For example, a large top-down government effort facilitated markets and provided funding for needed research on the development of a COVID vaccine. But what the top-down effort did not do, or at least not very well, was consult with community-based organizations and provide the resources they needed to prepare their communities for receiving the vaccine. Working at different scales — and communicating between those scales — is key. Because it’s not progress if the large-scale effort ends up producing solutions that communities won’t use.
Q: Good point. There might have been less vaccine refusal in the U.S. if there had been more attention to the community level.
A: Right again. In complex systems, change cannot be engineered from the top down. It comes from the bottom up and evolves and spreads when proper conditions exist. So, the work is not simply to build larger organizations, but to create, enable, or facilitate the conditions for an outcome to take place. Complexity science also says that change is non-linear — small efforts can lead to big changes or disruptions, while large efforts can sometimes have very little impact.
Also, any time you do a big, top-down, long-term plan for five or 10 years, you’re ignoring the whole idea of emergence. Things are going to happen because of the interaction of the pieces that you put in place. The more inflexible your plan is, the worse it is, because you’re not allowing for emergence. And I would say that dooms you to failure from the start.
Q: This is a problem for nonprofits and foundations. Foundations make grants to nonprofits to carry out a plan, and success is evaluated by how well they stick to it — even though change is not linear and all sorts of new problems and opportunities might emerge.
A: Exactly. There’s an approach to evaluation called “outcome harvesting” that addresses this. With this approach, you don’t try to measure against your goals because they may not be the right goals. For example, U.S. education metrics are centered on standardized test scores, but this can replace an enriching curriculum with test-question drills that leave many kids behind.
Instead, with outcome harvesting, you collect evidence of what has been achieved and work backward to determine whether and how the project or intervention contributed to the change. It focuses on all results, good or bad, intended or unintended. A related approach is “causal link monitoring,” which emphasizes bringing in a diversity of perspectives and understanding the importance of context.
It’s also important to approach evaluation with humility. Our fixation on accountability and taking credit has no place in work with complex systems where emergence is an integral aspect. None of the agents can predict — much less control — the disparate elements of the system. There is no simple cause and effect. And all any participant or collaborator can legitimately do is celebrate their contribution to the final outcome.
Q: I’m glad you mentioned humility. Unintended consequences are so pervasive that it just seems that we need to approach any endeavor with a great deal of humility.
A: Yes, one of the fundamental problems — along with power imbalances — is a lack of humility. But another perspective on giving is emerging among a small group of funders who believe that you can achieve greater impact by giving up control.
Q: But it can be disempowering to realize how much is beyond us. Any words of encouragement for people who are taking this in for the first time and feeling daunted by it? What gives you hope that we can approach our big problems with humility and bring about desired change?
A: I know this may sound strange, but I’m actually more positive and optimistic after having done this work than before.
Ten or 15 years ago, when I first began to research the application of complexity thinking to community development, I rarely heard terms like “ecosystem” used outside of environmental science or ecology, and I would have been hard-pressed to identify practitioners working from an ecosystem perspective or produce a list of funders pursuing “systems change.” A quick look at the 50+ page Bibliography accompanying this article will tell you how far the field has come in its awareness of the complex nature of communities and the tangled problems they face. While we’re not there yet — not even close — in knowing how to solve these problems, we have a greater understanding of what we’re facing, and that’s a start.
But we need to go beyond awareness and understanding to taking action and working in ways different from how we’ve worked in the past to try to create change. And there the field faces far greater challenges, not because there aren’t individuals, communities and organizations that have found approaches that we could learn from, but because of the power imbalances and lack of humility we talked about earlier. There is hope, though.
Earlier in our conversation I noted a fundamental characteristic of complex systems — that positive, enduring change comes from the bottom up, through the actions of the many agents on the ground who are part of the system. To me, this realization totally opens up the possibility of meaningful action by the individual. The mini-case studies in the Bibliography provide examples of organizations that understand and support this ground-up phenomenon. There’s ioby (“in our backyards”) and the Center for Peer-Driven Change (PDC), both of whose placemaking and community-building initiatives are not only resident-led but resident-activated and resident-executed. In the case of humanitarian aid or disaster relief, World Central Kitchen has provided millions of life-saving meals through its cultivation of trusted on-the-ground networks of local residents, restaurants, and aid organizations and its ability to rapidly deploy resources to these groups.
But this realization places an obligation on each of us. While none of us can do everything, each of us must do something, and we can.
For more examples of organizations working with an awareness of complexity, see the remaining mini-case studies in the last section of the associated Bibliography, which also includes sections on ways of thinking and techniques that can bring a complexity-informed lens to your own efforts.
—
Laurie Mazur is editor of the Island Press Short-form Program which is supported by The Kresge Foundation and The JPB Foundation.