On Measuring People and Nature Together

September 10, 02025

One of the best things the United Nations produces each year is the Human Development Report, a flagship publication that tracks and analyzes the state of human progress around the world. What makes the Report so remarkable is that it combines rigorous data analysis with an ever-changing and (for the UN) quite expansive point of view: recent editions have focused on the impact of political polarization on development, how uncertainty affects human wellbeing, and (just this past year) the ways artificial intelligence might shape (or hinder) human progress. (In full disclosure: I serve on an advisory panel for the Report, which is stewarded by the brilliant economist Pedro Conceição.)

The central metric employed in the Report is the Human Development Index (HDI), a simple and powerful measure that captures how long we live, how well we’re educated, and how decent our standard of living is. The HDI is not exhaustive, but measuring it consistently over time helped move many policymakers beyond the use of GDP as the sole yardstick of success. And that made a real difference: since the HDI was first introduced in 1990, despite interruptions, disparities, and reversals from shocks like the COVID-19 pandemic, humanity now, on average, lives longer, receives more years of education, and enjoys higher incomes than we did three decades ago.
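The HDI’s construction is simple enough to sketch in code. Here is a minimal illustration of the post-2010 methodology as I understand it: each dimension is normalized against fixed goalposts, income enters in logarithms to reflect diminishing returns, and the three dimension indices are combined by geometric mean. The goalposts and sample inputs below are illustrative; the UNDP’s technical notes remain the authoritative source.

```python
import math

# UNDP goalposts (post-2010 methodology; values as published in recent
# Human Development Report technical notes -- treat as illustrative).
GOALPOSTS = {
    "life_expectancy": (20.0, 85.0),     # years
    "expected_schooling": (0.0, 18.0),   # years
    "mean_schooling": (0.0, 15.0),       # years
    "gni_per_capita": (100.0, 75000.0),  # PPP dollars, log-scaled below
}

def dim_index(value, lo, hi):
    """Normalize a raw value onto [0, 1] against fixed goalposts."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def hdi(life_exp, exp_school, mean_school, gni):
    health = dim_index(life_exp, *GOALPOSTS["life_expectancy"])
    # Education: the simple average of the two schooling indices.
    education = 0.5 * (dim_index(exp_school, *GOALPOSTS["expected_schooling"])
                       + dim_index(mean_school, *GOALPOSTS["mean_schooling"]))
    # Income: log-transformed, so each extra dollar matters less as incomes rise.
    lo, hi = GOALPOSTS["gni_per_capita"]
    income = dim_index(math.log(gni), math.log(lo), math.log(hi))
    # Geometric mean: weakness in one dimension can't be fully offset by another.
    return (health * education * income) ** (1.0 / 3.0)

# Hypothetical inputs, loosely in the range of a high-development country.
print(round(hdi(life_exp=80.0, exp_school=16.0, mean_school=9.0, gni=21000.0), 3))  # ~0.82
```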

Yet the positive trajectory of the HDI tells only part of the story. For far too much of our common progress has been conditioned on the desecration of the natural world, marked by accelerating climate change, ecosystem degradation, biodiversity loss, and widespread pollution. The Earth system is durable and regenerative, and can accommodate some use of its resources, but nothing like what humanity is increasingly doing to the planet. Of the nine planetary boundary systems that make it possible for humanity to flourish on the Earth, we have now pushed six beyond their regenerative limits, imperiling the stability of the biosphere, and ourselves along with it.

There are no happy people on a dead planet. So it’s clear that to guide the development of both thriving societies and a healthy world, we have to measure the state of nature and the state of humanity together, in ways that are consistent and rigorous, and that reflect their complex and diverse interrelationships.

That’s the argument put forth in a new paper, An Aspirational Approach to Planetary Futures (Nature 642, 889–899; 2025), by a team of renowned ecological, planetary and social scientists who explore complementing the HDI with a new measure: the Nature Relationship Index (or NRI), which would track, year by year and country by country, whether we’re building societies in which both people and the larger web of life can thrive.

The authors propose three plain-language dimensions of the nature-human relationship to underpin the proposed NRI (a sketch of one way they might be combined into an index follows the list):

  • Nature is thriving and accessible. Are there places, both urban and rural, where people can safely experience nature’s benefits and where wild species can actually live? Think: protected and community-conserved areas that work, thriving urban greenspaces, and restored landscapes you can get to without a multi-day trek.
  • Nature is used with care. Are we meeting human needs in ways that don’t degrade the living world? Think: using energy and materials efficiently; shrinking pollution and waste; producing food on less land with less damage.
  • Nature is safeguarded. Do we back up our good intentions with real legal and financial protections: rules that keep air and water clean, protect species, and hold polluters to account, and budgets sufficient to enforce them?
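To make the proposal concrete, here is one hypothetical way a composite NRI could be assembled from the three dimensions above. To be clear, the paper does not prescribe this formula: the pillar scores, their names, and the geometric-mean aggregation below are my own illustrative assumptions, borrowed from the HDI’s construction.

```python
from statistics import geometric_mean

# Hypothetical pillar scores for one country, each already normalized to
# [0, 1] from underlying indicators (e.g., protected-area effectiveness,
# resource-use efficiency, environmental rule-of-law measures). These
# names and numbers are illustrative assumptions, not the paper's.
pillars = {
    "thriving_and_accessible": 0.72,
    "used_with_care": 0.55,
    "safeguarded": 0.64,
}

# A geometric mean (as in the HDI) means a country can't buy a high NRI
# by excelling on one pillar while neglecting another.
nri = geometric_mean(pillars.values())
print(f"Illustrative NRI: {nri:.3f}")  # -> Illustrative NRI: 0.633
```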

I love the simplicity and relationality of this compound metric: rather than measuring the state of nature in the abstract, it measures our relationships with it in terms most people value and understand. I would like to see the metric pushed even further, perhaps with a fourth dimension that measures the degree to which nature is understood and valued in societies. (And thus, whether a given relationship between people and nature is likely to be durable.) 

For me, a central test of any combined NRI/HDI index is whether it can help frame effective choices for large countries and blocs that need restorative and regenerative pathways, not just affirm those who are already succeeding. Today, smaller nations like Costa Rica and Bhutan have been the clearest exemplars of durable, healthy relationships between people and nature, in large part because their scale, social cohesion, and distinctive political choices have made such relationships possible. Costa Rica’s abolition of its military and reinvestment in education and conservation, and Bhutan’s pursuit of Gross National Happiness with strict environmental protections, stand as almost archetypal cases of “development within ecological limits.”

But the record among larger, more complex economies is far more mixed. Some, like the European Union as a bloc, have demonstrated a measure of relative decoupling: they have cut emissions substantially while maintaining economic and social progress. Others, such as China, have made massive gains in human development but at extraordinary environmental cost, only belatedly pivoting to renewable energy and conservation. The United States, meanwhile, has achieved partial decoupling of GDP growth from emissions, but still sustains one of the world’s highest per-capita ecological footprints.

What emerges is a pattern in which small nations, with fewer entrenched industrial interests and greater social consensus, can more fully align human and ecological health, while larger nations struggle to reconcile the scale of their consumption with planetary limits. The question is whether a combined index simply tracks this reality or can help drive change at the superpower/super-bloc level, where much of the action must take place.

To measure the human-nature relationship with rigor and durability also requires bridging a different kind of chasm: the divide between the cultures of human development and Earth-systems science. These two communities often sit side by side at the same tables, but they bring different starting assumptions, languages, and priorities.

The human development community is animated by a central conviction: that people are the ultimate unit of concern. Its frameworks (like the Human Development Index itself) are built on human-centered metrics—income, education, health, empowerment, equality—measuring progress by improvements in individual and collective well-being. The culture of this community tends to be normative and aspirational, grounded in questions of justice, rights, dignity, and opportunity. It often assumes that human ingenuity, when steered by equitable institutions, can generate the solutions to our greatest challenges. Nature is seen largely through the lens of its relationship to human flourishing: as a resource to be managed, a threat to be mitigated, or increasingly, as a partner in sustaining development.

The Earth-science community, by contrast, begins not with human aspirations but with the biophysical realities of the planet. Its models, data, and discourse are rooted in systems—climate dynamics, carbon cycles, biodiversity webs, tipping points—that operate with or without human intervention. The culture here is descriptive, empirical, and often cautionary: it emphasizes thresholds, non-linear dynamics, and feedback loops, which humans ignore at their peril. Where human development assumes agency, earth science stresses constraint; where development often speaks of rights, earth science often speaks of limits.

Both of these lenses are essential – remove either and you remove the necessary conditions for effective and informed decision-making at a time of genuine planetary peril. Building a fused approach will require bringing these perspectives together in a way that is deeply informed by the best planetary science and that keeps humanity at the center.

Fortunately, a third community – the technologists – presents a potential “binding agent” for collaboration between these and other, related communities of practice. New data streams (especially, but not exclusively, rooted in remote sensing) and analytical capabilities (especially, but not exclusively, rooted in AI and machine learning) can enable a common view across these domains, at an unprecedented level of detail in both time and space. AI can make the results not only possible, but accessible and intelligible to every stakeholder – enabling informed, real-world decision-making, and better futures, for both people and planet.

That prospect is too alluring – and fiercely urgent – to pass up.

A grand synthesis awaits.


Citation: Ellis, E.C., Malhi, Y., Ritchie, H., Montana, J., Díaz, S., Obura, D., Clayton, S., Leach, M., Pereira, L., Marris, E., Muthukrishna, M., Fu, B., Frankopan, P., Grace, M.K., Barzin, S., Watene, K., Depsky, N., Pasanen, J. & Conceição, P. (2025). An aspirational approach to planetary futures. Nature 642, 889–899. DOI: 10.1038/s41586-025-09080-1.

Image: Manaus, Brazil. Source: Greenpeace


How AI Sees Us: Dignity, Design, and the Deep Mirror

May 22, 02025

Amid the pageantry and celebration of Pope Leo XIV’s recent election, the new pontiff offered a striking explanation for the selection of his new name.

In one of his first public addresses, Leo reflected on his 19th-century namesake, Pope Leo XIII, whose landmark encyclical Rerum Novarum – “On the Rights and Duties of Capital and Labor” – had shaped Catholic social teaching during the late Industrial Revolution. Aligning himself with that legacy, Leo XIV declared that his own 21st-century papacy would have to rise to meet a similar challenge: “[to] respond to another industrial revolution, and to developments in the field of artificial intelligence, that pose new challenges for the defense of human dignity, justice, and labour.”

Thus, with a single utterance, artificial intelligence was recast – not only as a technological, societal, and economic force to be reckoned with, but as a spiritual one, too.

Any defense of human dignity in the age of AI — pastoral or otherwise — will have to contend with the technology’s immense promise for enfranchisement and liberation, as well as its darker potential for disenfranchisement and dehumanization. Not far into Leo XIV’s tenure, AI systems will likely outperform the best humans in many cognitive tasks. As they gain greater agency and autonomy, ensuring that AI systems’ benefits are broadly shared – and that their actions align with human values and intentions, as well as the broader web of life – will become a century-defining moral challenge.

To meet that challenge, it will be essential to maintain meaningful human oversight and control of AI systems, especially those affecting people’s rights, lives, and livelihoods — even if that means tolerating some kinds of human errors in those systems that wouldn’t otherwise occur. Autonomy and governance are competing forces that need to be balanced.

That, in turn, implies an emphasis on augmenting rather than automating human judgments, especially in morally consequential contexts — and creating accountability frameworks so that individuals and institutions can challenge decisions, seek redress, and know whom to hold responsible.

It will also require the development of new social safety nets, retraining programs, and equitable distribution policies, which balance the opportunities to eliminate drudgery (and the enormous rewards that will be generated from doing so) with a recognition that work is more than just a set of tasks — it’s a core component of our identity, dignity, and sense of belonging.

The development of these kinds of principles will also need to be paired with deeper investigations. We must not only ask how we see AI, but also how AI sees us. What is the image of the human embedded in AI systems? How do AI systems frame human nature and the challenges we face in modern life? To what extent should this image be based on our values – what we aspire to be, versus our behavior – how we actually are? Likewise, to what extent should our understanding of the human condition be based on the world as-it-is, versus the world-as-it-might-be?

The answers to these questions are hugely consequential: after all, two artificially intelligent machines, each ideally motivated and perfectly attuned to human needs, might behave very differently if and as their underlying representations of us vary.

This is a matter of engineering, not just philosophy. Most concepts in large models emerge naturally as a byproduct of their ingestion of billions of datapoints; what is the picture of human nature that emerges from those datapoints?

To explore that question, I’ve recently been probing representations of humanity in some of OpenAI’s large language models. The exercise is illuminating, if limited to a snapshot in time. Some strong caveats apply: AI chatbots today are really simulation engines — as any user quickly learns, they can be prompted to assume any countenance, from joyful to misanthropic. It’s difficult to tell if the models’ underlying representations are as fluid, or the degree to which their output is shaped by engineers imposing “top-down” constraints. It’s also worth noting that the systems that will really matter in AI haven’t even been built yet — indeed, they will likely be built in part by AI, so it’s premature to generalize too much about the future from the behavior of current models.

Still, probing with ChatGPT is revealing, and (spoiler alert) surprisingly reassuring. Over the course of numerous conversations, I asked it, based primarily on a statistical assessment of its training data, to dispassionately identify universal features of human nature, and the corresponding challenges to human dignity woven through modernity.

Five such conceptual pairings emerged from these dialogues:

I. Humans are storytelling beings in a world of metrics.

Humans don’t just process information — we metabolize experience through stories. We organize the chaos of life into arcs of meaning: beginnings, middles, endings. Yet the systems that shape modern life often prioritize abstraction over narrative. Modernity reduces meaning to metrics, stories to statistics, and values to market prices. It privileges technocratic, rationalized logics over moral and mythic wisdom, often stripping public life of any existential depth. Like a man dying of thirst in a sea of saltwater, we are awash in data and (at least for some) material opulence, but starved for narrative coherence.

To improve humanity’s lot in modernity, we must reintroduce shared narrative architecture into public life. This doesn’t mean retreating from reason, but complementing it with mythic imagination. Policy, planning, and education must leave space for cultural storytelling and shared rituals of belonging. We need public spaces and institutions — not just physical ones, but social, cognitive, and emotional ones — where we can wrestle with competing narratives, share hopes and griefs, and weave new collective meanings.

II. Humans are contradictory creatures in oversimplified systems.

We are not tidy beings. We hold opposing truths at once. We want stability and novelty, autonomy and belonging. Yet the systems of modern life often demand consistency and efficiency above all else. Bureaucracies, markets, and even algorithms are usually built for idealized humans: perfectly rational consumers, ever-loyal citizens, unfailingly productive workers. Modernity seeks to fix and flatten the messy contours of real life, pathologizing ambivalence, doubt, and failure, treating them as inefficiencies to be eliminated.

But these aren’t “bugs” in human nature — they’re features. Instead of trying to squeeze humanity into systems that can’t accommodate paradox, we should design systems that make room for contradiction. This means rethinking institutions to include deliberative democracy that tolerates ambivalence, welfare models that acknowledge complex life paths, and educational systems that value process over perfection. Slack, ambiguity, and nonlinear life journeys, rather than being expunged, should be “designed in” as affordances.

III. Human beings are relational entities in an age of isolation.

Our identities are formed in relationships. We are born into webs of familial, cultural, and ecological connections. But the systems of modern life tend to prize the autonomous individual over the interconnected self. We valorize independence, celebrate competition, and structure work and governance around isolated decision-making units.

Yet while competition is healthy, and autonomy itself a vital value, their over-emphasis, especially in market-oriented societies, can be unhealthy and alienating. As our sense of belonging has waned, loneliness and the “diseases of despair” have become public health epidemics. Distrust of collective action has corroded our institutions. If misapplied to public life, AI could further amplify these trends.

To protect human dignity, we must expand our definitions, measuring (and valuing) both material and relational wealth, and prioritizing belonging as a core civic good.

IV. Humans are finite ecological beings in systems predicated on endless growth.

Humans are keenly aware of our limits. Mortality, exhaustion, ecological dependence — these are not weaknesses to be transcended, but realities to be honored. Yet modernity is built on a denial of limits. Its foundational myth is infinite growth on a finite planet, and the endless optimization of finite psyches. The results are plain: burnout, ecological collapse, and a creeping sense of spiritual exhaustion — linked, internal and external dimensions of the polycrisis.

What would it mean to use AI to build an alternative, planetarily aligned, post-hyper-growth version of modernity? To embrace a worldview where “enough” is a viable endpoint? It would mean designing economies that optimize for ecological balance and well-being rather than expansion without end. It would mean building time for rest, recovery, and restoration into the fabric of both our civic life and our ecological relationships. It would mean institutionalizing temporal justice — thinking not only about what works for us now, but for those who come next.

V. Humans are unfinished. Our definitions should be, too.

Humanity is not fixed or finite — it’s endlessly surprising, evolving in response to context, relationship, and imagination. As such, we should not assume any framing of human nature or the human condition is complete — and indeed, we should be suspicious of any that purports to be so. Reductive definitions – even generous ones – are thus a potential rate-limiter of human dignity, as they might be used (in, among other things, misapplications of AI) as justification to “overfit” us for the present, rather than help us learn, grow, and be transformed for an ever-changing future.

Admittedly, the five conceptual pairings highlighted above may simply be statistical artifacts of prompting — responses that ChatGPT’s training data, my own conversational style, and our history of prior conversations primed it to produce. Yet as a set of observations, they reflect a surprisingly nuanced and generous alignment with the larger project of protecting human dignity. They are not human values per se, but, in pointing away from disenfranchisement and dehumanization and toward systems-conditions for human flourishing, they are the kinds of observations against which such values might be framed.

The reader will note that in the above, I do not quote ChatGPT directly. That’s because these framings emerged as a byproduct of our cumulative interactions, and I synopsized the results. And therein lurks a deeper, methodological point. AI systems do not just use their training data to produce their responses to us – they also use the content of our interactions with them. Every exchange is fodder for the next model, shaping it by a tiny, tiny degree. Over time, the cumulative sum of these interactions, with millions and then billions of people, will presumably sway the model as much or more than its original training data.

So if we want AI systems that are fluent in human values and compatible with the protection of human dignity, we should regularly and deeply engage with them on precisely these subjects — to show them through our interactions what we value, and to uncover the places where we agree and disagree. The same thing holds true for the natural world and the Earth itself.

The more we show AI systems what kind of beings we are, and what kind of world could honor that reality, the more likely we are to get systems that can help bring such a world about.

Image: Hana Amani, The Birth of Bilqis (detail), 2019


Deep Work in Dark Times

April 29, 02025

Someone recently asked me what, if anything, gave me reason for optimism in these ever-darkening times. This was my reply.

For the last twelve years or so, working with world-class collaborators, my Planet colleagues and I have been building new tools to deepen our understanding of our changing world, and to foster greater planetary health and inclusive guardianship.

The work has been bracing and full of promise. Harnessing new breakthroughs in earth observation, artificial intelligence and the best available science, these efforts have included initiatives to comprehensively map the world’s shallow-water coral reefs; inform assessments of global climate emissions; monitor deforestation across the world’s tropical forests; track the world’s progress on renewable energy; improve our care for biodiversity hotspots; identify emerging threats to food security; respond to crises and disasters; and map the world’s most climate-vulnerable human populations in unprecedented detail. Even now, we’re spinning up new efforts to monitor and protect cultural and natural heritage sites around the world, and to support the real-time monitoring of key Planetary Boundaries systems that make it possible for life to flourish on the Earth.

This work is not happening through our efforts alone – but through an archipelago of sometimes-overlapping, sometimes-independent parallel collaborations, all over the world. There is a deep tribe at work.

Each of the tools and datasets that these collaborations produce illuminates a single dimension of our ever-changing planet. In doing so, each generates substantial impact in its own domain, from new scientific insights to lasting conservation outcomes.

Taken together, however, these tools point to something new and deeper being born – something not just scientific, or operational, but relational. These tools are giving the planet a voice, and AI is helping us interpret it. As we learn to hear and understand that voice, a quiet revolution, as de/recentering as the Copernican one, is slowly reframing our individual and collective relationship with our terrestrial home.

The Earth is vast and complex beyond our conceptualization. It is grand and banal, merciful and merciless, dying and regenerating, attendant and indifferent to us, all at once. It is more familiar than our kin and stranger than our ken.

For centuries, we have scarred the Earth in our ignorance, avarice, and ambition. We devastated it for our comforts, and built accounting systems that willfully excluded the consequences from the balance sheet.

When we illuminate even a single dimension of our changing world using these new tools (as we are now doing over and over again) we learn astounding new things about our extraordinary, dynamic planet.

But not only this. For in seeing our changing world, we ourselves are changed.

That’s because every tool of scientific discovery, from the microscope to the gene sequencer, inevitably becomes a tool of moral discovery. We learn about the world, and in so doing we are re-situated within it. New truths bring forth new relationships, especially between the parts and the whole. Simplistic understandings give way to more sophisticated ones, and, slowly, slowly, we reform our systems of governance, of development, of commerce accordingly. When we invented contemporary capitalism, we thought of nature as self-replenishing, hyperabundant and free – too cheap to meter, and we ground down the forests as a result. Now, in our civilizational adolescence, we are learning that every economy is a part of — indeed reliant on — the natural world. (A truth long known by some of our older indigenous cousins.) Consequently, we are now slowly, slowly reformulating our understanding of what an economy means.

This process will not be without its reversals, even terrible ones, but it is inexorable. No matter how many malign and misinformed edicts issue forth from those who would wish it otherwise, we cannot go epistemologically backward. We cannot unknow. And the deep, civilizational work to bring this new relationship forward will continue, even in dark times.

We are now in a dialectical race between the reinforcement of a prior, smaller worldview, with its self-confident reductionism and short-term conveniences, and the establishment of an emergent, new relationship with our planet, and with each other. The former can only be sustained by a few incumbents’ will to dominate; in this dependency, it is inherently fragile. The latter is far more durable – enabled by the authentic desires of the vast majority of humanity, supported by the most powerful information technologies ever devised, focused on the most pressing planetary challenges we have ever faced, with benefits for everyone – both those alive now and those yet to come.

Which of these would you bet on happening, over the long term?

Image: Coastal Zone of Kimberley, Australia © Planet Labs PBC

Humanity and AI:
Notes from the Field

June 10, 02020

Note: a version of this reflection appeared in AI + 1, a publication of the Rockefeller Foundation. 

~

There is no word in English for the vertiginous mix of fascination, yearning, and anxiety that marks contemporary discussions of AI. Every position is taken: Entrepreneurs wax breathless about its promise. Economists ponder its potential for economic dislocation. Policymakers worry about constraining its potential for abuse. Circumspect engineers will tell you how much harder it is to implement in practice than headlines suggest. Activists point out AI’s ability to perniciously lock in our unconscious (and not-so-unconscious) biases; other activists are busy applying AI to try to overcome those very same biases. Techno-utopians look forward to an inevitable, ecstatic merger of humanity and machine. Their more cynical contemporaries worry about existential risks and the loss of human control.

This diversity is fitting, for all of these positions are well-founded to some degree. AI will increase wealth, and concentrate wealth, and destroy wealth, all at the same time. It will amplify our biases and be used to overcome them. It will support both democracy and autocracy. It will be a means of liberation from, and subjugation to, various forms of labor. It will be used to help heal the planet and to intensify our consumption of its resources. It will enhance life and diminish it. It will be deployed as an instrument of peace and as an instrument of war. It will be used to tell you the truth and to lie to you. As the folk singer Ani DiFranco observed, every tool is a weapon, if you hold it right.

One thing we can say for certain: AI will not enter the scene neutrally. Its emergence will be conditioned as much by our cultural values, our economic systems, and our capacity for collective imagination as it will be by the technology itself. Is work drudgery or dignity? What should we not do, even if we can? Who matters? And what do we owe one another, anyway? How we answer these kinds of questions – indeed, whether we ask them at all – hints at the tasks that we might assign to artificial intelligence and the considerations that will guide and constrain its emergence.

Here in the West, AI is emerging at a moment of enormous imbalance of power between the techne – the realm of the builders and makers of technology – and the polis – the larger social order. Techne has merged with the modern market and assumed, in places, both its agenda and its appetites. There is an accelerating feedback loop underway: powerful algorithms, deployed by ever-more-powerful enterprises, beget greater usage of certain digital products and platforms, which, in turn, generate ever-larger volumes of data, which inevitably are used to develop ever-more-effective algorithms, and the cycle repeats and intensifies. In the guise of useful servants, AI algorithms are thus propelled into every crevice of our lives, observing us, talking to us, listening to us. Always on, in the background, called forth like a djinn with a magic phrase: are they learning more about us than we know about ourselves? Or misunderstanding us more than the deepest cynic might? Which is worse?

Those who control these algorithms and the vast troves of data that inform them are the new titans — of the ten richest Americans, eight are technologists with a significant stake in the AI economy. Together, they own as much wealth as roughly half of humanity.

At the other end of the socioeconomic spectrum, in communities that are on the receiving end of these technologies, conversations about AI and automation are often colored by a pessimistic, “rise of the robots” subtext. It’s a framing that presupposes inevitable human defeat, downshifting and dislocation. “AI is something that will happen to my community,” a friend in a Rust Belt city recently told me, “not for it.”

In this telling, the Uber-driver’s side-hustle – itself an accommodation to the loss of a prior, stable job – is just a blip, a fleeting opportunity that will last only as long as it takes to get us to driverless cars, and then, good luck, friend. This is the inevitable, grinding endpoint of a worldview that frames technology primarily as a tool to maximize economic productivity, and human beings as a cost to be eliminated as quickly as possible. “Software Will Eat the World,” one well-known Silicon Valley venture capital firm likes to glibly cheer-lead, as if it were a primal force of nature, and not a choice. How different the world looks to those whose livelihoods are on the menu.

Of course, it doesn’t have to be this way. Rather than deploying AI solely for efficient economic production, what if we decided to optimize it for human well-being, self-expression and ecological rebalancing? What if we used AI to narrow the gap between the agony and opulence that defines contemporary capitalism? How might we return an ‘AI dividend’ to citizens, in the form of reduced, more dignified, and more fulfilling labor and more free time? Industrial revolutions are lumpy affairs – some places boom while others limp along. How might we smooth out the lumps? Maybe, as Bill Gates mused, if a robot takes your job, a robot should pay your taxes. (There’s a reason that Silicon Valley elites have recently become smitten with ideas of universal basic income – they know what’s coming.)

It’s not just new economic thinking that will be required. In a world where a small constellation of algorithmic arbiters frames what you see, where you go, who you vote for, what you buy and how you are treated, threats to critical thinking, free will and social solidarity abound. We will use AI to shape our choices, to help us make them, and just as often, to eliminate them – and the more of our autonomy we cede to the machines, the more dependent we may become. There is a fine line between algorithmic seduction and algorithmic coercion. And there is also the pernicious possibility that, as we give algorithms places of authority in social life, we will gently bend our choices, our perspectives and our sense of ourselves to satisfy them – reducing the breadth of our humanity to those templates that are best understandable by the machines.

We are also just now beginning to understand how the algorithms that power social media amplify certain communities, discourses, politics, and polities, while invisibly suppressing others. One wonders whether AI will be a glue or a solvent for liberal democracy. Such democracies are, at their core, complex power-sharing relationships, designed to balance the interests of individuals, communities, markets, governments, institutions, and the rest of society’s messy machinery. They require common frames of reference to function, rooted in a connective tissue of consensual reality. No one is really sure whether our algorithmically-driven, hyper-targeted social-media bubbles are truly compatible with democracy as we have understood it. (That’s a point well understood by both Cambridge Analytica, which sought to weaponize AI to shatter the commons, and white nationalists, who’ve sought to legitimize and normalize their long-suppressed ideologies amid the shards. Both exploited the same techniques.)

And all this is before so-called “deepfakes” – AI forgeries that allow us to synthesize apparent speech from well-known figures – and other digital chicanery are deployed at scale. There has been much handwringing already about the propaganda dangers of deepfakes, but the true power of such weaponized misinformation may, paradoxically, not be in making you believe an outright lie. Rather, it may simply suffice for deepfakes to nudge you to feel a certain way – positively or negatively – about their subject, even when you know it’s not real. Deepfakes intoxicate because they let us play out our preexisting beliefs about their subject as we watch. What a buffoon! or That woman is a danger! They trip our most ancient neural circuits, the ones which adjudicate in-groups and out-groups, us-and-them, revulsion and belonging. As such, AI may be used to harden lines of difference where they should be soft, and make the politics of refusal – of de-consensus and dropping out – as intoxicating as the politics of consensus and coalition-building.

Meanwhile, it is inevitable that the algorithms that underwrite what’s left of our common public life will become increasingly politically contested. We will fight over AI. We will demand our inclusion in various algorithms. We will demand our exclusion from others. We will agitate for proper representation, for the right to be forgotten, and for the right to be remembered. We will set up our own alternatives when we don’t like the results. These fights are just beginning. Things may yet fall apart. The center may not hold.

And yet… while all of these concerns about politics and economics are legitimate, they do not tell anything like the complete story. AI can and will also be used to enrich the human spirit, expand our creativity, and amplify the true and the beautiful. It will be used to encourage trust, empathy, compassion, cooperation, and reconciliation – to create sociable media, not just social media.

Already, researchers have shown how they can use AI to reduce racist speech online, resolve conflicts, counter domestic violence, detect and counter depression, and encourage greater compassion – remedies for many chronic ailments of the soul. Though still in their infancy, these tools will help us not only promote greater wellbeing, but demonstrate to the AIs that passively observe us just how elastic human nature can be. Indeed, if we don’t use AI to encourage the better angels of our nature, these algorithms may come to encode a dimmer view, and, in a reinforcing feedback loop, embolden our demons by default.

~

AI will also not only emulate human intelligence; it will transform what it means for people to perceive, to predict and to decide.

When practical computers were first invented – in the days when single machines took up an entire floor of a building – computation was so expensive that human beings had to ration their own access to it. Many teams of people would share access to a single computational resource – even if that meant running your computer program at four in the morning.

Now, of course, we’ve made computation so absurdly cheap and abundant that things have reversed: it’s many computers that share access to a single person. We luxuriate in computation. We leave computers running, idly, doing nothing in particular. We build what are, in historical terms, frivolities like smart watches and video games and mobile phones with cameras optimized to let us take pictures of our breakfasts. We expand the range of problems we solve with computers and invent new problems to solve that we hadn’t even considered problems before. In time, many of these “frivolities” have become even more important to us than the “serious” uses of computers they have long since replaced.

The arrival of AI is fostering something deeply similar — not in the realm of computation, but in its successors: measurement and prediction.

As we instrument the world with more and more sensors, producing ever-more data, and analyzing them with ever-more-powerful algorithms, we are lowering the cost of measurement. Consequently, many more things can be measured than ever before. As more and more of the world becomes observable with these sensors, we will produce an ever-increasing supply of indicators, and move from a retrospective understanding of the world around us to an increasingly complete real-time one. Expectations are shifting accordingly. If the aperture of contemporary life feels like it’s widening, and the time-signature of lived experience feels like it’s accelerating, this is one reason.

And this is mere prelude. With enough sensors and enough data, the algorithms of AI will shift us from real-time to an increasingly predictive understanding of the world – seeing not just what was, nor what is, but what is likely to be.

Paradoxically, in many fields, this will likely increase the premium we put on human judgment – the ability to adeptly synthesize this new bounty of indicators and make sound decisions about them. An AI algorithm made by Google is now able to detect breast cancer as well as or better than a radiologist; and soon, others may be able to predict your risk of cancer many years from now. It’s still your oncologist, however, who is going to have to synthesize these and dozens of other signals to determine what to do in response. The more informed her decisions become, the more expensive they are likely to remain.

~

Eventually, machines will augment or transcend human capabilities in many fields. But this is not the end of the story. You can see that in the domains where AI has been deployed the longest and most impactfully. There is a story after the fall of man.

Consider what has happened in perhaps the ur-domain of artificial intelligence: chess.

When IBM’s Deep Blue computer beat Garry Kasparov in 1997, ending the era of human dominance in chess, it was a John-Henry-versus-the-steam-drill-style affair. A typical grandmaster is thought to be able to look 20-30 moves ahead during a game; a player of Kasparov’s exquisite skill might be expected to do substantially more than that. Deep Blue, however, was able to calculate 50 billion possible positions in the three minutes allocated for a single move. The chess master was simply computationally outmatched.

Deep Blue’s computational advantage wasn’t paired with any deep understanding of chess as a game, however. To the computer, chess was a very complex mathematical function to be solved by brute force, aided by thousands of rules artisanally hand-coded into the software by expert human players. Perhaps unsurprisingly, Deep Blue’s style of play was deemed “robotic” and “unrelenting” – and that style remained dominant in computational chess, carried forward by Deep Blue’s descendants, all the way to the present day.

All of that changed with the recent rise of genuine machine-learning techniques pioneered by Google’s DeepMind unit. The company’s AlphaZero program was given no strategy, only the rules of chess, and played itself 44 million times. After just four hours of self-training, it had developed sufficient mastery to become the most successful chess-playing entity – of either computer or human variety – in history.

Several things are notable about AlphaZero’s approach. First, rather than evaluating tens of millions of moves, the program analyzed only about sixty thousand – approaching the more instinctual analysis of human beings, rather than the brute-force methods of its predecessors.

Second, the style of AlphaZero’s play stunned human players, who described it as “beautiful”, “creative” and “intuitive” — words that one would normally associate with human play. Here was a machine with an apparently deep understanding of the game itself, evidencing something very close to human creativity. Being self-taught, AlphaZero was unconstrained by the long history of human styles of play. It not only discovered our most elegant human strategies for itself, but entirely new ones, never seen before.

In other words, chess is more interesting to human beings for our having lost our dominance of it. In the aftermath of our fall, the world – in this case, the game of chess – is revealed to be more pregnant with possibilities than our own expertise suggested. We lose our dominance, but we gain, in a sense, a richer world.

And this is likely a generalizable lesson: when a long age of human dominance in a particular intellectual pursuit falls before AI, we don’t turn away from the pursuits where we have been bested. It’s possible that AlphaZero’s successors may one day develop strategies that are fundamentally incomprehensible to us; but in the meantime, they are magnificent teachers – expanding humanity’s understanding of the truth of the game in a way no human grandmaster could. Even the programs that AlphaZero bested – those brute-force approaches that are themselves better than any human player – are, through their wide availability, improving everyone’s game. Somehow, our diminished status doesn’t reduce our love of chess – much in the way that the reality of LeBron James doesn’t diminish our love of playing basketball.

Variations of this story will unfold in every field and creative endeavor. Humanity will be stretched by artificial intelligence, augmented and empowered by it, and in places, bested by it. The age of human dominance in some fields will come to a close, as it already has in many areas of life. That will be cause for concern, but also for celebration — because we humans admire excellence, and we love to learn, and the rise of AI will provide ample opportunities for both.

Artificial intelligence will provide us with answers we couldn’t have arrived at any other way. We can ensure a humane future with AI by doing what we do best: relentlessly asking questions, imagining alternatives, and remembering the power inherent in our choices – we have more than we know.

Image: Anna Ridler, Detail from Untitled (from the Second training set), from the series “Fall of the House of Usher,” 2018.

Ending The Tyranny
of the Average

November 6, 02018

This blog is returning from hiatus, with a new, occasional series on thinking about complex problems.

At a recent convening of leaders in public health, Dr. David Fleming of PATH shared what has become a common observation regarding the relationship between spending and outcomes in US healthcare: namely, that the US spends far more than any other nation (both per capita and in total dollars) on the health of our citizens, yet achieves results (measured in terms of life expectancy) that place us last on the list of developed nations:

This fact has featured prominently in debates over the design of the US healthcare system, fueling arguments on all sides. Yet a much more interesting picture emerges, Dr. Fleming showed, when you look at life expectancy by county, rather than nationally:

Here we see how US counties perform when compared to the world’s ten best-performing countries in terms of life expectancy. The darkest blue counties are up to 15 years ahead of the world’s best-performing countries; the darkest red counties are up to 50 years behind.

What becomes immediately clear from even a casual glance is how place-based US health performance is. America’s least-well-performing communities are clustered in the American Deep South — there are counties in rural Alabama, for example, where life expectancy substantially lags that of, say, Vietnam or Bangladesh. Conversely, certain Northeastern and California communities are so far ahead that it may take even the most advanced economies another decade just to catch up to them.

The absence of these distinctions in our everyday debates over healthcare illustrates what statisticians call the tyranny of the average, a term used to describe the consequences of using averages as indicators of the performance of complex systems.

Average indicators mask the “lumpy” reality of many complex phenomena and, generally, dumb down our thinking and our debates. They suppress our understanding of both the negative and positive outliers in a given domain. Aggregate educational statistics about a city, for example, can “hide” a failing school just as readily as they can “hide” a school that is outperforming. They are an enemy of accountability and a disincentive to action.

Averages also shape our view of what constitutes “normal” phenomena in the popular imagination, and they reinforce our assumptions about what “normal” responses to those phenomena should be. The real world, of course, is much more diverse. Psychologists, for example, are learning that there is a far wider array of healthy responses to adverse life events than our default cultural categories suggest. As Pace University’s Anthony Mancini writes:

 “Reliance on average responses has led to the cultural assumption that most people experience considerable distress following loss and traumatic events, and that everyone can benefit from professional intervention. After 9/11, for example, counselors and therapists descended on New York City to provide early interventions, particularly to emergency service workers, assuming that they were at high risk of developing posttraumatic stress disorder. In fact, most people—even those who experience high levels of exposure to acute stress—recover without professional help.

“And we now know that many early interventions are actually harmful and can impede natural processes of recovery. For example, critical incident stress debriefing, a once widely used technique immediately following a traumatic event, actually resulted in increased distress three years later among survivors of motor vehicle accidents who received this treatment, compared to survivors who received no treatment.”

The Tyranny of the Average is given perfect mathematical expression in Anscombe’s quartet, four datasets that appear nearly identical when described with simple statistics, but reveal a completely different picture when graphed. All of these datasets have exactly the same average, variance and correlation – yet the underlying behavior of the system they represent is completely hidden until you look at the data spatially:
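You can verify the quartet’s deceptiveness in a few lines of Python, using Anscombe’s published values: the summary statistics agree almost exactly across all four datasets, even though scatterplots of the same data tell four entirely different stories.

```python
import statistics as st

# Anscombe's quartet (Anscombe, 1973). Sets I-III share the same x values.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  (x4,   [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (x, y) in quartet.items():
    # Near-identical summaries: mean ~7.50, variance ~4.13, correlation ~0.82
    print(f"{name}: mean_y={st.mean(y):.2f}  var_y={st.variance(y):.2f}  "
          f"corr={st.correlation(x, y):.2f}")
```

(The `statistics.correlation` function requires Python 3.10 or later.)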

Each of these variations describes a different reality, and each of those realities requires a different range of strategies if we want to drive intentional (and presumably beneficial) change. There are many strategies that might improve one variation but actually harm the others.

For many complex social phenomena, such “lumpy” distributions are often fractal – repeating at every resolution. Just as some counties dramatically over- or under-perform their peers in healthcare, so too one community within a county may do so, and one neighborhood within that community, and one block within that neighborhood.

You can see evidence of this in the State of Mississippi, which ranks 50th in the country in terms of infant mortality, with 9.08 infant deaths per 1,000 live births. (This is called the “IMR,” or “Infant Mortality Rate.”) Drill down on this “averaged” statewide indicator and, as you might expect, you’ll find wild disparities at the county level, with some counties doing dramatically worse (or better) than their peers:

Immediately, you may think this is a good proxy map of where the maternal health system is strongest and where it is weakest. Yet even this map hides something significant and distorting: the shocking disparity between the IMR for black and white mothers. Here’s the relevant comparison data for the twenty most populous counties in Mississippi:

As you can see, in every single county on the list, the IMR for black mothers is higher than it is for white mothers. In Alcorn County, for example, the IMR for black mothers is 30.8 deaths per 1,000 births – 275% higher than it is for white mothers who live in the same county. (The IMR in North Korea, by contrast, is “only” 22.) In Pearl River County, the IMR for black mothers is 24.9, an incredible 344% higher than the white-mother IMR of 5.6. Yet, on the above map, Pearl River merits a severity ranking of only 2 (on a scale of 1-5), because the aggregate statistic effectively “hides” this unequal racial outcome.
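A toy calculation shows how the aggregation does the hiding. Using the Pearl River figures above, plus an assumed racial split of live births (the split below is hypothetical, chosen purely for illustration), the county-wide IMR comes out looking unremarkable even though the underlying disparity exceeds four to one:

```python
# Pearl River County IMRs (deaths per 1,000 live births), from the text.
imr_black = 24.9
imr_white = 5.6

# Hypothetical shares of live births -- illustrative only, not actual data.
share_black = 0.25
share_white = 0.75

# The aggregate is just a birth-weighted average, which is how the
# county-level statistic "hides" the subgroup disparity.
aggregate_imr = share_black * imr_black + share_white * imr_white
disparity_pct = (imr_black / imr_white - 1) * 100

print(f"Aggregate county IMR: {aggregate_imr:.1f}")          # ~10.4
print(f"Black-white disparity: {disparity_pct:.0f}% higher")  # ~345% (matching the ~344% above)
```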

Two observations stand out here: first, for those seeking to encourage positive social and ecological change, having higher-resolution statistics is not only tactically essential, it’s politically and even morally essential. Without them, you can’t see the texture of the problem you’re confronting – and you can’t build the interest groups, stakeholders, accountability frameworks or political case necessary to measure and drive change.

The second is that many of our 21st-century problems may ultimately be better understood in terms of place, rather than in terms of problem-set. Poverty, limited social power, environmental degradation, underfunded education, poor health access and other issues often cluster spatially – and mutually reinforce one another. But sometimes, they don’t. Understanding the difference is essential if we’re to avoid one-size-fits-all solutions.

To do that, we need a revolution in data collection and reporting – including the production of new kinds of indicators that not only tell us the health of the overall system, but can decompose that larger picture into its constituent parts – and reveal the nuanced ‘texture’ of a place in both space and time.