16. Maybe Big Someday, Definitely Good Now
The world finds itself in peril, on the brink of self-immolating calamity in any of a handful of ways. Bit by bit, geopolitics has been turned into a multipolar tinderbox; the drive for profit at all costs and for ruthless corporate growth is an ever-hungry furnace, fed by the continued impoverishment of billions; the climate warms, and forests burst into flames as the next zoonotic plague looms. And this is to say nothing of the neglected funeral pyres of nuclear nonproliferation, the maldistribution of food and medicine, and any of half a dozen genocides simmering at one or another great power’s behest, to name just a few ongoing failures.
With all these cause areas screaming for resources, what are we to do, wishing to do the most good as efficiently as possible, if we hope to douse the rising flames our global civilization sleeps fitfully among? The prospect of near-term AGI has only stoked our perdition, and threatens to flash into a sudden inferno - but it also holds out the promise of a shining victory at a single stroke, if only we can get the desires and cares of such a wonder pointed just right. And surely we must also consider the uncounted trillions of future lives that would be snuffed out were our global civilization to come to ruin? Thus does longtermist ideology emerge: we must spend down today’s resources on tomorrow, safeguarding the Earth’s future lightcone with floods of speculative investment in neurotechnology, space colonization, and especially AI safety in all its forms, from training up new researchers to paying for popularization and community-building.
Despite being a creature deeply and endorsedly embedded in both culturally longtermist spaces and the AI safety community specifically, I think this attempt at an object-level approach is deeply misguided. Longtermist causes have become overserved. Their flashiness and their promise to save the world and the lightcone all at once - without need for societal upheaval or the long slow work of remediating the wrongs of centuries past - have seduced too many of those who control vast resources. What’s needed instead, even if - especially if! - we care about the safety and worthwhileness of the long-term future, is the boring groundwork: for EA to get back to eating its vegetables. Disease control and healthcare; plentiful food and education; safe, clean water and sanitation; capacity-building and political stability - these classical cause areas benefiting the Third World are what’s called for to safeguard the long-term future. Seek to fund those initiatives which do clear good now and which in that good advance the possibility of vast gains later - this is why I trumpet education and nutrition, and not shrimp welfare.
We might equally well consider the similarly dire need to bring all of our planet’s industrial potential online, as quickly and as cleanly as possible, to kickstart space colonization and post-scarcity. That said, I find more instructive a gears-level model of precisely how the tragically old-fashioned interventions I propose would fairly directly benefit the pursuit of safe AGI. I can tell you, fully informed, that AI safety and alignment researchers still don’t have a paradigm to work in, and concrete goals are thin on the ground; I shudder to contemplate the uphill battle AI governance and policy think-tankers face against the twin powers of ravenous corporate interests and bloodthirsty foreign policy hawks. None of us knows the way forward; anyone who claims to is selling something. The only thing to do is to keep widening the funnel in search of the next star researcher. But where to look, given that any sufficiently clever person is about as good a bet ex ante as any other? “I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops” - thus did Stephen Jay Gould put it. Rather than keep bidding up the price of skilled technical labor in the First World, we should be maximizing the number of slot-machine pulls we get on the literal billions of people growing up even now amid the ignorance, privation, and war of the Third World.
(Why do I say “Third World” and not “developing world” or “Global South”? The term goes back to Alfred Sauvy, writing in the early 1950s; he named the “Third World” with the same archaic French word - tiers - as was used for the Third Estate just before the French Revolution. Neither those who own nor those who preach orthodoxy, but those who labor and curse their fates - and who may one day rise up in rage. We might darkly note that the sort of charity for the Third World I promote raises no banners, wins no geopolitical glory, and merits no grand recognition; all it does is raise up as potential competitors those who were once conquered - and who might well hold grudges. Perhaps this is some modest piece of why so little of it occurs, for all the crocodile tears world leaders might shed.)
To those who shun my proposed approach as likely to take too long to matter, I say: we’re likely all dead or worse anyway if AGI comes upon us before 2030. Society is not remotely ready along numerous axes and will not be ready in time. Great power conflict and its constant attendant threats of global suicide are the order of the day; brutal capitalism reigns unchecked, and those useless to its economic engines, or simply unlucky, lie prostrate; algorithmic manipulation is treated as legitimate, or at any rate too hard to prevent. Is it any wonder that so many find themselves with so little to hope to gain that they espouse e/acc, hoping that whatever replaces us will be fitter, happier, and more productive? Or that some might live with so little to lose that they might as well unleash the Locusts on us happy bastards who ignored them in their plight? Indeed, these prosaic interventions are clearly not enough on their own. A mode of living that refuses to discard or permanently disempower those who cannot contribute legibly to the economy; a political system that safeguards by its design against power-grabs and the propagation of pleasant lies to the masses; a foreign policy that treats humans as people rather than as chattel and game-pieces to fight over - these will be sorely needed as well. As it has been crudely put, the minimum IQ necessary to end the world drops by one point every eighteen months - and even should we long for a universal surveillance state to stave that off, who could possibly dream that such a panopticon would be a world worth living in? That it would never see inescapable misuse by some institutional power, or prove just the single point of societal failure a misaligned AI needed, or simply provoke the very catastrophes such overweening mistrust seeks to prevent?
Perhaps, instead, we should improve society somewhat. Make sure no one in our world cries out for its destruction from a filthy coltan pit, that they all grow up educated and well-fed and safe, just for the hell of it. That might conceivably just work. That would be pretty cool.