Climate crisis and the dangers of tech-obsessed ‘longtermism’
As a philosopher who thinks about climate change, I have been driven in recent years by anguish that our species and our political-economic systems are dangerously short-termist. Indeed, I have a book coming out soon on just this: in Why Climate Breakdown Matters, I set out the pressing need for humanity to become more “long-termist” in its outlook. By this I simply mean that we need to care more about what the world will be like in 1,000 years’ time, when (on our current climate trajectory) most of the world’s ice will have melted. We need true long-term thinking, and we need it fast.
But the reason I felt driven to use scare-quotes above is that the term “long-termist” has in effect been captured by a particular interpretation which, ironically, does not take climate breakdown seriously.
I’m thinking of the situation described by the writer and philosopher Phil Torres in a recent essay, in which he argues compellingly that what has come to be called longtermism is a “dangerous secular credo”.
What is this credo? It’s the notion that what really matters is humanity’s alleged very long-term potential: a future that is supposedly post-human, or that involves colonising the solar system, the galaxy and the universe. Once one starts thinking this way, almost any sacrifice, or indeed crime, seems justified in order to keep our species alive. More precisely: to keep alive that part of our species which is betting everything on big tech, space exploration, cryonics and more.
Longtermists are often big supporters of space travel.
Torres’s essay exposes how justifiable concern with existential risks – risks to our very existence, which humanity has increasingly come to hang over itself – is morphing into a way of perpetuating the very system that created those risks. A big-tech/industrial/academic complex has sprung into existence, sucking up money and attention that could go into thinking about how we could become genuinely long-termist. Instead, that well-paid attention is focused on the idea that the way to prevent ourselves from destroying ourselves is to have much more tech, much more surveillance (supposedly to guard against existential threats to humanity from non-state terrorists) and much more economic growth.
If you think that Torres and I are exaggerating, here is an example. Oxford academic and leading “longtermist” Nick Bostrom has proposed that everyone should permanently wear an Orwellian-sounding “freedom tag”: a device that would monitor everything you do, 24/7, for the rest of your life, to guard against the minuscule possibility that you might be part of a plot to destroy humanity.
This might sound like satire. When I first read Bostrom’s piece, I assumed he was proposing the “freedom tag” idea for rhetorical effect only, or something like that. But no – he means it quite seriously.
And here’s the real trouble: these longtermists, in backing to the hilt the idea of a big-tech, industry-heavy future, appear to be calling for much more of the very things that have brought us to this desperate ecological situation.
Not a fully existential threat?
Proponents of this technotopian conception of longtermism often, extraordinarily, see climate breakdown as only a fairly minor issue, since they believe it is not a fully existential threat to our species. Technological innovation springing from the rich world, they claim, will eventually “solve” climate change. This is why longtermists such as billionaire venture capitalist Peter Thiel and Skype co-founder Jaan Tallinn urge us to worry less than we do about the climate.
By contrast, I want to explicitly hold open the possibility that global eco-catastrophe really is a “white swan” existential threat: unlike “black swan” threats, which are unexpected, white swans are expected. If that is the case, then the wisest way to truly plan for the long term might even be a possibility virtually uncontemplated by longtermists: the deliberate reduction of our techno-power.
Ultimately, I believe we should work towards a relocalised future in which we have democratic control over what technologies get developed. Perhaps this is virtually never contemplated because longtermists are overwhelmingly technophiles from wealthy countries.
The point, then, is to differentiate between the valid concept of longtermism and the dubious conception of it that has become almost hegemonic. We of course need to care more about what the world will be like in the future, after our individual lives are over. In that context, it’s dreadful news that “longtermist” has in effect been appropriated by one particular interpretation.