
Jet-Setters

In the first two hundred days of 2022, Taylor Swift’s private jet made 170 flights, covering an average distance of 133 miles. It emitted 8,293 tonnes of carbon dioxide in the process. By way of comparison, the average annual carbon footprint for a US citizen is 14.2 tonnes. In Europe it is 6.8, in Africa 1.04. Over those 200 days, in other words, Swift’s jet had a carbon footprint equal to the annual footprint of roughly 584 Americans, 1,220 Europeans or 7,974 Africans.
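The comparison is simple division. A minimal sketch of the arithmetic, using only the figures quoted above (the script and its variable names are mine, purely for illustration):

```python
# Back-of-the-envelope check, using only the figures quoted above.
jet_emissions_tonnes = 8_293             # Swift's jet, first 200 days of 2022
per_capita_annual_tonnes = {             # average annual footprint, tonnes of CO2
    "US citizen": 14.2,
    "European": 6.8,
    "African": 1.04,
}

for group, footprint in per_capita_annual_tonnes.items():
    print(f"{group}: ~{jet_emissions_tonnes / footprint:,.0f} annual footprints")

# US citizen: ~584 annual footprints
# European: ~1,220 annual footprints
# African: ~7,974 annual footprints
```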

None of us would consider taking a plane to travel 133 miles. But evidently, we live in a world apart from the likes of Kylie Jenner – sister of Kim Kardashian – who is apparently partial to taking 12-minute flights. One wonders about the mental processes that govern such decisions, or those that led her to post on Instagram a black-and-white photograph of herself and her partner kissing in front of two private jets, captioned: ‘You wanna take mine or yours?’ It’s dispiriting to see that their uncertainty is seemingly no different from that of children deciding which scooter to ride. But the seven-million-plus people who liked the post – evidently dreaming of owning a pair of jets themselves – inspire even more despair.

The dream of everyone having their own private aircraft – every man an Icarus – has been a figment of the Western imagination since before air travel even existed. See, for example, this French illustration from 1890 of a graceful lady with hat and parasol on her flying taxi-carriage: 

Albert Robida, Un quartier embrouillé, illustration for La Vie électrique, Paris 1890.

Just as the carriage, once the preserve of ‘gentlemen’, became available to all classes once it was mechanical and motor-powered, so too the aeroplane would one day become a personal form of travel, whizzing along the boulevards of the sky. An American illustration from 1931 already exhibited the idea of city parking for planes, even suggesting, perhaps in keeping with the ineffable Jenner, that a family may possess a number of them, just as they own multiple cars.

From Harry Golding, The Wonder Book of Aircraft, London 1931.

An unsustainable utopia: imagine a world with a few billion aircraft whirling around the sky. A few billion cars are already unbearable for the planet. But of course, it is the rarity of aircraft that makes them so desirable. There are 23,241 private jets in operation worldwide (as of August 2022), 63% of which are registered in North America. (The number of private aircraft as a whole is much greater; there are still 90,000 Pipers in operation, plus several other brands of private propeller planes).

Orders for new private jets are on the rise, even as calls to reduce CO2 emissions intensify. Beyond the opulent lifestyles of starlets and ephemeral idols, it is major corporations that are leading the charge. An Airbus Corporate Jet study found that 65% of the companies they interviewed now use private jets regularly for business. The pandemic caused this figure to skyrocket. Last year saw the highest jet sales on record. As one commentator noted: ‘According to the business aviation data firm WingX, the number of flights on business aircraft across the globe rose by 10% last year compared to 2021 – 14% higher than pre-pandemic levels in 2019. The report lists more than 5.5 million business aircraft flights in 2022 – more than 50% higher than in 2020’.

While solemn international summits make plans for reducing emissions (along with the use of plastic, noxious chemicals and so on), elites are polluting away as if there were no tomorrow. Meanwhile, the poor fools down below busy themselves with sorting out their recycling. For our rulers, the question of whether it would be better to have an egg today or a chicken tomorrow is entirely rhetorical. Never in human history has a king, emperor, statesman or entrepreneur chosen the chicken: it is always and only the egg today, at the cost of exterminating the entire coop.

As Le Monde reports, the five largest oil companies posted ‘an unprecedented $153.5 billion (€143.1 billion) in net profits for 2022. The oil giants are approaching the total figure of $200 billion in adjusted net profit’ (i.e. excluding provisions and exceptional items), of which ‘$59.1 billion in adjusted earnings (+157%) for ExxonMobil (US); $36.5 billion (+134%) for Chevron (US); $27.7 billion (+116%) for BP (UK), despite a net loss of $2.5 billion linked to the Russian context; and $39.9 billion (+107%) for Shell (UK).’ Even Equinor, the energy company controlled by the famously environmentally conscious Norwegian state, has benefited from the bonanza: it posted ‘an adjusted net profit of $59.9 billion at the end of just the first nine months of 2022’.

The announcement of these record profits (which have not been taxed by any government) comes on the back of last year’s much-hyped COP27 conference in Sharm el-Sheikh, attended by as many as 70 executives from the fossil fuel industry. They will be gathering again for another no doubt portentous summit later this year, presided over by Sultan Ahmed Al Jaber, chief executive officer of the Abu Dhabi National Oil Company. (Naturally, a geopolitical emergency serves as a good excuse to delay the slightest environmental action: war in Ukraine has led even the ecologically minded Germans to reopen their coal mines. Rather than prompting a shift away from natural gas, the war has sparked a frantic search for more of it. The pandemic likewise led to a vertiginous increase in plastic consumption, and if for a few months it helped reduce the emissions from road and air traffic, it dealt a far more serious blow to public transportation, now viewed with suspicion, as a site of infection and contagion.)

It is as if global elites weren’t just mocking the rest of humanity, but the planet itself – poisoning it with one hand while greenwashing with the other. The Italian oil company Eni has as its symbol a six-legged dog, formerly black, now green, thus assuring us of its environmental bona fides. ‘Investment firms have been capturing trillions of dollars from retail investors, pension funds, and others’, Bloomberg writes,

with promises that the stocks and bonds of big companies can yield tidy returns while also helping to save the planet or make life better for its people. The sale of these investments is now the fastest-growing segment of the global financial-services industry, thanks to marketing built on dire warnings about the climate crisis, wide-scale social unrest, and the pandemic.

Wall Street now scores companies on their environmental, social and governance (ESG) credentials, though Bloomberg rightly points out that ESG scores ‘don’t measure a company’s impact on the earth and society’, but rather ‘gauge the opposite: the potential impact of the world on the company and its shareholders’. That is to say, they are not intended to help protect the environment from the companies, but the companies from the environment. ‘McDonald’s Corp., one of the world’s largest beef purchasers, generated more greenhouse gas emissions in 2019 than Portugal or Hungary, because of the company’s supply chain. McDonald’s produced 54 million tons of emissions that year, an increase of about 7% in four years.’ Yet in 2021 McDonald’s saw its ESG score upgraded, thanks to the ‘company’s environmental practices’.

The elites are fond of dangling a grass-coloured future in front of us – deodorized, disinfected and depolluted thanks to biofuels and electric cars. But to produce sufficient biofuel we’d have to cover the earth with soy plantations, definitively deforesting the planet (not to mention the production of fertilisers, pesticides and agricultural machinery). As for the electric car, whilst it pollutes less than its petrol-powered equivalent when driven, producing one generates far more pollution. According to one professor at ETH Zurich’s Institute of Energy Technology, manufacturing an electric car emits as much CO2 as driving 170,000km in a regular car. And this is before the electric car’s motor is even switched on. As one academic study concluded:

the electric cars appear to involve higher life cycle impacts for acidification, human toxicity, particulate matter, photochemical ozone formation and resource depletion. The main reason for this is the notable environmental burdens of the manufacturing phase, mainly due to toxicological impacts strictly connected with the extraction of precious metals as well as the production of chemicals for battery production.

This is without even counting the fact that the electricity used to drive the car will benefit the environment only if it’s produced by clean and renewable sources. At best, the electric car is a mere palliative: the problem is not so much having billions of non-polluting cars, but producing billions of cars in the first place (in addition to the necessary infrastructure).

The elites are fooling the world, but they’re also fooling themselves. They believe they can poison the planet with impunity but save themselves by escaping to recently-acquired estates in New Zealand, far from all the smog and radiation, or else to Mars or some other extra-terrestrial refuge. Infantile dreams, cartoon utopias. One wonders what right they have to proclaim themselves elites in the first place. In the original French, ‘troupe d’élite’ denoted a superior stratum. The term was popularized in postwar sociology by C. Wright Mills’s The Power Elite (1956), essentially as a modern synonym for the classical ‘oligarchy’. After the sixties, it fell out of fashion, until it reappeared in the 1990s.

In The Revolt of the Elites and the Betrayal of Democracy (1995), Christopher Lasch wrote that what characterized the new elites was their hatred of the vulgar masses:

Middle Americans, as they appear to the makers of educated opinion, are hopelessly shabby, unfashionable, and provincial, ill informed about changes in taste or intellectual trends, addicted to trashy novels of romance and adventure, and stupefied by prolonged exposure to television. They are at once absurd and vaguely menacing.

(Note how the fortunes of the term ‘elite’ have gone hand-in-hand with those of ‘populism’, wielded as a pejorative).

Lasch defined the elite in intellectual terms, thereby opening the way for the problematic concept of the ‘cognitive elite’. The champion of the term was Charles Murray, who together with Richard Herrnstein published The Bell Curve: Intelligence and Class Structure in American Life (1994), a book whose essential claim is that black people are more stupid than white people. (In a subsequent conversation with the New York Times, aided by a significant amount of alcohol, Murray summarised his life’s work as ‘social pornography’.) Its introduction claims that ‘modern societies identify the brightest youths with ever increasing efficiency and then guide them into fairly narrow educational and occupational channels’. These channels are increasingly lucrative and influential, leading to the development of a distinct band in the social hierarchy, dubbed the ‘cognitive elite’.

Those who govern today’s world consider themselves part of this enlightened set. The legitimacy of their power is based on their supposed intellectual superiority. This is meritocracy in reverse. Rather than ‘They govern (or dominate) because they are better’, we have ‘They are better because they govern (or dominate)’. Weber had already caught on to this inversion in the early twentieth century:

When a man who is happy compares his position with that of one who is unhappy, he is not content with the fact of his happiness, but desires something more, namely the right to this happiness, the consciousness that he has earned his good fortune, in contrast to the unfortunate one who must equally have earned his misfortune. Our everyday experience proves that there exists just such a need for psychic comfort about the legitimacy or deservedness of one’s happiness, whether this involves political success, superior economic status, bodily health, success in the game of love, or anything else.

Given the social, environmental and geopolitical disasters which we are heading towards at breakneck speed, it is easy to doubt the claims of the elite to cognitive superiority. Yet perhaps it is not so much that they are dim, but rather that they are asleep at the wheel – and accelerating towards the precipice.

P.S. I must confess that before researching this article I did not know of the existence of Taylor Swift and Kylie Jenner: it must be me, rather than the elites, who lives in a world apart.

Translated by Francesco Anselmetti.

Read on: Jacob Emery, ‘Art of the Industrial Trace’, NLR 71.


Curious Stranger

July 1957. A 26-year-old Romila Thapar waits at Prague Airport. She is dressed in a sari. The pockets of her overcoat are bulging with yet more saris. ‘It is blasphemous’, she laments in her diary, to have crumpled ‘the garment of the exotic, the indolent, the unobvious, the newly awakening East’. But there is no more room in her suitcases. They are stuffed with photographic equipment (‘cameras, cameras, more cameras’) and saddled with ‘large bundles of books and papers, strapped together with bits of string’. Thapar – today the pre-eminent historian of ancient India – is on her way to China along with the Sri Lankan art historian Anil de Silva and the French photographer Dominique Darbois. Earlier in the year, the Chinese Society for Cultural Relations with Foreign Countries had accepted de Silva’s proposal to study two ancient Buddhist cave sites in the northwestern Gansu province, Maijishan and Dunhuang. After some hesitation, Thapar, then a graduate student at SOAS in London, agreed to join de Silva as her assistant. She had been nervous about her limited expertise in Chinese Buddhist art, as well as the practical difficulties posed by the cave sites. And not without good reason. Just imagine crawling about in those rock-cut caverns ‘enveloped in billowing yards of silk’.

But China is still far away. The three women are waiting for their delayed connection to Moscow; the latest, much-publicized Soviet plane has got stuck in the mud. Loitering in the terminal, Thapar observes the entourage of the Indian actors Prithviraj Kapoor and his son Raj, a newly anointed superstar in the Socialist Bloc. As heavy rain pours outside, some members of the group discuss the film Storm over Asia (‘Would they think it rude if I gently pointed out to them that the film was not by Sergei Eisenstein, but by Vsevolod Pudovkin, and that the two techniques are so different that one can’t confuse them’). Elsewhere, a French family tunes into Radio Luxembourg; a young African man listens to the BBC on his radio; the terminal loudspeakers play the Voice of America (‘poor miserable propagandists’). Late into the night, Thapar leisurely smokes her black Sobranie. She thinks of herself as ‘an overburdened mule wrapped in folds of cloth’.

This journey followed a new, but already well-worn, diplomatic trail. In 1950, India had become the first non-socialist country to recognize the People’s Republic of China. Two years later, a motley crew of Indian economists, writers and artists embarked on a self-styled Goodwill Mission. Their visit inaugurated a wave of political and cultural exchange that lasted for nearly a decade. In 1954, Nehru and Zhou Enlai signed the Panchsheel Agreement (‘five principles of peaceful co-existence’) in Beijing. Friendship societies bloomed on both sides of the McMahon Line. And Indian trade unionists, state planners and litterateurs became eager pilgrims to Mao’s fabled cooperative farms. This growing decolonial intimacy was memorably captured in the breathless opening sentence of a 1956 dispatch, ‘Huai aur Cheen’ (‘Huai and China’), by the cultural critic Bhagwatsharan Upadhyay: ‘Abhi mazdoor-jagat Cheen se lauta hoon’ (‘I have just returned from the workers’ world of China’). The tone of the original Hindi conjures a neighbourhood gossip returning with the latest news from a corner teashop.

Thapar’s diary, recently published as Gazing Eastwards: Of Buddhist Monks and Revolutionaries in China, is a relic of this fraternal decade. But she was neither an emissary of the Indian state nor a member of any friendship societies. Unlike those of her fellow countrymen, Thapar’s travels were not fettered by the demands of cross-border diplomacy. Traversing the Chinese hinterland on trains and trucks by day and recording her experiences by night (often in the flickering light of a single candle), she travelled and wrote with greater freedom. The resulting travelogue is not only steadfastly historical, but also unexpectedly entertaining, a quality sorely missing from the reverential accounts of her compatriots. For instance, when the historian Mohammad Habib chanced upon a group of elderly war veterans during the Goodwill Mission, he sanctimoniously declared: ‘We are your sons from distant India’. Spreading her arms, a woman promptly responded: ‘If you are my sons, then let me press you to my heart’. When Thapar encounters a member of the youth team working on the Beijing-Lanzhou railway line, she cheerfully asks the young man if he stuck pictures of pin-up girls on the wall by his dormitory bed (he did).

Thapar’s political commentary is equally revelatory. Unlike other visitors who eulogized the popular emblems of Chinese development – factories, farms, oil refineries, dams – she highlights the uncanny persistence of ancient China in the Maoist era. As the workers laid the foundations for new construction sites, the remains of prehistoric societies were turning up with unprecedented frequency. After just a few years, hundreds of accidental archaeological digs spread out across the country. During stopovers at a neolithic excavation site near Xi’an and a Ming-era Buddhist monastery in south Lanzhou, Thapar learned that groups of archaeologists and younger students were being attached to construction sites, where they mended, labelled and catalogued the discovered artefacts on the spot. The quantity of newfound prehistoric greyware was in fact so large that the country was facing a severe shortage of buildings to house it. During conversations with Thapar, the officials explained this popular enthusiasm by repeatedly quoting Mao’s directives to archaeologists – ‘discover the richness of China’s past’ and ‘correct historical mistakes’.

Faced with Thapar’s inquiries about the pitfalls of ‘salvage archaeology’, the provincial archaeologists and museum officials regurgitated statistics; politics was never mentioned, while the name of the Marxist archaeologist Gordon Childe drew blank faces. Her requests to meet the historians of ancient China, university students and young intellectuals were meanwhile brusquely ignored. This puzzled Thapar no end, not least because their counterparts in India were working through similar problems. Back in Bombay, she had recently come into contact with the left-wing polymath D.D. Kosambi, whose work contained a blend of Marxist theory, numismatics, archaeology, linguistics, genetics and ethnographic fieldwork. In An Introduction to the Study of Indian History (1956), Kosambi lyrically described India as ‘a country of long survivals’, where ‘people of the atomic age rub elbows with those of the chalcolithic’. China, Thapar slowly realized, was no different.

Recording her group’s trek towards Maijishan and Dunhuang, Thapar’s travelogue gracefully blends the world-historical with the everyday. Multiple timelines gather a heterogenous throng of characters onto the stage. At a monastery in Xi’an, we hear of the legendary seventh-century monk Xuan Zang lugging cartloads of Buddhist manuscripts, sculptures and relics collected during his sixteen-year sojourn across northern India. Back in the twentieth century, in nearby Lanzhou, Czech-made Škoda buses ferry Chinese workers to a power station. As Thapar proceeds across the hinterland, extended spells of isolation are broken only occasionally, as when a radio set catches the BBC News (‘the Russians had developed an intercontinental rocket that had alarmed the Western world’). On most days, Thapar’s battered copy of Ulysses serves as a marker of the passage of time (‘Ulysses is stuck at page 207 and at this rate will probably see me all through China’). Fittingly, this drama reaches its climax at the ancient cave sites, nestled at the Chinese end of the Silk Routes which had once linked the region with Central Asia, India, and the eastern Mediterranean.

Thapar and her companions were the first group of foreign researchers to access the Maijishan site. Carved into sheer cliff faces, the caves contained hundreds of Buddhist murals and sculptures created over the course of a millennium. They were ‘like museums of Chinese paintings’: offering something like a historical timelapse of how the earliest Gandhara-era depictions of Buddha’s life were gradually adapted to the Chinese landscape. Every evening, the group descended from ‘heaven’ on rickety wooden ladders, sometimes nearly a hundred meters long. Back in the candlelit monastery, as wolves and bears roved outside, their experiences were equally startling: we hear of holidaying Chinese soldiers singing Cossack folk songs picked up from the touring Russian Red Army choir; a head monk toasting the end of the hydrogen bomb; a guard playing scratched folk records, featuring a Chinese cover of ‘Aawara Hoon’, the title song of Raj Kapoor’s latest hit. Meanwhile, at Dunhuang, the group discover that the Western explorers of the early twentieth century had vandalized and stolen numerous murals, paintings and manuscripts from the ‘Caves of a Thousand Buddhas’. In 1920, the White Russians fleeing the Bolsheviks had found refuge in these same caves, and had spent their days gouging out gold from the artworks.

The thread of the present ties these proliferating timelines together. In 1957 the Chinese revolution started to unravel. Shortly before Thapar’s arrival, Mao had effectively ended the Hundred Flowers Campaign. His key distinction between ‘fragrant flowers’ and ‘poisonous weeds’ had instead impelled a brutal ‘anti-Rightist campaign’. Meanwhile, despite stiff resistance, the CCP was still pushing its ill-fated campaign for rural collectivization. Arriving in Beijing, Thapar fleetingly notes the ubiquitous ‘bright, bold cartoons and statements’, portraying the so-called ‘Rightists’ as venomous snakes. In the following weeks, her solidarity with the Maoists was severely tested by ongoing clampdowns on intellectual freedom (she was greatly disturbed by the case of the feminist novelist Ding Ling, who had been denounced and exiled). Despite warm encounters with the locals, she greeted village cooperatives with a mixture of guarded suspicion (‘Were we expected to believe that before 1951 production was low, in 1954 it rose by half and by 1956 it had doubled?’) and open cynicism (‘I asked somewhat diffidently if they had tried any experiments along the lines of Lysenko in Russia’). On returning to Beijing, she was told that Professor Xiang Da, an authority on Dunhuang, was too busy to meet her, only to discover from a newspaper report that he had already been charged as a Rightist the previous month. Soon China would be utterly transformed by the Great Leap Forward and the Sino-Soviet split. The 1962 Sino-Indian War over their borderlands would close the curtain on a short-lived decolonial friendship.

In the six decades between Thapar’s journey and the diary’s publication, her scholarly studies have spanned the history of state formation in early India, the politics of the Aryan question, the conflicts between the Brahmanas and the Shramanas (the Ajivika, the Buddhist and Jaina lineages), the Itihasa-Purana traditions, and the Indian epics, among others. Along with Irfan Habib, R.S. Sharma and Bipan Chandra, Thapar is widely credited with inaugurating a paradigm shift in the study of Indian history – a radical break with British colonial periodization and research methods. Her honours include both the Kluge Prize for lifetime achievement in the humanities and social sciences, and the Padma Bhushan, the third highest civilian award in India (she has declined it twice). In the context of such an illustrious career, the diary is likely to be read as a relic of youthful indulgence. And yet, as Thapar has often argued, past events always accrue new, unexpected meanings in the present. It is hardly surprising, then, that the diary has significant affinities with her later work.

In the widely acclaimed Somnath (2004), Thapar describes how a single event – the destruction of a Hindu temple by Mahmud of Ghazni, a Turkic king, in 1025 – has been narrated across Turko-Persian and Arabic chronicles, Sanskrit temple inscriptions, biographies and courtly epics, popular oral traditions, British House of Commons proceedings and nationalist histories. Patiently decoding these dissonant voices, Thapar disproves the myth of Hindus and Muslims as eternally warring civilizations, established by British colonizers and popularized by their modern-day heirs, the Hindu nationalists. In doing so, Thapar reflexively shows that history is a process of ‘constant re-examination and reassessment of how we interpret the past’. Her pursuit, however, has never devolved into a postmodernist free-for-all. This is not just because of Thapar’s lifelong engagement with sociological theories, economic histories, archaeological methods and Marxist debates, but also because her scholarship has always been grounded in the public life of postcolonial India. Thapar has written school textbooks, given public lectures on All India Radio, and published extensive writings on the relationship between secularism, history and democracy in popular periodicals.

In recent decades, Thapar’s work has been systematically discredited by a Hindu right-wing smear campaign (popular slurs include ‘academic terrorist’ and ‘anti-national’). She has responded with characteristic aplomb, poking more historical holes in the fantasies of a ‘syndicated Hinduism’. Shortly before turning 90, she published Voices of Dissent (2020). Written during the upsurge of nationwide protests against the new citizenship laws (CAA and NRC), the book traces a genealogy of dissent in India – spanning the Vedic age of the second millennium BC, the emergence of the Sramanas, the medieval popularity of the Bhakti sants and Sufi pirs, and the Gandhian satyagraha of the twentieth century – that offers a vital corrective to the popular right-wing tendency to label ‘dissent’ as an ‘anti-national’ import from the West. Yet with the BJP pushing for the privatization of higher education, its affiliates infiltrating university administrations and its stormtroopers terrorizing college campuses, the struggle for decolonizing Indian history is no longer merely a matter of critique. There now exists a nationwide network of 57,000 shakhas operated by the RSS (the parent organization of the BJP), where the rank-and-file receive both ideological and weapons training, while the BJP’s IT Cell has infiltrated the social media feeds of millions of Hindu middle-class homes, promoting its historical propaganda.

These changes have not only upended the paradigm shift in Indian history of which Thapar was a leading figure but have also illuminated its political limits. Historically anchored in the Nehruvian-era universities, the decolonial turn has struggled to significantly transform popular consciousness beyond the bourgeois public sphere. The Hindutva offensive has put liberal and left intellectuals in a difficult double bind. This contradiction was first captured by Aijaz Ahmad, shortly after the demolition of the Babri Masjid in 1992, now widely recognized as the emblem of the ‘Hindu nation’. The Indian left, Ahmad had argued, cannot abandon ‘the terrain of nationalism’, but nor can it just occupy this terrain ‘empty-handed’, that is, ‘without a political project for re-making the nation’. In Ahmad’s words, to counter Hindutva with secularism is certainly ‘necessary’, but it remains ‘insufficient’. Likewise, countering the syndicated, market-friendly Hinduism by recovering a subversive genealogy of the Indian past is necessary, but by itself it too remains insufficient.

Thapar’s studies of ancient India naturally offer no ready-made cures for these modern maladies. One incident from Gazing Eastwards, though, reads like an allegory for future action. As Thapar declared in a lecture for All India Radio in 1972, ‘the image of the past is the historian’s contribution to the future’. In Lanzhou, Thapar and de Silva’s clothes drew considerable attention from the Chinese public. Trailed by curious strangers, they found it difficult to walk the streets. To blend in, they ditched their saris in favour of peasant jackets in the customary blue, made famous by Maoists at the time. As the universities continue to crumble, perhaps historians of the new generation should also discard their clothes of distinction, and blend as organizers, pedagogues and foot soldiers into the agrarian and citizen struggles erupting against the BJP-led right.

Read on: Pranab Bardhan, ‘The “New” India’, NLR 136.


The Death Gap

There’s no injustice more frightening – more definitive, more irredeemable – than inequality of life expectancy: a form of discrimination whereby years, sometimes decades, are stolen from the majority and given to a select few, based solely on their wealth and social class.

Indeed, the most important form of ‘social distance’ imposed by the pandemic was not spatial, not a matter of meters. It was the temporal distance between rich and poor, between those who could escape the worst effects of the virus and those whose lives were abbreviated by it. Modernity established a biopolitical chasm – a social distancing of death – that was widened and accentuated by the Covid-19 crisis. This was demonstrated by a litany of studies across various countries. For instance:

In this retrospective analysis of 1,988,606 deaths in California during 2015 to 2021, life expectancy declined from 81.40 years in 2019 to 79.20 years in 2020 and 78.37 years in 2021. Life expectancy differences between the census tracts in the highest and lowest income percentiles increased from 11.52 years in 2019 to 14.67 years in 2020 and 15.51 years in 2021.

Many political and scientific discussions are rooted in calculations of life expectancy at birth. But though this criterion holds for modern Western societies, where infant mortality is almost irrelevant, it is misleading when applied to other geographical regions or historical periods. If the average lifetime lasts 70 years, to compensate for every infant death another seven people must live to 80. This is why life expectancy is often calculated at age 40 or 50: a historically more reliable indicator in its exclusion of infant mortality, as well as war deaths, car accidents (more frequent amongst young people) and maternal fatalities in childbirth.
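A minimal sketch of that arithmetic, with a purely hypothetical cohort (the numbers below are illustrative, not data from any study):

```python
# Illustrative only: a toy cohort showing how a single infant death drags
# down life expectancy at birth.
cohort = [0] + [80] * 7                  # one infant death, seven people living to 80

print(sum(cohort) / len(cohort))         # 70.0 -- the 'average lifetime of 70 years'

# Calculated at age 40, the infant death drops out of the average entirely:
survivors = [age for age in cohort if age >= 40]
print(sum(survivors) / len(survivors))   # 80.0
```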

Here is life expectancy at 40 against household income in the United States, as outlined in a study reported by The Harvard Gazette in 2016:

As you can see, the gap between the richest and poorest 1% is just over 10 years for women and 15 years for men: ‘roughly equivalent to the life expectancy difference between the United States and Sudan. For women, the 10-year difference between richest and poorest is equivalent to the health effects from a lifetime of smoking’.

Another notable phenomenon, to which we’ll return later, is the fact that the graph never flattens, regardless of one’s income level:

While researchers have long known that life expectancy increases with income, Cutler and others were surprised to find that trend never plateaued: “There’s no income [above] which higher income is not associated with greater longevity, and there’s no income below which less income is not associated with lower survival”, he said. “It was already known that life expectancy increased with income, so we’re not the first to show that, but…everyone thought you had to hit a plateau at some point, or that it would plateau at the bottom, but that’s not the case.”

The difference between the lifespans of different classes wasn’t always so vast. It has increased progressively in recent centuries, such that it has now become a constant of modern civilization. The gulf is plainly visible in the below graph, which shows life expectancy at 65 for male workers, divided into higher and lower earners:

We can see how, in 1912, poorer workers could expect to live to just under 80, while their wealthier counterparts could expect to live to just over that. By 1941, the margin had widened: the former could expect to live around a year longer than in 1912, while the latter had gained a full six years. (Expected age at death rises with the age at which it is calculated: at 30 it is higher than at birth, at 50 higher than at 30, and at 65 higher still, because at every step you discard all the deaths that occurred before that age and dragged down the original average. This is why, in 1912, the life expectancy of the poorer half of 65-year-olds almost reached 80, whereas life expectancy at birth was only 55.)
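The point made in the parenthesis can be put concretely with a toy example. The ages at death below are hypothetical, chosen only to show how conditioning on survival to a given age raises the expected age at death:

```python
# Hypothetical ages at death, chosen only to illustrate the conditioning effect.
ages_at_death = [1, 15, 45, 62, 70, 78, 82, 85, 88, 90]

def expected_age_at_death(ages, survived_to=0):
    """Mean age at death among those who reach the age `survived_to`."""
    survivors = [a for a in ages if a >= survived_to]
    return sum(survivors) / len(survivors)

for threshold in (0, 30, 50, 65):
    print(threshold, round(expected_age_at_death(ages_at_death, threshold), 1))

# 0 61.6
# 30 75.0
# 50 79.3
# 65 82.2  -- the expected age at death rises at every step
```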

The picture is even more stark if you divide society not into two, but into five different income classes. These graphs, taken from a Congressional study in 2006, show the average life expectancy growing massively for the richest quintile (20% of the population) and rising meagrely for the poorest:

A closer look gives us an astonishing picture. For males in the lowest income quintile, those born in 1930 could expect to live another 26.6 years at age 50, while those born in 1960, after World War II, could expect to live another 26.1 years: counterintuitively, half a year less. The phenomenon was even more pronounced for the poorest women: those born in 1930 had, at the age of 50, an average of 32.3 years ahead of them, while those of the next generation had 28.3 – four years less. While life in general was getting longer, for the poorest women it was getting shorter, and by quite a lot.

The music changes for the highest income quintile: men born in 1960 can expect to live another 38.8 years (i.e. to reach nearly 89), a full 7.1 years longer than their predecessors born in 1930, who had a life expectancy of 31.7 years. The same trend holds for rich women: those born in 1960 can expect to live another 41.9 years (i.e. to almost 92), 5.7 years more than rich women born thirty years earlier, whose life expectancy was 36.2 years. Between the two generations, then, life expectancy shortened for poor women while it lengthened for rich women.

In the thirty years between 1930 and 1960, the longevity gap between income groups had thus widened frighteningly. Whereas among men born in 1930 the richest lived 5.1 years longer than their poorest peers, for the generation born in 1960 the gap had widened to an astonishing 12.7 years. The gap among women was even more pronounced: whereas for the 1930 generation the richest could hope to live 3.9 years longer than their poorer peers, for the 1960 generation the gap had widened to 13.6 years.
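The widening can be checked directly against the quintile figures quoted above (a simple re-computation of the gaps from the numbers in the text, not from the underlying Congressional data):

```python
# Remaining years of life at age 50, by birth cohort, from the figures quoted above.
remaining_years = {
    ("men", "poorest quintile"):   {1930: 26.6, 1960: 26.1},
    ("men", "richest quintile"):   {1930: 31.7, 1960: 38.8},
    ("women", "poorest quintile"): {1930: 32.3, 1960: 28.3},
    ("women", "richest quintile"): {1930: 36.2, 1960: 41.9},
}

for sex in ("men", "women"):
    for cohort in (1930, 1960):
        gap = (remaining_years[(sex, "richest quintile")][cohort]
               - remaining_years[(sex, "poorest quintile")][cohort])
        print(f"{sex}, born {cohort}: rich-poor gap = {gap:.1f} years")

# men, born 1930: rich-poor gap = 5.1 years
# men, born 1960: rich-poor gap = 12.7 years
# women, born 1930: rich-poor gap = 3.9 years
# women, born 1960: rich-poor gap = 13.6 years
```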

Since we lack segmented data on household income with which to extend this analysis further back in time, we must make do with a few scattered clues. If we take the dynasties of Italian nobles during the Renaissance (the Estes, Gonzagas, Medicis), we find that princes were generally outlived by their artists, chancellors and courtiers. This is understandable. Without truly effective medical sciences and developed systems of hygiene (such as sewers and running water), there was no reason for the rich to live longer than the poor – and there is a strong indication that their habits (overeating, alcohol consumption) made them more fragile.

The first great fractures occurred precisely with the introduction of sewage systems and running water, which sanitized the homes of the rich, where they were first installed. Child mortality eased first amongst the more comfortable classes. Dietetics taught the wealthy to better nourish themselves and do more exercise (hence the diffusion of sport: physical exertion whose end was neither profit nor sustenance). And then, naturally, the gap widened even further with the medical advances of the twentieth century. Modern medicine – especially when privatized and dependent on discriminatory insurance regimes – became an accelerator of inequality.

We are now living the world described by Rousseau, where inequality is created and then sharpened by civilization:

the origin of society and law, which bound new fetters on the poor, and gave new powers to the rich; which irretrievably destroyed natural liberty, eternally fixed the law of property and inequality, converted clever usurpation into unalterable right, and, for the advantage of a few ambitious individuals, subjected all mankind to perpetual labour, slavery and wretchedness.

The arts and sciences – ‘progress’, in other words – do nothing but exacerbate inequality and the struggle for property. Immiseration for the poor, fortification for the rich. How could this fail to lengthen the life of the powerful and shorten (relatively speaking) that of their subjects?

Of course, if inequalities in life continue to multiply year-on-year, one would expect the same of inequalities in death. The aforementioned researchers at Harvard were shocked by the fact that in the US, the life expectancy/income gap seemed to plateau neither at the top nor the bottom of the scale. In France, however, the curve flattens, as shown by this graph:

There, as in the USA, data for life expectancy at birth presents a marked gap between classes: a difference of almost 13 years for men and over 8 for women. But unlike in the US, the curve flattens rapidly, almost plateauing above the threshold of €2,500 per month in net income (after taxes and social security). Gross income is usually around double this figure, so it is at roughly €60,000 per year in gross income that we see this change, with the line becoming almost horizontal above a monthly net income of €3,500.

The only possible explanation seems to lie in the fact that the French public health system is easier to navigate the higher one’s level of education (with all the income and lifestyle differentials that implies):

Here, too, the curve flattens visibly above the €2,000 mark (we can assume that few of those who earn a yearly income of €60,000 don’t possess at least a secondary school diploma). This is despite the fact that there is an increasing gap between those with an undergraduate degree and those without a diploma (a difference of a little under three years for the same income group of under €1,000 per month, and nearly four and a half at net income of €3,500). In short, studying earns you almost three years of life. Perhaps if children were told this they would strive for better grades.

Until now we’ve discussed life in quantitative rather than qualitative terms. But what kind of life are we talking about? In the UK, researchers have developed separate metrics for life expectancy (lifespan) and the expected length of a healthy life (healthspan). Here are their findings:

‘Healthy life expectancy’, they conclude,

has also increased over time, but not as much as life expectancy, so more years are spent in poor health. Although a male in England could expect to live 79.4 years in 2018-20, his average healthy life expectancy was only 63.1 years – ie, he would have spent 16.3 of those years (20%) in ‘not good’ health. In 2018–20 a female in England could expect to live 83.1 years, of which 19.3 years (23 per cent) would have been spent in ‘not good’ health. And although females live an average of 3.7 years longer than males, most of that time (3 years) is spent in poor health.

Not only do the poor live shorter lives than the rich (around 74 years versus 84 for men; 79 and 86 for women). Of this shorter existence, a larger part is lived in weakness and infirmity (for men, 26.6 years compared to 14; for women, 26.4 years compared to 15.8). The result is that the poor enjoy roughly twenty fewer healthy years.

In an effort to extend the length of life, then, we’ve prolonged the length of death. The masters of the earth – those whose fortunes exceed the GDP of several nation states – have clearly realized this. Mark O’Connell’s To Be a Machine (2017) documents the frantic, infantile fantasies of these Lords of the Cosmos, who strive to achieve immortality by financing both cryopreservation ventures such as the Alcor Life Extension Foundation, ‘where clients sign up to be frozen on dying in the hope not just of resuscitation but rejuvenation’, and research into technology that would allow one to download one’s brain onto a hard disk or a cloud, so as to reincarnate, perhaps even as a computer, with all one’s memory intact.

In the absence of such technological breakthroughs, though, the masters of the universe have now dedicated considerable resources to realizing the more mundane aim of extending their lives by a few years, or perhaps a few decades. Since 2013, Jeff Bezos, Larry Page & co. have been investing in businesses developing anti-aging pharmaceuticals:

With just two short sentences posted on his personal blog in September 2013, Google co-founder Larry Page unveiled Calico, a ‘health and wellbeing company’ focused on tackling ageing. Almost a year earlier he had persuaded Arthur Levinson, the driving force behind the biotech giant Genentech and chairman of Apple, to oversee the new business and lined up $1.5bn in funding pledges – half from Google, the balance from AbbVie, the pharmaceutical company.

In 2022 the venture capital firm Arc Venture Partner, Jeff Bezos and another billionaire, Yuri Milner, invested $3 billion in Altos Labs, whose self-declared mission is to ‘restore cell health and resilience through cellular rejuvenation programming to reverse disease, injury, and the disabilities that can occur throughout life’. The billionaires of Silicon Valley believe their money can enable them not only to live longer, but to live well, while preserving the prospect of immortality for their offspring.

Once this is achieved, they will finally have a rejoinder to Max Weber’s famous remark in The Protestant Ethic and the Spirit of Capitalism (1905). To the pre-capitalist subject, he writes,

that anyone should be able to make it the sole purpose of his life-work, to sink into the grave weighed down with a great material load of money and goods, seems explicable only as the product of a perverse instinct, the auri sacra fames.

To this, the lords of the universe will reply: ‘There is no grave we will sink into!’

Translated by Francesco Anselmetti

Read on: Marco D’Eramo, ‘Celebrity Thaumaturge’, NLR 74.


Money and Mimesis

‘The view I am taking here is that the portrayal can be convincing regardless of whether such a thing has ever been seen or whether or not it is credible…’ – Erich Auerbach, Dante, Poet of the Secular World (1929)

On 1 January, Croatians entered the latest EU-mandated experiment in whether monetary ‘portrayal can be convincing’, when they replaced their national currency, the kuna, with the euro, becoming the first member-state to do so since Lithuania in 2015. Like all EU states other than Denmark, Croatia formally accepted the obligation to enter the Eurozone with its accession as the Union’s 28th – and still most recent – member in 2013. Its relatively prompt adoption of the currency contrasts with the persistent euro-scepticism of countries such as Sweden, the Czech Republic and Hungary, which continue to maintain their own currencies despite being much older members of the EU. This is largely attributable to the unflagging enthusiasm for Brussels emanating from the centre-right government of Prime Minister Andrej Plenković and his party, the Croatian Democratic Union (HDZ; Hrvatska demokratska zajednica). Under Plenković, the HDZ has refashioned itself as a Christian Democratic party of the sort that is increasingly rare in the epoch of ascendant right-wing populism in Europe and beyond.

Ursula von der Leyen, the President of the European Commission, visited Zagreb to sanctify Croatia’s definitive embrace of the euro. (She and Plenković pointedly paid for their coffees in euros.) Such political fanfare has not been a panacea for apprehension about the new currency regime; Croatian citizens are well-acquainted with the contortions and consternations that the euro can involve. Most real estate transactions and the lion’s share of the tourism industry – the dynamo of Croatia’s economy – have long been conducted in euros, while the kuna has effectively been pegged to the euro since the latter’s introduction in 2002. The Greek crisis of 2009, rooted in the financial and policy constraints entailed by the Eurozone, is a more tangible memory in the Balkans than in more affluent EU member-states to the west. Admission to the Eurozone has nevertheless been broadly welcomed, especially by local political and economic elites, as proof positive that Croatia has reached the final stop on its staggered voyage to ‘Europe’. In light of the euro’s chequered recent past, the near-total absence of domestic opposition to its adoption has been remarkable. The decision to invite Croatia into the Schengen area of passport-free travel, taken in December 2022, only added to the sense that Europe was finally here, rather than beyond the horizon in Ljubljana, Trieste or Vienna. Buoyed by the virtuosity of Croatia’s star player Luka Modrić, who led the Vatreni to the semi-final of the World Cup in Qatar in December, the public mood in Osijek, Rijeka, Split and Zagreb is remarkably sanguine.

Like all currency, the euro is a crucible for political-symbolic allegories and alchemies. While bills, from five euros up, are uniform across the Eurozone, the coins – one, two, five, ten, twenty and fifty cents, and one and two euros – carry a national face specific to each member-state, even as they circulate freely throughout the zone and beyond. So: the obverse face of fifty euro cents in Austria depicts Vienna’s iconic Secession building; one and two euro coins in Cyprus display the idol of Pomos, a cross-shaped artefact from ca. 3000 BCE; a two euro coin in Slovenia features a portrait of the national poet, France Prešeren (1800–49); and so on. Ideologically, euro coins integrate the historical and cultural specificity of each constituent nation-state into the universalizing project of the Union, enshrining the dubious conceit that national identities – however problematically imagined and invented – can persevere and thrive in the solvent of EU membership. The delicate selection of which national icons to mint, ranging from historical heroes to material culture, necessarily addresses both domestic and international publics. Euro coins must effectively abbreviate national cultures in a set of recognizable images while also embodying a deracinated, bureaucratized Brussels liberalism.

In the summer of 2021, as part of the lead-up to Croatia’s admission to the Eurozone, Plenković announced the symbols that would receive the mint’s sanction: the Croatian chequerboard (šahovnica) pattern, a key component of the national coat-of-arms; a map of Croatia; a pine marten; the inventor Nikola Tesla; and the Medieval Glagolitic Croatian alphabet. A competition to determine the final design of each symbol was announced, to be judged by a committee of art historians, bankers and sundry public figures. The winning designs were presented to an applauding audience in February 2022, but controversy quickly usurped ceremony. In light of the struggle between Serbia and Croatia to monopolize the legacy of Nikola Tesla, a degree of dyspepsia over the inventor’s star turn on Croatia’s ten, twenty and fifty cent coins was expected. But contention came too from an unanticipated quarter. Only three days after the official unveiling of the victorious designs, illustrator Stjepan Pranjković withdrew his winning image of a pine marten, the eponymous kuna that lent its name to Croatia’s former currency, after Iain Leach, a Scottish photographer for National Geographic, pointed out that Pranjković’s design clearly plagiarized one of his photographs.

The embarrassment of Pranjković’s deception called into question Croatia’s European aspirations generally. Anxieties of incomplete and insufficient Europeanness, which, as Maria Todorova has emphasized, haunt the Balkans at large, lurked in the scandal’s shadows. The process of minting Croatia’s European credentials had been tainted by failed mimesis. Even worse, it was stolen from a European source (leaving aside Brexit-related ambiguities of geopolitical identity). If the Croatian euro coin was a knock-off, might not Croatia’s entry into the Eurozone and Schengen be similarly plagiaristic, inauthentic, fake?

This commotion distracted from reckoning with the weightier, more sinister political history compacted in the image of the pine marten: a legacy of the fascist Independent State of Croatia, the Nazi comprador regime that ruled both Croatia and today’s Bosnia during World War II. The kuna was first introduced as a currency by the fascist Ustaše in 1941, and remained in circulation until the end of the war. Like many emblems of the fascist era, the kuna was resurrected during the 1990s, in the wake of Croatia’s secession from socialist Yugoslavia and its war of independence against the Serbian-dominated Yugoslav army.

The reintroduction of the kuna did arouse opposition at the time on account of its dark provenance. Ivo Škrabalo of the Croatian Social Liberal Party, for instance, lobbied strongly against its adoption: ‘If we’ve rejected the dinar as Yugoslav money, then we must also reject the kuna as Ustaša money, since neither “Yu” nor “U” are needed in Croatia at this point in time.’ (‘Ako smo odbacili dinar kao jugoslavenski novac, odbacimo i kunu kao ustaški, jer Hrvatskoj u ovom trenutku ne trebaju ni “YU” ni “U”.’) Škrabalo and like-minded MPs proposed that the Croatian crown (kruna) replace the Croatian dinar, but they were outvoted by the right-wing parliamentary majority. In 1994, the kuna was restored. It would last longer this time, 28 years rather than a mere four, and daily use has resulted in collective amnesia about its origin. Even now, as kunas rapidly exit circulation, the pine marten on Croatia’s one euro coin is an unmistakable material afterlife of the fascism of the 1940s. But this presents no obstacle to Croatia’s geopolitical aspirations: as Giorgia Meloni and the Fratelli d’Italia have recently demonstrated in Italy, few things are less controversially European these days than fascist afterlives.

Meanwhile, the denizens of Zagreb, my adopted hometown, negotiate the quotidian dilemmas and exasperations of the transition from the kuna to the euro with a bricoleur’s blend of pragmatism, cynicism and humour. Queues grow long at bakeries and farmers’ markets as customers exploit the final opportunity to pay with the former currency by purchasing staples with hoards of long-neglected coins. Cashiers’ brows furrow with new calculations. There are reports of customers attempting to purchase chewing gum with 500 euro bills; others are immediately nostalgic for the kuna. At my local market, an elderly customer asks me how I’m handling ‘our battle with the euro’ before winking and handing me a chocolate shaped like a euro coin. Even among the tricks and traps of mimesis that Europeanization involves, grappling with a new currency can occasionally offer sweet satisfactions.

Read on: Wolfgang Streeck, ‘Why the Euro Divides Europe’, NLR 95.


Starless Sky

If the three wise men were to travel on their camels to the stable in Bethlehem this year, they would almost certainly get lost. Along vast tracts of their route, they would be unable to rely upon their guiding star, for the simple reason that it would not be visible. Baby Jesus would have to forego his gold, frankincense and myrrh.

A paradox characterises our society: we know more about the universe than ever before – we know why the stars shine, how they are born, how they grow old and die; we can perceive the swirling motion of galaxies invisible to the naked eye and listen (so to speak) to the sounds of the origin of the universe, emitted some 13.8 billion years ago. Yet for the first time in human history few adults can recognize even the brightest of stars, while most children have never witnessed a starry night. I say most because the majority of the world’s population today – now surpassing 4 billion – live in urban areas, where artificial light obscures the stars from view.

(This is a form of contradiction common to modern life. The moment we are able to satisfy our desire to fly across the world to exotic beaches and get a tan, the hole in the ozone layer makes the ultraviolet rays of the sun dangerous and carcinogenic. As soon as we realize our desire for cleanliness – see my previous article on eliminating odours – water becomes a limited resource, and so on.)

The awesome spectacle of the star-filled sky is quite unknown to most of us today. Rarely do we raise our gaze to the skies, and if we did, we would see only a handful of dull glimmers of light. To think that on a clear night in ‘normal’ darkness as many as 6,000 celestial bodies are visible to the naked eye, the furthest being the Triangulum Galaxy, some 3 million light years away (we see it as it was three million years ago). And this is nothing compared to the many billions whose existence observatories and telescopes have revealed to us, as our eyes have become ever more blinded by artificial light.

In The Darkness Manifesto (2022), the Swedish writer Johan Eklöf tells us that in Hong Kong (together with Singapore the most illuminated city in the world) the night is 1,200 times brighter than it would be without artificial lighting. To realise the enormity of the transformation, you only have to look at this map which records light pollution (you can zoom in and see the situation where you live). In 2002, the amateur astronomer John Bortle devised a scale which measures the darkness of the night sky: level 1 corresponds to an ‘excellent dark sky’, level 3 to a ‘rural sky’, level 5 to a ‘suburban sky’; at level 6 (‘bright suburban sky’) only 500 stars are visible to the naked eye; at level 7 (‘suburban/urban transition’) the Milky Way disappears. At levels 8 (‘city sky’) and 9 (‘inner city sky’) only a few celestial objects are visible (nearby planets and a few clusters of stars).

A case could be made that artificial lighting is the industrial innovation which has most profoundly affected human life. It won the multi-millennial war against darkness, driving away the terror of the night, its nightmares and its monsters. Only a few centuries ago, when night fell, not only homes but entire cities were barricaded, their gates bolted. The night was populated by demons (Satan, of course, was the ‘Prince of Darkness’); it was the time when the forces of evil gathered, when witches celebrated the Sabbath riding pigs or other animals, as Carlo Ginzburg recounts in his Ecstasies: Deciphering the Witches’ Sabbath (1990).

Illuminating cities has been a practice for over three centuries, long preceding the invention of electrical lighting. Ancient Romans knew night lighting, but a millennium would pass before oil lamps appeared in city streets. It is perhaps no coincidence that the Enlightenment was coeval with urban lighting; its definition of the ‘Dark Ages’ may not have been simply a metaphor. In Disenchanted Night: The Industrialization of Light in the Nineteenth Century (1988) Wolfgang Schivelbusch details the ‘chemical enlightenment’ brought about by Antoine Lavoisier’s modern theory of combustion, according to which flames are fuelled not – as had hitherto been assumed – by a substance called phlogiston, but by oxygen in the air. It is with this that the modern history of artificial illumination begins. ‘The light produced by gas is too pure for the human eye, and our grandchildren will go blind’, Ludwig Börne feared of gas lamps in 1824. ‘Gas has replaced the Sun’, Jules Janin wrote in 1839. Illumination was also a means of control: the first targets of the revolutions of 1830 and 1848 were the street lanterns. A new profession appeared: the lamplighter, who became a literary figure, as in Andersen’s ‘The Old Street Lamp’ (1847) and ‘The Lamplighter’ (1859) by Dickens. In Saint-Exupéry’s Le Petit Prince (1943), the little prince, upon arriving on the fifth planet, encounters only a street lamp and a lamplighter:

When he landed on the planet he respectfully saluted the lamplighter.

‘Good morning. Why have you just put out your lamp?’

‘Those are the orders’, replied the lamplighter. ‘Good morning.’

The carbon filament lamp that Thomas Edison unveiled in 1879, and exhibited at the International Exposition of Electricity in Paris in 1881, swept away gas lighting and became the new artificial sun, just as blinding as its natural counterpart. Schivelbusch cites the following medical text from 1880:

In the middle of the night, we see the appearance of a luminous day. It’s possible to recognise the names of streets and shops from the other side of the street. Even people’s facial expressions can be seen clearly from a great distance and – of particular note – the eye adjusts immediately and with the least effort to this intense illumination.

With electricity, humanity conquered the dark. Mealtimes shifted, as did those for socializing, for entertainment, for work – the ‘night shift’ became possible. A new rhythm now regulated daily life, one in conflict with our circadian one (a term derived from the Latin circa diem, ‘around the day’).

By making the night disappear, we alter the rhythm with which hormones are produced, in particular melatonin, which regulates the sleep cycle and is synthesised by the pineal gland in the absence of light. When darkness falls, its concentration in the bloodstream increases rapidly, peaking between 2 and 4 am, then gradually declining towards dawn. Thus long periods of high melatonin levels are normal during the winter months, while the opposite is true in the summer, when days are longer and brighter. According to the website Dark Sky, melatonin has antioxidant properties, induces sleep, boosts the immune system, lowers cholesterol, and helps the functioning of the thyroid, pancreas, ovaries, testes and adrenal glands. It also triggers other hormones such as leptin, which in turn regulates appetite.

Nocturnal exposure to artificial light (especially blue light) inhibits the production of melatonin. A brightness of only eight lux is enough to interfere with its cycle. This is a direct cause of insomnia, and therefore also of stress and depression and, through the dysregulation of leptin, of obesity. Certain studies show that night shifts increase the risk of cancer (melatonin and its interplay with other hormones help prevent tumours). It’s understandable, then, that artificial lighting has given rise to the term ‘light pollution’.

This much concerns us humans. But the effect on other living beings is far more dramatic – after all, we’re diurnal animals. As Eklöf writes, ‘no less than a third of all vertebrates and almost two-thirds of all invertebrates are nocturnal, so it’s after we humans fall asleep that most natural activity occurs in the form of mating, hunting, decomposing and pollinating’. Robbed of darkness, the prey of nocturnal predators has far less chance of escape. Today elephants, which are also diurnal, are said to be becoming nocturnal in order to evade poachers. Toads and frogs croak at night as a mating call; without darkness their reproductive rate plunges. The eggs of marine turtles hatch on beaches at night, and the hatchlings find the water by heading for the brighter horizon above it. Artificial light thus draws them away from it: in Florida alone, this kills millions of newly hatched turtles every year. Millions of birds die every year from colliding with illuminated buildings and towers; nocturnal migratory birds orient themselves by the moon and the stars, but are disoriented by artificial light and lose their way.

The worst effects are felt by insects. According to a 2017 study, total insect biomass has dropped by 75% in the last 25 years. Motorists have been aware of this for some time, through the so-called windscreen effect: the number of insects that get squashed on the front of cars is far smaller than in previous decades. There are many causes of this decline, but artificial lighting is certainly one, because the majority of insects are nocturnal. We don’t realize it, but illuminated cities are a major migratory destination for insects from the countryside. Light also disturbs their reproductive rituals. Moths are exterminated by their attraction to light, and more plants are pollinated by moths than by bees (which are also declining). The problem of pollination is so serious that, as Eklöf recounts, a few years ago photos of an orchard in Sichuan showed workers with ladders pollinating flowers by hand. Working quickly, one person might be able to pollinate three trees a day; a small beehive can do a hundred times that number.

A further side-effect of artificial lighting is that non-lit areas become even darker to us, because it takes time for the eye to adjust, reactivating the rods (which are sensitive to the intensity of light) and deactivating the cones (sensitive to colour) in the retina. The human eye is one of our most sensitive instruments, capable of detecting a single photon; it has been estimated to be the equivalent of a 576-megapixel camera. At night, once our eyes have adjusted to very low levels of luminosity, we’re able to see quite a lot: under a full moon, we’re capable of walking briskly along a rugged path. But artificial light blinds us to everything that we would once have seen with ease. Here is another case of the technological revolution simultaneously giving and taking away.

Light pollution has today created a market for darkness tourism: the hunt for (by now rare) places where darkness is total. Great sums can be spent in search of what we have gone to such lengths to defeat. As Paul Bogard tells us in The End of Night: Searching for Natural Darkness in an Age of Artificial Light (2013), to find darkness in Las Vegas one has to go all the way to Death Valley. There, in one of the darkest places on the North American continent, Bogard writes, the light of the Milky Way is so intense it casts shadows on the ground, while Jupiter’s brightness is strong enough to interfere with his night vision. It was in the Atacama desert, one of the darkest places on earth, that in 2012 Noche Zero, the first global conference in honour of darkness, was held, attended by astronomers, neurobiologists, zoologists and artists.

A community of ‘lovers of the dark’ has thus formed, with its own cult books, such as Junichiro Tanizaki’s In Praise of Shadows (1933), a veritable eulogy to the penumbral; its groups, such as the International Dark Sky Association, founded by a handful of American astronomers in 1988; and its sanctuaries, parks and reserves. They cite studies according to which, rather counterintuitively, the lighting of roads can decrease safety by making potential victims and property easier for criminals to see. Theirs is a noble fight, though one with doubtful prospects of success, given the hunger for light that consumes our species. Speaking of consumption: LED bulbs consume far less energy than filament ones, and for this reason far more of them are used, increasing total light emission. It has been calculated that in the US and in Europe unnecessarily strong or badly directed lights (pointed at the sky, or at other spaces that don’t need to be lit) generate emissions of carbon dioxide equal to those of 20 million cars. And every year the illuminated portion of the planet grows inexorably. I realize it’s banal to do so, but I can’t help but think of those two things which for Immanuel Kant ‘fill the mind with ever new and increasing admiration and awe, the more often and steadily we reflect upon them: the starry heavens above me and the moral law within me’. Could he ever have imagined the sky above us no longer filled with stars? We might ask if the moral law within us is also waning, or if it has already been lost.

Translated by Francesco Anselmetti.

Read on: Carlo Ginzburg, ‘Witches and Shamans’, NLR I/200.


Scientific Capitalism

Of the six scientists awarded Nobel Prizes this year – three in Physics, three in Chemistry – four had already founded their own companies. Here, in all its splendour, we observe the contemporary figure of the ‘scientist-entrepreneur’, where the stress falls on ‘entrepreneur’ and ‘scientist’ has a merely descriptive function. This figure – not new per se, but recent in its codification – has long been promoted by the world’s universities. It is a synthesis of the two paradigms of our time: neoliberalism, in which human beings are defined as entrepreneurs, of themselves if nothing else; and the neo-feudalism of a cognitive aristocracy, whereby an alleged superiority of knowledge or competence entitles a select few to rule over the ignorant masses. Science departments today tirelessly exhort their faculty to become versed in the arcane business of funding procurement, and to pursue areas of inquiry that may be attractive to venture capital. More than a scientist-entrepreneur, the researcher today is becoming a scientific entrepreneur, in the same way one might be a real estate or a textile entrepreneur.

Now it seems that this ideal is favoured by the jurors of the Royal Swedish Academy of Sciences. This year’s Prize in Physics rewarded research into an obscure, esoteric quantum property that puzzled even Einstein (who famously called it ‘spooky action at a distance’). Obscure, yes, but with potentially revolutionary applications in the field of quantum computing, and therefore highly appealing to investors. It is no surprise, then, that two of the three laureates were entrepreneurs: John Clauser, founder of J. F. Clauser & Associates, and Alain Aspect, co-founder of PASQAL. The three Chemistry laureates were meanwhile recognized for their ‘development of a new method for assembling new molecules’. The technique, called ‘click chemistry’, makes the joining of molecules simple and efficient. Here again, two were entrepreneurs. Morten Meldal co-founded Betamab Therapeutics in 2019, and perhaps the most emblematic case is Carolyn Bertozzi, who, having served for some time on the scientific committees of pharmaceutical giants such as GlaxoSmithKline and Eli Lilly, founded a host of startups which, to give a sense of their number, are worth listing in full: Thios Pharmaceuticals (2001); Redwood Bioscience (2008), subsequently bought by Catalent Pharma Solutions (2014), though Bertozzi remains on its scientific committee; Enable Biosciences (2014); Palleon Pharma (2015); InterVenn Biosciences (2017); and finally OilLux Biosciences and Lycia Therapeutics (2019).

It is no coincidence that Bertozzi is the most entrepreneurial of this year’s prize-winners: her contribution was precisely to have found a way to apply ‘click chemistry’ to biological molecules. Over the last forty years, biology has been the scientific field that has most fully embraced entrepreneurship, precisely because it is directly connected with genetic engineering (note the industrial-technological term ‘engineering’). In his book Editing Humanity (2020), the founding editor of Nature Genetics, Kevin Davies, describes the discovery, patenting and subsequent exploitation of a new technique to cut and sew – to edit, essentially – the DNA of living organisms. The technique is known as CRISPR gene editing, an unwieldy acronym for ‘clustered regularly interspaced short palindromic repeats’. Its pioneers, the microbiologist Emmanuelle Charpentier and the biochemist Jennifer Doudna, developed the technique in 2012 (and were awarded the Nobel Prize in Chemistry in 2020). Shortly after, other scientists improved the procedure, unleashing a vast and ferocious legal battle over patents which still rages on a decade later.

In 2013, Charpentier founded her first biotechnology firm, and in 2014 her second, ERS Genomics. Doudna was even more enterprising: before even making the new technique public she founded Caribou Biosciences (2011); afterwards she jumped ship to found Editas Medicine (2013), the first publicly traded CRISPR company, funded by Bill Gates among others. She left when rivals managed to appropriate a substantial part of the patent, founding Intellia Therapeutics (2014) in response, then Mammoth Biosciences (2017). In his book, Kevin Davies counts no fewer than forty startups connected in one way or another to the CRISPR procedure.

Evidently, the Nobel Prize acts as a seal of quality for venture capital, which then encourages the laureate to commercialize their innovation. For example, Eric Betzig won the prize for Chemistry in 2014 for his pioneering work on super-resolved microscopy and recently co-founded Eikon Therapeutics (2021), which seeks to apply the results of his research. But as we’ve seen from Doudna and the rest of this year’s medallists, not all researchers wait for the Nobel before launching their own startup. Take the German physicist Theodor Hänsch, who received the prize in 2005 for his work on the optical frequency comb technique in spectroscopy. Some three years prior, Hänsch had co-founded the firm Menlo Systems, which used this method to manufacture products for the market. If one anticipates that a discovery will be profitable, in other words, it is simply financial foresight to launch one’s company while waiting for the Nobel stamp of approval.

A leader in a given field throwing themselves into commercial ventures and stock exchange listings then has knock-on effects, encouraging their disciples, assistants and students to do likewise. A cycle develops that favours academics who know how to attract funding and who therefore, even before becoming full-blown entrepreneurs, are already effective company managers, fostering those protégés and projects that tend towards commercialization. Already in 2006, a study by the Max Planck Society found that one in four scientists who patent their results also establish their own business. The neoliberal character of this dynamic is hardly accidental: the explosion of biotech firms (which, along with IT companies, constitute the overwhelming majority of ‘scientific’ startups) coincided with the triumph of Reaganism.

In his classic study of the invention of PCR (polymerase chain reaction, a procedure used for rapidly copying segments of DNA), the anthropologist Paul Rabinow wrote that the year Reagan came to power:

the Supreme Court of the United States ruled by a vote of 5 to 4 that new life forms fell under the jurisdiction of federal patent law. Until the 1980s, patents had generally been granted only in applied domains… the Patent and Trademark Office had tended to restrict patents to operable inventions, not ideas… Finally, it was generally held that living organisms and cells were ‘products of nature’ and consequently not patentable. The requirement that patent protection be extended to the invention of ‘new forms’ did not seem to apply to organisms (plants excepted).

That same year, Congress passed the Patent and Trademark Amendment Act ‘to prompt efforts to develop a uniform policy that would encourage cooperative relationship between universities and industries, and ultimately take government-sponsored inventions off the shelf into the marketplace’. The result? From 1980 to 1984, during Reagan’s first term, ‘patent applications from universities in relevant human biological domains rose 300 per cent’. The patentability of genetic modification was clarified nine years ago:

On June 13th, 2013, in the case of the Association for Molecular Pathology v. Myriad Genetics, Inc., the Supreme Court of the United States ruled that human genes cannot be patented in the US because DNA is a ‘product of nature’. The Court decided that because nothing new is created when discovering a gene, there is no intellectual property to protect, so patents cannot be granted. Prior to this ruling, more than 4,300 human genes were patented. The Supreme Court’s decision invalidated those gene patents, making the genes accessible for research and for commercial genetic testing. The Supreme Court’s ruling did allow that DNA manipulated in a lab is eligible to be patented because DNA sequences altered by humans are not found in nature. The Court specifically mentioned the ability to patent a type of DNA known as complementary DNA (cDNA). This synthetic DNA is produced from the molecule that serves as the instructions for making proteins (called messenger RNA).

Now is not the time for a wider discussion of intellectual property (what would happen if mathematical theorems were patentable? To begin with, mathematicians would be pushed into concealing the proofs of a given theorem…but Occam’s razor forbids us to proceed in this direction). Nor of the concept of nature, which has been deformed by these legal rulings and the technical-industrial practice they have spawned. Instead, let us focus on the relationship between science and profit that we’ve so far been delineating.

One may assume that in the past, scientists were entirely disinterested, before being transformed into venal accumulators by the neoliberal revolution. Not quite. It is true that many have been motivated by a simple ‘love of science’ (I am thinking here, for example, of the physicist Paul Dirac or the mathematician Niels Henrik Abel), and that to act ‘in a scientific field is to be placed in conditions in which one has an interest in disinterest, in particular because lack of interest is rewarded’ (Bourdieu). But there were scientists in the past who gained a great deal from ‘pure science’. Without reaching for extreme cases such as the chemist Justus von Liebig (1803-73), immortalized for inventing the stock cube, or the physicist William Thomson (known as Lord Kelvin, 1824-1907), who amassed a great fortune thanks to his discoveries, the French biologist Louis Pasteur offers a good example.

Pasteur was always attentive to the agricultural and industrial dimensions of his research. He was the first to patent (among other things) the pasteurization of milk, then of wine and beer, and had accumulated a fortune of one million francs by the time he died. In spite of this, however, Pasteur was celebrated as the purest of scientists, the disinterested scientist par excellence. What explains this? It should be noted that the notion of ‘pure science’ really took hold in the second half of the nineteenth century. ‘Pure science’ was invoked by jurists, agronomists, philosophers of art, naturalists, chemists (the chemist Berthelot, speaking of the dyes extracted from coal tar, commented that ‘their discovery is the triumph of pure science’). As the historian of science Guillaume Carnino writes,

If ‘pure science’ shows itself to be so transdisciplinary, it’s because it’s none other than the rhetorical expression of an aspiration that belongs to the academic world as a whole: the autonomy of research. But for the majority of scientists, the purity they ascribe to science does not contradict its very real entanglement in the market. Far from being devoid of any lucrative or moral intentions, the pure science of the 1860-80s allowed for the possibility of its application in industry… it’s not a question of counterposing disinterested science to applied science, but rather to demonstrate that the two proceed from the same logic, and that we must leave the field open to the most incongruous, academic and seemingly less ‘applicable’ of research projects in order to reap any economic benefits it might bring. The purer the science, the more profitable its outcome. The argument is astounding because it justifies the autonomy of the academy in the name of profit and material gain, and yet it’s effective and appears regularly in the writings of faculty members… Now, given that the condition for the existence of pure science is none other than disinterested research — the remuneration of scientists, that is, who dedicate themselves entirely to research they are passionate about — it’s suddenly convenient to preserve and foster, at any cost, that revered substance that seems to constitute the very spirit of the university. Put otherwise, the purity of science guarantees the interest that industrialists, governments and nations will find in it.

The problem with the neoliberal revolution is not, therefore, that scientists have become venal when once they were angelic. It is that while money was previously a side-effect of scientific inquiry, now it is its main purpose (grammatically speaking, scientist used to be the noun, entrepreneur the adjective; now it’s the opposite). And, typically, the moment scientists start profiting, they stop doing science.

The wildest example is that of the world-famous mathematician Jim Simons: his research on Riemannian manifolds has found application in quantum physics, earning him numerous awards. In 1982, Simons used his mathematical research to develop an investment algorithm that exploited the inefficiencies of financial markets, and founded a hedge fund called Renaissance Technologies (its flagship fund is called Medallion, in sardonic reference to Simons’ various prizes). Simons has been referred to as ‘the world’s greatest investor’ and ‘the most successful fund manager of all time’. His personal fortune is estimated at around $25 billion. When he retired in 2010, his place was taken by Robert Mercer, an inveterate partisan of the far right, founder of Cambridge Analytica (renowned for its role in the Brexit campaign and Trump’s election) and a major funder of Breitbart News. In spite of some recent tax trouble with the IRS, Simons is still widely respected as a great philanthropist. If Marx coined the term ‘scientific socialism’, Simons can boast of having implemented scientific capitalism.

Translated by Francesco Anselmetti.

Read on: Michael Sprinker, ‘The Royal Road’, NLR I/191.


Odourless Utopia

In Europe, the war bulletins come not just from Ukraine, but also from the climate front. The French government has cracked down on water use, banning the watering of lawns and the washing of cars in 62 of 101 departments, as more than 100 municipalities no longer have potable water. Nuclear power plants on the Rhône and Garonne have had to reduce production due to insufficient water in the rivers. In Italy, the government has declared a state of emergency in 5 of 20 regions, while Second World War bombs are being discovered in the bed of its largest river, the dried-up Po. In Germany, the Rhine is so low that the barges plying its 1,000 kilometres from Austria to Holland have had to reduce their cargo from 3,000 to 900 tons so as not to run aground, and the river is expected to soon become impassable to freight traffic. In England, for the first time on record, the source of the Thames has dried up, and the river now begins to flow more than 5 miles further downstream. In Spain, restrictions on water consumption have been imposed in Catalonia, Galicia and Andalusia.

These are all warning signs. In a few centuries, the idea of water as an abundant resource and universal right may be unimaginable. It is easy to forget that even in the so-called advanced world, domestic running water – for toilets, cooking, personal hygiene, washing clothes and dishes – is a very recent and ephemeral phenomenon, dating back less than a century. In 1940, 45% of households in the US lacked complete plumbing; in 1950, only 44% of homes in Italy had either indoor or outdoor plumbing. In 1954, only 58% of houses in France had running water and only 26% had a toilet. In 1967, 25% of homes in England and Wales still lacked a bath or shower, an indoor toilet, a sink and hot- and cold-water taps. In Romania, 36% of the population lacked a flushing toilet for their household’s sole use in 2012 (down to 22% in 2021).

The availability of domestic running water varies depending on one’s individual wealth and on the affluence of one’s nation. While in Western Europe and the US the share of households with toilets equipped with running water currently exceeds 99%, in a number of African countries the percentage is between 1 and 4: Ethiopia 1.76%; Burkina Faso 1.87%; Burundi 2.32%; Uganda 2.37%; Chad 2.50%; Niger 2.76%; Madagascar 2.83%; Mozambique 2.87%; Mali 3.71%; Rwanda 3.99%; Congo 4.17%. In these countries the toilet is a marker of class status; in Ethiopia fewer than one in 56 households has one. The data also contains some surprises: there are more toilets in Bangladesh (35%) than in Moldova (29%), and India is in roughly the same situation as South Africa (44% versus 45%) and just ahead of Azerbaijan (40%). While in Baghdad the share of houses with flushing toilets is 94.8%, in central Kabul it is 26%, and in Afghanistan as a whole 13.7%.

It is possible to trace the social and geopolitical history of running water. Its widespread accessibility was the result of two primary factors: 1) the industrial revolution, which provided the pipelines and purification plants needed for this colossal planetary enterprise; and 2) urbanization, for it is fairly obvious that bringing running water to a series of isolated cottages is far more expensive and complex than bringing it to centres of high population density. Urbanization was stimulated by the industrial revolution, and then in turn by the availability of running water for newly-arrived citizens. This may well be one of the most significant, and most peculiar, features of contemporary civilization. For what it created was the utopia of an odourless society. This would not have been possible without the spread of running water, but it was accelerated by the growing desire to deodorize the human habitat. In the twenty-first century, we no longer perceive smells as our ancestors did.

In The Foul and the Fragrant (1988), Alain Corbin asks, ‘What is the meaning of this more refined alertness to smell? What produced the mysterious and alarming strategy of deodorization of everything that offends our muted olfactory environment? By what stages has this far-reaching anthropological transformation taken place?’ An incisive answer is offered by Ivan Illich in his brilliant little book, H2O and the Waters of Forgetfulness (1986), which reminds us that it was not until the last years of Louis XIV’s reign that a decree was passed for the weekly removal of faeces from the corridors of Versailles. It was in this era that the project to deodorize began. ‘The sense of smell’, Illich writes,

was the only means for identifying the city’s exhalations. The osmologists (students of odors) collected ‘airs’ and smelly materials in tightly corked bottles and compared notes by opening them at a later time as though they were dealing with vintage wines. A dozen treatises focusing on the odours of Paris were published during the second part of the eighteenth century…By the end of the century, this avant-garde of deodorant ideologues is causing social attitudes toward body wastes to change…Toward the middle of the century shitting, for the first time in history, became a sex specific activity…At the end of the century, Marie Antoinette has a door installed to make her defecation private. The act turns into an intimate function…Not only excrement but the body itself, it was discovered, emanates bad odours. Underwear that up to this time had served to keep one warm or attractive began to be connected with the elimination of sweat. The upper classes began to use and wash it more frequently, and in France the bidet came into fashion. Bed sheets and their regular laundering acquired a new importance, and to sleep in one’s own bed between sheets was charged with moral and medical significance…On November 15, 1793, the revolutionary convention solemnly declared each man’s right to his own bed as part of the rights of man.

Being odourless thus became a symbol of status:

smelling now began to become class-specific. Medical students observed that the poor are those who smell with particular intensity and, in addition, do not notice their own smell. Colonial officers and missionaries brought home reports that savages smelled differently from Europeans. Samojeds, Negroes and Hottentots could each be recognized by their racial smell, which changes neither with diet nor with more careful washing.

Naturally this myth was self-fulfilling, to the extent that colonized peoples were denied running water, soap and flushing toilets. Subaltern classes also began to smell and arouse revulsion. ‘Slowly’, Illich continues,

education has shaped the new sense for cleanly individualism. The new individual feels compelled to live in a space without qualities and expects everyone else to stay within the bounds of his or her own skin. He learns to be ashamed when his aura is noticed. He is embarrassed at the thought that his origin could be smelled out, and he is sickened by others if they smell. Shame at being smelled, embarrassment at coming from a smelly environment, and a new proneness to be offended by smell – all taken together place the citizen in a new kind of space.

Realizing this ideal of olfactory neutrality required increasing amounts of water. Before the Second World War, bathing once a week was considered hygienist paranoia. Only with the mass production of household washing machines did cleaning clothes become more frequent. I remember the London of the 1970s: on the Underground, the City clerks could be recognized by their detachable cuffs and collars; the former were changed regularly but the latter were grayish from having been worn for a week straight. The families that hosted us would ask us to insert coins into a special hot water meter: breakfast was included in the price, showering was not.

Now, though, the utopia of an odourless humanity has conquered much of the planet. Yet, as with many aspects of modernity, the moment we acquired the means to achieve a goal, its enabling condition (namely the abundant, unlimited availability of water) was lost. An ever more populous and rapidly warming planet will likely return to a state in which water is scarce and contested. This future may, however, be marked by a significant cultural difference. Whereas in the past, water was scarce for a humanity able to live happily with odours, now it will be scarce for one that considers its own odours insufferable, not to mention those of others.

I remember being struck by the extraordinary success of the Canadian TV drama H2O (2004), whose trailer announced:

A dead Prime Minister. A country in turmoil. A battle for Canada’s most precious resource – water. On the eve of testy discussions with the US Secretary of State, Prime Minister Matthew McLaughlin is killed in an accident. His son, Tom McLaughlin, returns to Canada to attend his father’s funeral, where he delivers a eulogy that stirs the public, propelling him into politics and ultimately the Prime Minister’s office. The investigation into his father’s death, however, reveals that it was no accident, raising the possibility of assassination. The trail of evidence triggers a series of events that uncovers a shocking plot to sell one of Canada’s most valuable resources – water.

As James Salzman noted in his book Drinking Water (2012), this omitted ‘the most exciting part, where American troops invade Canada to plunder their water supply’. A US–Canadian war over water! Until now, such conflicts seemed to be the preserve of semi-desert areas in the Middle East (think of Eyal Weizman’s writing on the Israelis’ use of water to surveil and punish Palestinians), or torrid Africa (as in the latent conflict between Egypt, Sudan and Ethiopia over the Grand Ethiopian Renaissance Dam built on the Blue Nile). But with the possible desertification of the central European plain, war for water will become a real prospect, even in regions once famous for high rainfall and water infrastructure. We citizens of ‘rich countries’, ‘industrialized nations’, ‘more developed powers’, will fight to smell less.

Translated by Francesco Anselmetti.

Read on: Nancy Fraser, ‘Climates of Capital’, NLR 127.


Iron Musk

It is difficult to hide a certain satisfaction upon witnessing the collapse of bitcoin. Since I last dealt with the topic for Sidecar seven months ago, the total capitalization of cryptocurrencies has decreased from $2.6 trillion – equivalent to the total GDP of France – to only $901 billion (as of 15 June). One feels sorry, but only a little, for those gullible people who invested their modest savings in cryptocurrencies hoping for easy profits and got fleeced by another pyramid scheme – an updated version of the seventeenth-century tulip fever in the Netherlands, history’s first senseless financial bubble.

This schadenfreude is all the greater since the cryptocurrency crash particularly affects Elon Musk – in theory the world’s richest man, with assets valued at $268 billion. In the media, Musk is depicted as contemporary capitalism’s very own Tony Stark, alter ego of the Marvel superhero Iron Man: a business magnate, playboy, philanthropist, inventor and scientist. In 2021, Musk decided to accept bitcoin as payment for the electric vehicles produced by his company Tesla, investing $1.5 billion of its reserves in the cryptocurrency; he has also tirelessly promoted the meme currency Dogecoin. Musk has relied on the fact that the cryptocurrency market is controlled by a small number of people who are able to manipulate its ebbs and flows (save any sudden waves of panic). For several years these capitalists propped up the value of their investments in bitcoin by continuing to accumulate cryptocurrencies, just as public companies do when they inflate their own shares through ‘buybacks’.

In the space of a year, however, Dogecoin has lost over 80% of its value, dropping from $40 billion to $6.9 billion. Undeterred, Musk has continued to assert his faith in the venture, relaunching it in May as a means of paying for the merchandise of his space corporation, SpaceX. Every announcement made by Musk is followed by a rise in the price of Dogecoin: a fact that illuminates the mechanism through which this new form of capitalism increases the fortunes of its standard-bearers. The capitalist announces on social media that they will buy a given stock. Their followers (or, perhaps more aptly, believers) rush to buy the same shares, which experience a vertiginous surge, after which the capitalist cashes in by selling part of the bloated stock, easily covering the cost of the initial purchase.

What’s producing revenue here is influence. In Musk’s case, influence is accrued through his own comic-book persona: he will continue to amass wealth so long as he is seen as a Stark-like figure. This is how his image as the Iron Capitalist remains credible. For this reason, Twitter is the most efficient financial tool at his disposal: his 91 million followers scattered around the world are his real capital. Hence, on 4 April the value of Twitter’s shares increased by 27% after Musk announced he had bought 9% of the company’s stock (Dogecoin also went up 20% as a result). It stands to reason that Iron Man would want to control the source of his revenue by investing in it.

Musk’s adherence to this superhero persona is therefore not only – or not even primarily – a vain ostentation, but quite literally a question of economic interest. Throughout his career as an entrepreneur he has carefully fashioned his image as an inventor or scientist (even if he dropped out of his graduate studies in materials science at Stanford after only two days). As Forbes emphatically proclaims, ‘Elon Musk is working to revolutionize transportation both on Earth, through electric car maker Tesla – and in space, via rocket producer SpaceX’. Musk must constantly renew these superheroic credentials, investing in fanciful, futuristic projects reminiscent of science fiction: electric cars, space exploration, artificial intelligence and neurotechnology. The key is to launch a new project before the previous one has been completed; new investments make earlier ones look profitable, thereby raising the value of their stock.

Exemplary in this regard is the story of Tesla, the electric vehicle company which, without having established a foothold in the industry (how many Teslas do you see driving around?), launched itself into the field of self-driving cars, with predictably disastrous results. As of 20 February, Tesla cars had caused 11 accidents, 17 injuries and one fatality. But, for Musk, the mere promise of automated cars served to obfuscate the broader failure of the electric vehicle. Tesla went public in 2010, after receiving $500 million worth of financing from the US government. From 2010 to 2019 its value increased, but at a fairly typical pace for an innovative tech company in a period of quantitative easing. (At this time, investment funds were able to take out billions in interest-free loans, and, without quite knowing where to channel it all, invested in companies that were seen as promising; it’s this that underpinned the enormous boom in stocks, despite the near-stagnant real economy.) Over the following two years, the company truly went into orbit, peaking at $1.2 trillion in November 2021, before sinking to $662 billion as of 15 June.

This valuation does not correspond in any way to Tesla’s ‘real’ size, which remains modest both in terms of vehicles produced (305,000 in the whole of last year) and sales ($54 billion). In comparison, the Volkswagen group had a revenue of $250 billion and produced 5.8 million cars, but its capitalization only amounted to $167 billion. The ascent of Tesla was also fuelled by the growth of bitcoin, the promise of space exploration and, in 2021, the much-publicised tourist rocket ‘excursion’ which helped SpaceX surpass the $100 billion valuation threshold. In this way, the SpaceX and bitcoin boom retroactively triggered the rise of Tesla.

As we’ve seen, the valuation of Musk’s enterprises, as well as the aleatory estimates of his wealth, have always been based on the promise of future expansion: achievements that are just out of reach, just over the next hill. His trust in bitcoin therefore indicates more than just a speculative opportunism; it embodies the business model that operates across his various industries. It also demonstrates that the influence exercised by Musk through Twitter doesn’t only affect small investors (those that Italian stock traders call parco buoi, ‘the flock’), but also ‘professionals’: stockbrokers, financial advisors, fund managers and so on.

Every epoch has an entrepreneur who symbolizes its particular style of capitalism. At the end of the nineteenth century, during the robber baron era, it was Andrew Carnegie, evangelist of modern billionaire philanthropy and author of The Gospel of Wealth (1889). Then it was Henry Ford, the fascist-sympathizing industrialist behind the Model T, who shocked the world by paying his workers five dollars per day and was deemed ‘the one great orthodox Marxist of the twentieth century’ by Alexandre Kojève. The post-World War II period, with its social democratic compromise, lacked Promethean entrepreneurs of the kind envisaged by figures such as Werner Sombart and Joseph Schumpeter. Yet in the 1980s the mythos of the entrepreneur was revived with the rise of Reaganism. Richard Branson emerged as the fitting stepson of Thatcher, whose privatizations and deregulations paved the way for Virgin Atlantic and Virgin Healthcare. In 1986 the then Prime Minister appointed him ‘litter tsar’, tasked with ‘keeping Britain tidy’. Later, the Blair government entrusted him with managing part of the newly privatized British rail infrastructure.

Branson inaugurated the era of the performer-entrepreneur, a man of showbiz more than business, foreshadowing the new generation of moguls who operate on social media. Mark Zuckerberg, who deftly exploited Facebook to build his own personal brand, was the first. Then, in truly cinematic fashion, entered Iron Man Elon. Yet these symbolic figures aren’t necessarily the most significant ones. John Rockefeller or John Pierpont Morgan were far more important than Carnegie, even if they never embodied an epochal style. Bill Gates was just as important as Steve Jobs (himself a mythical character, though he died before the new wave of social media). In the same way, Amazon’s Jeff Bezos shapes our lives far more than Elon Musk, even though his presence on social media is close to nil, and he is markedly less representative of what might be called ‘comic book capitalism’.

The truth is that Musk’s significance is more political than economic. I know from personal experience that public figures – however cynical their stated positions may appear – end up identifying with the role they play and believing in the principles they thought they were exploiting. Tony Stark inevitably begins to see himself as Ulysses, ‘that man skilled in all ways’, whose ingenuity allows his people to fulfil their historic mission. Yet, unlike his former Paypal associate and fellow cryptocurrency enthusiast Peter Thiel, Musk has little use for political proclamations. His actions speak for themselves. They reveal an individual convinced of his right to shape the fate of the world – not primarily through his wealth, but through his membership of a ‘cognitive aristocracy’, an elect few more intelligent, more knowledgeable and more perceptive than the rest.

Here we enter the phantasmagorical world of the comic-book capitalists, who often use their vast wealth to realise their teenage fantasies. Relevant to this dreamland is the disproportionate influence, especially in the eighties, of Ayn Rand’s Atlas Shrugged (1957), in which the Russian émigrée describes ‘a dystopian United States in which private businesses suffer under increasingly burdensome laws and regulations’, as well as the resistance of some heroic capitalists who eventually migrate and establish a free society elsewhere (a notable super-fan of this extremely dull book was Alan Greenspan).

The 2008 crisis dealt a blow to the partisans of Rand’s rational egoism (Greenspan himself ultimately abjured it). But it was soon to be replaced by a new cult work entitled The Sovereign Individual: How to Survive and Thrive During the Collapse of the Welfare State (1997), co-written by James Dale Davidson, a financial consultant whose expertise lay in how to profit from catastrophes, and William Rees-Mogg (1928-2012), long-standing editor of The Times. A 2018 Guardian article summarized the book’s four main theses:

1) The democratic nation-state basically operates like a criminal cartel, forcing honest citizens to surrender large portions of their wealth to pay for stuff like roads and hospitals and schools.

2) The rise of the internet, and the advent of cryptocurrencies, will make it impossible for governments to intervene in private transactions and to tax incomes, thereby liberating individuals from the political protection racket of democracy.

3) The state will consequently become obsolete as a political entity.

4) Out of this wreckage will emerge a new global dispensation, in which a ‘cognitive elite’ will rise to power and influence, as a class of sovereign individuals ‘commanding vastly greater resources’ who will no longer be subject to the power of nation-states and will redesign governments to suit their ends.

Though written in 1997, the book is perfectly synchronized with the world of cryptocurrencies, created a decade later in the immediate aftermath of the financial crash. The Sovereign Individual found an early adherent in Thiel, a member of the so-called Paypal Mafia, the group of young entrepreneurs – including Musk – that launched Paypal in 1998 and subsequently spawned a whole host of companies: Reid Hoffman founded LinkedIn; Russel Simmons and Jeremy Stoppelman founded Yelp; Keith Rabois was an early investor in YouTube; Max Levchin became the CEO of Slide, Roelof Botha a partner at Sequoia Capital. With the exception of Musk, they all appear together in a famous photo published by Fortune in 2007, sitting in a bar, dressed as Italian-American gangsters.

Not all of this clique would become disciples of The Sovereign Individual: some continue to fund liberal causes and Democratic electoral candidates. Yet the real division within the group is between the paladins of crypto and the others. Remember, bitcoin presented itself as a tool that could render the state superfluous as a guarantor of currency – undermining one of its two remaining monopolies (the other being the monopoly on legitimate violence). It was a way of realizing Robert Nozick’s ultra-minimalist state in the economic and financial realm, going well beyond even the most audacious Friedmanite vision, in which the supply of money is entrusted to the market.

Even more radical in his political convictions is Thiel, who, as we learn in a recent article in the London Review of Books,

predicts the demise of the nation-state and the emergence of low or no tax libertarian communities in which the rich can finally emancipate themselves from ‘the exploitation of the capitalists by workers’, has long argued that blockchain and encryption technology – including cryptocurrencies such as bitcoin – has the potential to liberate citizens from the hold of the state by making it impossible for governments to expropriate wealth by means of inflation.

Thiel recently hired the former Austrian chancellor Sebastian Kurz, a conservative politician increasingly gravitating towards the extreme libertarian right, as Global Strategist for his investment fund. Thiel has also become a fervid exponent of the ‘Dark Enlightenment’, the new philosophy embraced by the alt-right and by some Trumpians (Thiel was one of Trump’s earliest financial backers), which proposes the creation of a neo-feudal system governed by a small set of cognitively superior elites.

These patricians cloak themselves in the noblest of robes: those of meritocracy. After all, who would be against the idea that whoever deserves more should obtain more? The problem is that this reasoning is always performed backwards, moving from consequences to causes; so-called meritocracy, far from arguing that rewards should be commensurate with merit, actually maintains the opposite. Possessing wealth is already incontrovertible proof that it is deserved. The rich are rich because they deserve to be, and everyone else is the undeserving poor. Musk is the living apologue of this principle, its celebrity incarnation. Yet, precisely for this reason, he doesn’t need to express radical positions like his ex-partner Thiel. The concept of cognitive feudalism is irrelevant for him, since he can simply exercise such tyranny over his employees. Rather than flaunting his radicalism, he puts it into practice. He doesn’t gloat about cryptocurrencies’ ideological virtues; he merely uses them to inflate the valuation of his companies. As the Nobel laureate Wole Soyinka wrote in his stinging critique of négritude: ‘a tiger does not proclaim his tigritude, he pounces’.

Yet the limits of this approach are plain to see. Tesla’s market performance tracks that of cryptocurrencies with astonishing fidelity (Tesla’s collapse from $1 trillion to $662 billion since last November coincides with the recent crypto crash). The end of quantitative easing and the monetary tightening that central banks will implement to check inflation will precipitate the collapse of overvalued firms and Ponzi schemes of all types. At this point, capitalism will have to find itself some other heroes (or some other comics).

P.S. If the collapse of bitcoin was one good story this spring, there was also another. This May, it was as if the World Economic Forum in Davos didn’t even take place; nobody paid it the slightest attention, and it hardly appeared in any news report. Before the pandemic, Davos seemed like the yearly reunion of the masters of the universe. Its sumptuous choreography suggested that movie stars and heads of state were visiting the Alpine ski resort, rather than capital’s bureaucrats and paper pushers. By contrast, this new sobriety is a breath of fresh air. Meagre consolation in the face of the war, perhaps, but still a small glimmer of hope.

Translated by Francesco Anselmetti.

Read on: George Cataphores, ‘The Imperious Austrian’, NLR I/205.


Gary’s Inferno

There is only one thing you need to know about American democracy: it does not exist. Using data collected on 1,779 policy issues between 1981 and 2002 – well before Citizens United v. FEC made corruption First Amendment-protected speech – a 2014 study by two professors of political science at Princeton concluded that ‘the preferences of the average American appear to have only a minuscule, near-zero, statistically non-significant impact upon public policy’. This finding has significant downstream consequences for all aspects of American life, not least because it is largely unacknowledged, even as its effects, to varying degrees of injuriousness and fatality, are felt by 99% of the population.

The sovereignty of the ‘average American’ is outbid by economic elites as a matter of course, not excluding those supposedly divisive issues (e.g., abortion, gun control, single-payer healthcare) on which there is in fact bipartisan consensus in the electorate, but that does not mean he has become depoliticized as a result. On the contrary, he is more politically engaged than ever before – only his political activities consist in impotently watching cable news, posting about it on social media, arguing with family and friends during the holidays, decking out his lawn and bookshelf with totemic merch and PayPaling donations to whatever politician or cause has most recently shoved its cup into his diminished span of attention. Yes, he sometimes goes to rallies, protests, city council meetings, or even the ballot box, but these activities, whatever he may believe, have become ends in themselves. ‘Politics is downstream from culture’, one of the more astute ghouls recently uncorked in America liked to say, and it is true that the country’s myriad cultural pathologies give its politics their particularly rancid flavour. But the real takeaway from the Princeton study is that, in the daily experience of the average American, politics is culture, culture is politics, and – with one class of exceptions – never the twain shall meet.

Gary Indiana is best known for his tenure as the sharp-tongued, hard-to-impress art critic at the Village Voice during the last gasp of the counterculture. His insight into this state of affairs is that insofar as the US has become a ‘televised democracy’, it may, like any other aesthetic phenomenon, be reviewed. Now 72, Indiana – born Gary Hoisington in the ‘factory world’ of Derry, New Hampshire in 1950 – has enjoyed an accelerating renaissance since the 2015 publication of I Can Give You Anything But Love. The utterly unsentimental ‘anti-memoir’ touches on his time in Berkeley in the late 60s, the LA punk scene of the 70s, and the downtown art world of the late 80s hollowed out by financialization and decimated by AIDS and the ‘depraved indifference’ of the Reagan and Bush administrations. Semiotext(e) and Seven Stories Press reprints of his collected Voice columns (1985-1988), his early novels Horse Crazy (1989) and Gone Tomorrow (1993), as well as his true crime trilogy Resentment (1997), Three Month Fever (1999) and Depraved Indifference (2002), have followed and been greeted with increasing interest. American readers born in the 80s, in particular, have been drawn not just to his nuclear-grade pithiness, his brazenly queer and bohemian narrative persona and his marriage of the techniques of American New Journalism and French avant-garde fiction, but also to the refreshing absence, rare among members of his generation, of nostalgia and apologetics in his accounts of the political events that have formed the pre-history of their lives.

Fire Season, an eclectic new selection of thirty-nine essays from 1984 to 2021, spans my own lifetime, give or take a few months on either end. It makes a compelling case that the window on American democracy closed sometime before I became a teenager: between Bill Clinton’s surprising second-place finish in the New Hampshire primary on 18 February 1992 and the opening of the assisted suicide trial of Dr. Jack Kevorkian on 20 April 1994. In that period, Indiana filed five pieces for the Voice – ‘Northern Exposures’, ‘Disneyland Burns’, ‘Town of the Living Dead’, ‘LA Plays Itself’ and ‘Tough Love and Carbon Monoxide in Detroit’ – that deserve to be regarded as classics of cultural reportage and travel writing. When paired with the more recent art, film and book reviews collected in Fire Season, they connect, as Christian Lorentzen writes in his introduction, ‘the twentieth and twenty-first centuries in ways readers and critics are only beginning to apprehend’.

In ‘Northern Exposures’, Indiana returns to the towns of the Granite State where he grew up to sit in reconverted porn movie houses, rooms that look like furniture showcases and Anglophile prep school auditoria alongside the ‘blue rinse jobs with ropes of synthetic pearls’, ‘Alan Alda types’ or else the ‘dewlapped, earnest preppies’ who have shown up to hear the mercenary, delusional or deranged pitches of half-a-dozen Presidential hopefuls (five Democratic, one Republican). Throughout Fire Season, Indiana shows himself to be a landscapist worthy of Bosch and a portraitist worthy of Francis Bacon: he paints with a rich palette of displeasures whose pigments range from the scatological to the refined. What makes the candidates and the people who will vote for them symbiotic is not only that they are grotesque, hideous, odious, scabrous, tacky – to use some of Indiana’s favourite epithets – nor simply – with the exception of Pat ‘Caliban’ Buchanan, who is ‘tediously, exactly’ the frightening bigot and sexual hysteric he appears to be – that they are fake. It is that, like pink urinal cakes in a football stadium bathroom, their half-hearted attempts at concealment have only made everything smell worse. Critics are often praised for their visual abilities and Indiana’s eye for the revealing detail is second to none, but the moral sense is in the nostrils, and New Hampshire reeks of something that has ‘crawled up in you and died’.

New Hampshire’s vices are those of the nation: self-pity (they ‘regard themselves as the only true victims of history’), buck-passing (they ‘admit nothing’ and ‘blame everybody’) and provincialism (they refuse ‘to learn from the larger world’). ‘Of course people are “hurting”’, Indiana writes, giving the campaign cliché the scare quotes it deserves – ‘you usually do hurt after shooting yourself in the foot’. But to shoot oneself in the foot is still a form of agency. If only someone would inject the readership of the Union Leader with a journalistic cocktail of rabies vaccine and truth serum, the logic goes, the residents of this ‘backwards but perhaps not entirely hopeless state’ might start to acknowledge the ‘bad choices’ they have made, replace the ‘bad leaders’ they have put into power and actually fix problems that do not fail to repeat themselves, no matter how bitterly they are complained about.

The degree of civic optimism this presupposes, however minuscule, is no longer in evidence when Indiana flies to Paris a few months later to visit the Euro Disney resort in Marne-la-Vallée. An ‘obvious expression of cultural imperialism’, he writes in ‘Disneyland Burning’, Euro Disney is – no less than New Hampshire – a microcosm of the nation that made it, and not just in the literal sense that you can find Carnegie Deli and Big Bob’s Country Western Saloon there, or even in the sense that Mickey, Minnie et al. are ‘genuine American archetypes’. A merger of state and corporation, policed by a private security force backed up by gendarmes and staffed by ruthlessly exploited labourers, the park comprises six ‘lands’ whose ‘superficially varied’ architectural styles ‘articulate…a mode in which any escape from cliché has become impossible’ and ‘presume a universe in which human beings no longer have any minds at all’. Indiana estimates that it would take two hours to make a complete tour of the park, but the average visit requires an outlay of three vacation days, because you will spend most of your hard-earned leisure time waiting in line for rides and restaurants while being bombarded with advertisements telling you and hundreds of others how great the experience you’re not having is. Like the off-brand version in Branson, Missouri that he will later describe in his essay ‘Town of the Living Dead’, what Euro Disney offers visitors is the slow suicide of this ‘alienated duration’, mort à crédit. (Later, making mental notes for a piece on the artist Barbara Kruger, he will say, apropos of Walmart: ‘You can get anything you want at Walmart. The fact that you want it means you are already dead.’) Indiana is not the first to compare Disney’s franchises to concentration camps, but what makes them red-white-and-blue is that you have to pay to get in.

On this description, American culture is something more sinister than merely the means by which political power obfuscates its workings; it is the soft adjunct of a killing machine that reaches its telos in the carbon monoxide pumped through a rubber hose and into the mask held over your face by kindly Dr. Kevorkian. Just as Disney promises the ‘time of your life’ only to deliver dead time, Kevorkian’s ‘Kmart kind of suicide for a democratic era’ promises a dignified death, only to ‘surrender the last remaining mystery to faceless consumerism’. These days, the importance of aesthetic concerns is routinely downgraded in favour of more obviously political ones, but in a country where, as he writes in the Kruger essay, ‘democracy = capitalism = demolition of utopia’, matters of taste and tastelessness are far from irrelevant to the question: how should we live?

If it is objected that taking Disney and Kevorkian as the alpha and omega of US culture is to cherry-pick from the bottom of the barrel, Indiana does not let its putatively higher precincts off the hook either. The American literary world will later be excoriated for its culture of ‘careerism’ and ‘fatuous self-promotion’ in his introduction to the French transgressive novelist Pierre Guyotat’s memoir Coma. Behind the ‘costume of authenticity’ worn by the first-person narrators of today’s ‘bourgeois literary writing’ – whether memoir, food writing or autofiction à l’américaine – ‘lies the mercantile understanding that a manufactured self is another dead object of consumption…a “self” that constructs and sells itself by selecting promotional items from a grotesque menu of prefabricated parts’. To write such a book is to indulge in the provincialism of personal identity; to read one is to be given yet another anaesthetizing hit of ‘cultural morphine’.

Meanwhile, some thirty miles north of the Disney mother ship in Anaheim, the only genuinely democratic event that can occur in a society with no means of peacefully translating popular will into public policy took place. Following the acquittal in the state trial of four LAPD officers charged with the use of excessive force in the beating of Rodney King, thousands of people staged a massive six-day insurrection (or, if we must, ‘riot’) in South Central Los Angeles and Koreatown, which led to over 60 deaths, 2,000 injuries, 12,000 arrests, and $1 billion of property damage. Covering the federal trial a year later in ‘LA Plays Itself’, Indiana does not fail to note that the father of Laurence Powell – the officer with the ‘put-upon, porcine expression of a slow-witted high school bully’ who broke King’s leg with his baton – ‘usually sports one of three differently coloured Mickey Mouse ties’. Nor does he fail to observe that although every politician, including Bill Clinton, who could get in front of a TV camera between 29 April and 4 May 1992 described the insurrection as a ‘wake-up call’, the underlying juridical and socio-economic conditions – the analogy Indiana reaches for is apartheid – that led to it hadn’t ‘changed an iota’ since. What had changed? Gun sales. Police budgets. The ‘heavier application of cosmetics to a festering wound’.

‘The model really appears to be the old patronizing thing, corporations coming down, helping out, chipping in a little bit, rather than long-term stimulus’, LA Weekly reporter Ruben Martinez tells Indiana. ‘Given that the economic outlook is still piss-poor, and that that’s what set people so much on edge, how can you think there’s not going to be another riot eventually, whether it’s after the trial or some other occasion?’ Fast forward three decades, pausing the tape in Ferguson, Baltimore: the cover of Fire Season features a painting by Sam McKinniss, one of the artists reviewed in it, of an NYPD squad car that got torched during the uprising that followed the murder of George Floyd by police officers in Minneapolis, while then-Presidential candidate Joe Biden – who helped author the 1994 legislation that would give the US world history’s largest gulag – was in Philadelphia calling the murder a ‘wake-up call’.

‘Do I honestly “believe” in democracy?’ Indiana asks himself in his Obama-era travelogue ‘Romanian Notes’, channel hopping between coverage of Tahrir Square and James Clapper’s testimony to the House Intelligence Committee about PRISM, a surveillance program so extensive it would have made the Securitate, Ceaușescu’s secret police, turn green with envy. The trap for a critic of such unrelenting negations is cynicism; however close he edges to despair, Indiana does not fall into it. Cynicism, after all, is just the flip side of the coin of ingenuousness, a sign that one has lost the ability to make distinctions. Although Indiana knows that democracy is ‘irrelevant’ to the people who run the US and a ‘joke to the people who own it’, he also knows that when you look into the abyss, the abyss looks into you. For critique – of an artwork, of a society – to be meaningful it must be undertaken, at least implicitly, in the name of a preferred alternative. When his wallet is stolen by a Bucharest taxi driver in a companion essay, ‘Weiner’s Dong, And Other Products of the Perfected Civilization’, Indiana is surprised and indignant to learn how many details of his personal life the customer service representative at Chase is able to access based on ‘publicly accessible information’. Surprise and indignation are emotions you are capable of feeling only if you believe things ought to be otherwise.

Fire Season is a vision of hell, but just as every Inferno must have its gradations of offence, it ought to have a place in it for virtuous pagans. The book’s heroes are, first of all, the reporters: Alisa Solomon of the Village Voice, Ed Leibowitz and Ruben Martinez of LA Weekly, Masha Gessen, Anna Politkovskaya. What earns them deserved praise is, quite simply, that they tell the truth. Truth is a much-abused concept in our time; what Indiana means by it is less our gamified sense of fact-checking than an ethos of candour: to ask questions others will not, yes; to point out inconsistencies and outright lies, yes; but also to refrain from omitting ‘complicating facts, mitigating causalities’ from one’s account, or from larding it with ‘exaggerations’. It was for her candour about the Second Chechen War that Politkovskaya was shot four times, once in the head, in a contract-style killing whose timing suggests that it doubled as an obscene birthday present for Vladimir Putin, Ramzan Kadyrov, or both. ‘There is no need for the truth anymore’, a Chechen war widow says in a documentary Indiana watches in ‘I Did Not Know Anna Politkovskaya’, a reflection on the art and function of journalism, the profession for which she gave her life. ‘That is why they killed her’. The widow is right in one sense – despite what Politkovskaya uncovered in Grozny two decades ago, Putin and Kadyrov are, as I write, savagely burnishing their résumés as war criminals with the blood of Ukrainian civilians. But we also need the truth, and always will, because it is essential to human dignity. Without it, we are already dead.

The book’s other heroes are its artists (Louise Bourgeois, Tracey Emin, Kruger, McKinniss, Andy Warhol), its filmmakers (Robert Bresson, Luis Buñuel, Pier Paolo Pasolini, Jean-Pierre Melville, Barbet Schroeder) and its many writers (Renata Adler, Samuel Beckett, Anya von Bremzen, Jean Echenoz, Guyotat, Jean-Patrick Manchette, Mary McCarthy, Paul Scheerbart, Unica Zürn). What by and large unites this somewhat disparate group is a certain sensibility: a fatalism that does not lead to resignation, a stoicism that does not preclude sympathy, an ironism that is tolerant of human folly, a tragicomic sense of life that is born of unperformed familiarity with grief, psychic extremity or violence. It is a sensibility that Indiana, as a reviewer of their work, shares. As with his reportage and his true crime novels, the idiom of his criticism is as much at home in the gutter as in the firmament; it is informed by the conviction that these are the zones of lived and aesthetic experience, however painful they may be to occupy, where something of value can be wrested from a cruel and cretinizing late capitalist social order.

It is noteworthy – but should come as no surprise – how few of the names on the above list were born in the US and how little of the work for which they are best known was done during the years 1984-2021. This sensibility may be diametrically opposed to the one laid out in ‘Northern Exposures’, yet in a way it is also a kind of shooting oneself in the foot. Over and over again, Indiana compares the experience of such art to a symbolic mutilation: Guyotat ‘spoils the flavour of bourgeois literary writing’ and ‘exposes [the class system’s] corruption of feeling’; Bresson ‘ruins one’s taste for mediocrity’, like a cigarette put out on the tongue; Kruger’s work is ‘the ruin of certain smug and reassuring representations, the defacement of delusion’, such as the one about living in a democratic society. The function of good art – and by extension criticism practiced as an art – is to render its audience unfit to serve as an extraction site for the cultural killing machine. Fire season, as I’m sure you’ve noticed, is every day now. This is what we’ll need if we’re going to survive.

Read on: Alexander Cockburn, ‘Dispatches’, NLR 76.

Mithridatisation

In September 2020 Sir Geoffrey Nice announced the creation of the Uyghur Tribunal to ‘investigate China’s alleged Genocide and Crimes against Humanity, against Uyghurs, Kazakh and other Turkic Muslim populations’. On 23 March last year, 17 British MPs signed a parliamentary motion condemning the ‘Atrocities against the Uyghurs in Xinjiang’. On 6 May, the House Foreign Affairs Committee held a hearing entitled ‘The Atrocities Against Uyghurs and Other Minorities in Xinjiang’. Between October and December, ‘atrocities’, ‘genocide’ and ‘crimes against humanity’ filled the pages of the international press from the Guardian to Turkish dailies to Ha’aretz. On 20 January this year, a majority in the French parliament ‘officially recognised the violence perpetrated by the People’s Republic of China against the Uyghurs as constituting crimes against humanity and genocide’.

The words ‘atrocities’, ‘massacres’, ‘genocide’, ‘ethnic cleansing’, ‘torture’ and ‘crimes against humanity’ are used interchangeably in such denouncements. In other cases, reference is often made to ‘war crimes’. These terms have become so embedded in the news cycle that they scarcely induce any reaction at all. Their routine inflation weakens their capacity to appal, to stir, even to prompt reflection.

We rarely pause to consider that up until the end of the nineteenth century, such categories were alien to political discourse. They were exceedingly rare objects of moral indignation (see, for instance, Bartolomé de las Casas on the massacre of the indios), which had not yet solidified into justifications for political or military intervention. Nobody had ever been convicted of ‘war crimes’. Acts committed during war were never considered more culpable than the war itself. Vanquished enemies were enslaved or deported, but they weren’t cast as criminals; defeat – and everything it implied – was punishment enough.  

The substantive difference between ‘war crimes’ and ‘atrocities’ is that the former are tried and convicted once a war is over, as a sanction against the defeated party and legitimation for the victors. ‘Atrocities’, on the other hand, are more often mobilized in the interest of waging war; they are one method by which modernity constructs an enemy. The same act can be defined as an ‘atrocity’ before a war is declared and a ‘war crime’ once a war is over. The Uyghurs are certainly persecuted and oppressed by the Chinese state, but the persistent use of ‘atrocities’ by the Western security establishment is a semantic escalation that signals a political transition: away from peaceful diplomacy, towards New Cold War confrontation.

Before the Enlightenment, jurists used the word ‘atrocity’ when discussing punishment – though never with critical intentions, Foucault tells us, for the atrocity of the supplice (torture, quartering) was seen as proportionate to the atrocity of the crime, a formula which laid bare a specific conception of power: ‘a power exalted and strengthened by its visible manifestations…for which disobedience was an act of hostility…a power that had to demonstrate not why it enforced its laws, but who were its enemies, and what unleashing of force threatened them’. It was to eliminate this form of punishment that the Enlightenment introduced a new conception of atrocity, to substitute forms of retribution ‘not in the least ashamed of being “atrocious”’ with ‘punishment that was to claim the honour of being “humane”’. Atrocity was ‘the exacerbation of punishment in relation to the crime’, a novel surplus, irreducible to the accounting of crime and retribution, an excess in relation to the existing economy of infringement.

Another century was needed, however, for the atrocity to acquire its political definition. The first deployment – to my knowledge – of the term ‘atrocity’ by a Western statesman (who, in fact, used it as evidence of a ‘just’ cause for a possible war) was in an invective Gladstone addressed to the Ottomans in 1896: ‘…this is not the first time we have been discussing horrible outrages perpetrated in Turkey, and perpetrated not by Mohammedan fanaticism, but by the deliberate policy of a Government. The very same thing happened in 1876’, but the Sultan’s government ‘declared that there were no atrocities, no crimes committed by Turks or by the agents of the Government’. Should the Sultan continue to commit these crimes and massacres, Gladstone concluded, ‘England…should take into consideration means of enforcing, if force alone is available, compliance with her just, legal, and human demand.’

It wasn’t just any politician, then, who inaugurated the discourse of ‘atrocities’. For over forty years (1852-94), Gladstone dominated British politics (he was Prime Minister for 13 years, Chancellor for another 13, and leader of the House of Commons for 9). It was Gladstone, above all, who invented humanitarian imperialism, or ‘liberal imperialism’ as it was then known, whose heyday would come in the American Century.

Why did atrocities hardly figure as an issue in the preceding fifty centuries? Because they were taken for granted. It was common knowledge that power kills, tortures, sweeps away. Nobody threatened to wage war on Charles V for the ‘atrocities’ committed by the Landsknechte when sacking Rome (1527). Before the second half of the twentieth century, the United States never even questioned the genocide of Native Americans, whose victims numbered in the tens of millions.

Nowadays, atrocities mark the limit of acceptable or legitimate violence. Indignation towards them has become a key part of political etiquette – a means of demonstrating one’s respect for the rules of war, just as one would display one’s manners in a distinguished salon. Like all etiquette, this involves a great deal of hypocrisy. Outrage at ‘excessive violence’ serves to soften or conceal the ubiquitous violence described by Nietzsche, whereby humans inflict harm simply because they can. Exhibiting concern for atrocities is a means of civilizing the struggle for global power, as if a more ennobled form could somehow change its content. This discourse has the effect of reintroducing an ideological element to war that had been largely absent since the Peace of Westphalia (1648). Peter the Great of Russia fought the Swedish king Charles XII not for ideological reasons, not for civilization, not for good to prevail over evil or to put an end to any genocide, holocaust or massacre, but simply and purely to accrue more power.

The principle according to which only just wars should be fought – or, better still, that a war should first be ‘rendered just’ before it is waged – is a somewhat bizarre and thoroughly modern idea, rooted in the confluence of three long-term tendencies. The first is the Reformation, with its exigency for a redemptive motive behind every human action (even profit-making). The second is colonialism, and the notion that wars against the colonized served to civilize them (what Kipling famously called ‘the white man’s burden’). Third is the emergence of public opinion. For it is before this audience that atrocities are paraded in order to justify moving against a constructed enemy (the absence of ‘public opinion’ is another reason why the issue had never been raised in preceding millennia). The atrocity must create a scandal, otherwise it’s ineffective. From this perspective, the politics of atrocities is a symptom of mass communication: first newspapers, then radio and TV, now social media.

In the 1890s, Britain witnessed the triumph of popular newspapers: from 1854 to 1899 the number of dailies in London grew from 5 to 155. Millions of readers were suddenly distressed by stories of atrocities taking place in exotic countries: dark skins, naked bodies, violence. It’s no coincidence that the first great revelations of this type came from the Congo, then the Amazon: atrocities against the ‘savages’. In 1885, the Berlin Conference assigned the Congo to the Association Internationale Africaine, an ante litteram NGO – or ‘humanitarian’ association – which had once employed the famous American explorer Henry Morton Stanley (‘Dr Livingstone, I presume?’), and was controlled by the Belgian King Leopold II. This stretch of property measuring 2.6 million km² was intended to resolve the rivalry between two major colonial powers in Africa, Britain and France (the birth of Belgium in 1830 was a consequence of the defeat of Napoleon, cutting off France’s northeastern provinces, which were integrated into Belgian Wallonia). It’s not surprising, then, that the campaign against the Belgians’ atrocities in the Congo flared up at exactly the moment Gladstone delivered his speech, nor that it was fanned by the Anglophone press.

When the British government commissioned the Irish diplomat Roger Casement to write a report on the matter, completed in 1904, the document served to establish the rhetorical canon for all future reports on atrocities: accounts of genocide, famine, forced labour, imprisonment, torture, rape, mutilation. One particular episode, reinforced by photographs, struck contemporaries’ imagination: the hands of dead enemies were amputated, so that local conscripts to the Force Publique (the Congo’s military police) could prove that they had really used their bullets, rather than pocketing them. The report enjoyed a global reception, thanks also to Mark Twain’s King Leopold’s Soliloquy (1905) and Arthur Conan Doyle’s The Crime of the Congo (1909). As a result, in 1908 the Congo was transferred from Leopold’s private holdings to public ownership by the Belgian state.

Casement also wrote the second great report on atrocities: those committed in the Peruvian Amazon’s Putumayo region, where he was sent to investigate in 1910-11, as the company that held the right to exploit the area’s rubber was registered in London and hired Barbadian labour – that is, British subjects. Here, too, the report documented mistreatment, malnutrition, forced labour, rape, murder, amputation, torture. In 1911, Casement was knighted for his findings. On the extraordinary life of this figure, who went from international human rights superstar avant la lettre to concluding his earthly sojourn on the gallows at Pentonville, two texts are worth reading: Colm Tóibín’s ‘Roger Casement: Sex, Lies and the Black Diaries’ (2004) and Mario Vargas Llosa’s The Dream of the Celt (2012).

A great contributor to the dissemination of Casement’s report on Putumayo was the then British ambassador in Washington, Lord James Bryce, noted in the United States for his book The American Commonwealth (with its lapidary verdict: ‘in Latin America, whoever is not black is white, in German America, whoever is not white is black’). With the advent of the First World War, it was Bryce whom London would entrust in 1915 with the task of compiling a report on German atrocities committed in Belgium. The Bryce Report collected testimonies of various ‘outrages’ perpetrated by German soldiers, but global public opinion was particularly inflamed by one specific case, so much so that it would be cited by states that had until then remained neutral – Italy, the US – as justification for entering the war. The report stated that ‘a third kind of mutilation, the cutting of one or both hands, is said to have taken place frequently’. A form of historical lex talionis: a decade earlier, photos had circulated showing the Force Publique practising the very same punishment, now visited upon the Belgians who had introduced it.

The truth is that even if the Germans committed countless ‘outrages’, these specific accusations eventually proved unfounded, though they were still taught in French elementary schools in the 1930s. This leads us to the problems involved in campaigns against atrocities. For one, do they correspond to reality, or do they bend it to their advantage, in order to make the bad guys look a little worse? What’s more, not all atrocities become an object of scandal. Despite its similarity to the very worst of the Putumayo incursions, the British colonists’ hunting of Aboriginal people in Tasmania never generated comparable clamour.

Finally, efficacy. Sometimes scandals provoked by atrocities prove to be sharp tools. King Leopold’s crimes forced the Belgian state to take the reins of sovereignty in Congo; German atrocities in Belgium facilitated the entry of neutral powers into the war; the Nanjing massacre of December 1937 prepared American public opinion for war against Japan; the My Lai massacre in March 1968 accelerated Americans’ revolt against the Vietnam War; the atrocities at Srebrenica in the summer of 1995 built the foundations of the anti-Serbianism that precipitated intervention in Kosovo in 1999.

Yet there are just as many incidents that produce no such results: after the Putumayo ‘scandal’, Peru wasn’t sanctioned, and indigenous Amazonians continued to be oppressed, if more discreetly. In Rwanda, after the massacres of 1994 everything was forgotten. In such cases, the horrors registered in photographs and documentaries were initially transformed into a sort of monster by which humanity was transfixed, eager yet impotent to fathom the sheer quantity of evil for which it was responsible. The scandal became an occasion for contemplating the heart of darkness inside each of us. But it also induced an inurement to horror. An unintended consequence of the proliferation of campaigns against atrocities has been a kind of mithridatisation, in which we all become peaceful cohabitants with monstrosity.

Translated by Francesco Anselmetti.

Read on: Tor Krever, ‘Dispensing Global Justice’, NLR 85.