Sceptical Credulity

They looked at me with a benevolent smile, almost pitying my credulity, my capacity to be fooled. This person, whom I met by chance, was in their sixties, had taught at the Sorbonne and published several books. They immediately told me they would never get the Covid vaccine. They smiled when I objected that over the course of their life they had unthinkingly accepted over a dozen vaccines, from smallpox to polio, and that to enter a whole host of countries every one of us has been inoculated – against tetanus, yellow fever and so on – with relative serenity. ‘But this vaccine isn’t like the others,’ they replied, as if privy to information from which I had been shielded. At this point I understood that there was nothing I could say to shake their granitic certainty.  

What struck me most, however, was their scepticism. I knew that if I pursued the conversation, at best we would end up on government deception and Big Pharma, at worst on conspiracy theories about the microchips Bill Gates is supposedly implanting in the global population. Here we’re faced with a paradox: people believe in extraordinary tales precisely because of their sceptical disposition. Ancient credulity worked in a completely different manner from its contemporary equivalent. It was shared by the highest state authorities – who typically employed court astrologers – and the most downtrodden plebeians. Inquisitors believed in the reality of witchcraft, as did commoners, as did some of the accused witches themselves. In one sense the occult still functions this way in certain parts of postcolonial Africa, where the political class relies on the same rites as ordinary citizens, using witchcraft to perform some of the operations that are the purview of public relations departments in the so-called developed West. (Peter Geschiere’s 1997 study of the topic remains instructive: The Modernity of Witchcraft: Politics and the Occult in Postcolonial Africa.) But, by and large, the modern world has given rise to a form of superstition that is accepted in the name of distrust towards the state and managerial classes.

Naturally, we have ample reason not to trust the authorities, even when it comes to vaccines. The journal Scientific American once lamented the impact of the fake hepatitis B vaccination campaign organised by the CIA in Pakistan with the aim of discovering Bin Laden’s whereabouts, which ultimately led locals to boycott initiatives to vaccinate children against polio. We know of efforts to purposefully garble reports on the carcinogenic effects of glyphosate – the world’s most widely used herbicide – to tame the ire of its manufacturer, Monsanto. And let’s not forget the decades in which the dangers of Teflon were hushed up, whilst we cooked (and continue to cook) with coated pans. Nor can we ignore the authorities’ cynicism: between 1949 and 1969 the American armed forces conducted 239 experiments in which pathogenic germs were introduced amongst unknowing populations. In 1966, for instance, bacilli were released into the New York subway to study their spread.

Scepticism towards authority is the basis of modern Enlightenment rationalism. The anti-vaxxers, one must concede, are enacting the very process which permitted science to develop: refusing the principle of authority, rejecting the ipse dixit (ipse here no longer referring to Aristotle, but to the titled and legitimated scientist), upholding the principle that a theory is not true simply because it is espoused by an expert at Harvard or Oxford.

But here we’ve already begun to slide into the unintended consequences of sceptical thinking. We cannot disavow the liberatory force of the suspicion that religion was invented as a disciplinary tool, as insinuated by Machiavelli in the 16th century. It was this distrust that came to animate the tradition of libertinism (Hobbes and Spinoza were both suspected of inspiring libertines, perhaps because they were considered crypto-atheists), as well as the theory of The Three Imposters, which held that Moses, Jesus and Muhammad were tricksters who had feigned their divine knowledge to keep the masses in check:

Neither God, nor the devil, nor the soul, nor heaven nor hell are anything like how they are depicted, and all theologians – those who disseminate fables as divinely revealed truths – with the exception of a few fools, act in bad faith and abuse the credulity of the people to inculcate in them what they please.

(Traité sur les trois imposteurs ou la vie et l’esprit de monsieur Benoit de Spinoza [1719])

The radical potential of this statement is clear, but it must be noted that it is also the first known espousal of a systemic conspiracy theory. Its scepticism has a fideistic quality. The ambiguity it illustrates can be traced back to the Renaissance, which laid the foundations of modern rationalism and simultaneously found a faith-based solution to Catholic fallacies: the Protestant Reformation. Renaissance doubt goes hand-in-hand with mystic fervour; Erasmus of Rotterdam, Pietro Pomponazzi and Machiavelli are coeval with Thomas Müntzer, Calvin and Michael Servetus. Hence, incredulity had already become a politico-religious problem in the 16th century, as the title of a seminal study by the Annales historian Lucien Febvre suggests: The Problem of Unbelief in the Sixteenth Century: The Religion of Rabelais (first published in 1942). We can therefore understand how beneath vaccine scepticism lies an oftentimes ferocious intolerance, for this group of unbelievers structures itself like a sect. (Tara Haelle has reconstructed, rather interestingly, the way in which the anti-vax movement fashioned itself as a healthcare Tea Party in a recent article for The New York Times.)

But there’s more: the ruling class that squawks in horror at the superstition of its subjects is far from innocent itself. For the majority of people, science and technology have a magical quality, in that there is an obvious imbalance between the effort one puts into an action and its result. Uttering a spell, ‘open sesame’, for instance, needs little exertion, yet this is sufficient to move a large boulder blocking the entrance to Ali Baba’s cave. There is no cost input in reciting incantations that allow you to extract gold from stone. In the world of magic, the limits imposed by nature are no longer valid; you can fly on a broom or see what goes on in distant places. And what exactly do aeroplanes, cars, radars do? The Ring of Gyges and Aladdin’s lamp have become patented products, churned out by assembly lines and sold in supermarkets. If magic is a shortcut which covers great distances by way of an easy path (press a button and darkness disappears, press another one and you speak with people far away, yet another and you see what’s happening on the other side of the world), then the entirety of scientific and technological civilisation amounts to sorcery, even more so given that the vast majority of humans are unaware of the mechanisms by which this magic operates. Like the wizard of old, the modern scientist is a keeper of arcane knowledge. Few among us have even a vague idea of how a phone works, not to mention a computer. Naturally, there’s also the division between white (benevolent) magic and black magic, the latter causing ecological catastrophes and wars.

This enchanted dimension of modern life does not just derive from the fact that the bulk of humanity is kept in the dark about the functioning of the world of objects that surrounds it. The truth is that since the 1930s (and all the more so with the advent of the Second World War) the search for natural truth has changed gears. If research once possessed an artisanal quality (Enrico Fermi researched quantum physics in a Roman basement), now it has transformed into a veritable industry (almost 2,000 researchers work at CERN), and a costly one at that. The natural truths industry is financed by people, from politicians to CEOs, who know little about the projects they fund. An inverted relationship between researchers and donors has evolved in which the former, much like marketers or advertisers, must make constant promises that they will struggle to keep.

After the atomic bomb, physicists had an easy ride; they could dangle extravagant weapons – whose unachievable prototypes remain firmly in the realm of Star Wars – before state officials, who would readily cut their own citizens’ pensions to finance the field. With the end of the Cold War, the rivers of military funding began to dry up, and the marketing of research needed rethinking. For decades, NASA has tried to ‘sell the cosmos’, fostering the belief that a colony on Mars is possible (an absurdity given the current state of technology). It has also promised that with fresh funds it could shield the earth from an inbound asteroid.

No longer able to promise the moon, science has just one miracle left to unlock: immortality. Who would say no to that? Mark O’Connell’s extraordinary To Be A Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death (2017) contains plenty of Promethean, multi-billionaire entrepreneurs pursuing infantile dreams of cryogenic freezing pending resurrection. In 1992, the great physicist Leo Kadanoff wrote in Physics Today: ‘We are fast approaching a situation in which nobody will believe anything we [physicists] say in any matter that touches upon our self-interest. Nothing we do is likely to arrest our decline in numbers, support or social value.’

The result is that it’s more and more difficult for non-specialists to distinguish between science and pseudoscience – or between scientists and salesmen. This is because the latter very often mimic the former, but also because of the proliferation of ‘heterodox’ scientists – figures who possess all the trappings of scientific legitimacy (a PhD, publications in authoritative journals, membership of illustrious faculties) but who end up on the community’s margins, or even excommunicated. Andrew Wakefield’s Vaxxed (2016) claims that the Centers for Disease Control and Prevention (CDC) covered up the link between the MMR vaccine (measles, mumps and rubella) and the development of autism. The thesis was originally presented by Wakefield, then a respected gastrointestinal surgeon, alongside others in the eminent medical journal The Lancet. But the article was subsequently retracted, and Wakefield was struck off the medical register (though it seems a co-author of his was absolved of the accusation of scientific fraud). He has since been an anti-vax activist. Another disgraced scientist, Judy Mikovits – PhD in biochemistry, author of articles in Science, also accused of fraudulent practices – is the protagonist of two conspiracist documentaries from 2020: Plandemic: The Hidden Agenda Behind Covid-19 and Plandemic: Indoctornation.

These pariahs of the scientific community present themselves as new Copernicans facing an old Ptolemaic orthodoxy. They’re masters of all the formalisms of scientific research: bibliographies, diagrams, tables, footnotes. It’s understandable how they might sound convincing to those observing the commercialisation of the scientific-media complex from the outside.

I can confirm this disorientation with an anecdote. Shortly before his death, I went to interview René Thom (1923-2002), the founder of catastrophe theory, at a conference of physicists in Perugia. When I arrived, I discovered a meeting of physicists opposed to Einstein’s theory of relativity (nearly a century after it had been formulated in 1905), replete with papers presenting supposed flaws in the Michelson-Morley experiment (a key test of the theory), or in any case maintaining that its results could be explained by a host of other theories. I felt like I was participating in a clandestine meeting of some sect. I met European physicists who had been highly regarded in their field before they fell for a discovery that was proven false, and whose falsity they now struggled to acknowledge.

The close resemblance between science and pseudoscience – particularly in their relationship to funding, and therefore marketing – clarifies our recent difficulties in reasoning with anti-vaxxers, and why it seems almost impossible to break down the communication barrier without profound reforms to public education. For the latter, in its current form, is responsible for our present state of scientific, technological and mathematical illiteracy in an increasingly scientific, technological and digital world. Recently, in a large Roman market I saw an elderly man and woman converse across their respective vegetable stands. The man was an anti-vaxxer, and offered the argument that Covid-19 vaccines are dangerous and experimental. ‘Look who’s talking’, the lady replied, ‘all of you readily took Viagra without having the faintest idea of what it contained’.

A peculiar but highly significant case is that of Russia. Though it was the first country to register a Covid vaccine (Sputnik V), by 2 September 2021 only 25.7 per cent of the population had been fully vaccinated, and only 30.3 per cent had received at least one dose (compared with 58.4 per cent and 64.7 per cent respectively in the EU). As a result, daily deaths in Russia have continued to reach 800 (out of a population of 146 million). To be sure, Russians’ wariness of the government has played a role (from the Tsars to Yeltsin, Stalin to Putin, there has never been much to trust). Even in Moscow we see versions of the fantasies about Covid and the vaccine we’ve discussed, including the online theory (signalled to me by friends who read Russian) that ‘the virus was brought to earth by reptilian aliens who gained control of the earth in Sumerian times, and are responsible for creating the “Torahic religion”, and have now decided to curb the world’s population’, controlling humanity ‘via chips contained in the vaccine, in order to establish a new world order’. Amongst the reptilian humans are Obama, Putin and Biden (but not Trump).

But perhaps there is a more prosaic reason for Russian reticence towards the vaccine: Sputnik has not been recognised by Western (American and European) health authorities, invalidating it as a means to travel abroad. Many Russians maintain that if Sputnik permitted them to travel, there would be long queues to get vaccinated. Therein lies the power of bureaucracy, and of pharmaceutical companies’ commercial wars.

Read on: Marco D’Eramo, ‘The Philosopher’s Epidemic’, NLR 122.


Crypto-Politics

Philosophy in the so-called ‘analytic’ tradition has a strange relationship with politics. Normally seen as originating with Frege, Moore, Russell and Wittgenstein in the early 20th century, analytic philosophy was at first concerned with using formal logic to clarify and resolve fundamental metaphysical questions. Politics was largely ignored, according to the Oxford analyst Anthony Quinton, before the late 1960s. Political philosophy, in fact, was routinely pronounced ‘dead’ at the hands of the analysts – so dead that the tepid output of John Rawls (whose A Theory of Justice was published in 1971) could appear as a revival.

At the same time, analytic philosophers were not uninterested in politics. Bertrand Russell is an especially well-known case, but other figures such as A. J. Ayer and Stuart Hampshire – both supporters of the Labour Party (and in Ayer’s case, later the Social Democratic Party) and critics of the Vietnam War – were also politically involved. The reluctance to engage with politics in their professional capacities might thus seem to reflect not a lack of political interest, but a view of philosophy as a largely separate sphere. Russell, for example, wrote that his ‘technical activities must be forgotten’ in order for his popular political writings to be properly understood, while Hampshire argued that although analytic philosophers ‘might happen to have political interests, […] their philosophical arguments were largely neutral politically.’

While at times insistent on the detachment of their philosophy from politics – stretching to a pride in the ‘conspicuous triviality’ of their own activity that the critic of ‘ordinary language philosophy’ Ernest Gellner saw as requiring the explanation of social historians – the analysts at other times floated some quite strong claims as to the political value and potential of their own ways of doing things. As Thomas Akehurst has shown, ‘many of the leading British analytic philosophers of the post-war period made it clear that they took analytic philosophy to be in alliance with liberalism’ and ventured that certain analytic ‘habits of mind’ – in particular those associated with the ‘empiricist’ school dominant at that time – might offer a crucial protective against various forms of ‘fanaticism’.

It was less clear how this was supposed to work. The analysts’ own pronouncements on the relationship between their philosophy and politics seem to amount to little more than a declaration of a monopoly on openness, a boast of humility, a set of dogmatic and opaque assertions about the inhospitability of the analytic method to dogma and opacity (‘no-bullshit bullshit’, to borrow the moniker later bestowed on the so-called ‘no bullshit’ school of ‘analytical Marxism’ by its critics). ‘Empiricism is hostile to humbug and obscurity, to the dogmatic authoritative mood, to every sort of ipse dixit’, the Oxford philosopher H. H. Price had written as early as 1940: ‘The same live-and-let-live principles […] are characteristic of liberalism too.’ In a similar vein, in his 1950 essay ‘Philosophy and Politics’, Russell described empiricism (‘the scientific outlook’) as ‘the intellectual counterpart of what is, in the practical sphere, the outlook of liberalism’, and hailed Locke as the exemplification of the ‘order without authority’ allegedly central to both philosophies.

Between them, Russell and Price seem to be suggesting a series of links between 1. clarity; 2. the rejection of epistemic authority (amounting to a value of ‘thinking for yourself’); and 3. the rejection of political authoritarianism, a rejection which is assimilated to 4. liberalism, seemingly interchangeable with 5. democracy. Besides the looseness of the conceptual links in this chain, there are questions to be asked about its attachment to reality: about the relationship of these assertions by the analysts to actual analytic philosophy and philosophers, and to actual liberalism and liberals respectively. When Price lauds an intellectual culture ‘of free, co-operative inquiry, in which anyone may put forward any hypothesis he likes, […] provided it makes sense’ (which presumably means: makes sense to analytic philosophers), he evokes an image of a non-hierarchical, egalitarian discipline that is likely to strike those with experience of the culture of analytic philosophy as something of an idealization, to say the least. As for liberalism, Locke, and ‘order without authority’: what can you say, apart from, maybe, ‘Don’t mention the slavery’?

Here we touch on another aspect of the analysts’ inchoate claim to offer a protective against political vice or misadventure. This is the wariness of ‘ideology’ and ‘theory’ (especially of the ‘grand’ variety). As Akehurst parses it, the idea is that a ‘focus on the concrete saves the empiricist from following grand theories, metaphysical chimeras and other strictly meaningless exhortations to devalue reality in favour of dangerous ideological fantasy.’ The thought here is intuitive enough, and liberals are not the only ones to feel its force – something like it is a theme of much anarchist writing, for instance. Yet whatever is to be said about it, it cannot be so simple as ‘Theory bad, Reality good.’ Rigid adherence to ideals, theories or principles can play its part in the commission of atrocities. But, first, what entitles liberals to the assumption that it is only other people who have ‘theories’? And second, can’t there be a converse and equal danger in the lack of fixed principles? Is ‘never torture’ a dangerous piece of dogma? Is it better to cast off such ideological baggage and keep all options on the table, doing as the facts of the situation (as we see them) demand? Liberalism’s history, in any case, boasts its fair share of terrible deeds committed both in the service of ‘grand ideals’ (for what else can we call ‘Liberty’?) and in response to the more ‘concrete’ considerations of expedience and profit.

If on the one hand analytic philosophy has presented itself as politically neutral, it has also regarded this very neutrality – this supposed freedom from dogma and ‘ideology’ – as the guarantor of liberal political conclusions. Analytic political philosophy, which has flourished and proliferated since the later part of the twentieth century, has preserved this basic posture. I wrote about this in my first book, The Political Is Political. What struck me about analytic political philosophy as a student (and what my tautological title was meant to capture) was that the subject was at once paradoxically depoliticized and political in covert ways. Under the influence of Rawls in particular, political philosophers have had remarkably little to say about actual politics or history. Armed with a sharp distinction between ‘descriptive’ and ‘normative’ (or ‘is’ versus ‘ought’), characteristic of the analytic approach, they have judged that such matters are the appropriate domain of social scientists. By contrast, the distinctive business of philosophers is to concentrate on the articulation of abstract ‘principles of justice’, often in the form of ‘ideal theory’. The contemporary political philosopher Charles Mills, while a self-identified adherent of the liberal-analytic approach, has criticized ideal theory as a form of ‘ideology’ in a loosely Marxist sense: a distortion of thought that reinforces an oppressive status quo by sanitizing and distracting from its ‘non-ideal’ features (Mills emphasises, in particular, the history and legacy of slavery in the United States).

Not only do philosophers in this tradition neglect real politics, I argued, but they also operate with a series of ostensibly neutral or ‘common sense’ methodological values – ‘constructiveness’, ‘reasonableness’, ‘charity’ in argument – which invariably turn out, on closer inspection, to be interpreted in such a way as to favour liberal conclusions in advance, ‘begging the question’ against dissenting perspectives that might call for a more radical deviation from the political status quo. The injunction to ‘Be realistic!’, for example, commands virtually universal assent (who wants to be ‘unrealistic’?) but tells us nothing useful until filled out with some judgements as to what ‘reality’ is like and how it might be changed.

These, of course, are exactly the sorts of questions that are in dispute between people of different political outlooks. But what tends to happen in political philosophy, as in the wider world, is that taking account of reality is conflated with proposing less in the way of change, so that ‘realism’ and radicalism are assumed to be in inherent tension. Thus, while the ‘realist’ current in contemporary political philosophy is in one sense a challenge to the dominant approach (of ignoring real history and politics), it often ends up further entrenching a status quo bias, by equating ‘realism’ with small-c ‘conservatism’ and positioning ‘ideal theory’ as the peak of radical ambition.

Illusions of political neutrality, whether within the confines of academia or outside of it, are always deeply political, and usually conservative. (Think of the role that a faux-neutral, supposedly common-sense notion of ‘electability’ has played in British political discourse in recent years.) The answer, as I argued in the case of analytic political philosophy, is not to try to replace false neutrality with a ‘true’ one. The idea that this is possible, let alone desirable, is itself illusory. Judgements and assumptions about what is important, what kind of place the world is and what it could or should become – which is to say, political judgements and assumptions – are always and already embedded in the concepts and values we use to make and evaluate statements and arguments in philosophy and elsewhere. If there is an affinity between analytic philosophy and liberalism, it is perhaps in their mutual tendency to project this sort of illusion – to proceed as if their politics is no politics at all, just ‘realism’, or ‘common sense’. From this vantage point, the opponents of analytic philosophy and of liberalism can only appear respectively as obscurantists and fanatics. It’s no surprise, therefore, that some prominent analysts (Russell among them) became such ardent Cold Warriors.

The observation of G. J. Warnock, in 1958, that analytic philosophy was compatible with ‘a quite striking ideological range’ is clearly true, though Warnock also conceded that ‘there may be some deep-seated similarity of attitude and outlook’ not easily detectable to those who share in it. The political flavour of the school has been predominantly liberal, but it has not been exclusively so (think of analytic Marxists such as G. A. Cohen, or the radicals among the earlier ‘Vienna Circle’ of logical positivists). ‘Liberalism’, in any case, has meant (and continues to mean) different things to different people. Also like liberalism, ‘analytic philosophy’ is a slippery enough category to be resistant to any definitive characterization. It is not that analytic philosophy has any particular, fixed political content or valence. But its tendency toward ahistoricism and abstraction creates a vacuum at its heart, into which comes rushing, too often, too easily and too quietly, the dominant politics of the day. 

Read on: Lorna Finlayson, ‘Rules of the Game?’, NLR 123.


Avalanche of Numbers

In the last few weeks, a report has been circulating in the online fora of the ultranationalist Indian diaspora. Its author, Shantanu Gupta, an ideologue closely associated with Prime Minister Narendra Modi’s Bharatiya Janata Party, ‘tracked the coverage of the COVID-19 pandemic in India of 6 global publications – BBC, the Economist, the Guardian, Washington Post, New York Times and CNN – via web search results over a 14-month period’. His argument is that these outlets have distorted and exaggerated the effects of coronavirus in India. On what does Gupta base this thesis? On the fact that all these sources have used absolute numbers rather than cases per million. By the latter metric, we are told, ‘India is one of the better performing countries on the global map’. Here he is undoubtedly correct.

Countless times this spring we’ve seen the dramatic, record-shattering daily death counts from India, as it reportedly became the country with the third-highest Covid death toll in the world. A quick look at these records: deaths in India reached their highest level on May 18th, with 4,525 in a single day. The USA topped this morbid leaderboard on January 12th with a slightly lower figure: 4,466. The UK reached its peak on January 20th, with 1,823 daily deaths; Italy on December 3rd, with 993.

The problem is, India’s population stands at 1.392 billion. The USA’s is just 332 million, while the UK and Italy have 68 and 60 million respectively. If, then, we count deaths per million inhabitants, ranking the highest daily death counts yields quite different results: the UK holds a strong lead, with 28 deaths a day per million inhabitants; Italy is in second place with 17; the USA follows with 14; and India comes last, with just 3 per million. As for the total number of deaths per million since the beginning of the pandemic, the ranking is almost identical, the only change coming at the very top: Italy clinches gold with 2,091 deaths per million, followed by the UK with 1,873, the USA with 1,836, and India with just 243.
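The arithmetic is elementary, but worth making explicit. A minimal sketch in Python, using only the peak counts and population figures quoted above (the rounding is mine; small discrepancies with the figures in the text reflect the population estimates used):

    # Peak daily Covid deaths and populations in millions, as quoted above.
    peak_daily_deaths = {'India': 4525, 'USA': 4466, 'UK': 1823, 'Italy': 993}
    population_m = {'India': 1392, 'USA': 332, 'UK': 68, 'Italy': 60}

    # Deaths per million inhabitants on each country's worst day.
    per_million = {c: d / population_m[c] for c, d in peak_daily_deaths.items()}

    # Ranked by absolute numbers, India comes first; per million, it comes last.
    for country, rate in sorted(per_million.items(), key=lambda kv: -kv[1]):
        print(f'{country}: {rate:.0f} peak daily deaths per million')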

One might argue that Indian statistics are unreliable (a fair objection, no doubt), given the impossibility of accurately recording deaths in slums and other deprived areas. We now know that the true Covid death count in Peru was around triple the official figure. But multiply the Indian death count by four and it would still be lower than that of more developed countries with far higher per capita incomes such as the USA, UK and Italy.
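The same sketch extends to this robustness check, again using only the cumulative totals quoted above and a hypothetical fourfold undercount:

    # Cumulative Covid deaths per million, as quoted above.
    total_per_million = {'Italy': 2091, 'UK': 1873, 'USA': 1836, 'India': 243}

    # Suppose India's official toll understates the truth by a factor of four.
    india_adjusted = total_per_million['India'] * 4  # = 972

    # Even so, it remains below every one of the richer countries.
    others = [v for c, v in total_per_million.items() if c != 'India']
    print(india_adjusted < min(others))  # True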

So has the pandemic in India been a bed of roses, as Modi has repeated for around a year, and as Gupta still maintains? Not at all. Try selling this to the families brought to ruin buying oxygen tanks on the black market or rooms in facilities with ventilators, or to the millions of precarious workers sent back home on foot, without a penny or subsidy to speak of. Even if, epidemiologically, Covid has not hit India more violently than other countries, it nonetheless spelled catastrophe for the health service and the wider economy. The numbers presented to underscore India’s Covid ‘tragedy’ in reality told an entirely different story. They were a testament to the brutal inequality of Indian society and the awful state of its health service: underfunded, staffed with underpaid workers, and lacking all kinds of vital equipment.

India’s pandemic casualties are but a macroscopic example of how numbers can be made to say anything, often conveying the opposite of what they really mean. In this past year and a half we’ve been submerged, buried, asphyxiated by an ‘avalanche of numbers’, as the Canadian philosopher Ian Hacking terms it. In his exceptional The Taming of Chance (1990), Hacking examines the fervour for statistics that took hold of Europe in the 1800s, following the Napoleonic Wars and the Industrial Revolution. Statistics, he argues, were endowed with a double dimension: by the 19th century they had emerged as pillars of a new mode of governance, and they underpinned a colossal epistemological revolution in science (just think of statistical mechanics, the kinetic theory of gases, and the attendant appearance of unsettling concepts: first entropy, then the probabilistic interpretation of quantum mechanics). It was this ‘avalanche’ that gave rise to the human sciences. Modern sociology was made possible by the availability of statistical data; Durkheim couldn’t have written his foundational Suicide (1897) without the mass of information provided by censuses. Our contemporary image of human beings derives in large part from the means developed to count them – an image that omits all that cannot be counted or indexed.

Statistics – numbers, that is – were obviously the primary tool for enacting what Foucault termed ‘biopolitics’, a form of governance in which it is essential for the sovereign to know the average life expectancy, the mean age of marriage relative to level of education, the number of possible conscripts at any given time, how long the state would have to pay out a pension, and so on. But a discipline cannot be an instrument of government without becoming a weapon of politics. The manipulation of statistics was born alongside statistics itself, hence the unforgettable, lapidary maxim Mark Twain popularised in his Chapters from My Autobiography (1906), attributing it to Disraeli: ‘there are three kinds of lies: lies, damned lies, and statistics’.

This whole process gave rise to a new artform, as the rhetorical use of numbers ripened into a veritable ‘rhetoric of the number’. The media spectacle we’ve witnessed in the last 17 months has deployed this numerical rhetoric to the full extent of its powers: instructive, threatening, persuasive, dissuasive, distortive. It’s opportune, then, to examine this rhetoric a little more closely. We could begin by asking, as Jacques Durand did in his first, pioneering treatment of the subject: ‘What is a number? Is it a word amongst others, an integral part of language? Or is it a purely scientific object, extralinguistic in nature?’

We may not realize it, but we’re perennially bombarded by numbers used in what might be termed an ‘extra-arithmetic’ register: The One Thousand and One Nights, 2001: A Space Odyssey, Fahrenheit 451, The Seventh Seal, Ocean’s 11, Agent 007, 7-Up, 7-Eleven, the ‘number of the beast’ 666. In politics, the rhetorical use of numbers derives ‘from the fact that numbers and statistics – even from official sources – do not hold a mirror to reality but instead reflect, and deflect, it’. Precisely because of this function, numbers produce an effect of irrational persuasion. If I’m told that thousands of people have died in some disaster, I can accept this or I can remain sceptical, but if I’m told that the death count is 12,324, I must take it or leave it in toto. Numbers thus retain a persuasive force comparable to that of an image (‘worth a thousand words’). At the same time, numbers serve to decontextualize and absolutize.

We’re so inundated with numbers that we tend not to perceive just how much of the information we are presented with is superfluous and arbitrary. When we were told, for example, that Malaysia registered a record number of 5,298 cases on the 30th of January, nobody asked why this figure had been chosen. Why have we never been given daily updates on tuberculosis cases, even though every year it causes the death of around 1.7 million people? Think of the 1.4 million people who die every year in car accidents. Why are we not informed of road fatalities in Chile or the Philippines every evening? It’s curious that during Covid’s second wave all mention of deaths in nursing homes vanished; they literally disappeared from the mass media, yet the elderly continue to die in great numbers. The hypocritical sobs for the ‘tragedy of the elderly’ and the crocodile tears for ‘our grandparents’ have now been muted.

So, the first mechanism the rhetoric of numbers employs is the choice of which figures to include and which to omit. To declare the accumulation of total deaths, rather than the rate of mortality relative to the size of a given population, is a shining example of this numerical rhetoric. Rarely does the news – broadcast or print – use relative figures; it generally deals in absolute tallies. Less lethal but just as serious is the doctoring of statistics relating to work, with many curious stratagems at play in assessing the rate of unemployment (in the US, for example, people are not counted as unemployed if they worked just one hour in the previous week). Then there are the various figures of speech which we do not have sufficient space and time to analyse at present. With numbers you can use antithesis: ‘A €700,000 fine for missed payment of €1.20’, ‘Murder for €20’; tautology: ‘2021 is not 2001’; repetition: ‘in 12 days, with 12 bottles, your face is 12 years younger’; enumeration: ‘buy two, pay for one’ (a shoe advert), ‘one exceptional offer, two lifestyles, three advantages’; accumulation: ‘920 tonnes at 920 km/h’. This doesn’t even cover rhetorical devices that mix words and numbers. The list is long.

It’s evident from this brief excursus that numbers are not words like any other, nor are they completely extralinguistic signifiers. Strangely enough, their logic recalls (to come full circle) the use of Hindi words in English-language Indian dailies, which are replete with local terms that communicate what in English would be inexpressible. They represent an exotic solecism embedded in standard English and hark back to a shared local heritage. In everyday consciousness numbers surely enjoy a similar exotic fascination, owing partly to the general public’s imperfect grasp of arithmetic.

What remains to be evaluated is the intention behind this particular wielding of numbers. There’s little doubt here: general panic amongst the population was a poorly concealed – if not openly declared – objective of the pandemic response. I don’t wish to say that the pandemic wasn’t to be feared. I do, however, think that authorities around the world considered it necessary to instil a long-lasting panic in the general public so that much-needed lockdown measures and curbs on civil liberties would be accepted with such acquiescence. A well-dosed and administered media panic was, and still is, far less costly and intrusive than police measures. And for this aim the rhetoric of numbers has no match.

Translated by Francesco Anselmetti.

Read on: Marco D’Eramo, ‘Geographies of Ignorance’, NLR 108.


Canny Reader

The death of J. Hillis Miller, in February, marked the end of an astonishing period in American academic literary criticism – North American really, since the dominant figure, Northrop Frye, was born in Québec and taught in Toronto. The period might be said to start in 1947, with the publication of Frye’s first book, the Blake study Fearful Symmetry, and it yielded a body of work drawing on the kind of Continental resources – Marxism and psychoanalysis but also theology, linguistics, hermeneutics, and mythopoetics – that had been accorded little place by earlier formalist approaches. Miller, the author of twenty-five books, was rare among the central figures in devoting his attention to the study of the novel, from Emily Brontë to Ian McEwan. The arc of Miller’s career has been described by Fredric Jameson as ‘unclassifiable’, but in bald terms it was the story of a pair of Francophone mentors, Georges Poulet and Jacques Derrida, who washed up in Baltimore – more specifically, the campus of Johns Hopkins, where Miller taught from 1952 until 1972. Miller welcomed their interventions and ran with them, transforming himself into a leading exponent of two critical schools: one – phenomenology – that remains more or less pegged to its post-war moment, the other – deconstruction – with wider fame and implications, and a more contested legacy.

He was born in Virginia, in 1928, and raised in upstate New York – ‘definitely the boondocks’, he recalled. His mother was descended from the Pennsylvania Dutch; one of her ancestors, a Rhode Islander, had been a signatory to the Declaration of Independence. Miller’s father, himself the son of a farmer, was a Baptist minister as well as a professor and an academic administrator who promoted women’s higher education. But Miller’s upbringing wasn’t especially urbane. When he arrived at Oberlin, to study physics, he had never heard of T. S. Eliot. And when he moved to Harvard for graduate studies, he felt uncomfortable – out of step with what he called the ‘white-shoe tradition’. (Miller later wrote that he and his wife, Dorothy, thought of the shift from physics to English during his sophomore year ‘as a vow of poverty for us both’, adding, ‘That has not exactly happened’.)

The subject of Miller’s PhD was Dickens, not then an established area of academic study. ‘The idea was that a gentleman had already read these novels’, he told me, during an encounter in 2012. ‘You don’t have to teach them’. When Miller was hired at Hopkins, he was the department’s first Victorianist: ‘Until I came, everything stopped at 1830’. One effect of the appointment was Miller’s meeting with the Swiss critic Georges Poulet, a member of the Geneva School of phenomenology (though, in fact, he never taught in Geneva), who argued that literature embodies the ‘consciousness’ of the author. Miller’s dissertation, notionally supervised by the Renaissance scholar Douglas Bush – whose only comment was that Miller used ‘that’ when he meant ‘which’ – had been written under the influence of Kenneth Burke, especially his idea of symbolic action. But in the published version, which appeared in 1958, he announced the Poulet-like desire to identify in Dickens’s fiction a unique and consistent ‘view of the world’, and called the novel the means by which a writer apprehends and creates himself.

Miller never formally rejected Poulet, but looking back, he saw his period as a phenomenologist as something like a diversion, even a cop-out. He had always been drawn to ‘local linguistic anomalies in literature’, he said, and consciousness criticism provided a ‘momentarily successful strategy for containing rhetorical disruptions of narrative logic’ through reference to the authorial perspective as a unifying – and grounding – presence. But at a certain point in the late 1960s, he abandoned what he called ‘an orientation toward language as the mere register of the complexities of consciousness’ in favour of ‘an orientation toward the figurative and rhetorical complexities of language itself as the generative source of consciousness’. In The Form of Victorian Fiction (1968), he continued to emphasise ‘intersubjectivity’ – how the ‘mind of an author’ is made ‘available to others’. But by the time of Thomas Hardy: Distance and Desire (1970), he had begun to embrace what he called ‘a science of tropes’.

It was a shift cemented in 1972, when Miller was hired by Yale, where his colleagues included Harold Bloom, Paul de Man, Geoffrey Hartman, and Jacques Derrida. Talk of a ‘Yale School’ now goes back almost fifty years, but did it really exist? Derrida, for his part, said that he didn’t identify with ‘any group or clique whatsoever, with any philosophical or literary school’. Yet Miller argued that in spite of such ‘complexities’ – i.e. the best-known putative member claiming otherwise – ‘a Yale School did exist’, and characterised its activities as ‘a group of friends teaching and writing in the same place at the same time, with closely related orientations’.

Miller once argued that the Yale critics were united by an interest in the eighteenth century. It’s a bizarre contention, the meta-critical equivalent of counting the people at a dinner table but forgetting to include yourself. Bloom, Hartman, and de Man were certainly engaged in an effort, spearheaded by Frye, to overturn a prejudice against Romanticism enshrined by Eliot and disseminated by the New Critical textbook Understanding Poetry. The notable products included Hartman’s 1964 study of Wordsworth, the essays in de Man’s Blindness and Insight (1971), and Bloom’s books on Shelley and Blake as well as The Visionary Company (1961), his foolhardy reading of the entire English Romantic canon. Miller, by contrast, specialised in fiction of the Victorian and modernist periods; as late as 2012, he had never read anything by Samuel Richardson.

So the orientations to which Miller alluded are not altogether clear. Even Deconstruction and Criticism (1979), the ‘manifesto’ of the Yale School, exposed the degree of ambiguity, the conjunction in the title separating Bloom from the ‘gang of four’ who practised something called ‘deconstruction’. Hartman complicated things further in his preface by calling Miller, Derrida, and de Man the true ‘boa-deconstructors’ – a gang of three. Even the book’s intended unifying element, an emphasis on Shelley’s ‘The Triumph of Life’, was lost along the way. Looser points of convergence seem more persuasive. Miller pointed out that what he, de Man, Bloom, and Hartman – a different gang of four – were up to differed substantially from the activities of both the New Critics and Northrop Frye. The late Irish academic Denis Donoghue said that what linked the members of the so-called Yale School was simply that they ‘teach at Yale’.

What remains beyond doubt is that Miller was closely aligned to de Man and Derrida, and that their influence explains the discrepancy between the best books of his early period, The Disappearance of God (1963) and Poets of Reality (1965), and the books he published while at Yale, Fiction and Repetition (1982) and The Linguistic Moment (1985), his book on poetry. Miller first encountered de Man’s work at a Yale colloquium in 1964, where de Man delivered a version of the paper that became the Lukács essay in Blindness and Insight. Another turning point was Derrida’s appearance, two years later, at the star-studded Johns Hopkins conference, ‘The Languages of Criticism and the Sciences of Man’ – theory’s equivalent of the Sex Pistols concert at the Manchester Free Trade Hall. Miller was teaching a class when Derrida – a thirty-six-year-old lecturer at the École Normale Supérieure and a last-minute addition – delivered his exhilarating lecture, ‘Structure, Sign and Play in the Discourse of the Human Sciences’. But Miller witnessed Derrida in other sessions – for example, he heard him tell the phenomenologist and autofiction pioneer, Serge Doubrovsky, that he didn’t believe in perception.

It was one of many things he didn’t believe in. Derrida’s project, owing debts of varying sizes to Plato, Nietzsche, Heidegger, and Freud, was to show that the majority of modern thought remained implicitly metaphysical, and posited a foundation – a Transcendental Signified, the logos or Word – from which many unchecked assumptions derived. The conference had been organised to remedy East Coast indifference to structuralism. In the event – a word that became a crucial part of theoretical vocabulary – the opposite was achieved. Derrida dethroned Claude Lévi-Strauss, and deconstruction was born.

You might say that the effect of deconstruction, in its literary-critical mode, was to augment a presiding canon of largely B-writers (Baudelaire, Benjamin, Borges, Blanchot, etc) with a group of H-figures (Hölderlin, Hegel, Heidegger, Hopkins, to some degree Hawthorne and Hardy), and to replace a set of keywords beginning with ‘s’ – structure, sign, signifier, signified, semiotics, the Symbolic, syntagm, Saussure – with a vocabulary based around the letter ‘d’: decentring, displacement, dislocation, discontinuity, dédoublement, dissemination, difference and deferral (Derrida’s coinage ‘différance’ being intended to encompass both). There was also a growing role for ‘r’: Rousseau, rhetoric, Romanticism (one of de Man’s books was The Rhetoric of Romanticism), Rilke, and above all reading, a word that appeared, as noun and participle, in titles of books by de Man, Hartman, and most prominently Miller: The Ethics of Reading, Reading Narrative, Reading for Our Time, Reading Conrad.

The New York Times, one of various media outlets that offered a beginner’s guide to this new creed, defined deconstruction as the theory that words and texts have meaning only in relation to other words and texts. That better describes the structuralist concept of intertextuality. A typical deconstructionist reading reveals the ways in which a text deconstructs itself, essentially by keeping irreconcilable ideas in suspension. (Caution: paraphrase ahead!) De Man argued, for example, that Hegel’s Aesthetics was dedicated to the same act – preserving classical art – that it reveals as impossible; autobiography at once veils and defaces the autobiographer; a literary text is something that asserts and denies its own rhetorical authority. Harold Bloom, in the recent posthumous book Take Arms Against a Sea of Troubles, called de Man’s Shelley an ironist who believed that disfiguration annihilates meaning, and then pointed out that this also describes de Man’s Rousseau, his Wordsworth, his Proust…

If Derrida was the founder of deconstruction, and de Man its most concise and feline practitioner, Miller was responsible for extending its range beyond philosophy and poetry. He defined the novel as ‘a chain of displacements’ – author into narrator, narrator into characters, donnée into fiction. He pointed out that attempts to characterise the literature of a given period as closed or open are undermined by the impossibility of demonstrating whether any one narrative is closed or open in the first place. It’s possible to reply that Miller’s criticism gravitated to novels openly concerned with the challenge of reading signs, with language as something that both uncovers and evades: The Wings of the Dove, Lord Jim, Between the Acts. But in his essay ‘Narrative and History’, he argued that Middlemarch, traditionally considered a realist text, can be seen as displacing ‘the metaphysical notions of history, storytelling, and the individual, and the concepts of origin, end and continuity’ with ‘repetition, difference, discontinuity, openness and the free and contradictory struggle of individual human energies’. The ‘canny reader’s motto’, from his formidable – and longest-gestating – book, Ariadne’s Thread (1992), begins: ‘Watch out when you think you’ve “got it”’.

Miller also emerged as the critic best placed, or anyway keenest, to stand up for deconstruction – against the right, who saw it as iconoclastic and jargon-riddled, and the left, who saw it as elitist. He was frequently forced to deny that deconstruction was simply nihilist. ‘It’s not that nothing is referential’, he said, ‘but that it’s problematically referential’. His best-known essay, and the most virtuoso display of his thinking in action, ‘The Critic as Host’, which appeared in Deconstruction and Criticism, was a response to M. H. Abrams’s essay, ‘The Deconstructive Angel’, which argued that a deconstructionist reading was parasitic on the ‘univocal’ reading of a text. Then there was the charge, recently repeated by Louis Menand in his book The Free World: Art and Thought in the Cold War, that deconstruction was just as cloistered as the New Criticism – that it ignored, in Menand’s phrase, the ‘real-life aspects of literature’. There’s an argument that deconstruction only looked at history when history more or less forced itself onto the agenda – when it emerged, in 1987, that de Man had published articles in Belgian newspapers under Nazi control. But Miller had already been engaged in rejecting the view of deconstruction as formalism by another – fancier – name.

In 1986, the year he left Yale for the University of California, Irvine – Derrida went along too – Miller used his Presidential Address to the MLA convention to attack the narrowness of newly resurgent historical approaches, and his own later work offered a serial deconstructionist’s riposte. His first formal intervention was Hawthorne & History: Defacing It (1991), an attempt to show that literature never merely ‘reflects’ history, and this was followed by a short book on the relationship between text and images, Illustration (1992), which Jameson called his contribution to ‘“cultural studies” as such’. In the last decade of his life, he produced a study of George Eliot that doubled as an exercise in ‘anachronistic reading’, an account of writing – including the work of Kafka and Toni Morrison – in relation to Auschwitz, and a study of ‘communities’ (always more conflicted than they appear) in the work of writers as varied as Thomas Pynchon and Raymond Williams.

But Miller was not content to offer deconstruction as an alternative to what he considered ‘logical’ historicism. He saw it as a truly materialist aesthetics. As long ago as 1972, Jameson argued that opposition to an ‘Absolute Signified’ encoded a critique of the authoritarian and theocentric, and compared the Derridean analysis of the word and its dominance to Marx on money and the commodity. The first overt statement of this position came a decade later in de Man’s essay ‘The Resistance to Theory’, in which he argued that those who dismissed theory as oblivious to social and historical reality exhibited an unconscious fear that it would expose their own ideological mystifications. Then he added, ‘They are, in short, very poor readers of Marx’s German Ideology.’

It wouldn’t be much of an exaggeration to say that this one sentence determined the course of Miller’s work ever after. He became increasingly insistent on the similarity, even interchangeability, of the two traditions. Rhetorical reading was political reading, with an essential role in teaching citizens how to decode what he called the ‘imaginary formulations of their real relations to the material, social, gender, and class conditions of their existence’. In the 1986 MLA address, Miller called for a deconstructionist understanding of the material base. Later, though, he appeared to suggest that such an understanding had always existed. In ‘Promises, Promises’, a 2001 essay principally concerned with similarities between Marx and de Man, though inevitably emboldened by Derrida’s Specters of Marx (1993), Miller called The German Ideology ‘a deconstructive literary theory avant la lettre’, adding ‘If Marx is a deconstructionist, deconstruction is a form of Marxism’. Both were based, in his account, on a refusal to take things for granted, and a need to investigate how a certain sign system – language, the commodity – was established, how it works, and how it might therefore be changed.        

There seems little consensus about the influence or afterlife of the movement to which Miller belonged. For many, it appears to have been something transient. Menand, in The Free World, argues that Derrida’s ‘anti-foundationalism’ never had much effect outside literature departments and ‘some kinds of art practice’. But Camille Paglia, writing in 1993, lamented that post-structuralism had ‘spread throughout academe and the arts’ and was ‘blighting the most promising minds of the next generation’, adding with only a dash of melodrama: ‘This is a major crisis if there ever was one, and every sensible person must help bring it to an end’. And Jameson, writing in NLR in 1995, noted that the ‘maddening gadfly stings’ of Derrida’s attack on metaphysics had hardened into orthodoxy – though he didn’t specify where.

Miller, for his part, liked to emphasise the deconstructive strain in the work of feminist critics such as Shoshana Felman, Eve Kosofsky Sedgwick, Barbara Johnson (who pointed out that the Yale School was always a Male School), and Judith Butler. He never stopped writing and lecturing about Derrida or de Man or the self-immolating tendencies of language or the self-critical faculties of novels or the radicalism inherent in good reading. But he noted with regret that what he called ‘the triumph of theory’ had been undone, and he stopped using the term deconstruction, arguing that ‘a false understanding’ had won the day with the media, and ‘many academics too’.

Leo Robson is lead fiction reviewer for the New Statesman.

Read on: Fredric Jameson, ‘Marx’s Purloined Letter’, NLR I/209.


Big Man

Family lore has it that during the First Russian Revolution – 1905 – his mother carried anti-Czarist pamphlets in her school knapsack, and that she later worked briefly as a secretary to Rosa Luxemburg. That is where any biographer of Marshall Sahlins might want to begin. Or with the 18th-century mystic, the Baal Shem Tov, the founder of Hassidic Judaism, from whom the Sahlins clan sometimes claimed descent. Born in 1930, Sahlins grew up on Chicago’s West Side, in a family unaffiliated with any Russian faction, but the radical nimbus remained. His interest in anthropology came early: as a boy playing cowboys and Indians, he had a decided preference for the latter. The discipline attracted the children of Jewish immigrants in the interwar decades. Saul Bellow and Isaac Rosenfeld, fellow West Siders, also started out in anthropology, which provided critical purchase on their otherwise headlong plunge into American society, along with the means to levitate thrillingly above the folkways of the old country that persisted in their families and neighbourhoods.

Sahlins is sometimes treated as an heir to the grand American anthropology tradition of Franz Boas. In fact, he stemmed from a rival line. As an undergraduate at the University of Michigan in the 1950s, he studied with the Mencken-like maverick – and anti-Boas brawler – Leslie White. A former student of Veblen and a member of the Socialist Labor Party, White had toured the Soviet Union on the eve of the Great Depression and wrote for Party publications under the name ‘John Steel’. He was a paradoxical figure. Culture, in his conception, was at once a reflection of a society’s underlying economic constraints and an autonomous force organizing its social life. He developed a theory of technological determinism in human history, but also insisted that most of his contemporaries had underplayed the degree to which humans were a symbolically constituted species (these were among the antinomies that Sahlins would try to resolve). White led a relentless, at times ad hominem, campaign against Boas and his students at Columbia, who he believed had failed to appreciate the gap between primitive societies and the impersonal structures of modernity. Boasians were adept at collecting ethnographic data, White conceded, but they were poor interpreters and theoreticians of their bounty. They seemed to care only about the diffusion of social forms, not their history. The Boasians, in turn, viewed White as a crude evolutionist who was, consciously or not, abetting the worst of the racial science of the 19th century. As for his student: Sahlins relinquished the technological evolutionism but retained the radical and historical commitments, as well as the intellectual scrappiness. Like his mentor, he detested schools and disciples: there are admirers of Sahlins across the social sciences, but no hard-line Sahlinists.

In 1951, for his doctoral work, Sahlins moved to Columbia, where the Boasian dynasty was now being eclipsed by a more radical generation. There was Elman Service, who fought against Franco and forged the typology of band, tribe, chiefdom and state; the anti-fascist anthropologist-poet, Stanley Diamond, who founded Dialectical Anthropology; as well as better-known left-wing scholars such as Eric Wolf and Sidney Mintz. Sahlins’s main influence at Columbia was the Hungarian exile Karl Polanyi, then a visiting professor in his sixties. It was through Polanyi, as well as the classicist Moses Finley, that Sahlins got his first prolonged taste of a heterodox theory of the economy. He learned from them not only how artificial and state-conditioned the neo-classical understanding of the market was, but also how alien it was to settings outside the modern North Atlantic. While Finley and Polanyi applied their substantivist economic theory to the ancient world of the Near East and elsewhere, Sahlins started to do the same with Oceania. His dissertation, Social Stratification in Polynesia, which sought to demonstrate how Polynesian culture adapted to various island environments, was an attempt to blend the anthropological materialisms of White and Polanyi. It is a careful, library-produced work that gives a foretaste of the authority, but not the explosive creativity, of its author.

Like Claude Lévi-Strauss, Sahlins did not conduct as much sustained fieldwork as many of his contemporaries. In 1955-56, Sahlins and his wife spent nine and a half months living on the central Fijian island of Moala, which had 1,200 inhabitants, three Chinese shop-owners, and two outboard motors (though only one was operational). Upon arriving, the couple were frustrated to find themselves treated as superior beings. ‘It is unrealistic to believe that any European can be fully “accepted”; he can never be a Fijian in their eyes’, Sahlins wrote. After a few weeks, however, Sahlins was able to lower himself successfully in the Moalan hierarchy, to the point that he was no longer the first one served kava, the local sedative drink, but came fifth or sixth. The couple spent most of their time on the island trying to discern pre-colonial rituals and social forms, though many of Sahlins’s most searching findings have to do with how the Fijians made use of colonial developments for their own ends. Colonialism had already thoroughly cannibalized some of the rituals on Moala. In wedding ceremonies, for instance, Sahlins described how families now indebted themselves far more than they ever would have in the pre-colonial period, when gifts were made up of replaceable produce from their own land rather than movable goods from the outside. Rituals once meant to solidify kinship ties now threatened to devastate families (and demonstrated that Polanyi was correct to view the ‘rational economic actor’ as a fiction).

In 1965, nearly half a century before Sahlins’s own student, David Graeber, co-coined the slogan ‘We are the 99%’, Sahlins coined the ‘teach-in’. After his doctorate, Sahlins had moved back to Michigan, where faculty members critical of the war in Vietnam came under fire for their plan to conduct a ‘teach-out’ – to teach their classes off campus. In response, as a consensus-building measure, Sahlins proposed ‘teaching-in’ – occupying classrooms and criticizing the war late into the night. (‘I might have been disposed to binary oppositions because in the 1960s Lévi-Strauss was an oncoming rage in the USA.’) The following year, Sahlins travelled to Vietnam, where he spent only a week but managed to produce ‘The Destruction of Conscience in Viet Nam’, a withering ethnographic report on the tribe of Kennedy-era operatives, whom he memorably described as ‘hard-headed surrealists’. He detailed the way Americans evaded structural questions by blaming ‘graft’ and ‘corruption’, minimized responsibility by conceiving of themselves as ‘advisers’, and channelled their rage for order into the torture of prisoners.

Sahlins was in Paris for 1968, working in Lévi-Strauss’s Laboratoire d’anthropologie sociale, where he was immediately recognized as capable of holding his own, and even occasionally showing up le maître. A decade later, in Culture and Practical Reason (1976), Sahlins tried to play the peacemaker between Marxists and structuralists. Marxists needed to recognize that structuralists had something to teach them about ‘primitive’ societies, while structuralists needed to acknowledge that Marxists had a unique purchase on the structures of modernity. Sahlins himself was more of an accretive thinker: he didn’t so much move through methodological phases as compound them, never really discarding anything, as his library vividly attested. Ultimately, however, Culture and Practical Reason fell on the side of the structuralists – cultural reason over practical reason. Sahlins charged the tradition from Morgan to Marx with willy-nilly positivism. Marxism, itself a product of bourgeois society, had only gone halfway in its analysis of it. For Sahlins ‘production is itself a system of cultural intentions’, as Lee Drummond once put it. A Crow warrior who stumbled into 20th-century Chicago and relied on practical reason alone would have been puzzled by the cultural distinctions between steak and kidneys, dog meat and pork, since practical reason, according to Sahlins, would see each of these as a roughly equal source of protein.

‘The Original Affluent Society’ – first published in Les Temps Modernes in 1968 – will probably go down as Sahlins’s most widely read essay, though despite its obvious power, it’s also the one most vulnerable to empirical criticism. (At the opposite end of the spectrum, Sahlins’s epic history of Hawaii in the Sandalwood period, Anahulu [1992], is his most empirically valuable book, but among his least read.) In it, Sahlins argued that far from being an epoch of misery and deprivation, life in the palaeolithic period involved a roughly 30-hour work week. With characteristic ferocity, Sahlins tried to account for this by going hour by hour through the palaeolithic working day. There was something quixotic in trying to generalize and tabulate about such a vast expanse of human history, and something anachronistic about trying to jam the concept of ‘leisure time’ into the social lives of cave-dwellers in 8000 BC. But the essay remains a political tour de force, less for its details than for its bold re-conception of what scarcity can mean – and has meant – for humans for most of their history. If anything, Sahlins’s development of the argument, Stone Age Economics (1972), is a more urgent book now for the advocates of degrowth than when it was first published.

Even by the standards of postwar anthropology, Sahlins was a formidable critic, capable of laying waste to entire trends and subfields with an essay. A notorious instance was his attack on the ‘cultural materialism’ of Marvin Harris. A widely respected fellow student of White, Harris published a book called Cannibals and Kings (1977), which essentially argued that the Aztecs had practiced cannibalism because they needed the protein. Many hunter-gatherer tribes in Meso-America had practiced ritual sacrifice with a consumption element, but the Aztecs were a giant civilization that, instead of quitting the practice like so many other societies around them, simply upgraded it to civilizational scale. The reason, according to Harris, was that they couldn’t get enough protein from the Valley of Mexico. In the New York Review of Books, Sahlins subjected this argument to withering criticism. After his trademark athletic tabulating, in which he tried to show that the Aztec elite could not possibly have acquired enough protein from the human limbs that Harris claimed they partly subsisted on, Sahlins pointed to the adequate protein available in a multitude of different forms around them: ‘Why build a temple, when all you need is a butcher’s block?’

The most famous of Sahlins’s many disputes was with the Sri Lankan anthropologist Gananath Obeyesekere, over the fate of Captain Cook – a historical episode Sahlins was always prepared to squeeze more insight from. Two years after the American Revolution, Cook had arrived on the main island of Hawaii and apparently been treated by the islanders as a god – yet they later killed him. Why? For Sahlins the answer was that Cook had arrived in the middle of a ritual in which a local god was welcomed on the island, and so he was taken to be that particular god; when he later returned after breaking a mast, he haplessly entered into another pageant, in which the god – now, once again, himself – was killed and the king of the island restored to his station. Obeyesekere took the view that this was pure exoticization on Sahlins’s part: the islanders had clearly viewed Cook as a possible ally in their wars against Maui, and only killed him when they had determined he was more of a threat than an advantage. Moreover, they never thought he was a god until after his death, as was the case for all Hawaiian royals. There was something curious about the confrontation, as Clifford Geertz noted at the time: Sahlins, the white American scholar, took the ethno-particularist position, while Obeyesekere stood squarely in the universalist camp. As was often the case with Sahlins, there was something ‘highly carpentered and suspiciously seamless’, in Geertz’s words, about his account: Captain Cook’s perfect timing sets off what appears to be a ballet sequence. But Obeyesekere’s projection of realpolitik onto the islanders seemed even more dubious.

The debate turned out not to be a particularly fruitful episode for the discipline, as it mostly broke down on academic kinship lines. For Sahlins, it was another occasion to pursue what was perhaps his major preoccupation: reconciling the opposition between ‘structure’ and ‘event’ in the social sciences and philosophy. The point was not to privilege either, but to show their inextricability: an event can only be an ‘event’ from the standpoint of a wider structure, which in turn can be reshaped or shifted by the event. Threading the needle between the event and the longue durée, Sahlins helped clear the way for anthropologists to refocus on the question of historical change that previous generations of structuralists and functionalists had abandoned. The postwar anthropological turn to history, as Joel Isaac has shown, was in large part an attempt to explain the persistence of human institutions and to provincialize state-centric accounts. No one battled the legacy of Hobbes and his vision of the weak sociability of humans more forthrightly than Sahlins, who believed that only by dislodging Western assumptions about the necessity of states as guarantors of human sociability could the full panoply of possible human flourishing come back into view. In answer to Sartre’s old question – ‘Do we have today the means to constitute a structural, historical anthropology?’ – Sahlins was in no doubt: ‘Oui, le jour est arrivé’ – yes, the day has arrived.

For more than half a century, Sahlins was a member of the University of Chicago’s storied anthropology department. He never lacked for critical targets, either academic or political. During the Bush years, it was the enlistment of anthropology by the US government in the ‘Human Terrain’ program in the Afghanistan War. In 2013, he dramatically resigned from the National Academy of Sciences when he learned of the extent of the program, and that the Academy had elected to its membership Napoleon Chagnon, a former White student who notoriously broke many of the codes of fieldwork and tried to augment the violence among his Yanomamö subjects. More recently, Sahlins exposed the network of Confucius Institutes, propaganda mills run by the Chinese government that occupied space on scores of university campuses in the US; he devoted to them one of his Prickly Paradigm pamphlets, a very valuable series of short books of which he was co-publisher. What was remarkable was not so much that the Chinese government was running an operation on the fourth floor of the Judd Building at the University of Chicago, but that it took an 83-year-old muckraker to expose it.

A friend of mine once looked after Sahlins’s dog, a Great Pyrenees named Trinket, whom he was tasked with giving a haircut. ‘She’s too old to go to the doggy salon any longer, so, well, you do your best’, Sahlins told him. My friend explained he had no experience. ‘Do your best’, he said. The verdict on his return was harsh: ‘Your best wasn’t very good.’ I remember Sahlins more as a presence than a figure. For decades he kept his fire trained tightly on the economics department, which was still in thrall to Stigler and Friedman, but by the time I arrived, he had sawed off the barrel and brought the whole of Western Civilization into range. My home in the classics department was not spared, since certain Ancient Greeks figured as particular villains in his story. To my shame, I barely knew who the author of Apologies to Thucydides was while I was trying to write an undergraduate thesis on Greek historians. But Sahlins was close with my advisor, and once commented that Thucydides’s History was ‘a good book to read against the grain of the war’. I didn’t realize at the time that he was probably referring to Iraq, not the Peloponnese.

Read on: Jacob Collins, ‘An Anthropological Turn’, NLR 78.