Unlearning Machines

There is no denying the technological marvels that have resulted from the application of transformers in machine learning. They represent a step-change in a line of technical research that has spent most of its history looking positively deluded, at least to its more sober initiates. On the left, the critical reflex to see this as yet another turn of the neoliberal screw, or to point out the labour and resource extraction that underpin it, falls somewhat flat in the face of a machine that can, at last, interpret natural-language instructions fairly accurately, and fluently turn out text and images in response. Not long ago, such things seemed impossible. The appropriate response to these wonders is not dismissal but dread, and it is perhaps there that we should start, for this magic is overwhelmingly concentrated in the hands of a few often idiosyncratic people at the social apex of an unstable world power. It would obviously be foolhardy to entrust such people with the reified intelligence of humanity at large, but that is where we are.

Here in the UK, tech-addled university managers are currently advocating for overstretched teaching staff to turn to generative AI for the production of teaching materials. More than half of undergraduates are already using the same technology to help them write essays, and various AI platforms are being trialled for the automation of marking. Followed through to their logical conclusion, these developments would amount to a repurposing of the education system as a training process for privately owned machine learning models: students, teachers, lecturers all converted into a kind of outsourced administrator or technician, tending to the learning of a black-boxed ‘intelligence’ that does not belong to them. Given that there is no known way of preventing Large Language Models from ‘hallucinating’ – weaving untruths and absurdities into their output, in ways that can be hard to spot unless one has already done the relevant work oneself – residual maintainers of intellectual standards would then be reduced to the role of providing corrective feedback to machinic drivel.

Where people don’t perform this function, the hallucinations will propagate unchecked. Already the web – which was once imagined, on the basis of CERN, as a sort of idealized scientific community – is being swamped by the pratings of statistical systems. Much as physical waste is shipped to the Global South for disposal, digital effluent is being dumped on the global poor: beyond the better-resourced languages, low-quality machine translations of low-quality English language content now dominate the web. This, of course, risks poisoning one of the major wells from which generative AI models have hitherto been drinking, raising the spectre of a degenerative loop analogous to the protein cycles of Creutzfeldt–Jakob disease – machine learning turning into its opposite.

Humans, no doubt, will be called upon to correct such tendencies, filtering, correcting and structuring training data for the very processes that are leaving this trail of destruction. But the educator must of course be educated, and with even the book market being saturated with auto-generated rubbish, the culture in which future educators will learn cannot be taken for granted. In a famous passage, the young Marx argued that the process of self-transformation involved in real learning implied a radical transformation in the circumstances of learning. If learning now risks being reduced to a sanity check on the outputs of someone else’s machine, finessing relations of production that are structurally opposed to the learner, the first step towards self-education will have to involve a refusal to participate in this technological roll-out.

While the connectionist AI that underlies these developments has roots that predate even the electronic computer, its ascent is inextricable from the dynamics of a contemporary world raddled by serial crises. An education system that was already threatening to collapse provides fertile ground for the cultivation of a dangerous technology, whether this is driven by desperation, ingenuousness or cynicism on the part of individual actors. Healthcare, where the immediate risks may be even higher, is another domain which the boosters like to present as in line for an AI-based shake-up. We might perceive in these developments a harbinger of future responses to the climate emergency. Forget about the standard apocalyptic scenarios peddled by the prophets of Artificial General Intelligence; they are a distraction from the disaster that is already upon us.

Matteo Pasquinelli’s recent book, The Eye of the Master: A Social History of Artificial Intelligence, is probably the most sophisticated attempt so far to construct a critical-theoretical response to these developments. Its title is somewhat inaccurate: there is not much social history here – not in the conventional sense. Indeed, as was the case with Joy Lisi Rankin’s A People’s History of Computing in the United States (2018), it would be hard to construct such a history for a technical realm that has long been largely tucked away in rarefied academic and research environments. The social enters here by way of a theoretical reinterpretation of capitalist history centred on Babbage’s and Marx’s analyses of the labour process, which identifies even in nineteenth century mechanization and division of labour a sort of estrangement of the human intellect. This then lays the basis for an account of the early history of connectionist AI. The ‘eye’ of the title links the automation of pattern recognition to the history of the supervision of work.

If barely a history, the book is structured around a few striking scholarly discoveries that merit serious attention. It is well known that Babbage’s early efforts to automate computation were intimately connected with a political-economic perspective on the division of labour. A more novel perspective here comes from Pasquinelli’s tracing of Marx’s notion of the ‘general intellect’ to Ricardian socialist William Thompson’s 1824 book, An Inquiry into the Principles of the Distribution of Wealth. Thompson’s theory of labour highlighted the knowledge implied even in relatively lowly kinds of work – a knowledge that was appropriated by machines and set against the very people from whom it had been alienated. This set the stage for speculations about the possible economic fallout from this accumulation of technology, such as Marx’s famous ‘fragment on machines’.

But the separating out of a supposed ‘labour aristocracy’ within the workers’ movement made any emphasis on the more mental aspects of work hazardous for cohesion. As the project of Capital matured, Marx thus set aside the general intellect for the collective worker, de-emphasizing knowledge and intellect in favour of a focus on social coordination. In the process, an early theory of the role of knowledge and intellect in mechanization was obscured, and hence required reconstruction from the perspective of the age of the Large Language Model. The implication for us here is that capitalist production always involved an alienation of knowledge; and the mechanization of intelligence was always embedded in the division of labour.

If Pasquinelli stopped there, his book would amount to an interesting manoeuvre on the terrain of Marxology and the history of political economy. But this material provides the theoretical backdrop to a scholarly exploration of the origins of connectionist approaches to machine learning, first in the neuroscience and theories of self-organization of cybernetic thinkers like Warren McCulloch, Walter Pitts and Ross Ashby that formed in the midst of the Second World War and in the immediate post-war, and then in the late-50s emergence, at the Cornell Aeronautical Laboratory, of Frank Rosenblatt’s ‘perceptron’ – the earliest direct ancestor of contemporary machine learning models. Among the intellectual resources feeding into the development of the perceptron were a controversy between the cyberneticians and Gestalt psychologists on the question of Gestalt perception or pattern recognition; Hayek’s connectionist theory of mind – which he had begun to develop in a little-reported stint as a lab assistant to neuropathologist Constantin Monakow, and which paralleled his economic beliefs; and vectorization methods that had emerged from statistics and psychometrics, with their deep historical links to the eugenics movement. The latter connection has striking resonances in the context of much-publicized concerns over racial and other biases in contemporary AI.

Pasquinelli’s unusual strength here lies in combining a capacity to elaborate the detail of technical and intellectual developments in the early history of AI with an aspiration towards the construction of a broader social theory. Less well-developed is his attempt to tie the perceptron and all that has followed from it to the division of labour, via an emphasis on the automation not of intelligence in general, but of perception – linking this to the work of supervising production. But he may still have a point at the most abstract level, in attempting to ground the alienated intelligence that is currently bulldozing its way through digital media, education systems, healthcare and so on, in a deeper history of the machinic expropriation of an intellectuality that was previously embedded in labour processes of which head-work was an inextricable aspect.

The major difference with the current wave, perhaps, is the social and cultural status of the objects of automation. Where once it was the mindedness of manual labour that found itself embodied in new devices, in a context of stratifications where the intellectuality of such realms was denied, in current machine learning models it is human discourse per se that is objectified in machinery. If the politics of machinery was never neutral, the level of generality that mechanization is now reaching should be ringing alarm bells everywhere: these things cannot safely be entrusted to a narrow group of corporations and technical elites. As long as they are, these tools – however magical they might seem – will be our enemies, and finding alternatives to the dominant paths of technical development will be a pressing matter.

Read on: Hito Steyerl, ‘Common Sensing?’, NLR 144.

Circuits of War

A world war was declared on 7 October. No news station reported on it, even though we will all have to suffer its effects. That day, the Biden administration launched a technological offensive against China, placing stringent limits and extensive controls on the export not only of integrated circuits, but also their designs, the machines used to ‘write’ them on silicon and the tools and components from which those machines are built. Henceforth, if a Chinese factory requires any of these items to produce goods – like Apple’s mobile phones, or GM’s cars – other firms must request a special licence to export them.

Why has the US implemented these sanctions? And why are they so severe? Because, as Chris Miller writes in his recent book Chip War: The Fight for the World’s Most Critical Technology (2022), ‘the semiconductor industry produces more transistors every day than there are cells in the human body’. Integrated circuits (‘chips’) are part of every product we consume – that is to say, everything China makes – from cars to phones, washing machines, toasters, televisions and microwaves. That’s why China uses more than 70% of the world’s semiconductor products, although contrary to common perception it only produces 15%. In fact, this latter figure is misleading, as China doesn’t produce any of the latest chips, those used in artificial intelligence or advanced weapons systems.

You can’t get anywhere without this technology. Russia found this out when, after it was placed under embargo by the West for its invasion of Ukraine, it was forced to close some of its major car factories. (The scarcity of chips also contributes to the relative inefficacy of Russian missiles – very few of them are the ‘intelligent’ kind, fitted with microprocessors that guide and correct their trajectory.) Today, the production of microchips is a globalized industrial process, with at least four important ‘chokepoints’, enumerated by Gregory Allen of the Center for Strategic and International Studies: ‘1) AI (Artificial Intelligence) chip designs, 2) electronic design automation software, 3) semiconductor manufacturing equipment, and 4) equipment components.’ As he explains,

The Biden administration’s latest actions simultaneously exploit US dominance across all four of these chokepoints. In doing so, these actions demonstrate an unprecedented degree of US government intervention to not only preserve chokepoint control but also begin a new US policy of actively strangling large segments of the Chinese technology industry – strangling with an intent to kill.

Miller is somewhat more sober in his analysis: ‘The logic’, he writes, ‘is throwing sand in the gears’, though he also asserts that ‘the new export blockade is unlike anything seen since the Cold War’. Even a commentator as obsequious to the United States as the FT’s Martin Wolf couldn’t help but observe that ‘the recently announced controls on US exports of semiconductors and associated technologies to China’ are ‘far more threatening to Beijing than anything Donald Trump did. The aim is clearly to slow China’s economic development. That is an act of economic warfare. One might agree with it. But it will have huge geopolitical consequences.’

‘Strangling with an intent to kill’ is a decent characterisation of the objectives of an American empire that is seriously concerned by the technological sophistication of Chinese weapons systems, from hypersonic missiles to artificial intelligence. China has achieved such progress through the use of technology either owned or controlled by the US. For years, the Pentagon and White House have become increasingly irritated watching their ‘global competitor’ make giant leaps with tools that they themselves provided. Anxiety about China was not merely the transitory impulse of the Trump administration. Such preoccupations are shared by Biden’s government, which is now pursuing the same objectives as its much-maligned predecessor – but with even more vigour.

The US announcement came just days before the opening of the National Congress of the Chinese Communist Party. In a certain sense, the export ban was the White House’s intervention in the proceedings, which were intended to cement Xi Jinping’s political supremacy. Unlike many of the sanctions imposed on Russia – which, apart from the blockade on microchips, have proven rather ineffective – these restrictions have a high likelihood of success, given the unique structure of the semiconductor market and the particularities of the production process.

The microchip industry is distinguished by its geographical dispersal and financial concentration. This is owed to the fact that production is extremely capital-intensive. Moreover, its capital-intensity accelerates over time, as the industry’s dynamic is based on a continuous improvement of ‘performance’: i.e. of the capacity to process ever more complex algorithms whilst reducing electricity consumption. The first solid integrated circuits developed in the early 1960s had 130 transistors. The original Intel processor from 1971 had 2,300 transistors. In the 1990s, the number of transistors in a single chip surpassed 1 million. In 2010, a chip contained 560 million, and a 2022 Apple chip (the M1 Ultra) has 114 billion. Since transistors are always getting smaller, the techniques for fabricating them on a semiconductor have become increasingly sophisticated; the beam of light which traces the designs must be of a shorter and shorter wavelength. The first rays used were of visible light (from 700 to 400 billionths of a metre, nanometres, nm). Over the years this was reduced to deep ultraviolet at 248nm and then 193nm, before reaching extreme ultraviolet: just 13.5nm, the light now used to print features at the so-called 3nm node. For scale, a Covid-19 virion is around ten times this size.

Highly complex and expensive technology is necessary to attain these microscopic dimensions: lasers and optical devices of incredible precision as well as the purest of diamonds. A laser capable of producing a sufficiently stable and focused light is composed of 457,329 parts, produced by tens of thousands of specialized companies scattered around the world (a single microchip ‘printer’ with these characteristics is worth $100 million, with the latest model projected to cost $300 million). This means that opening a chip factory requires an investment of around $20 billion, essentially the same amount you would need for an aircraft carrier. This investment must bear fruit in a very short amount of time, because in a few years the chips will have been surpassed by a more advanced, compact, miniaturized model, which will require completely new equipment, architecture and procedures. (There are physical limits to this process; by now we’ve reached layers just a few atoms thick, which is why there’s so much investment in quantum computing, in which the physical limit of quantum uncertainty below a certain threshold is no longer a limitation, but a feature to be exploited.) Nowadays, most semiconductor firms don’t produce semiconductors at all; they simply design and plan their architecture, hence the standard name used to refer to them: ‘fabless’ (‘without fabrication’, outsourcing production). But these businesses aren’t really artisanal firms either: to give but three examples, Qualcomm employs 45,000 workers and has a turnover of $35 billion, Nvidia employs 22,400 with revenues of $27 billion, and AMD 15,000 with $16 billion.

This speaks to the paradox at the heart of our technological modernity: increasingly infinitesimal miniaturization requires ever more macroscopic, titanic facilities, so much so that the Pentagon can’t even afford them, despite its annual budget of $700 billion. At the same time, it requires an unprecedented level of integration to put together hundreds of thousands of different components, produced by different technologies, each of which is hyperspecialized.

The push towards concentration is inexorable. The production of machines which ‘print’ state-of-the-art microchips is under the monopoly of a single Dutch firm, ASML, while the production of the chips themselves is undertaken by a restricted number of companies (which specialize in a particular type of chip: logic, DRAM, flash memory or graphics processing). The American company Intel produces nearly all computer microprocessors, while the Japanese sector – which did extremely well in the 1980s before entering a crisis in the late 90s – has now been absorbed by the American company Micron, which maintains factories across Southeast Asia.

There are, however, only two real giants in material production: one is Samsung of South Korea, favoured by the US during the 1990s to counter the rise of Japan, whose precocity before the end of the Cold War had become threatening; the other is TSMC (Taiwan Semiconductor Manufacturing Company; 51,000 employees with a turnover of $43 billion, and $16 billion in profits), which supplies all the American ‘fabless’ firms, producing 90% of the world’s advanced chips. 

Map of the global semiconductor industry, from Chris Miller, Chip War (2022), p. 197.

The network of chip production is thus highly dispersed, with factories scattered across the Netherlands, the US, Taiwan, South Korea, Japan and Malaysia (though note the cluster of firms based in East Asia, as shown by the map above). It is also concentrated in a handful of quasi-monopolies (ASML for ultraviolet lithography, Intel for microprocessors, Nvidia for GPUs, TSMC and Samsung for actual production), with monumental levels of investment. This is the web which makes US sanctions so effective: an American monopoly on microchip designs, drawn up by its great ‘fabless’ firms, through which enormous leverage can be wielded against companies in vassal states which actually manufacture the materials. The US can effectively block Chinese technological progress because no country in the world has the competence or resources necessary to develop these sophisticated systems. The US itself must rely on technological infrastructure developed in Germany, Britain and elsewhere. Yet this is not merely a question of technology; trained engineers, researchers and technicians are also necessary. For China, then, the mountain to climb is steep, even vertiginous. If it manages to procure a component, it will find that another is missing, and so on. In this sector, technological autarky is impossible.

Beijing naturally sought to prepare itself for this eventuality, having foreseen the arrival of these restrictions for some time, by both accumulating chips and investing fantastical sums in the development of local chip-manufacturing technology. It has made some progress in production: the Chinese company Semiconductor Manufacturing International Corporation (SMIC) now produces chips, though its technology lags behind TSMC, Samsung and Intel by several generations. But, ultimately, it will be impossible for China to catch up with its competitors. It can access neither the lithography machines nor the extreme-ultraviolet light sources provided by ASML, which has blocked all such exports. China’s impotence in the face of this attack is clear from the total lack of official response from Beijing, whose officials have not announced any countermeasures or reprisals for American sanctions. The preferred strategy seems to be dissimulation: continuing to work under the radar (perhaps with a little espionage), rather than being thrown out to sea without a flotation device.

The problem for the American blockade is that a large proportion of TSMC’s exports (plus those of Samsung, Intel and ASML) are bound for China, whose industry depends on the island it wants to annex. The Taiwanese are fully aware of the pivotal role of the semiconductor industry in their national security, so much so that they refer to it as their ‘silicon shield’. The US would do anything to avoid losing control over the industry, and China can’t afford the luxury of destroying its facilities with an invasion. But this line of reasoning was far more robust before the outbreak of the current Cold War between the US and China.

In fact, two months prior to the announcement of microchip sanctions on China, the Biden administration signed into law the CHIPS and Science Act, which allocated over $50 billion to the repatriation of at least part of the production process, practically forcing Samsung and TSMC to build new manufacturing sites (and upgrade old ones) on American soil. Samsung has since pledged $200 billion for eleven new facilities in Texas over the next decade – although the timeline is more likely to be decades, plural. All this goes to show that if the US is willing to ‘deglobalize’ some of its productive apparatus, it’s also extremely difficult to decouple the economies of China and the US after almost forty years of reciprocal engagement. And it will be even more complicated for the US to convince its other allies – Japan, South Korea, Europe – to disentangle their economies from China’s, not least because these states have historically used such trading ties to loosen the American yoke.

The textbook case is Germany: the biggest loser in the war in Ukraine, a conflict which has called into question every strategic decision pursued by German élites in the last fifty years. Since the turn of the millennium, Germany has grounded its economic – and therefore political – fortunes in its relationship with China, its principal commercial partner (with $264 billion worth of annual trade). Today, Germany continues to strengthen these bilateral ties, despite both the cooling of relations between Beijing and Washington and the ongoing war in Ukraine, which has disrupted Russian intermediation between the German bloc and China. In June, the German chemicals producer BASF announced an investment of $10 billion in a new plant in Zhanjiang in the south of China. Olaf Scholz even made a visit to Beijing earlier this month, heading a delegation of directors from Volkswagen and BASF. The Chancellor came bearing gifts, pledging to approve the Chinese company Cosco’s controversial investment in a terminal for container ships in the port of Hamburg. The Greens and Liberals objected to this move, but the Chancellor responded by pointing out that Cosco’s stake would be around 24.9%, with no veto rights, and would cover only one of Hamburg’s terminals – incomparable to the company’s outright acquisition of Piraeus in 2016. In the end, the more Atlanticist wing of the German coalition was forced to give way.

In the present conjuncture, even these minimal gestures – Scholz’s trip to Beijing, less than $50 million worth of Chinese investment in Hamburg – seem like major acts of insubordination, especially following the latest round of American sanctions. But Washington couldn’t have expected its Asian and European vassals to simply swallow deglobalization as if the neoliberal era had never happened: as if, during recent decades, they hadn’t been encouraged, pushed, almost forced to entwine their economies with one another, building a web of interdependence which is now exceedingly difficult to dismantle.

On the other hand, when war breaks out, vassals must decide which side they’re on. And this is shaping up to be a gigantic war, even if it’s fought over millionths of millimetres.

Read on: Susan Watkins, ‘America vs China’, NLR 115.

Agile Workplace

Sparked by the publication of Harry Braverman’s now-canonical Labor and Monopoly Capital, Marxists and other leftists mounted manifold criticisms of capitalist work regimes throughout the 1970s and 1980s. These discussions, later dubbed the ‘labour-process debate’, primarily focused on the thematic of Taylorism – the influential and enduring set of organizational principles developed by Ur-management consultant Frederick Winslow Taylor, who sought to systematize industrial workflows by decomposing tasks, standardizing procedures, eliminating waste and siloing labourers. Though many of the labour-process analysts considered the extent to which Taylorist maxims had spread beyond the shop floor – impressing themselves upon so-called ‘white-collar’ work in administrative, clerical and service sectors – there was one nascent field that did not elicit any particular interest: software development. Still predominantly a military endeavour, it was occluded by a focus on workplace automation in general.

Much of the working culture of software communities appears collaborative by design. The labour and knowledge indexed by discussion forums, listservs, how-tos and public code repositories stand in contradistinction to the bureaucratic rationality of industrial assembly lines. Yet, as we shall see, two managerial strategies have thus far prevailed in software development’s history. The supremacy of one of these strategies over the other – Agile and Waterfall, respectively – is cognate to a broader structural renegotiation that took place between capital and labour in the twilight of the twentieth century.

Beginning around 1970, in response to what several NATO Software Engineering Conference attendees had diagnosed as a ‘software crisis’ – wherein unwieldy, difficult-to-manage initiatives routinely exceeded their allocated budgets and timeframes – attempts were made to mould computer programming in the shape of the scientific-managerial doxas then in vogue. Various methodologies were suggested to solidify the field; one – the Waterfall model – was eventually victorious following its adoption by the US Department of Defense. The intent of Waterfall was to systematize the often ad-hoc activity that comprises large-scale software production. Among the features that enabled Waterfall to conquer rival methodologies was its division of software development into six sequential stages: requirements, analysis, design, coding, testing, and operations.

The familiar features of the Taylorist factory floor are here transposed from industry to information technology: each stage corresponds to a dedicated department of specialists, who mechanistically repeat their métier ad nauseam. In Waterfall, work cannot begin on any one stage until work on the preceding stage has been completed and its quality assured, with the only necessary coordination between stages being the assurance of ticked checkboxes. Participatory input is discouraged; requirements provided by managerial strata in the first stage chart the full course for the following five. Improvisatory exploration is suppressed; formalities such as contracts, approvals, specifications and logs abound.

In the last decades of the twentieth century, criticisms of Waterfall in software engineering meshed with a new wave of criticisms of work erupting throughout the Western capitalist world. These expressions of popular discontent did not just take aim at the familiar target of exploitation, as conceived by the trade union movement. Rather, they emanated in large part from a relatively affluent subset of the younger working population, disaffected with the heteronomy of work, and emphasizing affective-existential themes like boredom, dehumanization, inauthenticity and meaninglessness. Marking a shift from quantitative material demands for wage increases, employee benefits and job security, this qualitative critique of working life resonated with the vocabulary of urban intellectual and artistic circles. This is what Luc Boltanski and Eve Chiapello refer to in The New Spirit of Capitalism as the ‘artistic critique’: an attack on bureaucratic calcification and hierarchical segmentation, on infantilizing work routines, stern schedules, and a sense of futility under the rubric of Taylorism.

The artistic critique instituted something of a paradigm shift in managerial literature during the 1990s, as new organizational forms were required to render capitalism seductive once again. The rejection of once-sacrosanct bureaucratic-rational principles of twentieth-century scientific management was so pronounced in this literature that Peter Drucker, prominent management consultant and prognosticator of ‘post-industrial society’, termed it a ‘big bang’. These texts – drafted by business managers, organizational engineers, industrial psychologists and the like – were a laboratory in which a properly twenty-first-century capitalist ethos was concocted: a new ‘spirit’, based on cultures, principles, assumptions, hierarchies and ethics that absorbed the complaints of the artistic critique. What emerged was a novel social arrangement, fashioned especially for skilled workers and the children of middle-class cadres, whose regulative principles were employee autonomy, participatory exchange, temporal flexibility and personal self-development.

Boltanski and Chiapello call this arrangement the ‘projective city’. In the projective city, the vertical command-control structures of the Taylorist factory are supplanted by the horizontal ‘network’, whose absolute and ideal amorphousness comes to dominate life both inside and outside the firm. This transformation imposes new operative compulsions, hierarchies of status, intra- and inter-firm politics, and affective states on wage-earners. In the projective city, the central locus of daily life is the project: the determinate activation of a discrete subsection of the network for a definite period and toward a specific goal.

The project – and its regime of continuous activity, the proliferation of which forms an end in itself – becomes the precondition for the connections between agents in the network. Agents cohere into teams which then disperse once the project’s common endpoint has been reached; success on one project is measured by each person’s ability to make it to the next. Rather than a career spent dedicated to a single specialty within the protective environment of the large firm, agents in the projective city are encouraged to multitask, collaborate, adapt and learn by forging interpersonal connections through successive projects. These agents can then incrementally leverage the skills and connections attained on past projects into more interesting, diverse, and – what amounts to the same thing – prestigious future projects, the very heterogeneity of which becomes their primary mark of esteem.

In the aftermath of Y2K and the bursting of the dot-com bubble, a new workplace methodology called Agile emerged as an attempt to sow these principles in the technology sector. Agile’s manifesto – drafted by an alliance of 17 software developers at a Wasatch Range ski resort in 2001 – consists of a mere four lines and 24 words, outlining grammars of action heretical to the Waterfall bible:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan.

Agile is everything that Waterfall is not: lightweight, incremental, fluid, encouraging plasticity with regard to tasks, roles, scheduling, and planning. Whereas initial stakeholder requirements in Waterfall are binding – often the product of months-long planning – Agile’s recurrent testing of products in-progress allows for feedback, reevaluation, modification and pivoting. So-called ‘lean’ teams are released from their individual silos, pushed toward an ideal symbiosis via an endless ascesis of observing, listening, and questioning. Face-to-face interaction and exchange become conduits for learning and self-development. Team members are interchangeable, picking up variegated tasks as needed by the project, countervailing Waterfall’s tendency toward increasing atomization and autonomization of spheres. Here enters the figure of the Project Manager or Team Lead – whose responsibilities can overlap with the Product Manager or Product Owner, depending on the environment – who acts as a facilitator: connecting individuals, redistributing information, unifying energies, routing vectors.

Today, these values congeal around a set of shared terms and best practices. Much of the vernacular antedates Agile, inherited from previous collaborative workplace methodologies like Scrum, Extreme Programming, DSDM or Crystal – but all have now been absorbed into its lexicon. In the Agile firm, the temporal unit of measure is the sprint, so called for its brevity and its intensity. A sprint ranges between two and four weeks, instilling a sense of urgency in the sprinter. While the activity that takes place within it is not strictly scheduled, a predetermined chunk of work must be completed by the time one reaches the finish line.

On desktop, the sprint is visualized two-dimensionally as a horizontal tableau of epics, stories and tasks. Epics correspond to projects, subsuming several stories and tasks. Stories are kept intentionally vague, merely denoting a single functionality that the finished product must support, in the format of: ‘as a <type of user> I can <action> because <intent>.’ While multiple individuals are assigned to a story, single individuals are assigned to a task, which is the smallest byte of work in the project. In the same vein as the sprint, the stand-up or daily scrum keeps team members on their toes; at the start of each working day, programmers assemble and serially recite the tasks they have achieved since yesterday, and which tasks they intend to achieve before tomorrow.
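The nesting described here is easy to render concrete. Below is a minimal, purely hypothetical sketch in Python – the epic, stories, tasks and point values are invented for illustration, not drawn from any particular platform – of how such a board reduces to a simple data structure, with stories phrased in the template just quoted:

```python
# A hypothetical sprint board: tasks nest inside stories, stories inside an epic.
# All names and numbers here are invented for illustration.
sprint_board = {
    "epic": "Checkout redesign",
    "stories": [
        {
            "story": "As a shopper I can save my card because I don't want to re-enter it",
            "assignees": ["dev_a", "dev_b"],   # stories are shared between people
            "tasks": [                          # tasks are assigned to individuals
                {"task": "Add card-vault API call", "assignee": "dev_a", "points": 3},
                {"task": "Update payment form UI", "assignee": "dev_b", "points": 2},
            ],
        },
    ],
}

# The daily stand-up reduces to reciting this structure person by person:
for story in sprint_board["stories"]:
    for task in story["tasks"]:
        print(f'{task["assignee"]}: {task["task"]} ({task["points"]} points)')
```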

Agile has now not only usurped Waterfall as the dominant paradigm of IT, it has also spilled into areas beyond software development, including finance, sales, marketing and human resources. In recent years, a wave of gargantuan firms in legacy industries – Barclays, Cisco, Wells Fargo and Centene to name a few – have undertaken company-wide ‘Agile transformations’, plucking ‘coaches’ from top business schools and the Big Four consulting firms, and at the same time creating a vast market for productivity software suites made by big-cap companies like Microsoft, Atlassian, ServiceNow and SAP.

The question that nags is: why are employers across sectors willingly, even at considerable expense, instituting changes of this nature? Only two answers seem possible. The first is that these concessions are plain acts of benevolence on behalf of executives and shareholders, who aim to placate the legitimate malaise voiced by the artistic critique. The second is that a silent bargain between capital and wage-labour has occurred, with capital steadily shedding impediments to accumulation, and wage-earners forfeiting hard-won security in exchange for putative freedom.

It is clear that Agile dissolves many of the more visible features of hierarchical managerial control. But it does so only to recontain them in subtle and nuanced ways. For one, the self-organizing strategies of teams allow for certain workplace disciplinary mechanisms to take the form of normative compulsions rather than explicit instructions. Here, the complex interpersonal modalities of ‘sprint planning’ are illustrative. At the start of the sprint, teams convene to assess the tasks they’ve prioritized. One by one, the Project Manager identifies each task, describes what it entails, and asks if the criteria make sense. The goal is then for the team to assign story points to that task – a number that indexes its level of complexity. The team cannot discuss the next task until all team members have mutually agreed upon the current task’s number of points.

In hardline Agile firms, a device called point poker is used. In this game, team members blindly impute a number to the task – they ‘point’ – and then the Project Manager ‘reveals’, showing the level of complexity that each team member believes the task to be. There is an element of motivation psychology here: no programmer wants to be caught assessing a task assigned to them as exceedingly difficult, an anxiety that exerts a consistent downward pressure on the number of points assigned to tasks. Because points can be doled out until the sum of points reaches the velocity number – the maximum number of points the team can reliably complete within a single sprint – pressure is exerted upon individuals to shoulder a larger workload.
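The downward pressure on estimates has a simple arithmetic, which can be sketched as follows (hypothetical numbers throughout; ‘velocity’ and ‘story points’ as described above):

```python
# A toy sprint-planning calculation. If estimates drift downward under social
# pressure, more tasks fit beneath the same velocity ceiling. The same people
# quietly absorb more work. All numbers are hypothetical.
velocity = 20  # story points the team is deemed able to finish per sprint

honest_estimates = [5, 5, 3, 3, 3, 2, 2]      # what the tasks might really cost
deflated_estimates = [3, 3, 2, 2, 2, 1, 1]    # what nobody wants to admit they cost

def tasks_that_fit(estimates, ceiling):
    """Count how many tasks can be scheduled before the point ceiling is hit."""
    total = count = 0
    for points in estimates:
        if total + points > ceiling:
            break
        total += points
        count += 1
    return count

print(tasks_that_fit(honest_estimates, velocity))    # 5 tasks scheduled
print(tasks_that_fit(deflated_estimates, velocity))  # 7 (the whole backlog)
```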

The homeostatic regulation of the Agile team accrues additional advantages to capital. Its internalization of discipline renders redundant much of the managerial and supervisory strata of Waterfall. This is also the case with the necessarily transient nature of the connections between agents in Agile: links to product owners, leaders, managers, teams and outside firms can often last solely for the duration of a single project. A project-centric culture, coupled with the premium placed on lean teams and fungible team members, encourages wage-earners to move freely not only from project to project, but from firm to firm. Hence, we see the proliferation of recently devised hiring instruments like temping, subcontracting, outsourcing, zero-hours, freelancing and permalancing, forming part of the basis for what has come to be known as the ‘precariat’. In this way, through the mores and rhythms of the work itself, relationships are rendered tenuous, and employers are freed from the commitment to long-term employee well-being.  

The virtue most venerated in the Agile environment is autonomy – which, in practice, amounts to a cult of individual performance. All are engaged in a constant battle with ossification. Ceaseless self-education, self-training and self-improvement is required. Workers must practice both the one-upmanship of accruing more responsibilities on projects and the continual anticipation of future competency needs. Threatened by what Robert Castel calls ‘disaffiliation’, an anxious self-consciousness pervades the projective city. Make yourself useful to others – or die. Boundaries between work and non-work disintegrate, not least due to the voluntary labour one must perform to extend one’s network socially. Underlying each project, the long-term personal meta-project is employability. For that, it is not enough merely to complete the task: one must distinguish oneself.

There are also the more familiar corporate control mechanisms that Agile’s progenitors claim no longer exist. Principles of self-determination clash with technocratic implementation. In today’s corporate pantheon, the Product Manager or Product Owner is a kind of hybrid entrepreneurial-aesthetic visionary, embodying the qualities previously associated with the artist of high modernism. The software product is the outcome of his creativity and ingenuity. While his co-workers are not figured as employees, but as formally equal teammates or collaborators, behind this façade the Product Manager enforces an ever intensifying work regime: ticketing systems to monitor activity, daily stand-ups to create accountability, deadlines to meet quarterly revenue streams. By breaking down his reveries into actionable assignments to be fulfilled in the span of days, the Product Manager controls the pace at which technology is created. Decision-making rests upon his word; deploying the product as quickly as possible is his objective. Under this despot cloaked in a dissident’s garb, the Taylorist separation between conception and execution reappears in the projective city: as if Agile has itself succumbed to the rationalization it pledged to banish.

Read on: Rob Lucas, ‘Dreaming in Code’, NLR 62

The Mojibake

It is a truism that one only notices certain things when they break. An encounter with some error can expose momentarily the chaotic technical manifold that lies hidden below the surface when our devices function smoothly, and a little investigation reveals social forces at work in that manifold. For the history of capitalist society is caked in the layers of its technical infrastructure, and one need only scratch the surface of the most banal of technologies to discover that in them are fossilized the actions and decisions of institutions and individuals, companies and states. If our everyday interactions are quietly mediated by thousands of technical protocols, each of these had to be painstakingly constructed and negotiated by the sorts of interested parties that predominate in a world of capital accumulation and inter-state rivalry. Many persist as the outcome of survival struggles with others that proved less ‘fit’, or less fortunate, and which represent paths not taken in technical history.

In this sense, technical interrogation can cross over into a certain kind of social demystification; if the reader will tolerate a little technical excursus, we will thus find our way to terrain more comfortable to NLR readers. A string of apparently meaningless characters which renders an author’s name nonsensical is the clue that will lead us here towards the social history of technology. That name bears a very normal character – an apostrophe – but it appears here as &#039;. Why? In the ASCII text-encoding standard, the apostrophe is assigned code 39. On the web, enclosing a character’s numeric code between an ampersand-hash and a semicolon is a very explicit way of using some characters to encode another if you aren’t confident that the latter will be interpreted correctly; it’s called an ‘HTML entity’ – thus &#039; is a sort of technically explicit way of specifying an apostrophe.
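For readers who want to see the machinery at work, the correspondence between character, code and entity can be checked in a few lines of Python – a minimal sketch using only the standard library:

```python
import html

# The apostrophe sits at code 39 in ASCII (and hence in Unicode).
assert ord("'") == 39

# An HTML numeric entity spells that code out explicitly...
print(html.unescape("&#039;"))   # prints ' (the decoding a browser normally performs)

# ...and the standard library will produce the hexadecimal form of the same thing:
print(html.escape("it's"))       # it&#x27;s
```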

Until relatively recently there was a babel of distinct text encodings competing on the web and beyond, and text frequently ended up being read in error according to an encoding other than that by which it was written, which could have the effect of garbling characters outside the narrow Anglo-centric set defined by ASCII (roughly those that you would find on a standard keyboard in an Anglophone country). There is a technical term for such unwitting mutations: mojibake, from the Japanese 文字化け (文字: ‘character’; 化け: ‘transformation’). In an attempt to pre-empt this sort of problem, or to represent characters beyond the scope of the encoding standard in which they are working, platforms and authors of web content sometimes encode certain characters explicitly as entities. But this can have an unintended consequence: if that ampersand is read as an ampersand, rather than the beginning of a representation of another character, the apparatus of encodings again peeps through the surface of the text, confronting the reader with gibberish.
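Both failure modes are easy to reproduce. The sketch below, again only standard-library Python, with the Japanese word itself as the guinea pig, shows bytes written under one encoding being read under another, and an already-encoded entity being escaped a second time:

```python
import html

# The word itself, written out as UTF-8 bytes...
raw = "文字化け".encode("utf-8")

# ...then read back under a legacy single-byte encoding, as an older browser
# or mail client might have done:
garbled = raw.decode("latin-1")
print(repr(garbled))   # 'æ\x96\x87å\xad\x97å\x8c\x96ã\x81\x91' (a mojibake)

# The reverse mistake usually just fails, since most byte strings are not valid UTF-8:
try:
    "café".encode("latin-1").decode("utf-8")
except UnicodeDecodeError as err:
    print(err)

# And if an already-encoded entity is escaped a second time, the reader sees
# the machinery itself: &#039; appears literally on the page.
print(html.escape("&#039;"))   # &amp;#039;
```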

In large part such problems can be traced back to the limitations of ASCII – the American Standard Code for Information Interchange. This in turn has roots that predate electronic computation; indeed, it is best understood in the context of the longer arc of electrical telecommunications. These have always been premised on encoding and decoding processes which reduce text down to a form more readily transmissible and manipulable by machine. While theoretically possible, direct transmission of individual alphabetic characters, numerals and punctuation marks would have involved contrivances of such complexity they would probably not have passed the prototype stage – somewhat like the early computers that used decimal rather than binary arithmetic.

In the first decades of electrical telegraphy, when Samuel Morse’s code was the reigning standard, skilled manual labour was required both to send and receive, and solidarities formed between far-flung operators – many of whom were women – as 19th Century on-line communities flourished, with workers taking time to chat over the wires when paid-for or official traffic was low. Morse could ebb and flow, speed up and slow down, and vary in fluency and voice just like speech. But in the years after the Franco-Prussian War, French telegrapher Émile Baudot formulated a new text encoding standard which he combined with innovations allowing several telegraph machines to share a single line by forcibly regimenting the telegraphers’ flow into fixed-length intervals – arguably the first digital telecommunications system.

Though Baudot code was taken up by the French Telegraph Administration as early as 1875, it was not put to use so readily elsewhere. In the United States, the telegraphers were still gradually forming into powerful unions, and in 1907 telecommunications across the whole of North America were interrupted by strikes targeting Western Union, which included demands over unequal pay for women and sexual harassment. A year later, the Morkrum Company’s first Printing Telegraph appeared, automating the encoding and decoding of Baudot code via a typewriter-like interface. Whereas Morse had demanded dexterity and fluency in human operators, Baudot’s system of fixed intervals was more easily translated into the operations of a machine, and it was now becoming a de facto global standard.

‘Girl telegraph operators’ or ‘Western Union men’ striking in 1907.
The Morkrum Company rolled out its first completely automatic telegraphy machines in 1908, during the peak years of telegrapher militancy.

In 1924, Western Union’s Baudot-derived system was enshrined by the International Telegraph Union as the basis of a new standard, International Telegraph Alphabet No. 2, which would reign throughout mid-century (ITA1 was the name given retrospectively to the first generation of Baudot code). Though Baudot was easier than Morse to handle automatically, only a limited number of characters could be represented by its five bits, hence the uppercase roman letters that characterized the Western telegraphy of that moment.1 The European version allowed for É – a character that would remain absent from the core Anglo-American standards into the era of the web – but still lacked many of the other characters of European languages. There were some provisions for punctuation using special ‘shift’ characters which – like the shift key of a typewriter – would move the teletype machine into a different character set or encoding. But this shifting between modes was laborious, costly, and error-prone – for if a shift was missed the text would be mangled – pushing cautious senders towards simple use of the roman alphabet, in a way that is still being echoed in the era of the web. A 1928 guide, How to Write Telegrams Properly, explained:

This word ‘stop’ may have perplexed you the first time you encountered it in a message. Use of this word in telegraphic communications was greatly increased during the World War, when the Government employed it widely as a precaution against having messages garbled or misunderstood, as a result of the misplacement or omission of the tiny dot or period. Officials felt that the vital orders of the Government must be definite and clear cut, and they therefore used not only the word ‘stop’, to indicate a period, but also adopted the practice of spelling out ‘comma’, ‘colon’, and ‘semi-colon’. The word ‘query’ often was used to indicate a question mark. Of all these, however, ‘stop’ has come into most widespread use, and vaudeville artists and columnists have employed it with humorous effect, certain that the public would understand the allusion in connection with telegrams. It is interesting to note, too, that although the word is obviously English it has come into general use in all languages that are used in telegraphing or cabling.

A 1930 telegram demonstrating the use of the reliable all caps, with no punctuation and minimal use of numbers.

The Cyrillic alphabet had its own version of Baudot code, but depended on use of a compatible teletype, making for a basic incompatibility with the Anglo-centric international standard. The vast number of Chinese characters of course ruled out direct encoding in such a narrow system; instead numeric codes identifying individual characters would be telegraphed, requiring them to be manually looked up at either end. Japanese was telegraphed using one of its phonetic syllabaries, katakana, though even these 48 characters were more than Baudot could represent in a single character set. Thus technical limitation reinforced the effects of geopolitical hegemony to channel the world’s telecommunications through a system whose first language would always be English.
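The fragility of the shift mechanism described above can also be sketched in code. The following toy example uses a deliberately simplified subset of an ITA2-style table, not the full historical code, to show how a single shift character lost in transmission mangles what follows:

```python
# A toy, heavily simplified ITA2-style code: the same five-bit values mean
# different things depending on whether the receiver is in 'letters' or
# 'figures' mode. Only a tiny illustrative subset of the table is included.
LTRS, FIGS = 0b11111, 0b11011   # the two shift codes, as in ITA2
LETTERS = {0b00011: "A", 0b11001: "B", 0b01110: "C", 0b01001: "D"}
FIGURES = {0b00011: "-", 0b11001: "?", 0b01110: ":", 0b01001: "$"}

def decode(codes, drop_shift_at=None):
    """Decode a stream of five-bit codes, optionally losing one code in transit."""
    mode, out = LETTERS, []
    for i, code in enumerate(codes):
        if i == drop_shift_at:
            continue                # simulate a shift character lost on the wire
        if code == LTRS:
            mode = LETTERS
        elif code == FIGS:
            mode = FIGURES
        else:
            out.append(mode.get(code, "?"))
    return "".join(out)

stream = [0b00011, 0b11001, FIGS, 0b01001, LTRS, 0b01110]
print(decode(stream))                   # AB$C
print(decode(stream, drop_shift_at=2))  # ABDC (the '$' silently becomes a 'D')
print(decode(stream, drop_shift_at=4))  # AB$: (everything after the lost shift is garbled)
```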

Morkrum’s printing teletypes came to dominate American telecommunications, and after a name change to the Teletype Corporation, in a 1930s acquisition the company was subsumed into the giant Bell monopoly – which itself was to become practically an extension of the 20th Century American state, intimately involved in such things as Cold War missile defence systems. Having partly automated away the work of the telegraphers, in the early 1930s Teletype were already boasting in the business press of their role in facilitating lean production in the auto industry, replacing human messengers with direct communications to the assembly line – managerial missives telecommunicated in all caps, facilitating a tighter control over production.

Teletype Corporation ad, November 1931, showing a surprisingly early conception of ICTs as facilitating ‘lean production’.

With the propensity of telegraphic text to end up garbled, particularly when straying beyond the roman alphabet, the authority of those managerial all caps must sometimes have been lost in a blizzard of mojibakes. Communications across national borders – as in business and diplomatic traffic, for example – were always error-prone, and an incorrect character might mess up a stock market transaction. The limitations of Baudot code thus led to various initiatives for the formation of a new standard.

In the early 1960s, Teletype Corporation and IBM, among others, negotiated a new seven-bit standard, which could handle lower case letters and the standard punctuation marks of English. Yet even before it got to these components of the English language, the codes inserted into the beginning of this standard – with its space for a luxurious 128 characters – had a lot to do with the physical operation of particular bits of office equipment which Bell was marketing at the time via its Western Electric subsidiary. Thus while ASCII would carry codes to command a machine to ring a physical bell or control a feed of paper, it had no means of representing the few accented characters of French or Italian. The market unified by such acts of standardization was firstly national and secondarily Anglophone; the characters of other languages were a relatively distant concern.
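The trace of that office equipment is still visible at the bottom of the character table, as a quick check in Python will confirm (the error message in the comment is indicative):

```python
# Codes 0-31 of ASCII are 'control characters' inherited from the teleprinter:
# code 7 rings the terminal's bell, code 10 advances the paper by one line.
assert chr(7) == "\a" and chr(10) == "\n"

# English letters and punctuation fit comfortably into the 7-bit table...
print("don't stop".encode("ascii"))   # b"don't stop"

# ...but a single accented character does not:
try:
    "é".encode("ascii")
except UnicodeEncodeError as err:
    print(err)   # 'ascii' codec can't encode character '\xe9' ...
```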

Alongside these developments in existing telecoms, early computing struggled through its own babble of incompatible encodings. By the early 1960s, as computer networks began to spread beyond their earliest military uses, and with teletype machines now being called upon to act as input/output devices for computers, such problems were becoming more pressing. American ‘tech’ was in large part being driven directly by the Cold War state, and in 1968 it was Lyndon Johnson who made ASCII a national standard, signing a memorandum dictating its use by Federal agencies and departments.

The 1968 memorandum signed by Lyndon Johnson, effecting the adoption of ASCII as a national standard.

ASCII would remain the dominant standard well into the next century, reflecting American preeminence within both telecoms and computation, and on the global stage more generally (even within the Soviet Union, computing devices tended to be knock-offs of American models – in a nominally bipolar world, one pole was setting the standards for communications and computation). To work around the limitations of ASCII, various ‘national’ variants were developed, with some characters of the American version swapped for others more important in a given context, such as accented characters or local currency symbols. But this reproduced some of the problems of the ways that Baudot had been extended: certain characters became ‘unsafe’, prone to garbling, while the core roman alphabet remained reliable.

Consider the narrow set of characters one still sees in website domains and email addresses: though some provisions have been made for internationalization on this level, even now, for the most part only characters covered by American ASCII are considered safe to use. Others risk provoking some technical upset, for the deep infrastructural layers laid down by capital and state at the peak of American dominance are just as monoglot as most English-speakers. Like the shift-based extensions to Baudot, over time, various so-called ‘extended ASCII’ standards were developed, which added an extra bit to provide for the characters of other languages – but always with the English set as the core, and still reproducing the same risk of errors if one of these extended variants was mistaken for another. It was these standards in particular which could easily get mixed up in the first decades of the web, leading to frequent mojibakes when one strayed into the precarious terrain of non-English text.
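The failure mode that the extended-ASCII era made routine can be stated in a line: the same byte means different characters under different eight-bit code pages. A small sketch, using three of the many such pages:

```python
# One byte, three readings: 0xE9 is 'é' in Latin-1, the Cyrillic 'И' in the
# Russian KOI8-R encoding, and the Greek 'Θ' in the old DOS code page 437.
byte = bytes([0xE9])
for codec in ("latin-1", "koi8-r", "cp437"):
    print(codec, byte.decode(codec))
```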

Still, reflecting the increasing over-growth of American tech companies from the national to the global market in the 1980s, an initiative was launched in 1988 by researchers from Xerox and Apple to bring about a ‘unique, unified, universal’ standard which could accommodate all language scripts within its 16 bits and thus, effectively, solve these encoding problems once and for all. If one now retrieves from archive.org the first press release for the Unicode Consortium, which was established to codify the new standard, it is littered with mojibakes, having apparently endured an erroneous transition between encodings at some point.

The first press release for the Unicode Consortium itself displays encoding errors.

The Consortium assembled leading computing companies who were eyeing global markets; its membership roster since then is a fairly accurate index of the rising and falling fortunes of American technology companies: Apple, Microsoft and IBM have remained throughout; NeXT, DEC and Xerox were present at the outset but dropped out early on; Google and Adobe have been consistent members closer to the present. But cold capitalist calculation was at least leavened by some scholarly enthusiasm and humanistic sensibility as academic linguists and others became involved in the task of systematizing the world’s scripts in a form that could be interpreted easily by computers; starting from the millennium, various Indian state authorities, for example, became voting members of the consortium as it set about encoding the many scripts of India’s huge number of languages.

Over time, Unicode would even absorb the scripts of ancient, long-dead languages – hence it is now possible to tweet, in its original script, text which was originally transcribed in the early Bronze Age if one should so desire, such as this roughly 4,400-year-old line of ancient Sumerian: ‘? ??? ???? ? ?? ????? ??’ (Ush, ruler of Umma, acted unspeakably). It must surely be one of the great achievements of capitalist technology that it has at least partially offset the steamrollering of the world’s linguistic wealth by the few languages of the dominant powers, by increasingly enabling the speakers of rare, threatened languages to communicate in their own scripts; it is conceivable that this could save some from extinction. Yet, as edifying as this sounds, it is worth keeping in mind that, since Unicode absorbed ASCII as its first component part, its eighth character (the unprintable BEL, U+0007) will forevermore be a signal to make a bell ring on an obsolete teletype terminal produced by AT&T in the early 1960s. Thus the 20th Century dominance of the American Bell system remains encoded in global standards for the foreseeable future, as a sort of permanent archaeological layer, while the English language remains the base of all attempts at internationalization.
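That archaeological layering is directly inspectable: the first 128 Unicode code points simply are ASCII, bell and all. A brief check in Python:

```python
import unicodedata

# Unicode's first 128 code points reproduce ASCII exactly, so the teletype
# bell survives as U+0007 in every modern text format...
assert ord("\a") == 7

# ...and the apostrophe we began with keeps its old number, 39 (0x27):
assert ord("'") == 0x27 == 39

# Code points far beyond ASCII carry the newer arrivals discussed below:
print(unicodedata.name("🍥"))   # FISH CAKE WITH SWIRL DESIGN
print(hex(ord("🍥")))           # 0x1f365
```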

Closer to the present, the desire of US tech companies to penetrate the Japanese market compelled further additions to Unicode which, while not as useless as the Teletype bell character, certainly lack the unequivocal value of the world’s endangered scripts. Japan had established a culture of widespread mobile phone use quite early on, with its own specificities – in particular the ubiquitous use of ideogrammatic emoji (絵文字: ‘picture character’; this is the same ‘moji’ as that of mojibake). A capacity to handle emoji was thus a prerequisite for any mobile device or operating system entering the market, if it was to compete with local incumbents like Docomo, which had first developed the emoji around the millennium. Since Docomo hadn’t been able to secure a copyright on its designs, other companies could copy them, and if rival operators were not to deprive their customers of the capacity to communicate via emoji with people on other networks, some standardization was necessary. Thus emoji were to be absorbed into the Unicode standard, placing an arbitrary mass of puerile images on the same level as the characters of Devanagari or Arabic, again presumably for posterity. So while the world still struggles fully to localize the scripts of websites and email addresses, and even boring characters like apostrophes can be caught up in encoding hiccups, people around the world can at least communicate fairly reliably with Unicode character U+1F365 FISH CAKE WITH SWIRL DESIGN (🍥). The most universalistic moments of capitalist technology are actualized in a trash pile of meaningless particulars.

1 Bit: a contraction of ‘binary digit’; the smallest unit of information in Claude Shannon’s foundational information theory, representing a single unit of difference. This is typically, but not necessarily, represented numerically as the distinction between 0 and 1, presumably in part because of Shannon’s use of George Boole’s work in mathematical logic, and the mathematical focus of early computers. But the common idea that specifically numeric binary code lies at the heart of electrical telecoms and electronic computation is something of a distortion: bits or combinations of elementary units of difference can encode any information, numerical or other. The predominant social use of computing devices would of course prove to be in communications, not merely the kinds of calculations that gave the world ‘Little Boy’, and this use is not merely overlaid on some mathematical substratum. The word ‘bit’ might seem an anachronism here, since Baudot’s code preceded Shannon’s work by decades, but the terms of information theory are nonetheless still appropriate: Baudot used combinations of 5 elementary differences to represent all characters, hence it is 5-bit. This meant that it could encode only 31 characters in total: if a single bit encodes a single difference, represented numerically as 0 or 1, each added bit multiplies by 2 the informational capacity of the code; thus 2⁵ = 32 possible codes, one of which – the blank, all-zeros code – is unusable as a character, hence 31. Historically, extensions to the scope of encoding systems have largely involved the addition of extra bits.

Read on: Rob Lucas, ‘The Surveillance Business’, NLR 121.