Category Archive: Analysis

  1. Mercenary State

    South Africa today is among the world’s foremost players in state and private militarization, producing weapons and exporting expertise for militias and national armed forces. This stems in part from a sprawling domestic security industry: In 2021, personal spending on private security in the country totaled $17 billion, or 4.2 percent of GDP, and the South African government itself spent a further $880 million on private security. Meanwhile, the country boasts one of the strongest armies in Africa, playing a regional “peacekeeping role” and supporting other state militaries in combat against rebels.

    These different components contribute to South Africa’s “securitization complex,” established during apartheid and composed of arms companies, mercenary groups, the military, and contracted security expertise. Over seven decades, the South African apartheid state’s security industry quashed local uprisings and decolonial struggles, backed its fellow apartheid regimes, and suppressed the supposed Black and communist threats of the “swart gevaar” and “rooi gevaar.” The apartheid-era South African army, the South African Defense Force (SADF), deployed its forces to destabilize liberation movements in Angola and Mozambique through the twenty-three-year-long “Border Wars” with Namibia, Zambia, and Angola. Veterans of this era have since joined various privatized security companies based in South Africa, which operate throughout the region.

    Today, one of apartheid’s greatest remnants is this vast securitization complex. Despite the declining presence of the South African military in domestic life, militarization has been “exported” through a privatized network of military services and weapons sales for a host of different clients, including states, multinational corporations, and other armed groups. The reach of these industries has spread beyond the African continent, implicating geopolitical warfare across the world.

    “Corporate warriors”

    For most of the twentieth century, the SADF enforced the apartheid state’s laws, making it infamous for its use of brute force, live fire against civilians, and other crimes against humanity. From 1967 to 1993, it was compulsory for young white men to complete nine months of military service, leading over 600,000 soldiers to serve in the army during that period. During the “Border Wars,” the SADF was deployed to destabilize other newly independent southern African countries.1 Later, the SADF occupied Namibia, where it waged war against the local liberation movement. But following the end of apartheid in 1994, the army evolved into the South African National Defense Force (SANDF). The post-apartheid force sought to integrate troops from the SADF with guerrilla forces associated with the African National Congress’s Umkhonto we Sizwe, the Pan Africanist Congress, and the Inkatha Freedom Party. The SANDF abandoned the conscription policy and now employs 70,000 service members, a drastic reduction from its earlier numbers.

    The apartheid-era government’s engagement with dirty conflicts beyond state borders, alongside the transformation from the SADF to the SANDF, spurred the rise of a major mercenary and private security industry that spilled into the rest of the continent. With the widespread unemployment of SADF soldiers and South African police officers, investors with financial links to the apartheid regime began supporting small security companies. Former soldiers represented a perfect security contingent for a wide range of services: protecting South African assets in Africa, other African states’ assets, and corporate holdings—typically oil fields and extractive projects—and offering private security for politicians, wealthy elites, and the growing numbers of the South African middle class living in gated communities. 

    As Peter Warren Singer, a foreign policy analyst and former US military consultant, describes in his book Corporate Warriors, recruitment was easy. Private security offered well-paid jobs to soldiers with a wide range of experience in regional combat and domestic security. Singer notes that almost 60,000 soldiers left the SADF after 1994; those newly unemployed workers lacked experience in any other field and sought to regain the prestige they had lost under the transitional government. Private security companies paid significantly more than any government position, with monthly salaries ranging from $2,000 to $13,000. The soldiers included white and Black South Africans, Namibians, and Angolans, though most high-level employees were white.2

    Today, as South Africa maintains one of the highest violent-crime rates in the world, it also hosts a robust security industry that makes up a significant portion of domestic employment and GDP. According to the Private Security Industry Regulatory Authority (PSiRA), a state-mandated regulatory body for the private security sector, by March 2024 the country had 2.9 million registered private security officers and 20,709 registered private security companies, compared to 150,000 police officers.3

    South Africa’s security businesses, sometimes known as “consultancies,” were established by high-level police and army personnel from the pre-democracy era. These businesses have been trusted for their “knowledge” and “technical expertise,” partially gained through their role in quelling anti-apartheid uprisings over the years. Former South African army generals and police officers have set up joint companies with politicians from Mozambique and Angola, and South Africans commonly serve as consultants to private security companies in the region.

    The most notorious South African private military company was Executive Outcomes (EO), founded by apartheid army officer Eeben Barlow. Created in 1989 during the decline of apartheid, EO recruited fighters who had left the SADF after the Border Wars, as well as veterans of the ANC’s military wing, Umkhonto we Sizwe, before the company was officially dissolved at the end of 1998.4

    During the 1990s, EO was stationed in Kenya and the Democratic Republic of Congo (then Zaire). Some observers called EO “a private Pan-African peace-keeping force of a kind which the international community has long promised, but failed to deliver.” In his book, Singer praises EO as “a true innovator in the overall privatized military industry, providing the blueprint for how effective and lucrative the market of forces-for-hire can be.”5

    EO offered five key services: strategic and tactical military advisory services; an array of sophisticated military training packages in land, sea, and air warfare; peacekeeping or “persuasion” services; advice to armed forces on weapons selection and acquisition; and paramilitary services. Training packages covered the entire realm of military operations, including everything from basic infantry training and armored warfare specialties to parachute operations.6 The company managed to enter into business with African governments that had supported the ANC during the anti-apartheid movement. EO was able to insert itself among elite governmental networks and obfuscate the payment of bribes through offshore shell companies. It offered political leaders a combination of weapons, bribes, and power, blending South Africa’s historical weapons expertise and technology with the colonial legacy of corruption that still plagues the continent.

    Although EO was disbanded in 1998, the company created a model for hundreds of other private security and mercenary groups in South Africa to follow. These burgeoning organizations would dedicate themselves to protecting oil and gas assets and mining sites, and to working alongside local soldiers. Former EO mercenaries remain scattered around the world long after the company’s dissolution, reportedly found in combat in Libya, working under the US Army and Blackwater in Iraq, and taking part in the attempted coup in Equatorial Guinea in 2004.

    While private security groups in South Africa have flourished since the fall of apartheid, the SANDF has suffered from weakened power and a tarnished reputation. Though its apartheid-era ancestor played a major role in performing the tasks of domestic social control, the SANDF is largely invisible in daily South African life today. At the domestic level, the force is mostly relegated to army bases or brought out to police neighborhoods during intense bouts of gang violence. But at the international level, the SANDF continues to be a major actor, fighting other rebel groups and securing the assets of extractive industries essential to both the South African state and multinational corporations. 

    Weapons trade

    The post-apartheid securitization complex has evolved to include many interrelated sectors and ambitions, but it is in weapons manufacturing where we can discern the roots of a national defense and security sector. Present-day weapons development in South Africa draws on the legacy of the apartheid state-owned industry. In 1961, after Pretoria left the Commonwealth, it lost the bulk of its access to British weapons. This shift, alongside the voluntary United Nations arms embargo on the apartheid state in 1963, meant that the country could no longer rely on arms purchases from foreign states.

    In 1968, the government created the Armaments Corporation of South Africa (Armscor) to ensure a domestic weapons supply under the embargo. However, domestic production relied on technological transfers and assistance from other countries, which would violate the terms of the embargo. An illegal network was built around facilitating the flow of weapons to and from apartheid South Africa, both for international combat and for urban warfare and domestic repression within and outside its borders.

    South Africa and its trade partners also sought to test their arms. Israel and South Africa, for example, conducted joint missile tests on South African soil and regularly co-developed weapons to be used by both states during this period. Since the mid-twentieth century, South Africa has remained a popular location for the development of new arms. Through Armscor, which also doubled as a procurement arm, South Africa became a key producer of armored vehicles for domestic warfare. The weapons industry was crucial to the survival of the apartheid government, allowing it to suppress local uprisings and pursue wars with Angola and Namibia.

    In 1992, between the repeal of apartheid legislation and the first democratic elections, the state-owned Armscor split into two entities: Denel, now the manufacturing arm of the state weapons industry, and Armscor, the procurement and weapons-testing entity for the SANDF. For years, Denel maintained a strong relationship with European weapons giants, including Germany’s Rheinmetall. Following the end of apartheid, Denel and Rheinmetall established a joint venture, Rheinmetall Denel Munitions (RDM), with the German manufacturer taking 51 percent ownership. Present-day Denel, however, is a shell of its former self, having received government bailouts of $500 million over the last five years just to stay afloat. In 2024, the government allocated $78 million to Armscor while refusing new funding to Denel, though it made clear that the company would not be allowed to collapse. Denel, RDM, and forty-seven other South African defense companies have clients on every continent. Denel was one of the first producers of the popular G5 155mm howitzer, a weapon used operationally in several conflicts, including the Angolan Civil War and the Iran-Iraq War.

    South African companies have also increased their investment in munitions-manufacturing factories. Most recently, in January 2023, RDM announced it would build an ammunition-manufacturing factory for the Hungarian state in Várpalota. In its statement, RDM said the contract covers the supply of plant engineering, technology, and process know-how, along with the associated documentation, training, and all activities necessary to achieve full-scale production. RDM has established ammunition-manufacturing plants in thirty-six countries over the last thirty years, its most recent customers being Saudi Arabia and Egypt. South African accents are thus commonly heard at arms manufacturing companies around the world.

    Despite a perceived decline in South African weapons expertise following the end of apartheid, weapons manufacturers continue to hold sway in southern African politics. In 2013, the Telegraph found that the head of South Africa’s Paramount Group, Africa’s largest private defense and aerospace company, had persuaded then-Malawian President Joyce Banda to purchase $145 million worth of arms in exchange for public relations and media support for her political campaign.7

    The dominance of the securitization complex in South African politics becomes clear when scrutinizing the absence of effective domestic controls on weapons exports. In 2023, the National Conventional Arms Control Committee (NCACC), the South African state body that controls weapons licensing, reported that the country exported $2.3 billion worth of munitions to fifty-nine countries. The majority of South African weapons are sent to Europe: In 2023, Hungary and Germany accounted for 24 and 26 percent of total arms exports, respectively, while 20 percent went to African countries and 8 percent to the Middle East.

    Israel was South Africa’s largest weapons importer from 1977 to 1987, when the weapons embargo on South Africa’s apartheid state and US pressure forced Israel to sanction South Africa. After the democratic transition, the NCACC implemented a ban on weapons sales to Israel in 2004, but this policy has been easy to circumvent. Although the NCACC requires importers to seek approval for onward sales of South African weapons through an end-user certificate (EUC), the process relies on good faith. South African-produced 155mm howitzers are sold to Germany, Israel’s second-largest weapons supplier, and potentially make their way to Israel without South African approval. The NCACC is also responsible for enforcing South Africa’s mercenary laws: any South African citizen wishing to fight in another country’s army must receive NCACC approval. Still, for decades, South African nationals have fought in the Israel Defense Forces with few repercussions.

    The NCACC itself is subject to industry and geopolitical pressures. In 2021, after Open Secrets questioned the NCACC on whether South African weapons sold to Saudi Arabia had been used in Yemen, the NCACC responded that the matter was “none of its concern.” When the NCACC inserted a clause into the EUC allowing on-site inspections of South African weapons exported to Saudi Arabia, amid concerns over their use in Yemen, Saudi Arabia, the United Arab Emirates, and the domestic weapons industry all disapproved, forcing the NCACC to repeal the change.8 On the global stage, South Africa’s rhetoric of “peacekeeping” is undermined by the absence of effective enforcement mechanisms in the arms trade. This contradiction has gained more attention in the wake of the country’s high-profile case against Israel in the International Court of Justice.

    Exported militarization

    Today, South Africa’s public and private sector militarization is increasingly linked to an extractivist economic model. South Africa’s own energy crisis—marked by frequent and extended blackouts and load-shedding across the country—has heightened the need for external, reliable energy sources. South African companies, including Sasol, Exxaro, and Kumba Iron Ore, hold significant investments in Mozambique’s natural gas and coal sectors. Eskom, South Africa’s state-owned electricity company, imports electricity from the Cahora Bassa hydroelectric dam in Mozambique. Protecting these assets has become the primary objective of both the SANDF and private South African security forces.

    From July 2021 to July 2023, the SANDF was stationed in Cabo Delgado province as part of the Southern African Development Community Mission in Mozambique (SAMIM), a regional peacekeeping mission set up by the Southern African Development Community (SADC). The force was sent to protect three major gas projects in Cabo Delgado—the largest in Africa—valued at a total of $50 billion: the Mozambique Liquefied Natural Gas project (Mozambique LNG), led by TotalEnergies; Coral South Floating Liquefied Natural Gas, led by Eni and ExxonMobil; and Rovuma LNG, led by Eni, ExxonMobil, and the China National Petroleum Corporation. Through the Industrial Development Corporation, the Development Bank of Southern Africa, and the Export Credit Insurance Corporation of South Africa (ECIC), the South African government invested heavily in the projects. The ECIC provided Mozambique LNG with $1.2 billion in funding, and several private South African companies won smaller contracts, with the major logistics enterprise Grindrod contracted to build ports and transportation.

    Insurgent attacks on Cabo Delgado security personnel and communities surrounding the Afungi Park began in 2017, sparking a war between the insurgents, the Mozambican and Rwandan militaries, SAMIM, and international private security and mercenary groups, including the notorious Russian Wagner Group. SAMIM, which brought together soldiers from Angola, Botswana, the Democratic Republic of Congo, Lesotho, Malawi, South Africa, Tanzania, and Zambia, was ultimately a failure. The mission faced allegations of human rights violations, with Human Rights Watch reporting that soldiers sexually assaulted civilians and mistreated and mutilated the dead. At the same time, insurgent attacks continued to intensify. Thousands were killed and over one million people were displaced. In May 2024, SAMIM officially announced its withdrawal from the region.

    The Mozambican government also hired the Dyck Advisory Group, a South African private security company previously accused of human rights violations, for security assistance in Cabo Delgado. Following a combat operation, fifty-three witnesses told Amnesty International that Dyck operatives fired at civilian infrastructure, including hospitals, schools, and homes; indiscriminately fired machine guns from helicopters; and dropped hand grenades into crowds of people. While the Mozambican government quietly decided against renewing the Dyck Advisory Group’s contract, the group was never held accountable for its crimes. Officially labeled a private security company, the group functioned as a mercenary outfit—troops for hire by the government, protecting strategic assets in a model of “exported militarization.”

    Outside of Mozambique, 2,900 SANDF soldiers today remain stationed in Kivu province in the Democratic Republic of Congo as part of the SADC mission targeting M23 rebels. Kivu is home to South African mining companies MPC Mining and Alphamin Bisie Mining, both accused by local communities of promoting land grabs and causing displacement. In 2023, the mining industry contributed 7.3 percent—$11 billion—to South Africa’s GDP.

    Conflicts around these sites of extraction have resurrected South Africa’s role in quelling protest and waging counterinsurgency. Today, in place of a focus on domestic uprising, South Africa’s securitization complex consists of a booming domestic private security industry paired with an exported model of militarization centered on the weapons trade and the provision of military services. This complex maintains strong ties to political elites and vested economic interests, while also accounting for a significant share of South Africa’s national income and employment. The result is a disturbing continuity between the democratic state and its apartheid predecessor. While the image of South African militarization has changed, a powerful securitization complex remains fundamental to the state’s reproduction.

  2. The Real Economy

    The following is an adapted excerpt from The Real Economy, published on February 25, 2025, by Princeton University Press.

    No discipline in the humanities or social sciences today has a convincing theory of the economy. Long preoccupied with honing methods, the core of the discipline of economics has abandoned investigation into what the economy really is. Preoccupied with either appropriating or criticizing the methods of economics, other disciplines have failed to articulate any alternative conceptions of the economy.

    A conspicuous absence of a convincing theory is what prevails today, but it was not always the case. In the period stretching from 1890 to 1930, when the modern discipline of economics was formed, fierce debates raged, as many different “visions” of the economy, as the economist Joseph Schumpeter once put it, circulated and competed with one another. This period—after Marx made the last great contribution to “political economy” but before the triumph of “neoclassical” economics—was a moment of “methodological pluralism.” For figures like Schumpeter, the subject of economics was by no means obvious. Rather, the very task of positing an economic problem, he wrote, would require that “we should first have to visualize a distinct set of coherent phenomena as a worthwhile object of our analytic efforts.”

    In my encounter with these “years of high theory,” as one chronicler characterized them, the economics of Keynes and Veblen have loomed largest. Veblen and Keynes were economic theorists writing before neoclassicals transformed economic theory into an entirely mathematical affair, and both preferred verbal exposition (Veblen nearly exclusively). Keynes titled his most important books Treatise on Money and The General Theory; Veblen, The Theory of the Leisure Class and The Theory of Business Enterprise. While these texts stand today as the most notable from the period, the economic visions of a great many twentieth-century economic theorists also crossed diverse institutionalist, post-Keynesian, Marxist, Austrian, French Regulation, and even neoclassical traditions—from Irving Fisher to John Hicks, Joseph Schumpeter, Frank Knight, Joan Robinson, Albert Hirschman, Nicholas Kaldor, and others. All, if in different ways, cultivated or sought to carry forward the rich legacies of pre-World War II economic theory.

    By revisiting traditions of literary economic theorizing that, if not completely lost, have long been overshadowed in the discipline of economics, and have not often been considered outside of it, my goal is to articulate a theory of the economy that is open to rich empirical study from multiple disciplinary and methodological perspectives, across the social sciences and humanities. Perhaps the effort will be of interest to contemporary economists working in any tradition. No less, my ambition is to convince non-economists in the interpretive social sciences and humanities that there are traditions in economic theory of as much ecumenical interest as traditions in, say, philosophy, social theory, cultural theory, or political theory. Of course, I also wish to convince historians that it is worth conceptualizing the economy in the ways outlined here. Finally, while I believe this effort can help sharpen capitalism as a category of analysis, for it to succeed the theorization of the economy must be more generally valuable, beyond capitalism—including for understanding what kind of economy existed before capitalism, and what it would mean for an economy to come after it.

    Neoclassical economics and the real

    Economics abandoned a subject matter for a method in the decades after World War II, with the triumph of “neoclassical” economics, as Veblen first branded the paradigm in 1900. In the first half of the twentieth century, economics was intensely divided about whether to focus upon a subject or a method, as well as—not unrelatedly—which subjects and which methods. While the other social science disciplines of the twentieth century debated and fixed their subjects, sorting and cycling through methods, economics became an overwhelmingly methods-focused discipline. Perhaps only philosophy—tellingly, an avowedly metaphysical enterprise—can rival economics in this respect. To state the point most provocatively: economics has no subject. That helps explain its great success as the most “imperialist” social science. Fixated on method, economics began to freely roam across a great many subjects.

    The argument over subject conveyed a long inheritance that went back at least as far as the eighteenth- and nineteenth-century project of “political economy.” Here is how Adam Smith defined political economy in An Inquiry into the Nature and Causes of the Wealth of Nations (1776):

    considered as a branch of the science of a statesman or legislator, proposes two distinct objects: first, to provide a plentiful revenue or subsistence for the people, or more properly to enable them to provide such a revenue or subsistence for themselves; and secondly, to supply the state or commonwealth with a revenue sufficient for the public services. It proposes to enrich both the people and the sovereign.1

    The object of political economy defined its subject: the absolute generation of wealth, either in the form of subsistence or of revenue. A little more than a century later, the Cambridge economist Alfred Marshall began his Principles of Economics (1890), a landmark in the transition from political economy to economics, with this first sentence: “Political economy or economics is a study of mankind in the ordinary business of life; it inquires how he gets his income and how he uses it. Thus it is on the one side a study of wealth; and on the other, and more important side, a part of the study of man.”

    Marshall still emphasized the “study of wealth.” At the time, so too did rival schools of economics, including the German historical school, then at its peak at the turn of the twentieth century. In one representative German textbook, the economy referred to “all those processes and arrangements that are directed to the constant supply of human beings with material goods.” Material goods kept an accent on wealth. After World War I, US-based institutionalist economics, influenced by the fleeting German historical school and claiming inspiration from Veblen, did too. Walton H. Hamilton’s “The Institutional Approach to Economic Theory,” the address to the American Economic Association that first named that school, related that approach to the study of “material wealth.”2 In the 1930s, John Maynard Keynes, in his rebellion from Marshall, his teacher, sought to invent a new “theory of output as a whole,” or, as he called it, “the wealth of the community.”3 During the interwar moment of methodological pluralism in the budding academic discipline of economics, the study of wealth—a subject—was prevalent, even if the method or methods compatible with it were not at all agreed upon.

    But the more important aspect, as Marshall formulated in his Principles of Economics, was economics as “the study of man.” On this side, Marshall contributed to the “marginalist revolution” in economic theories of value—relative value, not absolute wealth.4 With this revolution, neoclassical economics was born, and with Marshall it rose into the mainstream. As Hamilton noted, what he called “value economics” was a second, distinct tradition in contrast to what Marshall called “the study of wealth,” or what we might call by contrast “wealth economics.” It went at least as far back as Smith, too. The relative values of goods in exchange—why people value some things relative to others—had greatly concerned Smith and the generations of political economists who followed in his wake. That included Marx, who by relating his Ricardian-inspired labor theory of value to his “law of capitalist accumulation” made the greatest attempt ever to completely integrate an economics of (absolute) wealth with an economics of (relative) value.

    The neoclassical marginalist revolution focused on transforming value economics, setting wealth economics to the side. By contrast to political economists from Smith to Marx, who argued for objective, cost-of-production theories of relative value, Marshall and his peers advocated a psychological and subjective theory of utility satisfaction at the margin. The theory posited a form of economizing behavior or conduct.

    However, the marginalist theory of value was not the only account of economic behavior at this time. Consider Max Weber’s struggles to define the “concept of economy” upon his appointment as a professor of economics (not sociology!) at Freiburg in 1898, replacing a member of the German historical school.5 Weber declared in the posthumous Economy and Society that, while avoiding “the much-debated concept of ‘value,’” it was still possible to theorize “economy.” Weber focused on “economic activity,” by which he meant “careful choice among ends,” or “rational calculation.” This required a sociological account of valorization, without committing to the full-blown axiomatic theory of “relative value” that marginalists advocated.6

    Soon, as Hamilton noted, US institutionalists were also preoccupied with economic activity, appealing to “instinct, impulse, and other qualities of human behavior,” including the institutional processes of economic valuation that could not be reduced to neoclassical axioms.7 Keynes’s General Theory underscored the various “propensities,” “motives,” and “preferences” that explained the contingent valuation of capital assets, as well as the investment and consumption choices that determined “output as a whole.”8

    When neoclassical economics triumphed after World War II, what prevailed was marginalist value economics refashioned as a generalized method for an extraordinarily unspecified subject, the “study of man.” There was no longer any attempt to link this economics back to the study of wealth. It was Lionel Robbins who in 1932 defined economics with respect to a certain kind of conduct alone, abjuring wealth altogether. Economics, Robbins wrote, is “the science which studies human behavior as a relationship between ends and scarce means which have alternative uses.”9

    The definition captured several important stipulations of what became, broadly speaking, twentieth-century “microeconomics,” a term that first emerged during the 1930s. Its great founding text was John Hicks’s Value and Capital, written by Robbins’s junior colleague at the London School of Economics. Value and Capital barely referred to the economy at all. When it did, passingly, Hicks inferred that the economy was nothing more than the sum composed by the “preferences of the individuals” in it. Hicks’s inspiration was the Frenchman Léon Walras, a founder of marginalism who first posited the possibility of a “general equilibrium” among all markets. Value and Capital focused on putatively economic topics, whether capital, employment, or consumption, but all from the perspective of a mathematical economics of relative value. The word “wealth” appeared four times in Value and Capital.10 During the 1930s, Robbins rightly sensed that what was being developed in the neoclassical camp was a generalized method for the study of individual choice under conditions of scarcity and substitution—anywhere it occurred, inside or outside the economy, whatever it might be.

    Neoclassical synthesis

    During the decades after World War II, together with neoclassicism, Robbins’s definition of economics became hegemonic. In short, rather than wallowing in knotty subject/method problems, neoclassical economists simply chose method over subject and got over it. The shift was subtle. By claiming human behavior and not the economy as its subject, economics claimed for the unique value of its discipline the contribution of a method. To get analytical traction, however, that method required making highly restrictive assumptions about human behavior, which often required excluding subject matters that, by seemingly any possible definition, would count as fundamental to the economy. Doing so, in the latter half of the twentieth century neoclassical economists successfully constructed a highly idealized but extraordinarily powerful method, in which—in the final step—compelling arguments must be expressed, à la Hicks, in mathematical or, relatedly, quantitative form. By jettisoning subject, economists by and large abandoned verbal exposition as well.

    At MIT, Samuelson in 1955 first announced the “neoclassical synthesis” between Walrasian general equilibrium microeconomics and a special-case Keynesian macroeconomics of aggregates, the latter applicable when the economy for some reason entered slumps in total output and employment.11 He utilized a new term—“the economy.” As historians and allied scholars have argued, in postwar public debates throughout the world, economists helped discursively fix the economy according to the macroeconomic concepts of output, growth, prices, employment, and development.12 The legacy is that we still say today, intelligibly, that the economy is growing, is developing, or that employment in it is rising or falling.

    But in economics, as opposed to public discourse, the postwar discursive fix was highly unstable.13 To his credit, Samuelson would outright admit the logical inconsistency of the neoclassical synthesis—of simply papering over the centuries-old tension, going back to Adam Smith, between the economics of absolute wealth, which leaned toward the study of a subject, and the economics of relative value, which led neoclassicals toward the application of a method. Microeconomics operated on one track, macroeconomics on another. There was no attempt to integrate them while taking each seriously.

    Smith had at least tried to integrate his understanding of “economic sentiments” in market exchange with his account of the wealth of nations; Marx had at least tried to integrate his labor theory of value with his law of capitalist accumulation; Weber had at least tried to integrate his sociological theory of “economic activity” with “the economy,” in the sense, as he put it, of “the optimal utilization of given means of production to meet the demand for goods on the part of a given human group”; Keynes had at least tried to integrate his analysis of propensities, motives, and preferences with his macro-theory of output as a whole; Veblen had at least tried to integrate his theory of habit, especially habits of emulation and predation, with his account of pecuniary valuation and wealth ownership. The neoclassical synthesis was far more modest. “The way I finally convinced myself was to just stop worrying about it,” Samuelson would reflect.14 That was nothing much to be ashamed of. Often, intellectual formations overcome their contradictions therapeutically in this way. They do not solve them; they simply stop worrying over them and move on—if only for a time.

    What postwar economists began to worry about most was how to properly reason through models, using mathematics. If the economy now existed, it existed in an idealized world within the logical space of a model, not in the world outside of it. By intent, the discipline began to lose contact with what Marshall called the “ordinary business of life.” For instance, postwar general equilibrium theory required excluding money from standard models by assumption. In a remarkable rhetorical flourish—according to what in 1949 Don Patinkin first named the “classical dichotomy” separating the monetary and the real—what postwar economists began to refer to as the “real economy” excluded money altogether.15 That is, the closest twentieth-century neoclassical economics ever got to defining the subject matter of the economy was to mathematically model an idealized real economy, in which money is taken to be a nominal factor alone, and does not really exist.

    By the 1960s, the Chicago School had begun to look past markets. In The Economic Approach to Human Behavior (1976), Gary Becker cited the Robbins definition approvingly. “The definition of economics in terms of scarce means and competing ends is the most general of all. It defines economics by the nature of the problem to be solved,” regardless of the domain of life in which the problem appears. Economics need not focus on “the market sector,” Becker emphasized. He concluded, “what most distinguishes economics as a discipline from other disciplines in the social sciences is not its subject matter but its approach.”16 Ronald Coase, famous for importing this approach into the study of legal institutions and inspiring an institutional economics of a very different type than what Hamilton promoted in 1919, approvingly cited both Robbins and Becker before summing up this tradition best when he wrote, “economists have no subject matter. What has been developed is an approach divorced (or which can be divorced) from subject matter.”17

    Thus was launched the imperialism of microeconomic approaches to human behavior across the social sciences, which soon made its mark in political science, law, sociology, and history among other disciplines. The method of economics may have required using strong assumptions about human behavior and its context—Becker cited “maximizing behavior, market equilibrium, and stable preferences, used relentlessly and unflinchingly”—that to this day make the jaws of most humanists and social scientists who are not economists or sympathetic to their methods drop to the floor. When economists invade their territory, however, mouths close and teeth gnash. For individual choice under conditions of scarcity and substitution really is a problem that often occurs not just in markets, and not just in the economy, however one wants to define it or mark its boundaries, but also elsewhere, in politics, family life, or other social arrangements. Indeed, the method of economics has achieved impressive analytical traction across multiple domains. Of its limits, however, this was all Becker was prepared to concede: “I do not suggest that concepts like the ego and the id, or social norms, are without any scientific content. Only that they are tempting materials … for ad hoc and useless explanations of behavior.”18

    It was in 1988 that Coase declared that economists “have no subject matter.” Their method, born of high abstract theory, could travel anywhere. By then, important changes were already afoot, pointing the discipline in new directions. In the 1980s, applied econometrics and applied microeconomics were becoming ever more prominent fields.19 One explanation for this was the increasing real-world policy influence of microeconomics in the United States during the 1960s, on issues such as crime, poverty, discrimination, and education.20 Yet, trends in macroeconomics—still the most policy-relevant branch of the discipline—cut back in the old direction. The search for “microfoundations” in “rational expectations” starting in the 1970s transformed macroeconomics.21 Microeconomics and macroeconomics—relative value and absolute wealth—did not finally integrate so much as micro largely swallowed macro. As one influential 2010 survey explained:

    Many macroeconomists have abandoned traditional empirical work entirely, focusing instead on “computational experiments,” as described … by Kydland and Prescott (1996). In a computational experiment, researchers choose a question, build a (theoretical) model economy, “calibrate” the model so that it mimics the real economy along some key statistical dimensions, and then run a computational experiment by changing model parameters (for example, by changing tax rates or the money supply rule) to address the original question. The last two decades have seen countless studies in this mold, often in a dynamic stochastic general equilibrium framework.22

    By this time, the signifier “real” in economics could either refer to any humdrum empirical reality outside the window of a university academic office, or the idealized “real” of the highly abstract models written from within them. In the above passage, the “real” referred to that reality which exists outside the model. Yet, the brand of macroeconomics being described above, privileging the logical coherence of abstract mathematical models, somehow referred to itself with the moniker “real business cycle theory.” Criticizing this school of macroeconomics in the wake of the global financial crisis of 2007–2008, the economist Paul Romer named it “post-real” economics. In his view, that was how divorced from empirical reality its assumptions had become.

    Two senses of the real—critical and constructive

    Of course, the disciplinary evolution of economics does not end at the turn of the twenty-first century. The narrative I have offered of its twentieth-century trajectory, focusing on some of the crucial turning points of the postwar era, is admittedly condensed and oversimplified. On the other hand, the issue cannot simply be confined to economics: historians and allied members of the interpretive social sciences and humanities had no theory of the economy either. True, historians wrote outstanding and essential genealogies of the concept “economy,” but this work consciously stopped short of theorizing economy, in a positive sense. Capitalism represented a better option, as it was a concept abundantly theorized, even if contentiously so, by such grand thinkers as Karl Marx, Karl Polanyi, or Max Weber, the initiators of traditions in which labor, the commodity, the market, or instrumental rationality were the central analytical categories, not the economy. But to talk about Capitalism, must we not know what makes an economy capitalist? To know that, surely, must we not know what the economy is?

    To work toward a compelling theory of economy, theorists take two approaches—one constructive, another critical. There is talk of the real economy when there is confidence in specifying conceptually what the economy is, or when there is faith that it has been positively fixed for good use. Yet we may also speak of the real when we lack such faith in a constructed concept. At these moments, critique comes to the fore: the desire is not momentary conceptual closure but the reflexive critique of a concept, destabilizing it.

    Obviously, these two senses of the real strain against one another. The question is whether that strain is productive. Is a constructed concept open to productive critique, and is critique productively feeding back into construction or not? Consider twenty-first-century economics discourse, where, on the one hand, macroeconomic adherents of real business cycle theory cling to an old concept of the real economy, fixed in the postwar era, that exhibits little empirical curiosity about the world, whereas, on the other hand, adherents of the newfangled credibility revolution, critical of past practices in the discipline, yearn to conduct real-world experiments to set economics upon a more secure empirical foundation. When the two “reals” blindly pull against one another this much, that is when a concept may suffer from an intolerable indeterminacy.

    In addition to this form of critique, however, another constructive step is necessary to theorize the economy. Simply critiquing neoclassical economics, from whatever standpoint, is unlikely to yield an adequate account of the economy. The challenge is to positively construct a theory of the real economy that is open to dialogue with a variety of disciplines—economics and beyond—and is therefore at once sufficiently determinate but also itself open to productive critique and thus genuine conceptual life.

    Neither of the two moments of the real, the critical or the constructive, necessarily precedes the other. There is no ultimate destination, no final determination of what the economy is, really, for all times and places. There is no whole, only partial visions of parts. The real economy must be a plural subject, for there is always process.

    The real economy

    How might we arrive at an account of the real economy? The first thing to say is that the most consistent perspective on the economy is taken from accounting logics—not equilibrium, optimization, market exchange, labor, the commodity form, class struggle, or some other candidate. In a late-in-life 1986 interview with John Hicks, whose views had changed considerably since authoring the foundational microeconomics treatise Value and Capital, the questioner noted: “When I read your work, I see balance sheets (Hicks: “Yes”), income statements (“Yes”) and you see their ordering (“Yes”). You also seem to think like an accountant about capital.” Hicks responded, “Yes exactly.”

    Whatever the economy is, its reality does not exist independent of our ability to account for it. The origins of accounting practices go as far back as the invention of writing in the ancient Near East, c. 3400 BCE, when the earliest states first sought to account, chiefly, for coerced labor inputs and the storage and distribution of grains that resulted from them—through money, the unit of account and medium of exchange, but not yet the store of value (as it would become under capitalism). Rather than the ancient Greek oikos or early modern political economy, typical starting points for the genealogical origins of the economy, my theorization of the economy is rooted in this ancient accounting origin.

    From accounting, I draw nearly all the central conceptual building blocks of the economy, including capital, wealth, income, profit, stock, and flow, as well as the related distinctions between stock / flow and capital / income. I critically trace the intellectual and business histories of these terms. But I also stand back to positively theorize from their basis a concept of the real economy.

    My argument is that the storage of wealth over time is the act that first creates the economy. This argument hinges upon the accounting logic that was central to Keynes’s famous attack on Say’s Law, the notion that “supply creates its own demand,” or, as Say himself put it, “products are paid for by products.” Because wealth is stored—whether in the form of granaries, cultivated lands, money, or in some other form—purchasing power travels across time, leaking out of any present period. There is therefore always a demand constraint in the economy, though it too may manifest in different ways. The character of this demand constraint, and its possible resolution through the domestication of external sources of demand, initiating new flows of income through investment, defines the economy in tandem with the (also) shifting character of the storage of stocks of wealth.

    The real economy is, then, a bounded spatiotemporal order of demand-constrained production, determined by logical accounting relationships among the different stocks of wealth in the economy that generate different flows of income over time. All economies are defined by singular stocks, around which they gravitate. In a capitalist economy specifically, that stock is capital. The theory of capitalism proposed herein is rooted in an economic theory of capital—one stock of wealth among, historically speaking, many. This outline conception of the real economy leaves much to be sketched in, from a variety of different perspectives and disciplines. But it is unabashedly rooted in an economics of wealth. The intent is to restore a refined conception of wealth, or anything produced that is conventionally valued that may be stored over time, as the central subject of the economy.

    The Real Economy by Jonathan Levy is available from Princeton University Press.

  3. Concentration Spiral

    Comments Off on Concentration Spiral

    The concentration of banking power has transformed Colombia’s economy and society. With financial capital controlled by a small number of major banks, competition is heavily restricted, and small and medium-sized enterprises have limited access to credit. Although large banks claim to promote financial inclusion, in practice, their dominance in the sector contributes to rising inequality.

    In 2024, Colombian President Gustavo Petro’s government proposed redirecting a portion of the savings deposited by citizens toward productive projects, such as the expansion of agricultural land, housing, and renewable energies, with the explicit object of financing decarbonization, among other developmental goals. Petro’s proposal would have obliged private banks to redirect investments toward sectors of national interest with a preferential rate. The proposal was unsurprisingly controversial, exposing the conflict between the banks’ interest in making savings profitable and the government’s intention to offer low-cost loans. The rift demonstrated the banks’ significant influence over policies at the national level. 

    The disagreement over Petro’s proposal stemmed from opposing conceptions of the purpose of savings—should private savings be used for corporate or public interests? The largest banks won the dispute after significant lobbying to prevent the passage of the law. This debate, however, must be understood within the context of the transformation of the banking system over the last two decades. From 2000 to 2022, thirteen conglomerates rose to dominate the banking sector, shaping profitability, access to credit, and pricing. These conglomerates not only hold banks, but they also own the country’s largest pension funds and insurance companies. Colombia is unique in the region in this regard: given the entangled interests of the country’s largest private financial institutions, economic and political power is concentrated in the hands of a few major economic players.

    1990s reforms

    The roots of banking concentration in Colombia can be found in the financial reforms of the 1990s. Prior to these reforms, the Colombian banking system was characterized by highly specialized lending entities, such as savings and housing corporations (CAVs), state-controlled interest rates, and credit rationing, with the Central Bank serving as the basic source of domestic resources.1 Several factors drove banking liberalization. The economic crisis of the 1980s exposed the limitations of Colombia’s import substitution model. Then, following a wave of debt crises that affected many countries in the region, a consensus toward neoliberal policies was formed, motivating the need to attract foreign investment and guarantee the country’s integration into international markets. 

    These reforms—supported by the IMF and the World Bank’s structural adjustment programs—included trade liberalization, financial market deregulation, and the privatization of state-owned companies. Economic liberalization was said to improve the stability of the banking system, aligning market incentives and the channeling of financial assets to productive investment.2

    Previously specialized banking services consolidated into a new multi-bank scheme, reducing operating costs for the sector and allowing all commercial banks to carry out the same types of operations. The reforms also facilitated the entry of foreign banks, eliminated restrictions on international banking operations, deregulated interest rates, and reversed the state’s majority participation in the banking system.

    Prior to these reforms, state participation in the banking sector in some Latin American countries reached more than 50 percent. Around the region, however, reforms in Bolivia, Costa Rica, Ecuador, Guatemala, and Honduras were not accompanied by regulatory changes. Countries that did not strengthen banking supervision suffered successive crises.3

    In the Colombian case, during the 1990s, low interest rates set by the savings and housing corporations (CAVs) triggered a credit boom that facilitated massive mortgage lending without adequate creditworthiness checks. The subsequent drop in real estate prices and the sudden increase in interest rates increased portfolio risk, as many borrowers began to face difficulties in meeting their loan obligations. This situation, worsened by lax regulation and the external shock of the Asian financial crisis, culminated in a financial crisis at the end of the decade. The collapse was compounded by structural problems of high public debt and political instability. The crisis caused a deep contraction in various sectors of the economy, with imports declining; real estate prices, credit, and construction falling; and unemployment and poverty indicators skyrocketing.4 Although the government implemented financial rescue measures and fiscal adjustments to try to stabilize the situation, the recovery was slow and painful.

    In response to the crisis, the Constitutional Court required all CAVs to become commercial banks by 2002. Many banks also carried out mergers and acquisitions, as smaller banks suffered in the crisis, reducing the overall number of entities in the banking sector.5 Several banks and commercial finance companies were taken over by regulators and liquidated. Banco Andino, Banco del Pacífico, and Banco Selfín all closed operations during this period, and the Compañía de Financiamiento Comercial FES was liquidated. Also noteworthy is the fact that other entities later to be privatized, such as Banco Interbanco and Compañía de Financiamiento Comercial Aliadas, were initially administered by the Financial Institutions Guarantee Fund (Fondo de Garantías de Instituciones Financieras, Fogafin). Cooperative banks were consolidated into a new bank, Megabanco, which was sold to Grupo Aval in 2006 in order to settle debts.

    Bank concentration

    Thus, the twenty-first century began with a trend toward banking concentration in Colombia, contrary to the objectives of the liberalization reforms. The effects of this concentration can be seen in both the collection of deposits and the distribution of credit. As can be seen in Figures 1 and 2, of the twenty-nine registered banks, seven capture 80 percent of the country’s deposits and loans.6

    Figure 1

    Figure 2

    The high concentration of the credit business comes with a consequent concentration in profits, as measured by the main indicator of market concentration, the Herfindahl-Hirschman Index (HHI). The HHI reached 2,796 in 2022, a 55 percent increase from 2009, when it stood at 1,800.7 In 2022, the three largest banks took 75 percent of the sector’s profits, with the single largest bank—Bancolombia—taking almost half of profits, as shown in Figure 3.8

    Figure 3
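    The HHI cited above is computed as the sum of squared market-share percentages, with values above 2,500 conventionally treated as highly concentrated. As a rough illustration (the shares below are hypothetical, not Colombian data):

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares in percent."""
    return sum(s ** 2 for s in shares_pct)

# A hypothetical market: one firm with 45 percent, two with 15 percent,
# and five with 5 percent each (shares sum to 100 percent).
shares = [45, 15, 15, 5, 5, 5, 5, 5]
print(hhi(shares))  # 2600 — above the 2,500 "highly concentrated" threshold
```

A market split evenly among many small firms yields a low index, while a single monopolist yields the maximum of 10,000; squaring the shares is what makes the index so sensitive to the largest players.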


    The four largest financial conglomerates control between 75 and 80 percent of all assets, liabilities, and equity in the sector. Banking concentration, both in terms of loans and profits, has deepened. In the region, Colombia has a relatively high level of banking concentration—higher than that of countries such as Ecuador and Mexico, but lower than that of Peru.9 Figure 4 shows the market share of the three largest banks in various countries. The concentration in Colombia is particularly high, and within Latin America it is only surpassed by Nicaragua, Venezuela, and the Dominican Republic.10

    Figure 4

    Concentration of capital

    The major beneficiaries of banking concentration are thirteen financial conglomerates representing the country’s strongest economic groups.11 Each conglomerate is associated with a holding company—an investment vehicle that exercises the first level of control over the entities that make up the conglomerate. According to the Colombian Financial Superintendence, seven of Colombia’s thirteen conglomerates are linked to thirty-three financial entities.12 However, in our research on these conglomerates’ partnerships, we found links to 226 companies in multiple countries. 

    Financial conglomerates were established in Colombia after the enactment of Law 45 of 1990, which pursued economic liberalization measures, as well as Laws 510 and 546 of 1999, which responded to the financial crisis. These laws encouraged mergers and acquisitions, with the objective of converting financial companies into banking companies. Through these measures, Colombia could maintain competitive financial institutions in the face of globalized markets.

    As conglomerates have grown over time, their ownership networks have become more complex, with a presence in countries including Colombia, Chile, the Bahamas, Panama, and the Cayman Islands. These conglomerates also own companies in other sectors in Colombia. Grupo Sura, which owns Bancolombia—the largest bank in the country—is co-owner of the Pension Fund Administrator (AFP) Protección. The AVAL Group, which controls the Popular, Occidente, Bogotá, and AV Villas Banks, also owns AFP Porvenir.

    Porvenir is the pension fund with the largest share in pension savings—46.3 percent—followed by Protección—with 35.2 percent. These two AFPs are among the top five pension administrators in Colombia. Together, they account for close to 90 percent of AFP affiliates in Colombia, giving them significant influence over the pension market.

    Grupo Bolivar, owner of Banco Davivienda, the second largest bank in the country, also owns one of the most important insurance companies, Seguros Bolivar. The integration of financial and insurance institutions has a multifaceted impact on the real economy, as the decisions made by these entities can influence the flow of capital to different economic sectors. The interconnection between pension funds, insurance companies, and banks is amplified in the event of a crisis. A problem in one can quickly spread to the rest of the system. Conglomerates hold influence over financial regulation itself, as regulators rely heavily on information provided by them. Regulations are designed to favor their stability and thus profitability, limiting competition and hindering the entry of new players into the financial market.

    Between 2000 and 2022, 87 percent of the assets managed by the top ten banks corresponded to obligations with external agents (liabilities), while banks only held 13 percent as equity. In other words, banks generated almost all of their profits using resources held on behalf of third parties. In contrast, shareholders’ contributions represented only 0.6 percent of assets on average, and their weight decreased over time: in 2000, shareholders’ contributions were equivalent to 2.5 percent of assets, while in 2022 they were only 0.3 percent. The return on their resources averaged 262 percent. Shareholders recovered all their invested capital and were able to obtain additional returns in just one year, a feat that companies in other sectors of the economy do not manage to achieve over several years.
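    The mechanism behind these triple-digit returns can be sketched with simple balance-sheet arithmetic: when shareholders' contributions are a tiny fraction of total assets, even a modest profit rate on assets translates into an enormous return on contributed capital. The figures below are illustrative assumptions, not the article's data:

```python
# Hypothetical balance sheet mirroring the proportions discussed above.
assets = 100.0
liabilities = assets * 0.87      # funded by depositors and other third parties
contributions = assets * 0.006   # shareholders' paid-in capital: 0.6% of assets

# Assume a modest 1.6% annual profit on total assets (illustrative).
profit = assets * 0.016

# Return on shareholders' contributions is amplified by the thin capital base.
return_on_contributions = profit / contributions
print(f"{return_on_contributions:.0%}")  # 267% — the order of magnitude reported
```

The point of the sketch is that the headline return says less about operating performance than about how little of the banks' funding comes from shareholders themselves.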

    Figure 5

    The profits from the banking sector are enormous, especially in the Colombian context. One year of bank profits would be enough to cover the 2024 budget of the Ministry of Justice and is three times the budget of the Ministry of Environment. The three wealthiest Colombians come from the banking sector: David Vélez (owner of Nubank), Jaime Gilinski (owner of GNB Sudameris bank) and Luis Carlos Sarmiento Angulo (owner of the financial conglomerate Grupo Aval).13

    Financial inclusion?

    The dominance of these conglomerates in the financial market also has direct implications for competition and efficiency of banking services. The Colombian banking sector has very high intermediation margins—the difference between the interest banks receive on loans and the interest paid to savers. Figure 6 indicates that the largest banks have the lowest rates for remunerating savings while charging very high rates for loans.

    Figure 6

    Between 2000 and 2022, the ten largest banks earned interest-based income of $155 billion constant pesos, resulting in an average annual net margin of around $18 billion. Despite this high margin, the Superfinanciera imposes millions in fines on different banks for charging over the allowed rates. In 2024, Bancolombia faced such fines, while Banco Itaú was sanctioned in 2023. The Superfinanciera similarly fined Banco Popular, part of Grupo Aval, for charging fees on failed transactions at electronic teller machines for almost a year.

    Bank profits come not only from intermediation, but also from charges for financial services. Transaction fees in Colombia, from ATM withdrawals, transfers to other accounts, or the use of a plastic card, are historically among the highest in the region.14 Consider the following: a person who earns a minimum wage and wants to deposit it in a savings account each month will have to pay a handling fee for his debit card ($11,600 COP or $2.7 USD), a fee for making withdrawals ($2,287 COP or $0.53 USD), a fee for transferring money to other accounts ($811 COP or $0.2 USD), and yet another fee for using services from his account ($864 COP or $0.2 USD), among other costs ($33,238 COP or $7.8 USD).15 In total, this person will spend 4 percent of his or her monthly income on financial costs. While the money is held in his savings account, the bank will earn an interest margin of 170 percent and pay just 0.5 percent on the savings.16
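    The roughly 4 percent figure can be reproduced with quick arithmetic. The sketch below assumes Colombia's 2024 monthly minimum wage of about $1,300,000 COP as the income benchmark, an assumption not stated in the article:

```python
# Monthly fees cited in the article, in Colombian pesos (COP).
fees_cop = {
    "debit card handling": 11_600,
    "withdrawals": 2_287,
    "transfers to other accounts": 811,
    "account services": 864,
    "other costs": 33_238,
}

# Assumption: 2024 monthly minimum wage of roughly 1,300,000 COP.
MIN_WAGE_COP = 1_300_000

total_fees = sum(fees_cop.values())          # 48,800 COP per month
share_of_income = total_fees / MIN_WAGE_COP
print(f"{share_of_income:.1%}")              # 3.8% — close to the ~4% cited
```

Under that benchmark the listed fees sum to 48,800 COP a month, consistent with the article's round figure of 4 percent of income.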

    The high concentration of the sector gives banks the power to decide how much liquidity is offered and to whom. Despite making up 99.5 percent of businesses and generating 65 percent of employment, small and medium-sized enterprises in Colombia only manage to stay afloat for an average of two to three years. Of all the credit in the banking sector, only 7 percent is allocated to the smallest companies. Interest rates and costs associated with banking services are high, which can deter medium-sized companies and entrepreneurs from seeking formal financing. Finally, greater concentration erects barriers to entry for new competitors in the market, which in turn intensifies concentration. Banks have recently faced penalties amounting to $1.5 billion COP for violating protections on competition, but fines have still failed to deter such behavior.17

    Concentration of power

    The concentration of financial profits has political implications. For one, many traditional political parties in Colombia benefit from the financing provided by the conglomerates. One investigation showed that Luis Carlos Sarmiento Angulo, who until 2024 served as the head of Grupo Aval’s board of directors, contributed more than $11 billion COP to traditional political parties in 2022, almost doubling his contributions from 2018. While this financing is legal, it necessarily grants undue influence to the sector. Grupo Aval similarly financed more than 66 percent of President Iván Duque’s 2018 campaign. Duque’s presidency was marked by increased austerity, militarism, and deepening ties with the country’s economic elites.  

    With banking and pension companies controlled by the same owner, there is also a risk that conglomerates can use savers’ earnings to finance projects with a political tilt. This occurred only recently, as AFPs invested in road projects, such as Ruta del Sol (Odebrecht), promoted by politicians close to the former president, Juan Manuel Santos.18 Such influence even extends to clear cases of corruption. In 2012, Corficolombiana, the banking subsidiary of the Aval Group, bribed Colombian government officials to keep the extension of a 527-kilometer road construction project for itself. The judicial process indicates that at least $28 million was paid with the knowledge and approval of the former president of the company. As a result, in 2023 the US Securities and Exchange Commission (SEC) sanctioned Grupo Aval and its banking subsidiary Corficolombiana.19

    Pronounced banking concentration in Colombia has resulted in a severe concentration of capital that goes far beyond setting high interest rates: select economic groups end up determining the shape of the real economy by allocating credit support and administering insurance and pensions. The banks’ power reaches a systemic level in citizens’ savings. In this context, supervisory institutions, such as the Financial Superintendency, have been key in monitoring the technical indicators of banking entities and financial conglomerates. These institutions work to avoid systemic risks that could affect economic stability, directing their main efforts toward data disclosure and transparency. These efforts, while significant, are only the beginning of a broader and necessary process of rethinking how to reverse the hyper-concentration of the financial sector. 

    The larger question remains: how can the state counteract existing liquidity and credit allocation models so that citizens’ savings are directed toward collective public interests rather than private profit? In August 2024, despite failing to pass a law that would force banks to offer cheaper financing for development projects, Petro reached an agreement with the major banks to secure $13.6 billion in lending for residential, industrial, manufacturing, agriculture, and tourism-related projects. While increased financing marks a small step forward, these loans will continue to be beset by high interest rates, and powerful financial conglomerates will remain at the helm of the Colombian economy. 


  4. Europe Enters Its Metal Era

    Comments Off on Europe Enters Its Metal Era

    This month, Trump entered into formal talks with Russia—without Kyiv’s consent—to settle the war in Ukraine, largely on Putin’s terms. And on Friday, speaking with Zelensky in the Oval Office, he and his vice president, JD Vance, performed as imperial overlords dressing down their upstart vassal. For Europeans, the once unthinkable prospect of an American departure from Europe has become a palpable possibility. The question on their minds: Can the European Union survive without the transatlantic military alliance famously created seventy-five years ago to, in the words of its founding Secretary General, “keep the Russians out, the Americans in, and the Germans down”?

    The apparent collapse of Atlanticism as the ruling ideology of European elites has been swift. Germany’s Chancellor-in-waiting Friedrich Merz, a committed Atlanticist—he came to politics from stints at BlackRock and corporate law firms, and promoted the EU-US Transatlantic Trade and Investment Partnership—was initially willing to appease the US after Trump’s win in November, offering to buy more American LNG and weapons. But JD Vance’s speech in Munich this month marked a turning point, with Merz denouncing it as an act of electoral interference that was “no less drastic, dramatic, and ultimately no less brazen, than the intervention that we have seen from Moscow.” Following the CDU’s convincing win at the polls on Monday, Merz cast the US as an enemy of the European project. He urged the Union to build up its own defense capabilities, warning that it was now “five minutes to midnight for Europe.”

    Europe is now fully and self-consciously security-constrained. This reality collides with two other foundational constraints. Europe’s self-imposed fiscal limits are infamous (we have argued they leave the continent poorer, weaker, and less green); and its energy constraints exploded into view following the Russian invasion of Ukraine, as gas prices soared and spread throughout the economy. These constraints have unleashed political reverberations on the continent: the far-right waves hitting its elections are a powerful reminder of such fundamental vulnerabilities, which undermine the prospects for a green pathway out of the continent’s stagnation. Not for nothing did Emmanuel Macron declare in April of last year that “Europe is mortal: it can die.” The Green Deal is out; Europe has entered its metal era.

    European elites are now convinced that Europe’s best bet going forward is to get over its self-imposed fiscal binds and borrow to invest in the sinews of power in the twenty-first century: defense, clean energy, and technology. Standing in the way of this is the EU’s unique structure, which delegates economic policy to Brussels and security to individual nation-states. Fulfilling the vision of autonomy put forward by this emerging elite consensus would require resolving the question of making versus buying green and military goods, and overcoming the strictures of Europe’s fiscal constitution—and, if it is to further the peace agenda of the European project, a wholly new politics at the national, continental, and international levels.

    Realities of the Russia-Ukraine War

    It’s been three years of brutal attritional warfare in Ukraine following Russia’s full-scale invasion in 2022. Hundreds of thousands have been killed or wounded; millions of Ukrainians have been made refugees, and nearly 800,000 Russians have fled the country. Despite the many offensives and counteroffensives of the last year, which prolonged the war at terrible human cost, there has been little territorial gain by either side, and fundamental realities have not budged. Russia will not withdraw militarily from the 20 percent of Ukraine’s territory it now occupies; Ukrainians will not abandon their desire to integrate economically and socially with the West; and Putin will not accept any settlement that allows Ukrainian integration into NATO, while also demanding stringent quantitative and qualitative caps on Ukraine’s future military (as in the 2022 Istanbul talks).

    Russia has illegally annexed five regions of Ukraine: Crimea in 2014; Donetsk, Kherson, Luhansk, and Zaporizhzhia in 2022. (Source: ISW).

    Economically, the US and Europe have not been able to weaken Russia’s war-fighting capability with sanctions. Derided by Western commentators as a “gas station with an army,” Russia proved much stronger than the West anticipated. The desire of large developing countries and China to continue doing business with Russia (well stocked with arms, hydrocarbons, food, and fertilizers), combined with an expansionary fiscal policy, has meant that Russia’s economy grew faster than those of the G7 countries in both 2023 and 2024, per the IMF. As sanctions expert Nicholas Mulder astutely summarized two years ago, “the limited efficacy of the sanctions is due to Russia’s policy response, its size, its commercial position and the importance of nonaligned countries in the world economy.”

    Russia earned €242 billion from global exports of oil and gas in the third year of its war, with total revenues “now inching closer to the trillion figure.” (Source: CREA).

    The West’s security guarantee

    At the UN Security Council this week, Washington voted with Moscow and Beijing on a resolution to end the war, but made no mention of Russian aggression or Ukraine’s territorial integrity. Will there be a neutrality-in-exchange-for-occupied-Ukraine deal? That option requires an ironclad Western defense guarantee in the event of another Russian attack on Ukraine. It is here that the hard choices—for Ukrainians, Europeans and Americans—begin.

    For Ukraine, with several eastern cities destroyed and its people demoralized, military victory in the form of driving Russian troops out of Ukraine is no longer possible. What will emerge instead is a negotiated settlement that ensures Ukraine holds on to the remaining 80 percent of its land while securing the strongest possible Western security guarantees and economic reconstruction package. Indeed, Zelensky has agreed to the principle of the peace-for-land formula, but only if Ukraine is given NATO protection.

    The Trump administration has laid down a hard line to its European NATO Allies. First, US troops will not be a part of future peacekeeping missions in Ukraine. Second, NATO’s Article 5 protections of mutual defense will not apply to any European forces sent to postwar Ukraine. Third, Ukraine should not expect to become a member of NATO. Fourth, Ukraine must trade peace-for-land and give up its claims on Russian-occupied territory.

    All of this raises uncomfortable questions for domestic politics in Europe. Who exactly will do the peacekeeping in Ukraine now that the US has withdrawn military support? Who will pay to reconstruct Ukraine—and how? Will the fiscal rules constraining European spending need to be broken in order to finance defense? Or will the EU opt for higher taxes and a shrunken welfare state to foot the bill?

    Behind closed doors at February’s emergency meeting in Paris, the US put these direct questions to European governments. (Source: Reuters).

    Boastful talk by the continent’s leaders at the Munich Security Conference of a large European army and “strategic autonomy” from the US has given way to sober recognition of Europe’s constraints. The gap between words and deeds has become increasingly evident over the last three years: European NATO countries have sent ammunition and money, but not troops, and have never risked escalatory measures like no-fly zones. With the end of the war now in sight, European military assistance in the form of large peacekeeping forces is also unlikely, with Macron earlier this month referring to the idea as “far-fetched,” adding that “we have to do things that are appropriate, realistic, well thought, measured, and negotiated.” Poland’s Prime Minister Donald Tusk, the leader of Europe’s largest army and Ukraine’s largest military backer behind the UK and US, said unequivocally that “Poland will not send troops to Ukraine.” Only British Prime Minister Keir Starmer has so far expressed any willingness to send in troops—a commitment that British military chiefs say cannot be met.

    Sovereign Europe?

    With European security now set to be left in European hands, how will its politics change? Given the prospect of increased military spending, there is anxiety in some quarters about a reduction in social spending. Will the outcome be military Keynesianism for business and the state, and austerity for the people? At the same time, with militaries collectively a large source of global emissions, any increase in defense spending will likely be met by pushback on climate grounds. Conservative voices, meanwhile, argue openly for the other side of the tradeoff.

    It may not have to be a zero-sum game. A report from the Kiel Institute for the World Economy this month argued against the widely assumed “guns or butter” trade-off, emphasizing that “more money, labor, and raw materials channeled to military uses have not traditionally come entirely at the expense of private consumption.” The classic argument in favor of military Keynesianism is that money invested in necessary domestic industries with stable demand will create positive spillover effects: a stimulus chain of productivity growth, jobs, and increased tax revenues, which are in turn funneled toward social spending. Between 1950 and 1970, for example, European countries regularly invested 5 percent of GDP in defense while continuing to increase their social spending.

    However, as researchers from the French Institut Delors have argued, this requires the right kind of defense investment: more production, and not just spending more but “spending better” and “spending together.” In 2020, just 11 percent of aggregate national defense budgets within the EU was allocated to joint projects, falling well short of the EU’s own 35 percent target for coordinated spending. The latest official European data show that in 2023, more than 80 percent of funding went to procurement, mostly of off-the-shelf products from non-European companies, limiting the sorts of positive externalities identified in the Kiel report.

    The emergent consensus is seeking an escape hatch from stagnation and fragmentation—and sees sovereignty and new growth potential in the military Keynesian solution to the guns or butter tradeoff.

    But the conditions of possibility for a sovereign Europe lie in defense and energy. And on these grounds, European sovereignty has been heavily constrained in the hydrocarbon age. Lacking enough oil and gas to power itself, for seventy-five years the continent has been squeezed by the three centers of hydrocarbon power: the US, Russia, and the Gulf Kingdoms. By interrupting the flow of energy—the US oil embargo in 1956, the Arab oil embargo in 1973, and the Russian gas embargo in 2022—these powers have inflicted pain on European citizens and treasuries, forcing change in European foreign policy and security arrangements. Europe has been trapped in various relationships of dependency and vulnerability across these three energy powers.

    Today, this means that in order to be free from authoritarian blackmail—whether Putin’s or Trump’s—Europe must go green. Despite all the rhetoric of the European Green Deal as transformative in addressing energy, growth, social welfare, and nature, it has translated into neither substantial industrial policy nor foreign policy. Even in 2024, the EU was spending more money on Russian oil and gas (€22 billion) than on financial aid to Ukraine (€19 billion). Appeasing Trump by buying more seaborne LNG and locking in more fossil-fuel infrastructure will only make the problem worse. Investment in green energy is the only path toward independence or strategic autonomy.

    A new growth politics

    The latent logic of this emergent elite consensus is to make—where initiatives like the Green Deal failed—a bid for a new European growth model, this time based on rearmament. If the combined goals of defense and green industry are in fact pursued in earnest by European governments, a number of tensions and structural dilemmas will have to be confronted. 

    The energy transition and defense both require industrial policy. In each, to be effective means choosing what exactly to make and what exactly to buy. European countries are buying Lockheed Martin F-35A Lightning II fighters, AH-64 Apache helicopters, Patriot air defense systems, and Abrams tanks; but they are also buying a variety of more local defense kit for the continent’s rearmament. Poland, the only European NATO country already spending 5 percent of GDP on defense, has put in purchase orders for Eurofighter Typhoons, manufactured by a consortium of Airbus, BAE Systems, and Leonardo, and has bought ammunition and aircraft from Sweden’s Saab.

    The lucrative hedge fund trade since the Trump election—sell US arms manufacturers, and buy European arms manufacturers—gives an indication of the expected boom in the European military-industrial complex. (Source: ENAAT)

    The European Commission estimates that individual countries will need to spend an additional €500 billion on defense over the next decade. The EU’s combined defense spending, however, is roughly half that, reaching €270 billion in 2023, and just 20 percent of it was collaborative procurement from EU-based suppliers. Experts find that there has never been a “genuine pan-European defense procurement market, but rather … [27 markets] fenced off with regulatory barriers to entry aimed at protecting national defence industries.” The barriers are political: member countries don’t want their national champions to be dictated to from Brussels.

    How to pay for it? The EU has pulled a rabbit out of a hat: it is set to activate the fiscal “escape clause,” allowing member states to exceed the common debt and deficit limits for national defense spending. Changes are also likely to the European Investment Bank’s mandate, which Brussels hopes will tempt private capital to pile more into continental defense.

    Sharply raising investments in local defense industry will prompt a “make versus buy” debate.  Traditionally, market liberals and Transatlanticists associated with Germany, the Baltics, the UK, and Poland have positioned themselves on the “buy” side of the argument, opting to import defense kit from the US. More than three-quarters of the defense purchases by EU member states since the Russian invasion of Ukraine were from outside the EU, with almost two-thirds of it from the US.

    Opposed to the Transatlanticists are the sovereigntists, or strategic autonomists, associated with France, who want to build out a larger European military-industrial complex. France has a large armaments sector, making it the world’s third-largest arms exporter after the US and Russia. It is more dirigiste than Germany and its fellow liberals—its 2023 military-budget law allows for the requisitioning of domestic industry—but it faces growing rivalry from Turkey, Israel, and South Korea.

    A parallel dilemma exists in clean-energy technology, where certain industries such as solar PV manufacturing have been all but lost in Europe. For these industries, as Mario Draghi’s report last year into European competitiveness outlines, there is a case for increasing manufacturing in Europe, especially where there are strategic, security, or technological benefits to doing so.

    Two geographies: European investments in military and clean tech. (Sources: Delàs Center; Bruegel)

    With hundreds of billions of euros in defence and green-industrial investment now on the horizon, can a new politics of growth emerge in Europe? More state-led investment in defence and green sectors is likely to lead to faster economic growth. Whether or not it leads to broader social transfers will depend on the political negotiations that unfold in individual countries—but they are at least partly constrained by fiscal opportunities and questions of coordination at the EU level.

    Aligning climate action with military armament risks fomenting nativism in an era of increasing climate-driven migration and far-right ascendency. Yet “energy security” is an intrinsic component of Europe’s new “security-security” goals. If “security” is going to dominate the next phase of the European project, how will climate and social agendas be fought for?

    The energy transition and the transatlantic fracture can be the basis for new alliances built on something other than fossil fuels, as Pierre Charbonnier has argued. As for countries such as Brazil and India, and African countries rich in tropical rainforests and minerals: “What do we offer these countries so they side with us? Europe should build its foreign policy on a coordinated response to the climate question.”

    Europe’s fiscal straitjacket and lack of productive investment have harmed its security, its climate goals, and its capacity for international collaboration. Brussels is now making a concerted albeit disjointed effort to turn the continent into a sovereign pole and to manage the dilemmas posed by green and defense industrial policy. It has so far shown little appetite for wide-ranging reforms of the Bretton Woods institutions that cripple climate and development spending in the global south.

    The virtual destruction of most US soft-power initiatives doesn’t mechanically increase the desirability or solidity of European programs like its minerals MOUs with Namibia, the DRC, Rwanda, and Zambia, just as a sophisticated agenda of defense and clean-energy industrial policy and funding won’t automatically address the disaffection of Europeans voting for far-right parties. Money doesn’t buy sovereignty; nation-building, as the heroism shown by ordinary Ukrainians should remind Europeans, is about the ultimate political question of what is worth fighting and dying for.

  5. The Canal Zone

    Comments Off on The Canal Zone

    “China is operating the Panama Canal. We didn’t give it to China, we gave it to Panama, and we’re taking it back,” announced US President Donald Trump at his second inaugural address. Since his return to office, Trump’s repeated vows to “take back” the canal have emerged as part of a broader push to return to the heyday of American expansionism, throughout which Panama served as a key neocolonial outpost. Today, 5 percent of all global commerce crosses through the Panama Canal, a crucial node in a network of 144 international routes and 1,700 ports worldwide. More than 40 percent of US container traffic relies on the waterway, which was controlled and operated by the US from the completion of its construction in 1914 until December 31, 1999, after which the 1977 Torrijos-Carter Treaty handing over control of the waterway to Panamanian authorities went into effect. 

    Trump’s threats have intensified ongoing political instability in the Central American isthmus. Following months of mass protests sparked by corruption scandals and growing inequality, Panama’s latest electoral cycle culminated in May 2024 with the victory of José Raúl Mulino, who ran for president after his running mate, former president Ricardo Martinelli, was convicted of money laundering. During Martinelli’s tenure, from 2009 to 2014, GDP grew at an average annual rate of 8 percent—one of the highest rates in recent history, largely attributed to the construction boom related to the canal’s expansion projects. The memory of this growth seemed to prevail in the minds of Panamanian voters, who supported Martinelli’s bid for reelection despite his legal disputes. Nonetheless, the path forward remains unclear. While Mulino promised “Más chenchén en tu bolsillo,” or “More cha-ching in your pocket,” he outlined few concrete policies to achieve this goal. Eight months into his term, Mulino faces a series of immense challenges. His government is tasked with revitalizing one of the economies most battered by the pandemic, beset by high unemployment and a fiscal deficit, along with the latest dilemma of US pressure over the canal and the implementation of Trump’s mass deportation agenda.

    Upon handing over control of the canal to Panamanians at the start of this century, US officials warned that the waterway’s economic performance would decline. On the contrary, Panama has experienced historic rates of economic growth, while suffering increased regional and income inequality as a result. Today, 86 percent of the country’s GDP is concentrated in the three provinces surrounding the canal—Panama, Panamá Oeste, and Colón. The canal’s economic significance over the past two decades has solidified the transit-oriented consensus of Panama’s developmental model, often referred to by scholars as transitismo. This consensus describes an enclave economy oriented to the needs of global commerce and premised on a transgression of territorial sovereignty, given the construction and management of the canal by the US.1 Transitismo has long extended beyond the canal itself to include assets such as the port system, hubs for air transportation, the flag of convenience registry, low-tax investment regimes, and banking services. In spite of national control, this model has continued to produce a dual economic structure, in which productive and high-earning activities located around the canal’s zone of influence coexist with an agrarian countryside characterized by low productivity and subpar labor conditions.

    Panama lies at the center of the American continent, on the narrowest stretch of land between the Atlantic and Pacific oceans, and as such the country has long held a “historic vocation” of facilitating the circulation of commodities and natural resources for the needs of capital accumulation. The struggle for national control of transit-oriented assets has shaped Panama’s development throughout the twentieth and twenty-first centuries, especially given the shifting positions of domestic political and economic elites toward the US. Through the transitista model, we can understand the Panamanian isthmus as part of a world-system that since the colonial era has subjugated the colonies to the interests of imperial metropolises.2 Today, as Panama faces yet another challenge to its sovereignty, the gaps in this developmental model—vulnerable to the ebbs and flows of global trade and geopolitical dispute—are laid bare.

    Forms of enclave      

    In contrast to other peripheries of the Spanish Empire, Panama was historically dominated by a merchant class dedicated to the commodities trade. In the nineteenth century, the construction of the transcontinental railway became essential to this economy. With the discovery of gold deposits in the San Francisco Bay of California in 1849, the railway was meant to connect the Caribbean coast to the Pacific. Construction lasted until 1855, which also marked the beginning of the circulation of US dollars in Panama, since the dollar was the currency used by passengers and railroad workers. The sudden influx of dollars incentivized the development of a service industry catering to railway users, normalizing the use of US currency within Panama. The railway also served an instrumental purpose for the US, which sought to block investment in the Americas by imperial rivals such as England while developing its own manufacturing industries. Thus, from the outset, the transitista model has been intimately bound up with global imperial rivalries.

    Over time, Panama’s merchant faction formed alliances with urban landlords and rural landowners. In 1903, this merchant elite broke with Colombian landowners over the issue of granting the US a land concession in the center of the isthmus to construct a waterway. Panama seceded from Colombia, becoming its own nation-state and formalizing an unequal relationship with the US. The Hay-Bunau-Varilla Treaty of 1903 granted the Panama Canal to the US in perpetuity, turning the country into a de facto neocolonial protectorate. For a single payment of $10 million, the United States controlled some thirty kilometers in the heart of the country, excluding Panama City and Colón. Half of this initial payment was channeled through JP Morgan into investments in New York real estate. Any additional territory deemed necessary to the maintenance and security of the canal was also ceded. Panama renounced its ability to tax the Canal Zone, subsidiary companies, and their employees. In exchange, the US government agreed to pay a lifelong rent of $250,000 starting in 1913.3

    The monetary convention of 1904 further established the US dollar as legal tender. The circulation of the dollar reduced transaction costs for the US, and as Colombia faced a crisis of hyperinflation from 1899 to 1902, Panama was disincentivized from forming its own central bank, in an effort to avoid the risk of an inflationary spiral.4 The new republic’s 1904 constitution sanctioned American interventionism, granting the US the right to meddle in local affairs and ensure constitutional order whenever necessary. In parallel, Panama’s independence secured the dominant classes’ access to the country’s customs authority, formerly controlled by the Colombian government. Thus, the nation’s foundational framework, as well as the network of businesses and services that the US established around the Canal Zone, restricted the ability of the dominant classes to capitalize on the interoceanic waterway, real estate, and the management of public finances.

    The construction of the canal was completed in 1914, furthering dependence on the world market and US interests. Tax revenues jumped from an average of $6 million between 1918 and 1920 to an average of $9 million between 1926 and 1932. About 9 percent of the increase came from rents around the canal and investments in the New York real estate market.5 Nonetheless, alliances between the dominant classes—often based on clientelist regimes of strongman leaders—splintered throughout the twentieth century. Periods of economic crisis, such as the Great Depression, produced conflicts that ultimately diversified the dominant classes’ strategies for capitalizing on the interoceanic shipping route, including renegotiating the protectionist measures around the Canal Zone, such as trade between territorial entities, the taxation of the Canal Zone and its related industries, and the harvesting of lands owned by US multinationals. Meanwhile, periods of economic growth, such as the onset of the Second World War, intensified anti-imperial demands against US occupation. In 1945, the Canal Zone accounted for 21 percent of national GDP—the war boom had the effect of consolidating productive domestic capital, especially in the construction sector.6 With the end of US military operations related to the war and the changing sizes of commercial vessels, canal transit dropped significantly, leading to a postwar recession. The US was unwilling to expand the canal to accommodate new vessel dimensions. To replace the lost income of urban landowners no longer leasing to the military, the Colón Free Trade Zone was inaugurated in 1948. By 1950, as part of the greater Canal Zone, it represented 8 percent of national GDP.

    The Remón-Eisenhower Treaty of 1955 renegotiated canal rents, increasing import substitution and encouraging the development of an internal market. These measures were successful: by 1960, internal production in Panama accounted for 86.9 percent of overall food consumption, total investment had grown by 13.6 percent, and 25 percent of GDP went to capital formation. From 1960 to 1965, investment in machinery doubled. By the end of the decade, spending was growing by more than 20 percent annually.7 The alliance between the dominant classes was, above all, maintained by US control of the Canal Zone, which justified military intervention throughout the century and allowed for the diversification of elite strategies to capitalize on the canal.

    Financial coups

    Transitismo grounded the disputes over control of the Canal Zone and the US-Panama relationship, with significant implications for Panamanian politics. When Arnulfo Arias, a Nazi sympathizer who had served as president twice before, won the presidential election of 1968, the National Guard led a coup d’état that reorganized the transit-oriented consensus. Arias was replaced eleven days into his administration by the military leader Omar Torrijos Herrera, who oversaw the reordering of the developmental model toward a more active incorporation of global finance into the enclave structure. The Torrijos regime not only wrote and implemented a new constitution—one which remains in place to this day—but also oversaw the establishment of the International Banking Center in 1970 and the successful negotiation of the Torrijos-Carter Treaties of 1977.

    Earlier amendments to the US Bank Holding Company Act allowed for the expansion of US banking services to international markets, in an effort to compete with Euromarkets that had emerged after the Second World War. Following the recommendations of economist Arnold C. Harberger, also known as a founding father of the Chicago Boys, Torrijos moved to establish the International Banking Center in response to these shifts, enabling the circulation of money through the export of banking services. The establishment of the Center reinforced Panama’s dependency on the world market, and US bank holding companies became lenders of last resort, guaranteeing liquidity for the domestic market and replacing issuers.8 The economy grew immediately after the establishment of the Center, but to a limited extent. GDP rose at an annual rate of 6.5 percent up until 1973, but fell to 2 percent in 1974 with the onset of the oil crisis. Economic deterioration was uneven: while manufacturing, construction, and commercial sectors saw declines in their annual growth rates, banking and financial sectors grew considerably. In response to the global recession, the Torrijos regime launched the 1976–1980 National Development Plan, which actively pursued the incorporation of the banking and financial sectors into the transitista model of development. The changes were dramatic: in 1960, Panama had five banks; by 1984, the country had 122. This outcome marked a compromise between imperial interests and the dominant merchant classes, a consensus which also backed the Torrijos regime. At the regional level, the creation of the International Banking Center would prove essential to the transnationalization of Latin American economies at the start of the twenty-first century.9

    Importantly, the Torrijos-Carter Treaties stipulated the handover of the canal and its operations to Panamanian authorities on December 31, 1999. With these treaties, national sovereignty over the entire territory, including the Canal Zone, was finally recognized. The agreements included the gradual replacement of the Panama Canal Commission—a US corporation—with the Panama Canal Authority, an autonomous legal entity. The lands and infrastructure that belonged to the Canal Zone, such as the railway and the port system, were also returned, except for those deemed strategic for the canal’s military defense; some were handed over with restrictions, as in the case of the Smithsonian Museum and its affiliates. Panama acquiesced to military cooperation and defense coordination with the US to guarantee the neutrality and safety of canal operations in wartime and peacetime.

    Torrijos’s sudden death in an airplane crash in 1981 prompted the rise of military strongman Manuel Antonio Noriega, who ruled as a de facto dictator from 1983 to 1989. Noriega, initially a key US ally, aided anti-Sandinista forces in Nicaragua and pushed forward economic liberalization measures, exposing the Panamanian economy to even greater market vulnerabilities. But by 1988, Noriega’s deals with other nations led the US to reverse its position, as the Northern power sought to depose the dictator amid the looming prospect of the canal handover. That year, the United States accused the Noriega regime of drug trafficking and imposed stringent sanctions on the Panamanian economy, freezing assets held in the National Bank of Panama, canceling import quotas for Panamanian products, suspending payments from the Panama Canal Commission, and prohibiting US citizens and companies from conducting business with the Panamanian government.10 The resultant liquidity crisis generated the largest contraction that the Panamanian economy had ever experienced up to that point. GDP plummeted by 13.5 percent in 1988 and the unemployment rate grew to 16.3 percent.11

    On December 20, 1989 the US launched “Operation Just Cause,” a military invasion of Panama City. The US justified the attack through the Treaty Concerning the Permanent Neutrality and Operation of the Panama Canal, signed as part of the Torrijos-Carter Treaties of 1977. The military dropped a bomb on the capital every two minutes for fourteen hours, leaving at least 3,000 dead, 6,000 injured, and thousands of unacknowledged and missing victims. This brutal end to Noriega’s regime marked the beginning of a new chapter in the transitista developmental model: the interoceanic shipping route would soon be under Panamanian control.12

    Uneven growth

    In accordance with the 1977 Treaties, Panama gained full control of the canal at the start of 2000, a watershed moment that marked the incorporation of the transitista development model’s most crucial asset into Panama’s national economic structure. In the twentieth century, the transit-oriented consensus was shaped by struggles between the dominant classes and the US, who negotiated how the canal could contribute to projects of national development. After the canal handover, the primary concern became one of macroeconomic management. Panamanian control of the canal and the consolidation of other transitista assets—the port system, the air transportation system, the flags of convenience registry, and low-taxation investment regimes—laid the foundation for Panama’s high economic growth in the twenty-first century. The canal and its operations could be enlisted directly to serve national development, immediately fueling public and private construction for the canal’s expansion. Following the turn of the century, Panama was home to one of the most dynamic and attractive economies in the region (Figure 1).

    But economic growth did not alter the domestic market’s dependency on the world market: during the construction boom, trade between China and the US accounted for 80 percent of transit through the canal. Even with national control of the transitista developmental model, domestic stability depended on the tax revenue collected from the canal’s zone of influence.13

    Figure 1

    At the sectoral level, the activities that propelled economic growth were closely related to canal activities—construction, financial intermediation, commerce (especially in the Colon Free Trade Zone), transportation, storage, and communications performed much better than the agrarian sector, manufacturing, and natural resource exploitation (Figure 2). While the industrial composition and evolution of the national economy yielded profits for one segment of the population, it deepened structural inequalities overall. The sectors that saw the lowest growth were those that employed the majority of the country’s workers. 

    Figure 2

    The transitista development model also exacerbated territorial inequalities. According to recent studies by CEPAL, Panama suffers from the highest territorial disparity in GDP per capita in Latin America. While provinces like Panamá and Colón boast per-capita income levels similar to those of Spain and Portugal, the rest of the country, including Darién and Bocas del Toro, has per-capita income levels equivalent to those of sub-Saharan Africa.

    Figure 3

    In other Latin American countries, increases in production have led to convergences in income, but in Panama the territorial gap in per-capita income has tended to grow. This is a result of the nation’s dual economic structure,14 which hinders the development of provinces outside of the canal transit zone15 and bears important implications for the country’s political currents. The banking sector and special economic zones operate in enclaves clustered around the canal, isolated from the laboring population. An ecosystem of low taxes limits the resources available to the state to support more equitable development in the countryside. Panama has continued to suffer from high poverty, deficient public services, a poor education system, grave institutional inefficiencies, a lack of government transparency, and labor precariousness. While investment regimes that include multiple tax benefits for private capital have employed an array of highly qualified professionals, they have not benefited the overall population. Panama’s transit-oriented consensus may have sustained the growth of recent decades, but it has also reproduced this dual reality.16

    Prior to 2009, the Panameñista Party and the Democratic Revolutionary Party had alternated in power for almost twenty years after the US invasion. Although macroeconomic indicators were positive, the dividends of growth were not enough to address the country’s structural problems (Figure 4). The widening growth gap thus posed a political opportunity for Ricardo Martinelli, founder of the Democratic Change party. Martinelli rode Panama’s discontent with the “traditional” political elite, winning the 2009 election with more than 60 percent of the vote.

    Figure 4

    During Martinelli’s five-year term, the economy surged at a rate unprecedented in the country’s history, earning Panama praise from multilateral organizations and international media. GDP grew at an annual rate of 8 percent, the highest ever observed for a five-year term in Panama. Likewise, the unemployment rate fell to historic lows of 4.1 percent. But for all the positive macroeconomic indicators, the country’s dual economic structure remained firmly in place, tied to the persisting transitista development model.

    Martinelli’s administration inherited favorable public finances: tax revenues ran a surplus from 2006 to 2008, which is exceptional considering the global recession. The debt-to-GDP ratio in 2009 stood at 40 percent—the lowest of any political transition in the country—allowing for a vast and rapid expansion of public spending. The Martinelli administration embarked on a pro-cyclical fiscal policy: as GDP grew, so did government spending, to the point that fiscal accounts began to run deficits. To cover the deficit, public debt increased until it reached $18.23 billion, or 37 percent of GDP. A large portion of the government’s infrastructure investments were executed as “turnkey” projects, paid for only after construction began. This allowed the government to transfer debt to the future, lowering pressure on state finances during incumbent presidential mandates.17

    With the commodities boom boosting trade between China, Latin America, and the US, the construction sector in Panama became a key driver of the economy, growing 155 percent from 2009 to 2014, for an annual growth rate of 21 percent, three times the overall economic growth rate. Beginning in 2006, the expansion of the Panama Canal unleashed a construction boom in public and private infrastructure projects, including the Panama City subway, a bridge over the Atlantic entrance to the canal, a healthcare center, and the second and third phases of the Coastal Beltway in Panama City, along with multiple highways. In the private sector, apartment buildings, skyscrapers, shopping malls, and offices increasingly dotted the city. The construction sector’s contribution to GDP rose from 9.7 percent in 2009 to 17 percent in 2014, making up almost a third of the economic growth over the five-year term.18

    The rest of the population felt significant windfall gains: the unemployment and informality rates dropped to historic lows; access to credit and financing swelled;19 and social protection programs were implemented via money transfers.20 A strengthened labor market and increased social protections raised the incomes of the most vulnerable, but the national income distribution nonetheless remained deeply unequal, with a Gini coefficient above 50 points. While the end of the Martinelli years saw the deterioration of the government’s fiscal accounts, corruption charges against the administration, and the intensification of the country’s unequal economic structures, the period is largely remembered for bringing prosperity to a wide swath of the population.

    Transitismo at its limit 

    The bust following the boom revealed the limits of the transitista model. By 2014, global macroeconomic conditions had declined, a result of the drop in the price of natural resources and the end of the commodities supercycle, as well as the trade war between China and the United States. GDP growth fell from 5.1 percent to 3.3 percent in 2019, the lowest recorded since the 2009 recession.

    The conclusion of major infrastructural projects meant that Panama’s investment growth rate fell from 14 percent in 2009 to 1.2 percent in 2019, while the average growth rate of the construction industry fell from double-digits between 2011 and 2015 to an average of 0.7 percent in 2019—the lowest recorded since 2005. Meanwhile, private consumption and exports from the Colón Free Trade Zone also diminished, in part because of the economic crisis in Venezuela, tariff conflicts with Colombia, and the drop in the price of natural resources. This led to a decline in trade, the foundation of the transitista model. 

    Figure 5

    The start of operations at the expanded canal in 2016, however, did lead to an increase in transportation, storage, and communications. The Panama Canal Authority’s contributions to the National Treasury grew considerably by 2017, climbing to $3.1 billion, 63 percent more than the previous year. Nonetheless, public finances continued to worsen as public spending grew faster than national income. The national taxation system remained unreformed, leading the government to adjust its fiscal rules in order to grow the deficit. Unemployment climbed from 5.1 percent in 2015 to 7.1 percent in 2019, while informality rose from 40 to 45 percent. The Panama Papers scandal aggravated the situation, and foreign investment immediately dropped to 5.6 percent of GDP—the lowest level since 2009.21 The contraction reinforced the extreme concentration of economic activity along the canal.

    The bust was predictable in a country that had failed to propose a new development plan in the aftermath of a construction boom. The Covid-19 pandemic only exacerbated the crisis. The abrupt suspension of trade delivered an unprecedented shock: Panama experienced one of the sharpest GDP contractions in the world, at 17.7 percent. Construction, commerce, and transport accounted for 77 percent of this contraction. Since the country’s economic sectors were organized around the canal and its operations, there were no productive alternatives in the countryside to help weather the paralysis imposed by the pandemic. The unemployment rate surged to 18.5 percent, while informal labor climbed to 52.8 percent. The rate of participation in the formal workforce fell from 66.5 to 63 percent—an eleven-year setback. Salaried employment in the private sector contracted to such an extent that, even by 2023, the country had not fully recovered.22 A drop in tax revenue hurt public finances, and debt as a percentage of GDP rose to 56.4 percent in 2023, amounting to about $47 billion.

    Though economic activity began to resume in 2022, external shocks like the Russia-Ukraine war increased oil prices and the overall cost of living. These pressures set off a wave of mass protests in the country, responding to the state of economic crisis, corruption, and inequality. Droughts brought further disarray—the arrival of El Niño affected the reservoirs that provide water for canal operations, forcing the canal to restrict the transit of commercial vessels until 2024. The decline in shipping traffic across the canal also meant fewer contributions to the National Treasury. According to INEC data, toll income had contracted by 11.7 percent as of June 2024. The last straw was a 2023 contract approved by Laurentino Cortizo, Panama’s leader throughout the pandemic, which greenlit operations at Minera Panamá, an open-pit copper mine spanning 13,000 hectares located 180 kilometers north of the capital. Protests against the signing—which occurred without a public bidding process—formed the backdrop of the 2024 election. Martinelli, and then Mulino, triumphed on an anti-incumbency platform that promised a new way forward.

    A nation adrift  

    Despite this massive popular mobilization, Panama’s political situation today represents continuity more than rupture. Mulino’s early economic plans have reinforced a reliance on transitismo, and a viable alternative to this model has yet to materialize. Mulino’s only concrete campaign proposal focused on the construction of the Panama-Chiriquí Train, which would offer an unsustainable economic boost, similar to the construction boom of the previous Martinelli government.

    The Mulino administration must now also contend with the Trump presidency. Two weeks ago, amid Trump’s “oath” to recover control of the canal, newly sworn-in US Secretary of State Marco Rubio met with Mulino in Panama City, a visit that concluded with an agreement for Panama to host third-country nationals deported from the US. Thus far, nearly 300 deportees, many from Asian countries, have been flown to the country, with about a third transferred to a military jungle camp in the Darién region and the rest held in hotels in the capital. That Mulino is willing to comply with Trump’s severe and violent deportation agenda speaks to the importance of maintaining Panama’s economic and security relationship with the US.

    Publicly, Mulino has responded to Trump’s threats by guaranteeing neutrality in canal operations and returning to the provisions of the Torrijos-Carter Treaties of 1977. But behind-the-scenes negotiations may soon reveal other power plays, with Panama potentially caving in to pressure from the North on issues such as port concessions, infrastructural investments, tariffs on US ships, and a reinvigorated US military presence on the waterway. Already, a growing segment of Panamanians online has expressed support for handing back control of the canal to the US, a sentiment that undoubtedly stems from the rigid economic inequality produced by the transitista model. For many in the country, the question remains: Does the canal truly contribute to the greater good of the country?

    Nonetheless, the transit-oriented consensus continues to dictate national economic policy. Trump’s statements pose significant threats to the extant path, even as US hegemony in Latin America faces rising challenges from China. Amid popular grievances concerning poor labor conditions and unaffordable living costs, state capacity for social protection still relies on an exhausted development model, marked by a dual economic structure and the cyclical deterioration of macroeconomic and social indicators. Even if Mulino pursues some form of geopolitical realignment to restrain the influence of the US, his attempts to revitalize the transitista model appear to be yet another bet on dependency, leaving adrift demands for meaningful change.

    This article was translated from Spanish to English by Maria Cristina Hall.

  6. How to DOGE USAID

    Comments Off on How to DOGE USAID

    We often hear that the new Trump administration inaugurates the age of technofeudalism. Just look at Elon Musk, pontificating about so-called “Department of Government Efficiency” (DOGE) democracy from the Oval Office while undemocratically occupying the US Treasury payment system. But is the administration simply using bullying as a mode of power, as Adam Tooze recently diagnosed it, destroying institutions without measure or plan?

    The smashing of the US Agency for International Development (USAID) makes a good case for both. For American liberals, USAID stands as a beacon of progressive values—a vehicle for delivering essential public investments in sexual and reproductive rights, climate resilience, or the Sustainable Development Goals (SDG) in the global South. The many voices defending it from the DOGE onslaught described it as a force for good, even as, or precisely because, it quietly advances US soft power objectives. This view is widely shared. As Bernie Sanders put it, “Elon Musk, the richest guy in the world, is going after USAID, which feeds the poorest people in the world.”

    But this was not smashing without a plan. We learned in early February from Bloomberg that the Trump Administration planned to shift some USAID funding to the US International Development Finance Corp (DFC). Created during the first Trump administration, the DFC deploys public money to leverage or mobilize private investment overseas, in partnerships with institutional investors. As Bloomberg summarizes it: “The new approach would see reduced humanitarian assistance and a greater role for private equity groups, hedge funds, and other investors in projecting economic might as the US competes for influence and strategic projects overseas with China.”

    At first glance, this looks like the privatization of foreign aid: a shift from public provision to market solutions. But there’s a larger story at play. The new Trump Administration is turbo-charging the lesser known but increasingly dominant agenda within USAID: “mobilizing private capital.” This approach, which I have termed the Wall Street Consensus, is a decade-old international development paradigm that has been promoted by the World Bank, the United Nations, and rich countries’ development agencies, including the USAID under the Biden Administration.

    The Consensus reimagines the role of the state as a facilitator of private investment through various subsidies to investors that are often described as “derisking.” Development is no longer a public good to be directly financed by states, but a market opportunity to be unlocked through the alchemy of public-private partnerships (PPP) into “investible,” privately-owned projects.

    USAID and the Wall Street Consensus

    In its pre-Trumpian formulation, the Wall Street Consensus championed a vision of what its adherents termed “investible development.” State and development aid organizations, including multilateral development banks, would escort the trillions managed by private finance into SDG asset classes, be those education, energy, health, or other infrastructure. The state derisks by using public resources—official aid or local fiscal revenues—to improve the risk-return profile of those assets, often described as bankable projects. In the energy sector, it commits to purchase private power at predetermined prices and/or predetermined quantities that guarantee a reliable cash flow for investors. A similar process takes place in investible health. In Turkey, for example, the Ministry of Health ended up spending around 20 percent of its budget on guaranteed payments for PPP hospitals co-owned by the French asset manager Meridiam, with an average cost per bed twice that of a public hospital. In water, public money for private water typically reduces universal access by imposing user fees on poor populations.

    “Leveraging” or “mobilizing” private investment is code for granting public subsidies to privately owned social infrastructure. This involves a new distributional politics that shifts public resources to private investors. The for-profit logic at the core of this development paradigm curtails universal access to social infrastructure and is fertile ground for human rights violations. For example, Bloomberg reports that development aid-funded private hospitals in Africa and Asia have detained patients and denied care on a systematic basis.

    USAID has also promoted “derisking private investment,” a fact celebrated by its former leader Samantha Power when she said late last year: “USAID over the last four years has increased private sector contributions to our development work by 40 percent. For every dollar of taxpayer resources that we have spent, we have brought in $6 in private sector investment.”

    When the Obama administration made national security a primary focus for USAID programs, it launched Power Africa, a USAID energy initiative ostensibly aimed at improving energy access across the continent. On the now-defunct USAID website, Power Africa presented several of its success stories, including the 450 MW Azura-Edo power plant in Nigeria and the Lake Turkana Wind Project and Kipeto Wind Project in Kenya, two of the largest renewable projects on the continent. If these projects, to some extent, represent important steps in closing the critical energy gaps blighting the continent, they nonetheless illustrate a familiar Wall Street agenda: USAID created opportunities for private financiers while imposing significant fiscal burdens on African governments and stunting opportunities for autonomous industrial upgrading. They have effectively worked as an “extractive belt,” channeling scarce fiscal resources of global South countries to global North investors.

    Nigeria’s Azura-Edo natural gas power plant is perhaps the most striking example of USAID-supported extractivism through derisking. The first privately financed power project in Nigeria, Azura-Edo was described by the World Bank as “an example of how we have attracted private sector investment in the power sector.” To this end, the Bank, alongside official development institutions from the US (DFC), Germany, France, Sweden, and the Netherlands, organized and financially derisked bank lending to the project. But the fiscal derisking terms that Azura—now majority owned by the US private equity fund General Atlantic—extracted from Nigeria have been the subject of ongoing controversy.

    The Nigerian state, via its state-owned Nigeria Bulk Electricity Trading, signed a $30 million-a-month take-or-pay agreement. Since Azura’s installed capacity could not be easily absorbed by the dilapidated Nigerian energy grid infrastructure, the Nigerian state ended up paying for energy in excess of what it could actually use. In a cat-and-mouse game, Azura has been threatening to trigger the World Bank’s partial risk guarantee, a derisking instrument meant to discipline Nigeria into meeting its payment obligations to international investors. A triggered risk guarantee becomes a loan to Nigeria—the derisking state always pays—thus affecting its sovereign rating. As one Nigerian government official put it in 2024, “the agreement was a big mistake.” Lacking the resources to keep paying the exorbitant fees to Azura, he summarized that “this agreement is killing us.” Predictably, USAID celebrates the Azura deal quite simply as a “success.”
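
    The fiscal weight of such a take-or-pay clause can be seen with back-of-envelope arithmetic. The sketch below uses only the $30 million monthly figure cited above; the contract length and the share of power the grid can actually absorb are illustrative assumptions, not reported figures.

    ```python
    # Back-of-envelope sketch of a take-or-pay obligation.
    # Only the $30 million/month figure comes from the text above;
    # the horizon and utilization rate are illustrative assumptions.

    MONTHLY_PAYMENT_USD = 30e6   # take-or-pay obligation cited in the text
    CONTRACT_YEARS = 20          # assumed contract length (illustrative)
    UTILIZATION = 0.5            # assumed share of output the grid absorbs

    annual_cost = MONTHLY_PAYMENT_USD * 12
    lifetime_cost = annual_cost * CONTRACT_YEARS
    paid_for_unused_power = lifetime_cost * (1 - UTILIZATION)

    print(f"Annual obligation:       ${annual_cost / 1e6:,.0f} million")
    print(f"Lifetime obligation:     ${lifetime_cost / 1e9:,.1f} billion")
    print(f"Paid for unusable power: ${paid_for_unused_power / 1e9:,.1f} billion")
    ```

    Even under these cautious assumptions, the state would be paying hundreds of millions of dollars a year regardless of how much power it can actually take, which is what makes the guarantee such an effective disciplining device.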

    Another success story is the Lake Turkana Wind Power (LTWP) project. According to USAID, the agency “is creating an enabling environment for renewable power in Kenya by supporting a Grid Management program to help Kenya with grid management of intermittent renewables.” Private sector partners notably included Aldwych; the Standard Bank of South Africa, the African Development Bank, and Nedbank committed financing and insurance, alongside the US Treasury Department.

    The main equity owners included various Nordic public entities and Vestas, the Danish wind turbine manufacturer. These then sold their stakes to Anergy Turkana Investments, a state-owned South African asset manager, and BlackRock’s Climate Finance Fund. On the fiscal side, the Kenyan state entered a twenty-year power purchase agreement (PPA) that commits the state-owned Kenya Power and Lighting Company to purchasing the wind power generated. This fiscal derisking of demand was so generous to the LTWP owners that the World Bank withdrew its backing for the project; it stressed that the take-or-pay provision (as in the Azura case) would force the Kenyan state to pay for power it could not use. Even if the Kenyan grid could absorb it, the twenty-year contract locks Kenyan taxpayers into paying a Sh16/kWh price, now nearly three times higher than the Sh5.8/kWh market price. The Kipeto wind farm, now owned by the French asset manager Meridiam, is a similar take-or-pay arrangement, committing Kenya Power to compensate Meridiam in US dollars, thus relieving private investors of currency risk.
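
    A rough sketch makes the scale of the tariff gap concrete. The two prices come from the text; the annual output figure is an illustrative assumption, not a reported number.

    ```python
    # Sketch of the excess cost implied by the Sh16/kWh contract price
    # versus the Sh5.8/kWh market price cited in the text. The annual
    # generation figure is an illustrative assumption.

    CONTRACT_PRICE_KSH = 16.0    # per kWh, from the text
    MARKET_PRICE_KSH = 5.8       # per kWh, from the text
    ANNUAL_OUTPUT_KWH = 1.5e9    # assumed annual generation (illustrative)

    price_ratio = CONTRACT_PRICE_KSH / MARKET_PRICE_KSH
    excess_per_kwh = CONTRACT_PRICE_KSH - MARKET_PRICE_KSH
    annual_excess_ksh = excess_per_kwh * ANNUAL_OUTPUT_KWH

    print(f"Contract/market price ratio: {price_ratio:.1f}x")
    print(f"Annual excess cost: Sh{annual_excess_ksh / 1e9:.1f} billion")
    ```

    The ratio of roughly 2.8 is what the text describes as “nearly three times” the market price; over a twenty-year contract, even a modest assumed output compounds into tens of billions of shillings transferred from ratepayers to the project’s owners.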

    Figure 1: Lake Turkana Wind Project: ownership and fiscal derisking

    This derisking extractivism became so controversial that in late 2024, a Kenyan parliamentary committee asked the Ethics and Anti-Corruption Commission and the Directorate of Criminal Investigations to probe the role of state employees in the signing of the power purchase agreement between Kenya Power and LTWP. The Kenyan government ultimately imposed a moratorium on PPAs in the energy sector, but the implications of the agreement were not only fiscal. High energy costs undermine public efforts to strengthen the manufacturing capacity of the Kenyan economy and create new sources of political conflict between foreign-owned energy producers, the Kenyan state, and local manufacturers.

    USAID worked both as an instrument for humanitarian assistance and for extractivist derisking. In that domain, it placed emphasis on visible outcomes like infrastructure and investment while obscuring the long-term economic and social costs for those on the receiving end.

    DOGE and foreign aid

    The title of this piece is not my original turn of phrase. It hails from a blogpost by two Trumpist financiers: Joe Lonsdale, a Peter Thiel mentee, and Ben Black. The latter, the son of Apollo Global Management co-founder Leon Black, was nominated by the Trump administration to head the DFC in its repurposed role as a more aggressive instrument of US economic power. As Bloomberg reports, the DFC would become Trump’s sovereign wealth fund, with its overall funding cap raised to $120 billion from its current $60 billion investment cap—already larger than USAID’s $40 billion budget.

    A Harvard-trained lawyer, Black heads the private equity firm Fortinbras, unironically named after Shakespeare’s character whom Hamlet describes as possessing “divine ambition,” finding his mark of greatness in fighting for his family’s honor. The son of an asset stripper, he has now been tasked himself with stripping USAID of its commitments to humanitarian assistance—that’s what it means to DOGE.

    Somewhat ironically, the two financiers’ diagnosis of USAID echoes that of Bernie Sanders, albeit while denouncing the politics that apparently drive it. Under Biden, they argue, USAID became a “dependency program for foreign nations,” an “absurd mission drift” wasting taxpayer money on virtue-signaling projects like climate or gender equality, “pandering to the interest group-driven issue of the moment.” Alas, nothing on how USAID operations have been benefiting their tribe.

    Instead, they propose to reorganize foreign aid with the purpose of “securing access to critical resources, building strong market economies, and promoting pathways for private capital to invest … backed by DFC financing, American mining, shipping, and resource-dependent businesses could step in, bringing capital and expertise” to strategic geopolitical interests like Greenland.

    The DFC operations in 2023 offer a snapshot of its derisking activities. That year, it committed around $10 billion, $1.2 billion of which was earmarked for Ukraine, with no disclosed information on the specific programs. Its largest twenty investments are all over $100 million. The largest, totaling $747 million, was committed to Gabon’s debt-for-nature swap. At first glance, such projects seem a win-win scenario: indebted nations like Gabon receive debt relief in exchange for commitments to environmental conservation. The problem, however, is that these swaps outsource environmental policy to external actors—in this case the US Nature Conservancy—and create profit opportunities for financiers—Bank of America in New York arranged the issuance of blue bonds. All the while, they do little to address the root causes of debt accumulation, such as exploitative trade relationships or volatile global financial markets.

    Several of the DFC’s other large commitments illustrate its role at the intersection of US geopolitical priorities and US corporate interests. The DFC provided a $300 million guarantee to Goldman Sachs designed to underwrite potential derivative obligations arising from the company’s contract with PKN ORLEN, the Polish oil giant, as it sought to hedge its risks from imports of US liquefied natural gas. It committed $150 million to the private equity fund I Squared Climate Fund for infrastructure investments in India, Indonesia, the Philippines, Vietnam, Cambodia, El Salvador, Malaysia, Mexico, the Dominican Republic, Peru, and Brazil. In another derisking operation, the DFC committed $100 million to the private equity Global Access Fund, which privatizes water infrastructure via PPPs.

    If the second Trump administration is chaotic in presentation, parts of its agenda are nonetheless coherent. The new government will turbocharge the extractivist derisking of USAID via the DFC—“feeding into the woodchipper,” as Elon Musk put it, the aid part of the US development agenda. It is the Wall Street Consensus on steroids, run by private equity titans for private equity.

  7. Controlling Capital

    Comments Off on Controlling Capital

    Central banks are back in the spotlight. After more than three decades of low inflation in rich countries, the rise in prices observed between 2021 and 2023 forced academic discussions into the public sphere. Such debates are not restricted to technical economic issues but deal explicitly with the politics of central banking. In the recent case of the United States election, for instance, more than a few commentators pointed at inflation to explain Donald Trump’s victory. Some went further, arguing that inflation explains a general anti-incumbent bias in recent elections across the globe.

    Economies south of the Equator were also subjected to the inflation wave, but their particular situation exposed different challenges faced by their inflation-targeting central banks. If the debates over monetary policy in the North have revealed to some degree the narrowness of interest rate policy in managing inflationary tensions, the particular constraints of Southern countries expose the limits of conventional monetary policy even more starkly. Placed below the rich economies in the currency hierarchy that structures the international monetary system, peripheral countries face distinct challenges—namely, vulnerability to global financial cycles and the difficulty of asserting monetary autonomy.

    Many countries demonstrate this distinct position—think of recent controversies over monetary policy in Bolivia, Colombia, or Turkey, each driven in part by the drying up of global liquidity. Brazil, with its extremely volatile currency, is a clear case of such challenges, but it has been mired for two years in a public economic policy debate that largely fails to address the role of the global financial cycle. Instead, attention has been squarely focused on the level of the interest rate set by the central bank. To be sure, Brazil’s extraordinarily high interest rates do have profound implications, and contribute to the reproduction of the country’s extreme inequalities. But the challenges posed by inflation require one to go beyond this immediate issue, and to seriously consider the question of capital controls.

    Lula versus the central bank

When the center-left Luiz Inácio Lula da Silva started his third term as president in 2023, he was blocked from appointing the head of the Brazilian Central Bank and some of its directors by legislation approved during the far-right government of his predecessor, Jair Bolsonaro (2019–2022). Following international trends, Congress passed a law in 2021 establishing fixed terms for the central bank president and directors that do not align with the electoral cycle—the president of the central bank, for instance, now starts their term in office in the third year of the government that appointed them and remains in place for two years beyond it.

    The stated aim of the law was to prevent the politicization of monetary policy. But politicization could not be avoided through legislation. The president of the central bank appointed by Bolsonaro, Roberto Campos Neto, was not only loyal to him but was active in his campaign for reelection. (Among other things, Campos Neto allegedly created a polling aggregator to aid Bolsonaro’s campaign when he was already in charge of the central bank.) Less than three weeks into his third presidency, Lula started publicly blaming Campos Neto and high interest rates for blocking economic growth and job creation. Taking their cue, parts of the left and some critical economists declared Campos Neto the enemy. The latter, in turn, often resorted to monetary policy committee minutes to criticize government policies.1

There were abundant reasons for the complaints against the central bank, which arguably kept the interest rate at an excessively high level for an extended period despite inflation slowing since mid-2022. The complaints may have become even more justified after April 2024, when Campos Neto made a series of public remarks that were read by some as an attempt to sabotage the government, pushing inflation expectations upward.

From the government’s perspective, the exchange of blows with Campos Neto served the political purpose of pulling focus away from Lula’s decision not to break decisively with austerity—its choice of a “slow-motion lulismo.”2 It also fostered hopes that, once Lula was allowed to replace Campos Neto in 2025, monetary policy could shift to an expansionary stance, helping push the economy forward. But when 2025 arrived, the situation had changed, and the earlier hopes were all but shattered.

Lula’s appointee to head the central bank is Gabriel Galípolo, a young former banker who has also traveled in left-wing economics circles.3 After a brief period working at Lula’s Ministry of Finance, Galípolo was nominated to the board of the central bank in mid-2023. During the first seven monetary policy committee meetings he took part in, the board lowered the interest rate.4 But in a subsequent set of six meetings, Galípolo backed a pause in monetary policy easing (in the first two) and then its reversal (in the last four)—so far, rates have risen from 10.5 to 13.25 percent. In the most recent meeting, the first since Galípolo was made president, the rate was hiked one percentage point.

    Meanwhile, some economists on the left—acknowledging that Campos Neto’s exit would not bring with it lower interest rates—have shifted their target. In an open letter addressed to the National Monetary Council, nine influential critical economists argued that price rigidities and indexation prevalent in the Brazilian economy make the current inflation target (of 3 percent) “dysfunctional,” calling for a change of the target to 4 percent to “allow for a more balanced growth of the Brazilian economy.” Since then, others have joined the chorus.5

    The Brazilian inflation target remained 4.5 percent between 2005 and 2018 and, since 2019, has been lowered 0.25 percentage points each year—in keeping with the disinhibited neoliberalism that characterized Brazilian economic policy after the Workers’ Party (PT) was ousted from power in 2016. Lula himself broached the possibility of increasing the target in 2023, during his disagreements with Campos Neto, but the government has so far opted to keep it at 3 percent. 

    What happened? Why wasn’t Campos Neto’s exit sufficient to ease the central bank’s stance? Why has Galípolo backed the recent increases in interest rates, including the aggressive hikes of one percentage point decided upon during the last two committee meetings?

The answer lies in the trajectory of inflation itself. After peaking at 12.1 percent in April 2022 as part of the global pandemic inflation wave, inflation declined sharply to 3.2 percent by June 2023. Since then, however, it has risen again, closing 2024 at 4.8 percent—above the 4.5 percent ceiling set by the inflation-targeting regime, which admits a 1.5 percentage point deviation from the target. Unsurprisingly, conventional economists blame fiscal policy uncertainties for both the recent currency depreciation and the rise of inflation, and call for their preferred solution of austerity. Telling as it is of the underlying political disputes, this much-abused argument fails to explain the latest macroeconomic turmoil.
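    The band arithmetic at work here is simple and can be sketched in a few lines, using only the figures quoted in the text (target, tolerance, and the 2024 reading):

    ```python
    # Minimal sketch of Brazil's inflation-target band arithmetic,
    # using the figures quoted in the text (percent / percentage points).
    target = 3.0       # current inflation target
    tolerance = 1.5    # deviation admitted by the targeting regime
    ceiling = target + tolerance   # upper bound of the band
    floor = target - tolerance     # lower bound of the band

    inflation_2024 = 4.8  # year-end 2024 reading

    breached = inflation_2024 > ceiling
    print(f"band: [{floor}, {ceiling}] percent; 2024: {inflation_2024}; breached: {breached}")
    ```

    The same arithmetic explains why the open letter’s proposed 4 percent target would, with an unchanged tolerance, have placed the 2024 reading back inside the band.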

    Global financial cycles

    It is perhaps the curse of large countries to be inward-looking, registering all developments as domestically determined. This tendency has not escaped foreign observers: historian Perry Anderson, for instance, argued that Brazilian national culture is “uniquely self-contained.” Macroeconomic policy debates are not an exception to this rule. Sometimes honestly, other times in bad faith, they underestimate the role of the global financial cycles in exchange rate movements and, consequently, inflation rates.

The single most important determinant of changes in Brazil’s price level remains the exchange rate, as illustrated below. It affects not only the price of imports and domestically produced tradable goods but also some service prices, such as rents, which are usually readjusted based on an index that closely tracks currency movements. The exchange rate, in turn, moves in tandem with those of most peripheral countries—even if its oscillations tend to be stronger than average—reacting to global fluctuations in liquidity that are determined by United States monetary policy.6 To understand Brazilian inflation, one needs to examine global financial cycles.

Inflation targeting was established in Brazil in 1999, replacing the currency anchor as the primary stabilization tool amid the global shocks of the late 1990s. According to one periodization, the period from 1997 to 1999 was marked by a double bust: the coincidence of cyclical declines in capital flows and commodity prices. In the run-up to the double bust, the Brazilian economy had managed to overcome persistent hyperinflation in 1995 by pegging its currency to the US dollar. But despite its efforts to attract foreign capital to sustain the peg—the basic interest rate averaged 23.7 percent between 1996 and 1998—the volatility of capital flows that wreaked havoc in Mexico, East Asia, and Russia in those years eventually forced the Brazilian real to float.

Following the global trend, Brazil replaced its fixed exchange rate regime with inflation targeting and succeeded in preserving some degree of stability, but at a very high cost: unemployment and inequality increased, foreign debt levels soared, and the basic interest rate was set at 45 percent in 1999. Having declined from 9.6 to 1.7 percent between 1996 and 1998, inflation jumped back up to 8.9 percent in 1999, due to the large depreciation, and then averaged 6.8 percent over the following two years. Subsequently, uncertainties related to the Nasdaq crisis in the US, combined with domestic electoral speculation in 2001 and 2002, forced further devaluations of the real of 25 percent each year, in turn pushing inflation back up to 12.5 percent.

    As Lula entered office for the first time in January 2003, the double bust had transformed into a double boom. The reordering of the global economy with China’s integration into world production and trade circuits resulted in a decade of growth acceleration, increasing commodity prices, and a capital flow bonanza, which lasted until 2011. Gradually but steadily, the exchange rate appreciated from the peak of 3.89 reais per dollar in September 2002 to the trough of 1.56 in July 2011. Inflation, in turn, was kept within the target band between 2004 and 2014, averaging 5.4 percent annually until 2011—while the double boom lasted. And, finally, the basic interest rate was allowed to come down from 19.8 to 14.5 percent—comparing the averages for 1999–2002 and 2003–2011, respectively.

Politically, the government took advantage of this period to adopt policies that substantially reduced wage inequality and boosted domestic demand. At the same time, property income flowing to the top became increasingly concentrated, while the double boom stimulated rising household indebtedness. The trend was regional: accelerating growth with falling wage inequality was the common characteristic of the so-called Pink Tide countries. But the promising trajectory of inclusive growth had a less appealing counterpart, as the boom consolidated the region as an exporter of primary commodities, increased the external vulnerabilities of its economies, and empowered an extractivist and agrarian fraction of the ruling class that would eventually draw the Pink Tide to a close and weaken the countries’ young democratic institutions.

In 2011, as the double boom became once again a double bust, difficulties started to emerge. In Brazil, the exchange rate depreciated à la Hemingway—gradually and then suddenly—peaking at 4 reais per dollar in January 2016. Inflation followed suit: it averaged 6.1 percent between 2012 and 2014, despite great effort by the government to keep administered prices in check, then jumped to 10.7 percent in 2015. The central bank’s sharp contractionary turn, which raised the basic rate from 7.25 to 14.25 percent between April 2013 and July 2015, had a limited effect: the pressures on the price level and on the currency would only ease from 2016 onwards, when global conditions turned again and global liquidity started to recover.

2016 was also the year in which the PT was ousted from government by a parliamentary coup. The subsequent governments took a series of measures to dismantle redistributive mechanisms, including labor market and pension reforms and a constitutional freeze on government spending. The resulting combination of stagnation and a relatively stable exchange rate kept inflation at a low level, until the pandemic struck. When it did, capital outflows from the periphery were vertiginous, with the Brazilian exchange rate depreciating by around 30 percent in 2020 alone and inflation peaking at 12.1 percent.

    The point is not that domestic determinants do not matter for inflation dynamics in Brazil; the way that the global financial cycle impacts the economy is shaped both by its structural position within the world economy and by its domestic policy decisions. But peripheral economies are regularly overwhelmed by sudden movements of global financial flows, which force their governments to face stark trade-offs.

    When capital flows out, economic activity is usually driven down, while prices are driven up. In this circumstance, the government can raise the interest rate to manage inflation, via the impact of the interest rate on capital flows and the exchange rate, thereby feeding the contraction. The government’s other option is to let inflation squeeze purchasing power, hoping that exports stimulated by the depreciated currency will compensate for diminished domestic demand and preserve employment. The more open to capital flows an economy is, the more acute the trade-off. Richer economies at the center, with stronger currencies, face significantly milder dilemmas, as the pass-through from exchange rate movements to prices tends to be much smaller.

    The pendulum

    Why would economies open themselves to capital flows, then? Foreign capital tends to be represented as a necessary tool to jumpstart development, investing in activities beyond the reach of domestic agents and bringing with it new technology and advanced management practices. But these benefits tend to be elusive: even in the form of productive investment—that is, a multinational corporation opening a plant—capital inflows often risk creating enclaves, weakly connected to the rest of the economy, focused on repatriating profits out of the recipient country.

    The bulk of capital flows is not represented by productive investments, but by short-term portfolio flows seeking the largest gains on the shortest time horizon. These hot money flows, as they are aptly called, tend to burn receiving countries.

    Understandably, then, the regulation of capital flows has historically been a contested issue, following the double movement described by Karl Polanyi in The Great Transformation. When financiers push for financial liberalization, a countermovement reacts to protect society from the resulting volatility. In the era of the gold standard, examined by Polanyi, unregulated capital flows were one of the three drivers of the satanic mill that sowed the seeds for the world wars and the rise of fascism. Describing its consequences in the late 1920s, he wrote:

    “An almost unbroken sequence of currency crises linked the indigent Balkans with the affluent United States through the elastic band of an international credit system … ‘Flight of capital’ was a new thing. … And yet, its vital role in the overthrow of the liberal governments of France in 1925, and again in 1938, as well as in the development of a fascist movement in Germany in 1930, was patent.”

    Amid the debris of World War II, the pendulum swung. At the Bretton Woods conference, the planners of the postwar order agreed that governments had the right to control “all capital movements.” To one of its architects, John Maynard Keynes, the shift was clear: “what used to be heresy,” he said, “is now endorsed as orthodoxy.”7 The new vision lasted many decades, during which free capital flows were the exception, rather than the rule. Almost half a century later, in 1989, John Williamson—in his codification of the Washington Consensus—shied away from an open call for complete liberalization, focusing only on foreign direct investment. “Liberalization of foreign financial flows,” he wrote, “is not regarded as a high priority. In contrast, a restrictive attitude limiting the entry of foreign direct investment is regarded as foolish.”8

    But after decades of deepening global economic integration, with production increasingly fragmented in transnational supply chains welded together by haute finance, the forces pushing for liberalization were eventually ready to strike back. In 1997, Michel Camdessus, then Managing Director of the International Monetary Fund, proposed to amend the articles of agreement of the Fund to allow the institution to officially push for financial liberalization. The consequences of the double bust, in particular the East Asian crisis, stood in the way of the intended amendment, which was never approved.9 But the tide toward liberalization was already in motion. As Dani Rodrik recalled, despite the failure to amend its statutes, “the IMF continued to goad countries it dealt with to remove domestic impediments to international finance, and the United States pushed its partners in trade agreements to renounce capital controls.”10

Brazil, alongside its Latin American neighbors, heeded the call early on. In the 1990s, a social bloc defending neoliberal reforms started to coalesce, and the first laws to dismantle capital controls and liberalize the foreign exchange markets were enacted. This was the force that sustained the currency peg and slew the hyperinflation of the 1990s. Once Lula became president in 2003, liberal economists argued that price stabilization would only be consolidated if financial liberalization went all the way, making the real “fully convertible.” In opposition, economists on the left summoned theoretical and empirical arguments to claim that further liberalization would only push the Brazilian currency further down the global hierarchy, deepening external vulnerability. But their arguments did not prevail. Slowly but unambiguously, policy moved in the direction of deeper convertibility—the Polanyian pendulum was again in full swing.

After the global financial crisis of 2008, the first cracks in the consensus on financial liberalization started to show. In Brazil, Dilma Rousseff decided to shift economic policy to mitigate the negative impacts of the overvalued real on manufacturing production. The currency had hit bottom in July 2011, six months into her term in office. A series of measures were adopted to reduce currency volatility and manage capital flows, including selective taxation of—and the imposition of reserve requirements on—different foreign exchange operations, including currency derivatives. For a time, these measures were effective in increasing exchange rate stability. In 2012, Rousseff’s finance minister claimed that Brazil was prepared to face a global “monetary tsunami,” dramatically comparing the situation to the disaster that had struck Fukushima a year earlier. Politically, however, the government was not prepared—it moved to discipline finance without having forged a coalition capable of withstanding the reaction. As global liquidity dried up, pressure mounted to restore the neoliberal policy framework and adopt monetary and fiscal austerity. In 2013, capital controls were abandoned.

    Globally, however, the shift toward capital controls went ahead. In 2012, the IMF issued an “institutional view” on the topic, stating in its characteristically cautious language that “there is … no presumption that full liberalization is an appropriate goal for all countries at all times … In certain circumstances, capital flow management measures can be useful.” The following year, at the world’s central bankers’ rendezvous in Jackson Hole, economist Hélène Rey went further. Given that “gross capital inflows, leverage, credit growth, and asset prices dance largely to the same tune,” she claimed that the standard view, which maintained that countries had autonomy to set their monetary policies in the presence of financial liberalization, was an illusion: “Independent monetary policies are possible if and only if the capital account is managed.”11

A few years later, in 2016, IMF economists once again put capital controls in the spotlight, highlighting them alongside two other policies in a self-critical article titled “Neoliberalism: Oversold?” They claimed that “capital controls are a viable, and sometimes the only, option” available to countries facing “an unsustainable credit boom.” Finally, in 2020, reacting to the disastrous capital outflows from the global periphery as investors sought security in the first months of the pandemic, the IMF’s Independent Evaluation Office argued that the Fund’s “institutional view” on the topic should be revised, “allowing for … more long-lasting use of capital flow measures.”

    Brazil, however, kept swimming against the tide, still trapped in the previous consensus—due to the extraordinary role played by financial interests in its political economy. During Bolsonaro’s government, as part of its request to join the Organization for Economic Cooperation and Development (OECD), a law that further liberalized currency markets was enacted to comply with the organization’s rules. But with its extreme currency volatility, the Brazilian economy is particularly subject to global financial cycles. If the political strength of high finance over economic policy goes unchallenged, the country will remain atop the rankings of external vulnerability.

    Market discipline and democratic risk

    At stake is not only economic stability, but also the survival of democratic institutions. In the interwar period, examining Europe, Polanyi remarked: “Labour Parties were made to quit office ‘to save the currency’” or “in the name of sound monetary standards.”12 Almost a century later, the authoritarian threat has not been put to rest.

    During Lula’s first successful presidential campaign, in 2002, currency speculation and the threat of capital flight were so intense—the financial media both named and perpetuated a fear of the “Lula risk”—that he issued a letter committing to abide by the neoliberal policy framework. Taking stock of the episode, scholar Daniela Campello concluded: “There is no way to understand either the Lula presidency or its consequences to the Brazilian Left without reference to financial globalization and market discipline.” Halfway through his third term as president, such pressures remain.

Last November, the Ministry of Finance announced a fiscal plan that cut back some social transfers and slowed minimum wage increases in an attempt to convince the “markets” of its commitment to eliminating the primary fiscal deficit. It also included a proposal to extend income tax exemptions to parts of the middle classes, compensated by an increase in taxes on the rich. Economists from financial institutions took it personally, touting their resistance to progressive taxation under the pretense of fiscal responsibility.

Within two days, the real lost almost 4 percent of its value against the dollar, and three weeks later the depreciation had reached 6 percent, effectively pushing inflation above the target band. Interviewed by the Financial Times, one portfolio manager pleaded guilty: “The market is very concerned regarding Brazil’s fiscal accounts and especially the government’s response to it. The only way the market has to call the attention of the government is through the exchange rate.”

Despite his recent fall in popularity, Lula’s government may opt to stick to its current strategy, taking great pains to avoid stirring conflict and hoping a divided opposition keeps open the path to reelection. To its loyal base in the poorest sections of Brazilian society, the PT offers to mitigate the harshest edges of Brazil’s inherited austerity, along with some reforms to make the tax system more progressive. The increased budget that Lula negotiated before taking office has so far kept growth at a relatively high rate compared with the quasi-stagnation of the past decade, but its lagged effects are bound to diminish in the final two years of his term. With conservative parties recently strengthened by municipal elections and the far right emboldened by Trump’s electoral success, the risks involved in staying the course are high. And with the new US government aggressively attacking trade globalization, more likely than not the next two years will be characterized by global financial turbulence, with its usual impact on peripheral currencies.

    An alternative path would entail shifting the country’s economy in a direction that can reduce its external vulnerability and, at the same time, its reliance on the agrarian elites—increasingly loyal to the far-right—in order to forge a social bloc that stands a better chance of holding the authoritarian threat at bay. Catching up with the global trend of reversing financial liberalization would be particularly useful in this regard. It could not only attenuate the impact of the global financial cycle on the Brazilian economy but also weaken the financial capitalists’ blackmailing tool over government policy.

    For this purpose, the experience of managing capital flows in 2011 and 2012 may be particularly useful, suggesting a series of potentially effective instruments—from reserve requirements on different currency operations (especially derivatives) to selective taxation targeting short-term flows—all of them under the control of the government.13 As financial institutions are prone to continuously devise new instruments and operations to escape the constraints of regulation, targeting would need to be regularly revised in light of detailed assessments of market dynamics. But the rationale should be clear from the outset: focusing on hot money and carry-trade operations in order to avoid impacting longer-term flows and placing excessive pressure on the balance of payments.

Any step in that direction would certainly create animosity with financial institutions, which stand to lose the profits they accrue from short-term financial speculation. However, as the latest currency turmoil indicates, a strong contingent among the financial elites places itself in opposition anyway, determined to push the government to commit to deepening austerity. Challenging them is unavoidable if Lula is to even partially fulfill his electoral promises or avert the far right’s return to power. By focusing on capital controls, the government could empower itself to weather the challenges posed by global financial cycles and enable long-term strategies to overcome the productive bases of external vulnerability—a strategy that would in this case be better protected against global turbulence and domestic financial opposition.

  8. Oil in the Imperial Periphery

    Comments Off on Oil in the Imperial Periphery

The majority of the nearly two hundred sovereign states that exist today were born through decolonization following the end of the Second World War. With colonial metropoles fearing the emergence of unstable and unviable states, smaller territories were often folded into larger entities through mergers of various protectorates and colonial territories. Nonetheless, several small colonies—Qatar, Bahrain, Kuwait, and Brunei, to name a few—managed to become independent on their own, rejecting amalgamation. It was the colonial politics of oil that led to the creation of many of these “unlikely” states, whose contemporary politics continue to be shaped by resource wealth. What made these states sovereign was not the internal administrative boundaries of larger states, culture, racial supremacy, or military power, but rather historical contingencies conditioned by the management of oil in the colonies.

    Colonial conquest often sought out natural resources: the Spanish and the Portuguese expanded into the “New World” in their quest for gold and silver; Britain’s expansion into the Middle East was overwhelmingly driven by its Navy’s shift from coal to oil; and after the First World War, Britain, Australia, and New Zealand clung to Nauru for decades to exploit its phosphate reserves. My recently published book, Fueling Sovereignty, demonstrates that natural resources, especially oil, also figured prominently in the process of decolonization and state formation. Unexpected oil discovery in imperial peripheries enabled local rulers to use their financial independence to resist mergers with larger formerly colonized territories.

The case of Brunei, a tiny state in Southeast Asia, reveals how the politics of oil production informed the boundaries and structure of postcolonial states. Oil remains essential to Brunei’s economy—in 2024, oil and gas made up over 90 percent of the country’s exports and government revenues, and the majority of its national income. As a result, Brunei boasts the second-highest GDP per capita in the region, after Singapore.

    Throughout the twentieth century and continuing until the present, Brunei has utilized foreign interests in its oil wealth to ensure its security and separate, independent status. In 2019, Chinese company Hengyi Industries invested $3.45 billion into an oil refinery and petrochemical site in the country, as part of a joint venture with the government. Building on this cooperation, last week, the Sultan of Brunei Haji Hassanal Bolkiah Mu’izzaddin Waddaulah visited Beijing to discuss oil and gas exploration in the disputed waters of the South China Sea. Such strategic geopolitical moves draw from a long history of Brunei’s critical alliances with more powerful actors.

    The colonial politics of oil

With fewer than 500,000 residents and a land area of approximately 2,260 square miles, Brunei is among the smallest states in the world by both population and territory. The country is located on the island of Borneo, bordering Malaysia and Indonesia. Not coincidentally, Borneo is the only island in the world shared by three sovereign states. It was entirely possible that there would be just two, with Brunei enveloped by a larger state entity. But oil, together with the protectorate system—indirect colonial administration through local rulers, who retained internal sovereignty and received protection from internal and external threats from the colonizer—enabled Brunei to achieve statehood.

When the Portuguese occupied Malacca in the early sixteenth century, Brunei became an alternative trading hub for Muslim merchants. This made the sultanate a major regional power, ruling over the entire island of Borneo and beyond. By the seventeenth century, however, as Brunei declined, its territory had been reduced to the northern half of the island. Colonial rule arrived in Brunei in the nineteenth century, when the British made Brunei, Sarawak, and North Borneo (present-day Sabah) their protectorates and the southern half of the island fell into the hands of the Dutch. There were, therefore, four colonial units on the island.

    Of these, Brunei was the least likely to become sovereign. In the latter half of the nineteenth century, Sarawak was ruled by the Brooke family (who originally came from Britain) and North Borneo was ruled by a British company called the North Borneo Chartered Company. The two territories competed over the cession of Brunei’s territory. To this day, Brunei’s territory is divided into two non-contiguous parts because Charles Brooke of Sarawak annexed the area in between called Limbang in 1890. Because Brunei had little economic value to the metropole, Britain had little incentive to deter this aggression.

What changed Brunei’s fate was the discovery of oil in 1903. The further discovery of a major oil field in Seria in 1929 made Brunei the third-largest oil producer in the Commonwealth by the mid-1930s; by 1950, Seria was the largest field in the Commonwealth. Within just a few decades, Brunei went from a negligible peripheral colony to an important asset of the British Empire. Because of its economic value, the British quickly became more committed to Brunei’s security and survival. Three years after the initial oil discovery, Britain and Brunei signed a treaty ensuring that there would be no further cession of Bruneian territory to Sarawak and North Borneo. This secured Brunei’s survival as a separate entity throughout the rest of the colonial period. It also augmented the Sultan’s authority, because all policies made under British guidance were announced in the Sultan’s name. The regime became increasingly secure, as the British reaffirmed the Sultan’s status and the right of succession of his descendants. The discovery of oil also heightened the Sultan’s leverage in future negotiations with the British.

    Brunei’s rejection

    The biggest challenge to Brunei’s existence as a separate entity ironically came with the wave of decolonization. In the early 1960s, the British began considering their withdrawal from British Borneo and pursued a merger of Brunei, Sarawak, and North Borneo, as they regarded the three territories as too small to become individual sovereign states. Such a decision was nothing extraordinary for the metropole. In fact, merging small colonies was Britain’s method of choice when it came to decolonization. Small states, Britain believed, were less stable and more prone to communist influence. As a result, the rollback of the British Empire came with the proliferation of federated states in different parts of the world, including the West Indies Federation, the United Arab Emirates, and the Federation of South Arabia.

    A concrete proposal for a merger was presented in May 1961 by the Malayan prime minister, Tunku Abdul Rahman. Malaya, itself a federation, invited Brunei, Sarawak, and North Borneo to join it to create a new federation called Malaysia. This proposal gained full British support. The Sultan of Brunei, Omar Ali Saifuddin III, initially welcomed the proposal because of British encouragement and security concerns. The British particularly emphasized that a small, rich state like Brunei could easily fall prey to stronger and belligerent neighbors, which convinced the Sultan to view Malaysia positively. It was therefore entirely possible at this point that the Sultan would have agreed to merge his territory into Malaysia.

    However, repeated negotiations in the following two years revealed serious disagreements between Brunei and Malaya. The first, and less controversial, concerned the representation of Brunei in the new federation. Far more divisive was the distribution of Brunei’s oil wealth. While Brunei wanted to retain its control over its oil revenues, Malaya maintained that a significant part of Brunei’s oil should belong to the federal government. This issue first emerged as a disagreement over Brunei’s annual contribution to the federal budget. Both governments agreed that there should be an annual financial contribution, but they disagreed on the exact amount—with Malaya requesting $70 million and Brunei offering $30 million. Even after agreement was reached at $40 million, treatment of future oil discoveries remained controversial. Malaya pursued tight control of Brunei’s oil fields, while Brunei asserted its sovereignty over them.

    Unable to find a solution, the Malayan side sent Brunei an ultimatum. On June 21, 1963, Abdul Razak, the Deputy Prime Minister of Malaysia, sent the Sultan of Brunei a letter listing “outstanding issues” that Brunei needed to agree to in order to join Malaysia, all of which were related to the distribution of Brunei’s oil wealth. The Sultan, however, did not see any room for compromise. As a result, Malaysia was established without Brunei on September 16, 1963.

    After rejecting Malaysia’s proposal, Brunei ultimately turned back to Britain for its security needs. The shift was prompted by a series of unforeseen events. In December 1962, a large-scale rebellion by the Brunei People’s Party (Parti Rakyat Brunei, PRB) seized most of the state, including the capital and the Seria oilfield. The Sultan requested Britain’s assistance, and British troops suppressed the revolt and continued to station Gurkha regiments in the sultanate. Following the intervention, the Sultan was convinced that the British would offer security to Brunei regardless of whether it joined Malaysia. Vital British oil interests—including Brunei Shell—remained in the territory. Brunei Shell’s discovery of a new oil field in June 1963 heightened Brunei’s perceived importance to the metropole. The Sultan believed that Brunei’s oil would be too valuable for Britain to give up.

    The regional context was also crucial to the British security commitment. At the time, Malaya–Indonesia relations had worsened after the revelation of Indonesia’s involvement in the Brunei Revolt. The Sukarno administration declared a policy of confrontation (Konfrontasi) against Malaya in January 1963, and Malaysia severed its diplomatic relations with Indonesia immediately after its independence. The hostility between the two countries led to a militarized conflict without a declaration of war. There were military skirmishes near the border in Sarawak and Sabah, making these regions the frontiers of combat. Although Brunei itself did not share a border with Indonesia, Britain augmented its commitment to Borneo, making its desired disengagement from Brunei nearly impossible. In a way, Konfrontasi was a blessing for Brunei.

    Waiting for the right moment

    As Brunei became increasingly convinced that its future lay outside Malaysia, Britain faced a dilemma. Its basic position regarding the decolonization of Brunei was unchanged; Britain hoped that Brunei would join Malaysia, enabling it to withdraw. However, because of existing security arrangements and oil interests, Britain could not unilaterally withdraw, nor force Brunei to join Malaysia, which would have lent credence to the “neocolonialism” critique coming from Indonesia and the Philippines. Brunei was not a colony but a protectorate with internal sovereignty. Thus, the Sultan of Brunei had the right to determine the decolonization outcome of his state.

    After a few years, Britain gave up on the idea of Brunei in Malaysia; its new goal was Brunei’s separate independence. Though the Sultan favored this outcome, he was not ready for a British departure, owing to security concerns. At the time, Malaysia had been supporting former PRB members in their campaign against the status quo in Brunei, and when Indonesia forcefully annexed East Timor in 1975, Brunei feared a similar fate. Only after Malaysia terminated its support of the PRB and relations with Indonesia improved in the late 1970s did Brunei agree to its independence. Under an agreement announced on June 29, 1978, Brunei would achieve formal independence in 1984. Even after independence, British Gurkha regiments remained in Brunei to offer security. For Brunei, independence was achieved strategically, under the security umbrella of Britain.

    The Sultan was the major beneficiary of this arrangement, as he held significant control over the fate of his state. When Britain strongly pressed for withdrawal and constitutional reform, the Sultan expressed dissatisfaction by suddenly announcing his abdication on October 4, 1967, in favor of his son, Hassanal Bolkiah. He also threatened to withdraw his enormous wealth held in British banks. As a result, the British were forced to agree to postpone their withdrawal. The negotiations revealed the power of oil wealth, which Brunei’s ruler used as leverage against a much stronger state.

    Colonial oil and separate independence

    The production of oil, alongside a distinctive structure of colonial governance, enabled separate independence. In local monarchies with little economic appeal to the metropole, the discovery of oil granted newfound economic significance and political strength. As a result, the colonizers became more invested in safeguarding these regions.

    For Brunei, these two factors—oil control and the status of a British protectorate—incentivized the Sultan to seek independence strategically. The Sultan was reluctant to share Brunei’s wealth with its poorer neighbors, to lose his singular authority in the territory, and to become one of many rulers within Malaysia. Oil and British protection also ensured financial stability and security. While there was little doubt that Brunei could be financially independent, there were concerns about its security. It was Britain that removed Brunei’s internal and external security threats. In the relationship between the metropole and the protectorate, oil granted Brunei significant bargaining power. British oil interests deterred the metropole from withdrawing unilaterally from Brunei or forcing it to join Malaysia.

    Brunei’s state formation process continues to define politics in the sultanate today. Since the British suppression of the revolt in 1962, there has been no political force able to counter the sultan’s power. The Sultan, with the backing of the British, serves as the head of state, prime minister, minister of finance and economy, minister of defense, minister of foreign affairs, and the commander-in-chief of the armed forces. While separate independence benefitted the Sultan, it perhaps reduced the possibility for a more democratic state.

    Brunei is not an isolated case. A similar process involving oil-rich protectorates played out in the Persian Gulf. Qatar and Bahrain, along with seven other sheikhdoms, were expected to become part of a new federation called the United Arab Emirates but refused. These colonial territories were characterized by significant internal sovereignty, and thus local rulers were able to shape the process, timing, and outcomes of decolonization.

  9. Slashing the State

    Comments Off on Slashing the State

    Javier Milei’s rise to the presidency of Argentina came with all sorts of promises of economic, political, and cultural repair. In one campaign speech in the run-up to the October 2023 election, Milei claimed that should his party, La Libertad Avanza (Liberty Advances), come to power, “Argentina could reach living standards similar to those of Italy or France in fifteen years. If you give me twenty years,” he went on, “Germany. And if you give me thirty-five years, the United States.” When he did in fact come into office in December of that year, he did so with a bold political agenda but little congressional support; La Libertad Avanza won just 15 percent of seats in the Cámara de Diputados and 10 percent in the Senate, both of which remained dominated by Peronists on the one side and, on the other, Juntos por el Cambio (Together for Change), the coalition that led Mauricio Macri to the presidency in 2015. The promises had been large but, though victorious, Milei had been granted a tight space in which to maneuver.

    Milei’s popularity was premised on his reputation as a staunch market radical and his apparent position against Argentina’s political elite. In presidential debates, he warned against “the damned caste” that, he claimed, “in fifty years would turn Argentina into the biggest slum in the world.” Corrupt politicians were keeping the public hooked on state handouts so as to keep themselves elected. In turn, the system produced budgetary deficits that led to rising debt or excessive money printing, driving inflation and economic collapse. 

    The solution he proposed was a radical deregulation of the economy, focusing on a reduction in public spending and taxes. Corrupt politicians and their cronies would pay the consequences and honest, working people, Milei said, should not be concerned. At the same time, Milei proposed a series of entirely unrealistic measures that would “make Argentina great again.” These included dollarizing the economy, closing the Central Bank, severing diplomatic relations with China and implementing a voucher-based education system.

    Although most of these proposals were quite obviously impossible to implement—and indeed would never be attempted—they were an important feature of Milei’s winning pitch to voters, who were exhausted after more than a decade of deep economic malaise. In the end, his gambit worked. After obtaining just short of 30 percent of the vote in the general elections in October, he won almost 56 percent at the run-off three weeks later. He was roundly backed by big business, interested in deregulating the labor market and eager to see tax reductions, but by sections of the working class too.1

    The newest shock

    Since coming to power just over a year ago, Milei has presided over an intense economic shock. Quick off the blocks, the new government announced a drastic devaluation of the peso, which had an almost immediate negative effect on real wages: by February they had dropped significantly, with those in the public and informal sectors bearing the brunt of the decline.

    With wages failing to keep pace with rising prices, purchasing power dropped sharply. Then came the slashing of public spending. Milei famously promised to “take a chainsaw to the state,” and that he did, cutting government spending from 44 percent to 32 percent of GDP. The largest savings came from cuts to pensions, public works, public-sector salaries, energy and transport subsidies, and social programs. Perhaps most radically, the government has turned off the tap on all public infrastructure funding, including essential projects—like the expansion of a critical gas pipeline—that were already underway but have now had to be halted. Thousands of public-sector jobs were cut, often without clear criteria as to where and why particular cuts were being made. These measures were effected with notable cruelty; some public servants learned they had been fired only after arriving at their offices. Amid these actions, Milei used an executive order to push through pension reform. This, in conjunction with the elimination of price controls on certain medicines, meant that the cost of living for the elderly increased sharply. In February 2024, the government announced reductions in subsidies for utilities and public transportation, and a spike in prices quickly followed—contradicting an explicit campaign promise.

    Amid these macroeconomic adjustments, the government also rushed to put through an executive order, or decreto, which would allow it to bypass Congress. It was by decreto that the government cancelled rent controls, allowed soccer clubs to become private companies, and removed price controls. Congress could not be leapfrogged entirely, however, especially when it came to reform of labor regulation and of taxes. An omnibus bill, the Ley de Bases, was introduced by Milei with the intention of achieving far-reaching transformation, from national defense to economic regulation. When this failed to achieve congressional approval, it became clear that certain alliances would have to be made in Congress.

    Conciliatory alliances

    Juntos por el Cambio, the coalition that governed Argentina under Macri, was the obvious choice for such an alliance. Not only had Macri voters overwhelmingly supported Milei in the run-off election, Milei had reciprocated by appointing Macri’s 2023 presidential candidate, Patricia Bullrich, as the head of the Ministry of Security. He also placed other former members of Macri’s government in key positions, including appointing Luis Caputo as Minister of Finance. Both the macristas and La Libertad Avanza shared basic assumptions: Argentina was in need of shock therapy, and Peronism must be purged. This turn toward Milei on the part of former macristas is in part explained by the fact that many in Juntos por el Cambio felt that the Macri government had not gone far enough in restructuring the Argentine economy. More significant, however, was the pressure applied from business interests, which were leading the charge against Peronism, particularly Cristina Kirchner. 

    With regard to Milei’s campaign promise to sever diplomatic relations with “Communist” countries like China and Brazil, it was basic pragmatism that put an end to such ideas. Argentina’s heavy reliance on a swap arrangement with China for accessing Central Bank reserves would make such a move almost impossible. After facing pressure from the Chinese government to cancel part of the agreement, the Argentine government was forced to issue a diplomatic apology. Even so, it took until June to secure an extension of the swap. More recently, Milei attended the G20 meeting in Rio de Janeiro, where he shook hands with Brazilian President Lula da Silva—whom he had previously insulted publicly.

    On the economic front, instead of lifting capital controls as promised, Milei maintained the restrictions—the “cepo”—inherited from the previous two administrations. Concerned about the devastating effects that another peso devaluation could have on local prices and, consequently, on consumption, the government doubled down on these capital controls and introduced a new strategy based on the monetary approach to the balance of payments. This system, known as a crawling peg, established monthly 2 percent devaluations aimed at forcing convergence between inflation and the dollar’s exchange rate. Curiously, the same strategy was implemented by Argentina’s last dictatorship between 1978 and 1981, with disastrous consequences. Notably, the architect of that plan was Ricardo Arriazu, one of Milei’s most revered mentors. The strategy gained a lot of support from the financial sector, which itself stood to gain from the fixed devaluations of the peso. 
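
    The arithmetic behind the pre-announced schedule is simple compounding of the fixed monthly step. As a rough sketch (the function name and the starting rate of 800 pesos per dollar below are illustrative assumptions, not figures from the article):

    ```python
    # Hypothetical sketch of a crawling peg: the central bank pre-announces
    # the exchange rate path by compounding a fixed monthly devaluation.
    def crawling_peg(initial_rate: float, monthly_step: float, months: int) -> list[float]:
        """Return the pre-announced pesos-per-dollar rate for months 0..months."""
        return [round(initial_rate * (1 + monthly_step) ** m, 2) for m in range(months + 1)]

    # With an assumed starting rate of 800 pesos per dollar and the 2 percent
    # monthly step, the peg compounds to roughly 1,014.59 after twelve months.
    path = crawling_peg(800.0, 0.02, 12)
    ```

    The point of such a schedule is expectations management: if actual inflation runs above the 2 percent monthly step, the peso becomes "artificially cheap" in real terms, which is the pressure on reserves the article describes.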

    More open to political negotiations in Congress, Milei’s government was able to pass its omnibus law, Ley de Bases, in July. The law had initially failed to pass through Congress but compromises were ultimately made to see it through. The law no longer included the privatization of YPF, Argentina’s national oil company, and the national rail system, and it softened the delegation of extraordinary powers to the president. This Congressional victory was one example of how Milei’s administration adopted a two-pronged strategy. On one hand, it used the promise of increased national resources—or the threat of reducing them—to secure new votes in the Senate. On the other hand, it collaborated with representatives from Macri’s coalition to advance reforms supported by both parties. Despite the anti-populist rhetoric deployed by Juntos’s candidate against Milei in 2023, by 2024 common ground was identified and comfortably shared.

    What effect have these reforms had? Economic and social indicators continued to show meager results during the first half of 2024, with poverty rates climbing to 52.9 percent—an increase of 11 percentage points from the second half of 2023. While measuring income poverty in countries with high-inflation regimes involves significant methodological challenges, the sheer magnitude of this increase speaks for itself. Falling incomes led in turn to a sharp decline in consumption and economic activity, and a further deterioration of the standard of living. Although the worst of the recession occurred in January, immediately following the currency devaluation, the situation failed to improve during the first eight months of the year. Industrial manufacturing, construction, and retail were particularly affected.

    Inflation—Milei’s great bugbear—did decline during the first half of the year, but with little consequence for most people; the Consumer Price Index between June and August appeared to have bottomed out at a monthly rate of 4 percent—still extremely high. Rising prices created pressure for another devaluation. However, the government remained firm, maintaining the monthly 2 percent crawling peg out of concern that a new inflation spike could damage its public image. The artificially “cheap” dollar put additional pressure on the Central Bank’s reserves, which were already under strain due to falling international prices for key Argentine exports like soybeans and corn. While the government managed to improve reserve levels during its first months in power, progress slowed after it prioritized keeping the dollar’s exchange rate in line with its pre-announced schedule of devaluations.

    Culture war

    Milei’s outspoken views on left-wing ideas were never hidden: he had openly threatened left-wingers, expressed his opposition to abortion laws, and dismissed climate change as a lie—a theme to which he dedicated much of his speech at Davos in January. Unlike on economic issues, where Milei’s agenda made swift concessions to macrismo, his cultural and ideological crusade only escalated once in power. The first target was the national public university system. Milei had already frozen university budgets, which, amid inflation, effectively meant defunding them. Research funding, already scarce, disappeared entirely, while professors’ wages, like those of most workers, fell far behind inflation. By October, the war on universities was expanded when the government, with its supporters in tow, began referring to universities as “left-wing indoctrination centers” and falsely claimed that they refused to be audited.

    In its cultural war, Milei’s administration has an army of foot soldiers—mostly young men. At the core of this sits an organized group dedicated to intimidating political opponents, doxing individuals on social media, and amplifying the hate speech coming from above. Over time, this group has gained traction, consolidating its influence in the launch of its own streaming channel, Carajo, which is funded by some of the same backers who supported Milei’s presidential campaign, along with young media owners eager to expand their profits by targeting a new, untapped audience. With most of the shows hosted by men, Carajo became a radical platform that echoed some of the government’s most reactionary ideas; LGBTQ+ rights and climate change are both prominent targets. Many of these streaming “stars” have gone on to form a political movement claiming to be the government’s “militant branch,” and declaring itself ready to defend it at any cost.

    Threats of state repression have been gathering, too. Various police forces have been empowered to further intimidate protestors and discipline social movements and opposition parties. Tactics include employing speakers at transport stations to denounce demonstrators and urge people not to join rallies. Things escalated further in October, when the government repurposed Mi Argentina—an app first created during the pandemic to store IDs, driver’s licenses, and other personal information—to send messages opposing unions organizing a transportation strike.

    By November, Milei’s shock therapy looked to be yielding some of its intended results. The tax amnesty campaign initiated by the government saw around $20 billion held by Argentines outside the national banking system flow back into the country. As capital returned, market indicators improved and the government in turn was able to maintain the crawling peg at around 2 percent. Inflation dropped below 3 percent in October and kept on falling. Despite coming at the expense of plummeting real wages and a severe recession, many Argentines, exhausted from years of high inflation, have celebrated. A Gallup poll conducted at the end of last year shows that economic optimism is rising, with 53 percent saying that their standard of living is improving—the first time the percentage has inched above the majority line since 2015.

    Brimming with confidence, the government has in recent weeks announced that it will push for the complete deregulation of public transportation as well as a relaxation of gun control policies.

    Looking ahead

    Congressional elections are set to take place in 2025, at the same time as significant debt payments are due. Even if the government succeeds in keeping inflation and the US dollar under control, other factors—such as a devaluation of the Brazilian real or shifts among key competitors for Argentine exports—could still cause havoc. For its part, the agricultural sector is ready to push for a new devaluation of the peso that could benefit the value of its exports. Other sectors exporting services, like software programming, are already beginning to see what a “cheap dollar” means for incomes. As Argentina becomes more expensive in dollars, European and American companies are beginning to turn elsewhere for their foreign “digital nomad” laborers. With the lowest exchange rate in decades, Argentines are likely to flood Brazilian and other foreign beaches, while international tourism to Argentina is expected to further nosedive.

    Though the economy remains vulnerable, success in managing inflation means that Milei will likely perform well in the upcoming congressional elections, which would in turn give his government greater political leverage. Even if the government doesn’t perform well, it is likely to increase its seats in Congress; the few representatives it has won’t run for reelection until 2027. The return of Donald Trump to the White House has further bolstered the morale of local mileistas. Beyond the obvious political alignments, Trump could pressure the IMF to offer Argentina a better deal and a more favorable payment schedule, as it did for Macri in 2018.

    After a year in government, it seems that, rather than simply trying to follow in the footsteps of the US or Europe, Milei is interested in transforming Argentina into something else. A country with a long tradition of strong middle classes, limited inequality, and the provision of basic public services by welfarist policies, Argentina is now facing the possibility of losing the already damaged institutions that underpinned the country’s success vis-à-vis other Latin American economies during most of the twentieth century. The dismantling of the welfare state, the attempts to promote tax benefits for the rich, and the complete deregulation of the economy risk turning Argentina into a country where economic growth is guaranteed by low salaries and a total lack of regulation over foreign companies. The combination of such a model with the deterioration of state capacities is likely to spell increased polarization in a country that has been living under democratic rule since only 1983.

    Though there has been significant collaboration between Juntos por el Cambio and Milei’s government, a complete political alignment is still a way off—mainly because Macri doesn’t want to cede his political power to Milei. It nonetheless looks like a new political space is forming in Argentina, one that gathers parts of the traditional center-right, neoliberal champions, and the far right into something that closely resembles the current US Republican Party. The failure of softer alternatives like the Macri government of 2015–19 has created the conditions for various factions to come together under Milei’s leadership. Those members of Juntos por el Cambio who have stood apart from the far-right drift will need to decide whether to cleave to their centrist position, marginal as it now is, or to build bridges with Peronism and its allies. Peronism, in turn, needs to clarify how much it is willing to give up in order to cement a political alternative that, like the popular fronts of the past, unites dissimilar factions against a greater evil. This does not seem likely in the short term, but a strong performance by the governing coalition in the 2025 congressional elections might create the incentives for such a realignment. If that happens, Milei will certainly not have made Argentina great again, but he will have transformed its political system into something not dissimilar to the current US political order.

  10. After the Diamond Rush

    Comments Off on After the Diamond Rush

    At the 2025 African Mining Indaba, leaders from across the Continent met representatives from multilateral lending institutions, energy companies, and US national security agencies to discuss how to “position mining as the continent’s foremost industry, driving sustainable investment and fostering economic growth.”

    The conference location, in Cape Town, South Africa, is no coincidence. In 1871, news of diamond discoveries brought some 50,000 people to Kimberley, South Africa—a town built around a gaping 240-meter-deep hole that miners dug with picks and shovels. The frenzy marked the beginning of a long series of mining booms in the region. Diamonds were followed by gold, coal, chrome, iron, manganese, and platinum.

    For over 150 years, mining has constituted a core feature of the South African economy. The seemingly inexhaustible bounty of the earth made the country the wealthiest on the continent and financed one of the most all-encompassing systems of racial segregation in the world. Blessed with the largest gold deposits on the planet, successive governments in the colonial and apartheid periods cultivated a tight relationship between industry and the state. Mining wealth was used on a large scale to economically uplift white residents and to finance industrialization through state-owned corporations.

    The founding of the Investing in African Mining Indaba coincided with South Africa’s first non-racial elections. In 1994, the ANC’s postapartheid government aspired to similarly channel South Africa’s legacy of mineral wealth toward its own ends. But as firms internationalized and easily accessible mineral deposits were exhausted, mining capital exited the country and left the government to court foreign investment flows with market-friendly reforms.

    The contemporary South African economy is burdened by long-term stagnation and pervasive unemployment. In the most recent elections, voters expressed their profound disappointment with the failed promises of democratization. Behind the country’s promises and disenchantments lies the trajectory of an industry beholden to transformations in the global political economy. And its fate holds a cautionary tale for other African countries relying on mineral extraction for structural reforms.

    Mineral rushes

    The region around Kimberley was swiftly annexed by Britain, and neighbouring territories soon followed. In the early 1870s, much of the region still remained under the control of independent African polities, and the territory beneath which lay vast gold deposits belonged to independent Boer republics. Thirty years later, every one of these had been crushed and the entire region parcelled out between European colonial powers. The borders established then are still the borders of nation-states in the region today.

    There was a direct link between this rapid colonization and the extraction of mineral wealth. It was control over diamonds that made Cecil Rhodes a wealthy man and financed the creation of the British South Africa Company which seized vast swathes of Southern Africa on behalf of Britain. Others too owed their fortunes to diamonds. Mining magnate Ernest Oppenheimer began his career in the diamond industry in Kimberley and subsequently established the Anglo American Corporation which dominated South Africa’s economy over the twentieth century.

    The rush for diamonds was a precursor to an even greater rush for gold on the Witwatersrand, in modern-day Johannesburg, in 1886. The diggers’ camp housing 3,000 people in 1886 was a city of 100,000 by 1896, with over £215 million invested in the new gold mines. There was no place here for the prospector with a pick and shovel, though. The gold outcrop visible on the surface sank sharply into the ground and formed a vast deposit that was low-grade but predictable.

    By 1908, South Africa produced one-third of the world’s gold output, and by 1920, it accounted for over half, rapidly displacing diamonds as the most valuable export. In 1922, the mining magnate Lionel Phillips tried to convey the scale of operations to his company’s shareholders by inviting them to imagine that they were standing in the middle of Crown Mines, then the largest on the Rand:

    If we take the position we are standing in in this room as the central level above and below which work is proceeding we should have to look 1,000 feet below our feet and 1,000 feet above our heads over a distance of three miles in length, with thousands of men distributed all over the area.1

    Counterintuitively, the industry was not that profitable once the feverish speculation had subsided. Returns to investors for South African gold companies averaged only 3.1 percent per annum from the 1880s to 1969.  One reason was that the industry was heavily taxed. Colonial administrators and later apartheid governments used revenues from the mines to provide subsidies for white agriculture, establish state-owned industries, and finance a wide range of “upliftment” schemes to tackle white poverty.

    This form of racial Keynesianism began in earnest in the 1920s, when low gold prices and explosive industrial relations prompted serious rethinking within government and a push for economic diversification away from the gold industry. This involved direct state intervention in the economy to drive secondary industrial development and to broaden mining beyond gold, beginning with coal.

    In 1923, the Electricity Supply Commission (Eskom) was formed to “render, by the provision of power without profit, a worthy and ever-increasing contribution to the development of South Africa.” This was followed by the creation of the state-owned Iron and Steel Corporation (Iscor) in 1928 to utilize large domestic deposits of iron and coal.

    Revenues from gold mining were used to pursue interlinked development objectives, namely industrialization and reducing white poverty. This involved close collaboration between new state-owned companies and private mining firms. The coal industry shifted from export orientation to supplying domestic markets fostered by the state, chiefly Eskom’s new power stations and Iscor’s furnaces.

    The mining industry initially opposed the formation of a state-owned steel manufacturer, but it soon made its peace. There was much to be gained through close collaboration with the state: nowhere else in the world held gold deposits anywhere near the size of South Africa’s, and the industry needed long-term political stability to extract them.

    New industries and mines provided protected employment for whites, with comparatively high wages fixed by state-controlled industrial bargaining. This was the result of a state-brokered compromise after years of violent industrial upheaval by white workers, culminating in an attempted insurrection by white miners in the 1922 Rand Revolt. Only a person holding a blasting certificate could be employed as a miner, and only a white man could hold one. The first African was not granted a blasting certificate until December 1988.

    The gold industry remained profitable by ruthlessly cutting costs, supported in this by the government. Production costs in 1939 were lower than they had been twenty years earlier, and profits higher. This was achieved by suppressing wages for the African workers who constituted almost 90 percent of the industry’s workforce: for many decades the industry survived by recruiting hundreds of thousands of migrant workers from across Southern Africa and paying them very low wages. Real wages for African mineworkers actually declined between 1889 and 1969.

    Paying for apartheid

    Afrikaner nationalists continued these same policies after the surprise election victory of the National Party in 1948. Profits from the mining industry financed the increasingly complex system of racial segregation known as apartheid, and the new government raised taxes on the gold industry by 5 percent not long after the election. The National Party saw the mining industry as a tool for the economic upliftment of its constituency of Afrikaans-speaking whites. Afrikaner nationalists had long resented the dominance of the industry by British capital and, once in government, leaned on big mining firms to allow the entry of Afrikaner capital into the sector. The result was that the mining conglomerate Anglo American sold a controlling share in its subsidiary General Mining to Federale Mynbou, creating the first Afrikaner-controlled mining company.

    The period of “high apartheid” in the 1960s coincided with the years of peak production from the mining industry. To put it simply, there was money to pay for a developmental state that provided social mobility for whites, new infrastructure of roads, dams, and power stations, and social engineering projects to increase racial segregation. There were real, material benefits for the National Party’s constituents, and in 1968 Samuel Huntington termed South Africa a “satisfied society.” The country had become a tightly integrated industrial economy built around large mining firms and state-owned companies. By the 1960s, Eskom’s two largest customers were Iscor and Anglo American, while Anglo American was the largest supplier of coal to both Eskom and Iscor.

    Relations between government and industry were not entirely without friction. As historian Charles van Onselen put it, “The one paid, the other pronounced. It was never a marriage made in heaven.” Nevertheless, the close relationship paid dividends as a new mining boom began.

    Until 1971, international gold markets had an unusual structure. The price of gold was fixed by governments, which maintained convertibility between their currencies and gold and agreed to purchase any quantity produced; the market was effectively limitless. By the late 1960s, this fixed price, alongside high taxes and rising operating costs, caused some anxiety in the industry. Those worries disappeared with the onset of a spectacular mining boom in the 1970s. In 1971, the United States unpegged the dollar from gold, upending the Bretton Woods system. Gold prices soared, almost quadrupling over the following three years. South Africa then produced almost 70 percent of the world’s gold supply. This bonanza occurred alongside a wave of industrial unrest, and wages finally rose: real wages for African mineworkers increased by over 300 percent between 1970 and 1976.

    It wasn’t only gold that boomed. South Africa became a major coal exporter in the 1970s as production increased enormously with the shift from shallow underground mines to open pit operations. Over 28 million tons were exported in 1980, up from 1 million tons in 1969, in addition to a growing domestic market from newly built coal-fired power stations. Iron boomed too. Iscor constructed new steel plants in the 1970s and enormously expanded iron ore extraction in the Northern Cape to supply them. Steel production doubled over the decade.

    The 1970s also marked the beginnings of a boom in platinum. The Bushveld Complex, which contains about 50 percent of the world’s reserves of platinum group metals, was discovered in the 1920s, though it was in the 1970s that demand really took off. The adoption of catalytic converters as a way of reducing vehicle emissions transformed platinum from a precious metal into an industrial commodity and output rose dramatically through the 1970s and 1980s.

    End of the boom

    By 1980, the gold industry contributed 38 percent of total government revenues, and the rest of the mining industry around 7.5 percent. Never again were such riches to be had. In 1981, gold prices began falling, and declining ore grades triggered a slump from which the industry has never recovered. African mineworkers, joining the newly legalized National Union of Mineworkers in increasing numbers, continued to win above-inflation pay rises, though the falling value of the rand temporarily shielded the industry from rising costs. Employment in mining reached a new peak of 756,000 in 1986. When the rand stabilized, however, the industry’s position was painfully exposed. Mining’s contribution to GDP more than halved over the decade, and by 1990 the entire sector contributed only 6 percent of government revenues.

    This proved to be disastrous timing for the incoming ANC government, which won the first democratic elections in 1994, formally marking the end of apartheid. The ANC’s economic policy had shifted over the previous decade. The party had long advocated nationalization of the mining industry. On the day of his release from prison, Nelson Mandela announced that this nationalization “is the policy of the ANC and a change or modification of our views in this regard is inconceivable.”

    Much had changed during Mandela’s 27 years in prison, however. The collapse of the Soviet Union prompted a shift in thinking, one encouraged by the mining industry itself. Many leading figures in the industry had become convinced that apartheid policies were now hampering it, limiting companies’ ability to train their workforces and to access capital. They pushed for economic and political liberalization and opened negotiations with the still-banned opposition during the 1980s. The chairman of Anglo American led a delegation of businessmen to negotiate with the ANC in exile, and an executive at Consolidated Gold Fields later arranged secret talks between the apartheid government and the ANC at a country estate owned by the company in England.

    Straitened economic circumstances had already prompted a change of economic thinking in the National Party leadership. The refusal of US banks to extend credit to the government in 1985 triggered a financial crisis and provoked market-orientated reforms. The government began to privatize state-owned companies, including Iscor, which was sold in 1989.

    The ANC had paid relatively little attention to economic policy during the liberation struggle. Now, with doubts about the efficacy of state ownership, the party, as one account of this period puts it, fell under the sway of “intellectual seduction by big business” and the National Party.2 By 1992, the party’s Ready to Govern policy document talked of a strategy for mining that would “where appropriate, involve public ownership and joint ventures.”

    Once in government after 1994, plans for state ownership morphed further into advocating a strategic alliance between the state and big business with “private capital as a social partner for development and social progress.” The state and business would collaborate to redress racial economic disparities. The mining industry would help pay for this through empowerment initiatives for the African majority, job creation, and financing a shift from mining to labour-intensive manufacturing.

    The main legislative tool for economic transformation has been Black Economic Empowerment, which focused initially on boosting Black ownership of big business by requiring mining companies to achieve a minimum of 15 percent Black shareholding by 2009. This began with a series of voluntary deals in the mining sector, notably Anglo American’s decision to sell its subsidiary Johannesburg Consolidated Investments to a Black empowerment consortium. This sale was consciously modelled on Anglo American’s sale of its subsidiary General Mining three decades earlier. Further deals followed in a similar manner, including the sale of Iscor’s iron and coal assets to a Black-owned consortium.

    Criticism of these deals has usually focused on the way that politically connected insiders became wealthy through them, among them South Africa’s current president, Cyril Ramaphosa. There was a broader problem, though: the mining industry slumped in the 1990s and some parts never recovered. Resources from the sector have dwindled: taxes from mining represented 3 percent of GDP in 1980 and 0.6 percent of GDP in 2010. The gold industry disintegrated during the 1990s in the face of rising production costs and low prices. Even worse, output and employment continued to slide during the huge boom in gold prices in the 2000s. South Africa lost its place as the largest gold producer in 2007, when it was overtaken by China, and has since slumped to eighth. In 2020, annual production dipped below 100 tons for the first time since 1901. Job losses continued too: gold mining employment fell from over 500,000 in 1985 to 380,000 in 1995 and 160,000 by 2005.

    The mining sector had a hardy self-image. General Mining’s corporate history was called Through Fortress and Rock, while Western Deep Ltd titled the account of its biggest mine Down Where No Lion Walked. Neither of these companies exists today. General Mining’s much-reduced successor company moved out of its grand offices in central Johannesburg (which had a helipad on the roof), partly because it no longer needed so much space. Most existing gold companies, some over a century old, along with a host of platinum and coal firms, disappeared during the 1990s and 2000s.

    The gold is still there. The issue is that the remaining deposits lie too deep underground to extract profitably. The deepest mines go down to 4,000 metres, and extraction is so expensive that even with gold prices at record highs they struggle to make money. In 1996, Anglo American announced plans to sink new shafts on the edge of the Rand down to 5,000 metres before abandoning the plan. One contemporary estimate suggested that extraction could only begin 13 years after such a project commenced, implying a poor return on investment. Instead, Anglo American spun off its gold mines into a new company, AngloGold, which by 2020 had sold off or closed down all its South African operations before exiting the country entirely in 2023.

    This decision became an increasingly common one. In 1995, the ANC lifted capital controls that had been in place since 1960, assuming there would be a wave of foreign investment once apartheid-era economic sanctions were dropped. The opposite occurred. There was substantial capital flight in the 1990s and 2000s, especially from the mining industry. Anglo American represented perhaps 45 percent of the value of the Johannesburg Stock Exchange in 1990 (the company claimed at the time this was exaggerated, and that it was “only” 30 percent). In 1999, the company moved its headquarters to London and shifted its primary listing to the London Stock Exchange as part of a strategy of divestment from South Africa. This has continued steadily: the company’s latest plans involve selling off almost all its remaining mines in South Africa.

    Most of what were once firmly national companies, with their listings and almost all their operations within South Africa, became international or were bought up by other international mining firms. Newly internationalized firms had much less reason to maintain close links with the South African state, and when rapid economic growth in China stimulated a huge commodity boom in the 2000s, they turned their attention elsewhere.

    The global mining boom was apparent almost everywhere except South Africa. Investors who grumbled about regulatory uncertainty and logistical problems in South Africa poured money into mines in Burkina Faso, Democratic Republic of Congo, and Mali.

    After the rush

    The ANC has retained faith in mining as the inexhaustible provider of wealth and jobs, and its policy documents have consistently emphasized the need to expand the sector “for the current and future growth of our economy and job creation.” The government sought to use the industry to provide jobs for its constituents by imposing a tax on the recruitment of mineworkers from neighbouring states, on whom the industry had long relied. While most of the people who work in the industry today are South African nationals, unemployment remains one of the most serious and persistent economic challenges in South Africa.

    It is true that mining remains a large-scale industry in the country, even if a diminished one. Thousands upon thousands of tons of metals and coal are extracted and exported each year. Yet the mining industry has stagnated, failing to revive even as gold prices reached record highs in 2024. For other countries on the continent, the South African mining industry represents the dangers of a commodity-boom-driven developmental agenda.

    The ANC’s efforts to leverage this exploitative industry into a broader economic transformation faltered when the industry shrank dramatically. Hundreds of thousands of jobs disappeared, and in many ex-mining towns nothing has replaced them. The party could not deliver jobs to its supporters, who have consequently abandoned it in droves.

    At the same time, the close alliance between industry and state that the ANC thought would characterize a new nation-building project dissolved as mining companies exited South Africa. The industry internationalized, and mining companies found they did not require a close relationship with governments. Mining companies had other options; the country they left did not.