Category Archive: Analysis

  1. Kishida’s New Capitalism

    In September 2021, Fumio Kishida was elected leader of Japan’s Liberal Democratic Party, and with it prime minister, on an ambitious platform called the “New Form of Capitalism.” He promised a new and better economic system in which economic growth and income distribution form a virtuous cycle. The Japanese government has since presented several plans to promote this new capitalism, but the Japanese public remains dissatisfied with the economic reality. Despite a soaring stock market, with the Nikkei index briefly surpassing levels last seen during the bubble period some thirty-four years earlier, economic growth has stagnated since mid-2023. Kishida’s approval rating has steadily declined, dropping from over 50 percent in early 2022 to 25 percent in February 2024.

    Kishida’s government is not the first to promise an innovative approach to economic management. In 2013, Shinzo Abe committed to reigniting Japanese economic growth through expansionary monetary policy. But despite all the tinkering of Abenomics and the current Kishida plan, wage growth remains stagnant. Behind the policy innovations and campaign promises, a glaring development hampers long-term wage growth: the enduring decline in the bargaining power of workers. Any successful transformation of the Japanese economy thus ought to prioritize correcting the structural power imbalances that stand in the way of sustainable and equitable economic growth.

    Achievements and limitations of Abenomics

    Shinzo Abe gained international fame through Abenomics, an economic strategy that integrated quantitative and qualitative easing, flexible fiscal policy, and deregulatory structural reforms. Abenomics was designed to end the deflationary cycle and stimulate the Japanese economy through a reflationist approach. One of its key components was the Bank of Japan’s (BOJ) purchase of long-term Japanese government bonds to inject liquidity. In April 2013, the BOJ declared its intention to double the monetary base and achieve a 2 percent inflation target within two years. In 2016, it implemented a negative interest rate policy and yield curve control (YCC), directly controlling long-term government bond rates. Due to this expansionary and unconventional monetary policy, the BOJ’s holdings of government bonds surged from 11.6 percent in March 2013, before Abenomics, to approximately 53.9 percent in September 2023.

    In retrospect, Abenomics exhibited both successes and failures. While it effectively generated jobs and ended deflation, it fell short on wage and national income growth. It produced a modest economic recovery accompanied by increased employment and a tighter labor market. The yen depreciated due to quantitative easing, which boosted exports and corporate profits and lifted the stock market index. The government debt-to-GDP ratio stabilized thanks to the BOJ’s firm control of interest rates, but the 2 percent inflation target was never reached, even though deflation was overcome. Crucially, real wages rose in only two of the years between 2013 and 2020, and private consumption growth stagnated despite a recovery in corporate investment.

    During the second phase of Abenomics, known as “Japan’s Plan for the Dynamic Engagement of All Citizens,” the Japanese government introduced more progressive reform measures. Beginning in 2016, it made concerted efforts to support irregular workers who faced discrimination in the labor market and to reduce Japan’s excessively long working hours, as part of its revised labor reform agenda, Hatarakikata Kaikaku (“Work Style Reform”). The government also presented plans to expand social welfare provisions for childcare assistance and support for the elderly. To raise the labor supply, promote domestic demand, and thereby stimulate economic growth, natalist policies have aimed to stabilize Japan’s population at 100 million by 2060.

    Abenomics emphasized the importance of wage growth and sought to establish a virtuous cycle in which a more egalitarian income distribution stimulates domestic demand. Nevertheless, wage growth remained stagnant. According to a Japanese government report using OECD data, real wages per worker rose 41 percent in the US and 34 percent in Germany and France between 1991 and 2019, but only 5 percent in Japan. Japan has also suffered from an excess-saving problem more serious than that of other advanced countries, a major sign of macroeconomic imbalance: corporate saving out of profits has far exceeded investment.

    Abenomics clearly failed to halt the long and continuous stagnation of wage growth—in fact, real wages in 2019 were even lower than those in 2013, while corporate profit increased significantly. Abe’s tax reforms further favored capital over workers: between 2013 and 2016, the government cut the effective corporate tax rate from 37 percent to about 30 percent, while raising the consumption tax rate from 5 percent to 10 percent between 2013 and 2019. 

    It is no wonder that economic recovery proceeded slowly. Private consumption, the largest component of GDP in Japan, recorded positive growth only in 2013, 2017, and 2018, while GDP growth was positive from 2013 to 2018. The Japanese economy entered a recession, experiencing negative growth in 2019 even before the Covid-19 crisis. In response to the pandemic, the government implemented large-scale fiscal stimulus packages, but the recovery was still slower compared to other advanced countries.

    Kishida’s plan for New Capitalism

    The weaknesses of Abenomics formed the backdrop to Kishida’s energetic campaign. Kishida’s “New Form of Capitalism” specifically emphasized the need for wage growth among vulnerable workers. During his campaign for the leadership of the Liberal Democratic Party (LDP), he argued for increasing the wages of workers in healthcare, childcare, and elder care, as well as employees of subcontractor companies. Given the absence of an heir to Abe’s platform, he was able to capitalize on these promises to enter government. He also presented a plan to raise the capital gains tax, which stands at a flat rate of only 20 percent, compared with a 55 percent income tax rate for the highest bracket. Because high earners derive a growing share of their income from capital, the gap between the two rates means that the effective tax burden falls once income exceeds about 100 million yen, known as the “100 million yen barrier,” as Figure 1 shows.

    Figure 1. Income tax burden in Japan

    Source: Hisanaga (2022), p. 11
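
    A stylized calculation illustrates the mechanism behind the barrier. The 55 percent and 20 percent rates are from the text (treated here as simple average rates for clarity); the income splits are hypothetical.

    ```python
    # Stylized sketch of the "100 million yen barrier": because capital gains
    # are taxed at a flat 20% while top labor income faces 55%, the blended
    # effective rate falls as income composition shifts toward capital.
    def effective_rate(labor_income, capital_income,
                       labor_rate=0.55, capital_rate=0.20):
        tax = labor_income * labor_rate + capital_income * capital_rate
        return tax / (labor_income + capital_income)

    # Hypothetical income splits (yen): richer taxpayers hold more capital.
    for labor, capital in [(80e6, 20e6), (100e6, 100e6), (100e6, 900e6)]:
        print(f"total {labor + capital:>14,.0f} -> "
              f"effective rate {effective_rate(labor, capital):.0%}")
    # 48% -> 38% -> 24%: the effective burden declines as total income rises.
    ```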

    But the plan to raise the capital gains tax was retracted amid resistance and a falling stock market. The Nikkei 225 fell for eleven consecutive days after Kishida announced the plan on September 30, 2021, and he canceled it on October 11. In late 2023, the government introduced a limited capital gains tax hike, up to 22.5 percent, applying only to the super-rich who earn more than 3 billion yen.

    Though Kishida was not very successful in raising the capital gains tax, New Capitalism persisted. In his first speech as prime minister, he stressed that there is no growth without redistribution. Shortly after his election, Kishida established the Council of New Form of Capitalism Realization under the PM’s office. The council, chaired by Kishida, began holding regular meetings of government officials, representatives of companies and workers, and specialists. At its November 2021 meeting, the government announced its reform agenda in a document called “Urgent Suggestions for a New Form of Capitalism.” The government underscored the creation of a sustainable stakeholder capitalism in which growth and distribution form a virtuous cycle. The plan includes a growth strategy encompassing green economic transformation. The distribution strategy covers wage growth for vulnerable workers, the reduction of the wage gap, the promotion of investment in human capital, and support for companies that raise wages. In December 2021, the government presented a further plan to supervise fair trade between large firms and small subcontractor firms, so that cost increases caused by external shocks could be passed through smoothly to subcontract prices.

    In June 2022, the Japanese government presented the “Grand Design and Action Plan for a New Form of Capitalism.” The document presented New Capitalism as the next stage in a natural evolution from laissez-faire to the welfare state, and then to neoliberalism. New Capitalism is the system in which the market and the state together strive to realize people’s happiness by addressing inequality and climate change. It argues that fair distribution of the fruits of growth is itself an investment in sustainable growth, and that Japan should work toward wage growth, fair trade, and better education. In particular, the government announced strategic investments in human capital, science and technology, startups, and the green and digital transformations. Investment in human capital and distribution includes several plans for wage growth and worker training, such as government subsidies for companies that raise wages and the appropriate setting of delivery prices by subcontractor firms. Other plans, such as doubling asset income and promoting startup companies, were announced in November 2022. In May 2023, the government presented a three-pronged plan for labor market reform to improve workers’ skills and introduce job-based wages. Notably, the government, specifically the Council of New Form of Capitalism Realization, has continuously updated and followed up on the plan.

    In general, the plan for New Capitalism was broadly in line with inclusive growth strategies suggested by international organizations after the global financial crisis, while also following the main tenet of the second phase of Abenomics. It is also consistent with the revival of industrial policy and the modern supply-side economics characteristic of the Biden administration.

    Recent economic growth and wage growth in Japan

    Kishida’s New Capitalism faces significant challenges. Since the Covid-19 pandemic, the government has used fiscal stimulus to fund programs supporting households against inflation. But the economic recovery in Japan has been relatively slow, especially compared to that of the US. The fiscal stimulus Japan disbursed in response to the pandemic amounted to 16.7 percent of GDP by September 2021, smaller than the 25.5 percent disbursed in the US. The growth of private consumption has been particularly weak, recording negative figures since the second quarter of 2023 alongside stagnant household income and wages. Although real GDP growth in 2023 was 1.9 percent, higher than in previous years, the economy contracted in the third quarter of 2023 and stalled in the fourth. It weakened further, with GDP growth of -2 percent in the first quarter of 2024.

    Figure 2. Economic growth of real GDP and components in Japan (%)

    Source: Cabinet Office

    Inflation finally returned to Japan, peaking in late 2022, but the BOJ did not rejoice. Inflation stemmed not from the stimulation of domestic demand backed by wage growth, as the BOJ and the government had hoped, but from an external shock: the yen’s depreciation. The BOJ had continued the negative interest rate policy and the YCC, though with some tweaks to allow the long-term bond rate to rise gradually. This produced a large gap between US and Japanese interest rates as the US Fed hiked rates rapidly in 2022. Hence the yen’s significant depreciation, from 110 per dollar in January 2022 to 150 in October; it stands at about 156 as of May 2024. Together with the increases in energy and food prices after Russia’s invasion of Ukraine, the yen’s depreciation pushed consumer price index inflation above 4 percent in late 2022, though it has slowed to 2.7 percent as of March 2024, as Figure 3 shows.

    Figure 3. Consumer Price Index Inflation Rate in Japan (%)

    Source: Ministry of Internal Affairs and Communications

    The problem is that nominal wage growth ran much lower than inflation. Real wage growth has been negative for twenty-four consecutive months since April 2022, as Figure 4 illustrates, although the decline narrowed in January 2024. This is the opposite of what the New Capitalism plan envisioned.

    Figure 4. Real wage growth in Japan (%)

    Source: Ministry of Health, Labour, and Welfare

    Kishida’s government has been active and enthusiastic in calling for wage increases, and large companies have responded positively in the recent period. Fast Retailing Co., well known for its primary subsidiary Uniqlo, raised wages by 40 percent in 2023, and other companies followed suit. Even the Japanese business federation Keidanren argued that wage increases are the responsibility of companies. Companies that engage in wage negotiations with labor unions each spring, in the process called Shuntō, saw average wage growth of 3.6 percent in 2023, much higher than in previous years, as Figure 5 demonstrates.

    Wage increases are expected to be even higher in 2024. According to the Japanese Trade Union Confederation, known as Rengō, the first release of spring wage negotiations shows that wages rose by 5.3 percent, the highest since 1991. Just after this, the BOJ raised its policy rate for the first time since 2007, ending the negative rate policy and yield curve control, judging that the Japanese economy could finally achieve healthy inflation alongside wage increases and expanding aggregate demand. In contrast to the US, where a wage-price spiral was viewed with enormous concern, the BOJ has actively pursued this positive spiral since the implementation of Abenomics.

    Figure 5. The wage increase from spring wage negotiation (%)

    Source: Japanese Trade Union Confederation

    Not macroeconomics, but political economy of wage growth

    The rapid increase in wages due to cooperative bargaining, and the accompanying change in monetary policy, is surely good news, a sign of reflation in the Japanese economy after about thirty years. However, it remains to be seen whether Japan is actually on the verge of ending its long-run economic stagnation. While large companies with labor unions have increased wages, many small and medium-sized companies have little room to do the same, contributing significantly to stagnant wage growth. Moreover, the trade unions that would be indispensable to wage increases play a very limited role. Thus, overall nominal wage growth was only 1.2 percent in 2023, far below the results of the negotiations between large companies and their unions. This stands in stark contrast to the recent surge in the Japanese stock market. The Nikkei 225 rose above 40,000 points in late March 2024, surpassing its bubble-era peak of December 1989. The surge was mainly associated with rising corporate profits, the government’s efforts to support the stock market through corporate governance reform, and inflows of foreign investment. Yet the share of people in Japan who invest in the stock market was just 12 percent of the population as of 2022, while the share of people without any savings was about 27 percent.

    There is growing recognition across Japanese society of the urgency of wage growth. A recent report by the Ministry of Health, Labour, and Welfare argues that a 1 percent increase in wages can lead to an increase in consumption and growth, raising production by 0.22 percent and creating an additional 160,000 jobs. But after a decade of policy experimentation, it is increasingly clear that this sort of wage growth depends on recalibrating the balance of power within the real economy. Japan’s unionization rate has declined for several decades, alongside an increase in the share of irregular workers. The unionization rate fell from 30.8 percent in 1980 to 21.5 percent in 2000, and further to 16.5 percent in 2022; union membership among part-time workers stood at only 8.5 percent in 2022. The share of irregular workers in the total workforce rose from about 20 percent in 1990 to about 37 percent in 2022.

    Japanese trade unions are formed at the corporate rather than the industrial level. Individual unions are typically segmented and act in a decentralized fashion, possessing limited bargaining power. In recent years, Japanese unions have increasingly turned to cooperation with employers. Indeed, there were only sixty-five strikes in 2022. Strikes peaked at 9,581 in 1974 but fell sharply to 1,698 in 1990 and 129 in 2005, dropping below one hundred after 2008.

    The strike by workers at the Seibu department store in August 2023, the first department store strike in sixty-one years, caught Japanese society by surprise. It is clear that not only social consensus but also workers’ struggle to organize is essential to wage growth in Japan. Moving forward, any plan for a sustainable economic recovery must prioritize enhancing workers’ negotiating power and promoting unionization among irregular workers and workers in small firms. Without a fundamental change in the balance of power, nothing new will come of Kishida’s New Capitalism.

  2. A Safe Haven for Hidden Risks

    Perceptions are shifting regarding the US fixed-income market. In September 2019, interest rates on overnight repos unexpectedly spiked, leading the Federal Reserve Bank of New York to inject $75 billion in liquidity. In March 2020, the Covid-19 pandemic triggered a wave of securities selling, prompting the Fed to purchase over $1 trillion in securities. These events have raised concerns about market stability.

    In response, regulators have mandated that Treasury and repo transactions be cleared through clearinghouses. However, many participants, such as hedge funds, lack direct access to central counterparties (CCPs) and rely on dealer banks to connect them. Dealers, citing potential costs, have begun to drop such clients. An intended yet indirect consequence of these regulatory policies is the reduced participation of hedge funds in this market.

    The issue, however, stems from a misdiagnosis of the underlying problem. The prevailing understanding attributes this instability to the behavior of alternative investment funds like hedge funds engaging in “basis trades,” framing liquidity as the key issue in the Treasury market. Drawing on insights from my interview with seasoned fixed-income portfolio manager Mohsen Fahmi, this piece argues that the market suffers from a different problem altogether: a chronic inefficiency in the hedging market that could potentially lead to a systemic failure of fixed-income risk management strategies.

    Locating risks hidden in plain sight within pragmatic risk management practices involves broadening our perspective. We need to view investment funds’ business models, including traditional ones like bond funds and alternative ones like hedge funds, as responses to changes in market structures rather than in isolation. This perspective allows us to see these business models as vehicles that transfer inefficiencies and opportunities from one market, such as the derivatives market, to other markets, such as the Treasuries cash market.

    Bond mutual funds are major players in the US Treasury market. The primary risk they encounter is interest rate risk. Fund managers often use derivatives like options and interest rate swaps to hedge against it. These hedging tools have proven effective when their duration matches that of the underlying assets. For fixed-income fund managers with long-term investments, the Treasury options and interest rate swap markets used to offer contracts with durations matching their portfolios. However, the market for Treasury options is disappearing, affecting fixed-income fund managers with long-term investments. Additionally, as an earlier interview with Ralph Axel (a rates strategist at Bank of America, one of the largest swap dealers) demonstrated, the swap market is also shifting toward catering to different types of clients, particularly those with shorter-term investments.

    The inability of the options and interest rate swap markets to offer the exact duration leaves an important gap in fixed-income risk management: “duration drift,” the unhedged portion of the portfolio resulting from the mismatch between the durations of the derivatives and the assets. Moreover, these risks are usually hidden through hedge accounting conventions that allow inefficient hedges to go unreported.

    To tackle inefficiencies in the options and swaps markets, hedging demand has shifted toward the futures market. Shorter contracts in interest rate swaps and the disappearance of options force fixed-income asset managers to build their hedging strategies around futures. This shift generates extra demand for such contracts and widens the basis that hedge funds exploit. But this behavior of hedge fund managers is just the tip of the iceberg. The larger hazard is shaped by the distortions in the fixed-income derivatives market. Consequently, the solution to stabilizing the market lies not in the US Treasury cash market but in its hedging markets.

    Instead of focusing on restricting the actions of hedge funds, regulators should learn from risk managers of mutual funds to develop a tool that measures the extent of hedging inefficiencies system-wide. Such a tool can become a new indicator of systemic risks in the fixed-income market. By developing and implementing tools to measure hidden hedging inefficiencies, regulators can gain better insights into creating a more stable and resilient financial system.

    How regulators see hedge funds in the Treasury market

    The shift in fixed-income fund managers’ portfolios toward US Treasury futures has caused futures prices to diverge from the related underlying asset, a Treasury security, leading to what is known as “basis.” Normally, policymakers and academics treat such deviations as short-lived phenomena, or sunspots, rather than structural issues. These deviations are expected to be resolved through arbitrage, which helps the market correct itself as prices revert to their baseline, fundamental levels.

    However, regulators’ problem with hedge fund arbitrage strategies is that they are orchestrated in a way that siphons liquidity from the market. The key to understanding this hostility toward hedge funds, as compared with a theoretical arbitrageur, is leverage. Regulators’ adverse view of hedge funds as arbitrageurs stems from their reliance on leveraged funding. Alternative investment funds conduct the arbitrage through “basis trades”: borrowing from the repo market to buy US Treasury securities at a lower price and simultaneously selling US Treasury futures at a higher price.

    Figure 1: Anatomy of a Cross-Market Basis Trade

    Regulators at the SEC and the Fed are worried that basis trade strategies are often made possible by low or zero haircuts on repo financing, which could result in liquidity crises. The high leverage utilized by hedge funds implies that if market conditions change suddenly, these funds might be compelled to quickly liquidate their positions, triggering “fire sales” that could destabilize the market. This withdrawal of liquidity by hedge fund arbitrage destabilizes the market rather than helping it return to equilibrium and market-clearing conditions.
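
    A minimal numerical sketch makes the leverage concern concrete. The prices, position size, and zero haircut below are hypothetical illustrations, not figures from the text:

    ```python
    # A minimal sketch of a cash-futures basis trade: buy the Treasury with
    # repo borrowing, sell the future, capture the spread at convergence.
    cash_price    = 99.50          # hypothetical price paid for the Treasury
    futures_price = 99.80          # hypothetical price of the future sold
    haircut       = 0.0            # near-zero repo haircut, as regulators note
    face_value    = 1_000_000_000  # hypothetical $1bn position

    basis = futures_price - cash_price            # profit per 100 of face value
    gross_profit = basis / 100 * face_value
    own_capital = cash_price / 100 * face_value * haircut

    print(f"gross profit at convergence: ${gross_profit:,.0f}")
    print(f"trader's own capital at risk: ${own_capital:,.0f}")
    # With a zero haircut, the position is financed almost entirely in the
    # repo market; a tiny basis yields an outsized return on equity, and a
    # sudden margin or haircut change can force the rapid unwinding
    # ("fire sales") that regulators fear.
    ```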

    Moreover, regulators also point fingers at traditional funds, such as fixed-income asset managers. They argue that these managers have shifted to the futures market out of increased risk appetite and leverage, disregarding the funds’ pragmatic risk management considerations. In summary, the behavior of both traditional and alternative fund managers is seen as a key factor in the heightened volatility of the US Treasury and repo markets.

    However, the problem with this view is that regulators are focusing too much on funds as a self-contained problem rather than as a symptom of a much deeper issue in other segments of the fixed-income market structure. This perspective overlooks underlying structural issues in fixed-income markets, such as changes in the availability and terms of hedging instruments like swaps and options. These changes force fund managers to adopt different strategies, including the increased use of futures, which can inadvertently contribute to market volatility. By not addressing these root causes, regulatory efforts may fail to achieve true financial stability.

    Figure 2: Funds and Basis

    Duration drift: a hidden risk in fixed-income risk management

    Examining how fund business models interact with market structure, rather than solely focusing on fund behavior, allows us to uncover how these funds’ practical management solutions risk becoming key drivers of instability in the Treasury market. Although Treasury securities are widely regarded as the world’s safest assets, they pose a risk to fixed-income fund managers: their prices move inversely to interest rates.

    This risk originates in the evolution of the term structure, or yield curve. The term structure represents the relationship between a bond’s term to maturity (when the final and largest payment is made) and its yield to maturity (the single discount rate that equates the bond’s price to the present value of all its cash flows). The value of these cash flows, which determines the value of the security, moves inversely to interest rates. This inverse relationship impacts the portfolio’s overall return and necessitates sophisticated hedging solutions to mitigate risk.
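
    The inverse relationship can be seen in a minimal pricing sketch; the 10-year, 3 percent coupon bond here is a hypothetical example, not one drawn from the text:

    ```python
    # Price a bond as the present value of its cash flows and show that
    # the price falls as the discount rate (yield) rises.
    def bond_price(face, coupon_rate, years, yld):
        coupons = sum(face * coupon_rate / (1 + yld) ** t
                      for t in range(1, years + 1))
        principal = face / (1 + yld) ** years
        return coupons + principal

    for yld in (0.02, 0.03, 0.04):
        print(f"yield {yld:.0%}: price {bond_price(100, 0.03, 10, yld):.2f}")
    # yield 2%: 108.98 / yield 3%: 100.00 / yield 4%: 91.89 -- the fixed
    # cash flows are worth less when discounted at a higher rate.
    ```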

    For held-to-maturity securities, these risks impact the book value. However, the price risk becomes evident when securities are available-for-sale and fund managers liquidate them before maturity. In such cases, the fixed-income portfolio is exposed to market price and interest rate fluctuations. Interest rate hedging tools like swaps can help mitigate this risk. When the hedging is efficient, the fund’s return can stabilize and become comparable to a fixed benchmark. In fixed-income portfolios, derivatives effectively make the funds risk-free by synthetically aligning the bonds’ duration with the fund manager’s holding period.

    The key to effective fixed-income risk management and creating a de facto risk-free asset is identifying derivatives that can synthetically align the fund’s fixed durations with increasingly varying investment periods. Ideally, these derivatives should have the same duration as the fixed-income assets. In the past, the interest rate swaps and options markets were liquid at every maturity, providing fixed-income managers with valuable tools for managing duration risks. As a result, these derivatives were widely popular for this purpose.

    Incorporating such derivatives, especially those that match the bonds’ duration, helps establish adjacent points on the yield curve. These points form a vector of differences between portfolio and benchmark exposures that are highly correlated and typically move in opposite directions. This relationship allows fund managers to offset risk positions effectively, ensuring the portfolio’s return remains stable and comparable to a fixed benchmark, thus creating a near-risk-free investment environment.

    For instance, if interest rates rise, the difference between the bond portfolio and benchmark returns becomes negative. Simultaneously, the difference between the swaps and benchmark returns turns positive. In an ideal hedge scenario, these opposing movements are equal in magnitude and cancel each other out. Therefore, combining swaps with bonds can help mitigate exposure to term structure risks, stabilizing the portfolio’s overall performance.

    When the durations do not align, that is, when the maturity of an underlying asset does not match that of the hedging instrument, the stretch of time left out of the contract generates hedge risk. For instance, a mismatch occurs when an interest rate swap that hedges a 10-year Treasury security matures in eight years. This mismatch especially exposes bonds with longer terms to maturity.1
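
    A back-of-envelope sketch of that residual exposure, using the 10-year bond and 8-year swap from the example above (the durations and position size are illustrative assumptions):

    ```python
    # Residual exposure from duration drift: the swap offsets only part of
    # the bond's duration, leaving the remainder unhedged.
    bond_duration = 9.0          # assumed duration of a 10-year Treasury
    swap_duration = 7.0          # assumed duration offset by an 8-year swap
    position      = 100_000_000  # hypothetical $100mn portfolio

    residual_duration = bond_duration - swap_duration  # years left unhedged
    rate_rise = 0.01                                   # a 100bp move in rates

    loss = position * residual_duration * rate_rise
    print(f"residual duration {residual_duration:.1f}y -> "
          f"approx. loss on a 100bp rise: ${loss:,.0f}")
    # A seemingly small per-fund drift becomes a systemic exposure if it
    # is widespread across fixed-income portfolios.
    ```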

    Traditionally, the swap and option markets could offer near-perfect matches for most points on the yield curve, ensuring effective risk management. Recently, however, the size of the Treasury options market has been shrinking. It has also become more challenging for risk managers to find swaps that precisely match the maturity dates of their funds’ underlying assets. This phenomenon, known as “duration drift,” poses a significant challenge to effective risk management.

    The unhedged portion of the funds’ duration can lead to a loss of value as interest rates fluctuate, and to the failure of the overall hedging process. When hedging methods like swaps or options result in duration drift, risk managers assess the extent of these drifts to devise additional strategies rather than leaving them unaddressed. Treasury futures, in particular, offer a cost-effective alternative to swaps for duration hedging.

    Understanding and managing duration drifts is crucial for comprehending why fixed-income risk managers have increased their demand for Treasury futures. However, this nuanced aspect of pragmatic risk management often becomes a mere footnote in financial statements and hedge accounting conventions. This oversight is concerning because small losses from hedging inefficiencies in individual funds can become systematic if duration drift is widespread in the fixed-income market.

    Effective regulation should prioritize the systemic implications of duration drift. Recognizing and addressing its causes and effects can lead to more robust financial regulation. In such a market structure, focusing on less structural issues, such as hedge funds’ basis trading strategies, would only distract regulators from their financial stability goals.

    Hedge accounting: the art of hiding inefficient risk management

    Just as the derivatives crises of the mid-1990s emerged from inadequate reporting rules, the duration drifts characteristic of contemporary swap markets can pose systemic risks under current bookkeeping standards. The most critical flaw at the heart of swap accounting concerns precisely the short-term, “ineffective” hedges that constitute duration drift. In the past, these small hedges were generally ignored by accountants. While some recent reforms do aim at making hedge ineffectiveness more apparent in financial statements, these measures have not been fully successful.2

    Hedge ineffectiveness due to underhedging the floating-loan cash flow can go unreported as it is not a realized loss yet. Hedge accounting means that gains and losses on exposures and effective hedges of those exposures are recognized in net income in the same periods. As the swap cash flow will not affect or change the cash flow of the underlying asset, and the liquidity issues arising from the inefficient hedge have not materialized yet, the accounting of the swap can conform to the accounting for the hedged item. The swap does not need to do so if the hedged item is not marked to market daily for accounting purposes.

    Similarly, for overhedging, a swap’s fair value is reported on the balance sheet and income statement. But the hedge ineffectiveness implied by the swap’s fair value need not be reported as a separate line item in either statement.

    Accountants may hide the risks, but the risk managers must face them. The challenge lies in finding practical risk management solutions to address the unresolved risks. This is because certain areas of exposure may still need further hedging while others are already over-hedged.3

    Policy implications

    Pragmatic risk management considerations provide a different perspective on Treasury market vulnerabilities than basis trading. In November, our interview with Bank of America strategist Ralph Axel revealed a significant shift in the swap market structure. Rather than being used as a hedge, swaps are now largely used to provide synthetic short-term funding. Whereas portfolio managers used to constitute the biggest clients of swap dealers, a new wave of clients, especially alternative investment funds such as open-ended funds, is turning to swap markets for access to funding.4

    The shift in swaps from hedging to funding has already started to show cracks in the system. Historically, traditional bond funds used swaps for hedging purposes, while alternative investment funds relied on the repo market for funding. As the efficiency of the repo market declined, alternative asset managers began turning to synthetic and indirect funding instruments such as swaps. This shift has disrupted the market structure for more traditional funds, introducing inefficiencies and distortions in their hedging strategies.

    Figure 3: Mapping Funds

    This shift in the function of the swap has impacted swap durations. A swap used for funding applies the interest rate swap to a portion of the debt rather than the entire amount. This is known as a partial hedge and involves very short durations. As a result, swap markets at the longer maturities popular among fixed-income risk managers are no longer liquid, and portfolio managers cannot enter the exact contract they seek. Instead, the contracts offered in the swap market are either just below or just above the desired length, in line with the needs of closed- and open-ended alternative funds.

    Bond fund managers are adapting to changes in the swap market structure by turning to other derivatives, such as futures. This recent shift has caused price pressure on these derivatives. Additionally, since these strategies are classified as partial hedges, they remain unreported under hedge accounting conventions. This allows managers to avoid showing unnecessary volatilities in profit or loss due to changes in the hedging instrument’s fair value. However, from a financial stability perspective, this accounting convention creates an information gap and hides risks in financial markets, such as those caused by duration drift.

    Measuring the extent of duration drift can effectively estimate hidden vulnerabilities in the Treasury market. It can also explain the additional price pressure on futures contracts and the motivation behind basis trading. Without capable valuation models and accounting conventions to capture these risks, regulators should introduce new tools that can display and estimate duration drift. Such tools would be more effective in stabilizing the Treasury market than imposing restrictions on private investment funds.

    This is especially critical given the new wave of regulatory pressure. For instance, the Securities and Exchange Commission (SEC) has recently implemented new rules requiring most US Treasury transactions to be cleared by the end of 2025, even though the Fixed Income Clearing Corporation (FICC) is currently the sole clearinghouse for US Treasury securities and repos.

    Restructuring the Treasuries market to rely on one central counterparty for handling the entire market could lead to significant systemic risks beyond the usual concentration risks associated with CCPs. While concentration risk may be manageable if the underlying risks are well understood, unresolved fixed-income risk management issues and duration drift could create a blind spot, exacerbating potential concentration risk in CCPs. This presents concerns about not only the concentration of known risks but also hidden risks like duration drift in CCPs that may not be apparent to regulators.

    Risk managers of large fixed-income funds, such as bond and pension funds, closely monitor duration drift and its potential impact. Regulators should learn from these practices to develop a tool that measures the extent of duration drift system-wide as a new indicator of systemic risks in the fixed-income market. By developing and implementing tools to measure duration drift, regulators can gain better insights into the market’s underlying risks and proactively address them. This approach will help ensure a more stable and resilient financial system capable of effectively managing known and hidden risks.

  3. Border Traffic

    Ecuador’s prominence in the transnational network of organized crime is a relatively recent phenomenon. Although the country has supplied chemical inputs for cocaine production in Colombia since the 1990s, it long saw few spikes in violence or power struggles between criminal organizations fighting over control of drug trafficking routes. In early 2024, however, Ecuador captured global attention when an organized criminal group (OCG) took over the national TV channel TC Television, holding staff hostage in Guayaquil. Violence had escalated dramatically the year prior, leading Ecuador to be classified as the most violent country in Latin America.

    How did Ecuador transform into a battleground between local criminal groups? The most compelling explanation examines Ecuador’s strategic role in the logistics chain of drug trafficking. In just a few years, the Andean country has become a highway for the transportation of cocaine to the United States and Europe, making it a crucial zone for transnational organized crime.1

    The rise of cocaine trafficking in Ecuador can be attributed to the influence of international OCGs, disputes among local OCGs, and the importance of the logistics and value chains of the globally lucrative cocaine business. The risk level of operations, the distance between production centers, and the various intermediaries involved in transporting cocaine to the global North influence both the profitability of cocaine production and the resultant violence within Ecuador.

    How does a kilogram of cocaine with an initial production value of $1,500 in Colombia come to fetch $20,000 in the United States? Confronted with control strategies implemented by Colombia and the United States, criminal groups have been displaced toward the border regions, with an increase in illicit coca leaf cultivation and cocaine production in Ecuadorian territory. The Ecuador-Colombia border has become a global epicenter of drug trafficking, with cocaine production generating approximately $300 million in revenue for Ecuador in 2019, in addition to the revenue from drug trafficking logistics, estimated at around $150 million in 2022 for the country’s OCGs.
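
    A back-of-envelope calculation using the figures cited in this piece (the $500-$1,000 per kilogram logistics margin appears below) suggests how little of that final price stays in Ecuador:

    ```python
    # Rough decomposition of the cocaine price chain from figures in the text.
    production_value = 1_500          # $/kg in Colombia
    us_price         = 20_000         # $/kg in the United States
    logistics_margin = (500, 1_000)   # $/kg profit to Ecuadorian OCGs (cited below)

    print(f"overall markup along the chain: {us_price / production_value:.1f}x")
    for margin in logistics_margin:
        print(f"Ecuadorian logistics margin ${margin}/kg = "
              f"{margin / us_price:.1%} of the final US price")
    # A roughly 13x markup overall, of which Ecuadorian transport networks
    # capture only about 2.5-5 percent; most value accrues downstream.
    ```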

    While these numbers are relatively modest compared to the Colombian context, the flow of approximately 500 tons of cocaine per year to Ecuador—alongside still incipient but rising cocaine production—has incited conflicts over territorial control and institutional corruption. This scenario of war and criminal anarchy has left over 15,000 dead in three years. In response, Ecuadorian President Daniel Noboa has embarked on a desperate mission—declaring an “internal armed conflict” to combat criminal groups. Twenty-four years after the launch of Plan Colombia, Ecuador is following a similar path, with the government aiming to increase the military’s arsenal of weaponry in the state’s war on drugs. 

    Structural changes

    To understand Ecuador’s rise in the drug trafficking value chain, we must look to two crucial moments in history: 2000 and 2016. The former marked the start of Plan Colombia, the Colombian government’s main strategy to combat drug trafficking. Supported financially and militarily by the US, the policy strengthened Colombia’s military and police in order to destroy illicit crops and target cocaine production. As part of the US “war on drugs,” the policy also increased maritime interdiction through the Southern Command in the Caribbean Sea to intercept cocaine transport. By 2016, upwards of $141 billion had been invested in Plan Colombia, with $10 billion provided directly by the US.

    The strategy led to the borderization of illicit crops, which spurred the creation of new border networks supplying chemical precursors and consolidated logistics networks for transporting cocaine to Central America and the United States. Although cocaine flows through the Caribbean fell, drug trafficking organizations shifted their logistics networks toward the Pacific Ocean, and Mexican criminal organizations assumed a more prominent leadership role in the drug trafficking business. Thanks to the “balloon effect” generated by maritime controls in the Caribbean, the Sinaloa Cartel saw a lucrative opportunity to establish a cocaine trafficking logistics network through the Pacific. In 2006, the International Narcotics Control Board (INCB) warned that the Pacific route had become more relevant to the drug trade, with Ecuador’s Galápagos Islands serving as a landing point where vessels carrying cocaine to Central America and Mexico could evade maritime controls and refuel.2

    As a result, the Sinaloa Cartel strengthened its presence in Ecuador. The organization had operated in the Andean country since 2003, initially through emissaries whose objective was to coordinate the transportation of cocaine from production enclaves in Colombia to Central America and Mexico. Since then, the organization has established a complex logistics network in Ecuador responsible for mobilizing cocaine from the Colombia-Ecuador border to international markets, involving former Ecuadorian military personnel and a network of businesses and asset laundering facilitated through links and intermediaries of the Revolutionary Armed Forces of Colombia (FARC).

    Faced with changes in drug trafficking routes in the region, the Sinaloa Cartel’s Ecuadorian subsidiary networks sought to embed themselves in cocaine trafficking between 2010 and 2016. During this period, Ecuador’s OCGs perfected their logistics for moving cocaine through the Pacific Ocean, centering their operations on the transportation and storage of cocaine. The Galápagos Islands, along with other supply and transfer points such as Cocos Island in Costa Rica, became strategic locations for storing cocaine and transferring it to loaded boats in open waters.

    However, the balance of power of organized crime in Ecuador shifted in 2016, following the Colombia-FARC peace accords. Cocaine production began an aggressive process of decentralization that particularly affected the Ecuador-Colombia border region. The departure of the FARC opened space for the entry of new actors such as Albanian, Italian, and Mexican mafias, who experimented with new techniques and supply chains involving Ecuador.3

    Figure 1: Border between Ecuador and Colombia

    Decentralization coincided with the arrest and extradition of “El Chapo” Guzmán, the leader of the Sinaloa Cartel, to the United States, intensifying organized crime in Ecuador. The extradition resulted in a loss of power and legitimacy for Sinaloa and created an opening for the Jalisco New Generation Cartel (CJNG). The CJNG, which began operations in Ecuador in 2018, aimed to move shipments acquired from Colombian FARC dissidents through Ecuador to Central America and Mexico. To achieve this, the organization hired Ecuadorian logistical networks for cocaine transportation. One of the CJNG’s objectives in Ecuador was to weaken the country’s most powerful criminal organization, “Los Choneros,” which had been in a strategic alliance with Sinaloa since 2003. This alliance had granted the Choneros significant power and a criminal monopoly in the country, but with the weakening of Sinaloa, the CJNG began to finance their rivals. The CJNG quickly caught the interest of the Choneros, given the structural changes underway in the cocaine business in southern Colombia.4 The relationship between the cocaine economy and the power structure of the business led to the fragmentation of the Choneros by the end of 2019. What resulted was fierce competition among local criminal networks to control cocaine transportation.

    Soon a new criminal structure, the “New Generation Alliance,” emerged, comprising four criminal groups (Tiguerones, Lobos, Chone Killers, and Lagartos) and financed by the CJNG. On the other side remained groups loyal to the Choneros, led by the organization’s new leader, “Fito,” and supported by Sinaloa. Within this landscape, Balkan mafias have surfaced as a source of financing for the highest bidder, generating even more aggressive competition among local criminal networks.

    These changes in the local criminal structure, alongside international participation, have produced a kind of criminal anarchy in Ecuador. Alliances and disputes wax and wane amid bids to control drug trafficking routes and storage centers in the country. By the end of 2023, Ecuador’s high homicide rate led it to be classified as the most violent nation in Latin America.5

    The global cocaine chain

    The economy of drug trafficking has undergone significant structural changes in recent decades. The managerial “cartel” model, in which a few mafia structures control the production, trafficking, and sale of illicit drugs in consumer markets, no longer prevails. In line with economic globalization, criminal groups have adapted through a strategy of innovation, decentralizing and specializing the various activities of this lucrative illegal industry.

    This evolution has transformed the economy of drug trafficking from a model dominated by large drug cartels to one where numerous criminal actors participate and specialize in each link of the chain. In the context of globalization, these groups have adopted practices similar to those in formal international trade, implementing a systemic value chain process aimed at reducing risks, increasing profits, and leveraging the specialization of each criminal organization operating in different territories.

    In recent years, armed groups and OCGs from Colombia have specialized in cocaine production, while Mexican OCGs have specialized in logistical chains that transport various illicit drugs to sales networks for consumers. The groups dedicated to logistical activities constitute the most important and profitable link in the value chain, due to their ability to influence the selling price in the consumer market. Ecuador is situated within this dynamic of value chains.

    Ecuador was historically considered a country with low levels of violence, a “peaceful island” compared to the serious security conflicts in Peru—where the coca leaf plays a significant role in the economy—and Colombia, which hosts persistent armed conflict. After 2019, the situation changed abruptly, with Ecuador’s proximity to the growth of illicit crops and cocaine production enclaves contributing to the rise in violence. According to the United Nations Office on Drugs and Crime (UNODC), between 2015 and 2019 there was an alarming 76 percent increase in coca leaf crops in Colombia, rising from 96,000 to 169,000 hectares. 6 This increase was exacerbated by the FARC peace accords, which precipitated the rise of illicit crops in border areas. In 2016, 30 percent of crops in the border departments of Nariño and Putumayo were within 20 kilometers of the border, as depicted in the following image:

    Figure 2: Concentration per hectare of illicit crops on the southern border of Colombia

    Source: Rivera Rhon and Bravo Grijalva (2020)

    By 2022, according to data provided by UNODC (2023), 47 percent of Colombia’s total cocaine production was concentrated in border departments with Ecuador. Furthermore, 50 percent of these crops were within a 10 kilometer radius of the border. These figures demonstrate a process of expansion or “borderization” of cocaine production centers into Ecuadorian territory, where criminal actors take advantage of the institutional weakness of states to exert control over borders. This is compounded by the state’s historical neglect of these regions—affected communities have few socioeconomic opportunities, making it easier to recruit cheap labor dedicated to the maintenance and harvesting of these crops.

    Productive enclaves of cocaine in Ecuador

    The cultivation and harvesting of the coca leaf shapes the nature of organized crime, given its status as a high-value, “plunderable” good whose success or failure directly influences the global cocaine market. On the southern border of Colombia, the departments of Nariño and Putumayo, bordering Ecuador, reported an increase from around 20,000 hectares of coca leaf in 2010 to over 100,000 in 2022. This implies that by 2022, approximately 800 tons of cocaine destined for international markets were produced in Colombian territory.

    Although Ecuador was previously considered a country free of illicit crops, a 2020 study using satellite images identified 154 plots, representing approximately 700 hectares of illicit coca leaf cultivation at that time, in the provinces of Esmeraldas, Carchi, and Sucumbíos.7 Figure 3 shows the presence of illicit crops on the Colombia-Ecuador border, depicted on the left side.

    Figure 3: Illicit coca leaf crops in Esmeraldas-Ecuador (2018)

    Source: Rivera Rhon and Bravo Grijalva (2020)

    Taking into account market costs and the production yield of each hectare of fresh coca leaf, approximately 4,830 kilograms of cocaine could have been produced per harvest in the provinces of Esmeraldas, Carchi, and Sucumbíos. This is concerning considering that coca leaf crops can yield approximately eight harvests per year. Thus, using production on the Ecuadorian northern border as a reference, approximately 38,640 kilograms of cocaine hydrochloride were produced in 2019 alone, representing revenues of over $300 million for Ecuadorian OCGs.
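
    The arithmetic, restated from the figures above (the implied per-kilogram price is our inference from the stated revenue):

    ```python
    # Back-of-envelope on coca production along Ecuador's northern border.
    kg_per_harvest  = 4_830   # estimated cocaine output per harvest
    harvests_per_yr = 8       # harvests a coca plot can yield annually

    annual_kg = kg_per_harvest * harvests_per_yr
    print(f"annual output: {annual_kg:,} kg")                  # 38,640 kg
    print(f"implied price at >$300mn revenue: ${300e6 / annual_kg:,.0f}/kg")
    # ~38.6 tons per year; the revenue figure implies roughly $7,800/kg,
    # well above the $1,500/kg production value cited for Colombia.
    ```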

    Logistics

    Situated between the world’s two main cocaine producers and using the US dollar, Ecuador has become a key point for cocaine trafficking worldwide. To transport cocaine from “productive enclaves” to Ecuadorian ports, OCGs use the extensive road network of the Amazon, the coast, and the Andean mountains, facilitating the movement of hundreds of kilograms of cocaine from the border to maritime ports in less than twelve hours. Additionally, they take advantage of international border crossings and approximately fifty informal crossings between Ecuador and Colombia, thus facilitating the transportation of substances and the supply of inputs for cocaine production.

    Figure 4: Cocaine trafficking routes in Ecuador.

    Source: Ecuador Anti-Narcotics Police (2023)

    Intelligence reports from the Ecuadorian Anti-Narcotics Police estimate that between 70 and 80 percent of the cocaine produced in the southern departments of Colombia enters through the northern Ecuadorian border.8 By this production-to-trafficking relationship, approximately 571 tons of cocaine hydrochloride entered Ecuador in 2022, destined for Europe and the United States. Of this, only 32 percent was seized by Ecuadorian control authorities, meaning that 68 percent generated economic profits for Ecuadorian criminal networks.

    According to data obtained by the Ecuadorian National Police, each kilogram of cocaine yields criminal organizations a profit of roughly $500 to $1,000. This represents revenues of over $150 million in Ecuador in 2022. These figures do not include payments to third parties, bribes, or fees to Mexican or Albanian OCGs.
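
    Multiplying these figures through confirms the order of magnitude (the calculation is ours, from the numbers above):

    ```python
    # Logistics revenue implied by the 2022 trafficking figures in the text.
    tons_entering = 571              # cocaine entering Ecuador in 2022
    seized_share  = 0.32             # share retained by Ecuadorian authorities
    profit_per_kg = (500, 1_000)     # OCG profit range, $/kg

    trafficked_kg = tons_entering * 1_000 * (1 - seized_share)  # ~388,000 kg
    for profit in profit_per_kg:
        print(f"at ${profit}/kg: ${trafficked_kg * profit / 1e6:,.0f}mn")
    # Roughly $194-388mn, consistent with the "over $150 million" cited,
    # before bribes and fees to Mexican or Albanian OCGs.
    ```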

    The absence of controls, the profitability, and the concentration of export ports make Guayaquil Ecuador’s most important city for cocaine trafficking to international markets. Amid conflicts between OCGs, 35 percent of the homicides nationwide have been concentrated in this port city.9

    The profitability of, and interest in, logistics prompted a substantial increase in violence in Ecuador. The country’s homicide rate increased from 5.7 per 100,000 inhabitants in 2017 to 45.6 in 2023, and 80 percent of violence in Ecuador is the product of disputes between those with criminal backgrounds. In a context of criminal fragmentation, homicide rates are significantly higher in the strategic corridors of drug circulation, and changes in the homicide rate in these corridors correspond to rivalries between drug trafficking organizations.

    Figure 5: Homicides in Ecuador

    Source: Ministry of the Interior (2023)

    Distribution and consumption in international markets

    The Ecuador-Colombia border and the Mexico-United States border host the highest homicide rates in the world, “precisely because these points of entry are scarce, drug traffickers are prepared to fight tooth and nail to control them.”10 Cases of tunnel construction and the Sinaloa Cartel’s expansion into Chicago and other US cities illustrate the need for OCGs to control large cocaine warehouses prior to commercialization.

    Once large packages reach distribution networks, kilograms of cocaine are divided into small portions to be sold in grams. The consumer constitutes the last link to complete the drug trafficking value chain and, at the same time, plays the most representative role as the economic catalyst of this phenomenon. This business is paid in cash, circulating its value back to large traffickers and producers. 

    According to the Global Cocaine Report (2019), a gram of cocaine in a producer country like Colombia is worth less than $5, while it exceeds $215 in countries like Australia. Even the margins of those who sell the substance at retail (micro-trafficking) tend to be higher, since they usually cut cocaine with cornstarch, talcum powder, lime, or flour to increase their profits.11 The US accounts for 2.5 percent of global cocaine consumption; consumption and price determine the main destinations of drugs produced in the Andean region.12
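
    A stylized calculation suggests why the retail link is so profitable. The gram prices come from the text; the cutting ratio and US street price are hypothetical illustrations:

    ```python
    # Retail markups in the cocaine chain: producer vs. consumer gram prices,
    # and the effect of cutting with adulterants.
    producer_gram, australia_gram = 5, 215    # $/gram, from the text
    print(f"producer-to-Australia markup: {australia_gram / producer_gram:.0f}x")

    kilo_cost   = 20_000   # $/kg US wholesale, from earlier in the piece
    cut_factor  = 2        # hypothetical: cut 1:1 with adulterants
    street_gram = 60       # hypothetical US street price, $/gram

    revenue = 1_000 * cut_factor * street_gram
    print(f"1 kg cut and sold at ${street_gram}/g: ${revenue:,} "
          f"on a ${kilo_cost:,} wholesale cost")
    # Cutting doubles the sellable grams, so retail margins can exceed
    # those of any other link in the chain.
    ```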

    The Covid-19 pandemic saw the overproduction of cocaine, leading to an increase in cocaine consumption across the world. Europe and North America remained the largest consumers. According to the Global Cocaine Report (2023), “the estimated number of users worldwide has grown steadily over the past 15 years, driven in part by rising population levels but also by a gradual long-term increase in prevalence.” 13

    As in any market governed by supply and demand, productive enclaves read their market and tailor production to consumer preferences, which increasingly demand cocaine at lower prices and higher purity, with a multitude of new actors willing to supply it by any means necessary.

    Ecuador’s path forward

    With a dramatic four-year rise in violence, Ecuador shows that the absence of criminal activity in the past does not exempt a country from the influence of drug trafficking. The scenario prompts a profound reevaluation of traditionally reactive security policies in Latin America, which center on substance interdiction and short-term measures. This approach has neglected fundamental aspects of prevention, such as border control, social cohesion, and, above all, the strengthening of the financial intelligence units responsible for monitoring activities linked to money laundering and the growth of illicit economies.

    The Ecuadorian case exemplifies how the drug trafficking economy has been shaped by internal and external factors of the value chain. Smaller groups enter conflicts in order to control logistics at the border regions, connecting production to trafficking, leading to the significant increase in violence rates seen after 2019. Although the transnational nature of drug trafficking is rhetorically recognized as a problem requiring greater coordination between hemispheric prosecutors and police institutions, in practice, there remains a high degree of distrust among states. This results in isolated actions employing a militarized strategy to promote interdiction and deterrence. Over two decades of militarization through the War on Drugs in Colombia and Mexico have not achieved the desired results. On the contrary, the militarized approach has contributed to the increase in the value and purity of cocaine in consumer markets.

    Crop eradication policies have led transnational OCGs to specialize and adapt their business model through alliances and value chains that concentrate activity in fragile institutional environments, such as border areas. At the Ecuador-Colombia border, strategies for interdicting productive enclaves have led to the “borderization” of drug trafficking, generating greater coordination between local and international criminal groups, which has reduced their risks while enabling them to co-opt a growing number of the officials charged with controlling them.

    Under President Daniel Noboa, Ecuador is pursuing a militarization and policing strategy that is destined to fail, relying on the same reactive approach while seeking little regional cooperation. Despite Noboa’s highly publicized efforts to wage “war” on criminal actors, homicide rates have yet to see a dramatic decrease. Only cooperative approaches that understand the value chain will be able to address both the causes and consequences of drug trafficking. Such coordination requires strong regional political alliances, which may be difficult to forge given the ideological fragmentation on the continent. That fragmentation can in part be attributed to the dominant influence of the US and to divergent positions towards the War on Drugs across the region.

  4. The Productivity Gap

    Comments Off on The Productivity Gap

    Standard development economics anticipates that the composition of a country’s labor force will go hand in hand with the composition of output, reflected in the division between the primary, secondary, and services sectors. But contemporary India, like several other countries across the global South, is challenging this expectation: changes in the sectoral composition of output have preceded changes in the composition of the labor market.

    The gap bears political and economic implications: grossly inadequate industrial job creation surfaces as an unmanageably vast informal economy existing side by side with the organized economy. The informal economy shows no clear sign of declining in importance with industrialization, and it overlaps with the same agricultural, industrial, and service divisions as the formal sector.

    Today, about 90 percent of the Indian labor force is engaged in the informal economy, producing about 46 percent of output. This puts labor productivity in the informal sector just above half the national average. The remaining roughly 55 percent of GDP is produced by only about 10 percent of the labor force in the formal sector, at 5.5 times the national average labor productivity. Only about 5 percent of the labor force is employed in large businesses composed of public and private corporate industries and services, including public utilities, where labor productivity gaps are much higher.
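
    The arithmetic behind these ratios can be made explicit; a minimal sketch using the shares quoted above (which, as cited, sum only approximately to one):

    ```python
    # Relative labor productivity = a sector's output share / its labor share.
    informal_labor, informal_output = 0.90, 0.46
    formal_labor, formal_output = 0.10, 0.55  # shares as cited in the text

    informal_rel = informal_output / informal_labor  # ~0.51x national average
    formal_rel = formal_output / formal_labor        # 5.5x national average

    print(f"informal sector: {informal_rel:.2f}x national average productivity")
    print(f"formal sector:   {formal_rel:.1f}x national average productivity")
    print(f"formal-to-informal gap: {formal_rel / informal_rel:.0f}x")  # ~11x
    ```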

    This enormous difference in labor productivity is a double-edged sword. It is the potential source of a large increase in output if, as Arthur Lewis predicted, there is a shift from small agriculture to large, organized industry. Both Lewis and Kuznets anticipated that transitioning from the informal to the formal sector would generate enormous productivity gains, the main source of economic growth. 

    But the productivity gap can also harm employment, as any given level of output can be produced with less labor. About 11 million people currently enter the labor force every year, joining at least 20 million unemployed carried over from previous years. Assuming an average 7.5 percent unemployment rate, these figures yield 4.1 percent annual growth in unemployment in a population that is growing at approximately 1.9 percent.
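
    The essay does not spell out this calculation; the sketch below is one reading that reproduces the cited 4.1 percent figure from the numbers given above, not the author’s own method.

    ```python
    # If 20 million unemployed correspond to a 7.5 percent unemployment rate,
    # the implied labor force is 20 / 0.075, roughly 267 million. Eleven million
    # new entrants then expand the labor force by about 4.1 percent a year; at a
    # constant unemployment rate, the unemployed stock grows at the same pace.
    unemployed = 20e6
    unemployment_rate = 0.075
    entrants = 11e6

    labor_force = unemployed / unemployment_rate
    growth = entrants / labor_force

    print(f"implied labor force: {labor_force / 1e6:.0f} million")  # ~267 million
    print(f"annual growth in unemployment: {growth:.1%}")           # ~4.1%
    ```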

    Classical development economists overlooked this problem. They failed to consider that the formal sector may face constraints to market size as a result of insufficient effective demand—which makes it unprofitable for organized industry to expand production beyond what the market will take—and be unable to accommodate most of the displaced workers from agriculture. This tendency was predicted by none other than Adam Smith, who observed that the extent of division of labor is limited by the size of the market. 

    To stabilize the worsening unemployment crisis, Indian employment has to grow at a minimum of 6 percent per annum. Unfortunately, in the forthcoming Indian elections, job creation has not been on the table for Prime Minister Narendra Modi’s Bharatiya Janata Party (BJP). Instead, the party in power reflects the conventional development thinking that faster industrialization will do the job. In what follows, I challenge the expectations of classical development economics and reflect on policy paths out of this predicament.

    Kuznets, Lewis, and Schumpeter

    Three prominent economic theorists form the backbone of classical development economics: Joseph Schumpeter’s understanding of economic evolution as “a distinct process generated by the economic system itself” anticipates that elements endogenous to capitalism will force innovation and economic development through creative destruction.1 “Frictional unemployment” is held to be an integral part of the general process of creative destruction—with employment rising and falling according to the needs of new, innovative industries.

    In a similar vein of optimism, the highly contested Kuznets Curve holds that income inequality follows an inverted U shape as per capita income rises: it climbs during early industrialization and then declines as more and more workers shift away from agriculture and into industry. For developing countries, the theory indicates that the best course of action is simply to wait until employment adapts to rising productivity.

    Finally, the Lewis model anticipates that industrial development will progressively draw on an “abundance of labor” from agricultural production, resulting in a long period of “labor transfer.” During this period, wages are expected to be slightly above subsistence—enough to attract rural workers. Wages are expected to rise when the “turning point” of industrialization is reached with all surplus labor absorbed in industry and excess supply of labor eliminated. 

    These classical theories of development tell an optimistic story but suffer from two key weaknesses. First, all three theories are concerned with development in the “long run”—a time frame which is nearly meaningless in social and political terms, particularly in democracies with elections at regular intervals. Second, the theories neglect the possibility that inadequate effective demand would constrain the size of the industrial sector. It is the “elephant in the room” that conventional development theory ignores. 

    Corporate industrialization in the short run

    In recent decades, the Indian economy has exhibited continually growing GDP alongside growing unemployment. This “jobless growth” is characterized by rising output in organized industries combined with overcrowding and declining livelihoods in the unorganized sector.2 While the winners of this development benefit from higher productivity, the losers are left uncompensated, eking out an existence in the unorganized sector. The dispossession of those displaced from small-scale agriculture by land acquisition is invisible in this growth model; such conditions drove the farmers’ protests in 2021.

    The cost of industrialization is borne by the most vulnerable sections of society. High prices, rising costs of living, and heavy indirect taxes are the general symptoms, but the worst effects are hidden from the public eye. In India, the Adivasis, roughly 8 percent of the population, account for 40 percent of the displaced population, meaning they are five times more likely to be displaced in the name of development. In the absence of reliable data on displacement and caste, we can only note that the incidence of land acquisition falls disproportionately hard on small rather than large holders. Around half of Scheduled Caste households had holdings of less than 0.4 hectares; their average holding of 0.52 hectares was about half the average holding of other households (1.05 hectares), and only 6 percent of them had more than 2 hectares of land.3 Acquisition of land on a disproportionately large scale in the name of economic development deepens economic inequality and exacerbates social divisions in the countryside.

    Almost ironically, the higher the labor productivity gap between organized industry and traditional agriculture, the more severe the unemployment for any given market size. Our economic analysis must be relevant to the ever-present short run. Insufficient effective demand is not a problem that can be done away with by appealing to international trade and investment. The notion that foreign trade might relax demand constraints only works given a trade surplus. But India has faced a significant and persistent trade deficit, which stood at $78 billion in 2023, down from $121.62 billion the year before. As a result, the foreign trade multiplier works in reverse, subtracting from rather than adding to the size of the domestic market.
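
    The mechanics of that claim follow from the textbook Keynesian open-economy multiplier; below is a minimal sketch with illustrative parameters (the propensities are assumptions, not estimates for India, and this is a standard formalization rather than the author’s own model).

    ```python
    # Open economy: Y = C + I + (X - M), with C = (1 - s) * Y and M = m * Y.
    # Solving gives Y = (I + X) / (s + m): income, and with it the domestic
    # market, moves with exports by the multiplier 1 / (s + m).
    s, m = 0.2, 0.25   # assumed propensities to save and to import
    I = 100.0          # domestic investment, arbitrary units

    for X in (50.0, 40.0):   # exports fall by 10 units
        Y = (I + X) / (s + m)
        print(f"X = {X:.0f} -> Y = {Y:.1f}, trade balance = {X - m * Y:+.1f}")

    # A 10-unit export loss shrinks Y by 10 / (s + m), about 22 units: with a
    # persistent deficit, the multiplier subtracts from the size of the market.
    ```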

    Attempts to enhance international competitiveness and reduce the trade deficit through greater mechanization and robotization have only increased the productivity gap between large and small businesses. Given the high import intensity of mechanization, the additional imports often exceed the additional exports at the margin. This contributes to a persistent trade deficit without boosting India’s international competitiveness. Instead, mechanization has encouraged further labor-saving production processes, only worsening the unemployment problem.

    Rising “portfolio investment” intended to compensate for the declining demand generated by low wages and flexible labor markets expands neither the domestic market size nor the productive capacity of the economy.4 It does, however, influence the ownership pattern of stocks. The upper echelons of Indian society have been pleased with their luxurious consumption baskets of partially or wholly imported goods, sustained only by international capital inflows and by the higher incomes accruing to them from a buoyant stock market. “Portfolio investment” has not only eased the foreign exchange constraint on elite consumption; it has also helped widen the functional income gap between income from property and income from work.

    Growing inequality is justified as an incentive for private investment by large businesses. Heavily subsidized land prices and related natural resources for corporations encourage dispossession. Tax breaks, cultivated blindness to willful defaults on bank loans and frauds committed by favored corporations, and generous contributions to the BJP’s election funds together present a picture of comfortable mutualism between the government and big business while unemployment soars.

    Large-scale land acquisition for industrialization adds to the pressure of unemployment with no solution in sight. As repeatedly pointed out in the many reports of the Comptroller and Auditor General of India, speculative land ownership by corporations, alongside land banks owned by the government, has left a large percentage of acquired land idle, unused for industrialization, while exacerbating unemployment.

    Doing away with classical development theories opens the way for pro-poor intervention and for the state to cultivate a fiscal mechanism to compensate the losers of development processes. Resolving the crisis of unemployment poses an opportunity to resist the class-based discrimination enacted by large corporations in collaboration with the state.5

    Policy solutions

    Instead of increasing overall labor productivity in the economy by trying to transfer labor from low-productivity small agriculture to high-productivity organized industry, our focus should be on reducing the gap between the two, namely by raising the productivity of small agriculture. We must focus on raising the productivity of land, not labor: infrastructure development, including road communication and connectivity, should center on the countryside rather than aspiring towards world-class connectivity among cities amid a sea of destitution. Increasing land productivity usually improves the income not only of the landowner but of the larger community. Increasing local purchasing power creates a local market and weakens the demand constraint faced by organized industry.

    Land productivity ought to be defined inclusively, taking into account all types of agricultural produce, including subsistence and commercial crops, fishing, and animal husbandry. Forests, rivers, water bodies, medicinal plants, and marine products of the commons should be demarcated as public rather than private goods for raising land productivity. The relative price of crops is a particularly important policy instrument, and reaching such an inclusive optimal cropping pattern should entail decreasing rather than increasing dependence on inputs purchased from markets. Less dependence on market-based inputs would lower the cost of cultivation and ease the recurring problem of debts that disproportionately harm small farmers.

    A minimum support price system for agricultural produce—a central issue in the farmers’ massive mobilization in 2021—together with insurance against crop failures is an imperative of our time, both as income support and as price incentive. Under measures of collective water, land, and forest management aided by the Panchayat government, regular droughts, floods, and crop failures could inflict less distress. The village-level Panchayat government should also manage the use of all commons and extend employment guarantee schemes towards creating public assets and improving the commons.

    A main source of the increase in land productivity would be better use of unemployed and underemployed labor. Labor mobilization on a massive scale for development of the countryside was one of the most spectacularly successful features of post-revolution China and Vietnam. In the Indian case, the Rural Employment Guarantee of the early 2000s marked a step in the right direction, but newer iterations must partner with local Panchayats endowed with sufficient fiscal and administrative autonomy. Such a provision already exists in the Indian Constitution, but it is rarely implemented.

    To improve the employment outlook, higher land productivity on small farms must be complemented by rapid expansion in the services sector. While small-scale industries and services could respond to increased demand from higher-earning small farmers, the greatest potential employment growth would come from the expansion of basic welfare services like primary health and education. These are desperately needed public goods, and they can also form a social wage, providing an alternative to current attempts to raise private monetary wages so that individuals can afford basic necessities.

    Finally, land acquisition ought to come with the payment of a minimum annual rent to those dispossessed. This will go a long way towards minimizing the destitution of tenants, agricultural laborers, fishermen in coastal areas, and forest dwellers, among others. Across these development initiatives, we must defend the decentralization of decision-making and local autonomy that have been integral to the Indian federal structure. India’s path to development cannot take the form of over-centralization in the hands of government-supported corporations.

    This piece is adapted from a recently published article in the Journal of Brazilian Economics. 

  5. Underdevelopment and War

    Comments Off on Underdevelopment and War

    In the 1960s and ‘70s, the Colombian national government embarked on an ambitious agrarian reform program to address poverty in the increasingly violent countryside. Under the bipartisan project of the National Front, which alternated power between the Conservative and Liberal parties, these efforts sparked domestic and international debates around the nature of developmentalism in the country, especially since they coincided with a series of economic missions meant to tackle underdevelopment. Such interventions were influenced by international institutions like the World Bank and the United Nations Economic Commission for Latin America and the Caribbean (ECLAC), as well as by discussions around economic growth and capitalism taking place in North America and Europe. Amid regional development debates, the Colombian approach would be determined by entrenched inequality in rural areas and the emergence of armed resistance against the state. 

    Proposals that viewed the national economy through the lens of neocolonialism and global dependency began to circulate in intellectual and policy circles, and Colombia soon became a testing ground for various developmental diagnoses. The 1969 publication of Mario Arrubla’s Studies on Colombian Underdevelopment (Estudios sobre el subdesarrollo colombiano) linked these dependency theory debates with other theories of global Marxism. Arrubla, an economist, led the magazine Estrategia, the original publisher of the essays that made up his 1969 book. At a time when Latin American intellectuals sought to distance themselves from local communist parties and political liberalism—while simultaneously advocating for a socialist revolution—the magazine brought together a group of left-wing intellectuals disillusioned both by the National Front and by international institutions. In the case of the Grupo Estrategia, revolutionary fervor did not translate into armed action. The group exemplified a new intellectual left in Latin America, one that took up the critical renewals underway in Europe and North America to overcome the “sclerosis” of Soviet Marxism and repudiate imperialist tendencies. Today, the contributions of Arrubla and the Grupo Estrategia help explain Colombia’s greatest injustices: the state’s absence in peripheral regions, a powerful oligarchy, high inequality, and persistent violence in rural areas.

    Building the modern state

    The victory of the Liberal Party candidate Alfonso López Pumarejo in 1934 brought an end to a long cycle of conservative hegemony in Colombia. The election resulted in a period of liberal policies and the expansion of social rights. López Pumarejo initially favored forms of national industry that relied on the masses to counterbalance landowners, who themselves opposed the comprador bourgeoisie. He promoted union organization and legalized the right to strike, which had been severely curtailed by previous governments. These pro-labor policies, however, proved too disruptive for his broad liberal coalition, and landowners began to oppose the popular movement. Under the succeeding government of Eduardo Santos Montejo, and even under López Pumarejo’s own government upon returning to power from 1942 to 1945, the Liberal Party abandoned the reforms and ultimately failed to halt a brewing conservative opposition.

    A conservative political shift ended the brief period of cooperation between workers and the industrial bourgeoisie. Land concentration rose in rural areas, and landowners launched a reactionary enterprise that massacred the liberal masses, making Colombia the “top producer of decapitated heads per capita.”1 The “self-defense” of the campesinos—agricultural farm workers—was silenced with “blood and fire” during a historic period that was later known as La Violencia.2 The humanitarian disaster reached enormous proportions, paving the way for a “pacifying” dictatorship that would serve the interests of the liberal bourgeoisie who had lost political power. General Gustavo Rojas Pinilla led a de facto government from June 13, 1953, to May 10, 1957, when a Civil Front—composed of workers, students, and sectors of the traditional political elite—carried out a peaceful coup that sent the general into exile.

    With the end of Rojas Pinilla’s dictatorship in 1957, an entire generation was born into political life. Students and young people pushed for the democratic Civil Front to support a vision of popular democracy grounded in the working class. Some had ties with local communist organizations aligned with international communism, which offered a critical analysis of the new regime’s agrarian policies. A local newspaper titled Crisis advocated for a “democratic agrarian reform” that would end the existing semi-feudal regime by redistributing land to campesinos. As support for the bipartisan National Front grew, Crisis contributors (including Arrubla) championed the development of an independent national economy, which would require investment in productive agricultural forces and a ban on the importation of agricultural products.3 This vision echoed the post-war agrarian reforms carried out in occupied Germany, which sought to tackle famine, transform rural property relations, and promote land redistribution.

    Early critiques

    Prior to the consolidation of the National Front in 1957, studies of the structure of the Colombian economy, neocolonialism, and development had long circulated among intellectual circles. These included conservative historian Luis Ospina Vásquez’s 1955 book Industria y protección en Colombia, 1810-1930, Alejandro López’s 1927 classic Problemas colombianos, and Luis Eduardo Nieto Arteta’s 1942 text Economía y cultura en la historia de Colombia. The latter was a key reference for Marxist economics at a time when the discipline had yet to be fully professionalized in Colombia. Nieto Arteta, a lawyer, was a member of the Marxist Group (1933–1934), which also included figures such as Liberal leader Jorge Eliécer Gaitán and the future rector of the National University Gerardo Molina. As heirs of the Second International, the Marxist Group held a deterministic vision that viewed social change through shifts in the forces of production and wealth generation.4

    Some members of the Marxist Group joined the INCORA (Colombian Institute of Agrarian Reform) or the Ministry of Labor under the National Front government. Beginning in the 1960s, academics such as sociologist Orlando Fals Borda, as well as the sociologist, priest, and guerrilla member Camilo Torres, spearheaded the professionalization of the social sciences at the National University while also participating in state organizations.5 Other contemporaries aligned with the critical sector of the Liberal Party, represented by the Movimiento Revolucionario Liberal (Liberal Revolutionary Movement, MRL), which sought to energize a new modernizing era within the regime, hoping to compensate for the failures of the Liberal Republic. In his 1968 essay “Colombia: Violence and Underdevelopment,” philosopher Francisco Posada Díaz recognized the failure of the bourgeois-democratic revolution promoted by López Pumarejo, pointing to the National Front as an opportunity to recover that lost promise.6 Alongside those in academic circles, members of the National Front also sought to understand the Colombian economy through the lens of dependency theory. From the time the coalition took power, these efforts were influenced by a growing number of international missions in the country, which quickly homed in on the problem of Colombian rural productivity.

    “Operation Colombia”

    During the 1950s, the World Bank, ECLAC, and the “Economy and Humanism” mission led by the Dominican priest Louis Joseph Lebret each arrived in Colombia amid international efforts to study Latin American economies. Despite their distinct approaches, the missions all declared a situation of social emergency: the Colombian countryside had troubling levels of poverty due to unequal land distribution and the restructuring of land use during La Violencia. The country’s agricultural production had little value added: almost 90 percent of the most productive land was dedicated to livestock and remained in the hands of landowners. Campesinos had been displaced to hillside lands and carried out small-scale agricultural work that barely achieved subsistence levels.

    Between 1945 and 1949, living costs increased by 71.6 percent in the city of Medellín and 58.2 percent in Bogotá, causing the currency to lose 41.7 percent and 36.8 percent of its value, respectively—an inflationary scenario without wage growth. As the decade progressed, the indicators did not improve; the Lebret Mission reported a 21.2 percent increase in the cost of living between 1950 and 1954, while the index of real wages in relation to the cost of living decreased by 23.2 percent. The combination of inflation and stagnant wages intensified wealth concentration.7
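
    The currency figures follow directly from the cost-of-living increases: if prices rise by a fraction r, money loses 1 − 1/(1 + r) of its purchasing power. A quick check of the arithmetic:

    ```python
    # Purchasing-power loss implied by each city's cost-of-living increase.
    for city, r in [("Medellin", 0.716), ("Bogota", 0.582)]:
        loss = 1 - 1 / (1 + r)
        print(f"{city}: prices +{r:.1%} -> currency loses {loss:.1%} of its value")
    # Medellin: prices +71.6% -> currency loses 41.7% of its value
    # Bogota:   prices +58.2% -> currency loses 36.8% of its value
    ```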

    After serving as an advisor to Franklin D. Roosevelt’s New Deal program in the US, Canadian economist Lauchlin Currie arrived in Colombia on a World Bank mission in 1949. Pressured by a strong McCarthyism in the US, Currie chose to stay in Colombia beyond the mission and continued to work as a World Bank consultant. By the time he presented his plan for “Operation Colombia” in 1961, he had already been in the country for over a decade and had produced several economic studies of the Colombian countryside. His plan proposed an accelerated form of development for Colombia with rapid technological advancement in rural regions.

    Currie’s plan was one of two developmentalist responses in the public arena, alongside the National Front’s agrarian reform. Currie advocated for expanding agricultural lands, importing machinery, and relocating 1.5 million campesinos to cities, where they would become workers integrated into urban industry. This proposal implied a form of planning that would increase the availability of foreign exchange and the import of capital goods to generate massive housing programs and improve the quality of life. Such an outcome was unlikely, however, as Colombia faced a deficit in the balance of payments due to the global decline in the price of coffee, its primary export.

    Currie would later become one of the most important economists working in Colombia; his ideas influenced academic curricula and policies implemented in the 1970s. But “Operation Colombia” was rejected by President Alberto Lleras Camargo, who supported the agrarian reform proposed by Liberal politician Carlos Lleras Restrepo (then Minister of Government and later President of the Republic from 1966–1970). Lleras Restrepo’s outlook hewed closer to that of ECLAC, with a development strategy attentive to center-periphery relations and dependencies that sought to build a strong state prioritizing planning, industrialization, protectionism, and import substitution.

    In contrast to “Operation Colombia,” the National Front-backed agrarian reform maintained campesino-managed small and medium-sized plots of land, which had thus far been sustained at a low level of agricultural development with slow technical assimilation. The proposal advocated for “Rural Action Units” as a first cooperative phase to organize life in the countryside. In 1961, after significant debate, Congress passed Law 135 to carry out the reform, which by this point had come to more closely resemble plans backed by the US and the Inter-American Development Bank than ECLAC proposals. The INCORA was responsible for implementing the reform over four years.8

    Mario Arrubla and the Grupo Estrategia

    In 1969, amid debates around agrarian reform, Mario Arrubla published Studies on Colombian Underdevelopment, a collection of three essays that offered a critique of both “Operation Colombia” and the National Front’s proposal. At the time of publication, political organizations of the New Left had consolidated, and the student population was increasing significantly. Arrubla’s text was widely disseminated, running to more than thirteen editions, over 60,000 legally published copies, and thousands more bootleg versions.

    The book consists of three essays previously published in Estrategia. The magazine was active between 1962 and 1964, and despite its short tenure, it had a lasting influence on the country’s economic and political debates. Estrategia sought to recreate the Sartrean project of Les Temps modernes, crystallizing a left-wing intellectual milieu in the 1960s. Mario Arrubla and the Grupo Estrategia went further, using theoretical and political arguments to challenge developmentalist solutions for the country.

    Born in 1936 in Medellín, Arrubla was a Colombian left-wing intellectual who, like Currie, attempted to diagnose the Colombian economy in the post-war period. Arrubla identified an unprecedented encounter between high international demand for coffee and rising exportable production, which drove up coffee’s global price. This led to a boom in Colombia’s foreign trade, and in the context of long-standing unequal exchange, Colombia reached a neocolonial stage in which the industrial bourgeoisie established its predominance over the comprador bourgeoisie and other exploitative classes. The industrial bourgeoisie also took advantage of a new customs tariff adopted in 1951 that protected industries producing consumer goods and taxed such products at a low rate. Industry advanced with these shifts, but wealth continued to accrue to those with large (and sometimes idle) landholdings. For Arrubla, this indicated that industrial advancement alone could not lead to meaningful reform. He noted that in Colombia, three dynamics managed to coexist: low social indicators, industrial advancement, and the agrarian problem—a substantial expanse of land that remained unexploited due to land concentration.

    Arrubla’s 1962 essay was one of the first to criticize Lauchlin Currie, arguing that he pursued an unfeasible vision of classical capitalist development in Colombia. He considered the agrarian reform promoted by Lleras Restrepo “more starkly lucid” because, unlike “Operation Colombia,” it did not aim to accelerate the disintegration of the campesinos; instead, it sought to keep campesinos in the countryside by incorporating them into a system of agricultural units that would support their families and stem urbanization. The plan was designed to contain rural poverty, that is, to avoid “an unsustainable social situation” that could take on “revolutionary” overtones in the cities. From the first installment of Estrategia, Arrubla was skeptical of bourgeois political interventions to alter Colombia’s economic structure. He rejected the notion that a progressive industrial sector could drive developmental alternatives, even as a necessary stage within the framework of a national project that considered the needs of the population.

    In the third essay of Studies on Colombian Underdevelopment, Arrubla offered his own analysis of the structure of the Colombian economy. Relying on ECLAC statistics, he argued that the local bourgeoisie controlled the domestic market but failed to promote industrial development. Foreign flows resulted from the export of agricultural goods rather than manufactured goods, and as a result, the trade deficit had grown. With little industrialization, conditions of dependence deepened in this “neocolonial stage.” Arrubla disagreed with ECLAC, rejecting the notion that import substitution would solve this cycle. This neocolonial form of capitalism, he argued, was a “deformation” of classical capitalist development; without the heavy industry that would serve as the primary sector in a classical capitalist economy, neocolonial capitalism appeared as a remarkable creature that “lacked a head.” Given this deficiency, Colombian capitalism would “prematurely age in two or three decades against all the appearances of its vigorous initial impulse,” as the condition of structural neocolonialism would bring about successive trade deficits until the economy entered a stage of chronic crisis and paralysis. Such crises would invite greater imperialist penetration, as falling exports would lead to increased foreign investment and intensified dependence. Arrubla looked back on the early twentieth century, noting the failure of the neocolonial bourgeoisie to offset losses in import capacity: debts continued to exceed investments, and exports still trailed imports by the end of the 1930s. In Arrubla’s view, the developmentalist path in Colombia remained unfeasible.

    Arrubla and the Grupo Estrategia engaged closely with the popular reinterpretations of Marxism and Marxist critique circulating around Latin America in the 1950s and ’60s, such as Paul Sweezy’s The Theory of Capitalist Development and Paul Baran’s The Political Economy of Growth. By the late 1950s, the Monthly Review—founded in 1949 by Leo Huberman and Sweezy with the backing of Francis Otto Matthiessen—had become a point of reference for the international New Left in the field of Marxist economic history and Keynesian theory.

    Baran and Sweezy produced a fundamental corpus for the development of Marxist theories of dependency, showing that the exchange of capital with the underdeveloped world was governed by imperialist logics that generated a situation of dependence. The bourgeoisie of dependent countries—termed comprador or lumpen as heirs of colonialism—and ongoing militarism were important pillars of this unequal relationship. This perspective “revealed the contradictions of any transformation of economic regimes in the underdeveloped world within the limits of capitalism.” During the 1950s, Baran’s texts were pivotal in understanding third world revolutions, as they articulated how the “morphology of backwardness” originated in processes of colonialism and imperialism. Furthermore, they identified two contradictory movements that marked the development efforts of dependent countries: a progressive movement marked by a process of creative destruction of productive forces, and a regressive movement that preserved existing archaic systems of labor. For Baran, overcoming backwardness required serious—ultimately socialist—planning that would curb unproductive consumption and allow a productive use of surplus.9

    Studies on Colombian Underdevelopment translated the ideas of Baran and Sweezy into the local context. In the book’s second essay, Arrubla describes Colombia as part of a region unevenly linked to the great metropolises of the global economy. He establishes three stages: a first, colonial stage, lasting until the 1930s, in which Latin American countries advanced through semi-colonial development processes; a second stage, in which the conjuncture of the great imperialist crisis and the rise of import substitution created the conditions for a “new creature,” a form of Latin American industry that propelled Latin American capitalisms; and a third stage, in which some countries transitioned from being semi-colonies to neo-colonies.

    In this classification scheme, an “underdeveloped” or “dependent” country was structurally hindered in the global economy: “one member of the team specializes in starving to death while the other bears the ‘white man’s burden’ and collects the profits.”10 Arrubla also offered a critique of Baran for what he saw as a weak characterization of the colonies vis-à-vis the growing imperialism of classic capitalist countries. Responding to this absence, Arrubla himself undertook the task of building typologies of colonial and semi-colonial economies, which he referred to as type A and type B. Type A colonies were those in which natural resources, mining, and plantations were exploited by foreigners, such as Venezuela, Bolivia, and Cuba, whose commercial exchange was closer to a classic “plundering” situation. In type B colonies, which included Argentina, Brazil, and Colombia, domestic elites exploited primary (usually agricultural) export products, and this trade generated a constant income and influx of foreign exchange.

    Type A and type B colonies faced distinct political consequences according to their condition. As a result of the exploitation of primary products by foreign investors, Cuba experienced a form of imperialist domination that was simple to identify. In Colombia, on the other hand, the formation of capital by bankers and landowners had established a kind of domination exercised by the national bourgeoisie rather than a foreign entity. In such places, Arrubla argues, “nationalist consciousness” tended to “fall asleep more easily.”

    Political legacies

    Although the initial intention of the Grupo Estrategia was more agitational than militant, and the group refrained from declaring itself the organ of any existing political party, its members did form the Partido de la Revolución Socialista (Party of Socialist Revolution, PRS) to “collaborate in the task of creating Marxist cadres and linking them to the working class.” These cadres were mainly composed of students from public universities such as the University of Antioquia, the National University in Bogotá, the Santiago de Cali University, and the University of Valle in Cali.

    Arrubla and Estanislao Zuleta, another prominent figure in Estrategia, shared experiences in communist campesino training camps in the Sumapaz moorland and trade union schools in Medellín. The PRS built on these early militancies, forming its base among unions at Antioquia-based companies like Fabricato, Peldar, Tejicondor, Propalia, and Everfit, which helped found the Federation of Workers of Antioquia (Federación de Trabajadores de Antioquia, FEDETA). The PRS also aligned with the resistance efforts of mining workers in Segovia against the Frontino Gold Mines.

    The PRS advocated for a socialist, anti-imperialist, anti-bourgeois, and anti-feudal revolution, but its calls to action resonated more with students than with industrial sectors. The group lasted only a year, as members disagreed over the decision to engage in insurgent armed action against the state. In order to keep the masses unified, the leaders of Estrategia argued that armed action required genuinely insurgent conditions. The group’s affiliation with the PRS became a balancing act: justifying themselves as left-wing intellectuals while keeping their distance from a growing leftist armed struggle.

    Despite the dissolution of the PRS, Studies on Colombian Underdevelopment maintained a lasting influence, operating as an authoritative reference for economics departments well into the 1980s, as well as for several revolutionary and armed political organizations. In the mid-1970s, the text served as a reference for Marxist economic history, as historians shifted from “academic history” to a “historiographic revolution” prioritizing “bottom-up” perspectives.11 This body of work rendered a long-term perspective on the world economy that illuminated uneven development over five centuries, in contrast to the linear views of the USSR Academy of Sciences’ Manual of Economics, which stated that each society must develop its productive potential to the maximum before being able to move to a higher form. From 1974 to 1979, Arrubla also edited the magazine Cuadernos Colombianos, which, alongside the new magazine Ideología y Sociedad, sparked the growth of cultural magazines in the 1970s. This decade of intellectual output reflected the increased professionalization of history and the social sciences.

    While Ideología y Sociedad initially aligned with Arrubla’s diagnoses, it soon featured prominent critiques as well. Most notably, Colombian economist Salomón Kalmanovitz argued that without considering production relations, Arrubla’s analysis lacked a careful examination of intermediate goods and, as a result, over-emphasized external conditions.12 He cited macroeconomic indicators from the second half of the 1960s showing growth in national development, demonstrating that the absence of capital goods did not prevent developmentalist advances via raw materials or intermediate products.

    Kalmanovitz refuted the notion of dependence in Studies on Colombian Underdevelopment, arguing that it “explains the non-development of capitalism without specifically referring to the transformation of production relations.” Kalmanovitz was particularly interested in arguing for the theoretical superiority of Marxism, which he juxtaposed with Arrubla’s dependency theory. One of Kalmanovitz’s references was Brazilian political scientist Francisco Weffort, who denied the “real-historical existence of a contradiction between the nation (as an autonomous unit, necessarily understood in terms of power and class relations) and dependence (as an external link with central countries).” Weffort criticized the “mechanism often suggested by some dependency theorists when they spoke of a ‘concomitant relationship’ between the changes occurring in peripheral countries and the changes produced in central countries, since it nullified the possibility of a transformation emerging from the dominated countries.” By contrast, Arrubla’s theoretical approach considered the relationship of dependence with the metropolis not as an “external” fact but as a structural element of the national economy. 

    Dependency and the armed conflict

    Mario Arrubla and the Grupo Estrategia strongly critiqued the form of industrialization pursued in Colombia during the mid-twentieth century. Like many of his generation, Arrubla rejected the National Front, accusing the project of excluding the masses and obfuscating the role of external political forces. He was also disillusioned with local communist groups who maintained an alliance with the left-wing of the Liberal Party and thus indirectly participated in the regime. Arrubla’s distance from the organs of the state enabled his writing, and his anti-developmentalist stance grew stronger the more he identified Colombia’s national industrial development with its bloody history. For Arrubla, the violent dissolution of the campesino structures had severe implications under neocolonial conditions in Colombia, amounting to the highest “social cost” and requiring “a particularly high quota of pain for the popular masses,” since “in the absence of heavy industry, employment opportunities lag far behind labor supply, and the reserve army takes on monstrous proportions.”13

    In Colombia, the acute dissolution of rural life was achieved “with blood and fire.” The bourgeoisie of dependent countries had understood that they needed to “introduce more or less significant modifications in the political forms of their domination,” which could imply strong, “big-bourgeois” governments or dictatorships of various types. The guerrilla groups opted for the armed path: early gestures toward what would become the guerrilla processes of the sixties had already begun to propagate in the country and would strengthen shortly thereafter. In line with the Old Left, the old communist guerrillas regrouped and gave rise to the Fuerzas Armadas Revolucionarias de Colombia (Revolutionary Armed Forces of Colombia, FARC)—the longest-lasting guerrilla movement on the continent. In May 1964, the army deployed a bloody military operation against one of the areas of campesino self-defense, the small territory of Marquetalia in the department of Tolima; the following year, the Guevarist-inspired Ejército de Liberación Nacional (National Liberation Army, ELN) made its public appearance in Simacota, Santander. It was not long before the figure of Camilo Torres emerged as the revolutionary intellectual of that group. Torres, in line with the decisive emergence of the new political left, would mark the eclipse of the committed intellectuals typified by the Grupo Estrategia, who radicalized their stance as intellectuals instead of bearing arms.

    As an exile in the US in the twenty-first century, Arrubla became more nuanced in his notion of dependence, although he still considered “the true nature of imperialism” a variable required to fully understand economic policy.14 Arrubla began to consider revolution in a more positive light, as favorable to social justice. Meanwhile, Colombia’s historical drift in the late 1970s left him profoundly disillusioned by the left parties’ appropriations of Marxism, as well as about the prospects of progressive paths forward for Colombia. Arrubla’s career reflects the intersections between dependency theories and Marxism during the period. From the 1960s onward, Latin American critical thought began to include an amalgam of theorists who viewed the national economy through the dependency-imperialism binomial—Faletto and Cardoso, for example. Today, we can look to very different theoretical and political efforts that renew theories of imperialism: “socialism of the 21st century,” “Bolivarianism,” or “buen vivir.”

    Situated within debates on the dependency-imperialism dichotomy and Marxist critique, Arrubla’s own trajectory demonstrates the plurality of dependency theories circulating in Latin America from the 1960s onward. More than half a century has passed since Arrubla described the Colombian economy as neocolonial, during which time political violence has intensified and macroeconomic indicators continue to show disappointing results. There is now talk of the weak industrialization or even deindustrialization of the Colombian economy; meanwhile, the agrarian issue and fierce land concentration remain the Gordian knot of the so-called post-conflict era. The current government under Gustavo Petro has proposed a structural transformation of the Colombian economy, but it continues to face domestic political challenges as well as ever-present global financial constraints. These persistent obstacles—present in both the local and international spheres—testify to the enduring resonance of Mario Arrubla’s diagnosis of Colombia’s society and economy.

    This essay is based on the author’s recent book, Hombres de Ideas. Entre la revolución y la democracia. Los itinerarios cruzados de Mario Arrubla y Estanislao Zuleta: los años 60 y la izquierda en Colombia (Bogota, Ed. Ariel, 2023).

  6. Great Green Wall

    Comments Off on Great Green Wall

    Biden’s announcement this week of sharply higher tariffs on Chinese imports is an escalation in the yearslong tariff war on China. The new tariffs specifically target green goods, most notably electric vehicles, duties on which have now quadrupled to 100 percent. Tariffs on lithium-ion batteries, critical minerals, and solar cells will also be substantially increased. The measures are set to take effect in 2024 (with the exception of graphite, where Chinese dominance is most stark and tariffs begin in 2026).

    The Biden administration has raised tariffs on roughly $18 billion of goods imported from China.

    Why now? There is no doubt that the announcement of these tariffs is performative. With Trump leading the polls in several swing states, Biden’s decision to fly the protectionist flag is intended to win over voters.

    The performance is also meant to reassure investors in domestic manufacturing, who, despite the IRA’s generous manufacturer and consumer subsidies, are worried about a flood of cheaper Chinese imports outcompeting domestically made green goods. The combination of new protective tariffs plus IRA subsidies is meant to buy time for US-based firms to catch up in green technologies.

    Biden’s tariffs are more targeted than those on over $300 billion of Chinese imports introduced by Trump in 2018. But the signal they send is that tariffs on China are bipartisan; given this broad anti-China sentiment in Congress, they will be almost impossible to unwind.

    Top Biden officials—Janet Yellen, Jake Sullivan, Tony Blinken—have delivered the “our blessed homeland, your barbarous wastes” message to China.

    Tariffs on EVs are already at 27.5 percent (Trump slapped an extra 25 percent on top of the standard 2.5 percent US tariff). That, combined with the IRA’s anti-China tax credit design, has meant that only Polestar (owned by China’s Geely) has been exporting Chinese EVs to the US. Chinese batteries, on the other hand, are still being imported, but the IRA’s “foreign entity of concern” rules aim to bar the $3,750 tax subsidy from going to EVs containing battery metals processed in China, whether by foreign or Chinese firms.

    US risks becoming a technological backwater

    China is the world leader in EV production and innovation. The Cambrian explosion of Chinese EV firms over the last five years—there are now over 200 EV manufacturers in a Darwinian competition for margins and market share—has meant that Chinese EVs are now better and cheaper than their Western counterparts, resembling smartphones on wheels.

    Biden’s intention is to stave off the Chinese and stimulate a domestic and friendshored buildout of the EV supply chain, stretching from mines to the factory floor. Side deals with friendly governments have been made; Canada and Australia have both been deemed eligible for Defense Production Act support for their battery metals. After their howls of outrage, Europeans, Japanese, and Koreans received a leased-vehicle exemption, meaning that the “made in America” rules don’t apply to them and their firms’ vehicles can still qualify for subsidies if they are leased rather than bought. Since the exemption was finalized in December 2022, EV imports from Korea, Japan, and Germany have surged.

    For the US, this is too big to get wrong. American car makers survived the competition from Japanese and Korean imports in the 1970s and 1980s, but this time the business model of the entire industry is shifting. Firms are no longer competing for dominance on the level of vehicle technology, but for the entire ecosystem. For Bidenists, There Is No Alternative to walling off US production.

    After all the Administration’s chatter about following in Alexander Hamilton’s footsteps, the US may have to do what every developing country that wants its industry to catch up has done: form joint ventures with leading foreign firms and throw the best engineers at the shop floor to absorb their technology. 

    This is what the US has done with chips. CHIPS Act subsidies attracted all the world’s best manufacturers to set up fabs in the US (see South Korea’s Samsung in Texas and TSMC in Arizona). By contrast, Biden’s auto policy of infant industry protection without technology transfer is a recipe for bloated and lazy domestic firms making unaffordable, unattractive green goods. 

    US car companies are not oblivious to that risk. While the government walls off production, US firms are acquiring Chinese knowhow by simply licensing the superior technology of lead firms BYD and CATL. Ford (in Michigan) and Tesla (in Nevada) are partnering with CATL to make batteries. CATL says that it has structured its licensing deal with Ford to be compliant with “foreign entity of concern” rules. For its part, Tesla already uses BYD cells in Germany; Ford and GM use BYD batteries. Even Trump doesn’t like the idea of a great wall against Chinese FDI in America. Speaking at an Ohio rally in March, he signaled an openness to Chinese firms building plants “in Michigan, in Ohio, in South Carolina”—so long as they were prepared to employ American workers.

    The story is similar globally. Hungary and Germany, Brazil and Chile have no intention of falling behind in the technological race. Rather than simply buying batteries from China, they have attracted Chinese firms to invest locally so that batteries can be made at home, creating jobs, know-how, and local value-added. CATL has opened a $7.3 billion gigafactory in Europe. According to the Rhodium Group, the goal of the EU’s probe into Chinese EV subsidies last year was to force Chinese investment into the EU.

    Cat and mouse

    The new tariffs come as no surprise to Beijing. In anticipation, Chinese firms have been busy putting facts on the ground, rerouting supply chains through third countries with pre-existing Free Trade Agreements with the US—Morocco, Mexico, and Korea chief amongst them. Their goal is to ensure backdoor access to the vast American market and obtain IRA subsidies from the US Treasury. That outbound investment strategy, duly supported by the Chinese government’s NEV Industry Development Plan (2021–2035), means that Chinese firms and their joint ventures with local partners in third countries are well prepared to bypass Biden’s latest round of 25 percent tariffs on batteries and the 25 percent tariff on the critical minerals inside them.

    Auto firms headquartered in the US, Europe, and East Asia have built transnational battery production networks with leading Chinese firms like BYD and CATL. (Source: Gavin Bridge)

    Chinese firms employed a similar strategy in the 2010s when they bypassed US solar tariffs by rerouting into Southeast Asia. Over 80 percent of solar cells imported into the US now come via Vietnam, Malaysia, Thailand, and Cambodia.

    Will the rerouting strategy work for the far larger and more politically consequential auto industry? US politicians are already planning countermeasures to Chinese circumvention. In a letter to Biden’s trade chief Katherine Tai last November, the House Select Committee on the Chinese Communist Party wrote that the US “must also be prepared to address the coming wave of [Chinese] vehicles that will be exported from our other trading partners, such as Mexico, as [Chinese] automakers look to strategically establish operations outside of [China].” 

    Earlier this month came the Treasury’s final “Foreign Entity of Concern” ruling. Its intention is to reduce US reliance on China for battery components and critical minerals. Automakers won’t be able to receive IRA tax credits if any company in their battery supply chain has 25 percent or more of its equity, voting rights, or board seats owned by a Chinese government-linked company.
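
    Schematically, the ruling works as a threshold test; the sketch below illustrates the logic as described here, with invented class and field names, while the actual Treasury rule is considerably more detailed.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Supplier:
        name: str
        equity_pct: float        # stake held by a government-linked entity
        voting_rights_pct: float
        board_seats_pct: float

    def is_entity_of_concern(s: Supplier, threshold: float = 25.0) -> bool:
        # Flagged if ANY of the three ownership measures reaches the threshold.
        return max(s.equity_pct, s.voting_rights_pct, s.board_seats_pct) >= threshold

    def qualifies_for_credit(chain: list[Supplier]) -> bool:
        # As described above, one flagged supplier anywhere in the battery
        # supply chain disqualifies the vehicle from IRA tax credits.
        return not any(is_entity_of_concern(s) for s in chain)

    chain = [Supplier("cathode maker", 24.0, 10.0, 0.0),
             Supplier("lithium refiner", 30.0, 5.0, 0.0)]
    print(qualifies_for_credit(chain))  # False: the refiner crosses 25 percent
    ```
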
    This cat and mouse game will continue for years to come. China will continue to hold a dominant position in all parts of the EV and battery supply chain, even if Chinese-branded EVs remain a difficult sell in the nation with the biggest “overcapacity” in crude oil.

    Geopolitics all the way down

    The tariff war is ultimately about geopolitics, not the cat and mouse game of supply chains. No one knows whether China will respond performatively or powerfully. The Ministry of Commerce’s response was that “China will take resolute measures to defend its rights and interests.” China’s geopoliticians will have to calculate the extent to which forbearance is in their interest, and where to draw the lines past which China responds far more aggressively.

    What will the new tariffs mean for other countries in the US sphere of influence? What might happen if the US tells its allies and neutral countries to stop using Chinese green technologies? It would not be new for the US to extend its technological containment internationally; see its efforts to get allies to lock out Huawei’s 5G infrastructure. 

    But the differences between 5G/chips and green technologies are many. The US restriction of Huawei 5G technology was only partial, and good US-friendly alternatives existed. By contrast, when it comes to green technologies, the whole world lags dramatically behind China, which has revolutionized green industries. For now, there is nowhere else to go but China, and if the US were to insist that other countries refrain from partnering with Chinese firms, it would only face isolation. Germany’s Chancellor Scholz has just cautioned that Germany won’t blindly follow the US; we can expect countries to offer “different perspectives.”

  7. Supply-Side Healthcare?

    Comments Off on Supply-Side Healthcare?

    In 2019, Florida Governor Ron DeSantis signed a bill into law that deregulated new hospital construction and unleashed a “hospital-building boom.” Some sixty-five new hospitals were planned in the three years after DeSantis signed the bill ending the decades-old regulations on hospital construction called “certificate of need” (CON) laws, which could amount to as much as a 20 percent increase in total hospitals in the state.1 And it isn’t just Florida: Ohio repealed its own CON laws for hospitals in 2012. Today, Ohio is experiencing its own hospital boom. In 2019, the Cleveland Clinic planned a new facility within seven miles of two other hospitals from two other health systems. “The Columbus area seems sick with medical construction,” reports the Columbus Dispatch. “Every week, it seems, another major project gets underway.” In 2022, the largest and wealthiest hospital system in Massachusetts—Mass General Brigham—received approval for an enormous $2 billion expansion. Nationally, total private construction spending on healthcare facilities has nearly doubled in the past decade, rising from $28.9 billion in 2014 to $50.4 billion in 2023.2

As some pundits tell it, this type of “supply-side” competition-based expansion is exactly the medicine our healthcare system needs. The root cause of constrained access and rising healthcare costs, the argument goes, is public regulation giving the provider side of the market more leverage, allowing it to extract higher payments from insurers. Greater competition will lead to expanding supply, leveling the playing field between providers and insurers without altering the nature of ownership or public responsibility in the sector. “Call it supply-side economics, but for healthcare,” Matthew Yglesias wrote in Bloomberg, advising that we “focus less on the way insurance works than on expanding care to increase competition and reduce prices.” The Niskanen Center echoes the sentiment and generalizes it across those service industries at the core of American life. In healthcare, higher education, and housing, this bipartisan consensus argues, supply expansion is the solution to rising costs.3

Hospital supply expansion surely is needed in many places in the United States; decades of disadvantage, discrimination, and disinvestment have left many communities—particularly those with low-income and minority populations—deprived of needed healthcare facilities and resources. The problem is that the new “supply-side” gambit won’t succeed in securing access to healthcare for these populations. Nothing in the existing marketplace ensures that new construction will go where it’s needed. All evidence suggests that it will, rather, be put in the service of gaining market share in already lucrative areas. BayCare’s new $246 million facility in the Tampa suburb of Wesley Chapel (population 64,866) is one such example. As Kaiser Health News reports, it “doesn’t provide any health care services beyond what patients could receive at a hospital just 2 miles away,” and indeed yet another hospital is under construction a mere five-minute drive away. “The building of new hospitals and the expansion of existing hospitals,” a hospital CEO told the Cleveland Plain Dealer, is mostly “competitive strategy.” Nor has the near doubling of hospital construction expenditures over the last decade done anything to reduce hospital expenditures, which have risen 20 percent in the same period.

“Supply-side economics, but for healthcare” misdiagnoses the basic problem, which is not the performance of competition but rather the nature of financing and ownership. US hospitals are increasingly dominated by corporate behemoths and private equity firms that usher in skyrocketing prices and inequities while degrading the quality of care. Any effort to expand supply within the existing US healthcare scene will only accelerate this galloping corporate takeover, making us worse off in the process. Moreover, the nature of healthcare—a service with unique characteristics distinct from other commodities—has profound consequences for the supply-side strategy of harnessing entrepreneurial spirits to expand provision efficiently. Unlike other services, the supply of healthcare induces its own demand, for every human body ultimately fails. Hospital beds, doctors’ hours, and machines can generally be put to some use. Nor is technological innovation likely to increase the cost efficiency or labor productivity of provision; on the contrary, as its capital base has grown, the US healthcare sector has become more—rather than less—labor intensive.

    Competition in pursuit of endless market-driven growth is a quest toward a mirage. If we hope to overcome the challenges that our contemporary healthcare sector poses to improving community health, a new perspective is needed. Like the venerable services of art and education, whose rising costs were studied by the economist William Baumol more than half a century ago, the fundamental value of healthcare services (and care work more generally) is inextricably tied to the time—labor hours—that workers put into them. Healthcare services are essentially Baumolean: time is, after all, the essence of care. This poses universal challenges for any public health strategy. For if the question of how to allocate the time of our growing healthcare workforce is inseparable from that of supply, then the market-based answer of technological innovation and competition is no answer at all. Where questions of allocation persist, so too lurks the long-dormant idea of healthcare planning.

    The pitfalls of quantitative supply expansion

    In an influential paper published in 1959, health policy thinker Milton Roemer and his colleague reported that some 70 percent of hospital use rates could be explained by bed supply, in part based on their analysis of hospital data from Saskatchewan.4 They famously concluded that “hospital beds that are built tend to be used,” at least when populations are well insured. The idea that supply creates its own demand in healthcare came to be known as “Roemer’s Law.” What might explain this form of “provider-induced demand”? To cynics, it might seem like straightforward fraud: clinicians with time on their hands providing unnecessary care when patients have the means to pay for services (e.g. generous insurance) and don’t know better. That certainly can occur: one field-based study found that Swiss dentists with more open appointments were more likely to recommend unnecessary cavity fillings.5 But this is only a small part of the story. Doctors, after all, are in large part responsible for encouraging patients to undergo unpleasant procedures, medication regimens, and so forth that improve their health—arguably, a form of “nagged” if not “induced” demand. But additionally, spare medical resources can typically be put to some use. An entirely scrupulous primary care clinician whose clinic schedule suddenly opens up is more likely to invite patients to return for follow-up visits at shorter intervals. In other words, the schedule won’t stay open for long—and the extra care might benefit some patients. Similarly, ICU physicians (like me) responsible for triaging patients to either a bed on the regular hospital floor or to the ICU, depending on their severity of illness, are more likely to send borderline cases to intensive care when there is greater availability of open ICU beds (and vice versa). That’s not a bad thing, particularly in the context of fixed costs: the intentional use of unused supply, assuming it is at least marginally helpful (e.g. closer nursing attention in the ICU) can be rational and appropriate.

    Subsequent studies have confirmed Roemer’s basic hypothesis. Multiple recent econometric analyses have found that when a shift of demand affects the use of care by one population—for instance, due to gain (or loss) of insurance coverage—the change in utilization of care by that population tends to be offset by slightly less (or more) care provided to populations whose coverage does not change.6 Economists Sherry Glied and Kai Hong, for instance, found that increases in the use of care by the non-Medicare population due to expanded eligibility for Medicaid had offsetting spillover effects on the (stably insured) Medicare population, who were provided slightly less care of little or no medical value. As they concluded, when a population has a high enough level of insurance coverage, “the aggregate quantity of health care services consumed is largely dependent on the supply side of the healthcare market.” 
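
The accounting behind these offsets can be made explicit. In a minimal sketch (the notation is mine, not Glied and Hong’s): suppose aggregate utilization is pinned down by supply S rather than by demand, so that two groups sharing that supply satisfy

$$U_{\text{newly insured}} + U_{\text{stably insured}} = \bar{U}(S).$$

Holding S fixed, any change in one group’s utilization must be offset by the other: $\Delta U_{\text{newly insured}} = -\Delta U_{\text{stably insured}}$. This is the limiting case in which supply fully binds; the empirical offsets described above are partial, but the direction is the same.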

    My research with colleagues has come to similar conclusions. We analyzed health survey microdata before and after implementation of Medicare in 1966 and the Affordable Care Act in 2014, finding little evidence for an aggregate increase in society-wide healthcare use despite a substantial expansion of insurance (i.e. of demand), with some redistribution in use towards newly insured populations. Reviewing published utilization effects of some thirteen universal coverage expansions in capitalist nations over the past century, we found generally similar trends. For instance, studies performed in the United Kingdom and Canada examining the implementation of each nation’s universal system (both of which provide free medical care) found that increases in doctor visits among those with low incomes were offset by small reductions in visits among those with the highest incomes.8

    The problem of competitive supply expansion

But what if, instead of emphasizing the effects of supply expansion on aggregate supply, we were to emphasize its role in enhancing competition (and thereby reducing costs), as “supply siders” suggest? In addition to the Roemerian dynamics that would undercut potential cost-savings from such an approach, there are other reasons why efforts to gin up market competition are a fool’s errand.

The majority of payments to hospitals in the US today are not set competitively by markets, but administratively by the government, mostly Medicare and Medicaid. Thus, any impact of greater competition would mostly be limited to those prices hospitals charge private insurers (which may in part explain the Niskanen Center’s and Cato’s embrace of the idea). This expensive minority of the market does drive the upward creep of system-wide costs, but it is as much through the fundamentals of expensive technology, drugs, and the very financial interests of private insurers themselves (which take a large and steady share of rising premiums for their high administrative overhead, including profits) as it is through lack of provider competition. After all, the seminal 2019 study casting light on hospital payments from private insurers, published in the Quarterly Journal of Economics, found that prices at monopoly hospitals were 12 percent higher than those at hospitals in markets with four or more competing hospitals. That differential is real, but it represents a premium above the industry’s basic costs, which have risen over 200 percent (adjusted for inflation) since 1987.
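
The relative magnitudes are worth spelling out. As a back-of-the-envelope illustration using only the figures above (the decomposition is mine): if the 1987 base cost is c and basic costs have since more than tripled, then even fully competing away the monopoly premium moves today’s price only from about

$$\underbrace{1.12 \times 3c}_{\text{monopoly price}} \;\longrightarrow\; \underbrace{3c}_{\text{competitive price}} \quad \text{vs.} \quad \underbrace{c}_{\text{1987 base}}.$$

Competition, in other words, addresses the thin premium, not the tripled base.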

It is also essential to stress that a deregulatory, market-driven approach to supply expansion will only exacerbate the uneven, unequal geographic landscape of healthcare in the US. This is already a major problem. One analysis, for instance, found that new cardiac catheterization facilities are more likely to be set up near existing facilities, rather than in places of unmet need: competition for cardiac catheterization patients is the apparent explanation for these redundancies. Meanwhile, excessive provision of cardiac services, notably cardiac stents, remains a significant problem, with supply of the specialty service (again) a likely determinant of the overall provision—and possibly overuse—of this service.9 Similarly, the robust market-driven expansion of neonatal ICU programs between 1991 and 2017 was basically uncorrelated at the regional level with either perinatal risk (a metric of community need for these programs) or with baseline differences in supply, according to another study.

But perhaps most fundamentally, the supply redundancy that real competition requires is simply not feasible or desirable in many communities, or for many health services. It is largely meaningless when it comes to emergencies: patients do not want to comparison shop on the way to the hospital when exsanguinating from a gunshot wound or struggling to breathe with COVID-19 pneumonia. And for specialized surgeries like transplantations, a high patient volume is needed to maintain professional and center expertise: a particular community may only be able to sustain a single such center (if any). Some communities simply require only one hospital. And in cases where there is more than one, it’s better for healthcare workers and patients when they coordinate: as the pandemic revealed, cooperation, not competition, is the needed virtue in healthcare organization.

    But redundancy is the inevitable outcome of the market-driven healthcare system in the US—as the current hospital building boom attests. The expansion of healthcare capital tends to follow operating revenues and firm profitability; in an unequal society (with inequitable health coverage), neither revenues nor profits have any necessary relationship to the underlying health needs of the community. In a recent study, colleagues and I report that the distribution of hospital capital—physical assets like land, structures, and equipment—is linked not to a community’s health needs but to its private wealth. Moreover, we find that the provision and costliness of care provided at the population level is positively correlated with hospital capital.10 With this in mind, the causes of the industry’s persistent cost creep and the inequity in its distribution of services come sharply into focus. Investment flows to well-off populations that are already well-served. In contrast, disadvantaged populations are more likely to see a withering of healthcare capital, as witnessed by stories of hospitals closing within many major US cities even while construction booms elsewhere. A recent study confirms that hospitals are far more likely to close if they are located in Black and socially-disadvantaged communities.

    Far from community-wide population health needs, it is market pressures themselves that appear to be the primary driver of both hospital capital expansion and rising costs in the US market-based healthcare system. The solution to an inequitable distribution of supply—or market-driven consolidation—is not to further unfetter market forces in a fruitless effort to ramp up yet more competition: the answer, quite simply, is public planning that explicitly allocates capital resources based on community health needs and not their potential for revenue production.11

    Grand illusions of a productivity fix

If a quantitative or competitive expansion of hospital infrastructure is unlikely to reduce society-wide aggregate hospital expenditures, there’s even less hope of a qualitative supply-side transformation in care provision via technological advance that achieves a vaunted productivity revolution. Many hope that investments in new medicines or technologies (e.g. the electronic health record, telehealth, or AI) might dramatically reduce the labor costs of providing healthcare, boosting sector productivity and improving access. However, similar to other forms of care work, the value of healthcare services is directly linked to the quantity of labor-time contained within them. Shortening the average length of a doctor visit, intuitively, renders that visit fundamentally less valuable.12 (Time pressures, no doubt, drive demand for concierge care.) Similarly, the productivity of inpatient nursing for hospitalized patients cannot be increased without sacrificing the quality of care—i.e. increasing nurse-to-patient ratios and giving patients less individualized attention. And even if it were desirable, the notion that technology will increase medical productivity is empirically unsubstantiated.13 Over the past fifty years—a time of unprecedented expansion in hospital capital and medical (and information) technology—the number of full-time hospital nurse equivalents per inpatient bed day has not fallen, but actually doubled.14

    In a recent study, we similarly found that the number of patient-visit-minutes provided annually per US physician has effectively remained largely unchanged since 1979: none of the enormous technological medical changes of the past half-century increased the number of annual visit minutes doctors can provide (nor would we expect it to).

Economist William Baumol described this dynamic—and its economic consequences—some time ago, albeit without reference to the medical sector. In the 1960s, he pointed to the live performance of a string quartet as a canonical example of a service where labor productivity cannot increase. Two hundred years ago, an hour-long quartet production required four musician labor hours—and the same is true today. (Listening to a recording or watching a live stream is, qualitatively, a different product.) If we assume that pay for musicians must nevertheless be linked to economy-wide wages, as musicians exist in the broader labor market, then the labor costs to society of string-quartet productions must definitionally rise over time.
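
The arithmetic behind this claim can be made explicit. In a stylized sketch (standard textbook notation, not Baumol’s own exposition): let a performance require a fixed h = 4 musician-hours, and let the economy-wide wage grow at rate g from an initial level $w_0$, pulled up by productivity gains elsewhere. The labor cost of one performance in year t is then

$$c_t = h \cdot w_t = 4\,w_0(1+g)^t,$$

which rises at the full rate of economy-wide wage growth even though nothing about the performance itself has changed. Relative to goods whose unit labor requirements fall with productivity, the quartet grows ever more expensive.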

    Baumol saw this proceeding towards one of two endpoints. In one scenario, where demand is elastic, the “low-productivity” sector will wither and shrink, and potentially be reduced to a luxury niche market: to some extent, this might be said to have occurred in the case of contemporary classical music. In a second scenario, where demand is less elastic or the service is publicly subsidized, “low-productivity” service sectors will encompass an ever-increasing share of the total workforce. That well describes healthcare, and many care services from education to domestic work, today.

But healthcare diverges from Baumol’s canonical string quartet in some important respects that have obscured its fundamentally Baumolean characteristics and confused thinking about reform. First, there is the extent of already-existing public financing: in a recent study, colleagues and I demonstrated that the tax-financed share of total healthcare expenditures has soared over the last century, from 9 percent in 1923 to 69 percent in 2020. Second, there is the elasticity of demand: in major trauma, for instance, you pay whatever the price may be. Demand, in that clinical context, tends to be entirely inelastic. Third, healthcare, unlike the string quartet, has undergone massive technological advances in the past century, and indeed has experienced remarkable capital expansion. I estimate an approximate doubling in (inflation-adjusted) hospital capital per hospital bed day over the last twenty-five years.

Fourth, and critical to the economics of healthcare, this rising capital investment has been accompanied by not just stable but rising labor intensity: the number of hospital nurses per hospital bed day, as previously noted, has risen over time, whereas the number of musicians per quartet-hour has remained exactly stable. Fifth, and finally, healthcare is undergoing an unprecedented corporate consolidation that has put it at the commanding heights of the economy. At the same time, on a more fundamental level, the similarities are perhaps more salient: an hour-long consultation with a physician or an hour of bedside nursing has always required, and will require to the end of time, precisely one hour of labor.

Now, in spite of this, it is obviously (and thankfully) true that capital expansion and technological change can vastly improve medical treatment. And admittedly, this can economize on the labor inputs needed to treat specific disease processes. Tuberculosis (TB) offers a sharp example. Before the development of anti-TB chemotherapy, vast amounts of resources were spent in maintaining and operating sanatoria for tuberculosis patients, and the workforce that cared for them. In 1946, for instance, 412 tuberculosis hospitals with 75,000 beds and 36,000 personnel were in operation, with an average daily census of some 55,000 patients.15 Today, by contrast, no such hospitals exist in the US and TB is generally treatable with some months of oral medications at home. Even without a comprehensive study, we can assume that the labor hours devoted to mitigating TB-suffering, society-wide, are vastly smaller today than they once were. TB-care is more productive.

    But such disease-specific changes in medical “productivity” do not aggregate to total cost savings (or reduced labor inputs) for the nation. For one thing, it is an unfortunate fact that curable infectious diseases are something of the exception to the rule in medicine: most of the burden of illness we experience stems from chronic disease processes (and acute complications thereof) that result from aging and environmental and social conditions. For another, improving health and longevity may actually increase our lifetime healthcare needs. Technological change that vastly reduces medical labor for one illness (say, vaccination leading to smallpox eradication in the 1970s) can increase the opportunities for healthcare services if it increases the size and longevity of the population: a disease process that kills us, after all, prevents us from using healthcare services thereafter. (It is a dark commentary on the state of the healthcare discourse that such facts must be explained.) By one estimate, for instance, helping people to quit smoking will increase their healthcare costs in the long term, which, obviously, is a very good thing. We live to die another day: deferring death is (in a sense) a primary metric of success of medicine. Medical failure, in contrast, can be what actually saves money: the Covid-19 pandemic, tragically, reduced Medicare costs because it killed so many enrollees with high needs.

A unique economic principle of healthcare hence emerges from these basic realities: as a general rule, the advance of medicine will typically fail to reduce the aggregate labor-time needed to treat disease in a society, and indeed may well increase it. This may in part explain why the explosion of medical technology (and better overall health as reflected in rising life expectancy) over the last century has been accompanied by a giant increase in the share of the workforce employed in healthcare in the US, and globally. Far from “robots” (i.e. physical capital) replacing healthcare jobs, investment in these technologies seems to create them. The advent of new diagnostic tests, procedures, or therapies has not reduced the amount of time that a physician needs to interview a patient, perform a physical, examine imaging studies, review laboratory results, read past medical records, and confer with colleagues. On the contrary, integrating expanding reams of data and formulating an assessment and plan; counseling patients and families and responding thoughtfully to queries and concerns; advocating for patients or pushing back against bureaucracies created to manage capital equipment; and contemplating decisions to operate—all increase the complexity and time required for responsible medical care. As the diagnostic options widen, as the therapeutic armamentarium expands, as the medical literature grows, as patients’ clinical data becomes more voluminous and (perhaps paradoxically) more accessible, the need for time will not shrink and will probably grow. “Time,” the general practitioner, epidemiologist, and writer Julian Tudor Hart wrote, “is the real currency of primary care…”—words that have much broader applicability in the political economy of medicine.

    This principle—its deepening of Baumol’s “cost disease”—has profound consequences for the current supply-side agenda. Our society contains the medical science and technology to fundamentally lengthen and improve the life-course experiences of its constituent individuals. But our economic science—the ideas that explain the underlying patterns of ownership, revenues, and provision of care—has not caught up to the advances in medicine. In fact, our healthcare economy today is likely constraining health outcomes: since 1980, life expectancy in the US has progressively diverged from that of other wealthy nations, translating to hundreds of thousands of excess deaths—or “missing Americans” as some researchers have cast it—each year, even as our health spending has soared.

    The whole is greater than the sum of its parts: increased productivity (from capital and research investment) in the treatment of one disease may have zero impact on productivity and hence costs of the healthcare sector as a whole, while increased supply may induce further demand at the higher level of costs. A new paradigm of demand, supply, and how we understand the value of time in healthcare is urgently needed.

    The true currency of care

Baumolean dynamics might, at first blush, seem to impose a hard limit on what even transformative healthcare reform could achieve: his “cost disease” is incurable. But there is reason for optimism that a new economics of care could greatly improve population health and happiness. For one thing, supply issues aside, socialized healthcare financing would achieve a rapid shift in demand that would expand access to care and improve health. Today, the demand for care is determined both by health needs and ability to pay. Eliminating the role of the latter determinant via a universal free-at-point-of-use single-payer system would achieve a salubrious, egalitarian shift in healthcare utilization. It would take the ability to pay out of the demand equation and achieve a distribution of healthcare service utilization—or equivalently, of the time of healthcare workers—according to patients’ needs and not means.

But the importance of government planning in healthcare services remains—and may become even greater should the demand shift away from private insurance ever occur. While corporations have long played a leading role in health insurance, today that role is colossal. Healthcare providers are increasingly being taken over by corporate giants that are, more and more, the employers of clinicians. Corporate providers have been found to be more costly, and to provide worse quality care. This unprecedented corporate takeover of healthcare provision has put these firms at the commanding heights of the US economy, and provides a compelling argument for going beyond socialized healthcare financing towards public and community ownership, too, as colleagues and I have argued. Meanwhile, a health planning approach to supply imbalances involving explicit public funding of new capital investments could achieve a more just and equitable distribution of healthcare infrastructure—or even a measured (and intended) increase in aggregate supply for those services for which we want to see higher average per capita utilization (e.g. primary care). These are supply-side interventions, albeit not what the Niskanen Center has in mind.

In addition to improving the quality of and access to care services through such innovations, socialized financing can achieve cost savings both in our pharmaceutical industry and in the enormous administrative complexity of our fragmented, multi-payer insurance system. High prescription drug prices stem not from Baumol’s “cost disease” but instead from drug firms’ intellectual property rights. Firms can price such patent-protected drugs wherever they wish, and supply limitations are artificial; a system to develop drugs with public funds and produce them immediately as generics could produce major savings. Even here, however, we should be cautious about overstating savings: the proceeds from such a shift must be accompanied by a major public investment in pharmaceutical research if we wish to maintain, indeed accelerate, the quest for better treatments and cures.16 Private insurance companies, similarly, impose large meta-costs beyond the cost of care itself: premiums are spent on plan design, complex systems to deter the use of care and increase profit, exorbitant executive salaries, dividends to shareholders, marketing campaigns, and so forth. As a result, the average private insurer takes over 10 percent of its total revenues for its overhead, compared to about 2 percent for traditional Medicare or Canadian National Health Insurance. The Congressional Budget Office has hence estimated that single-payer financing could save about $400 billion annually by pushing system-wide administrative costs down to those of traditional Medicare.17 On the other side of the industry ledger, meanwhile, hospitals employ armies of billers and coders to secure payments from a multiplicity of private payers, each with its own rules and requirements,18 or even to chase patients for copays and deductibles, at enormous costs that can be halved through single-payer financing.19 Again, however, all such savings will be partially (though not entirely) offset by a redistribution of resources toward improving and expanding the delivery of healthcare to those who today experience inadequate care. For that, after all, is the primary reason why many of us fight for a better healthcare system: not to lower costs but to improve and expand the provision of care.

An egalitarian, comprehensive healthcare reform could, that is to say, accomplish a great deal on both the demand and supply side of the medical ledger. But what it cannot do is realize a surge in the productivity of healthcare that allows this all to happen on the cheap. Both conservative and left-wing visions of a future where the production costs of healthcare plummet due to supply expansion or technological change have, as I’ve explored, no basis in history or in economic theory. Predictions of post-work utopian societies, whether communist or techno-libertarian in orientation, are hence fundamentally wrong, which has far broader implications for left-wing politics and organizing. While the production of goods may practically be limitless when one envisions ever-advancing technology, the provision of services—specifically those services whose basic value is measured in time—has an irrevocable and hard limit. Indeed, that is a primary reason why reducing relative inequality, and not only the absolute economic “floor,” is so necessary in the struggle for a more just society. Relative economic inequality is the exchange currency of time. In a society with only two people—one rich and one poor—a ten-fold pay differential between the two will always allow the former to work one hour in exchange for ten hours of the latter’s time, even if the pay of both were to simultaneously double or treble (or surge 100-fold) due to a space-age industrial revolution.
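
The invariance is simple to verify in this stylized two-person example (the notation is mine): with wages $w_R = 10\,w_P$, one hour of the rich person’s labor exchanges for

$$\frac{w_R}{w_P} = \frac{10\,w_P}{w_P} = 10$$

hours of the poor person’s time, and scaling both wages by any factor k leaves the ratio $k\,w_R / k\,w_P = 10$ unchanged. Only a change in relative pay alters the exchange rate of time; growth alone cannot.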

    Yet these medico-economic realities should not be reason for sorrow. It is but a banal truism that many of the most important things we need for a good life necessitate the provision of time from a fellow human being: we need others to do the things we cannot do for ourselves because we lack the time or ability to do or learn them, or because we wish to put aside at least some of our precious moments for the pursuit of other enjoyable things. We do not desire the version of such services that contain less time because this also means they contain less intrinsic use value. Such time cannot be produced like steel, automobiles, or microchips. Time, we must acknowledge, can only be redistributed.

  8. Partners in Growth?

    Comments Off on Partners in Growth?

With general elections continuing into early June, Indian Prime Minister Narendra Modi and his ruling Bharatiya Janata Party (BJP) are poised to begin a second decade in power. In the realm of political economy, it is meant to be the decade of Atmanirbhar Bharat or “Self-Reliant India,” Modi’s ambitious agenda of enhancing domestic manufacturing, infrastructural development, and international competitiveness—rhetorically packaged together in the idiom of Hindutva or Hindu nationalism. To realize this vision, as the Wall Street Journal reported last year, the government depends on “six giant companies,” including Reliance (led by Mukesh Ambani, one of the twenty richest billionaires in the world) and the infamous Adani group (which has seemingly recovered its fortunes in the wake of a scandal sparked by the publication of a report by short-seller Hindenburg Research).

Of these companies, it is the venerable giant Tata that has promised to make the largest overall investment in India—$90 billion over five years—in key fields such as electric vehicles, military transport, telecommunications, and civil aviation. In a show of financial strength, Tata recently acquired the ailing flag carrier Air India, which it had originally launched in the 1930s as a private venture. Unlike Reliance, which offers a classic example of vertical integration in its core petrochemical enterprises, Tata remains horizontally diversified as a so-called “salt-to-software” conglomerate. Unlike Adani, whose enjoyment of regulatory favors and personal proximity to Modi goes back years to their joint home state of Gujarat, Tata does not quite fit the “crony capitalist” model.1 With its long tradition of philanthropy and its nation-building ethos, it also stands apart from the extravagance and aggressiveness of the new tycoons. The austere former Chairman Ratan Tata is reported to have said about Ambani’s twenty-seven-story luxury residence in Mumbai, “That’s what revolutions are made of.”

    Despite these differences, Tata now appears to be just as firmly embedded in Modi’s agenda as Ambani or Adani. The Indian state has always relied on private corporate capital to generate growth, while capitalists have always sought the holy grail of “state assistance” without “state discipline.”2 But the relationship between the state and corporate capital has grown exceptionally close under Modi, encompassing favorable treatment in awarding contracts and bank loans and in selectively loosening regulations. This trend has raised alarm about the extent of concentration, corruption, and inequality in the Indian economy. With the right political connections in place, the likes of Adani are simply “too big to fail.” There have been a few dissenting voices against government policy among business leaders, such as the late Rahul Bajaj, but for the most part, Modi’s agenda of cultivating a pliant corporate class has gone off with few hitches. The rise of regional and small-scale entrepreneurial capital and the competitive “churn” that accompanied the liberalizing market reforms of the 1990s appear to be stalling or reversing. The result is, according to some observers, an unwieldy hybrid of Silicon Valley and Russian oligarchy, overseen by a “strong but ineffective state” that has not relinquished its dirigiste tendencies—reshuffling rather than withdrawing its patronage. 

    A look back at Tata’s history can help explain how this state of affairs came to be. Established in 1868 as a merchant company active in the cotton and opium trades across the Indian Ocean, Tata ascended to the commanding heights of the emergent national economy through pioneering entrepreneurship in textiles, iron and steel, and hydroelectric power generation. After independence in 1947, the group moved into new sectors, facing up to a newly assertive regulatory state under India’s first Prime Minister, Jawaharlal Nehru, and his daughter Indira Gandhi. In the 1990s, as regulations were partially dismantled, Tata began operating on a truly global scale and took the lead in the information technology sector.3 Today, in the aftermath of the 2008 and 2020 global economic crises, it is at the forefront of a concerted pivot to the domestic market. 

Throughout these ups and downs, one aspect has remained constant. Tata has been able to survive for over a century and a half by maintaining a strategic distance from the state in all its avatars, from the British Raj to Nehru to Modi. This contrasts both with Tata’s rivals in India (Birla, Ambani, and Adani) and with business groups in other “emerging markets,” from Egypt to Korea to Argentina, which typically “became skillful at capturing the state with rent seeking.”4 One of the key factors behind its autonomy was preserving what geographer Michael Kidron once called the “toe-hold of extraterritoriality”—financial connections with other parts of the world, whether in the form of market access or investment and technology flows.5 Another was the clever use of legal and bureaucratic mechanisms to create spaces of quasi-sovereign power such as company towns and networks of influence through educational and research institutions—making Tata a “state within a state.”6 Today, it is an open question whether Tata and other groups that might be so inclined could carve out the necessary room to maneuver in relation to the state.

    What follows is a closer look at two historical moments of realignment in state-capital relations in India. The first, in the immediate aftermath of independence, shows how Tata was able to use extraterritoriality to enhance autonomy and confront the regulatory state. The second, in the midst of a dual economic and political crisis in the 1970s, shows how the antagonisms of the preceding period were resolved and an enduring “state-business alliance” took shape.7 Viewed from Tata’s perspective, these two moments reveal the vulnerability of corporate capital and the unpredictable nature of its alignment with the state. 

    Independence and the politics of planning

    As it transitioned from trade to manufacturing, Tata walked a fine line between dependence on the colonial state (especially on procurement for military contracts) and engagement with the rising tide of anticolonial mobilization. Its attitude to the campaigns led by the Indian National Congress in the 1920s and 1930s, which involved the mass mobilization of workers and peasants, was mostly lukewarm. Tata’s vision of swadeshi (a capacious term denoting economic self-sufficiency) entailed producing goods within India and, as far as possible, training Indian technicians and managers in complex and capital-intensive industries. Its products, from cotton cloth to pig iron and finished steel, were exported globally until the Great Depression, when Tata shifted focus to domestic markets as financial connections with China and Japan broke down. 

During the Second World War, the economic powers of the colonial state “as purchaser, regulator, or patron” rapidly increased in inverse proportion with its eroding political control.8 India’s passage from British colony to playing field of great power conflict, first between Britain and the United States and then between the United States and the Soviet Union, allowed Indian business groups to use extraterritoriality in defense of their interests. Preexisting political divisions within the business class morphed into a diffuse scramble for proximity and access to the state, with foreign connections a vital asset. An industrial mission led by the charismatic and cosmopolitan Chairman J. R. D. Tata and his competitor, the astute nationalist G. D. Birla, set out to Britain and the United States in 1945 with high expectations but returned largely empty-handed, their hopes dashed by internal tensions and an overall shortage of foreign exchange.

    The Congress Party under Nehru, which took office in August 1947, was committed to “planning” (the mantra of the day) but riven by factional differences between Right and Left. In April 1948, a conciliatory Industrial Policy Statement sought to reassure big business and promised a crackdown on labor unrest. The government backed down in the face of threats of an “investment strike” across the entire private sector, welcoming foreign capital for big industrial projects and relaxing majority ownership requirements in joint ventures.9 But business leaders reacted differently to these overtures. Birla willingly approached American investors on Nehru’s behalf, while Tata grew determined to secure foreign investments privately. 

As it took shape over the following decade, the so-called “Nehruvian” regime entailed reserving certain industrial sectors for the public sector (unfortunately for Tata, precisely those in which it was a first mover, like steel and aviation), the imposition of a bureaucratic licensing system, and halting attempts at curbing monopoly power. The state’s economic policy was hardly “socialistic” (as it outwardly proclaimed in 1955) or even foundationally coherent.10 It rather sought to muddle through and make do. Anticipating a later Rumsfeldian adage, Minister of Commerce and Industry T. T. Krishnamachari told Nehru that while “some industrialists were unsavoury, they were the only industrialists the country had.”11 Tame as it may appear in retrospect, the Nehruvian order did pose a real problem for Tata, which found itself marginalized in New Delhi.

With outright resistance deemed unwise, Tata’s preferred way of engaging with the planning process became the “joint sector” enterprise. In this model, the private partner would agree to relinquish majority ownership in exchange for state capital investment while retaining day-to-day management control—the purest embodiment of state assistance without state discipline. The inspiration was Air-India International, created after the nationalization of foreign routes in 1948. The government held 49 percent of the shares, with an option to acquire an additional 2 percent, while Tata and private shareholders retained the rest.12 The feasibility of this model was put into question by the complete nationalization of airlines in 1953, which incensed J. R. D. and damaged his relationship with Nehru. At the same time, under the aegis of non-alignment, the government was entering into negotiations with foreign partners (German, British, and eventually Soviet) to build three new steel plants instead of funding the expansion of the existing Tata steel plant.

    In the end, it was the sudden announcement of the Soviet offer to finance one of the plants that indirectly revived Tata’s fortunes, by galvanizing Cold War panic in the Eisenhower administration. J. R. D. secured a $75 million loan from the World Bank guaranteed by the government, despite an earlier reluctance to abandon the “purely private nature of the venture which Tata wishes to maintain.”13 Meanwhile, Birla continued to be deputed by Nehru to negotiate deals in both Moscow and Washington. The discovery of a massive gap in foreign exchange reserves undermined the state’s freedom of policy action, as more “reserved” industries were thrown open to private investment.14 State-capital relations had improved without substantially compromising Tata’s autonomy, but the honeymoon would be short-lived.  

    Slowdown in growth and the origins of liberalization

    By the mid-1960s, in the wake of Nehru’s death and the ascent to power of his daughter, Indira Gandhi, big business was once again under attack. In a landmark report based on freshly disclosed company data, economist R.K. Hazari found a “clear and significant increase in concentration of economic power” between 1951 and 1958. The top two groups, Tata and Birla, controlled “nearly one-fifth of the gross capital stock of all non-government public companies,” while the share of the top four had increased from 20.44 percent to 25.66 percent. Hazari concluded that the Nehruvian licensing regime had failed in its constitutionally mandated objective of bringing about a “wider diffusion of economic power” in India. But he did not advocate breaking up or nationalizing the groups, in part because the web of cross-holdings that kept them together could not be untangled. Moreover, given India’s urgent need for growth after two consecutive years of decline, a “complete embargo” on their expansion would have been “suicidal.” Instead, Hazari proposed evening the scales by continuing to strengthen the public sector as a “countervailing force” and building up small and medium enterprises through preferential licensing and long-term industrial finance.15 

    After the 1967 elections, as she sought to bolster her electoral popularity and respond to a deteriorating economic environment (which included another foreign exchange crisis, a shock devaluation of the rupee, and rising inflation), Indira Gandhi broke with the old guard of the Congress and redefined herself as a populist champion of the poor. As part of this leftward turn, the government implemented a series of radical measures broadly in line with Hazari’s recommendations: the passage of the Monopolies and Restrictive Practices (MRTP) Act (1969), which established a fixed ceiling on group assets, the nationalization of banks and coal mines, and the abolition of the “managing agency” system that enabled the operation of diversified conglomerates. Licensing decisions were centralized in the Prime Minister’s office, sidelining the Planning Commission.16 The carrots to these sticks were the promise to selectively relax restrictions on favored groups and the absence of strong action on tax evasion. If anything, the onerous disclosure requirements under the MRTP Act and its arbitrary enforcement incentivized concealment and “black money.”17  

    In this treacherous landscape, J. R. D. Tata swung between overt criticism of the government (which reached its highest pitch yet) and backdoor compromise. In 1972, a year after the Prime Minister’s triumphal reelection, J. R. D. was invited to record his views on how to spur economic growth. In the so-called Tata Memorandum, he returned to the “joint sector” idea, proposing to fund a doubling of capacity at the Jamshedpur steel plant by creating a new company with 51 percent majority state control but managed by Tata exclusively. Conversely, the same mechanism could be used to open existing public sector companies such as the Shipping Corporation of India and the Indian Oil Company to private investment.18 The Communist Party of India (CPI), tenuously allied with the Congress at the time, accused Tata of trying to “completely corner all the funds of the public-sector financial institutions and the state funds” and divert them “to their industrial networks,” thus recreating the managing agency system by another name. Under such an arrangement, the government would shoulder the major financial burden for “joint sector” ventures while giving up managerial control—a lose-lose proposition.19 The CPI’s attacks failed to resonate as the oil shock of 1973 caused the second drop in annual growth in a decade and Indira Gandhi assumed authoritarian powers to deal with a restive political opposition. 

    From 1975 to 1977, a period known as the Emergency, Indira Gandhi ruled by decree, suspended civil liberties, jailed her opponents by the thousands, and launched a coordinated drive to raise productivity and impose economic and social “discipline” (the mantra of the day). Developmental objectives such as family planning, environmental protection, and urban renewal were coercively implemented.20 In theory, the descent into authoritarianism and the Prime Minister’s continuing demonization of “big money” and “powerful classes” should have brought the conflict between state and capital into the open. Superficially, the Emergency appeared to be a textbook illustration of the worst-case scenario business had been warning against—a socialist dictatorship. However, despite internal divisions in the boardroom, J. R. D. publicly supported the government’s “refreshingly pragmatic and result-orientated approach” and claimed that “no freedom of right matters more today than the freedom from want and the right to work and earn a decent living.”21 This stance came as a surprise to many, but it was prefigured by Tata executives’ repeated calls for the “maintenance of discipline” as the “basic requirement for orderly development”—phrases which could have come straight from the Prime Minister’s speeches.22

    During the Emergency, strikes and lockouts plummeted, licensing restrictions were relaxed, and corporate assets rose faster than ever before (Tata’s grew by 66.6 percent between 1972 and 1977). The state was suddenly responsive to big business demands—disciplining labor and reducing bureaucratic (but not corporate) corruption. In the process, the state’s own political legitimacy came to depend on the support of business and a nascent middle class.23 Even after Indira Gandhi’s electoral loss in 1977, when the Janata coalition of socialist, agrarian, and conservative parties (including the future BJP) briefly threatened the nationalization of the Tata steel plant and half-heartedly tried to revive antimonopolism, state-capital relations remained generally cordial.24

    In the long run, some of the protectionist measures introduced by the Janata government (such as the expulsion of IBM) played a role in catalyzing the rise of the software industry.25 Extraterritoriality came into play once again. Tata Consultancy Services (TCS) had struck an early partnership with IBM’s rival Burroughs in the race to bring computers to India and began recruiting talented Indian software engineers to offer low-cost data processing services to American companies, giving rise to the “offshoring” model.26 A similar story could be told about the spectacular success of the domestic pharmaceutical industry. As Chirashree Das Gupta has insightfully argued, in the 1970s “the effective strategy of the state became to respond to new demands from an emerging new capitalist class using lower-grade technologies in the domestic market that were not directly dependent on the licensing system.”27 Meanwhile, in the absence of a robust antimonopoly regulatory regime or substantive corporate governance reform, the largest groups remained firmly ensconced at the top of the pyramid.28

    Tata in the present

    In 1991, facing an acute balance of payments crisis that mirrored the situation in 1958, the Congress government under Prime Minister P. V. Narasimha Rao and Finance Minister Manmohan Singh introduced sweeping liberalizing reforms, removing many licensing requirements, monopoly controls, and foreign investment restrictions. For its part, Tata responded by embarking on a series of high-profile foreign acquisitions, recalling the 1950s strategy of leveraging extraterritoriality. The purchase of Corus Steel and Jaguar Land Rover in the UK proclaimed Indian capitalism’s triumphant “arrival” on the world stage it had ostensibly abandoned.29 However, the 2008 crisis exposed the risks of going too far down this path. The Corus acquisition turned out to be a poisoned chalice, as Tata Steel lost its competitive advantage derived from captive ore mines in India and became exposed to both Chinese competition and European political instability (in the form of Brexit). Senior Tata leadership grew divided on how to proceed, leading to a boardroom coup and ensuing court battle that tarnished the group’s public image.30

    Domestically, Tata became increasingly enmeshed in lobbying scandals and state-level political machinations during the two terms of the Congress-led United Progressive Alliance (UPA) government (2004–14). Protests over land acquisition in West Bengal stalled plans to manufacture the innovative Nano, a miniature, low-cost “people’s car.” The project then shifted to Gujarat following a generous offer of land and tax concessions by then-Chief Minister Narendra Modi. Ratan Tata joined Ambani in effusively praising Modi as “a leader of grand vision.” The stage was set for the 2014 elections to bring Tata closer to the corridors of state power than it had arguably ever been. 

Tata’s experience indicates that, even in the midst of an undeniable deglobalizing turn, the Indian economy will never be fully autarkic (nor was it during the heyday of Nehruvianism). Even without the export-led industrialization (ELI) strategy that underpinned the East Asian “miracle,” Indian business groups have survived by positioning themselves as conduits of foreign capital, technology, and expertise—or, in a more conceptual vein, as mediators between global and national scales of economic activity. Geopolitical shifts can dramatically alter the domestic market, strengthening some groups at the expense of others. In the wake of the COVID-19 pandemic, the Russian invasion of Ukraine, and China’s deteriorating relations with both India and the United States, reshoring manufacturing and increased defense production will become a major source of strength for Indian conglomerates. Due to its extensive horizontal diversification and long track record of investing in capital-intensive and high-technology projects, coupled with its embrace of Modi’s Atmanirbhar agenda, Tata should do well in these areas. It is no surprise that India’s first semiconductor assembly plant, announced in February 2024, will be a Tata project.

    Viewed in historical perspective, state-capital dynamics in India fluctuate constantly and are not always what they seem. It cannot be said that the state has (yet) been fully “captured” by big business, nor can it “engineer” a pliant capitalist class to deliver growth on command. Periods of economic expansion, such as the early 1950s, late 1980s, or mid-2000s, tend to enhance and concentrate corporate power. Slowdowns, as in the mid-1960s, early 1970s, and late 2010s, create realignments as conflicts break out in the open and compromises are reached. A near future of continued high growth with single-party political control and a capitalist class unable or unwilling to exercise its autonomy from the state would be unprecedented.

  9. The Iron Farm Bill

    Comments Off on The Iron Farm Bill

    Agriculture directly accounts for 10 percent of US greenhouse gas emissions.1 These emissions, which do not include onsite fossil-fuel use, come from soil and manure management and the digestive processes of ruminants, mostly cattle. Worse, there is a substantial “carbon opportunity cost” to keeping land in crops when it could instead be turned into forest, a powerful carbon sink.2 Consider that less than 10 percent of the US’s 90 million corn acres directly feed its people. After subtracting exports, the remaining 70 million acres or so are split about evenly between animal feed and ethanol for blending with gasoline.3 In a country overrun with calories, it is evident that there is room to reduce crop acreage and increase woodland coverage, delivering a double climate benefit. 
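
The acreage arithmetic implied by these figures can be laid out explicitly (the decomposition is mine, rounded to the source’s orders of magnitude):

$$\underbrace{90}_{\text{total corn acres}} - \underbrace{\approx 9}_{\text{direct food}} - \underbrace{\approx 11}_{\text{exports}} \approx \underbrace{35}_{\text{animal feed}} + \underbrace{35}_{\text{ethanol}} \ \text{(millions of acres)}.$$

On this accounting, roughly four in five corn acres grow feed or fuel rather than food.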

    The prospects for anything like this happening, however, are slim. Commodity crop associations and their Republican allies are currently seeking to repurpose $20 billion designated for “climate-smart” agriculture in the Biden Administration’s landmark climate law, the Inflation Reduction Act.4 Glenn Thompson, the Republican Chairman of the House Committee on Agriculture, recently described the funds as “riddled with climate sideboards” and called for them to be shifted “toward programs and policies that allow the original conservationists—farmers—to continue to make local decisions that work for them.”5 By absorbing IRA money into the farm bill while dropping the climate goals, Republicans can kill two birds with one stone. They can make funding available for popular, non-climate conservation programs that pay farmers to engage in practices they would likely undertake anyway, while creating budgetary room for what they really want to do: raise “reference prices” to enhance commodity subsidies that primarily benefit large growers. Farmers enrolled in programs such as Price Loss Coverage are indemnified whenever market prices fall below the reference price. Hence, a higher reference price means a higher income floor and less risk.6 
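
To make the income-floor mechanism concrete, here is a minimal Python sketch of a PLC-style payment. It is a stylized illustration under stated assumptions, not the statutory formula (actual PLC uses an effective reference price, program payment yields, and 85 percent of base acres, with the market-year price floored at the loan rate); all numbers below are hypothetical.

def plc_style_payment(reference_price, market_price, loan_rate,
                      payment_yield, base_acres, coverage=0.85):
    """Stylized Price Loss Coverage payment (illustrative, not statutory).

    The payment rate is the shortfall of the market-year price (floored at
    the loan rate) below the reference price; no shortfall, no payment.
    """
    effective_price = max(market_price, loan_rate)
    payment_rate = max(0.0, reference_price - effective_price)
    return payment_rate * payment_yield * base_acres * coverage

# Hypothetical corn numbers: raising the reference price raises the floor.
low = plc_style_payment(reference_price=3.70, market_price=3.30,
                        loan_rate=2.20, payment_yield=160, base_acres=1000)
high = plc_style_payment(reference_price=4.10, market_price=3.30,
                         loan_rate=2.20, payment_yield=160, base_acres=1000)
print(low, high)  # the higher reference price yields the larger indemnity

Because the payment is zero whenever the market price exceeds the reference price, a higher reference price widens the band of prices at which the government pays out, which is precisely why it functions as a higher income floor with less downside risk.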

Even if Democrats protect IRA funding, “climate-smart” agriculture so far appears to mean pouring federal money into often dubious technological solutions designed to sustain the dominant productivist approach of maximizing yields.7 This is in keeping with the IRA’s broader political-economic bent, which favors incentivizing the private sector to innovate and invest in decarbonizing technologies. Daniela Gabor has criticized this approach as “derisking” for business at the expense of public investment. In agricultural policy, however, derisking is nothing new. Since the 1920s, federal provision of subsidies and specialized credit facilities has underwritten farmers’ continued uptake of productivity-enhancing technologies by reducing uncertainties related to outlays for new capital investments.8 The IRA is already playing into a familiar agricultural policy script.

There were benefits to this approach in the past, but we now face a climate crisis that demands a fundamental reassessment of agricultural policy. This is a profoundly structural question. The existing policy instruments appear ill-suited for climate goals because they were made by a political coalition oriented toward the ever-increasing production of commodity crops and meat. Rather than continue this pattern, today’s rural agenda should be broadened, and it should rechannel a portion of agricultural subsidies toward a reduction of crop acreage in favor of reforestation.

    Origins of the Farm Bill Coalition

The bulk of US agricultural policy is made through the Farm Bill, which is negotiated every five years or so by a distinctive coalition of interests. At the core of this coalition is a tacit trade of subsidies for agricultural commodity producers that serve farm-state interests in exchange for food and nutrition programs that serve primarily urban constituencies in other states.9 This nifty political formula has allowed a vast government apparatus in support of agricultural commodity producers to endure despite the ever-dwindling number of farmers. Today, farms’ share of US GDP and employment amounts, respectively, to only 0.7 and 1.2 percent.10

    The farm bill has proven a highly adaptable policy instrument. There is a certain amount of intuitive sense in combining government supports for both crop producers and food consumers, even if the constituencies representing these distinct interests are often at opposite ends of the political spectrum. It is true that the details of program design and costs are subject to constant renegotiation, that additional interests have crowded into the coalition over time, and that the larger political environment impinges in various ways. Yet the basic deal persists, providing essential ballast to navigate the tricky policy-making waters of a sector that will always remain critically important no matter how small its footprint looks in the macroeconomic data.

    To understand how we got here, we can read history through the old political science idea of policy “iron triangles,” which refers to a mutually supporting arrangement of legislators, bureaucrats, and special interests constituting a “subsystem” of the larger political arrangement.11 In the United States, it is associated with the emergence of the congressional “farm bloc” and the American Farm Bureau Federation trade group during the early twentieth century, and its political ascent during the Great Depression and the postwar boom. Operating through the House and Senate agricultural committees, farm-state politicians of both parties united in prioritizing farmer interests formed the base of the triangle. Industry groups, especially so-called “peak” organizations like the Farm Bureau and the various commodity-crop associations, formed the triangle’s second edge. The same organizations also provided farmers with privileged access to administrators in the US Department of Agriculture, the implementing agency that completed the triangle.

    Broadly speaking, this original agricultural iron triangle pursued two purposes somewhat in tension with each other. The first was to promote the productive efficiency of American agriculture through technological innovation and capital intensification. The second was to sustain the character of rural America in a world economy of volatile commodity prices and relentless pressure to increase operational scale.12 Defended in terms of the “family farm” ideal, the character of rural America for many—though by no means all—also implied preserving traditional racial, gender, and class inequalities. Public policy had two main channels: payments to farmers to smooth out fluctuations in their earnings, and large public investment in scientific research and technological development through the USDA and land-grant universities.

    The system was extremely successful on the technology and productivity side, but less so in maintaining rural character. In the postwar years, dramatic gains in yields generated agricultural surpluses that flooded international markets and piled up in government warehouses, putting pressure on the farmer payments formula. Capital intensification continued apace, straining the financial resources of many farmers and accelerating the long-standing consolidation of ownership in the agricultural sector (which to this day drives many farmers out of the business). Suburbanization converted much farmland into housing developments and helped draw increasing numbers of Americans out of agriculture and into other areas of the economy. Simultaneously, the rise of non-agricultural employment in rural settings, especially manufacturing, transformed the character of the remaining rural workforce. Importantly, the Civil Rights Movement challenged, if it did not entirely overturn, basic features of the old rural order.

    High yields and new interests

    The long postwar boom therefore saw deep changes in rural America and the agricultural sector. One fundamental consequence was the relative decline of rural voting power within the national electorate. Another was that suburban purchasing power replaced rural purchasing power in the macroeconomic imaginations of liberal politicians and policymakers. For Franklin Roosevelt and key figures in his administration, economic downturns could not be solved without boosting farm incomes to generate rural demand. But for John F. Kennedy and his close advisers, the problem of farm families’ spending power was not how to raise it but how to manage it without weakening the Democrats’ political coalition. Rural voters became a constituency to placate just enough to keep Democratic losses in their districts tolerable.13 The steady reduction of farmers’ centrality to the national political economy was matched by a rebalancing of agricultural policy’s original dual mandate. The preservation of a particular kind of rural society gave way to a more singular focus on agricultural productivism.

    Production itself was also changing and not only in terms of scale and capital intensity. Postwar trends drove a shift in the agricultural output mix, as meat and manufactured foods became increasingly important.14 The astonishing profusion of commodity crop production required finding new ways to upsell calories for an affluent consumer society. Meat was the economic solution to the historically unprecedented problem of large structural caloric surpluses. That shift has proven particularly significant for climate change. Conventional meat is an inherently inefficient way of making food for humans because most of the solar energy that goes into feed crops gets expended by the animals’ basal metabolic processes. In the case of cattle and other ruminants, that includes the release of enormous quantities of the potent greenhouse gas, methane, by a digestive process called enteric fermentation. Consequently, meat’s rise in American and, increasingly, global diets has significantly contributed to climate change. Enteric fermentation alone accounts for 25 percent of US methane emissions, while feed crop acres could have sequestered a great deal of carbon had they been converted to permanent forest.15

    As early as the 1960s it became clear that postwar changes required that agricultural policy adopt a new political framework. Food assistance—in the form of food aid abroad to serve US foreign policy goals and antihunger programs at home to fight domestic poverty—became a basic political pillar of the agricultural state. The introduction of food assistance complicated the old iron triangle structure by adding a major new set of interest groups and constituencies: anti-poverty advocates who looked to issues of urban hunger rather than rural incomes. Yet this innovation was handled relatively easily by reshuffling agricultural committee assignments to give new voices a seat at the table, bringing antipoverty civil society groups into an enlarged coalition. The approach was formalized in the 1973 farm bill. Over the ensuing half century, this basic legislative framework for financing US agricultural policy would attest to a political formation that was, if no longer exactly a triangle, still durable as iron.

    Agriculture for whom?

    The farm bill coalition has lasted in large part because it is adaptable. New interests have joined the coalitional mix and the structure of commodity supports has changed repeatedly. Besides nutritional assistance, environmental protection has been an important addition to the US agricultural policy coalition—politically pregnant in the age of climate mitigation coalitions. New Deal agricultural policy had included conservation measures, but they were geared toward sustaining production. Their major advocates were scientists, administrators, and forward-looking agriculturists concerned about American farming’s long-term viability. But after the 1960s, conservation measures, pushed by movement environmentalists, treated environmental wellbeing as a good in itself and dealt with issues such as the health risks posed to non-farm constituencies by chemical insecticides and fertilizer runoff.

    Nevertheless, the natural resources economist Silvia Secchi argues that even these programs continued to play second fiddle to the goal of expanding commodity production.16 The programs were always voluntary, effectively giving farmers an alternative, countercyclical form of subsidy—restricting planting under a new guise when prices were low—while allowing them to abandon conservation practices when prices were high. This is one reason Republicans are happy to increase conservation funding with IRA money today—so long as climate goals, which would require long-term commitments, are kept out of the equation.

    From the 1960s, also, as the historian Sarah Phillips has recently argued, commodity support programs were liberalized—even neoliberalized—to subject farmers to more market discipline. The Farm Bureau, together with corn and soybean growers, favored this shift in order to loosen the reins of federal supply management policies. Since the New Deal, the supply management paradigm had aimed to maintain prices at an acceptable level by limiting production. With the postwar Pax Americana’s opening of new overseas markets, some farmers began to see supply management as needlessly restrictive. That view was connected to the rise of meat in global diets, because corn and soybeans were the key feed crops used to fatten animals for slaughter. The Civil Rights Movement also played an important if unexpected role by weakening the power of southern cotton growers and their congressional representatives, who had previously exercised a great deal of control over federal agricultural policy in alliance with midwestern wheat interests. According to the sociologist Bill Winders, the interaction of these global and domestic trends empowered corn and soybean growers, leading to the end of supply management during the 1990s.17

    As new forms of crop insurance have come to replace the old paradigm, the private insurance industry has joined the corn- and soybean-led trade interests.18 This has financialized commodity supports. Farmers must now be sophisticated risk managers, a circumstance that presumably favors those best schooled in financial accounting.19 Crop insurance has made the insurance industry a powerful new player in the farm bill coalition: in effect, the federal government has outsourced important features of its commodity crop support policy to private actors who enjoy lucrative gains. According to the economists Bruce Babcock and Chad Hart, it now takes two dollars of government spending to deliver one dollar of benefits to farmers—the other dollar goes to insurers.20

    Agricultural policy has grown increasingly labyrinthine under these numerous interests. It now encompasses so many goals and constituents, many of them not directly connected to agriculture at all, that the economists Gordon Rausser and Harry de Gorter describe it as an “iron maze” rather than an iron triangle.21 The expanded purview has certainly come with some important gains. The Supplemental Nutrition Assistance Program, or SNAP, which replaced what were previously known as food stamps, serves more than 40 million recipients each month, constituting an increasingly important part of the American social safety net. Moreover, recent economic analysis shows that a supposed principal downside of SNAP, reduced labor supply, is probably baseless.22

    Despite this heterogenous coalition, the essential feature of the original iron triangle remains operative in one key sense. So long as agricultural policy is routed through the House and Senate agricultural committees and implemented by the USDA—that is, so long as it is done mainly through the farm bill—commodity producers will continue to enjoy home field advantage, notwithstanding the real influence other interests have achieved.

    Food assistance advocates may have won a seat at the table, but not the power to preside. Marcia Fudge’s 2020 campaign to be named the Biden administration’s Secretary of Agriculture is telling in this respect. Representing a large metropolitan area in several prior farm-bill rounds, Fudge was plainly qualified to run an agency with a budget that is 75 percent devoted to SNAP. Yet she was instead given the Department of Housing and Urban Development, while Agriculture went to former Iowa Governor Tom Vilsack, a veteran figure identified with traditional farm politics and its industrial agriculture lobby. This is one reason why the food scholars Gabriel Rosenberg and Jan Dutkiewicz have recently called for the replacement of the USDA with a Department of Food, which would lean toward food consumers rather than producers.23 That urban anti-poverty advocates continue to favor the farm bill arrangement might be explained by their belief that hiving SNAP off into a different configuration of legislative committees and executive agencies would expose the program to cuts or even elimination. The partnership with commodity crop growers offers protection.

    The political longevity of the farm bill reflects the structural advantages afforded both to farm states in Congress and moneyed interests in America. Even if they do not control everything and must adapt to new economic and political pressures, commodity farmers operate within an intrinsically favorable institutional environment; those who would repurpose agricultural policy towards alternative constituencies remain their political captives. In the most recent study of farm bill politics, the political scientist Clare Brock has found that party polarization and the consequent congressional gridlock add to this long-standing edge: as the policymaking timeline stretches out, only deep-pocketed lobbyists can afford to see the process through.24 In short, while the agricultural policy space may now look more like a maze than a triangle, its new points of entry have not come with new exits. The players remain trapped within the commodity producers’ domain.

    The surest way to do “climate smart” agriculture is simply to convert excess farm acreage into new forests, providing not only a carbon sink but a range of environmental benefits from biodiversity to cleaner water. New England offers something of a precedent: most of the farmland it cleared in the nineteenth century returned to forest in the twentieth.25 Crucially, young forests in the temperate climates that characterize much of the United States are particularly well suited to capturing atmospheric carbon and storing it for long periods.26 While there is little reason to exaggerate what can be achieved here, there is even less reason to diminish it. In the words of forestry experts, trees remain “without a doubt the best carbon capture technology in the world.”27 Several econometric studies have found that at the widely proposed benchmark carbon price of $50 per ton, the conversion of pasture and cropland to new forest would sequester 200 million tons of carbon annually.28 This figure is roughly equal to the current annual carbon sequestration rate of all US forest and woodland.29 Meanwhile, section 45Q of the IRA allows 12-year tax credits of up to $130/ton of geologically sequestered CO2 with “enhanced oil recovery.”30
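    A back-of-envelope comparison makes the gap between these two benchmarks concrete. The sketch below uses only the figures just cited, plus the standard 44/12 carbon-to-CO2 mass conversion; reading the econometric studies' figure as tons of carbon rather than tons of CO2 is an assumption about units.

    ```python
    # Comparing the forest benchmark with the IRA's 45Q credit, using the
    # figures cited above. The 44/12 ratio converts tons of carbon to tons
    # of CO2 (an assumption about the units in the cited studies).
    FOREST_PRICE = 50      # $/ton of carbon, benchmark price in the cited studies
    FOREST_TONS = 200e6    # tons of carbon sequestered annually at that price
    CREDIT_45Q = 130       # $/ton of CO2, upper-bound 45Q credit cited
    C_TO_CO2 = 44 / 12     # ~3.67 tons of CO2 per ton of carbon

    annual_outlay = FOREST_PRICE * FOREST_TONS
    print(f"Annual forest outlay at $50/ton: ${annual_outlay / 1e9:.0f} billion")

    forest_price_per_ton_co2 = FOREST_PRICE / C_TO_CO2
    print(f"Forest benchmark: ~${forest_price_per_ton_co2:.2f} per ton of CO2")
    print(f"45Q upper bound:   ${CREDIT_45Q} per ton of CO2")
    ```

    On these assumptions, an annual outlay on the order of $10 billion would buy forest sequestration at roughly a tenth of the per-ton cost of the most generous geological credit.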

    Commodity crop growers can be expected to fight any kind of acreage reduction agenda. There are signs, however, that their grip on power has loosened. The established coalition has noticeably wobbled in recent iterations of the farm bill cycle as partisan polarization, especially Republicans’ hard right turn, stresses the traditional deal.31 The coalition’s continued gamble on outlasting political attrition in the farm bill process may be lengthening its own odds. Since the 2012–14 negotiations, Freedom Caucus types have behaved as if they wanted to blow the whole thing up, demanding unacceptable cuts to nutrition programs and, now, a ban on the mailing of the abortion pill mifepristone. But the attempted IRA money grab is the main reason the current cycle has stalled, necessitating an extension of the 2018 law through at least September 2024. The old policy framework, in the words of the political scientist Adam Sheingate, seems to be suffering “decay.”32 The slow accretion of policy aims and constituencies may have degraded the coherence of the core bargain. The situation may also reflect a broader exhaustion of the neoliberal order’s programmatic agenda, of which polarization is itself a symptom. Either way, a “policy window” could be opening up through which the old farm bill policy feedback mechanisms can be replaced with new ones.

    Here it is well to remember that farmers, especially the big ones, constitute a minority of even the rural population, and that reforestation is entirely compatible with what the agronomist and farm policy reformer Sarah Taber calls “working countrysides.” For instance, sustainable tree harvesting can be combined with wood product industries, including environmentally sound commercial products that double as long-term carbon storage devices, such as mass timber and biochar.33 Public recreational facilities and associated tourism services offer another way for a forested countryside to earn income beyond the current limitations of agricultural policy. The best bet for the environment, from a political economy perspective, is to design positive policy feedback loops that build durable rural constituencies committed to real climate action in agriculture—that is, to create new iron triangles. At any rate, a climate-oriented agricultural politics will have to be creative in how it builds effective shapes of power.

    This article is adapted from the introduction to a special forum, “Revisiting the ‘Iron Triangle’ of Agricultural Policy,” that will appear in the August 2024 issue of the journal Agricultural History. The forum’s essays cover in more detail many of the issues raised here.

  10. Illiberal Developmentalism

    Comments Off on Illiberal Developmentalism

    The election of Prabowo Subianto to the Indonesian presidency two months ago has been warmly welcomed by the country’s business and political elites. The day following the result, the Jakarta composite index rose by 1.3 percent, a tacit sign of assent for President Joko Widodo’s chosen successor, who was always expected to win the vote. Prabowo (as he is popularly known) has promised continuity with the ten-year regime of Jokowi (as Widodo is popularly known), emphasizing infrastructural development and downstream industrialization. Prabowo also shares Jokowi’s ambition to overcome the so-called middle-income trap before Indonesia’s hundredth anniversary of nationhood in 2045. This is a serious undertaking, which would entail a growth rate of at least 6 or 7 percent. That pace was achieved for three decades after the fall of Sukarno in 1967 but could not be maintained in the aftermath of the Asian Financial Crisis; between 1999 and 2014, growth sat at around 5 percent. Despite his efforts to attract foreign investment and build up Indonesian industry, Jokowi’s development model proved unable to lift rates any higher.
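    The arithmetic behind the 6-7 percent target is simple compounding, sketched below in Python. The baseline income and high-income threshold are illustrative stand-ins (roughly World Bank Atlas figures circa 2023, not numbers from this article), and population growth is ignored, so the sketch shows the logic of the target rather than a forecast.

    ```python
    # Why roughly 5 percent growth is not enough to escape the middle-income
    # trap by 2045, under illustrative assumptions. The baseline and threshold
    # are rough stand-ins, not figures from the article; population growth is
    # ignored for simplicity.
    BASE_INCOME = 4_600   # USD per capita, assumed starting point
    THRESHOLD = 14_000    # USD per capita, assumed high-income cutoff
    YEARS = 2045 - 2024

    for growth in (0.05, 0.06, 0.07):
        income_2045 = BASE_INCOME * (1 + growth) ** YEARS
        verdict = "clears" if income_2045 >= THRESHOLD else "falls short of"
        print(f"{growth:.0%} growth -> ${income_2045:,.0f} by 2045, {verdict} the threshold")
    ```

    At 5 percent, income nearly trebles over two decades yet stays below the cutoff; at 7 percent it more than quadruples and clears it comfortably, which is why the target matters.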

    As Prabowo assumes the helm, the pressure to generate growth is only intensifying. Given his highly nationalist ideology and the powerful circle of business elites championing his administration, there are concerns that Prabowo will be progressively drawn toward illiberalism. An anti-democratic shift was already visible under Jokowi, and Prabowo’s populist vision of state-led development is likely to exacerbate it. The result may be disastrous, not only for the country’s institutional infrastructure but for vulnerable and marginalized groups at the bottom of the economic pyramid, whose livelihoods state-backed development projects are likely to threaten.

    Jokowi’s legacy

    Jokowi’s 2014 election, as outlets like the Economist declared, marked a period of “new hope” for the future of Indonesian democracy. For the first time, the country would be led by a layman who had no direct connection with its entrenched political and military aristocracies. As such, it could just as well be said that celebrations were prompted by the defeat of Prabowo—the former son-in-law of Indonesia’s late dictator General Soeharto—who, in 2014, was considered too close to the establishment. “Jokowi’s election victory,” wrote one reporter for Time, “symbolized the electorate’s triumph over a ruling clique that had long treated this resource-rich nation as a private fiefdom.”1

    Hope for the president-elect also had an economic dimension. Jokowi set the ambitious goal of raising Indonesia’s annual growth rate above 7 percent, which he hoped to achieve through infrastructure investment and downstream processing. In his 2016 State of the Nation Address, he promised to accelerate the development of key infrastructure, funded with cuts to fuel subsidies. The plans included the addition of 35,000 megawatts of electricity to the grid, upgrading or building five port hubs and nineteen feeder ports, developing 3,650 kilometers of new roads, and achieving 100 percent access to clean water nationwide. The government also promised to build or rehabilitate dams to raise food production.

    Jokowi did make some impressive gains: within four years, infrastructure expenditures more than doubled, from USD 13.03 billion in 2014 to USD 26.85 billion in 2018. From 2018 to 2023, the country spent an average of USD 27 billion per year on infrastructure.2 Around 1,885 km of toll roads were built between 2014 and 2022, and a target was set for building a further 1,800 km.3 Jokowi’s administration also built 31,410 km of new (non-toll) roads, as well as 1,502 new ports and fifty new airports across the country in the same period. The country’s electricity capacity increased by twenty gigawatts and the electrification ratio rose from 81.5 percent in 2014 to 99.63 percent at the end of 2022.4 Between 2015 and 2022, the government completed thirty-six dams capable of irrigating 245,103 hectares of crop fields, and it aimed to build twenty-five more in the following two years.5

    Impressive as this was, the administration also demonstrated crucial shortcomings. Upon election, Jokowi faced immense political resistance that left him somewhat marginalized. In order to regain a position of authority, he retracted his promise to build a narrow-based coalition dominated by non-partisan professionals, instead developing a broad-based coalition government that included parties affiliated with Prabowo. The reshuffle increased the proportion of legislative seats under Jokowi’s control: from as few as 37 percent at the outset to 71 percent by the end of his second term.6 Another area of concern under Jokowi has been democratic backsliding in key areas—from horizontal accountability to civil-military relations and civil liberties. The Economist Intelligence Unit estimates that Indonesia’s democracy index declined from its historic peak of 7.03 in 2015 (just a year into Jokowi’s first term) to its historic low of 6.3 in 2020. The score rose to 6.71 the following year before declining again to 6.53 in 2023.7 According to Freedom House, political rights and civil liberties continued to decline during this period, and Indonesia today is categorized as “partly free.”8

    A key problem is executive aggrandizement.9 Not only did the government suppress party-based opposition, it also repressed its critics and opponents in civil society organizations. It did not hesitate to replace opposition leaders with pro-Jokowi politicians in party leadership circles, as in the replacement of Suryadharma Ali with Muhammad Romahurmuziy in the Partai Persatuan Pembangunan (United Development Party, PPP) and of Aburizal Bakrie with Setya Novanto in the Golkar Party in 2016. These interventions had the predictable effect of weakening internal contestation within the parties and their capacity for political representation.10 Outside the party system, the government’s repressive measures against critics and opponents within civil-society organizations eroded civil liberties. In 2009, only 17 percent of survey respondents believed that people were afraid of talking about politics. Five years after Jokowi took office, the figure had soared to 43 percent. Fear of arbitrary arrest also rose, from 27 percent in 2009 to 38 percent in 2019.

    The consolidation of power continued in Jokowi’s second term. After defeating Prabowo for the second time in 2019, Jokowi surprised voters by appointing him defense minister. The appointment enabled the military to expand its political influence, regaining internal security functions formerly transferred to the police and taking a larger proportion of seats within the state bureaucracy.

    A degree of democratic backsliding, it was argued, was necessary to achieve economic growth. Jokowi’s development projects were predicated on undermining the Corruption Eradication Commission (Komisi Pemberantasan Korupsi, KPK), the only credible law enforcement agency in the country. During his first days in office, he cultivated an appearance of rectitude by involving the KPK in the selection of ministers and other high-ranking officials. But he backed down when the KPK’s aggressive anti-corruption enforcement affected his development projects and generated strong resistance from business and political elites. He backed a revision of the KPK law that curtailed the agency’s independence, and corruption proliferated. Indonesia’s Corruption Perception Index (CPI) score rose and then fell during Jokowi’s incumbency. It increased from thirty-six in 2015 to thirty-eight in 2021, when the country ranked ninety-sixth out of 180 countries surveyed. Yet Indonesia’s CPI score dropped to thirty-four in 2022, and the country ranked 110th out of 180 countries.

    Jokowi’s economic record is mixed. Even with the increased number of dams, Indonesia struggles to meet its domestic food demand. Food imports increased significantly under Jokowi’s presidency. In 2014, Indonesia imported nearly 2.2 million tons of rice, fruit, and vegetables at a value of approximately USD 1.6 billion. By 2023, food imports had risen to 4.7 million tons, with a soaring value of USD 4.3 billion.

    Figure 1: Indonesia’s Food Imports, 2014–2023

    Source: Data from Indonesia Statistics

    Logistics is another area of concern. Despite the massive development of roads, ports, and airports, Indonesia’s rating dropped in the World Bank’s Logistics Performance Index (LPI), which covers customs, infrastructure, international shipments, logistics quality and competence, and tracking and tracing. After improving from sixty-third in 2016 to forty-sixth in 2018, Indonesia sank back down to sixty-third in 2023, a rating lower than those of neighboring countries, especially Thailand (thirty-fourth), Malaysia (twenty-sixth), and Singapore (first). There is reason to believe that this low performance is linked to higher levels of corruption: countries with higher LPI scores usually score better on the Corruption Perception Index.

    Outcomes in downstream processing have been mixed as well. Since 2009, Indonesian mining laws have prohibited the export of unprocessed minerals and required companies to process or smelt their ores domestically. The laws gave mining companies five years to develop processing facilities before the export ban took effect in 2014. However, delays in the development of smelting facilities and changes in the government’s calculations in the mineral sector compromised the policy’s implementation. It was not until Jokowi’s second term that the export ban was enforced seriously. Beginning in 2020, the Minister of Energy and Mineral Resources banned nickel ore exports, with the aim of extending the ban to other minerals in the following years.

    Foreign investment in downstream processing under Jokowi increased significantly (rising from USD 4.36 billion in 2019 to USD 24.63 billion in 2023)11 and, as a result, so did metal exports. Indonesia’s iron and steel exports rose from 4.5 million tons in 2018 to 14.9 million tons in 2022, while ferronickel exports increased from 0.9 million tons to 5.8 million tons over the same period. Job creation from these increases, however, has been weak. The refining and processing of mineral ores is capital- and technology-intensive, and so absorbs relatively little labor. Manufacturing’s share of employment, for example, fell from 14.17 percent in 2022 to 13.83 percent in 2023.

    Potential trajectories

    The election of Prabowo, who is known for his close ties to the oligarchy, signals the growing influence of political and business elites in Indonesia. Prabowo’s military career took off under the rule of his then father-in-law, General Soeharto, and he has admitted to his role in abducting student activists—some of them still missing—in 1998. Following some years of political exile in Jordan, in the early 2000s Prabowo returned to Indonesia to build his political career and run his companies, which operate in sectors ranging from coal mining to forestry and plantations. With a total net worth of USD 130.77 million (IDR 2.04 trillion), Prabowo was the wealthiest presidential candidate in 2024.12 His brother and main supporter, Hashim Djojohadikusumo, chair of the Arsari Group, was, according to Forbes, the fortieth richest person in Indonesia in 2022, with total assets worth USD 685 million (IDR 10.4 trillion).13

    Hashim is certainly not the only tycoon behind Prabowo. A few days before the presidential election, Garibaldi Thohir, CEO and a significant shareholder of Adaro Energy—one of the world’s largest coal exporters—declared his support. A number of national business figures also pledged their support to Prabowo’s national campaign, including Aburizal Bakrie, Hatta Rajasa, Pandu Patria Sjahrir, Putri K. Wardhani, Erwin Aksa, Theo Sambuaga, Totok Lusida, and Wishnu Wardhana. Rosan Roeslani, the chair of Prabowo’s National Campaign Team, chaired the Indonesian Chamber of Commerce and Industry (KADIN Indonesia) from 2015 to 2021. According to Jaringan Advokasi Tambang (the Mining Advocacy Network, JATAM), coal tycoons backed Prabowo as well.14

    This consolidation of political and business power will allow Prabowo to pursue long-term economic development and continue Jokowi’s infrastructural and downstream industrialization projects. But there is no guarantee that such development will translate into higher rates of growth. Since mineral processing is capital-intensive, most of the benefits will be reaped by Chinese companies, which are the main investors.15 Some analysts, including former vice president Jusuf Kalla, have lamented how little of the gains from nickel exports flow into government coffers. Many have argued that what Indonesia needs most is not export bans but improved logistics systems, which would allow companies to maximize the benefits of mineral exports and strengthen the structure of domestic industry.16 More resources will also be needed to expand research and development capacity across the board, but especially in agriculture, given farmers’ limited resources.17

    The government’s food estate program has been criticized not only for violating community-based notions of food sovereignty, but for not delivering on its promise of enhancing food production. The military’s involvement in the program has also attracted criticism. In its current construction, the program serves the interests of the political and business elite more than those of farmers.18 Similarly, reports about the negative impacts of nickel mining on the livelihoods of local people abound. Not only has mining exacerbated deforestation, it has also degraded the rivers and coastal areas where mining companies dump waste. Some evidence also indicates that poverty has increased in recent years in the regions where nickel is extracted and processed.

    So far, Prabowo has not indicated how his economic programs would respond to such criticisms. Yet, a few days after his electoral victory was announced, his team stated that the government’s economic programs will focus on three main drivers: food, energy, and manufacturing. In line with the leader’s nationalist narrative, the team framed these programs in terms of building economic self-reliance. Prabowo has made the case for economic nationalism before, writing that “we expect the government should not be hesitant to get involved and drive economic growth.” His position has received broad assent from the bureaucracy, political parties, and the private sector, with only a few economists and liberal intellectuals openly criticizing his nationalism.

    In 2024, it seems doubtful that such a statist model of development will achieve a meaningful distribution of wealth. Income inequality remains fairly high in Indonesia, with a stubborn Gini ratio of 0.39. Addressing inequality will require more than industrial development. It will need a strong, broad-based green coalition to push sustainable and equitable development agendas. The country does not lack vibrant social movements pushing such agendas: JATAM, Greenpeace Indonesia, and WALHI (Wahana Lingkungan Hidup Indonesia, or the Indonesian Environmental Forum) all address economic justice, while AMAN (Aliansi Masyarakat Adat Nusantara, or the Alliance of Indigenous Peoples of the Archipelago), SPI (Serikat Petani Indonesia, or the Indonesian Farmers Union), and KPA (Konsorsium Pembaruan Agraria, or the Consortium for Agrarian Reform) focus on economic rights. Indonesia also has numerous trade unions and, more recently, a new Labour Party. Non-governmental organizations such as Ganbate (Gerakan Energi Terbarukan, or the Renewable Energy Movement) have also emerged, working on energy transition issues. Despite these positive developments, the issues often remain siloed, with organizations operating in isolation from one another. A broad-based green coalition, consisting of these organizations and other actors, including those from the private sector, is needed to address the country’s new political and economic challenges. Credible pressure from such a coalition can be a powerful instrument to defend the economic interests of the lower economic classes and advance the green economy.