Washington Consensus
I encountered rank disbelief in the Congressmen before whom I was testifying that there were any significant changes in economic policies and attitudes in process in Latin America. After discussion with Fred Bergsten, the director at the Institute for International Economics, where I was (and am) professionally located, we decided to convene a conference to test the extent to which I was right and to put the change in policy attitudes on the record in Washington. A few weeks later I gave a seminar at the Institute for Development Studies in England, where I made much the same argument. I was challenged by Hans Singer to spell out what I meant when I said that many of the countries were changing their policies for the better. This emphasized the need to be very explicit about the policy changes that I was thinking of. I decided that the conference we were planning for the autumn, which we decided to call “Latin American Adjustment: How Much Has Happened?”, needed a background paper that would spell out the substance of the policy changes we were interested in. That paper was entitled “What Washington Means by Policy Reform” and was sent to the ten authors who had agreed to write country studies for our conference, to try to make sure that they addressed a common set of issues in their papers. That paper said, inter alia, on its opening page:

Th[is] paper identifies and discusses 10 policy instruments about whose proper deployment Washington can muster a reasonable degree of consensus…. The paper is intended to elicit comment on both the extent to which the views identified do indeed command a consensus and on whether they deserve to command it. It is hoped that the country studies to be guided by this background paper will comment on the extent to which the Washington consensus is shared in the country in question…. The Washington of this paper is both the political Washington of Congress and senior members of the administration and the technocratic Washington of the international financial institutions, the economic agencies of the U.S. government, the Federal Reserve Board, and the think tanks. The Institute for International Economics made a contribution to codifying and propagating several aspects of the Washington consensus in its publication Toward Renewed Economic Growth in Latin America (Balassa et al. 1986).

My opinion at that time was that views had pretty much coalesced on the sort of policies that had long been advocated by the OECD. I specifically did not believe that most of the “neoliberal” innovations of the Reagan administration in the United States or the Thatcher government in Britain had survived the demise of the former (Mrs. Thatcher’s government was still in its death throes at the time). The exception was privatization, which was Mrs. Thatcher’s personal gift to the economic policy agenda of the world, and which by 1989 had proved its worth. But I thought all the other new ideas with which Reagan and Thatcher had entered office, notably monetarism, supply-side economics, and minimal government, had by then been discarded as impractical or undesirable fads, so no trace of them can be found in what I labeled the “Washington Consensus.” Of course, acceptance as relevant to the developing world of ideas that had long been motherhood and apple pie in the developed world was a momentous change.
All through the Cold War the world had remained frozen in the 1950s’ classification of First, Second, and Third Worlds, each of which was assumed to have its own distinct set of economic laws. 1989 marked the end of the Second World, to the great relief of most of its subjects, and also the end of the intellectual apartheid that had so long assumed that citizens of the Third World behaved quite differently to those of the First World. But the globalization of knowledge never meant general acceptance of neoliberalism by any definition I know of.

Content of the Original List

The ten reforms that constituted my list were as follows.

1. Fiscal Discipline. This was in the context of a region where almost all countries had run large deficits that led to balance of payments crises and high inflation that hit mainly the poor because the rich could park their money abroad.

2. Reordering Public Expenditure Priorities. This suggested switching expenditure in a progrowth and propoor way, from things like nonmerit subsidies to basic health and education and infrastructure. It did not call for all the burden of achieving fiscal discipline to be placed on expenditure cuts; on the contrary, the intention was to be strictly neutral about the desirable size of the public sector, an issue on which even a hopeless consensus-seeker like me did not imagine that the battle had been resolved with the end of history that was being promulgated at the time.

3. Tax Reform. The aim was a tax system that would combine a broad tax base with moderate marginal tax rates.

4. Liberalizing Interest Rates. In retrospect I wish I had formulated this in a broader way as financial liberalization, stressed that views differed on how fast it should be achieved, and—especially—recognized the importance of accompanying financial liberalization with prudential supervision.

5. A Competitive Exchange Rate. I fear I indulged in wishful thinking in asserting that there was a consensus in favor of ensuring that the exchange rate would be competitive, which pretty much implies an intermediate regime; in fact Washington was already beginning to edge toward the two-corner doctrine, which holds that a country must either fix firmly or else it must float “cleanly”.

6. Trade Liberalization. I acknowledged that there was a difference of view about how fast trade should be liberalized, but everyone agreed that was the appropriate direction in which to move.

7. Liberalization of Inward Foreign Direct Investment. I specifically did not include comprehensive capital account liberalization, because I did not believe that did or should command a consensus in Washington.

8. Privatization. As noted already, this was the one area in which what originated as a neoliberal idea had won broad acceptance. We have since been made very conscious that it matters a lot how privatization is done: it can be a highly corrupt process that transfers assets to a privileged elite for a fraction of their true value, but the evidence is that it brings benefits (especially in terms of improved service coverage) when done properly, and the privatized enterprise either sells into a competitive market or is properly regulated.

9. Deregulation. This focused specifically on easing barriers to entry and exit, not on abolishing regulations designed for safety or environmental reasons, or to govern prices in a non-competitive industry.

10. Property Rights.
This was primarily about providing the informal sector with the ability to gain property rights at acceptable cost (inspired by Hernando de Soto’s analysis).

First Reactions

The three American discussants whom I had invited to react to my paper were Richard Feinberg (then at the Overseas Development Council), Stanley Fischer (then Chief Economist at the World Bank), and Allan Meltzer (then as now a professor at Carnegie Mellon University). Feinberg and Meltzer were intended to make sure that I had not represented as consensual anything that one or other side of the political spectrum would regard as rubbish, while Fischer would play the same safeguard role as regards the IFIs.

Fischer was most supportive of the basic thrust of the paper, saying that “there are no longer two competing economic development paradigms” and that “Williamson has captured the growing Washington consensus on what the developing countries should do.” But he pointed to some areas that I had not commented on and where sharp disagreements remained, such as the environment, military spending, a need for more comprehensive financial reform than freeing interest rates, bringing back flight capital, and freeing flows of financial capital. It was not my intent to argue that controversy had ended, so I would not take issue with his contention that there remained sharp disagreements on a number of issues (including the desirability of capital account liberalization). And my initial paper did indeed formulate the financial liberalization question too narrowly.

Meltzer expressed his pleasure at finding how much the mainstream had learned (according to my account) about the futility of things like policy activism, exploiting the unemployment/inflation tradeoff, and development planning. The two elements of my list on which he concentrated his criticism were once again the interest rate question (though here he focused more on my interim objective of a positive but moderate real interest rate than on the long-run objective of interest rate liberalization) and a competitive exchange rate. The criticism of the interest rate objective I regard as merited. His alternative to a competitive exchange rate, namely a currency board, would certainly not be consensual, but the fact that he raised this issue was my first warning that on the exchange rate question I had misrepresented the degree of agreement in Washington.

Feinberg started off by suggesting that there really was not much of a consensus at all, but his comment mellowed as it progressed, and he concluded by saying that there was convergence on key concepts though still plenty to argue about. His most memorable line does not appear in his written comment but consisted of the suggestion that I should have labeled my list the Universal Convergence rather than the Washington Consensus, since the extent of agreement is far short of consensus but runs far wider than Washington. He was of course correct on both points, but it was too late to change the terminology. The point about how much more apt it would have been to refer to a universal convergence rather than a Washington consensus was rubbed home in a fourth comment, by Patricio Meller of CIEPLAN in Santiago de Chile.

In the months that followed I participated in several meetings where I not only argued that the policies included in my ten points were in fact being adopted fairly widely in Latin America, as our conference had confirmed, but also that this was a good thing and that lagging countries should catch up.
I know that I never regarded those ten points as constituting the whole of what should be on the policy agenda, but perhaps I was not always as careful in spelling that out as I should have been. The two points in my original list that seem to me in retrospect least adequate as a summary of conventional thinking are the two identified by Allan Meltzer, namely financial liberalization and exchange-rate policy. The agenda for financial liberalization went broader than interest rates, to include most importantly the liberalization of credit flows, and (as Joe Stiglitz has often pointed out) it needed to be supplemented by prudential supervision if it were not to lead almost inexorably to financial crisis. We already had the experience of the Southern Cone liberalization of the late 1970s to emphasize that point, so I clearly should not have overlooked it. On exchange rate policy I fear I was guilty of wishful thinking in suggesting that opinion had coalesced on something close to my own view, whereas in fact I suspect that even then a majority of Washington opinion would have plumped for either the bipolar view or else (like Meltzer) one of the poles.

In arguing that lagging countries should catch up with the policy reforms on my list, I argued on occasion that the East Asian NICs had broadly followed those policies. A Korean discussant (whose name I regret to say escapes me) at a conference in Madison challenged this contention; he argued that their macro policies had indeed been prudent, but also asserted (like Alice Amsden and Robert Wade) that their microeconomic policies had involved an active role for the state quite at variance with the thrust of points 4 and 6–9 of my list. I think one has to concede that some of the East Asian countries, notably Korea and Taiwan, were far from pursuing laissez-faire during their years of catch-up growth, but this does not prove that their rapid growth was attributable to their departure from liberal policies, as critics of the Washington Consensus seem to assume axiomatically. There were after all two other East Asian countries that grew comparably rapidly, in which the state played a much smaller role. Indeed, one of those—namely Hong Kong—was the closest to a model of laissez-faire that the world has ever seen. It would seem to me more natural to attribute the fast growth of the East Asian NICs to what they had in common, such as fiscal prudence, high savings rates, work ethic, competitive exchange rates, and a focus on education, rather than to what they did differently, such as industrial policy, directed credit, and import protection. Incidentally, one should compare the policy stance of Korea and Taiwan with that of other developing countries, not with a textbook model of perfect competition. Most of the countries that failed to grow comparably fast were even less liberal. So even if it was wrong to treat the East Asian NICs as pin-up examples of the Washington Consensus in action, it is even more misleading to treat them as evidence for rejecting microeconomic liberalization. That controversy cannot be resolved by any simple appeal to what happened in East Asia.

But arguments about the content of the Washington Consensus have always been secondary to the wave of indignation unleashed by the name that I pinned on this list of policy reforms.
Some of the reformers obviously believed that I had undercut their local standing by calling it a “Washington” agenda, and thus suggesting that these were reforms being imposed on them rather than adopted of their own volition because they recognized that those were the reforms their country needed. When I invented the term I was not thinking of making propaganda for economic reform (insofar as I was contemplating making propaganda, it was propaganda for debt relief in Washington, not propaganda for policy reform in Latin America). From the standpoint of making propaganda for policy reform in Latin America, Moisés Naím (2000) has argued that in fact it was a good term in 1989, the year the coalition led by the United States emerged victorious in the Cold War, when people were searching for a new ideology and the ideology of the victors looked rather appealing. But it was a questionable choice in more normal times, and a terrible one in the world that George W. Bush has created, where mention of Washington is hardly the way to curry support from non-Americans. It was, I fear, a propaganda gift to the old left.

Varying Interpretations

To judge by the sales of Latin American Adjustment: How Much Has Happened?, the vast majority of those who have launched venomous attacks on the Washington Consensus have not read my account of what I meant by the term. When I read what others mean by it, I discover that it has been interpreted to mean bashing the state, a new imperialism, the creation of a laissez-faire global economy, that the only thing that matters is the growth of GDP, and doubtless much else besides. I submit that it is difficult to find any of these implied by the list of ten policy reforms that I presented earlier.

One event that I found extraordinary was to learn that many people in Latin America blamed the adoption of Washington Consensus policies for the collapse of the Argentine economy in 2001. I found this extraordinary because I had for some years been hoping against hope that Argentina would not suffer a collapse like the one that occurred, but was nonetheless driven to the conclusion that it was highly likely because of the fundamental ways in which the country had strayed from two of the most basic precepts of what I had laid out. Specifically, it had adopted a fixed exchange rate that became chronically overvalued (for reasons that were not its fault at all, let me add), and—while its fiscal deficits were smaller than in the 1980s—it had not used its boom years to work down the debt/GDP ratio. Its fiscal policy as the crisis approached was not nearly restrictive enough to sustain the currency board system. None of the good reforms along Washington Consensus lines that Argentina had indeed made during the 1990s—trade liberalization, financial liberalization, privatization, and so on—seemed to me to have the slightest bearing on the crisis. Yet Latin American populists and journalists, and even a few reputable economists, were asserting that the Washington Consensus was somehow to blame for the Argentinean implosion. I am still hoping to learn the causal channel they have in mind.

One has to conclude that the term has been used to mean very different things by different people. In fact, it seems to me that there are at least two interpretations of the term beside mine that are in widespread circulation.
One uses it to refer to the policies the Bretton Woods institutions applied toward their client countries, or perhaps the attitude of the US government plus the Bretton Woods institutions. This seems to me a reasonable, well-defined usage. In the early days after 1989 there was not much difference between my concept and this one, but over time some substantive differences emerged. The Bretton Woods institutions increasingly came to espouse the so-called bipolar doctrine (at least until the implosion of the Argentine economy in 2001, as a direct result of applying one of the supposedly crisis-free regimes), according to which countries should either float their exchange rate “cleanly” or else fix it firmly by adopting some institutional device like a currency board. As pointed out above, that is directly counter to my version of the Washington Consensus, which called for a competitive exchange rate, which necessarily implies an intermediate regime since either fixed or floating rates can easily become overvalued. Again, the Bretton Woods institutions, or at least the IMF, came in the mid-1990s to urge countries to liberalize their capital accounts, whereas my version had deliberately limited the call for liberalization of capital flows to FDI. Both of those deviations from the original version were in my opinion terrible, with the second one bearing the major responsibility for causing the Asian crisis of 1997. But there were also some highly positive differences, as the Bank and Fund came to take up some of the issues that I had not judged sufficiently major in Latin America in 1989 to justify inclusion. I think in particular of institutional issues, especially regarding governance and corruption, in the case of the Bank, and financial sector reform as reflected in standards and codes in the case of the Fund. And by the late 1990s both institutions had replaced their earlier indifference to issues of income distribution by a recognition that it matters profoundly who gains or loses income.

The third interpretation of the term “Washington Consensus” uses it as a synonym for neoliberalism or market fundamentalism. This I regard as a thoroughly objectionable perversion of the original meaning. Whatever else the term “Washington Consensus” may mean, it should surely refer to a set of policies that command or commanded a consensus in some significant part of Washington, either the US government or the IFIs or both, or perhaps both plus some other group. Even in the early years of the Reagan administration, or during Bush 43, it would be difficult to contend that any of the distinctively neoliberal policies, such as supply-side economics, monetarism, or minimal government, commanded much of a consensus, certainly not in the IFIs. And it would be preposterous to associate any of those policies with the Clinton administration. Yet most of the political diatribes against the Washington Consensus have been directed against this third concept, with those using the term this way apparently unconcerned with the need to establish that there actually was a consensus in favor of the policies they love to hate.

Why should the term have come to be used in such different ways? I find it easy enough to see why the second usage emerged. The term initially provided a reasonable description of the policies of the Bretton Woods institutions, and as these evolved the term continued to refer to what these currently were. What puzzles me is how the third usage became so popular.
The only hypothesis that has ever seemed to me remotely plausible is that this was an attempt to discredit economic reform by bundling a raft of ideas that deserve to be consigned to oblivion along with the list of commonsense proreform proposals that constituted my original list. This was doubtless facilitated by the name that I had bestowed on my list, which gave an incentive to anyone who disliked the policies or attitudes of the US government or the IFIs to join in a misrepresentation of the policies they were promoting. In any event, surely intellectual integrity demands a conscientious attempt in the future to distinguish alternative concepts of the Washington Consensus. Semantic issues may not be the most exciting ones, but being clear about the way in which terms are being used is a necessary condition for serious professional discussion. The practice of dismissing requests for clarification as tedious pedantry should be unacceptable. Perhaps then more critics would follow the example of the Korean discussant to whom I referred earlier, who laid out precisely which elements of my original agenda he objected to. Or if a critic chooses to use the third concept, then surely he should say that he is talking about a concept of the Washington Consensus that has never commanded a consensus in Washington.

The Future

However much exception I may take to some of the assaults that have been made on the Washington Consensus, I have to admit that I too am uncomfortable if it is interpreted as a comprehensive agenda for economic reform. Even in 1989, there was one objective of economic policy that I regard as of major importance but that found only very tenuous reflection in the Consensus. Since then fifteen years have passed, and it would be remarkable (and depressing) if no new ideas worthy of inclusion in the policy agenda had emerged. Hence there are two reasons why my policy agenda of today can differ from the Washington Consensus as I laid it out in 1989: because I am not limiting myself to doctrines able to command a consensus but am presenting what I believe deserves to be done, and because time has passed and ideas have developed.

A book that I co-edited last year (Kuczynski and Williamson 2003) addressed the issue of delineating a policy agenda appropriate for Latin America in the current decade. Note that this new agenda, like the original Washington Consensus, was aimed specifically at Latin America at a particular moment of history, rather than claiming to be a text for all countries at all times, as many critics have interpreted it to be. We identified four major topics that ought to be included.

The first of these is stabilization policy. The need for more pro-active policies to keep the economy on an even keel has been driven home with great force in recent years by the horrifying price that many emerging markets have paid for the crises to which so many have been exposed. When I drew up the Washington Consensus the overwhelming need, at least in Latin America, was to conquer inflation, so that was the macroeconomic objective that I emphasized. Had it occurred to me that my list would be regarded in some quarters as a comprehensive blueprint for policy practitioners, I hope that I would have added the need for policies designed to crisis-proof economies and stabilize them against the business cycle (the sort of measures that Ricardo Ffrench-Davis has advocated under the heading of “reforming the reforms”).
A first implication is to use fiscal policy as a countercyclical tool, insofar as possible. The most effective way to do this seems to be to strengthen the automatic stabilizers and let them operate. (It seems unlikely that emerging markets would have more success with discretionary fiscal policy than the developed countries have had.) Most developing countries have been precluded from doing even this by a need to keep the markets happy, which has required deflationary fiscal policy during difficult times. The way to end this is to use booms to work down debt levels to a point at which the market will consider them creditworthy, which means that countercyclical fiscal policy can be initiated only during the boom phase of the cycle.

Obviously there are other tools besides fiscal policy that may help minimize the probability of encountering a crisis, and its cost if it nevertheless occurs. Exchange rate policy may be the most crucial, since many of the emerging-market crises of recent years have originated in the attempt to defend a more-or-less fixed exchange rate. For this reason most countries have abandoned the use of fixed or predetermined exchange rates in favor of some version of floating. However, there is still an important difference of view between those who think of floating as implying a commitment on the part of the government not to think about what exchange rate is appropriate, and those who take the view that floating is simply avoidance of a commitment to defend a particular margin. In the latter view, which I share, it is still perfectly appropriate for a government to have a view on what range of rates would be appropriate, and to slant policy with a view to pushing the rate toward that range, even if it avoids guaranteeing that the rate will stay within some defined margins. In particular, I would argue that while a government should freely allow depreciation in order to avoid or limit the damage of a crisis, it should if necessary be proactive in seeking to limit appreciation in good times, when investors are pushing in money. If a country has a sufficiently efficient and uncorrupt civil service to be able to make capital controls (like the Chilean uncompensated reserve requirement of the 1990s) work (and not all countries do!), then it should be prepared if necessary to use capital controls to limit the inflow of foreign funds and hence help maintain a competitive exchange rate.

Monetary policy is also highly pertinent to countercyclical policy. Many countries, especially those that have abandoned a fixed exchange rate and were therefore seeking a new nominal anchor, have told their central bank to use an inflation targeting framework to guide monetary policy. This appears a sensible choice, provided at least that it is not interpreted so rigidly as to preclude some regard for the state of the real economy when setting monetary policy.

Recent experience has demonstrated conclusively that the severity of a crisis is magnified when a country has a large volume of debt denominated in foreign exchange (see e.g. Goldstein and Turner 2004). This is because currency depreciation, which does—and should—occur when a crisis develops, increases the real value of the debts of those who have their obligations denominated in foreign currency. If the banks took the exchange risk by borrowing in foreign currency and on-lending in local currency, then their solvency will be threatened directly.
If they sought to avoid that risk by on-lending in foreign currency, then their debtors’ financial position will be undermined (especially if they are in the nontradable sector), and the banks are likely to end up with a large volume of bad loans, which may also threaten their solvency. If the government contracted foreign currency debt (or allowed the private sector to shield itself by unloading its foreign currency debt when conditions turned threatening), then the effect of a currency depreciation will be to increase public-sector debt and thereby undermine confidence at a critical time. Whatever the form of such borrowing, it can intensify any difficulties that may emerge.

The solution is to curb borrowing in foreign currency. The government can perfectly well just say no when deciding the currency composition of its own borrowing and issue bonds in local currency (as more and more emerging markets are now starting to do). Bank supervision can be used to discourage bank borrowing, and lending, in foreign exchange. The more difficult issue is foreign-currency borrowing by corporations. To prevent that would require the imposition of controls on the form of foreign borrowing. Perhaps it makes more sense to content oneself with discouraging, rather than completely preventing, foreign currency–denominated borrowing. That could be achieved by taxation policy, which could give less tax relief for interest payments on foreign-currency loans, and/or charge higher taxes on interest receipts on such loans.

Obviously crisis-proofing an economy may require attention to other issues. For example, in many countries subnational government units face a soft budget constraint, which for well-known reasons is not good for stabilization policy. But the purpose of this section is to give an idea of the issues that are important in designing a policy agenda, not to write a comprehensive account of every issue that may face a policy practitioner, so I will leave this first issue.

The second general heading of our policy agenda consisted of pushing on with the liberalizing reforms that were embodied in the original Washington Consensus, and extending them to areas like the labor market where economic performance is being held back by excessive rigidity. One does not have to be some sort of market fundamentalist who believes that less government is better government and that externalities can safely be disregarded in order to recognize the benefits of using market forces to coordinate activity and motivate effort. This proposition is such a basic part of economic thinking that it is actually rather difficult to think of a work that conclusively establishes its truth. But there are a variety of indirect confirmations, from the universal acclaim that meets the abandonment of rationing to the success of emissions trading in reducing pollution at far lower cost than was anticipated.

It is certainly true that the move to adopt a more liberal policy stance in many developing countries over the past two decades has as yet had the hoped-for effect of stimulating growth in only a few countries, like India. The results have not been comparably encouraging in, say, Latin America (Ocampo 2004, Kuczynski and Williamson 2003). But the blame for this seems to me to lie in the misguided macroeconomic policies—like allowing exchange rates to become overvalued and making no attempt to stabilize the cycle—that accompanied the microeconomic reforms, rather than in the latter themselves.
The same was true in the United Kingdom under Mrs. Thatcher and in New Zealand when Roger Douglas was finance minister; both undertook far-reaching microeconomic liberalizations that can now be seen to have arrested and even reversed the relative decline of those countries, but their peoples saw no benefits for the best part of a decade because of the primitive macro policies that accompanied the micro reform.

When we asked what is today most in need of liberalization in Latin America, we concluded that it is the labor market. Around 50 percent of the labor force in many Latin American countries is in the informal sector. This means that they do not enjoy even the most basic social benefits, like health insurance, some form of safeguard against unemployment, and the right to a pension in old age. What people do get is the right to maintain through thick and thin a formal-sector job if they are lucky enough to have one, and a wide range of social benefits that go along with all formal-sector jobs. Not all these benefits appear to be highly valued, to judge by the stories of workers taking second jobs to supplement what they can earn in their guaranteed maximum of 40 hours, or taking another job during their guaranteed summer vacations. So we proposed to flexibilize firing for good reason and curtail the obligation to pay those elements of the social wage that appear less appreciated, in the belief that this will reduce the cost of employing labor in the formal sector and so lead to more hiring and greater efficiency. There is an abundant economic literature concluding that the net effect of making it easier to fire workers is to increase employment.

The third element of our proposed policy agenda consists of building or strengthening institutions. This is hardly novel; the importance of institution building has in fact become the main new thrust of development economics in the 15 years since the Washington Consensus was first promulgated. Which particular institutions are most in need of strengthening tends to vary from one country to another, so the possibility of generalizing is limited, but archaic judiciaries, rigid civil service bureaucracies, old-fashioned political systems, teachers’ unions focused exclusively on producer interests, and weak financial infrastructures are all common.

One institutional reform that we certainly did not advocate was the introduction of industrial policy, meaning by this a program that requires some government agency to “pick winners” (to help companies that are judged likely to be able to contribute something special to the national economy). As argued before, there is little reason to think that industrial policies were the key ingredient of success in East Asia (see also Noland and Pack 2003). But we did have a lot more sympathy for a cousin of industrial policy usually referred to as a national innovation system. This does not require government to start making business judgments; it instead has government seek to create an institutional environment in which those firms that want to innovate find the necessary supporting infrastructure. A national innovation system is about government creating institutions to provide technical education, to promote the diffusion of technological information, to fund precompetitive research, to provide tax incentives for R&D, to encourage venture capital, to stimulate the growth of industrial clusters, and so on.
While there is still ample scope for productivity to increase in Latin America by copying best practices developed in the rest of the world, it may need an act of Schumpeterian innovation—and therefore the sort of technologically supportive infrastructure that comprises a national innovation system—to bring world best practice to Latin America (ECLAC 1995, part 2).

The final element of the policy agenda is intended to combat the neglect of equity that was as true of the Washington Consensus as it has long been of economics in general. We suggested that it is important for governments to target an improved distribution of income in the same way that they target a higher rate of growth. Where there are opportunities for win-win solutions that will both increase growth and improve income distribution (such as, maybe, redirecting public education subsidies from universities to primary schools), they should be exploited. But the more fundamental point is that there is no intellectual justification for arguing that only win-win solutions deserve to be considered. One always needs to be aware of the potential cost in terms of efficiency (or growth) of actions to improve income distribution, but in a highly unequal region like Latin America opportunities for making large distributive gains for modest efficiency costs deserve to be seized.

Progressive taxes are the classic instrument for redistributing income. One of the more questionable aspects of the reforms of the past decade in Latin America has been the form that tax reform has tended to take, with a shift in the burden of taxation from income taxes (which are typically at least mildly progressive) to consumption taxes (which are usually at least mildly regressive). While the tax reforms that have occurred have been useful in developing a broader tax base, it is time to reverse the process of shifting from direct to indirect taxation; effort should now focus on increasing direct tax collections. For incentive reasons one may want to avoid increasing the marginal tax rate on earned income, but that still leaves at least three possibilities:

• The development of property taxation as a major revenue source (it is the most natural revenue source for the subnational government units that are being spawned by the process of decentralization that has rightly become so popular).

• The elimination of tax loopholes, not only so as to increase revenue but also to simplify tax obligations and thus aid enforcement.

• Better tax collection, particularly of the income earned on flight capital parked abroad, which will require the signing of tax information-sharing arrangements with at least the principal havens for capital flight.

Increased tax revenue needs to be used to increase spending on basic social services, including a social safety net as well as education and health, so that the net effect will be a significant impact in terms of reducing inequality, particularly by expanding opportunities for the poor. With the best will in the world, however, what is achievable through the tax system is limited, in part by the fact that one of the things that money is good at buying is advice on how to minimize a tax bill. Really significant improvements in distribution will come only by remedying the fundamental weakness that causes poverty, which is that too many people lack the assets that enable them to work their way out of poverty. The basic principle of a market economy is that people exchange like value for like value.
Hence in order to earn a decent living the poor must have the opportunity to offer something that others want and will pay to buy: those who have nothing worthwhile to offer because they have no assets are unable to earn a decent living. The solution is not to abolish the market economy, which was tried in the communist countries for 70 years and proved a disastrous dead end, but to give the poor access to assets that will enable them to make and sell things that others will pay to buy. That means:

• Education. There is no hope unless the poor get more human capital than they have had in the past. Latin America has made some progress in improving education in the last decade, but it is still lagging on a world scale.

• Titling programs to provide property rights to the informal sector and allow Hernando de Soto’s “mystery of capital” to be unlocked (de Soto 2000).

• Land reform. The Brazilian program of recent years to help peasants buy land from latifundia landlords provides a model. Landlords do not feel their vital interests to be threatened and therefore they do not resort to extreme measures to thwart the program. Property rights are respected. The peasants get opportunities but not handouts, which seems to be what they want.

• Microcredit. Organizations to supply microcredit are spreading, but they still serve only about 2 million of Latin America’s 200 million poor. The biggest obstacle to an expanded program consists of the very high real interest rates that have been common in the region. These high interest rates mean either that microcredit programs have a substantial fiscal cost and create an incentive to divert funds to the less poor (if interest rates are subsidized), or (otherwise) that they do not convey much benefit to the borrowers. Macro policy in a number of countries needs to aim to reduce market interest rates over time, which will inter alia facilitate the spread of microcredit.

In the best of worlds such policies will take time to produce a social revolution, for the very basic reason that they rely on the creation of new assets, and it takes time to produce new assets. But, unlike populist programs, they do have the potential to produce a real social revolution if they are pursued steadfastly. And they could do so without undermining the wellbeing of the rich, thus holding out the hope that these traditionally fragmented societies might finally begin to develop real social cohesion.

Concluding Remarks

Some may ask whether it matters whether people declare themselves for or against the Washington Consensus. If the battles are essentially semantic, why don’t we all jump on its grave and get on with the serious work of pursuing an updated policy agenda? Good question, but perhaps there is a serious answer. When a serious economist attacks the Washington Consensus, the world at large interprets that as saying that he believes there is a serious intellectual case against disciplined macroeconomic policies, the use of markets, and trade liberalization—the three core ideas that were embodied in the original list and that are identified with the IFIs. Perhaps there is such a case, but I have not found it argued in Stiglitz (2002) or anywhere else. If the term is being used as a pseudonym for market fundamentalism, then the public read into it a declaration that the IFIs are committed to market fundamentalism. That is a caricature. We have no business to be propagating caricatures.
Everyone agrees that the Washington Consensus did not contain all the answers to the questions of 1989, let alone that it addresses all the new issues that have arisen since then. So of course we need to go beyond it. That is the purpose of this conference, to which I hope the penultimate section of this paper will contribute.

References

Balassa, Bela, Gerardo Bueno, Pedro-Pablo Kuczynski, and Mario Henrique Simonsen. 1986. Toward Renewed Economic Growth in Latin America. Washington: Institute for International Economics.

De Soto, Hernando. 2000. The Mystery of Capital: Why Capitalism Triumphs in the West and Fails Everywhere Else. London: Black Swan.

Economic Commission for Latin America and the Caribbean (ECLAC). 1995. Latin America and the Caribbean: Policies to Improve Linkages with the Global Economy. Santiago: ECLAC.

Goldstein, Morris, and Philip Turner. 2004. Controlling Currency Mismatches in Emerging Markets. Washington: Institute for International Economics.

Kuczynski, Pedro-Pablo, and John Williamson (eds.). 2003. After the Washington Consensus: Restarting Growth and Reform in Latin America. Washington: Institute for International Economics.

Naím, Moisés. 2000. “Washington Consensus or Washington Confusion?” Foreign Policy, Spring.

Noland, Marcus, and Howard Pack. 2003. Industrial Policy in an Era of Globalization: Lessons from Asia. Washington: Institute for International Economics.

Ocampo, José Antonio. 2004. “Latin America’s Growth and Equity Frustrations During Structural Reforms.” Journal of Economic Perspectives 18(2), Spring, pp. 67–88.

Stiglitz, Joseph E. 2002. Globalization and Its Discontents. New York and London: Norton.

Williamson, John. 1990. Latin American Adjustment: How Much Has Happened? Washington: Institute for International Economics.
The Progressive Case Against Protectionism
It has almost become the new Washington consensus: decades of growing economic openness have hurt American workers, increased inequality, and gutted the middle class, and new restrictions on trade and immigration can work to reverse the damage. This view is a near reversal of the bipartisan consensus in favor of openness to the world that defined U.S. economic policy for decades. From the end of World War II on, under both Democratic and Republican control, Congress and the White House consistently favored free trade and relatively unrestrictive immigration policies. Candidates would make protectionist noises to appease various constituencies from time to time, but by and large, such rhetoric was confined to the margins. Almost never did it translate into actual policy.
Then came the 2016 presidential election. Donald Trump found a wide audience when he identified the chief enemy of the American worker as foreigners: trading partners that had struck disastrous trade agreements with Washington and immigrants who were taking jobs from native-born Americans. Everyday workers, Trump alleged, had been let down by a political class beholden to globalist economic ideas. In office, he has followed through on his nationalist agenda, withdrawing the United States from the Trans-Pacific Partnership (TPP) and routinely levying higher tariffs on trading partners. On immigration, he has implemented draconian policies against asylum seekers at the border and undocumented immigrants within the United States, as well as reduced quotas for legal immigrants and slowed the processing of their applications.
But Trump has not been alone in his battle against economic openness. During the 2016 campaign, he was joined in his calls for protectionism by the Democratic primary candidate Bernie Sanders, who also blamed bad trade agreements for the plight of the American worker. Even the Democratic nominee, Hillary Clinton, who as secretary of state had championed the TPP, was forced by political necessity to abandon her earlier support for the agreement. Democrats have not, fortunately, mimicked Trump’s anti-immigrant rhetoric, but when it comes to free trade, their support has often been lukewarm at best. While some Democrats have criticized Trump’s counterproductive tariffs and disruptive trade wars, many of them hesitate when asked if they would repudiate the administration’s trade policies, especially with respect to China. The political winds have shifted; now, it seems as if those who purport to sympathize with workers and stand up for the middle class must also question the merits of economic openness.
American workers have indeed been left behind, but open economic policies remain in their best interest: by reducing prices for consumers and companies, free trade helps workers more than it hurts them, and by creating jobs, offering complementary skills, and paying taxes, so do immigrants. Instead of hawking discredited nationalist economic ideas, politicians seeking to improve Americans’ economic lot—especially progressives focused on reducing inequality and rebuilding the middle class—should be looking to domestic policy to address workers’ needs, while also improving trade agreements and increasing immigration. That, not tariffs and walls, is what it will take to improve the plight of regular Americans.
THE TRADE BOOGEYMAN
Forty years of widening inequality and slow wage growth have left many Americans searching for answers. It may be tempting, then, to blame the United States’ trading partners, many of which have experienced remarkable jumps in GDP and wages. China, perhaps the most spectacular example, saw its GDP per capita expand more than 22-fold from 1980 to 2018—in terms of 2010 U.S. dollars, from $350 to $7,750. Yet during the same period, U.S. GDP per capita grew from $28,600 to $54,500. That’s less in relative terms—advanced economies usually grow more slowly than poor ones—but far more in absolute terms, and enough to significantly boost standards of living.
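To make the relative-versus-absolute distinction concrete, here is a minimal sketch in Python that simply reruns the arithmetic on the 2010-dollar GDP-per-capita figures quoted above; no data beyond those four numbers are assumed.

```python
# Worked arithmetic behind the relative-vs-absolute comparison above.
# Figures are the 2010-dollar GDP-per-capita values quoted in the text.
gdp_1980 = {"China": 350, "United States": 28_600}
gdp_2018 = {"China": 7_750, "United States": 54_500}

for country in gdp_1980:
    start, end = gdp_1980[country], gdp_2018[country]
    multiple = end / start        # relative growth (how many-fold)
    absolute_gain = end - start   # absolute gain in 2010 dollars per person
    print(f"{country}: {multiple:.1f}x relative growth, "
          f"${absolute_gain:,} absolute gain per capita")

# China:         ~22.1x relative growth, $7,400 absolute gain per capita
# United States:  ~1.9x relative growth, $25,900 absolute gain per capita
```

On these figures China's growth is enormous in proportional terms, but the absolute per-person gain in the United States was more than three times as large.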
The problem, however, is that the gains have not been evenly shared. Adjusted for inflation, the average income of the bottom 50 percent of earners stayed nearly flat between 1980 and 2014. For those in the 50th to 90th percentiles, it grew by about 40 percent, lagging far behind expectations based on the experience of prior generations. Among the top one percent, meanwhile, average income has skyrocketed, ballooning by 205 percent over the same period. No wonder so many Americans are disappointed. The U.S. economy has failed to achieve its most basic aim: generating inclusive growth.
Trade does deserve some of the blame. When the United States buys goods from labor-abundant countries such as China and India, the demand for domestic labor falls. This appears to be what happened after the big surge in Chinese imports to the United States in the early years of this century. In a series of oft-cited research papers about “the China shock,” the economists David Autor, David Dorn, and Gordon Hanson estimated that trade with China may have displaced the jobs of one million to two million Americans during this period. But it’s important to keep those numbers in perspective. The U.S. economy is a dynamic place, with more than six million jobs lost and created every single quarter. Moreover, the share of Americans working in manufacturing has been declining steadily since 1950, even as growth in trade has waxed and waned—suggesting that factors other than trade are also at play.
Indeed, the U.S. economy has experienced other huge changes. Workers have lost bargaining power as unionization has declined (from 30 percent of the labor force in 1960 to less than 11 percent today) and large companies have steadily increased their market power (corporate profits as a share of GDP are 50 percent higher than they were in prior decades). Perhaps most important, technology has disrupted countless industries and lowered the demand for less educated labor. Most economists believe that technological change is a far more important factor than international trade in explaining the disappointing outcomes in American labor markets. Across all industries, the returns to education have increased, as less educated workers are disproportionately displaced by automation and computerization. And although manufacturing output continues to rise, manufacturing employment has fallen, as capital takes the place of labor and workers steadily move into the service industry. Yet in spite of all this evidence about the effects of technological change, politicians still point fingers at foreigners.
THE MYTH OF BAD DEALS
Critics of trade on both the left and the right contend that much of the problem has to do with bad trade deals that Washington has struck. On the left, the concern is that trade agreements have prioritized the interests of corporations over those of workers. On the right, it is that trade agreements have focused on the goal of international cooperation at the expense of U.S. interests. Trump has argued that U.S. trade deals have been tilted against the United States, contributing to the large trade deficit (meaning that the country imports more than it exports) and hollowing out the manufacturing sector. Sanders has echoed these concerns in the past, for example, claiming that the North American Free Trade Agreement (NAFTA) cost 43,000 jobs in Michigan and is behind Detroit’s urban decline.
But just as trade in general is not to blame for the woes of the American worker, neither are the specifics of individual trade deals. In fact, the terms of trade agreements are typically highly favorable to the United States. That’s because such deals usually require U.S. trading partners to lower their trade barriers far more than the United States must, since Washington tends to start off with much lower trade barriers. Such was certainly the case with Mexico, which, prior to NAFTA, had tariffs that averaged ten percent, compared with U.S. tariffs that averaged two percent.
This is not to say that trade agreements cannot be improved; useful tweaks could counter the excessive prioritization of intellectual property and reduce the reach of the mechanism by which investors and states resolve disputes, which critics allege gives companies too much power to fight health and environmental regulations. The TPP attempted to modernize NAFTA by placing a greater emphasis on the rights of workers and protecting the environment, and future agreements could go even further.
That said, it is easy to overstate the stakes here. Even ideal trade agreements would do little to address economic inequality and wage stagnation, because trade agreements themselves have little to do with those problems. Compared with other factors—the growth of trade in general, technological change, the decline of unionization, and so on—the details of trade agreements are nearly inconsequential. In fact, in the late 1990s, just after the adoption of NAFTA, the United States saw some of the strongest wage growth in four decades. As studies by researchers at the Congressional Research Service and the Peterson Institute for International Economics have shown, any disruption to the labor market caused by NAFTA was dwarfed by other considerations, especially technological change. And even when trade has cost jobs, as with the China shock, the effect did not depend on the particulars of any trade deal. There was and is no U.S. trade agreement with China, just the “most favored nation” status the country was granted when it joined the World Trade Organization in 2001—a status that it would have been hard to deny China, given the country’s massive and growing economy. What really mattered was the mere fact of China’s emergence as an economic powerhouse.
Critics of trade are also dead wrong when they argue that U.S. agreements have expanded the trade deficit. In fact, it’s the result of borrowing. As economists have long understood, trade deficits emerge whenever a country spends more than it earns, and trade surpluses arise whenever a country earns more than it spends. Trade deficits and surpluses are simply the flip side of international borrowing and lending. Some countries, such as the United States, are borrowers. They consume more of others’ goods than they send abroad, and they pay the difference in IOUs (which take the form of foreign investment in U.S. stocks, bonds, and real estate). Other countries, such as Germany, are lenders. They loan money abroad, accruing foreign assets, but receive less in imports than they send in exports. Which country is getting the better end of the deal? It is hard to say. U.S. households enjoy consuming more now, but they will eventually have to repay the debt; German households get returns on their investments abroad, but they forgo consumption in the present.
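The borrowing logic in the preceding paragraph is just the standard national-income accounting identity. Restated in textbook notation (a restatement, not anything specific to this article), with Y income, C consumption, I investment, G government purchases, T taxes, S national saving, and NX net exports:

```latex
% National-income accounting identity behind the trade-deficit argument.
\begin{align*}
Y  &= C + I + G + NX \\
NX &= Y - (C + I + G) && \text{trade balance = income minus total spending} \\
S  &\equiv Y - C - G = S_{\text{private}} + (T - G) \\
\Rightarrow\quad NX &= S_{\text{private}} + (T - G) - I
\end{align*}
```

A trade deficit (NX < 0) therefore just says that investment exceeds national saving, with the gap financed by borrowing from abroad; holding private saving and investment fixed, a larger budget deficit (G greater than T) widens it, which is why the budget, not tariffs, is the lever that bears on it.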
What this means is that if policymakers wish to reduce the U.S. trade deficit—and for now, it is not alarmingly large—they should reduce borrowing, which they can accomplish by shrinking the budget deficit. Instead, policymakers are moving in the opposite direction: the budget deficit has swelled in recent years, especially after the 2017 tax cuts. The new U.S. tariffs, meanwhile, have done nothing to improve the trade deficit. That came as no surprise to economists.
THE PRICE OF PROTECTIONISM
As easily debunked as these myths about trade are, they clearly have a powerful hold on policymakers. That is troubling not merely for what it reflects about the state of public discourse; it also has profound real-world implications. As they lambast trade, politicians are increasingly reaching for protectionist policies. Yet for American workers, such measures only add insult to injury, making their lives even more precarious. They do so in four distinct ways.
First and foremost, tariffs act as regressive taxes on consumption. Although the Trump administration likes to claim that foreigners pay the price of tariffs, in truth, the costs are passed along to consumers, who must pay more for the imports they buy. (By this past spring, the cost of the trade war that began in 2018 exceeded $400 per year for the average U.S. household.) Beyond that, tariffs fall disproportionately on the poor, both because the poor spend a larger share of their income on consumption and because a higher share of their spending goes to heavily tariffed products, such as food and clothing. That is one reason why progressives in the early twentieth century, outraged by the inequality of the Gilded Age, pushed for moving away from tariffs and toward a federal income tax: it was widely recognized that tariffs largely spared the rich at the expense of the poor. Now, the reverse is happening. After having championed tax cuts that disproportionately benefited well-off Americans, the administration has tried to collect more revenue from regressive taxes on trade.
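To see the regressivity in numbers, consider a small illustrative calculation. The tariff rate, household incomes, and spending figures below are hypothetical assumptions made for the sketch, not data from the article; only the qualitative mechanism comes from the paragraph above.

```python
# Illustrative sketch of why a flat tariff on consumption is regressive.
# All figures below are hypothetical assumptions, not data from the article.
TARIFF_RATE = 0.02  # assume a 2% effective tariff on tariffed goods

households = {
    "lower-income":  {"income": 30_000,  "spending_on_tariffed_goods": 20_000},
    "higher-income": {"income": 300_000, "spending_on_tariffed_goods": 60_000},
}

for name, h in households.items():
    burden = TARIFF_RATE * h["spending_on_tariffed_goods"]  # dollars of tariff paid
    share_of_income = burden / h["income"]
    print(f"{name}: pays ${burden:,.0f}, i.e. {share_of_income:.2%} of income")

# lower-income:  pays $400,   i.e. 1.33% of income
# higher-income: pays $1,200, i.e. 0.40% of income
```

Under these assumptions the better-off household pays more in absolute dollars but a far smaller fraction of its income, which is the sense in which a tariff acts as a regressive consumption tax.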
Second, tariffs and trade wars wreak havoc in U.S. labor markets by raising costs for American companies. Many large U.S. manufacturers are heavily dependent on imports. Boeing is a top U.S. exporter, but it is also a major importer, relying on crucial parts from around the world. General Motors now pays over $1 billion in annual tariffs, no doubt one factor behind the company’s recent decision to shutter a plant in Ohio. When tariffs interrupt global supply chains, they disadvantage U.S. companies relative to foreign ones. If the goal is to make the United States a more internationally competitive place to locate jobs and direct investment, protectionism is a completely backward approach.
Third, trading partners do not sit on their hands when Washington raises tariffs on their products. Already, the Chinese, the Indians, and the Europeans have slapped serious retaliatory tariffs on U.S. goods. The victims of these measures include soybean farmers in Iowa and Minnesota (who have lost market share to Canada as Chinese buyers look elsewhere) and whiskey distillers in Kentucky and Tennessee (who have seen their exports to Europe and elsewhere plummet).
Finally, trade wars harm the global economy and U.S. trading partners, weakening Washington’s network of alliances and jeopardizing the cooperation required to deal with pressing international problems. Recent meetings of the G-7 and the G-20 have been dominated by discussions aimed at defusing trade conflicts, diverting precious diplomatic attention from climate change and nuclear nonproliferation. It is easy to take peace and international cooperation for granted, but they are prerequisites for the success of the U.S. economy in the decades ahead. The world is witnessing another rise in economic nationalism, which makes it easy for politicians and publics to embrace nationalist tendencies in other spheres. It is worth remembering that after the last era of globalization came to a halt, what followed was the Great Depression and World War II.
PEOPLE POWER
Protectionism is harmful for most American workers, but even more destructive are policies that make the United States less welcoming to immigrants. Setting aside the Trump administration’s actions against refugees and the undocumented—a serious moral stain on the country—its efforts to limit immigration are also economically harmful.
Immigration has long been an enormous boon for the U.S. economy. Study after study has shown that it is good for economic growth, innovation, entrepreneurship, and job creation and that almost all economic classes within the United States benefit from it. Even though only 14 percent of the current U.S. population is foreign-born, immigrants create a disproportionate number of businesses. Fifty-five percent of the United States’ billion-dollar startups were founded or co-founded by immigrants, and more than 40 percent of the Fortune 500 companies were founded or co-founded by immigrants or their children. In recent decades, immigrants have accounted for more than 50 percent of the U.S.-affiliated academics who have won Nobel Prizes in scientific fields.
Immigrants also provide countless skills that complement those of native-born American workers. Highly educated foreigners with technological skills (such as computer programmers) make up for persistent shortages in the U.S. high-tech sector, and they complement native-born workers who have more cultural fluency or communication skills. Less skilled immigrants also fill labor shortages in areas such as agriculture and eldercare, where it is often difficult to find native-born workers willing to take jobs.
There is little evidence that immigration lowers the wages of most native-born workers, although there is some limited evidence that it may cut into the wages or hours of two groups: high school dropouts and prior waves of immigrants. In the case of high school dropouts, however, there are far better ways to help them (such as strengthening the educational system) than restricting immigration. As for prior waves of immigrants, given how substantial their economic gains from migration are—often, they earn large multiples of what they would have made back home—it’s hard to justify their subsequent slower wage growth as a policy concern.
Immigrants have another economic benefit: they relieve demographic pressures on public budgets. In many rich countries, population growth has slowed to such an extent that the government’s fiscal burden of caring for the elderly is enormous. In Japan, there are eight retired people for every ten workers; in Italy, there are five retirees for every ten workers. In the United States and Canada, although the budget pressures of an aging population remain, higher immigration levels contribute to a healthier ratio of three retirees for every ten workers. It also helps that recent immigrants have above-average fertility rates.
Many objections to immigration are cultural in nature, and these, too, have little grounding in reality. There is no evidence that immigrants, even undocumented ones, increase crime rates. Nor is there evidence that they refuse to integrate; in fact, they are assimilating faster than previous generations of immigrants did.
Given the many benefits from immigration, greater restrictions on it pose several threats to American workers. Already, the United States is beginning to lose foreign talent, which will hurt economic growth. For two years straight, the number of foreign students studying in U.S. universities has fallen, which is a particular shame since these students disproportionately study science, technology, engineering, and mathematics—areas in which the country faces large skills shortages. Encouraging such students to stay in the country after graduation would help the United States maintain its edge in innovation and promote economic growth. Instead, the Trump administration is discouraging foreign students with visa delays and a constant stream of nationalist rhetoric. Restricting immigration also harms the economy in other ways. It keeps out job creators and people whose skills complement those of native-born workers. And it increases the pressure on the budget, since restrictions will lead to a higher ratio of retirees to workers.
A more sensible immigration policy would make it easier for foreign students to stay in the United States after graduation, admit more immigrants through lotteries, accept more refugees, and provide a compassionate path to citizenship for undocumented immigrants currently living in the United States. Promoting U.S. interests means more immigration, not less.
WHAT WORKS
While reducing trade and immigration damages the prospects of American workers, free trade and increased immigration are not enough to ensure their prosperity. Indeed, despite decades of relative openness to trade and immigration, wages remain stagnant and inequality high. This has dire implications. As the economist Heather Boushey has argued, inequality undermines the U.S. economy by inhibiting competition and stifling the supply of talent and ideas. Unmet economic expectations also fuel voter discontent and political polarization, making it easy to blame outsiders and embrace counterproductive policies. For the sake of both the country’s economy and its politics, economic growth needs to be much more inclusive.
To achieve that, the United States needs, above all, a tax system that ensures that economic prosperity lifts all boats. The Earned Income Tax Credit is a powerful tool in that regard. A credit targeted at lower-income workers that grows as those workers earn more, the EITC subsidizes their work, making each hour of it more lucrative. This credit should be expanded in size, it should reach further up the income distribution, and it should be made more generous for childless workers—changes that would particularly benefit those lower- and middle-class Americans who have seen their wages stagnate in recent decades. This policy would work well alongside an increase in the federal minimum wage, which would help combat the increased market power of employers relative to employees.
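To make the EITC’s phase-in concrete, consider an illustrative schedule (hypothetical round numbers, not the statutory parameters). With a phase-in rate of 40 percent, the credit in the phase-in range is

\[ \text{credit} = 0.40 \times \text{earnings}, \]

so a worker paid \$10 an hour effectively takes home \$14 for each additional hour worked until the credit reaches its maximum; it then plateaus and phases out gradually at higher incomes. Expanding the credit means raising that maximum, extending the plateau further up the income distribution, or both.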
Beyond these steps, the federal government should set up a wage insurance program, which could make up some of the difference in lower wages for workers who have been displaced by foreign competition, technological change, domestic competition, natural disasters, or other forces. The federal government should also make greater investments in infrastructure, education, and research, all of which would benefit workers by increasing their productivity and thus their incomes. And it should strengthen the safety net, making improved health-care access and affordability a top priority.
None of this will be cheap, of course. To raise revenue, the U.S. tax system needs to be modernized. For corporations, Congress should curb international tax avoidance, closing loopholes and reforming minimum taxes so as to raise government revenues without driving profits offshore. Congress should also strengthen individual and estate taxation, and it can do so without resorting to extreme rates. For the income tax, it can cap or end various deductions and preferences; for the estate tax, it can raise rates and reduce exemptions. And it can beef up enforcement of both. Congress should also enact a long-overdue carbon tax. Coupled with the other policies, a carbon tax could raise substantial revenue without harming poor and middle-class Americans, and it would fight climate change.
Finally, policymakers need to reckon with corporations’ growing market power. They should modernize antitrust laws to put more emphasis on labor and modernize labor laws to suit the nature of work today, making sure that they adequately protect those in the service sector and those in the gig economy. Although large companies are often good for consumers, their market power narrows the share of the economy that ends up in the hands of workers. So the balance of power between companies and their workers needs to be recalibrated from both ends: policies should empower labor movements and combat companies’ abuses of market power.
In the end, global markets have many wonderful benefits, but they need to be accompanied by strong domestic policies to ensure that the benefits of international trade (as well as technological change and other forces) are felt by all. Otherwise, economic discontent festers, empowering nationalist politicians who offer easy answers and peddle wrong-headed policies.
American workers have every reason to expect more from the economy, but restrictions on trade and immigration ultimately damage their interests. What those who care about reducing inequality and helping workers must realize, then, is that protectionism and nativism set back their cause. Not only do these policies have direct negative effects; they also distract from more effective policies that go straight to the problem at hand. On both sides of the aisle, it’s time for politicians to stop vilifying outsiders and focus instead on policies that actually solve the very real problems afflicting so many Americans.
Lessons From Vietnam on Leaving Afghanistan
The prospect of an end to the conflict in Afghanistan has led many U.S. foreign policy experts to ponder the ignoble conclusion of another war, now nearly a half century past. Vietnam reportedly offers a cautionary tale for some Pentagon officials who worry about reliving the ignominious events of 1975, when the North Vietnamese and the National Liberation Front (NLF) marched triumphantly into Saigon and the last Americans, along with some South Vietnamese allies, struggled frantically to escape by helicopter. Former U.S. Ambassador to Afghanistan Ryan Crocker and others concerned about the humanitarian and geopolitical consequences of withdrawing from Afghanistan warn of a “Vietnam redux” and hear “echoes of America’s retreat from Vietnam.” They seem to fear an Afghanistan syndrome, like the so-called Vietnam syndrome before it, that could cripple the United States’ ability to intervene militarily.
Just how similar was the war in Vietnam to the war in Afghanistan, and how similar are their endings likely to be? What will be the consequences of U.S. withdrawal for Afghans and Americans—and what lessons might the United States take from Vietnam to mitigate them?
Vietnam and Afghanistan are both reputed “graveyards of empires,” countries fiercely resistant to the will of even the most powerful outsider. The American wars in both countries were offshoots of larger global conflicts: Vietnam was a Cold War front and Afghanistan a front in former U.S. President George W. Bush’s “war on terror.” In both cases, local insurgent forces who fought the United States took the long view, determined to wait out their superpower foe. “You have the watches,” an Afghan insurgent told an American reporter, “we have the time.”
The United States and North Vietnam negotiated the 1973 peace settlement directly with each other, ignoring their respective allies, the government of South Vietnam and the NLF. In Afghanistan, the United States is now negotiating directly with the Taliban, sidestepping its ally, the government of President Ashraf Ghani. The U.S. ally in Kabul, like its ally in South Vietnam, controls only a fragment of its territory, exercises weak leadership, and is afflicted with political and governmental dysfunction as well as rampant corruption. The Afghan military, like its South Vietnamese counterpart, depends on U.S. financial aid and support. And just as in Vietnam, the timing of a U.S. troop withdrawal is an essential element of any agreement. Now, as then, U.S. officials seek a “decent interval,” in the phrase coined by Henry Kissinger, national security adviser to U.S. President Richard Nixon, between the departure of the United States and the fall of its allied government.
For all that, the two wars are also strikingly dissimilar—beginning with their entirely incomparable scale. U.S. troops in Vietnam peaked at slightly more than half a million, and more than 58,000 Americans died in that war. The United States has committed barely a fifth as many forces to Afghanistan and has lost fewer than 3,000. Of course, the enemy is incomparable, too. North Vietnam was a formidable foe with one of the world’s largest armies and substantial outside support from the Soviet bloc and China. The U.S. enemy in Afghanistan is mainly the Taliban insurgents, a far smaller military force backed mostly by Pakistan. No great-power rivalry adds complexity or cost to the Afghan war.
The war in Vietnam provoked an outcry at home that would define a legacy shared by no U.S. war before or since. By the time the Paris peace negotiations began in earnest in 1972, that war was deeply and irredeemably unpopular in the United States. Domestic pressure left Nixon and Kissinger, his chief negotiator, little choice but to settle quickly for the best terms possible. In part because of the rift Vietnam opened in the American social fabric, the United States has fought in Afghanistan with a volunteer army, employed far fewer troops, and sought to keep casualties low. There have been no war taxes to rile the public, no street demonstrations to rattle decision-makers. Media coverage has been limited and boosterish. Polls show that a solid majority of Americans think the war was a mistake, doubt that progress is being made, and want to get out. But unlike Vietnam, the present war has not aroused opposition potent enough to force discussion of a withdrawal.
GETTING OUT
Oddly, the major impetus for extrication from Afghanistan has come not from strategic thinkers or antiwar protesters but from a chief executive who is a foreign policy neophyte and who often behaves quite erratically. As a candidate for the U.S. presidency, Donald Trump expressed concern about costly, never-ending wars, such as the one in Afghanistan, and vowed to terminate them. When he took office, his advisers persuaded him to authorize a small increase in U.S. troops instead. Now, with the departure of establishment figures such as former U.S. National Security Adviser H. R. McMaster and former U.S. Secretary of Defense James Mattis, Trump has again set out, in his words, to extricate the nation from these “endless wars” and “bring our folks home.” The foreign policy establishment and some more hawkish senators, such as Lindsey Graham, the Republican from South Carolina, have sought to obstruct or at least delay the president’s plans and sustain the commitment in Afghanistan.
The ignominious end to the war in Vietnam haunts this discussion. Many Americans retain indelible images of North Vietnam’s devastating final offensive against the South, the complete collapse of the Saigon government and its army, and the desperate, belated efforts of Americans and South Vietnamese to escape the onslaught. For a nation accustomed to victory in war, such memories are searing. Would a withdrawal from Afghanistan look like Vietnam and have similar consequences for Afghans—and Americans?
At this point, the details of the agreement under negotiation are unsettled. The Taliban seeks an early withdrawal of U.S. troops; the United States favors a process that could take up to three years. U.S. negotiators seek guarantees from the Taliban that terrorists will not again use Afghan territory as a base from which to strike the United States. There is no way to ensure that the Taliban would keep such a pledge, but the group apparently has its own concerns about al Qaeda and the Islamic State (also known as ISIS), both of which have nests scattered across Afghanistan, and might seek to curtail their activities for its own reasons. The agreement includes a provision for a cease-fire, which would likely hold only as long as the various parties in Afghanistan want it to—perhaps not long. In Vietnam, the Saigon government broke the cease-fire before the ink on the 1973 agreement was dry; North Vietnam was not far behind.
We can’t know what will happen in Afghanistan when the United States withdraws. One possibility is that the country will revert to a Taliban-dominated nation-state and a patchwork of ethnic groups and warlords, just as before 2001. As North Vietnam did in the South after 1975, a Taliban government might try to impose its ideology on the part of Afghanistan it controls, in this case re-creating an Islamic state similar to the one it ran before it was deposed, with all the obvious implications for human rights and the treatment of women.
HISTORY LESSONS
Using historical analogy to inform policy decisions is tricky at best and perilous at worst. Nonetheless, Vietnam may offer some useful lessons for postwar Afghanistan. For instance, Nixon deluded himself into thinking that the promise of economic aid and the threat of renewed bombing would give him leverage over North Vietnam after U.S. troops withdrew. In reality, the Watergate scandal and fierce opposition in Congress and the country to any form of reintervention tied his hands. Even without Watergate, Nixon would likely not have been able to forestall North Vietnam’s victory. In Afghanistan, similarly, the United States will have little influence on events on the ground after it has left. The reintroduction of troops seems highly improbable; the most the United States might do would be to attack terrorist bases with bombs and missiles, as it has done in Syria and elsewhere.
Vietnam should also remind us of the costs of wishful thinking in the final stages of war. In the spring of 1975, U.S. Ambassador to South Vietnam Graham Martin stubbornly refused even to plan for withdrawal, for fear of encouraging the enemy and discouraging the South Vietnamese—a stance that made the U.S. departure more chaotic than it might have been. A quiet, well-planned, orderly withdrawal from Afghanistan would look much different from a Vietnam-like exit under extreme duress. And it would militate against the “Afghanistan syndrome” that many foreign policy analysts fear.
Another lesson from Vietnam is the critical importance of consulting with U.S. allies well in advance of departure. Nixon’s announcement of U.S. troop withdrawals from Vietnam just hours after informing allies of the decision deeply antagonized several of those governments. Australia, the most zealous U.S. ally at the start of the war, responded by disengaging from Vietnam even more rapidly than the United States. The Trump administration should avoid repeating this mistake; unfortunately, its track record in dealing with allies does not bode well in this area. American officials would also do well to follow the courageous example of U.S. President Gerald Ford in welcoming South Vietnamese refugees by providing for the emigration of those Afghans who have been most closely tied to the United States—again, no simple task, especially given the current administration’s hostility toward immigration.
Because the war in Afghanistan has largely been invisible to most Americans, the domestic political effect of the withdrawal will likely be less dramatic than with Vietnam. A flap in Congress over who lost Afghanistan seems improbable. The most the withdrawal might do in terms of domestic politics is widen the intraparty rift between Trumpian nationalists and mainstream Republicans and sharpen the already discernible public weariness with costly and interminable conflicts abroad. A survey commissioned by the Eurasia Group Foundation in 2018 shows that a majority of Americans favor a more nationalist approach that prioritizes urgent needs at home over costly campaigns to remake the world in America’s image. This represents a shift in attitudes away from public acceptance of a more interventionist policy in the aftermath of 9/11 and exposes a widening gap between the views of the public and those of foreign policy elites—a gap that leaders will have to address when framing future policies.
The options before the United States today are familiar ones. Washington could escalate in hopes of winning the war; it could persist just as it has so far, inviting a prolonged stalemate; or it could put an end to a failed venture that has lasted 18 years and whose long-term costs may run to trillions of dollars. The choice seems obvious. The United States must abandon its fixation with abstractions, such as credibility or the fear of appearing weak, and act instead on the basis of common sense. The most enduring lesson of Vietnam—and Afghanistan—may be that there is no good way out of a bad war except to end it.
What Really Happened in Congo
It didn’t take long for Congo’s transition from Belgian colony to sovereign state to turn ugly. Both the Soviet Union and the United States were keeping a close eye on the mineral-rich country at the heart of Africa when, on June 30, 1960, it gained independence under a democratically elected government headed by Prime Minister Patrice Lumumba. A charismatic nationalist, Lumumba led the only party in parliament with a nationwide, rather than ethnic or regional, base. Within days, however, Congo’s troops mutinied against their all-white officer corps (a holdover from the colonial era) and started terrorizing the European population. Belgium responded by sending forces to reoccupy the country and helping Congo’s richest province, Katanga, secede. The United States, declining the appeals for help from the new Congolese government, instead threw its support behind a UN peacekeeping mission, which it hoped would obviate any Congolese requests for Soviet military assistance. But Lumumba quickly came into conflict with the UN for its failure to expel the Belgian troops and end Katanga’s secession. After issuing a series of shifting ultimatums to the UN, he turned to Moscow for help, which responded by sending transport planes to fly Lumumba’s troops into Katanga.
That’s when the Eisenhower administration sent in the CIA. In the decades that followed, the dominant narrative in U.S. foreign policy circles portrayed the U.S. covert action in Congo as a surgical, low-cost success. Even the 1975 U.S. Senate investigation by the Church Committee, which was heavily critical of the CIA, concluded that of the five covert paramilitary campaigns it studied, the operation in Congo was the only one that “achieved its objectives.” Those who hold this view credit the U.S. government with avoiding a direct military confrontation with the Soviet Union and China while foiling the communists’ attempts to gain influence over a key African country. They acknowledge that the CIA contributed to the fall of Lumumba, who lost a power struggle with Joseph Mobutu, the pro-Western head of Congo’s army, in September 1960. But they maintain that even though the CIA plotted to assassinate Lumumba -- once even trying to get a recruit to poison his toothpaste or food -- it never did so, and had no hand in his eventual murder, in January 1961. They also recognize the agency’s contribution to the military defeat of Lumumba’s followers. As for Mobutu, who would go on to become one of Africa’s most enduring and venal leaders, proponents of the orthodox account argue that his faults became clear only later, many years after CIA involvement had run its course.
Over the years, many scholars and journalists have challenged parts of this orthodoxy, and public perception has begun to catch up. But their case has been hampered by the shortage of official documentary evidence. Recently, however, new evidence has become available, and it paints a far darker picture than even the critics imagined. The key sources include files from the Church Committee, which have been slowly declassified over the last 20 years; a 2001 Belgian parliamentary investigation into Lumumba’s murder; and a new volume, released last year, of Foreign Relations of the United States, the State Department series that presents a document-by-document record of U.S. decision-making. The new volume, on Congo, contains the most extensive set of CIA operational documents ever published.
We now know that even though the threat of communism in Congo was quite weak at the time of Congo’s independence, the CIA engaged in pervasive political meddling and paramilitary action between 1960 and 1968 to ensure that the country retained a pro-Western government and to help its pathetic military on the battlefield. So extensive were these efforts that at the time, they ranked as the largest covert operation in the agency’s history, costing an estimated $90–$150 million in current dollars, not counting the aircraft, weapons, and transportation and maintenance services provided by the Defense Department. The CIA had a hand in every one of Congo’s major political turning points during the period and maintained a financial and political relationship with every head of its government. And contrary to the conclusion of the Church Committee, Lawrence Devlin, the CIA station chief in Congo for most of the period, had direct influence over the events that led to Lumumba’s death.
Not only was U.S. involvement extensive; it was also malignant. The CIA’s use of bribery and paramilitary force succeeded in keeping a narrow, politically weak clique in power for most of Congo’s first decade of independence. And the very nature of the CIA’s aid discouraged Congolese politicians from building genuine bases of support and adopting responsible policies. The agency’s legacy of clients and techniques contributed to a long-running spiral of decline, which was characterized by corruption, political turmoil, and dependence on Western military intervention. So dysfunctional was the state that in 1997 it outright collapsed -- leaving behind instability that continues to this day.
PLAYING POLITICS
In the beginning, U.S. covert action in Congo was exclusively political in nature. Washington worried that Lumumba was too erratic and too close to the Soviets and that if he stayed in power, Congo could fall into further chaos and turn communist. Allen Dulles, the director of the CIA, cabled the CIA station in Léopoldville, the capital, in August 1960: “We conclude that his removal must be an urgent and prime objective and that under existing conditions this should be a high priority of our covert action.” So the CIA station, in tandem with Belgian intelligence officials, subsidized two opposition senators who attempted to organize a vote of no confidence against Lumumba’s government. The plan was for Joseph Kasavubu, Congo’s president and Lumumba’s rival, to dissolve the government after the vote and nominate one of the senators as the new prime minister. The CIA also funded anti-Lumumba street demonstrations, labor movements, and propaganda.
But Kasavubu, encouraged by the Belgians, jumped the gun and publicly fired Lumumba two days before the vote was to be held. Lumumba responded by refusing to withdraw and continuing to dominate parliament, which would have to approve a new government. Devlin quickly found a solution to the stalemate in Mobutu, the 29-year-old army chief of staff. In two meetings, Mobutu told Devlin that he was moving troops to the capital and pleaded for U.S. help in acting against Lumumba. Devlin agreed to finance his efforts, subsequently telling CIA debriefers that, as the new Foreign Relations of the United States volume puts it, “this was the beginning of the plan for Mobutu to take over the government.” On September 14, Mobutu announced that he was suspending parliament and the constitution. He sacked Lumumba and kept on Kasavubu, but now Mobutu was the power behind the throne.
The CIA rushed to his side with more money, warnings about assassination plots, and recommendations for ministerial appointments. It counseled Mobutu to reject reconciliation with Lumumba and instead arrest him and his key associates, advice Mobutu readily accepted. Devlin became not just the paymaster but also an influential de facto member of the government he had helped install. His principal vehicle was the so-called Binza Group, a caucus of Mobutu’s political allies that got its name from the Léopoldville suburb where most of them lived. It included Mobutu’s security chief and his foreign and finance ministers. In the months after the coup, the group consulted Devlin on major political and military matters, especially those dealing with Lumumba, who was now under house arrest but protected by UN troops.
The group almost always heeded Devlin’s advice. In October, for example, Mobutu threatened to expand his power by firing President Kasavubu -- which would have deprived the government of its last shred of political legitimacy. So Devlin persuaded him to accept a compromise instead, under which Mobutu would work with a council of associates -- all paid by the CIA -- that would choose cabinet ministers for Kasavubu and control parliament. Devlin also convinced the Binza Group to drop a risky plan to attack Lumumba’s UN security detail and arrest Lumumba.
On January 14, 1961, Devlin was informed by a government leader that Lumumba, who had escaped from UN protection and been captured by Mobutu’s troops, was about to be transferred to the Belgian-backed secessionist province of South Kasai, whose leader had vowed to murder him. In his subsequent, January 17 cable reporting this critical contact to CIA headquarters, Devlin gave no indication that he had voiced any opposition to the plan. Given his intimate working relationship with Congo’s rulers and his previous successful interventions with them concerning Lumumba, Devlin’s permissive stance was undoubtedly a major factor in the government’s decision to move Lumumba.
But Devlin did more than give a green light to the transfer. He also deliberately kept Washington out of the loop -- an exception for a covert program that was being closely managed by the CIA, the State Department, and the National Security Council. On the same day that he was informed of Lumumba’s prospective transfer, Devlin learned that the State Department had denied his and CIA headquarters’ urgent request for funds to pay off a key Congolese garrison on the verge of a mutiny that threatened to restore Lumumba to power. John F. Kennedy was to take office in six days, and the State Department considered the request “one of high policy” that should wait for the new administration to decide.
Seeing his preferred method for preventing Lumumba’s comeback blocked, Devlin may have viewed the impending transfer as a promising Plan B. But he also knew that if he told headquarters about the plan, it would consult the State Department, which, given its response to his last request, would almost certainly have considered the U.S. position on the transfer a matter for the incoming administration. All of that meant that if Washington had been fully informed about the plot, it might well have tried to put the brakes on it through Devlin, the Binza Group, and their Belgian advisers. Moreover, Devlin knew that the Kennedy transition team was reconsidering the Eisenhower administration’s hard-line policy toward Lumumba. So even as he communicated with headquarters about other matters, Devlin withheld information about the planned transfer for three days, until the move was already under way. In a last-minute switch, Lumumba was sent to Katanga, the other Belgian-supported secessionist province, whose powerful interior minister had repeatedly called for his scalp. By the time Devlin’s January 17 cable arrived in Washington, Lumumba had been shot dead in Katanga.
Rather than end the struggle for control of Congo, Lumumba’s assassination only intensified it. In August 1961, the United States, under pressure from the UN and a pro-Lumumba state in eastern Congo, agreed that the Congolese parliament should reconvene to select a new national government. But the CIA used bribes to ensure that the new government was led by its ally Cyrille Adoula. While the resulting power-sharing deal did include some Lumumbists, as Lumumba’s supporters were called, the most important positions went to members of the Binza Group (with Mobutu himself remaining head of the army).
Once Adoula was in office, the CIA provided him with a public relations firm to help him bolster his image abroad and an adviser who wrote speeches for him. The CIA also bribed parliament, the Binza Group, a labor union, and an organization of tribal chiefs to back the new leader. Meanwhile, Devlin continued to behave like a member of the government. At the Binza Group’s behest, he persuaded Adoula not to make concessions to his Lumumbist deputy prime minister. When Adoula decided to fire Mobutu, Devlin convinced him to drop the idea. Adoula even asked Devlin to canvas political leaders in order to gauge his own parliamentary support. In November 1961, after only a year and a half on the job, Devlin cabled CIA headquarters that the agency could “take major credit for the fall of the Lumumba [government], the success of the Mobutu coup and considerable credit for Adoula’s nomination as premier.”
UP IN THE AIR
Adoula’s government didn’t perform as well as Washington had hoped: soldiers were forced to live off the land, avaricious officials looted the Treasury, and inflation sapped the incomes of everyone else. After Adoula removed nearly all his Lumumbist ministers and dissolved parliament in 1963, the Lumumbists returned to their home provinces and took to arms. By early 1964, their rebellion had swept across almost half the country. Alarmed by the insurgency, the Binza Group and Kasavubu decided to replace Adoula with someone they thought would deal with it more effectively: Moise Tshombe, the former secessionist leader of Katanga, whose breakaway government had murdered Lumumba in 1961. The CIA acquiesced to the change, adding tribal supporters of Tshombe and other key politicians to its existing payroll. It also added a major paramilitary thrust to its political program in Congo.
The agency endowed Tshombe’s new government with an “instant air force” to defeat the rebels, who were then receiving modest advisory and financial assistance from the Chinese. The unit, composed mainly of American planes piloted by Cuban exiles, enabled the advance of white mercenaries (predominantly South Africans and Rhodesians) who were leading the Congolese government forces. In August 1964, a National Security Council committee had signed off on a plan for 41 combat and transport aircraft and almost 200 personnel (Cuban air crews and European ground maintenance workers). In early 1965, the CIA added a small navy, also staffed by Cubans, to the mix to hamper shipments of military supplies to the rebels from neighboring Tanzania across Lake Tanganyika.
Washington was joining a particularly bloody conflict. When they seized rebel-held areas, the white mercenaries and government forces indiscriminately slaughtered the rebels and civilians they found there. Although there was no systematic counting of the casualties, it is estimated that at least 100,000 Congolese perished during this phase of the war. The insurgents killed about 300 Americans and Europeans whom they had taken hostage following the fall of Stanleyville, the rebel capital.
By the fall of 1965, the Congolese army and its foreign helpers had largely succeeded in regaining control of the country, but another threat loomed: increasing political competition between President Kasavubu and Prime Minister Tshombe. Both the U.S. government and the Binza Group feared that the conflict between the two men could cause one of the contenders to look for support from the more radical African regimes. As the crisis reached its apogee, Mobutu told Devlin that he was considering launching another coup, to replace both Kasavubu and Tshombe, or finding some other unidentified solution. On November 22, the United States responded by increasing CIA financing for Mobutu’s officers and giving Mobutu carte blanche to act as he saw fit.
Within three days, Mobutu bloodlessly seized power, a result that Devlin called “the best possible solution.” The CIA responded with still more money, which Mobutu used to pay off key officers, political leaders, and tribal chiefs. Throughout 1966 and 1967, the agency forwarded Mobutu intelligence about threats to his regime, uncovering a number of major plots (one of which ended with the public hanging of the alleged conspirators). And the CIA’s covert air force, along with overt transportation help from the Pentagon, helped Mobutu fend off two mercenary-led mutinies.
In October 1966, Mobutu threw out the U.S. ambassador for failing to show enough respect for his newly elevated status and stopped requesting his monthly CIA stipend. Two years later, Mobutu changed his mind and asked the CIA for more money -- which he got. By then, the CIA had wrapped up its paramilitary program and limited its political funding to four key people other than Mobutu. From the U.S. perspective, with no more legal opposition to control and no more Lumumbist rebels or pro-Tshombe mercenaries to fight, Congo could be transitioned to purely overt U.S. military and economic assistance.
Hands tied: Lumumba's capture in Léopoldville, December 1960.
THE DAMAGE DONE
Unfortunately, the full picture of the CIA’s involvement in Congo remains partly obscured. Concerned about protecting its sources and methods, the agency managed to delay the publication of the new volume of Foreign Relations of the United States for over a decade. And the version that was finally released takes an overly cautious approach to redactions, withholding four documents in their entirety, cutting 22 by more than a paragraph, omitting the financial costs of specific activities, and attempting to guard the identities of the CIA’s key Congolese clients besides Mobutu. Five decades after the events in question, most of these excisions seem hard to justify, especially given that historians, journalists, and even Devlin himself have already exposed the main actors’ identities.
Still, it is clear that the CIA programs of the 1960s distorted Congolese politics for decades to come. This is not to argue that in the absence of U.S. meddling, Congo would have established a Western-style representative government. But even in a region with plenty of autocracies, the country has stood out for its extreme dysfunction. Ever since the CIA’s intervention, Congo’s leaders have been distinguished by a unique combination of qualities: scant political legitimacy, little capacity for governing, and corruption so extensive that it devours institutions and norms. In the years following U.S. covert action, these qualities led to economic disaster, recurrent political instability, and Western military intervention. Finally, in 1997, rebels headed by a former Lumumbist and backed by military forces from Rwanda, Uganda, and Angola sent Mobutu packing, leading to a regional war that would kill more than three and a half million people over the next decade.
Of course, the main author of all of this misrule was Mobutu. But given that he would never have been able to consolidate control were it not for the CIA cash he distributed to his allies, as he himself admitted to the agency, the United States must bear some responsibility for what Mobutu wrought. Furthermore, the CIA’s predominant techniques -- corruption and external force -- constituted a tutorial on irresponsible governance. Weaned on the agency’s bribery, Mobutu and his associates never had to compete for the affection of the broader public and develop a real political base and had no incentive to put the state’s resources to good use. And because Mobutu could depend on the CIA’s paramilitary support, he felt no pressure to develop even a minimally capable military. In fact, even though he managed in the chaos of independence to be appointed army chief of staff, he was an incompetent military leader. By 1964, his army had, according to Averell Harriman, the U.S. undersecretary of state for political affairs, proved its “worthlessness,” being incapable of securing key territory without help from foreign mercenaries. What Mobutu was immensely talented at, of course, was the skill that the Americans had taught him: wheedling bribes. Twice, he even persuaded Devlin to reimburse him for army funds that he claimed to have used for unauthorized expenses or CIA objectives, arguing that if rivals discovered the misuse, they might charge him with corruption.
U.S. officials outside the CIA learned of Mobutu’s flaws early on. Following the 1965 coup, a State Department memorandum cautioned, “It is too early to discern where Mobutu will draw the line between corruption and the ‘normal’ use of payments and patronage to facilitate government operations.”
During the white mercenary mutinies of 1966 and 1967, U.S. cables and memorandums were scathing. A National Security Council memo to the White House chief of staff described Mobutu as “somewhat inept and his chances of pulling the Congo up by its bootstraps are indeed remote.”
The U.S. ambassador to Congo, Robert McBride, labeled Mobutu “irrational” and “highly unstable.” President Lyndon Johnson’s national security adviser at the time, Walt Rostow, called him “an irritating and often stupid” man who “can be cruel to the point of inhumanity.” In 1968, McBride sent a cable to the State Department that took note of the president’s new luxury airplane, plan for parks modeled on Versailles, thoughts of building a replica of Saint Peter’s Basilica in each of Congo’s three largest cities, and acquisition of a Swiss villa. McBride concluded,
I believe there is nothing which can be done to restrain these frivolous Presidential expenditures because Mobutu has apparently risen in souffle-like grandiloquence. I feel that to call his attention to the dangers of this type of thing . . . would be to incur instant wrath.
However, I felt a brief report should be made on [this] regrettable phenomenon because I believe it is the most serious problem facing Congo at present time and the fault is that of the President and the uncontrollable spending is emanating directly from him. Furthermore, it occurred to me this might have an effect on US policies towards the present regime in the Congo.
OUR MAN
What McBride seemed not to realize was that eight years of covert action had done much to rule out any alternative U.S. policy, then or ever. The CIA had not only fostered a regime; it had stamped it “made in America” for future policymakers in Washington. As Mobutu’s government lurched from crisis to crisis, it continued to enjoy U.S. and Western financial and military help. Over the years, many in Congress and some dissidents in the State Department did urge the U.S. government to push for economic and political reforms in the country that Mobutu had renamed Zaire in 1971. Failing that, they said, it should distance itself from Mobutu and cultivate political ties with the opposition. When the Cold War ended, Congress finally cut off military and nonhumanitarian assistance. Yet even afterward, as the regime entered its death throes, U.S. officials could not bring themselves to abandon it and support the peaceful democratic transition proposed by the rising opposition.
Clinging to a longtime friendly dictator, even as his flaws become more risky for U.S. interests, is a well-known pathology of U.S. foreign policy. In the case of Congo, the relationship had been created and nurtured by CIA covert action. This endowed it with a special aura of intimacy, visible in the possessive language that U.S. officials used when referring to Mobutu. For Devlin, Mobutu became “almost our only anchor to the windward.” During the escalating battle between Kasavubu and Tshombe, Harold Saunders, a member of the National Security Council staff, wrote that Mobutu should be the one to resolve the conflict -- by military means if necessary -- because “he is already our man.” Ten years after the subsequent coup, Edward Mulcahy, the deputy assistant secretary of state for African affairs, testified in Congress, “We do have . . . a warm spot in our hearts for President Mobutu. At a time when our aid and advice were critical for the development of Zaire, he was good enough -- and I might say wise enough -- to accept our suggestions and our counsel to the great profit of the state.”
Like other such questionable commitments, the United States’ long support for Mobutu was rationalized as necessary because there was no alternative but chaos. In reality, Washington squandered opportunities to push for major reforms. After Congolese exiles from Angola unsuccessfully invaded Zaire twice in the late 1970s, the United States failed to use the leverage provided by the resulting Western military intervention to seek a more inclusive government. During the opposition ferment that swept Zaire in the 1980s, it refused to support the popular demand for a second party. Even when a strong democracy movement compelled Mobutu to make political concessions in the early 1990s, the George H. W. Bush administration prevented Herman Cohen, its assistant secretary of state for African affairs, from calling for Mobutu’s resignation after Mobutu reneged on his commitments. And although the Clinton administration banned visas for Mobutu’s associates, it also endorsed his laughable plan for “free elections.”
Covert action produced a Congolese government that largely supported U.S. foreign policy, but it burdened U.S. diplomacy in Africa for decades. In particular, the overthrow and murder of Lumumba and the support for Tshombe’s white mercenaries angered African nationalists and soured U.S. relations with many key countries, including Algeria, Ghana, Kenya, and Tanzania; these actions also antagonized liberation movements in Angola, Mozambique, South Africa, and Zim­babwe. The resentment and suspicion that the CIA’s program in Congo engendered subsided slightly as the agency’s involvement there declined, but they never disappeared, and they would resurface throughout the 1970s and 1980s whenever the West (and in particular the CIA) intervened in the region.
A COMMUNIST CONGO?
The root of the CIA’s intervention in Congo was an overhyped analysis of the communist threat. Congo scholars have long been skeptical of the notion that had Lumumba stayed in power, his government would have fallen under the sway of the Soviet Union or China. At the time, even some U.S. officials had doubts. In 1962, shortly after he retired as director of the CIA, Dulles admitted, “I think that we overrated the Soviet danger, let’s say, in the Congo.” The Kennedy administration’s initial policy paper, soon modified, advocated a broad-based government of “all principal political elements in the Congo,” to be followed by the release of Lumumba. Even at the height of the rebellion, in 1964, National Security Adviser McGeorge Bundy wrote Johnson, “What is very unclear is how deep the Chinese hand is in the rebel efforts. Harriman thinks it is pretty deep; most of the intelligence community thinks it is more marginal.” In November 1964, Michael Hoyt -- the U.S. consul in rebel-held Stanleyville, who had just been released from over three months of captivity -- informed policymakers that the leaders of the Lumumbist insurgency were “within the Congolese political spectrum” and that they were “essentially pragmatic and followed their own interests.”
The skeptics were right: Lumumba was never a communist, and he would not have yielded to foreign control. He and his supporters had cut their political teeth in the struggle against colonialism, and they found any form of external domination anathema. They were far more interested in nonalignment, and the foreigners they identified with were other African independence leaders, not Khrushchev or Mao. Lumumba and his followers also understood that the communist world could never replace the massive European investment and 10,000 Belgian technicians that served as the foundation for Congo’s Western-oriented economy. Even when they accepted Soviet military assistance to help reunify their country or contest their political exclusion, they continued to appeal for support from the United States, the rest of the West, and other African countries. Yet Washington refused to help.
Archives from the former Soviet bloc confirm that although Moscow was eager to squeeze every propaganda advantage it could from the West’s difficulties in Congo, it understood that Lumumba and his followers were no Marxists, and it hedged its support for them accordingly. Following Mobutu’s 1960 coup, Moscow meekly withdrew its airplanes and military advisers from the country and did nothing to help Lumumba. It provided little aid to his successors until Lumumba’s assassination and the capture of Stanleyville by white mercenaries outraged the rest of Africa. Even then, the Soviet Union dispatched arms but no advisers to teach the recipients how to use them. Soviet and Chinese military assistance were also constrained by the need to secure transport rights through neighboring African states, which was not always forthcoming.
In retrospect, it is clear that the U.S. officials directing Congo policy inappropriately projected their Cold War experiences in Europe, Asia, and Latin America onto Africa, where the conditions were completely different. In Congo, there had been no Soviet military occupation and no significant Marxist or communist party or cadres. Tragically, Washington spurned an alternative policy: engaging diplomatically with Lumumba and his successors as part of a broad effort to keep the Cold War out of Congo. Instead, it anointed Mobutu and other members of the Binza Group as Belgium’s heirs. Impatient and inexperienced as he was, Lumumba represented his country’s best hope for a successful postcolonial era. There is every reason to believe that working with him and other incipient democratic forces would have better served both the United States and Congo.
The Broken Bargain
Nationalism and nativism are roiling politics on every continent. With the election of President Donald Trump in the United States, the growing power of right-wing populist parties in Europe, and the ascent of strongmen in states such as China, the Philippines, and Turkey, liberals around the world are struggling to respond to populist nationalism. Today’s nationalists decry the “globalist” liberalism of international institutions. They attack liberal elites as sellouts who care more about foreigners than their fellow citizens. And they promise to put national, rather than global, interests first.
The populist onslaught has, understandably, prompted many liberals to conclude that nationalism itself is a threat to the U.S.-led liberal order. Yet historically, liberalism and nationalism have often been complementary. After World War II, the United States crafted a liberal order that balanced the need for international cooperation with popular demands for national autonomy, curbing the aggressive nationalist impulses that had proved so disastrous in the interwar years. The postwar order was based on strong democratic welfare states supported by international institutions, such as the World Bank and the International Monetary Fund (IMF), that coordinated economic policy between states while granting them the flexibility to act in their own national interest. The political scientist John Ruggie has called this arrangement “embedded liberalism,” because it embraced free markets while subjecting them to institutionalized political control at both the domestic and the international level—a bargain that held for several decades.
Yet over the past 30 years, liberalism has become disembedded. Elites in the United States and Europe have steadily dismantled the political controls that once allowed national governments to manage capitalism. They have constrained democratic politics to fit the logic of international markets and shifted policymaking to unaccountable bureaucracies or supranational institutions such as the EU. This has created the conditions for the present surge of populist nationalism. To contain it, policymakers will have to return to what worked in the past, finding new ways to reconcile national accountability and international cooperation in a globalized world. The proper response to populism, in other words, is not to abandon liberal internationalism but to re-embed it.
A pro-EU demonstrator in London, shortly after the Brexit referendum, June 2016.
THE GREAT TRANSFORMATION
Nationalism is generally understood as the doctrine that the cultural unit of the nation, whether defined along civic or ethnic lines, should be congruent with the political unit of the state. For most of history, political loyalties did not coincide with national boundaries. This began to change in early modern Europe following the Protestant Reformation, as centralized states secured monopolies on violence and legal authority within their territory, gradually displacing the Catholic Church and transnational dynastic networks. At the same time, early commercial capitalism was shifting economic power away from rural landlords and toward the thriving urban middle classes. The state increasingly fused with its nation, a distinctive people that contributed blood and treasure to the state and that, in exchange, insisted on the right to participate in government. Over time, the nationalist claim to popular self-determination became the handmaiden of democracy.
During the nineteenth century, nation-states in western Europe (as well as European settler colonies such as the United States) developed strong civic institutions, such as universalistic legal codes and national educational systems, that could assimilate diverse groups into a shared cultural identity. (In eastern European countries and other late-developing states, however, different ethnic groups gained political consciousness while still living together in multinational empires—there, homogeneity was achieved not through assimilating civic institutions but through war, ethnic cleansing, and expulsion.) One of the most widely invoked theorists of nationalism, Ernest Gellner, argued that this process of internal cultural homogenization was driven by the requirements of industrial capitalism. In order to participate in national economies, workers needed to speak the national language and be fully integrated into the national culture. In countries with a strong civic state, these pressures transformed the nation-state into a culturally, politically, and economically integrated unit.
By the early decades of the twentieth century, however, tensions had begun to emerge between liberal capitalism and nationalist democracy. Nineteenth-century capitalism relied on automatic market controls, such as the gold standard, to regulate financial relations between states. Governments lacked both the will and the ability to intervene in the economy, whether by spending to counteract downturns in the business cycle or by acting as the lender of last resort to forestall bank runs. Instead, they let the invisible hand of the market correct imbalances, imposing painful costs on the vast majority of their citizens.
This laissez-faire policy became politically untenable during the late nineteenth and early twentieth centuries, as more and more people gained the right to vote. After the crash of 1929 and the Great Depression, enfranchised citizens could demand that their national leaders assert control over the economy in order to protect them from harsh economic adjustments. In some countries, such as Germany and Japan, this led to the ascent of militantly nationalist governments that created state-directed cartel economies and pursued imperial expansion abroad. In others, such as the United States under President Franklin Roosevelt, governments instituted a form of social democratic capitalism, in which the state provided a social safety net and launched employment programs during hard times. In both cases, states were attempting to address what the economic historian Karl Polanyi, in The Great Transformation, identified as the central tension of liberal democratic capitalism: the contradiction between democratic rule, with its respect for popular self-determination, and market logic, which holds that the economy should be left to operate with limited government interference.
During the interwar years, the world’s leading liberal powers—France, the United Kingdom, and the United States—had made tentative efforts to create an international order to manage this tension. U.S. President Woodrow Wilson’s Fourteen Points called for a world of independent national democracies, and his proposal for a League of Nations promised a peaceful means for resolving international disputes. In practice, the United States refused to join the League of Nations, and the British and the French ensured that the Treaty of Versailles humiliated Germany. But despite these shortcomings, the interwar liberal order functioned, for a time. The 1922 Washington Naval Treaty initially helped prevent a naval arms race between Japan and the Western allies. The 1925 Pact of Locarno guaranteed Germany’s western border. And the 1924 Dawes Plan and the 1929 Young Plan provided the Weimar government with enough liquidity to pay reparations while also funding urban infrastructure improvements and social welfare provisions. The system held until the collapse of the international economy after 1929. In both Germany and Japan, the resulting economic crisis discredited liberal and social democratic political parties, leading to the rise of authoritarian nationalists who promised to defend their people against the vicissitudes of the market and the treachery of foreign and domestic enemies.
It was only after World War II that liberal internationalists, led by those in the United States and the United Kingdom, learned how to manage the tension between free markets and national autonomy. The Marshall Plan, in which the United States, beginning in 1948, provided financial assistance to western Europe, did more than provide capital for postwar reconstruction. It also conditioned this aid on governments opening their economies to international trade, thereby strengthening liberal political coalitions between workers (who benefited from cheaper goods imported from abroad) and export-oriented capitalists (who gained access to global markets for their products). The institutions that came out of the 1944 Bretton Woods conference, including the World Bank and the IMF, offered loans and financial aid so that states could adjust to the fluctuations of the international market. As originally intended, this postwar system, which included the precursor to the EU, the European Economic Community, as well as the Bretton Woods institutions, was designed not to supersede national states but to allow them to cooperate while retaining policy autonomy. Crucially, leading democracies such as France, the United Kingdom, the United States, and West Germany decided to share some of their sovereignty in international organizations, which made their nation-states stronger rather than weaker. In more recent decades, however, these hard-won lessons have been set aside.
DISEMBEDDING LIBERALISM
For the first few decades following World War II, embedded liberalism—characterized by strong domestic welfare states supported by international institutions—succeeded in granting autonomy and democratic legitimacy to nation-states while curbing aggressive nationalism. Yet as early as the 1970s, this arrangement came under pressure from structural changes to the global economy and ideological assaults from libertarians and advocates of supra- and trans-nationalism. The resulting erosion of embedded liberalism has paved the way for the nationalist revival of today.
The Bretton Woods system had relied on countries fixing their exchange rates with the U.S. dollar, which was in turn backed by gold. But already by the early 1970s, chronic U.S. trade deficits and the increasing competitiveness of European and Japanese exports were making this system untenable. At the same time, the United States was experiencing “stagflation”—a combination of high unemployment and high inflation that was resistant to the traditional Keynesian strategies, such as government spending, on which postwar economic management had relied. In response, U.S. President Richard Nixon suspended the dollar’s convertibility to gold in 1971, moving toward an unregulated market system of floating exchange rates. Other structural developments also put embedded liberalism under strain: the globalization of production and markets strengthened the relative power of capital, which was highly mobile, over labor, which was less so. This weakened the power of traditional labor unions, undermining the capital-labor bargain at the center of the postwar order.
These economic trends were accompanied by ideological developments that challenged both core principles of embedded liberalism: social democratic regulation of the economy and the political primacy of the nation-state. The first of these developments was the rise of free-market fundamentalism, pioneered by economists such as Friedrich Hayek and Milton Friedman and adopted by political leaders such as British Prime Minister Margaret Thatcher and U.S. President Ronald Reagan. Beginning with Thatcher’s election in 1979, these leaders and their ideological backers sought to drastically curtail the welfare state and return to the laissez-faire policies of the nineteenth century. This market fundamentalism was initially used by the right as a cudgel against the social democratic left, but over time it was adopted by leaders of center-left parties, such as French President François Mitterrand, U.S. President Bill Clinton, and British Prime Minister Tony Blair, who during the 1980s and 1990s pushed through financial deregulation and cuts to the welfare state. These policies hurt members of the white working class, alienating them from the political system and the center-left parties that had traditionally protected their interests.
The other element of the ideological assault on embedded liberalism came from enthusiasts of supra- and trans-nationalism. In an influential 1997 essay in this magazine, Jessica Mathews argued that technological change and the end of the Cold War had rendered the nation-state obsolete. Its functions, according to Mathews and other, like-minded thinkers, would be usurped by supranational organizations such as the EU, coordinating institutions such as the World Trade Organization, and various transnational networks of activists, experts, and innovators. In 1993, for instance, Europe had adopted a common market and created the bureaucratic edifice of the EU to administer the resulting flows of goods, money, and people. This was followed by the adoption of the euro in 2002. Although intended to promote European integration, the euro effectively stripped its members of monetary sovereignty, greatly reducing their policy autonomy.
This transnational paradise, moreover, left little room for democracy. The gradual transfer of authority from national governments to Brussels has put considerable power in the hands of unelected technocrats. Europeans who are unhappy with EU policies have no way to vote out the bureaucrats in Brussels; their only effective way to impose democratic accountability is through national elections, creating a strong incentive for nationalist mobilization. Different European countries have different policy equilibriums based on the preferences of their voters, the needs of their national economies, and the rhetorical strategies of their national political elites. The search for nationally tailored solutions, however, is confounded by the EU’s requirement that all member states agree on a policy in lockstep. After the 2015 migrant crisis, initiated by Germany’s decision to briefly open its borders, Brussels began cajoling and coercing other EU member states to accept some of the migrants in the name of burden sharing. Small wonder, then, that Hungarians, Italians, and Poles who opposed immigration began flocking to nationalist politicians who promised to resist pressure from the EU. Similar policy divergences on economic austerity have also been expressed in terms of national resentments—between Germans and Greeks, for instance—and have fueled mobilization against Brussels.
Scholars debate whether populist nationalism in the United States and Europe arises mainly from economic or cultural grievances, but the most persuasive explanation is that nationalist political entrepreneurs have combined both grievances into a narrative about perfidious elites who coddle undeserving out-groups—immigrants and minorities—while treating the nation’s true people with contempt. In this view, elites use bureaucratic and legal red tape to shield themselves from accountability and enforce politically correct speech norms to silence their critics. This story doesn’t fit the facts—among other anomalies, residents of rural regions with few immigrants are among the most dedicated opponents of refugees—but it should not be surprising that a narrative of self-dealing elites and dangerous immigrants has resonated, given humans’ well-known propensity for in-group bias. Nativistic prejudice is latent, ready to be activated in times of cultural flux or economic strain when traditional elites seem unresponsive.
A different face of the contemporary nationalist revival is the rise of authoritarian populism in developing states such as Brazil, India, the Philippines, and Turkey. Like older rising illiberal powers, such as nineteenth-century Germany, these countries have been able to use the so-called advantages of backwardness—cheap labor, technology transfers, and state-directed resource allocation—to grow rapidly, at least until they reach approximately one-fourth of U.S. GDP per capita. Beyond that point, growth tends to slow markedly unless states follow in the footsteps of reformers such as Japan, South Korea, and Taiwan and adopt the full panoply of liberal institutions. Often, however, their governments eschew liberal reform. Instead, facing stagnating growth and inefficiencies from corruption, they double down on some combination of demagogic nationalism, repression, and crippling overinvestment in massive infrastructure projects, which are designed to retain the support of business elites. In such cases, it is the responsibility of these states’ liberal economic partners to press for reforms—at the risk, however, of triggering even more nationalist backlash.
Supporters of the far-right Golden Dawn party at a rally in Athens, Greece, January 2015.
IF IT AIN’T BROKE, DON’T FIX IT
How, then, should leaders respond to the rise of nationalism? The first step is to recognize that the tension fueling contemporary nationalism is not new. It is precisely the tension identified by Polanyi, which the embedded liberal order of the postwar years was designed to manage: the contradiction between free markets and national autonomy. Illiberal nationalism has never been particularly successful at governing, but it is a temptation whenever liberalism drifts too far away from democratic accountability.
Historically, this contradiction has been resolved only through an order of democratic welfare states supported by international institutions, which grant them the policy flexibility to adjust to market fluctuations without inflicting undue pain on their citizens. Resolving today’s nationalist dilemma will require abandoning laissez-faire economics and unaccountable supranationalism and returning to the principles of embedded liberalism, updated for the present day. This, in turn, calls for a revival of the basic practices of postwar liberalism: national-level democratic accountability, economic coordination through international institutions, and compromise on competing priorities.
Today, political polarization makes compromise seem unlikely. Both illiberal nationalists and cosmopolitan elites have, in their own way, doubled down on one-sided solutions, seeking to rout their opponents rather than reach a durable settlement. Trump calls for a border wall and a ban on Muslim immigration, and his opponents continue to speak as if immigration and refugee policy were a matter of abstract legal and moral commitments rather than a subject for democratic deliberation. In Europe, meanwhile, the Germans cling to austerity policies that punish countries such as Greece and Italy, and illiberal populists fume against EU restrictions on their autonomy.
Yet the very failure of these one-sided measures may open up space for a renewed embedded liberalism. In the United States, President Barack Obama’s Affordable Care Act, which has mostly survived despite egregious assaults from the right, is a clear example of what a modern embedded liberal solution might look like. It strengthened the welfare state by vastly expanding access to state-subsidized health care and accommodating the needs of the private sector—an echo of the domestic capital-labor compromises that made the postwar order possible.
Similar arrangements might be sought on immigration. For instance, rich countries might agree to coordinate investment in poorer ones in order to stabilize migration flows by improving conditions in the source countries. These arrangements should be institutionalized before the next crisis hits, not improvised as they were in 2015–16, when Germany and the EU hurriedly struck a deal with Turkey, paying Ankara billions of euros in exchange for housing refugees. And although international institutions such as the EU should play a role in coordinating immigration policy, democratic states must be allowed to tailor their own policies to the preferences of their voters. Pressuring countries to accept more migrants than they want simply plays into the hands of illiberal populists. And giving the populists some of what they want now may improve the prospects for embedded liberal compromises in the future. In December 2018, Hungarians began protesting in massive numbers against their nationalist government’s policy of forced overtime, which had been enacted due to labor shortages. Faced with such problems, some of the country’s anti-immigration zealots may soon begin to reassess their stance.
In the essay in which he coined the term “embedded liberalism,” Ruggie noted that institutionalized power always serves a social purpose. The purpose of the postwar order, in his view, had been to reach a compromise between the competing imperatives of liberal markets and national autonomy. Today’s crisis of liberalism stems in large part from a loss of this purpose. The institutions of the present international order have ceased responding to the wishes of national electorates.
The evidence of the past century suggests, however, that democratic accountability is necessary for both political stability and economic welfare. And even today, nation-states remain the most reliable political form for achieving and sustaining democracy. It is likely impossible to remake them in order to better conform to the needs of global markets and transnational institutions, and even if it were possible, it would be a bad idea. Instead, defenders of the liberal project must begin adapting institutions to once again fit the shape of democratic nation-states. This was the original dream of the embedded liberal order; now is the time to revive it.
False Flags
There appears to be one indisputable global trend today: the rise of nationalism. Self-described nationalists now lead not only the world’s largest autocracies but also some of its most populous democracies, including Brazil, India, and the United States. A deepening fault line seems to divide cosmopolitans and nationalists, advocates of “drawbridge down” and “drawbridge up.” And it seems that more and more people are opting for the latter—for “closed” over “open.”
They do so, many commentators claim, because they feel threatened by something called “globalism” and crave to have their particular national identities recognized and affirmed. According to this now conventional narrative, today’s surge of nationalist passions represents a return to normal: the attempts to create a more integrated world after the Cold War were a mere historical blip, and humanity’s tribal passions have now been reawakened.
This, however, is a deeply flawed interpretation of the current moment. In reality, the leaders described as “nationalists” are better understood as populist poseurs who have won support by drawing on the rhetoric and imagery of nationalism. Unfortunately, they have managed to convince not only their supporters but also their opponents that they are responding to deep nationalist yearnings among ordinary people. The more that defenders of liberalism and the liberal order buy the stories these leaders (and associated movements) are selling and adopt the framing and rhetoric of populism, the more they allow their opponents’ ideas to shape political debates. In doing so, parties and institutions of the center-left and the center-right are helping bring about the very thing they hope to avoid: more closed societies and less global cooperation to address common problems.
THE PEOPLE AND THE NATION
What the past few years have witnessed is not the rise of nationalism per se but the rise of one variant of it: nationalist populism. “Nationalism” and “populism” are often conflated, but they refer to different phenomena. The most charitable definition of “nationalism” is the idea that cultural communities should ideally possess their own states and that loyalty to fellow nationals ought to trump other obligations. “Populism,” meanwhile, is sometimes taken to be a shorthand for “criticism of elites,” and it is true that populists, when in opposition, criticize sitting governments and other parties. More important, however, is their claim that they and they alone represent what they usually call “the real people” or “the silent majority.” Populists thus declare all other contenders for power to be illegitimate. In this way, populists’ complaints are always fundamentally personal and moral: the problem, invariably, is that their adversaries are corrupt. In this sense, populists are indeed antiestablishment. But populists also deem citizens who do not take their side to be inauthentic, not part of “the real people”: they are un-American, un-Polish, un-Turkish, and so on. Populism attacks not merely elites and establishments but also the very idea of political pluralism—with vulnerable minorities usually becoming the first victims.
Turkish President Recep Tayyip Erdogan and Hungarian Prime Minister Viktor Orban in Budapest, Hungary, October 2018
This antipluralism explains why populist leaders tend to take their countries in an authoritarian direction if they have sufficient power and if countervailing forces, such as an independent judiciary or free media, are not strong enough to resist them. Such leaders reject all criticisms with the claim that they are merely executing the people’s will. They seek out and thrive on conflict; their political business model is permanent culture war. In a way, they reduce all political questions to questions of belonging: whoever disagrees with them is labeled an “enemy of the people.”
Populism is not a doctrine; it is more like a frame. And all populists have to fill the frame with content that will explain who “the real people” are and what they want. That content can take many different forms and can draw on ideas from the left or the right. From the late 1990s until his death in 2013, the Venezuelan populist leader Hugo Chávez created a disastrous “socialism for the twenty-first century” in his country, wrecking its economy and demonizing all of his opponents in the process. Today’s right-wing populists mostly draw on nationalist ideas, such as distrust of international institutions (even if a nation joined such organizations voluntarily), economic protectionism, and hostility to the idea of providing development aid to other countries. These beliefs often cross over into nativism or racism, as when nationalist populists promote the idea that only native-born citizens are entitled to jobs and benefits or insinuate that some immigrants can never be loyal citizens. To be sure, one can be a nationalist without being a populist; a leader can maintain that national loyalties come first without saying that he or she alone can represent the nation. But today, all right-wing populists are nationalists. They promise to take back control on behalf of “the real people,” which in their definition is never the population as a whole. Nigel Farage, the leader of the far-right UK Independence Party at the time of the Brexit vote, celebrated the outcome as a “victory for real people,” implying that the 48 percent of British voters who preferred that their country stay in the EU were not properly part of the nation.
DON’T BELIEVE THE HYPE
The potent combination of nationalism and populism has spread in recent years. A populist playbook—perhaps even a populist art of governance—has emerged as politicians in disparate countries have studied and learned from one another’s experiences. In 2011, Jaroslaw Kaczynski, who leads Poland’s populist ruling Law and Justice party, announced that he wanted to create “Budapest in Warsaw,” and he has systematically copied the strategies pioneered by Prime Minister Viktor Orban in Hungary. On the other side of the world, Jair Bolsonaro was elected president by following the playbook, railing against immigration (even though more people leave Brazil than enter) and declaring, “Brazil above all, God above everyone.”
To some observers, it appears that nationalist populists have profited from a bitter backlash against globalization and increasing cultural diversity. This has, in fact, become the conventional wisdom not only among populists themselves but also among academics and liberal opponents of populism. The irony, however, is that although critics often charge populists with peddling reductive messages, it is these same critics who now grasp at simple explanations for populism’s rise. In doing so, many liberal observers play right into their opponents’ hands by taking at face value and even amplifying the dubious stories that nationalist populists tell about their own success.
For example, Orban has claimed that the 2010 parliamentary elections in Hungary constituted a “revolution at the voting booths” and that Hungarians had endorsed what he has described as his “Christian and national” vision of an “illiberal democracy.” In reality, all that happened was that a majority of Hungarians were deeply disappointed by the country’s left-wing government and did what standard democratic theory recommended they do: they voted for the main opposition party, Orban’s Fidesz. By the next time Hungarians went to the polls, in 2014, Orban had gerrymandered the electoral map in Fidesz’s favor; erected the Orwellian-sounding System of National Cooperation, which included drastic restrictions on media pluralism and civil society; and weakened the independence of the judiciary and other sources of checks and balances.
A carnival float depicting the leader of Poland’s ruling Law and Justice party, Jaroslaw Kaczynski, and Hungarian Prime Minister Viktor Orban in Düsseldorf, Germany, February 2018
Similarly, in the 2016 U.S. presidential election, “the people” did not comprehensively endorse a nationalist “America first” agenda. Rather, in more mundane fashion, citizens who identified as Republicans came out to vote for their party’s candidate, who was not a typical politician but also hardly the leader of a spontaneous grass-roots antiglobalization movement. Donald Trump ultimately won the backing of the party machinery; the enthusiastic support of establishment Republican figures such as Chris Christie, Newt Gingrich, and Rudy Giuliani; and near-constant cheerleading on Fox News. As the political scientists Christopher Achen and Larry Bartels have argued, it turned out to be a fairly normal election, albeit with an abnormal Republican candidate who faced a deeply unpopular Democratic contender.
Likewise, Bolsonaro did not win last year’s presidential election in Brazil because a majority of Brazilians wanted a nationalist military dictatorship. The bulk of Bolsonaro’s support came from citizens fed up with the corruption of traditional political elites from across the political spectrum and unwilling to return the left-wing Workers’ Party to power. It also helped that the country’s powerful agricultural sector and, eventually, its financial and industrial elites threw their weight behind the far-right candidate—as did influential evangelical Christian leaders.
As the political scientist Cas Mudde has pointed out, nationalist populists often represent not a silent majority but a very loud minority. They do not come to power because their ideology is an unstoppable world-historical force. Rather, they depend on the center-right’s willingness to collaborate with them—as was the case for Trump, Bolsonaro, and the pro-Brexit campaigners—or they win by at least partly hiding their intentions, as was the case with Orban.
Once in power, most nationalist populists don’t actually work to take back control on the people’s behalf, as they promised to do. Instead, they perform a sort of nationalist pantomime of largely symbolic gestures: for example, promising to build walls (which achieve nothing concrete other than inciting hatred against minorities) or occasionally having the state seize a multinational company. Behind the scenes, such leaders are generally quite accommodating of international institutions and multinational corporations. They are concerned less with genuinely reasserting their countries’ autonomy than with appearing to do so.
Take Trump, for instance. He has threatened individual companies that planned to close facilities in the United States. But he has also stripped away labor regulations at a breakneck pace, making it hard to claim that he cares about protecting workers. Likewise, after deriding the North American Free Trade Agreement during his campaign, Trump wound up negotiating a new trade deal with Canada and Mexico whose terms are substantially similar to those of NAFTA. In Hungary, Orban has nationalized some industries and railed against foreign corporations that he claimed exploited the Hungarian people. Yet his government recently passed a law that allows employers to demand that workers put in 400 hours of overtime each year, up from the prior limit of 250 hours—and to withhold payment for that extra labor for up to three years. The main beneficiaries of this measure (dubbed “the slave law” by its critics) are the German car companies that employ thousands of Hungarian factory workers.
NOT EVERY FIGHT IS CULTURAL
Many politicians, especially those from mainstream center-right parties, have been at a loss when it comes to countering nationalist populism. Increasingly, though, they are betting on a seemingly paradoxical strategy of what one might call “destruction through imitation.” Austrian Chancellor Sebastian Kurz and Dutch Prime Minister Mark Rutte, for example, have tried to outflank their far-right competitors with tough talk on refugees, Islam, and immigration.
This strategy is unlikely to succeed in the long run, but it is bound to do serious damage to European democracy. No matter how fast one chases populists to the fringes, it’s almost impossible to catch them. Extremist outfits such as the Danish People’s Party or the Party for Freedom of the far-right Dutch provocateur Geert Wilders will never be satisfied with the immigration proposals of more established parties, no matter how restrictive they are. And their supporters are unlikely to switch their allegiances: they’ll continue to prefer the originals over the imitators.
A deeper concern is the effect of established parties’ opportunistic shifts in response to the populist threat. First, they denounce populists as demagogues peddling lies. Then, when support for populists grows, mainstream politicians begin to suggest that the populists have intuited, or even firmly know, something about people’s concerns and anxieties that others haven’t, or don’t. This reflects an understanding of democratic representation as an almost mechanical system for reproducing existing interests, ideas, and even identities. In this view, savvy populist political entrepreneurs discover trends within the polity and then import them into the political system.
Supporters of Brazilian President Jair Bolsonaro in Brasília, Brazil, October 2018
But that is not how democracy really works. Representation is a dynamic process, in which citizens’ self-perceptions and identities are heavily influenced by what they see, hear, and read: images, words, and ideas produced and circulated by politicians, the media, civil society, and even friends and family members. Modern democracy is a two-way street, in which representative systems do not merely reflect interests and political identities; they shape them, as well.
Nationalist populists have benefited greatly from this process, as media organizations and scholars have adopted their framing and rhetoric, with the effect of ratifying and amplifying their messages. Casual, seemingly self-evident accounts of “ordinary people” who have been “left behind” or “disrespected” and who fear “the destruction of their culture” need to be treated with extreme caution: they are not necessarily accurate descriptions of people’s lived experience. One can frame, say, the French government’s recent decision to raise taxes on gasoline and to introduce tighter speed limits in the countryside—steps that spurred the “yellow vest” protest movement—as demonstrating disrespect for a “way of life” in rural and exurban areas. But a more mundane interpretation is that the French government simply failed to see how particular policies would have different effects on different parts of the population. The government failed at distributive justice, not at cultural recognition.
Across Europe and the United States, journalists and analysts have posited that many people—especially older white people—feel disrespected by elites. It’s hard to ascertain how many people have directly encountered disrespect. But virtually day and night—on talk radio, on TV news programs, and on social media—millions of people are told that they feel disrespected. What is routinely presented as a cultural conflict between supposedly authentic rural heartlands and cosmopolitan cities usually involves a much less dramatic fight over how opportunities are distributed through regulatory and infrastructure decisions: from the price of airline tickets for flights to more remote areas, to the status of community banks, to policies that determine the cost of housing in big cities.
By casting all issues in cultural terms and by embracing the idea that populists have developed a unique purchase on people’s concerns and anxieties, established parties and media organizations have created something akin to a self-fulfilling prophecy. Once the entire political spectrum adopts populist language about voters’ interests and identities, more and more people will begin to understand themselves and their interests in those terms. For example, voters fed up with established center-right parties might initially cast protest votes for populist parties such as the far-right Alternative for Germany (AfD) or outsider political candidates such as Trump. But if those voters are then continuously portrayed as “AfD people” or as members of “Trump’s base,” they may well come to adopt those identities and develop a more permanent sense of allegiance to the party or politician who at first represented little more than a way to express dissatisfaction with the status quo. Eventually, as mainstream parties opportunistically adapt their messages and media commentators lazily repeat populist talking points, the entire political spectrum can shift rightward.
BEAT THEM, DON’T JOIN THEM
This argument may sound like liberal wishful thinking: “People are not nearly as nationalist as populists claim! Conflicts are really all about material interests and not about culture!” But the point is not that fights over culture and identity are illusory or illegitimate just because populists always happen to promote them. Rather, the point is that establishment institutions are too quickly turning to culture and identity to explain politics. In this way, they are playing into populists’ hands—doing their jobs for them, in effect.
Consider, for example, populist attacks on “globalists” who favor “open borders.” Even center-left parties are now ritually distancing themselves from that idea, even though, in reality, no politician of any consequence anywhere wants to open all borders. Even among political philosophers not constrained by political concerns, only a very small minority calls for the abolition of frontiers. It is true that advocates of global governance and economic globalization have made serious blunders: they often presented their vision of the world as an inevitable outcome, as when British Prime Minister Tony Blair asserted in 2005 that debating globalization was like “debating whether autumn should follow summer.” Some supporters of free trade falsely claimed that everyone would benefit from a more integrated world. But nationalist populists don’t truly want to address those errors. They seek, instead, to cynically exploit them in order to weaken democratic institutions and lump together advocates of globalization, transnational tax evaders, and high-flying private equity investors—along with human rights advocates and immigrants, refugees, and many other marginalized groups—into an undifferentiated “cosmopolitan, rootless elite”: a “them” to pit against an “us.”
There are deep and often legitimate conflicts about trade, immigration, and the shape of the international order. Liberals should not present their choices on these issues as self-evidently correct or as purely win-win; they must convincingly make the case for their ideas and justify their stance to the disadvantaged. But they should also not adopt the framing and rhetoric of populists, opportunistic center-right politicians, and academics who make careers out of explaining away xenophobic views as merely symptoms of economic anxiety. Doing so will lead liberals to make preemptive concessions that betray their ideals.
Why Nationalism Works
Nationalism has a bad reputation today. It is, in the minds of many educated Westerners, a dangerous ideology. Some acknowledge the virtues of patriotism, understood as the benign affection for one’s homeland; at the same time, they see nationalism as narrow-minded and immoral, promoting blind loyalty to a country over deeper commitments to justice and humanity. In a January 2019 speech to his country’s diplomatic corps, German President Frank-Walter Steinmeier put this view in stark terms: “Nationalism,” he said, “is an ideological poison.”
In recent years, populists across the West have sought to invert this moral hierarchy. They have proudly claimed the mantle of nationalism, promising to defend the interests of the majority against immigrant minorities and out-of-touch elites. Their critics, meanwhile, cling to the established distinction between malign nationalism and worthy patriotism. In a thinly veiled shot at U.S. President Donald Trump, a self-described nationalist, French President Emmanuel Macron declared last November that “nationalism is a betrayal of patriotism.”
The popular distinction between patriotism and nationalism echoes the one made by scholars who contrast “civic” nationalism, according to which all citizens, regardless of their cultural background, count as members of the nation, with “ethnic” nationalism, in which ancestry and language determine national identity. Yet efforts to draw a hard line between good, civic patriotism and bad, ethnic nationalism overlook the common roots of both. Patriotism is a form of nationalism. They are ideological brothers, not distant cousins.
At their core, all forms of nationalism share the same two tenets: first, that members of the nation, understood as a group of equal citizens with a shared history and future political destiny, should rule the state, and second, that they should do so in the interests of the nation. Nationalism is thus opposed to foreign rule by members of other nations, as in colonial empires and many dynastic kingdoms, as well as to rulers who disregard the perspectives and needs of the majority.
Over the past two centuries, nationalism has been combined with all manner of other political ideologies. Liberal nationalism flourished in nineteenth-century Europe and Latin America, fascist nationalism triumphed in Italy and Germany during the interwar period, and Marxist nationalism motivated the anticolonial movements that spread across the “global South” after the end of World War II. Today, nearly everyone, left and right, accepts the legitimacy of nationalism’s two basic tenets. This becomes clearer when contrasting nationalism with other doctrines of state legitimacy. In theocracies, the state should be ruled in the name of God, as in the Vatican or the caliphate of the Islamic State (or ISIS). In dynastic kingdoms, the state is owned and ruled by a family, as in Saudi Arabia. In the Soviet Union, the state was ruled in the name of a class: the international proletariat.
Since the fall of the Soviet Union, the world has become a world of nation-states governed according to nationalist principles. Identifying nationalism exclusively with the political right means misunderstanding the nature of nationalism and ignoring how deeply it has shaped almost all modern political ideologies, including liberal and progressive ones. It has provided the ideological foundation for institutions such as democracy, the welfare state, and public education, all of which were justified in the name of a unified people with a shared sense of purpose and mutual obligation. Nationalism was one of the great motivating forces that helped beat back Nazi Germany and imperial Japan. And nationalists liberated the large majority of humanity from European colonial domination.
Nationalism is not an irrational sentiment that can be banished from contemporary politics through enlightening education; it is one of the modern world’s foundational principles and is more widely accepted than its critics acknowledge. Who in the United States would agree to be ruled by French noblemen? Who in Nigeria would publicly call for the British to come back?
With few exceptions, we are all nationalists today.
THE NATION IS BORN
Nationalism is a relatively recent invention. In 1750, vast multinational empires—Austrian, British, Chinese, French, Ottoman, Russian, and Spanish—governed most of the world. But then came the American Revolution, in 1775, and the French Revolution, in 1789. The doctrine of nationalism—rule in the name of a nationally defined people—spread gradually across the globe. Over the next two centuries, empire after empire dissolved into a series of nation-states. In 1900, roughly 35 percent of the globe’s surface was governed by nation-states; by 1950, it was already 70 percent. Today, only half a dozen dynastic kingdoms and theocracies remain.
Where did nationalism come from, and why did it prove so popular? Its roots reach back to early modern Europe. European politics in this period—roughly, the sixteenth through the eighteenth centuries—was characterized by intense warfare between increasingly centralized, bureaucratic states. By the end of the eighteenth century, these states had largely displaced other institutions (such as churches) as the main providers of public goods within their territory, and they had eliminated or co-opted competing centers of power, such as the independent nobility. The centralization of power, moreover, promoted the spread of a common language within each state, at least among the literate, and provided a shared focus for the emerging civil society organizations that were then becoming preoccupied with matters of state.
Europe’s competitive and war-prone multistate system drove rulers to extract ever more taxes from their populations and to expand the role of commoners in the military. This, in turn, gave commoners leverage to demand from their rulers increased political participation, equality before the law, and better provision of public goods. In the end, a new compact emerged: that rulers should govern in the population’s interests, and that as long as they did so, the ruled owed them political loyalty, soldiers, and taxes. Nationalism at once reflected and justified this new compact. It held that the rulers and the ruled both belonged to the same nation and thus shared a common historical origin and future political destiny. Political elites would look after the interests of the common people rather than those of their dynasty.
Why was this new model of statehood so attractive? Early nation-states—France, the Netherlands, the United Kingdom, and the United States—quickly became more powerful than the old dynastic kingdoms and empires. Nationalism allowed rulers to raise more taxes from the ruled and to count on their political loyalty. Perhaps most important, nation-states proved able to defeat empires on the battlefield. Universal military conscription—invented by the revolutionary government of France—enabled nation-states to recruit massive armies whose soldiers were motivated to fight for their fatherland. From 1816 to 2001, nation-states won somewhere between 70 and 90 percent of their wars with empires or dynastic states.
As the nation-states of western Europe and the United States came to dominate the international system, ambitious elites around the world sought to match the West’s economic and military power by emulating its nationalist political model. Perhaps the most famous example is Japan, where in 1868, a group of young Japanese noblemen overthrew the feudal aristocracy, centralized power under the emperor, and embarked on an ambitious program to transform Japan into a modern, industrialized nation-state—a development known as the Meiji Restoration. Only one generation later, Japan was able to challenge Western military power in East Asia.
Nationalism did not spread only because of its appeal to ambitious political elites, however. It was also attractive for the common people, because the nation-state offered a better exchange relationship with the government than any previous model of statehood had. Instead of graduated rights based on social status, nationalism promised the equality of all citizens before the law. Instead of restricting political leadership to the nobility, it opened up political careers to talented commoners. Instead of leaving the provision of public goods to guilds, villages, and religious institutions, nationalism brought the power of the modern state to bear in promoting the common good. And instead of perpetuating elite contempt for the uncultured plebs, nationalism elevated the status of the common people by making them the new source of sovereignty and by moving popular culture to the center of the symbolic universe.
THE BENEFITS OF NATIONALISM
In countries where the nationalist compact between the rulers and the ruled was realized, the population came to identify with the idea of the nation as an extended family whose members owed one another loyalty and support. Where rulers held up their end of the bargain, that is, citizens embraced a nationalist vision of the world. This laid the foundation for a host of other positive developments.
One of these was democracy, which flourished where national identity was able to supersede other identities, such as those centered on religious, ethnic, or tribal communities. Nationalism provided the answer to the classic boundary question of democracy: Who are the people in whose name the government should rule? By limiting the franchise to members of the nation and excluding foreigners from voting, democracy and nationalism entered an enduring marriage.
At the same time as nationalism established a new hierarchy of rights between members (citizens) and nonmembers (foreigners), it tended to promote equality within the nation itself. Because nationalist ideology holds that the people represent a united body without differences of status, it reinforced the Enlightenment ideal that all citizens should be equal in the eyes of the law. Nationalism, in other words, entered into a symbiotic relationship with the principle of equality. In Europe, in particular, the shift from dynastic rule to the nation-state often went hand in hand with a transition to a representative form of government and the rule of law. These early democracies initially restricted full legal and voting rights to male property owners, but over time, those rights were extended to all citizens of the nation—in the United States, first to poor white men, then to white women and people of color.
Nationalism also helped establish modern welfare states. A sense of mutual obligation and shared political destiny popularized the idea that members of the nation—even perfect strangers—should support one another in times of hardship. The first modern welfare state was created in Germany during the late nineteenth century at the behest of the conservative chancellor Otto von Bismarck, who saw it as a way to ensure the working class’ loyalty to the German nation rather than the international proletariat. The majority of Europe’s welfare states, however, were established after periods of nationalist fervor, mostly after World War II in response to calls for national solidarity in the wake of shared suffering and sacrifice.
BLOODY BANNERS
Yet as any student of history knows, nationalism also has a dark side. Loyalty to the nation can lead to the demonization of others, whether foreigners or allegedly disloyal domestic minorities. Globally, the rise of nationalism has increased the frequency of war: over the last two centuries, the foundation of the first nationalist organization in a country has been associated with an increase in the yearly probability of that country experiencing a full-scale war, from an average of 1.1 percent to an average of 2.5 percent.
About one-third of all contemporary states were born in a nationalist war of independence against imperial armies. The birth of new nation-states has also been accompanied by some of history’s most violent episodes of ethnic cleansing, generally of minorities that were considered disloyal to the nation or suspected of collaborating with its enemies. During the two Balkan wars preceding World War I, newly independent Bulgaria, Greece, and Serbia divided up the European parts of the Ottoman Empire among themselves, expelling millions of Muslims across the new border into the rest of the empire. Then, during World War I, the Ottoman government engaged in massive killings of Armenian civilians. During World War II, Hitler’s vilification of the Jews—whom he blamed for the rise of Bolshevism, which he saw as a threat to his plans for a German empire in eastern Europe—eventually led to the Holocaust. After the end of that war, millions of German civilians were expelled from the newly re-created Czechoslovakian and Polish states. And in 1947, massive numbers of Hindus and Muslims were killed in communal violence when India and Pakistan became independent states.
Ethnic cleansing is perhaps the most egregious form of nationalist violence, but it is relatively rare. More frequent are civil wars, fought either by nationalist minorities who wish to break away from an existing state or between ethnic groups competing to dominate a newly independent state. Since 1945, 31 countries have experienced secessionist violence and 28 have seen armed struggles over the ethnic composition of the national government.
INCLUSIVE AND EXCLUSIVE
Although nationalism has a propensity for violence, that violence is unevenly distributed. Many countries have remained peaceful after their transition to a nation-state. Understanding why requires focusing on how governing coalitions emerge and how the boundaries of the nation are drawn. In some countries, majorities and minorities are represented in the highest levels of the national government from the outset. Switzerland, for instance, integrated French-, German-, and Italian-speaking groups into an enduring power-sharing arrangement that no one has ever questioned since the modern state was founded, in 1848. Correspondingly, Swiss nationalist discourse portrays all three linguistic groups as equally worthy members of the national family. There has never been a movement by the French- or the Italian-speaking Swiss minority to secede from the state.
In other countries, however, the state was captured by the elites of a particular ethnic group, who then proceeded to shut other groups out of political power. This raises the specter not just of ethnic cleansing pursued by paranoid state elites but also of secessionism or civil war launched by the excluded groups themselves, who feel that the state lacks legitimacy because it violates the nationalist principle of self-rule. Contemporary Syria offers an extreme example of this scenario: the presidency, the cabinet, the army, the secret service, and the higher levels of the bureaucracy are all dominated by Alawites, who make up just 12 percent of the country’s population. It should come as no surprise that many members of Syria’s Sunni Arab majority have been willing to fight a long and bloody civil war against what they regard as alien rule.
Whether the configuration of power in a specific country developed in a more inclusive or exclusive direction is a matter of history, stretching back before the rise of the modern nation-state. Inclusive ruling coalitions—and a correspondingly encompassing nationalism—have tended to arise in countries with a long history of centralized, bureaucratic statehood. Today, such states are better able to provide their citizens with public goods. This makes them more attractive as alliance partners for ordinary citizens, who shift their political loyalty away from ethnic, religious, and tribal leaders and toward the state, allowing for the emergence of more diverse political alliances. A long history of centralized statehood also fosters the adoption of a common language, which again makes it easier to build political alliances across ethnic divides. Finally, in countries where civil society developed relatively early (as it did in Switzerland), multiethnic alliances for promoting shared interests have been more likely to emerge, eventually leading to multiethnic ruling elites and more encompassing national identities.
BUILDING A BETTER NATIONALISM
Unfortunately, these deep historical roots mean that it is difficult, especially for outsiders, to promote inclusive ruling coalitions in countries that lack the conditions for their emergence, as is the case in many parts of the developing world. Western governments and international institutions, such as the World Bank, can help establish these conditions by pursuing long-term policies that increase governments’ capacity to provide public goods, encourage the flourishing of civil society organizations, and promote linguistic integration. But such policies should strengthen states, not undermine them or seek to perform their functions. Direct foreign help can reduce, rather than foster, the legitimacy of national governments. Analysis of surveys conducted by the Asia Foundation in Afghanistan from 2006 to 2015 shows that Afghans had a more positive view of Taliban violence after foreigners sponsored public goods projects in their districts.
In the United States and many other old democracies, the problem of fostering inclusive ruling coalitions and national identities is different. Sections of the white working classes in these countries abandoned center-left parties after those parties began to embrace immigration and free trade. The white working classes also resent their cultural marginalization by liberal elites, who champion diversity while presenting whites, heterosexuals, and men as the enemies of progress. The white working classes find populist nationalism attractive because it promises to prioritize their interests, shield them from competition from immigrants or lower-paid workers abroad, and restore their central and dignified place in the national culture. Populists didn’t have to invent the idea that the state should care primarily for core members of the nation; it has always been deeply embedded in the institutional fabric of the nation-state, ready to be activated once its potential audience grew large enough.
Overcoming these citizens’ alienation and resentment will require both cultural and economic solutions. Western governments should develop public goods projects that benefit people of all colors, regions, and class backgrounds, thereby avoiding the toxic perception of ethnic or political favoritism. Reassuring working-class, economically marginalized populations that they, too, can count on the solidarity of their more affluent and competitive fellow citizens might go a long way toward reducing the appeal of resentment-driven, anti-immigrant populism. This should go hand in hand with a new form of inclusive nationalism. In the United States, liberals such as the intellectual historian Mark Lilla and moderate conservatives such as the political scientist Francis Fukuyama have recently suggested how such a national narrative might be constructed: by embracing both majorities and minorities, emphasizing their shared interests rather than pitting white men against a coalition of minorities, as is done today by progressives and populist nationalists alike.
In both the developed and the developing world, nationalism is here to stay. There is currently no other principle on which to base the international state system. (Universalistic cosmopolitanism, for instance, has little purchase outside the philosophy departments of Western universities.) And it is unclear if transnational institutions such as the European Union will ever be able to assume the core functions of national governments, including welfare and defense, which would allow them to gain popular legitimacy.
The challenge for both old and new nation-states is to renew the national contract between the rulers and the ruled by building—or rebuilding—inclusive coalitions that tie the two together. Benign forms of popular nationalism follow from political inclusion. They cannot be imposed by ideological policing from above, nor by attempting to educate citizens about what they should regard as their true interests. In order to promote better forms of nationalism, leaders will have to become better nationalists, and learn to look out for the interests of all their people.
The Importance of Elsewhere
In October 2016, British Prime Minister Theresa May made her first speech to a Conservative conference as party leader. Evidently seeking to capture the populist spirit of the Brexit vote that brought down her predecessor, she spoke of “a sense—deep, profound, and, let’s face it, often justified—that many people have today that the world works well for a privileged few, but not for them.” What was needed to challenge this, May argued, was a “spirit of citizenship” lacking among the business elites that made up one strand of her party’s base. Citizenship, she said, “means a commitment to the men and women who live around you, who work for you, who buy the goods and services you sell.” She continued:
Today, too many people in positions of power behave as though they have more in common with international elites than with the people down the road, the people they employ, the people they pass on the street. But if you believe you are a citizen of the world, you are a citizen of nowhere. You don’t understand what citizenship means.
Although May never used the term, her target was clear: the so-called cosmopolitan elite.
Days after this speech, I was giving a lecture on nationalism for the BBC. The prime minister had been talking in Birmingham, the only one of the five largest British cities that had voted—by the barest of margins, 50.4 percent to 49.6 percent—for Brexit. I was speaking in the largest Scottish city, Glasgow, where two-thirds of the population had voted to stay in the EU, just as every other Scottish district did. Naturally, somebody asked me what I thought about May’s “citizen of nowhere” comment.
It wasn’t the first time I’d heard such a charge, and it won’t be the last. In the character of Mrs. Jellyby, the “telescopic philanthropist” of Bleak House, Charles Dickens memorably invoked someone who neglects her own children as she makes improving plans for the inhabitants of a far-off land and whose eyes “had a curious habit of seeming to look a long way off,” as if “they could see nothing nearer than Africa!” The attitude that May evoked has a similar affliction: it’s that of the frequent flyer who can scarcely glimpse his earthbound compatriots through the clouds.
But this is nearly the opposite of cosmopolitanism. The cosmopolitan task, in fact, is to be able to focus on both far and near. Cosmopolitanism is an expansive act of the moral imagination. It sees human beings as shaping their lives within nesting memberships: a family, a neighborhood, a plurality of overlapping identity groups, spiraling out to encompass all humanity. It asks us to be many things, because we are many things. And if its critics have seldom been more clamorous, the creed has never been so necessary.
NOWHERE MEN
Cosmopolitanism was born in the fourth century BC as an act of defiance, when Diogenes the Cynic—who came from Sinope, a Greek-speaking city on the Black Sea—first claimed he was a kosmopolitês. The word, which seems to be a neologism of his own, translates more or less as “citizen of the world.” Diogenes was fond of challenging the common sense of his day, and this word was meant to have a paradox built into it: a politês was a free adult male citizen of a polis, one of the self-governing Greek towns in southeastern Europe and Asia Minor, and the kosmos was, well, the whole of the universe. It would have been obvious to any of Diogenes’ contemporaries that you couldn’t belong to the universe in the same way as you belonged to a town such as Athens, which had some 30,000 free male adult citizens in his day (and a total population of perhaps 100,000). It was a contradiction in terms as obvious as the one in “global village,” a phrase coined by the media theorist Marshall McLuhan a little more than half a century ago. Village equals small; globe equals enormous. Cosmopolitanism takes something small and familiar and projects it onto a whole world of strangers.
Nonetheless, this paradoxical formulation has come to enjoy extraordinary appeal around the planet. Conservative populism may be on the rise in Europe, but in a 2016 study conducted by the BBC, nearly three-quarters of the Chinese and Nigerians polled—along with more than half of the Brazilians, Canadians, and Ghanaians polled—said that they saw themselves “more as a global citizen” than a citizen of their own country. Even two in five Americans felt the same way.
[Photo: A Chinese tourist supporting Atlético Madrid at the Champions League Final in Milan, Italy, May 2016]
Yet there is something misleading about this conception of identity. The BBC poll presupposes that one must weigh the relative importance of global and local allegiances against each other, as if they were bound to be in competition. That seems to be the wrong way to think about things. After all, I am, like millions of people, a voting member of at least three political entities: New York City, New York State, and the United States. If asked which I was more committed to, I’d have a hard time knowing how to answer. I’d feel the same puzzlement if my metaphorical citizenship of the world were added to the list. Because citizenship is a kind of identity, its pull, like that of all identities, varies with the context and the issue. During mayoral elections, it matters most that I’m a New Yorker; in senatorial elections, the city, the state, and the country all matter to me. In presidential elections, I also find myself thinking as both a citizen of the United States and a citizen of the world. So many of the gravest problems that face us—from climate change to pandemics—simply don’t respect political borders.
In her speech to her fellow Conservatives, May was asking not just for a sense of citizenship but also for patriotism, an attachment that is emotional, not merely procedural. Yet there’s no reason a patriot cannot feel strongly in some moments about the fate of the earth, just as a patriot can feel strongly about the prospects of a city. Managing multiple citizenships is something everyone has to do: if people can harbor allegiances to a city and a country, whose interests can diverge, why should it be baffling to speak of an allegiance to the wider world? My father, Joe Appiah, was an independence leader of Ghana and titled his autobiography The Autobiography of an African Patriot; he saw no inconsistency in telling his children, in the letter he left for us when he died, that we should remember always that we were citizens of the world.
PATRIOTIC COSMOPOLITANS
That thought is one my father probably got from Marcus Aurelius, the second-century Roman emperor whose Meditations lived alongside the Bible on his bedside table. Marcus wrote that for him, as a human being, his city and fatherland was the universe. It’s easy to dismiss this as so much imperial grandeur, and yet the point of the metaphor for Stoics such as Marcus was that people were obliged to take care of the whole community, to act responsibly with regard to the well-being of all their fellow world citizens. That has been the central thought of the cosmopolitan tradition for more than two millennia.
But there is something else important in that tradition, which developed more clearly in European cosmopolitanism in the eighteenth century: a recognition and celebration of the fact that our fellow world citizens, in their different places, with their different languages, cultures, and traditions, merit not just our moral concern but also our interest and curiosity. Interactions with foreigners, precisely because they are different, can open us up to new possibilities, as we can open up new possibilities to them. In understanding the metaphor of global citizenship, both the concern for strangers and the curiosity about them matter.
The German intellectual historian Friedrich Meinecke explored the modern philosophical origins of this idea in his 1907 book, Cosmopolitanism and the National State. Through a careful reading of German intellectuals from the Enlightenment until the late nineteenth century, he showed how the rise of German nationalism was intimately intertwined with a form of cosmopolitanism. In the late eighteenth century, Johann Gottfried Herder and other cosmopolitan thinkers began imagining a German nation that brought together the German-speaking peoples of dozens of independent states into a union founded on a shared culture and language, a shared national spirit.
It took a century for modern Germany to achieve that vision (although without the German-speaking parts of the Austro-Hungarian Empire). In 1871, a Prussian monarch presided over the unification of more than two dozen federated kingdoms, duchies, principalities, and independent cities. But as Meinecke showed, the thinkers behind this accomplishment were deeply respectful of the national spirits and peoples of other nations, as well. In true cosmopolitan spirit, Herder revered the literature and arts of foreigners. His ideas about national culture inspired a generation of folklorists, including the Brothers Grimm, but he also wrote essays on Shakespeare and Homer. One could be both cosmopolitan and patriotic; indeed, for the great liberal nationalists of the nineteenth century, patriotism was ultimately a vehicle for cosmopolitanism. It’s why Giuseppe Mazzini, a champion of Italian unification, urged his fellow citizens to “embrace the whole human family in your affections.”
The stock modern slander against the cosmopolitans—which played a central role in anti-Semitic Soviet propaganda under Stalin in the period after World War II—is that they are “rootless.” This accusation reflects not just moral blindness but also intellectual confusion. What’s distinctive about modern cosmopolitanism is its celebration of the contribution of every nation to the chorus of humanity. It is about sharing. And you cannot share if you have nothing to bring to the table. Cosmopolitans worthy of the label have rhizomes, spreading horizontally, as well as taproots, delving deep; they are anything but rootless.
[Photo: At a protest to demonstrate London’s solidarity with the EU, June 2016]
Another corollary of cosmopolitanism is worth stressing: in respecting the rights of others to be different from themselves, cosmopolitans extend that right to the uncosmopolitan. The thought that every human being matters—the universalism at the heart of cosmopolitanism—is not optional. Cosmopolitanism is thus also committed to the idea that individuals and societies have the right to settle for themselves many questions about what is worthwhile and many features of their social arrangements. In particular, many people value a sense of place and wish to be surrounded by others who speak a familiar language and who follow customs they think of as their own. Those people—the British journalist David Goodhart has dubbed them “Somewheres,” in contrast to “Anywheres”—are entitled to shape a social world that allows them these things, that grants them the proverbial comforts of home. And if they want to sustain those comforts by keeping away people unlike themselves or cultural imports from elsewhere, then (assuming certain moral basics of nondiscrimination are observed) that is their right.
The problem, of course, is that these uncosmopolitan localists live in societies with others who think differently. They must cohabit with the cosmopolitans, just as the cosmopolitans must cohabit with them. Furthermore, societies have moral and legal duties to admit at least some foreigners—namely, those escaping persecution and death. Those obligations are shared by the community of nations, so the burden must be distributed fairly. But each society must contribute to meeting the need.
The fact that the localists share societies with cosmopolitans in countries that have duties to asylum seekers constrains the ways in which the localist camp can achieve the comforts of home. But the existence of the localists constrains what the cosmopolitans can do, as well. Democracy is about respecting the legitimate desires of fellow citizens and seeking to accommodate them when you reasonably can.
PLAYING FAVORITES
If nationalism and cosmopolitanism are, far from being incompatible, actually intertwined, how has cosmopolitanism become such a handy bugbear for those who, like the political strategist Steve Bannon, seek to ally themselves with the spirit of nationalism? One reason is that some people have made excessive claims on behalf of cosmopolitanism. They have often been seduced by this tempting line of thought: if everybody matters, then they must matter equally, and if that is true, then each of us has the same moral obligations to everyone. Partiality—favoring those to whom one is connected by blood or culture or territory—can look morally arbitrary. The real enemy of those who worry about “citizens of nowhere” is not a reasonable cosmopolitanism but the different idea, occasionally espoused by people calling themselves “citizens of the world,” that it is wrong to be partial to your own place or people.
What the impartial version of cosmopolitanism fails to understand is that the fact of everybody’s mattering equally from the perspective of universal morality does not mean that each of us has the same obligations to everyone. I have a particular fondness for my nephews and nieces, one that does not extend to your nephews and nieces. Indeed, I believe it would be morally wrong not to favor my relatives when it comes to distributing my limited attention and treasure. Does it follow that I must hate your nephews and nieces or try to shape the world to their disadvantage? Surely not. I can recognize the legitimate moral interests of your family, while still paying special attention to mine. It’s not that my family matters more than yours; it’s that it matters more to me. And requiring people to pay special attention to their own is, as the great cosmopolitan philosopher Martha Nussbaum once put it, “the only sensible way to do good.”
We generally have a stronger attachment to those with whom we grew up and with whom we make our lives than we do to those outside the family. But we can still favor those with whom we share projects or identities, and it is a distinct feature of human psychology that we are capable of intense feelings around identities that are shared with millions or billions of strangers. Indeed, this characteristic is evident in the forms of nationalism that do not give rise to respect for other nations—as Herder’s did—but explode instead in hostility and xenophobia. That side of nationalism needs taming, and cosmopolitanism is one means of mastering it. But it is absurd to miss the other side of nationalism: its capacity to bring people together in projects such as creating a social welfare state or building a society of equals.
GLOBAL IDENTITY POLITICS
Beyond the charge that cosmopolitanism is inconsistent with nationalism, another objection to it holds that humanity as a whole is too abstract to generate a powerful sense of identity. But scale simply cannot be the problem. There are nearly 1.4 billion Chinese, and yet their Chinese identification is a real force in their lives and politics. The modern nation-state has always been a community too large for everyone to meet face-to-face; it has always been held together not by literal companionship but by imaginative identification. Cosmopolitans extend their imaginations only a small step further, and in doing so, they do not have to imagine away their roots. Gertrude Stein, the Pittsburgh-born, Oakland-raised writer who lived in Paris for four decades, was right: “What good are roots,” she asked, “if you can’t take them with you?”
To speak for global citizenship is not to oppose local citizenship, then. My father, a self-described citizen of the world, was deeply involved in the political life of his hometown, Kumasi, the capital of the old empire of Ashanti, to which he was proud to belong. He was active, too, in the Organization of African Unity (which became the African Union). He served his country, Ghana, at the UN, in which he also believed passionately. He loved Ashanti traditions, proverbs, and folktales, as well as Shakespeare; as a lawyer, he admired Cicero, whom he would quote at the drop of a hat, but also Thurgood Marshall and Mahatma Gandhi. He listened to the music of Bessie Smith (the African American “Empress of the Blues”), Sophie Tucker (a Ukrainian-born vaudeville star), and Umm Kulthum (an Egyptian singer), and he sang along to the work of the English musical-theater duo Gilbert and Sullivan. None of that stopped him from joining the Ghanaian independence movement, serving in Ghana’s national parliament, or laying the foundations of pro bono legal work in the country. He recognized that what May called the “bonds and obligations that make our society work” are global as well as local. He saw that those obligations existed not only in his home country and his hometown but also in the international arena. He recognized what that very English poet Philip Larkin once called “the importance of elsewhere.”
Those who deny the importance of elsewhere have withdrawn from the world, where the greatest challenges and threats must be confronted by a community of nations, with a genuine sense of obligation that transcends borders. Today, atmospheric carbon dioxide levels are at their highest point in 800,000 years. Oceanic acidification worsens each year. And according to the UN, there were almost 260 million international migrants in 2017, many fleeing war and oppression in Africa, the Middle East, and Asia.
As populist demagogues around the world exploit the churn of economic discontent, the danger is that the politics of engagement could give way to the politics of withdrawal. A successful cosmopolitanism must keep its eyes on matters near and far, promoting political systems that also work for localists. The Anywheres must extend their concern to the Somewheres. But forgetting that we are all citizens of the world—a small, warming, intensely vulnerable world—would be a reckless relaxation of vigilance. Elsewhere has never been more important.
A New Americanism
In 1986, the Pulitzer Prize–winning, bowtie-wearing Stanford historian Carl Degler delivered something other than the usual pipe-smoking, scotch-on-the-rocks, after-dinner disquisition that had plagued the evening program of the annual meeting of the American Historical Association for nearly all of its century-long history. Instead, Degler, a gentle and quietly heroic man, accused his colleagues of nothing short of dereliction of duty: appalled by nationalism, they had abandoned the study of the nation.
“We can write history that implicitly denies or ignores the nation-state, but it would be a history that flew in the face of what people who live in a nation-state require and demand,” Degler said that night in Chicago. He issued a warning: “If we historians fail to provide a nationally defined history, others less critical and less informed will take over the job for us.”
The nation-state was in decline, said the wise men of the time. The world had grown global. Why bother to study the nation? Nationalism, an infant in the nineteenth century, had become, in the first half of the twentieth, a monster. But in the second half, it was nearly dead—a stumbling, ghastly wraith, at least outside postcolonial states. And historians seemed to believe that if they stopped studying it, it would die sooner: starved, neglected, and abandoned.
Francis Fukuyama is a political scientist, not a historian. But his 1989 essay “The End of History?” illustrated Degler’s point. Fascism and communism were dead, Fukuyama announced at the end of the Cold War. Nationalism, the greatest remaining threat to liberalism, had been “defanged” in the West, and in other parts of the world where it was still kicking, well, that wasn’t quite nationalism. “The vast majority of the world’s nationalist movements do not have a political program beyond the negative desire of independence from some other group or people, and do not offer anything like a comprehensive agenda for socio-economic organization,” Fukuyama wrote. (Needless to say, he has since had to walk a lot of this back, writing in his most recent book about the “unexpected” populist nationalism of Russia’s Vladimir Putin, Poland’s Jaroslaw Kaczynski, Hungary’s Viktor Orban, Turkey’s Recep Tayyip Erdogan, the Philippines’ Rodrigo Duterte, and the United States’ Donald Trump.)
Fukuyama was hardly alone in pronouncing nationalism all but dead. A lot of other people had, too. That’s what worried Degler.
Nation-states, when they form, imagine a past. That, at least in part, accounts for why modern historical writing arose with the nation-state. For more than a century, the nation-state was the central object of historical inquiry. From George Bancroft in the 1830s through, say, Arthur Schlesinger, Jr., or Richard Hofstadter, studying American history meant studying the American nation. As the historian John Higham put it, “From the middle of the nineteenth century until the 1960s, the nation was the grand subject of American history.” Over that same stretch of time, the United States experienced a civil war, emancipation, reconstruction, segregation, two world wars, and unprecedented immigration—making the task even more essential. “A history in common is fundamental to sustaining the affiliation that constitutes national subjects,” the historian Thomas Bender once observed. “Nations are, among other things, a collective agreement, partly coerced, to affirm a common history as the basis for a shared future.”
[Photo: Officers of the American Historical Association at their annual meeting in Washington, D.C., December 1889]
But in the 1970s, studying the nation fell out of favor in the American historical profession. Most historians started looking at either smaller or bigger things, investigating the experiences and cultures of social groups or taking the broad vantage promised by global history. This turn produced excellent scholarship. But meanwhile, who was doing the work of providing a legible past and a plausible future—a nation—to the people who lived in the United States? Charlatans, stooges, and tyrants. The endurance of nationalism proves that there’s never any shortage of blackguards willing to prop up people’s sense of themselves and their destiny with a tissue of myths and prophecies, prejudices and hatreds, or to empty out old rubbish bags full of festering resentments and calls to violence. When historians abandon the study of the nation, when scholars stop trying to write a common history for a people, nationalism doesn’t die. Instead, it eats liberalism.
Maybe it’s too late to restore a common history, too late for historians to make a difference. But is there any option other than to try to craft a new American history—one that could foster a new Americanism?
THE NATION AND THE STATE
The United States is different from other nations—every nation is different from every other—and its nationalism is different, too. To review: a nation is a people with common origins, and a state is a political community governed by laws. A nation-state is a political community governed by laws that unites a people with a supposedly common ancestry. When nation-states arose out of city-states and kingdoms and empires, they explained themselves by telling stories about their origins—stories meant to suggest that everyone in, say, “the French nation” had common ancestors, when they of course did not. As I wrote in my book These Truths, “Very often, histories of nation-states are little more than myths that hide the seams that stitch the nation to the state.”
But in the American case, the origins of the nation can be found in those seams. When the United States declared its independence, in 1776, it became a state, but what made it a nation? The fiction that its people shared a common ancestry was absurd on its face; they came from all over, and, after having waged a war against Great Britain, just about the last thing they wanted to celebrate was their Britishness. Long after independence, most Americans saw the United States not as a nation but, true to the name, as a confederation of states. That’s what made arguing for ratification of the Constitution an uphill battle; it’s also why the Constitution’s advocates called themselves “Federalists,” when they were in fact nationalists, in the sense that they were proposing to replace a federal system, under the Articles of Confederation, with a national system. When John Jay insisted, in The Federalist Papers, no. 2, “that Providence has been pleased to give this one connected country to one united people—a people descended from the same ancestors, speaking the same language, professing the same religion, attached to the same principles of government, very similar in their manners and customs,” he was whistling in the dark.
It was the lack of these similarities that led Federalists such as Noah Webster to attempt to manufacture a national character by urging Americans to adopt distinctive spelling. “Language, as well as government should be national,” Webster wrote in 1789. “America should have her own distinct from all the world.” That got the United States “favor” instead of “favour.” It did not, however, make the United States a nation. And by 1828, when Webster published his monumental American Dictionary of the English Language, he did not include the word “nationalism,” which had no meaning or currency in the United States in the 1820s. Not until the 1840s, when European nations were swept up in what has been called “the age of nationalities,” did Americans come to think of themselves as belonging to a nation, with a destiny.
This course of events is so unusual, in the matter of nation building, that the historian David Armitage has suggested that the United States is something other than a nation-state. “What we mean by nationalism is the desire of nations (however defined) to possess states to create the peculiar hybrid we call the nation-state,” Armitage writes, but “there’s also a beast we might call the state-nation, which arises when the state is formed before the development of any sense of national consciousness. The United States might be seen as a, perhaps the only, spectacular example of the latter”—not a nation-state but a state-nation.
One way to turn a state into a nation is to write its history. The first substantial history of the American nation, Bancroft’s ten-volume History of the United States, From the Discovery of the American Continent, was published between 1834 and 1874. Bancroft wasn’t only a historian; he was also a politician who served in the administrations of three U.S. presidents, including as secretary of war in the age of American continental expansion. An architect of manifest destiny, Bancroft wrote his history in an attempt to make the United States’ founding appear inevitable, its growth inexorable, and its history ancient. De-emphasizing its British inheritance, he celebrated the United States as a pluralistic and cosmopolitan nation, with ancestors all over the world:
The origin of the language we speak carries us to India; our religion is from Palestine; of the hymns sung in our churches, some were first heard in Italy, some in the deserts of Arabia, some on the banks of the Euphrates; our arts come from Greece; our jurisprudence from Rome.
Nineteenth-century nationalism was liberal, a product of the Enlightenment. It rested on an analogy between the individual and the collective. As the American theorist of nationalism Hans Kohn once wrote, “The concept of national self-determination—transferring the ideal of liberty from the individual to the organic collectivity—was raised as the banner of liberalism.”
Liberal nationalism, as an idea, is fundamentally historical. Nineteenth-century Americans understood the nation-state within the context of an emerging set of ideas about human rights: namely, that the power of the state guaranteed everyone eligible for citizenship the same set of irrevocable political rights. The future Massachusetts senator Charles Sumner offered this interpretation in 1849:
Here is the Great Charter of every human being drawing vital breath upon this soil, whatever may be his condition, and whoever may be his parents. He may be poor, weak, humble, or black,—he may be of Caucasian, Jewish, Indian, or Ethiopian race,—he may be of French, German, English, or Irish extraction; but before the Constitution of Massachusetts all these distinctions disappear. . . . He is a MAN, the equal of all his fellow-men. He is one of the children of the State, which, like an impartial parent, regards all of its offspring with an equal care.
Or as the Prussian-born American political philosopher Francis Lieber, a great influence on Sumner, wrote, “Without a national character, states cannot obtain that longevity and continuity of political society which is necessary for our progress.” Lieber’s most influential essay, “Nationalism: A Fragment of Political Science,” appeared in 1860, on the very eve of the Civil War.
THE UNION AND THE CONFEDERACY
The American Civil War was a struggle over two competing ideas of the nation-state. This struggle has never ended; it has just moved around.
In the antebellum United States, Northerners, and especially northern abolitionists, drew a contrast between (northern) nationalism and (southern) sectionalism. “We must cultivate a national, instead of a sectional patriotism,” urged one Michigan congressman in 1850. But Southerners were nationalists, too. It’s just that their nationalism was what would now be termed “illiberal” or “ethnic,” as opposed to the Northerners’ liberal or civic nationalism. This distinction has been subjected to much criticism, on the grounds that it’s nothing more than a way of calling one kind of nationalism good and another bad. But the nationalism of the North and that of the South were in fact different, and much of U.S. history has been a battle between them.
“Ours is the government of the white man,” the American statesman John C. Calhoun declared in 1848, arguing against admitting Mexicans as citizens of the United States. “This Government was made by our fathers on the white basis,” the American politician Stephen Douglas said in 1858. “It was made by white men for the benefit of white men and their posterity forever.”
Abraham Lincoln, building on arguments made by black abolitionists, exposed Douglas’ history as fiction. “I believe the entire records of the world, from the date of the Declaration of Independence up to within three years ago, may be searched in vain for one single affirmation, from one single man, that the negro was not included in the Declaration of Independence,” Lincoln said during a debate with Douglas in Galesburg, Illinois, in 1858. He continued:
I think I may defy Judge Douglas to show that he ever said so, that Washington ever said so, that any President ever said so, that any member of Congress ever said so, or that any living man upon the whole earth ever said so, until the necessities of the present policy of the Democratic party, in regard to slavery, had to invent that affirmation.
No matter, the founders of the Confederacy answered: we will craft a new constitution, based on white supremacy. In 1861, the Confederacy’s newly elected vice president, Alexander Stephens, delivered a speech in Savannah in which he explained that the ideas that lay behind the U.S. Constitution “rested upon the assumption of the equality of races”—here ceding Lincoln’s argument—but that “our new government is founded upon exactly the opposite ideas; its foundations are laid, its cornerstone rests, upon the great truth that the negro is not equal to the white man; that slavery is his natural and moral condition.”
The North won the war. But the battle between liberal and illiberal nationalism raged on, especially during the debates over the 14th and 15th Amendments, which marked a second founding of the United States on terms set by liberal ideas about the rights of citizens and the powers of nation-states—namely, birthright citizenship, equal rights, universal (male) suffrage, and legal protections for noncitizens. These Reconstruction-era amendments also led to debates over immigration, racial and gender equality, and the limits of citizenship. Under the terms of the 14th Amendment, children of Chinese immigrants born in the United States would be U.S. citizens. Few major political figures talked about Chinese immigrants in favorable terms. Typical was the virulent prejudice expressed by William Higby, a one-time miner and Republican congressman from California. “The Chinese are nothing but a pagan race,” Higby said in 1866. “You cannot make good citizens of them.” And opponents of the 15th Amendment found both African American voting and Chinese citizenship scandalous. Fumed Garrett Davis, a Democratic senator from Kentucky: “I want no negro government; I want no Mongolian government; I want the government of the white man which our fathers incorporated.”
The most significant statement in this debate was made by a man born into slavery who had sought his own freedom and fought for decades for emancipation, citizenship, and equal rights. In 1869, in front of audiences across the country, Frederick Douglass delivered one of the most important and least read speeches in American political history, urging the ratification of the 14th and 15th Amendments in the spirit of establishing a “composite nation.” He spoke, he said, “to the question of whether we are the better or the worse for being composed of different races of men.” If nations, which are essential for progress, form from similarity, what of nations like the United States, which are formed out of difference, Native American, African, European, Asian, and every possible mixture, “the most conspicuous example of composite nationality in the world”?
[Photo: A statue of Frederick Douglass pictured behind U.S. President Barack Obama at a ceremony commemorating the 150th anniversary of the 13th Amendment in Washington, D.C., December 2015]
To Republicans like Higby, who objected to Chinese immigration and to birthright citizenship, and to Democrats like Davis, who objected to citizenship and voting rights for anyone other than white men, Douglass offered an impassioned reply. As for the Chinese: “Do you ask, if I would favor such immigration? I answer, I would. Would you have them naturalized, and have them invested with all the rights of American citizenship? I would. Would you allow them to vote? I would.” As for future generations, and future immigrants to the United States, Douglass said, “I want a home here not only for the negro, the mulatto and the Latin races; but I want the Asiatic to find a home here in the United States, and feel at home here, both for his sake and for ours.” For Douglass, progress could only come in this new form of a nation, the composite nation. “We shall spread the network of our science and civilization over all who seek their shelter, whether from Asia, Africa, or the Isles of the sea,” he said, and “all shall here bow to the same law, speak the same language, support the same Government, enjoy the same liberty, vibrate with the same national enthusiasm, and seek the same national ends.” That was Douglass’ new Americanism. It did not prevail.
Emancipation and Reconstruction, the historian and civil rights activist W. E. B. Du Bois would write in 1935, was “the finest effort to achieve democracy . . . this world had ever seen.” But that effort had been betrayed by white Northerners and white Southerners who patched the United States back together by inventing a myth that the war was not a fight over slavery at all but merely a struggle between the nation and the states. “We fell under the leadership of those who would compromise with truth in the past in order to make peace in the present,” Du Bois wrote bitterly. Douglass’ new Americanism was thus forgotten. So was Du Bois’ reckoning with American history.
NATIONAL HISTORIES
The American Historical Association was founded in 1884—two years after the French philosopher Ernest Renan wrote his signal essay, “What Is a Nation?” Nationalism was taking a turn, away from liberalism and toward illiberalism, including in Germany, beginning with the “blood and iron” of Bismarck. A driver of this change was the emergence of mass politics, under whose terms nation-states “depended on the participation of the ordinary citizen to an extent not previously envisaged,” as the historian Eric Hobsbawm once wrote. That “placed the question of the ‘nation,’ and the citizen’s feelings towards whatever he regarded as his ‘nation,’ ‘nationality’ or other centre of loyalty, at the top of the political agenda.”
This transformation began in the United States in the 1880s, with the rise of Jim Crow laws, and with a regime of immigration restriction, starting with the Chinese Exclusion Act, the first federal law restricting immigration, which was passed in 1882. Both betrayed the promises and constitutional guarantees made by the 14th and 15th Amendments. Fighting to realize that promise would be the work of standard-bearers who included Ida B. Wells, who led a campaign against lynching, and Wong Chin Foo, who founded the Chinese Equal Rights League in 1892, insisting, “We claim a common manhood with all other nationalities.”
But the white men who delivered speeches at the annual meetings of the American Historical Association during those years had little interest in discussing racial segregation, the disenfranchisement of black men, or immigration restriction. Frederick Jackson Turner drew historians’ attention to the frontier. Others contemplated the challenges of populism and socialism. Progressive-era historians explained the American nation as a product of conflict “between democracy and privilege, the poor versus the rich, the farmers against the monopolists, the workers against the corporations, and, at times, the Free-Soilers against the slaveholders,” as Degler observed. And a great many association presidents, notably Woodrow Wilson, mourned what had come to be called “the Lost Cause of the Confederacy.” All offered national histories that left out the origins and endurance of racial inequality.
Meanwhile, nationalism changed, beginning in the 1910s and especially in the 1930s. And the uglier and more illiberal nationalism got, the more liberals became convinced of the impossibility of liberal nationalism. In the United States, nationalism largely took the form of economic protectionism and isolationism. In 1917, the publishing magnate William Randolph Hearst, opposing U.S. involvement in World War I, began calling for “America first,” and he took the same position in 1938, insisting that “Americans should maintain the traditional policy of our great and independent nation—great largely because it is independent.”
In the years before the United States entered World War II, a fringe even supported Hitler; Charles Coughlin—a priest, near-presidential candidate, and wildly popular broadcaster—took to the radio to preach anti-Semitism and admiration for Hitler and the Nazi Party and called on his audience to form a new political party, the Christian Front. In 1939, about 20,000 Americans, some dressed in Nazi uniforms, gathered in Madison Square Garden, decorated with swastikas and American flags, with posters declaring a “Mass Demonstration for True Americanism,” where they denounced the New Deal as the “Jew Deal.” Hitler, for his part, expressed admiration for the Confederacy and regret that “the beginnings of a great new social order based on the principle of slavery and inequality were destroyed by the war.” As one arm of a campaign to widen divisions in the United States and weaken American resolve, Nazi propaganda distributed in the Jim Crow South called for the repeal of the 14th and 15th Amendments.
The “America first” supporter Charles Lindbergh, who, not irrelevantly, had become famous by flying across the Atlantic alone, based his nationalism on geography. “One need only glance at a map to see where our true frontiers lie,” he said in 1939. “What more could we ask than the Atlantic Ocean on the east and the Pacific on the west?” (This President Franklin Roosevelt answered in 1940, declaring the dream that the United States was “a lone island,” to be, in fact, a nightmare, “the nightmare of a people lodged in prison, handcuffed, hungry, and fed through the bars from day to day by the contemptuous, unpitying masters of other continents.”)
In the wake of World War II, American historians wrote the history of the United States as a story of consensus, an unvarying “liberal tradition in America,” according to the political scientist Louis Hartz, that appeared to stretch forward in time into an unvarying liberal future. Schlesinger, writing in 1949, argued that liberals occupied “the vital center” of American politics. These historians had plenty of blind spots—they were especially blind to the forces of conservatism and fundamentalism—but they nevertheless offered an expansive, liberal account of the history of the American nation and the American people.
The last, best single-volume popular history of the United States written in the twentieth century was Degler’s 1959 book, Out of Our Past: The Forces That Shaped Modern America: a stunning, sweeping account that, greatly influenced by Du Bois, placed race, slavery, segregation, and civil rights at the center of the story, alongside liberty, rights, revolution, freedom, and equality. Astonishingly, it was Degler’s first book. It was also the last of its kind.
THE DECLINE OF NATIONAL HISTORY
If love of the nation is what drove American historians to the study of the past in the nineteenth century, hatred for nationalism drove American historians away from it in the second half of the twentieth century.
It had long been clear that nationalism was a contrivance, an artifice, a fiction. After World War II, while U.S. President Harry Truman was helping establish what came to be called “the liberal international order,” internationalists began predicting the end of the nation-state, with the Harvard political scientist Rupert Emerson declaring that “the nation and the nation-state are anachronisms in the atomic age.” By the 1960s, nationalism looked rather worse than an anachronism. Meanwhile, with the coming of the Vietnam War, American historians stopped studying the nation-state in part out of a fear of complicity with atrocities of U.S. foreign policy and regimes of political oppression at home. “The professional practice of history writing and teaching flourished as the handmaiden of nation-making; the nation provided both support and an appreciative audience,” Bender observed in Rethinking American History in a Global Age in 2002. “Only recently,” he continued, “and because of the uncertain status of the nation-state has it been recognized that history as a professional discipline is part of its own substantive narrative and not at all sufficiently self-conscious about the implications of that circularity.” Since then, historians have only become more self-conscious, to the point of paralysis. If nationalism was a pathology, the thinking went, the writing of national histories was one of its symptoms, just another form of mythmaking.
Something else was going on, too. Beginning in the 1960s, women and people of color entered the historical profession and wrote new, rich, revolutionary histories, asking different questions and drawing different conclusions. Historical scholarship exploded, and got immeasurably richer and more sophisticated. In a there-goes-the-neighborhood moment, many older historians questioned the value of this scholarship. Degler did not; instead, he contributed to it. Most historians who wrote about race were not white and most historians who wrote about women were not men, but Degler, a white man, was one of two male co-founders of the National Organization for Women and won a Pulitzer in 1972 for a book called Neither Black nor White. Still, he shared the concern expressed by Higham that most new American historical scholarship was “not about the United States but merely in the United States.”
By 1986, when Degler rose from his chair to deliver his address before the American Historical Association, a lot of historians in the United States had begun advocating a kind of historical cosmopolitanism, writing global rather than national history. Degler didn’t have much patience for this. A few years later, after the onset of civil war in Bosnia, the political philosopher Michael Walzer grimly announced that “the tribes have returned.” They had never left. They’d only become harder for historians to see, because they weren’t really looking anymore.
A NEW AMERICAN HISTORY
Writing national history creates plenty of problems. But not writing national history creates more problems, and these problems are worse.
What would a new Americanism and a new American history look like? They might look rather a lot like the composite nationalism imagined by Douglass and the clear-eyed histories written by Du Bois. They might take as their starting point the description of the American experiment and its challenges offered by Douglass in 1869:
A Government founded upon justice, and recognizing the equal rights of all men; claiming no higher authority for existence, or sanction for its laws, than nature, reason, and the regularly ascertained will of the people; steadily refusing to put its sword and purse in the service of any religious creed or family, is a standing offense to most of the Governments of the world, and to some narrow and bigoted people among ourselves.
At the close of the Cold War, some commentators concluded that the American experiment had ended in triumph, that the United States had become all the world. But the American experiment had not in fact ended. A nation founded on revolution and universal rights will forever struggle against chaos and the forces of particularism. A nation born in contradiction will forever fight over the meaning of its history. But that doesn’t mean history is meaningless, or that anyone can afford to sit out the fight.
“The history of the United States at the present time does not seek to answer any significant questions,” Degler told his audience some three decades ago. If American historians don’t start asking and answering those sorts of questions, other people will, he warned. They’ll echo Calhoun and Douglas and Father Coughlin. They’ll lament “American carnage.” They’ll call immigrants “animals” and other states “shithole countries.” They’ll adopt the slogan “America first.” They’ll say they can “make America great again.” They’ll call themselves “nationalists.” Their history will be a fiction. They will say that they alone love this country. They will be wrong.
CORRECTION APPENDED (February 26, 2019)
An earlier version of this article misidentified the U.S. president who began building the liberal international order after World War II. It was Harry Truman, not Franklin Roosevelt.
Israel and the Post-American Middle East
Was the feud between U.S. President Barack Obama and Israeli Prime Minister Benjamin Netanyahu, first over settlements and then over Iran, a watershed? Netanyahu, it is claimed, turned U.S. support of Israel into a partisan issue. Liberals, including many American Jews, are said to be fed up with Israel’s “occupation,” which will mark its 50th anniversary next year. The weakening of Israel’s democratic ethos is supposedly undercutting the “shared values” argument for the relationship. Some say Israel’s dogged adherence to an “unsus­tainable” status quo in the West Bank has made it a liability in a region in the throes of change. Israel, it is claimed, is slipping into pariah status, imposed by the global movement for Boycott, Divestment, and Sanctions (BDS).
Biblical-style lamentations over Israel’s final corruption have been a staple of the state’s critics and die-hard anti-Zionists for 70 years. Never have they been so detached from reality. Of course, Israel has changed—decidedly for the better. By every measure, Israel is more globalized, prosperous, and democratic than at any time in its history. As nearby parts of the Middle East slip under waves of ruthless sectarian strife, Israel’s minorities rest secure. As Europe staggers under the weight of unwanted Muslim migrants, Israel welcomes thousands of Jewish immigrants from Europe. As other Mediterranean countries struggle with debt and unemployment, Israel boasts a growing economy, supported by waves of foreign investment.
Politically, Netanyahu’s tenure has been Israel’s least tumultuous. Netanyahu has served longer than any other Israeli prime minister except David Ben-Gurion, yet he has led Israel in only one ground war: the limited Operation Protective Edge in Gaza in 2014. “I’d feel better if our partner was not the trigger-happy Netanyahu,” wrote the New York Times columnist Maureen Dowd four years ago. But Netanyahu hasn’t pulled triggers, even against Iran. The Israeli electorate keeps returning him to office precisely because he is risk averse: no needless wars, but no ambitious peace plans either. Although this may produce “overwhelming frustration” in Obama’s White House, in Vice President Joe Biden’s scolding phrase, it suits the majority of Israeli Jews just fine.
Netanyahu’s endurance fuels the frustration of Israel’s diminished left, too: thwarted at the ballot box, they comfort themselves with a false notion that Israel’s democracy is endangered. The right made similar claims 20 years ago, culminating in the assassination of Prime Minister Yitzhak Rabin. Anti-democratic forces exist in all democracies, but in Israel, they are either outside the system or confined in smaller parties, Jewish and Arab alike. There is no mechanism by which an outlier could capture one of the main political parties in a populist upsurge, as now seems likely in the United States. Under comparable pressures of terrorism and war, even old democracies have wavered, but Israel’s record of fair, free elections testifies to the depth of its homegrown democratic ethos, reinforced by a vigorous press and a vigilant judiciary.
Israel is also more secure than ever. In 1948, only 700,000 Jews faced the daunting challenge of winning independence against the arrayed armies of the Arab world. Ben-Gurion’s top commanders warned him that Israel had only a 50-50 chance of victory. Today, there are over six million Israeli Jews, and Israel is among the world’s most formidable military powers. It has a qualitative edge over any imaginable combination of enemies, and the ongoing digitalization of warfare has played precisely to Israel’s strengths. The Arab states have dropped out of the competition, leaving the field to die-hard Islamists on Israel’s borders. They champion “resistance,” but their primitive rocketry and tunnel digging are ineffective. The only credible threat to a viable Israel would be a nuclear Iran. No one doubts that if Iran ever breaks out, Israel could deploy its own nuclear deterrent, independent of any constraining alliance.
And what of the Palestinians? There is no near solution to this enduring conflict, but Israel has been adept at containing its effects. There is occupied territory, but there is also unoccupied territory. Israel maintains an over-the-horizon security footprint in most of the West Bank; Israeli-Palestinian security cooperation fills in most of the gaps. The Palestinian Authority, in the words of one wag, has become a “mini-Jordan,” buttressed by a combination of foreign aid, economic growth, and the usual corruption. By the standards of today’s Middle East, the Israeli-Palestinian conflict remains stable. It is prosecuted mostly at a distance, through maneuvering in international bodies and campaigns for and against BDS. These are high-decibel, low-impact confrontations. Yossi Vardi, Israel’s most famous high-tech entrepreneur, summarizes the mainstream Israeli view: “I’m not at all concerned about the economic effect of BDS. We have been subject to boycotts before.” And they were much worse.
Every political party in Israel has its own preferred solution to the conflict, but no solution offers an unequivocal advantage over the status quo. “The occupation as it is now can last forever, and it is better than any alternative”—this opinion, issued in April by Benny Ziffer, the literary editor of the liberal, left-wing Haaretz, summarizes the present Israeli consensus. It is debatable whether the two-state option has expired. But the reality on the ground doesn’t resemble one state either. Half a century after the 1967 war, only five percent of Israelis live in West Bank settlements, and half of them live in the five blocs that would be retained by Israel in any two-state scenario.
In the meantime, Egypt, Jordan, Saudi Arabia, and the United Arab Emirates are all shaking hands with Israel, sometimes before the cameras. Israel and Russia are assiduously courting each other; still farther afield, Israel’s relations with China and India are booming. The genuine pariah of the Middle East is the Syrian regime, which never deigned to make peace with Israel. This last so-called steadfast Arab state is consumed from within by a great bloodbath; its nuclear project and massive stocks of chemical weapons are a distant memory.
Israel faces all manner of potential threats and challenges, but never has it been more thoroughly prepared to meet them. The notion popular among some Israeli pundits that their compatriots live in a perpetual state of paralyzing fear misleads both Israel’s allies and its adversaries. Israel’s leaders are cautious but confident, not easily panicked, and practiced in the very long game that everyone plays in the Middle East. Nothing leaves them so unmoved as the vacuous mantra that the status quo is unsustainable. Israel’s survival has always depended on its willingness to sustain the status quo that it has created, driving its adversaries to resignation—and compromise. This is more an art than a science, but such resolve has served Israel well over time.
THE SUPERPOWER RETREATS
Still, there is a looming cloud on Israel’s horizon. It isn’t Iran’s delayed nukes, academe’s threats of boycott, or Palestinian maneuvers at the UN. It is a huge power vacuum. The United States, after a wildly erratic spree of misadventures, is backing out of the region. It is cutting its exposure to a Middle East that has consistently defied American expectations and denied successive American presidents the “mission accomplished” moments they crave. The disengagement began before Obama entered the White House, but he has accelerated it, coming to see the Middle East as a region to be avoided because it “could not be fixed—not on his watch, and not for a generation to come.” (This was the bottom-line impression of the journalist Jeffrey Goldberg, to whom Obama granted his legacy interview on foreign policy.)
If history is precedent, this is more than a pivot. Over the last century, the Turks, the British, the French, and the Russians each had their moment in the Middle East, but prolonging it proved costly as their power ebbed. They gave up the pursuit of dominance and settled for influence. A decade ago, in the pages of this magazine, Richard Haass, the president of the Council on Foreign Relations, predicted that the United States had reached just this point: “The American era in the Middle East,” he announced, “. . . has ended.” He went on: “The United States will continue to enjoy more influence in the region than any other outside power, but its influence will be reduced from what it once was.” That was a debatable proposition in 2006; now in 2016, Obama has made it indisputable.
There are several ways to make a retreat seem other than it is. The Obama administration’s tack has been to create the illusion of a stable equilibrium, by cutting the United States’ commitments to its allies and mollifying its adversaries. And so, suddenly, none of the United States’ traditional friends is good enough to justify its full confidence. The great power must conceal its own weariness, so it pretends to be frustrated by the inconstancy of “free riders.” The resulting complaints about Israel (as well as Egypt and Saudi Arabia) serve just such a narrative.
Israel’s leaders aren’t shy about warning against the consequences of this posture, but they are careful not to think out loud about Israeli options in a post-American Middle East. Israel wants a new memorandum of understanding with the United States, the bigger the better, as compensation for the Iran nuclear deal. It is in Israel’s interest to emphasize the importance of the U.S.-Israeli relationship as the bedrock of regional stability going forward.
But how far forward is another question. Even as Israel seeks to deepen the United States’ commitment in the short term, it knows that the unshakable bond won’t last in perpetuity. This is a lesson of history. The leaders of the Zionist movement always sought to ally their project with the dominant power of the day, but they had lived through too much European history to think that great power is ever abiding. In the twentieth century, they witnessed the collapse of old empires and the rise of new ones, each staking its claim to the Middle East in turn, each making promises and then rescinding them. When the United States’ turn came, the emerging superpower didn’t rush to embrace the Jews. They were alone during the 1930s, when the gates of the United States were closed to them. They were alone during the Holocaust, when the United States awoke too late. They were alone in 1948, when the United States placed Israel under an arms embargo, and in 1967, when a U.S. president explicitly told the Israelis that if they went to war, they would be alone.
After 1967, Israel nestled in the Pax Americana. The subsequent decades of the “special relationship” have so deepened Israel’s dependence on the United States in the military realm that many Israelis can no longer remember how Israel managed to survive without all that U.S. hardware. Israel’s own armies of supporters in the United States, especially in the Jewish community, reinforce this mindset as they assure themselves that were it not for their lobbying efforts in Washington, Israel would be in mortal peril.
But the Obama administration has given Israelis a preview of just how the unshakable bond is likely to be shaken. This prospect might seem alarming to Israel’s supporters, but the inevitable turn of the wheel was precisely the reason Zionist Jews sought sovereign independence in the first place. An independent Israel is a guarantee against the day when the Jews will again find themselves alone, and it is an operating premise of Israeli strategic thought that such a day will come.
ISRAEL ALONE
This conviction, far from paralyzing Israel, propels it to expand its options, diversify its relationships, and build its independent capabilities. The Middle East of the next 50 years will be different from that of the last 100. There will be no hegemony-seeking outside powers. The costs of pursuing full-spectrum dominance are too high; the rewards are too few. Outside powers will pursue specific goals, related to oil or terrorism. But large swaths of the Middle East will be left to their fate, to dissolve and re-form in unpredictable ways. Israel may be asked by weaker neighbors to extend its security net to include them, as it has done for decades for Jordan. Arab concern about Iran is already doing more to normalize Israel in the region than the ever-elusive and ever-inconclusive peace process. Israel, once the fulcrum of regional conflict, will loom like a pillar of regional stability—not only for its own people but also for its neighbors, threatened by a rising tide of political fragmentation, economic contraction, radical Islam, and sectarian hatred.
So Israel is planning to outlast the United States in the Middle East. Israelis roll their eyes when the United States insinuates that it best understands Israel’s genuine long-term interests, which Israel is supposedly too traumatized or confused to discern. Although Israel has made plenty of tactical mistakes, it is hard to argue that its strategy has been anything but a success. And given the wobbly record of the United States in achieving or even defining its interests in the Middle East, it is hard to say the same about U.S. strategy. The Obama administration has placed its bet on the Iran deal, but even the deal’s most ardent advocates no longer claim to see the “arc of history” in the Middle East. In the face of the collapse of the Arab Spring, the Syrian dead, the millions of refugees, and the rise of the Islamic State, or ISIS, who can say in which direction the arc points? Or where the Iran deal will lead?
One other common American mantra deserves to be shelved. “Precisely because of our friendship,” said Obama five years ago, “it is important that we tell the truth: the status quo is unsustainable, and Israel too must act boldly to advance a lasting peace.” It is time for the United States to abandon this mantra, or at least modify it. Only if Israel’s adversaries conclude that Israel can sustain the status quo indefinitely—Israel’s military supremacy, its economic advantage, and, yes, its occupation—is there any hope that they will reconcile themselves to Israel’s existence as a Jewish state. Statements like Obama’s don’t sway Israel’s government, which knows better, but they do fuel Arab and Iranian rejection of Israel among those who believe that the United States no longer has Israel’s back. For Israel’s enemies, drawing the conclusion that Israel is thus weak would be a tragic mistake: Israel is well positioned to sustain the status quo all by itself. Its long-term strategy is predicated on it.
A new U.S. administration will offer an opportunity to revisit U.S. policy, or at least U.S. rhetoric. One of the candidates, Hillary Clinton, made a statement as secretary of state in Jerusalem in 2010 that came closer to reality and practicality. “The status quo is unsustainable,” she said, echoing the usual line. But she added this: “Now, that doesn’t mean that it can’t be sustained for a year or a decade, or two or three, but fundamentally, the status quo is unsustainable.” Translation: the status quo may not be optimal, but it is sustainable, for as long as it takes.
As the United States steps back from the Middle East, this is the message Washington should send if it wants to assist Israel and other U.S. allies in filling the vacuum it will leave behind.
Blood for Soil
Since the French Revolution, nationalism—the idea that state borders should coincide with national communities—has constituted the core source of political legitimacy around the world. As nationalism spread from western Europe in the early nineteenth century, it became increasingly ethnic in nature. In places where the state and the nation did not match up, such as Germany, Italy, and most of eastern Europe, the nation tended to be defined in terms of ethnicity, which led to violent processes of unification or secession. At the beginning of the twentieth century, ethnic nationalism came to disrupt political borders even more, leading to the breakup of multiethnic empires, including the Habsburg, Ottoman, and Russian ones. By changing the size of Europe’s political units, this undermined the balance of power and contributed to two world wars.
But then came the liberal norms and institutions established in the wake of World War II. Principles such as territorial integrity and universal human rights and bodies such as the United Nations managed to reduce ethnonationalist conflict in most parts of the world. Today, large interstate wars and violent land grabs are almost entirely a thing of the past. The rate of ethnic civil war has fallen, too.
But now, ethnic nationalism is back with a vengeance. In 2016, British voters chose to leave the EU out of a belief that the postnational vision of that body undermined British sovereignty and threatened to overwhelm the United Kingdom with immigrants from Africa, the Middle East, and the less developed parts of Europe. Donald Trump won the White House that same year by tapping into fears that the United States was being invaded by Mexicans and Muslims. And in office, Trump has not only fanned the flames of ethnic nationalism; he has also denigrated and damaged the norms and institutions designed to save humankind from such forces.
Other leaders around the world have eagerly embraced their own versions of ethnic nationalism. Across Europe, right-wing populist parties that oppose the EU and immigration have gained greater electoral shares. In Austria, Hungary, Italy, Norway, and Poland, among others, they even hold executive power. The brunt of ethnic nationalism has fallen on migrants and other foreigners, but long-established ethnic minorities have been on the receiving end of this wave, too, as illustrated by the resurgence of anti-Semitism in Hungary and growing discrimination against the Roma in Italy. Brazil, India, Russia, and Turkey, once some of the most promising emerging democracies, have increasingly rejected liberal values. They are defining their governing ideology in narrowly ethnic terms and giving militants more room to attack those who do not belong to the dominant ethnic group. Ethnic nationalism now exerts more influence than it has at any point since World War II.
That fact has been bemoaned for all sorts of reasons, from the uptick in hate crimes against immigrants it has caused to the damage it has done to the post–World War II order. Yet the scariest thing about today’s ethnic nationalism is that it could bring a return to the ills that accompanied its past ascendance: major violent upheavals both within and among countries. Should ethnic nationalism continue its march, it risks fueling destabilizing civil unrest in multiethnic states around the world—and even violent border disputes that could reverse the long decline of interstate war. Politicians need to resist the electoral temptations of exclusionary politics at home and reconfirm their commitment to the norms and institutions of cooperation abroad. Those who toy with ethnic nationalism are playing with fire.
IT’S BACK
At the end of the Cold War, there were warning signs that ethnic conflict might return. But at the time, any fear of that actually happening seemed unwarranted. As the scholar Ted Robert Gurr pointed out in this magazine in 2000, despite the violence in the former Yugoslavia and in Rwanda, the frequency of ethnic conflict had actually decreased since the mid-1990s. Pointing to inclusive policies and pragmatic compromises that had prevented and resolved ethnic conflicts, he argued that the trend toward peace would continue. Gurr’s essay reflected the liberal optimism that characterized the decades after the Cold War. Globalization was transforming the world. Borders seemed to be withering away. The optimism was not simply fanciful, and today, ethnic conflict is far less common than it was three decades ago.
A big reason is that governments are increasingly accommodating minorities. That’s what the political scientists Kristian Gleditsch, Julian Wucherpfennig, and I concluded after analyzing a data set of ethnic relations that starts in 1993. We found that discrimination against ethnic groups and their exclusion from executive power—major drivers of conflict—are declining globally. With the exception of the Middle East, where minorities in Bahrain, Iraq, Israel, Saudi Arabia, and Syria continue to struggle for influence, ethnic groups are increasingly being included in power-sharing deals. Since World War II, the percentage of the world’s population that lives in countries engaging in some form of ethnic power sharing has grown from a quarter to roughly a half. Some groups have been granted autonomous rule—for example, the Acehnese in Indonesia and the indigenous Aymara and Quechua communities in Bolivia. The UN’s globe-spanning peacekeeping operations, meanwhile, are helping prevent the outbreak of new hostilities between old belligerents, and efforts to promote democracy are making governments more responsive to minorities and thus convincing such groups to settle their scores at the ballot box rather than on the battlefield.
Our data also show that the number of rebelling ethnic groups has increased only in the Middle East. Outside that region, the trend is moving in the opposite direction. In the mid-1990s, about three percent of the average country’s population was composed of groups that rebelled against the government; today, the share has fallen to roughly half of that. Moreover, based on a global comparison of the concessions made to various ethnic groups in terms of rights, autonomy, and power sharing, we found strong evidence that such moves have helped prevent new conflicts and end old ones. By and large, the post–Cold War efforts to stave off ethnic nationalism and prevent war appear to have worked relatively well.
Yet there have long been signs that it is too soon to declare victory over ethnic nationalism. Around the turn of the millennium, right-wing populist parties gained strength in Europe. In 2005, the treaty to establish an EU constitution was defeated by French and Dutch voters, suggesting that Europeans still cared greatly about national identity. In 2008, the financial crisis started to undermine confidence in globalization (and weakened the EU). The upheavals that rocked the Arab world beginning in late 2010, rather than marking an expansion of democracy, brought instability and strife.
Throughout the nineteenth and twentieth centuries, nationalism tended to appear in waves, and it is unlikely that the current one has finished washing over the world. Moreover, it comes at a time when the bulwarks against conflict appear to be giving way: democracies around the world are backsliding, and peacekeeping budgets are under renewed pressure. Ever since it first appeared, ethnic nationalism has had violent consequences. There is good reason to worry that the current surge will, too.
THE ROAD TO VIOLENCE
Rising ethnic nationalism leads to conflict in several different ways. The key variable, recent research has found, is access to power. When ethnic groups lack it, they are especially likely to seek it through violence. Oftentimes in multiethnic states, elites of a particular group come to dominate the government and exclude other, weaker groups, even if the leaders’ own group represents a minority of the country’s population. Such is the case in Syria, where President Bashar al-Assad, a member of the Alawite minority, a Shiite sect that composes 12 percent of the population, nominally runs a country that is 74 percent Sunni. That disparity has fueled widespread grievances among other ethnic groups and led to a civil war that has so far caused at least 400,000 deaths and triggered a wave of migration that has destabilized Europe. Most of the time, however, the groups struggling for power are minorities, such as the Tutsis, who launched a civil war in Rwanda in 1990, or the Sunnis in Iraq, who are still fighting to win a seat at the table there.
It’s not just a lack of political power that can motivate ethnic groups to take up arms under the banner of nationalism; economic, social, and cultural inequality can, too. Scholars have consistently found that inequality along ethnic lines increases the risk of rebellion. The economist Frances Stewart, for example, has shown that such inequality is much more likely to lead to violent conflict than inequality among individuals, because it is far easier to mobilize people along ethnic lines. Similarly, my own collaborative research has found that the risk of rebellion increases rapidly with economic inequality along ethnic lines; for example, the average Chechen is six times as poor as the average Russian, which translates into a tenfold increase in the propensity for rebellion.
These findings are not limited to ethnic groups caught in power struggles over the control of existing countries; they also apply to minorities seeking self-rule. States usually view such demands as anathema to their sovereignty, and so they often resist making even limited compromises with the groups issuing them. They are disinclined, for example, to grant them regional autonomy. This stubbornness, in turn, tends to radicalize the aggrieved minority, causing it to aim instead for full-fledged independence, often through violence. Look no further than the Catholics in Northern Ireland, the Basques in Spain, the Kurds in Iraq and Turkey, and several different ethnic groups in Myanmar.
A man walks past a mural in west Belfast, Northern Ireland, February 2017. (Toby Melville / Reuters)
Ethnic nationalism can cause conflict in another way, too: by leading to calls for territorial unity among a single ethnic group divided by international borders, which encourages rebels to rise up against their current states. After the breakup of Yugoslavia left ethnic Serbs stranded in several countries, their leader, Slobodan Milosevic, capitalized on the resulting resentment and advanced claims on territory in Croatia and Bosnia and Herzegovina. Frequently, nostalgia is invoked. Characterizing the collapse of the Soviet Union as “the greatest geopolitical catastrophe of the century,” Russian President Vladimir Putin has annexed Crimea and invaded eastern Ukraine and justified these moves by talking of the unification of the Russian nation. Turkish President Recep Tayyip Erdogan has drawn heavily on the past glory of the Ottoman Empire to extend his country’s influence far beyond its current borders. Hungarian Prime Minister Viktor Orban has similarly invoked the Habsburg empire, accepting Russian help to back Hungarian-minority militias inside Ukraine that advocate separatism.
Ethnic nationalism is most likely to lead to civil war, but it can also trigger interstate war by encouraging leaders to make the sorts of domestic appeals that can increase tensions with foreign countries. That dynamic has been at play in the disputes between Armenia and Azerbaijan, India and Pakistan, and Greece and Turkey. Researchers have found some evidence that political inequality along ethnic lines makes things worse: when ethnonationalist leaders believe that their kin communities in neighboring countries are being treated badly, they are more inclined to come to their rescue with military force.
What’s more, those ethnonationalist leaders are typically hostile to international organizations that favor minority rights, multiethnic governance, and compromise. In their eyes, calls for power sharing contradict their ethnic group’s rightful dominance. They view the protection of human rights and the rule of law, as well as humanitarian interventions, such as peacekeeping operations, as direct threats to their ethnonationalist agendas, and so they work to undermine them. Russia has explicitly sought to weaken international law and international institutions in order to create more room for its own project of occupation in Crimea. Israel has done the same thing in the service of its occupation of the West Bank. Trump, who has called for an end to U.S. sanctions on Russia and moved the U.S. embassy from Tel Aviv to Jerusalem, has actively backed these ethnonationalist impulses, further encouraging the erosion of the postwar consensus that put a cap on ethnic conflict.
If all of these are the risk factors for ethnic nationalism sliding into ethnic conflict, then where are they most prevalent today? Statistical analysis suggests that the ethnically diverse but still relatively peaceful countries most at risk of descending into violence are Ethiopia, Iran, Pakistan, and the Republic of the Congo. These are all developing countries with histories of conflict and where minorities face discrimination and exclusion from power.
The risk of conflict in the developed world is much lower, but even there, ethnic nationalism could well threaten peace. In Spain, the rise of the new right-wing populist party Vox has put pressure on two center-right parties, the People’s Party and Citizens, to become even less willing to compromise with Catalan nationalists, setting the stage for an enduring standoff that could turn violent if Madrid resorts to even harsher repressive measures. In Northern Ireland, Brexit could lead to the reimposition of customs checks on the border with the Republic of Ireland, a development that could destroy the agreement that has kept the peace since 1998. In eastern Europe, the return of ethnic nationalism threatens to reawaken so-called frozen conflicts, interstate disputes that were stopped in place first by the Soviet Union and then by the EU. Beyond the outbreak of new wars, the weakening of liberal pressures to share power and respect minority rights will likely embolden ethnonationalists to perpetuate ongoing conflicts—particularly the long-standing ones in Israel, Myanmar, and Turkey. Across the globe, after seven decades of steady progress toward peace, the trend could soon be reversed.
THE PATH TO PEACE
In order to head off such destructive consequences, it may be tempting to see ethnic nationalism as part of the solution rather than the problem. Instead of trying to resist such urges, the thinking goes, one should encourage them, since they are likely to bring political borders in line with national borders, thus eliminating the grievances at the root of the problem. Some scholars, such as Edward Luttwak, have even recommended that ethnic groups simply be allowed to fight it out, arguing that the short-term pain of war is worth the long-term benefit of the stability that comes when ethnic dominance replaces ethnic diversity. Yet as the case of Syria has shown, such harsh strategies tend to perpetuate resentment, not consolidate peace.
Others, such as the political scientist Chaim Kaufmann, contend that the best way to defuse ethnic conflict is to partition a state along ethnic lines and then transfer populations among the new political entities so that each group has its own territory. After World War II, for example, Western policymakers supported population transfers in the hopes that they would lead to, in the words of the historian Tony Judt, “a Europe of nation states more ethnically homogenous than ever before.” The problem with this option, however, is that even with large-scale ethnic cleansing—which tends to be both bloody and morally dubious—there is no guarantee that separation will create sufficiently neat dividing lines. If Catalonia broke free from Spain, for example, a new minority problem would crop up within Catalonia, since many non-Catalans would still live there.
Of course, where widespread violence and hatred have destroyed all potential for peaceful cohabitation, ethnic separation may well constitute the only viable solution. That’s why, for example, the two-state solution to the Israeli-Palestinian conflict still enjoys widespread support, at least outside Israel. Yet the problem remains that there are no clear criteria for just how violent and generally hopeless a situation needs to be to justify division. Without such a clear benchmark, secessionism could destabilize interstate borders around the world. Disgruntled groups and irredentist states the world over would have more cause to resort to arms to boost their influence.
Although there are good reasons to be skeptical of these radical solutions of ethnic separation, nationalism cannot be wished away. Despite the emergence of such organizations as the EU, supranational bodies are not going to replace nation-states anytime soon, because people still mostly identify with their nation, rather than with remote and unelected regional bodies. For the EU, for example, the problem is not the lack of stronger decision-making authority but the absence of pan-European solidarity of the type that would allow, say, Germans to see themselves as part of the same political community as Greeks. Thus, any hope of replacing the nation-state is bound to be futile in the near future.
CONTAINING NATIONALISM
Nationalism should therefore be contained, not abolished. And to truly contain ethnic nationalism, governments will have to address its deeper causes, not just its immediate effects. Both supply and demand—that is, the willingness of governments to implement ethnonationalist policies and the appetite for such policies among populations—will have to be decreased.
On the supply side, political elites need to reinstate the informal taboo against explicitly discriminatory appeals and policies. Ultimately, there is no place for the tolerance of intolerance. What is required is courage on the part of centrist politicians to fight bigotry and defend the basic principles of human decency. Multiethnic democracies will also have to take more forceful steps to resist foreign attempts to stoke grievances among their ethnic groups and sow domestic divisions, such as Russia’s interference campaign during the 2016 U.S. presidential election, when, for example, Kremlin-backed operatives masqueraded as Black Lives Matter activists on social media to stir up racial conflict.
Within international organizations, governments must defend core liberal values more strenuously. In the case of the EU, that means cutting the financial support for illiberal member states and perhaps even creating a new, truly liberal European organization with more stringent membership criteria. It also means doubling down on the promotion of inclusive practices such as power sharing. The UN and regional organizations, such as the EU and the African Union, have done much to encourage such solutions. A weakening of these organizations could also undermine the norms they are reinforcing. Inclusive practices tend to spread from state to state, but so do exclusive ones: just as it did in 1930s Europe, the commitment to power sharing and group rights has now started to slip in eastern Europe and in other parts of the world, including sub-Saharan Africa.
As for the demand side, ethnic nationalism tends to attract the most support from those who have been disadvantaged by globalization and laissez-faire capitalism. Populist demagogues have an easy time exploiting growing socioeconomic inequalities, especially those between states’ geographic centers and their peripheries, and they pin the blame on ethnically distinct immigrants or resident minorities. Part of the answer is to retool immigration policies so as to better integrate newcomers. Yet without policies that reduce inequality, populist appeals that depict out-groups as welfare sponges will only gain traction. So governments hoping to tamp down ethnic nationalism should set up programs that offer job training to the unemployed in depressed regions, and they should prevent the further hollowing out of welfare programs. Although the economic problems on which ethnic nationalism feeds are most acute in the United States and the United Kingdom, inequality has been increasing across western Europe, and many of the welfare states in the region have been hit hard by austerity policies.
Ultimately, however, the answer to ethnic nationalism goes beyond narrow economic fixes; political elites must argue explicitly for ethnic tolerance and supranational cooperation, portraying them as matters of basic human decency and security. In Europe, politicians have preferred to use the EU as a scapegoat for their own failings rather than point out its crucial contribution to peace. Setting aside the question of whether and how the EU should be reformed, European political elites would do well to address their own homemade problems of socioeconomic inequality and regional underdevelopment. They should stop pretending that draconian cuts to immigration levels will do the trick when it comes to countering populism and ethnic nationalism.
As the violent first half of the twentieth century recedes into history, it becomes harder and harder to invoke the specter of ethnic conflict. It would be tragic if memories of that past were forgotten. For what they suggest is that the journey from ethnic nationalism to ethnic war may not be so long, after all.
America in Decay
The creation of the U.S. Forest Service at the turn of the twentieth century was the premier example of American state building during the Progressive Era. Prior to the passage of the Pendleton Act in 1883, public offices in the United States had been allocated by political parties on the basis of patronage. The Forest Service, in contrast, was the prototype of a new model of merit-based bureaucracy. It was staffed with university-educated agronomists and foresters chosen on the basis of competence and technical expertise, and its defining struggle was the successful effort by its initial leader, Gifford Pinchot, to secure bureaucratic autonomy and escape routine interference by Congress. At the time, the idea that forestry professionals, rather than politicians, should manage public lands and handle the department’s staffing was revolutionary, but it was vindicated by the service’s impressive performance. Several major academic studies have treated its early decades as a classic case of successful public administration.
Today, however, many regard the Forest Service as a highly dysfunctional bureaucracy performing an outmoded mission with the wrong tools. It is still staffed by professional foresters, many highly dedicated to the agency’s mission, but it has lost a great deal of the autonomy it won under Pinchot. It operates under multiple and often contradictory mandates from Congress and the courts and costs taxpayers a substantial amount of money while achieving questionable aims. The service’s internal decision-making system is often gridlocked, and the high degree of staff morale and cohesion that Pinchot worked so hard to foster has been lost. These days, books are written arguing that the Forest Service ought to be abolished altogether. If the Forest Service’s creation exemplified the development of the modern American state, its decline exemplifies that state’s decay.
Civil service reform in the late nineteenth century was promoted by academics and activists such as Francis Lieber, Woodrow Wilson, and Frank Goodnow, who believed in the ability of modern natural science to solve human problems. Wilson, like his contemporary Max Weber, distinguished between politics and administration. Politics, he argued, was a domain of final ends, subject to democratic contestation, but administration was a realm of implementation, which could be studied empirically and subjected to scientific analysis.
The belief that public administration could be turned into a science now seems naive and misplaced. But back then, even in advanced countries, governments were run largely by political hacks or corrupt municipal bosses, so it was perfectly reasonable to demand that public officials be selected on the basis of education and merit rather than cronyism. The problem with scientific management is that even the most qualified scientists of the day occasionally get things wrong, and sometimes in a big way. And unfortunately, this is what happened to the Forest Service with regard to what ended up becoming one of its crucial missions, the fighting of forest fires.
Pinchot had created a high-quality agency devoted to one basic goal: managing the sustainable exploitation of forest resources. The Great Idaho Fire of 1910, however, burned some three million acres and killed at least 85 people, and the subsequent political outcry led the Forest Service to focus increasingly not just on timber harvesting but also on wildfire suppression. Yet the early proponents of scientific forestry didn’t properly understand the role of fires in woodland ecology. Forest fires are a natural occurrence and serve an important function in maintaining the health of western forests. Shade-intolerant trees, such as ponderosa pines, lodgepole pines, and giant sequoias, require periodic fires to clear areas in which they can regenerate, and once fires were suppressed, these trees were invaded by species such as the Douglas fir. (Lodgepole pines actually require fires to propagate their seeds.) Over the years, many American forests developed high tree densities and huge buildups of dry understory, so that when fires did occur, they became much larger and more destructive.
After catastrophes such as the huge Yellowstone fires in 1988, which ended up burning nearly 800,000 acres in the park and took several months to control, the public began to take notice. Ecologists began criticizing the very objective of fire prevention, and in the mid-1990s, the Forest Service reversed course and officially adopted a “let burn” approach. But years of misguided policies could not simply be erased, since so many forests had become gigantic tinderboxes.
As a result of population growth in the American West, moreover, in the later decades of the twentieth century, many more people began living in areas vulnerable to wildfires. Like people who choose to live on floodplains or barrier islands, these individuals were exposing themselves to undue risks, risks mitigated by what was essentially government-subsidized insurance. Through their elected representatives, they lobbied hard to make sure the Forest Service and other federal agencies responsible for forest management were given the resources to continue fighting fires that could threaten their property. Under these circumstances, rational cost-benefit analysis proved difficult, and rather than try to justify a decision not to act, the government could easily end up spending $1 million to protect a $100,000 home.
Mission on the move: fighting flames near Camp Mather, California, August 2013.
While all this was going on, the original mission of the Forest Service was eroding. Timber harvests in national forests, for example, plunged, from roughly 11 billion to roughly three billion board feet per year in the 1990s alone. This was due partly to the changing economics of the timber industry, but it was also due to a change in national values. With the rise of environmental consciousness, natural forests were increasingly seen as havens to be protected for their own sake, not economic resources to be exploited. And even in terms of economic exploitation, the Forest Service had not been doing a good job. Timber was being marketed at well below the costs of operations; the agency’s timber pricing was inefficient; and as with all government agencies, the Forest Service had an incentive to increase its costs rather than contain them.
The Forest Service’s performance deteriorated, in short, because it lost the autonomy it had gained under Pinchot. The problem began with the displacement of a single departmental mission by multiple and potentially conflicting ones. In the middle decades of the twentieth century, firefighting began to displace timber exploitation, but then firefighting itself became controversial and was displaced by conservation. None of the old missions was discarded, however, and each attracted outside interest groups that supported different departmental factions: consumers of timber, homeowners, real estate developers, environmentalists, aspiring firefighters, and so forth. Congress, meanwhile, which had been excluded from the micromanagement of land sales under Pinchot, reinserted itself by issuing various legislative mandates, forcing the Forest Service to pursue several different goals, some of them at odds with one another.
Thus, the small, cohesive agency created by Pinchot and celebrated by scholars slowly evolved into a large, Balkanized one. It became subject to many of the maladies affecting government agencies more generally: its officials came to be more interested in protecting their budgets and jobs than in the efficient performance of their mission. And they clung to old mandates even when both science and the society around them were changing.
The story of the U.S. Forest Service is not an isolated case but representative of a broader trend of political decay; public administration specialists have documented a steady deterioration in the overall quality of American government for more than a generation. In many ways, the U.S. bureaucracy has moved away from the Weberian ideal of an energetic and efficient organization staffed by people chosen for their ability and technical knowledge. The system as a whole is less merit-based: rather than coming from top schools, 45 percent of recent new hires to the federal service are veterans, as mandated by Congress. And a number of surveys of the federal work force paint a depressing picture. According to the scholar Paul Light, “Federal employees appear to be more motivated by compensation than mission, ensnared in careers that cannot compete with business and nonprofits, troubled by the lack of resources to do their jobs, dissatisfied with the rewards for a job well done and the lack of consequences for a job done poorly, and unwilling to trust their own organizations.”
WHY INSTITUTIONS DECAY
In his classic work Political Order in Changing Societies, the political scientist Samuel Huntington used the term “political decay” to explain political instability in many newly independent countries after World War II. Huntington argued that socioeconomic modernization caused problems for traditional political orders, leading to the mobilization of new social groups whose participation could not be accommodated by existing political institutions. Political decay was caused by the inability of institutions to adapt to changing circumstances. Decay was thus in many ways a condition of political development: the old had to break down in order to make way for the new. But the transitions could be extremely chaotic and violent, and there was no guarantee that the old political institutions would continuously and peacefully adapt to new conditions.
This model is a good starting point for a broader understanding of political decay more generally. Institutions are “stable, valued, recurring patterns of behavior,” as Huntington put it, the most important function of which is to facilitate collective action. Without some set of clear and relatively stable rules, human beings would have to renegotiate their interactions at every turn. Such rules are often culturally determined and vary across different societies and eras, but the capacity to create and adhere to them is genetically hard-wired into the human brain. A natural tendency to conformism helps give institutions inertia and is what has allowed human societies to achieve levels of social cooperation unmatched by any other animal species.
The very stability of institutions, however, is also the source of political decay. Institutions are created to meet the demands of specific circumstances, but then circumstances change and institutions fail to adapt. One reason is cognitive: people develop mental models of how the world works and tend to stick to them, even in the face of contradictory evidence. Another reason is group interest: institutions create favored classes of insiders who develop a stake in the status quo and resist pressures to reform.
In theory, democracy, and particularly the Madisonian version of democracy that was enshrined in the U.S. Constitution, should mitigate the problem of such insider capture by preventing the emergence of a dominant faction or elite that can use its political power to tyrannize over the country. It does so by spreading power among a series of competing branches of government and allowing for competition among different interests across a large and diverse country.
But Madisonian democracy frequently fails to perform as advertised. Elite insiders typically have superior access to power and information, which they use to protect their interests. Ordinary voters will not get angry at a corrupt politician if they don’t know that money is being stolen in the first place. Cognitive rigidities or beliefs may also prevent social groups from mobilizing in their own interests. For example, in the United States, many working-class voters support candidates promising to lower taxes on the wealthy, despite the fact that such tax cuts will arguably deprive them of important government services.
Furthermore, different groups have different abilities to organize to defend their interests. Sugar producers and corn growers are geographically concentrated and focused on the prices of their products, unlike ordinary consumers or taxpayers, who are dispersed and for whom the prices of these commodities are only a small part of their budgets. Given institutional rules that often favor special interests (such as the fact that Florida and Iowa, where sugar and corn are grown, are electoral swing states), those groups develop an outsized influence over agricultural and trade policy. Similarly, middle-class groups are usually much more willing and able to defend their interests, such as the preservation of the home mortgage tax deduction, than are the poor. This makes such universal entitlements as Social Security or health insurance much easier to defend politically than programs targeting the poor only.
Finally, liberal democracy is almost universally associated with market economies, which tend to produce winners and losers and amplify what James Madison termed the “different and unequal faculties of acquiring property.” This type of economic inequality is not in itself a bad thing, insofar as it stimulates innovation and growth and occurs under conditions of equal access to the economic system. It becomes highly problematic, however, when the economic winners seek to convert their wealth into unequal political influence. They can do so by bribing a legislator or a bureaucrat, that is, on a transactional basis, or, what is more damaging, by changing the institutional rules to favor themselves -- for example, by closing off competition in markets they already dominate, tilting the playing field ever more steeply in their favor.
Political decay thus occurs when institutions fail to adapt to changing external circumstances, either out of intellectual rigidities or because of the power of incumbent elites to protect their positions and block change. Decay can afflict any type of political system, authoritarian or democratic. And while democratic political systems theoretically have self-correcting mechanisms that allow them to reform, they also open themselves up to decay by legitimating the activities of powerful interest groups that can block needed change.
This is precisely what has been happening in the United States in recent decades, as many of its political institutions have become increasingly dysfunctional. A combination of intellectual rigidity and the power of entrenched political actors is preventing those institutions from being reformed. And there is no guarantee that the situation will change much without a major shock to the political order.
A STATE OF COURTS AND PARTIES
Modern liberal democracies have three branches of government -- the executive, the judiciary, and the legislature -- corresponding to the three basic categories of political institutions: the state, the rule of law, and democracy. The executive is the branch that uses power to enforce rules and carry out policy; the judiciary and the legislature constrain power and direct it to public purposes. In its institutional priorities, the United States, with its long-standing tradition of distrust of government power, has always emphasized the role of the institutions of constraint -- the judiciary and the legislature -- over the state. The political scientist Stephen Skowronek has characterized American politics during the nineteenth century as a “state of courts and parties,” where government functions that in Europe would have been performed by an executive-branch bureaucracy were performed by judges and elected representatives instead. The creation of a modern, centralized, merit-based bureaucracy capable of exercising jurisdiction over the whole territory of the country began only in the 1880s, and the number of professional civil servants increased slowly up through the New Deal a half century later. These changes came far later and more hesitantly than in countries such as France, Germany, and the United Kingdom.
The shift to a more modern administrative state was accompanied by an enormous growth in the size of government during the middle decades of the twentieth century. Overall levels of both taxes and government spending have not changed very much since the 1970s; despite the backlash against the welfare state that began with President Ronald Reagan’s election in 1980, “big government” seems very difficult to dismantle. But the apparently irreversible increase in the scope of government in the twentieth century has masked a large decay in its quality. This is largely because the United States has returned in certain ways to being a “state of courts and parties,” that is, one in which the courts and the legislature have usurped many of the proper functions of the executive, making the operation of the government as a whole both incoherent and inefficient.
The story of the courts is one of the steadily increasing judicialization of functions that in other developed democracies are handled by administrative bureaucracies, leading to an explosion of costly litigation, slowness of decision-making, and highly inconsistent enforcement of laws. In the United States today, instead of being constraints on government, courts have become alternative instruments for the expansion of government.
There has been a parallel usurpation by Congress. Interest groups, having lost their ability to corrupt legislators directly through bribery, have found other means of capturing and controlling legislators. These interest groups exercise influence way out of proportion to their place in society, distort both taxes and spending, and raise overall deficit levels by their ability to manipulate the budget in their favor. They also undermine the quality of public administration through the multiple mandates they induce Congress to support.
Both phenomena -- the judicialization of administration and the spread of interest-group influence -- tend to undermine the trust that people have in government. Distrust of government then perpetuates and feeds on itself. Distrust of executive agencies leads to demands for more legal checks on administration, which reduces the quality and effectiveness of government. At the same time, demand for government services induces Congress to impose new mandates on the executive, which often prove difficult, if not impossible, to fulfill. Both processes lead to a reduction of bureaucratic autonomy, which in turn leads to rigid, rule-bound, uncreative, and incoherent government.
The result is a crisis of representation, in which ordinary citizens feel that their supposedly democratic government no longer truly reflects their interests and is under the control of a variety of shadowy elites. What is ironic and peculiar about this phenomenon is that this crisis of representation has occurred in large part because of reforms designed to make the system more democratic. In fact, these days there is too much law and too much democracy relative to American state capacity.
JUDGES GONE WILD
One of the great turning points in twentieth-century U.S. history was the Supreme Court’s 1954 Brown v. Board of Education decision overturning the 1896 Plessy v. Ferguson case, which had upheld legal segregation. The Brown decision was the starting point for the civil rights movement, which succeeded in dismantling the formal barriers to racial equality and guaranteed the rights of African Americans and other minorities. The model of using the courts to enforce new social rules was then followed by many other social movements, from environmental protection and consumer safety to women’s rights and gay marriage.
So familiar is this heroic narrative to Americans that they are seldom aware of how peculiar an approach to social change it is. The primary mover in the Brown case was the National Association for the Advancement of Colored People, a private voluntary association that filed a class-action suit against the Topeka, Kansas, Board of Education on behalf of a small group of parents and their children. The initiative had to come from private groups, of course, because both the state government and the U.S. Congress were blocked by pro-segregation forces. The NAACP continued to press the case on appeal all the way to the Supreme Court, where it was represented by the future Supreme Court justice Thurgood Marshall. What was arguably one of the most important changes in American public policy came about not because Congress as representative of the American people voted for it but because private individuals litigated through the court system to change the rules. Later changes such as the Civil Rights Act and the Voting Rights Act were the result of congressional action, but even in these cases, the enforcement of national law was left up to the initiative of private parties and carried out by courts.
There is virtually no other liberal democracy that proceeds in this fashion. All European countries have gone through similar changes in the legal status of racial and ethnic minorities, women, and gays in the second half of the twentieth century. But in France, Germany, and the United Kingdom, the same result was achieved not using the courts but through a national justice ministry acting on behalf of a parliamentary majority. The legislative rule change was driven by public pressure from social groups and the media but was carried out by the government itself and not by private parties acting in conjunction with the justice system.
The origins of the U.S. approach lie in the historical sequence by which its three sets of institutions evolved. In countries such as France and Germany, law came first, followed by a modern state, and only later by democracy. In the United States, by contrast, a very deep tradition of English common law came first, followed by democracy, and only later by the development of a modern state. Although the last of these institutions was put into place during the Progressive Era and the New Deal, the American state has always remained weaker and less capable than its European or Asian counterparts. More important, American political culture since the founding has been built around distrust of executive authority.
This history has resulted in what the legal scholar Robert Kagan labels a system of “adversarial legalism.” While lawyers have played an outsized role in American public life since the beginning of the republic, their role expanded dramatically during the turbulent years of social change in the 1960s and 1970s. Congress passed more than two dozen major pieces of civil rights and environmental legislation in this period, covering issues from product safety to toxic waste cleanup to private pension funds to occupational safety and health. This constituted a huge expansion of the regulatory state, one that businesses and conservatives are fond of complaining about today.
Yet what makes this system so unwieldy is not the level of regulation per se but the highly legalistic way in which it is pursued. Congress mandated the creation of an alphabet soup of new federal agencies, such as the Equal Employment Opportunity Commission, the Environmental Protection Agency, and the Occupational Safety and Health Administration, but it was not willing to cleanly delegate to these bodies the kind of rule-making authority and enforcement power that European or Japanese state institutions enjoy. What it did instead was turn over to the courts the responsibility for monitoring and enforcing the law. Congress deliberately encouraged litigation by expanding standing (that is, who has a right to sue) to an ever-wider circle of parties, many of which were only distantly affected by a particular rule.
The political scientist R. Shep Melnick, for example, has described the way that the federal courts rewrote Title VII of the 1964 Civil Rights Act, “turning a weak law focusing primarily on intentional discrimination into a bold mandate to compensate for past discrimination.” Instead of providing a federal bureaucracy with adequate enforcement power, the political scientist Sean Farhang explained, “the key move of Republicans in the Senate . . . was to substantially privatize the prosecutorial function. They made private lawsuits the dominant mode of Title VII enforcement, creating an engine that would, in the years to come, produce levels of private enforcement litigation beyond their imagining.” Across the board, private enforcement cases grew in number from less than 100 per year in the late 1960s to 10,000 in the 1980s and over 22,000 by the late 1990s.
Thus, conflicts that in Sweden or Japan would be solved through quiet consultations between interested parties in the bureaucracy are fought out through formal litigation in the U.S. court system. This has a number of unfortunate consequences for public administration, leading to a process characterized, in Farhang’s words, by “uncertainty, procedural complexity, redundancy, lack of finality, high transaction costs.” By keeping enforcement out of the bureaucracy, it also makes the system far less accountable.
The explosion of opportunities for litigation gave access, and therefore power, to many formerly excluded groups, beginning with African Americans. For this reason, litigation and the right to sue have been jealously guarded by many on the progressive left. But it also entailed large costs in terms of the quality of public policy. Kagan illustrates this with the case of the dredging of Oakland Harbor, in California. During the 1970s, the Port of Oakland initiated plans to dredge the harbor in anticipation of the new, larger classes of container ships that were then coming into service. The plan, however, had to be approved by a host of federal agencies, including the Army Corps of Engineers, the Fish and Wildlife Service, the National Marine Fisheries Service, and the Environmental Protection Agency, as well as their counterparts in the state of California. A succession of alternative plans for disposing of toxic materials dredged from the harbor were challenged in the courts, and each successive plan entailed prolonged delays and higher costs. The reaction of the Environmental Protection Agency to these lawsuits was to retreat into a defensive crouch and not take action. The final plan to proceed with the dredging was not forthcoming until 1994, at an ultimate cost that was many times the original estimates. A comparable expansion of the Port of Rotterdam, in the Netherlands, was accomplished in a fraction of the time.
Examples such as this can be found across the entire range of activities undertaken by the U.S. government. Many of the travails of the Forest Service can be attributed to the ways in which its judgments could be second-guessed through the court system. This effectively brought to a halt all logging on lands it and the Bureau of Land Management operated in the Pacific Northwest during the early 1990s, as a result of threats to the spotted owl, which was protected under the Endangered Species Act.
When used as an instrument of enforcement, the courts have morphed from constraints on government to mechanisms by which the scope of government has expanded enormously. For example, special-education programs for handicapped and disabled children have mushroomed in size and cost since the mid-1970s as a result of an expansive mandate legislated by Congress in 1974. This mandate was built, however, on earlier findings by federal district courts that special-needs children had rights, which are much harder than mere interests to trade off against other goods or to subject to cost-benefit criteria.
The solution to this problem is not necessarily the one advocated by many conservatives and libertarians, which is to simply eliminate regulation and close down bureaucracies. The ends that government is serving, such as the regulation of toxic waste or environmental protection or special education, are important ones that private markets will not pursue if left to their own devices. Conservatives often fail to see that it is the very distrust of government that leads the American system into a far less efficient court-based approach to regulation than that chosen in democracies with stronger executive branches.
But the attitude of progressives and liberals is equally problematic. They, too, have distrusted bureaucracies, such as the ones that produced segregated school systems in the South or the ones captured by big business, and they have been happy to inject unelected judges into the making of social policy when legislators have proved insufficiently supportive.
A decentralized, legalistic approach to administration dovetails with the other notable feature of the U.S. political system: its openness to the influence of interest groups. Such groups can get their way by suing the government directly. But they have another, even more powerful channel, one that controls significantly more resources: Congress.
LIBERTY AND PRIVILEGE
With the exception of some ambassadorships and top posts in government departments, U.S. political parties are no longer in the business of distributing government offices to loyal political supporters. But the trading of political influence for money has come in through the back door, in a form that is perfectly legal and much harder to eradicate. Criminal bribery is narrowly defined in U.S. law as a transaction in which a politician and a private party explicitly agree on a specific quid pro quo. What is not covered by the law is what biologists call reciprocal altruism, or what an anthropologist might label a gift exchange. In a relationship of reciprocal altruism, one person confers a benefit on another with no explicit expectation that it will buy a return favor. Indeed, if one gives someone a gift and then immediately demands a gift in return, the recipient is likely to feel offended and refuse what is offered. In a gift exchange, the receiver incurs not a legal obligation to provide some specific good or service but rather a moral obligation to return the favor in some way later on. It is this sort of transaction that the U.S. lobbying industry is built around.
Kin selection and reciprocal altruism are two natural modes of human sociability. Modern states create strict rules and incentives to overcome the tendency to favor family and friends, including practices such as civil service examinations, merit qualifications, conflict-of-interest regulations, and antibribery and anticorruption laws. But the force of natural sociability is so strong that it keeps finding a way to penetrate the system.
Over the past half century, the American state has been “repatrimonialized,” in much the same way as the Chinese state in the Later Han dynasty, the Mamluk regime in Egypt just before its defeat by the Ottomans, and the French state under the ancien régime were. Rules blocking nepotism are still strong enough to prevent overt favoritism from being a common feature of contemporary U.S. politics (although it is interesting to note how strong the urge to form political dynasties is, with all of the Kennedys, Bushes, Clintons, and the like). Politicians do not typically reward family members with jobs; what they do is engage in bad behavior on behalf of their families, taking money from interest groups and favors from lobbyists in order to make sure that their children are able to attend elite schools and colleges, for example.
Reciprocal altruism, meanwhile, is rampant in Washington and is the primary channel through which interest groups have succeeded in corrupting government. As the legal scholar Lawrence Lessig points out, interest groups are able to influence members of Congress legally simply by making donations and waiting for unspecified return favors. And sometimes, the legislator is the one initiating the gift exchange, favoring an interest group in the expectation that he will get some sort of benefit from it after leaving office.
The explosion of interest groups and lobbying in Washington has been astonishing: the number of firms with registered lobbyists rose from 175 in 1971 to roughly 2,500 a decade later, and by 2009 some 13,700 registered lobbyists were spending about $3.5 billion a year. Some scholars have argued that all this money and activity has not resulted in measurable changes in policy along the lines desired by the lobbyists, implausible as this may seem. But oftentimes, the impact of interest groups and lobbyists is not to stimulate new policies but to make existing legislation much worse than it would otherwise be. The legislative process in the United States has always been much more fragmented than in countries with parliamentary systems and disciplined parties. The welter of congressional committees with overlapping jurisdictions often leads to multiple and conflicting mandates for action. This decentralized legislative process produces incoherent laws and virtually invites involvement by interest groups, which, if not powerful enough to shape overall legislation, can at least protect their specific interests.
For example, the health-care bill pushed by the Obama administration in 2010 turned into something of a monstrosity during the legislative process as a result of all the concessions and side payments that had to be made to interest groups ranging from doctors to insurance companies to the pharmaceutical industry. In other cases, the impact of interest groups was to block legislation harmful to their interests. The simplest and most effective response to the 2008 financial crisis and the hugely unpopular taxpayer bailouts of large banks would have been a law that put a hard cap on the size of financial institutions or a law that dramatically raised capital requirements, which would have had much the same effect. If a cap on size existed, banks taking foolish risks could go bankrupt without triggering a systemic crisis and a government bailout. Like the Depression-era Glass-Steagall Act, such a law could have been written on a couple of sheets of paper. But this possibility was not seriously considered during the congressional deliberations on financial regulation.
What emerged instead was the Dodd-Frank Wall Street Reform and Consumer Protection Act, which, while better than no regulation at all, extended to hundreds of pages of legislation and mandated reams of further detailed rules that will impose huge costs on banks and consumers down the road. Rather than simply capping bank size, it created the Financial Stability Oversight Council, which was assigned the enormous task of assessing and managing institutions posing systemic risks, a move that in the end will still not solve the problem of banks being “too big to fail.” Although no one will ever find a smoking gun linking banks’ campaign contributions to the votes of specific members of Congress, it defies belief that the banking industry’s legions of lobbyists did not have a major impact in preventing the simpler solution of simply breaking up the big banks or subjecting them to stringent capital requirements.
Ordinary Americans express widespread disdain for the impact of interest groups and money on Congress. The perception that the democratic process has been corrupted or hijacked is not an exclusive concern of either end of the political spectrum; both Tea Party Republicans and liberal Democrats believe that interest groups are exercising undue political influence and feathering their own nests. As a result, polls show that trust in Congress has fallen to historically low levels, barely above single digits -- and the respondents have a point. Of the old elites in France prior to the Revolution, Alexis de Tocqueville said that they mistook privilege for liberty, that is, they sought protection from state power that applied to them alone and not generally to all citizens. In the contemporary United States, elites speak the language of liberty but are perfectly happy to settle for privilege.
WHAT MADISON GOT WRONG
The economist Mancur Olson made one of the most famous arguments about the malign effects of interest-group politics on economic growth and, ultimately, democracy in his 1982 book The Rise and Decline of Nations. Looking particularly at the long-term economic decline of the United Kingdom throughout the twentieth century, he argued that in times of peace and stability, democracies tended to accumulate ever-increasing numbers of interest groups. Instead of pursuing wealth-creating economic activities, these groups used the political system to extract benefits or rents for themselves. These rents were collectively unproductive and costly to the public as a whole. But the general public had a collective-action problem and could not organize as effectively as, for example, the banking industry or corn producers to protect their interests. The result was the steady diversion of energy to rent-seeking activities over time, a process that could be halted only by a large shock such as a war or a revolution.
This highly negative narrative about interest groups stands in sharp contrast to a much more positive one about the benefits of civil society, or voluntary associations, to the health of democracy. Tocqueville noted in Democracy in America that Americans had a strong propensity to organize private associations, which he argued were schools for democracy because they taught private individuals the skills of coming together for public purposes. Individuals by themselves were weak; only by coming together for common purposes could they, among other things, resist tyrannical government. This perspective was carried forward in the late twentieth century by scholars such as Robert Putnam, who argued that this very propensity to organize -- “social capital” -- was both good for democracy and endangered.
Madison himself had a relatively benign view of interest groups. Even if one did not approve of the ends that a particular group was seeking, he argued, the diversity of groups over a large country would be sufficient to prevent domination by any one of them. As the political scientist Theodore Lowi has noted, “pluralist” political theory in the mid-twentieth century concurred with Madison: the cacophony of interest groups would collectively interact to produce a public interest, just as competition in a free market would provide public benefit through individuals’ following their narrow self-interests. There were no grounds for the government to regulate this process, since there was no higher authority that could define a public interest standing above the narrow concerns of interest groups. The Supreme Court in its Buckley v. Valeo and Citizens United decisions, which struck down certain limits on campaign spending by groups, was in effect affirming the benign interpretation of what Lowi has labeled “interest group liberalism.”
How can these diametrically opposed narratives be reconciled? The most obvious way is to try to distinguish a “good” civil society organization from a “bad” interest group. The former could be said to be driven by passions, the latter by interests. A civil society organization might be a nonprofit such as a church group seeking to build houses for the poor or else a lobbying organization promoting a public policy it believed to be in the public interest, such as the protection of coastal habitats. An interest group might be a lobbying firm representing the tobacco industry or large banks, whose objective was to maximize the profits of the companies supporting it.
Unfortunately, this distinction does not hold up to theoretical scrutiny. Just because a group proclaims that it is acting in the public interest does not mean that it is actually doing so. For example, a medical advocacy group that wanted more dollars allocated to combating a particular disease might actually distort public priorities by diverting funds from more widespread and damaging diseases, simply because it is better at public relations. And just because an interest group is self-interested does not mean that its claims are illegitimate or that it does not have a right to be represented within the political system. If a poorly thought-out regulation would seriously damage the interests of an industry and its workers, the relevant interest group has a right to make that known to Congress. In fact, such lobbyists are often some of the most important sources of information about the consequences of government action.
The most salient argument against interest-group pluralism has to do with distorted representation. In his 1960 book The Semisovereign People, E. E. Schattschneider argued that the actual practice of democracy in the United States had nothing to do with its popular image as a government “of the people, by the people, for the people.” He noted that political outcomes seldom correspond with popular preferences, that there is a very low level of participation and political awareness, and that real decisions are taken by much smaller groups of organized interests. A similar argument is buried in Olson’s framework, since Olson notes that not all groups are equally capable of organizing for collective action. The interest groups that contend for the attention of Congress represent not the whole American people but the best-organized and (what often amounts to the same thing) most richly endowed parts of American society. This tends to work against the interests of the unorganized, who are often poor, poorly educated, or otherwise marginalized.
The political scientist Morris Fiorina has provided substantial evidence that what he labels the American “political class” is far more polarized than the American people themselves. The majorities that support middle-of-the-road positions, however, do not feel very passionately about them, and they are largely unorganized. This means that politics is defined by well-organized activists, whether in the parties and Congress, in the media, or in lobbying and interest groups. The sum of these activist groups does not yield a compromise position; it leads instead to polarization and deadlocked politics.
There is a further problem with the pluralistic view, which sees the public interest as nothing more than the aggregation of individual private interests: it undermines the possibility of deliberation and the process by which individual preferences are shaped by dialogue and communication. Both classical Athenian democracy and the New England town hall meetings celebrated by Tocqueville were cases in which citizens spoke directly to one another about the common interests of their communities. It is easy to idealize these instances of small-scale democracy, or to minimize the real differences that exist in large societies. But as any organizer of focus groups will tell you, people’s views on highly emotional subjects, from immigration to abortion to drugs, will change just 30 minutes into a face-to-face discussion with people of differing views, provided that they are all given the same information and ground rules that enforce civility. One of the problems of pluralism, then, is the assumption that interests are fixed and that the role of the legislator is simply to act as a transmission belt for them, rather than having his own views that can be shaped by deliberation.
THE RISE OF VETOCRACY
The U.S. Constitution protects individual liberties through a complex system of checks and balances that were deliberately designed by the founders to constrain the power of the state. American government arose in the context of a revolution against British monarchical authority and drew on even deeper wellsprings of resistance to the king during the English Civil War. Intense distrust of government and a reliance on the spontaneous activities of dispersed individuals have been hallmarks of American politics ever since.
As Huntington pointed out, in the U.S. constitutional system, powers are not so much functionally divided as replicated across the branches, leading to periodic usurpations of one branch by another and conflicts over which branch should predominate. Federalism often does not cleanly delegate specific powers to the appropriate level of government; rather, it duplicates them at multiple levels, giving federal, state, and local authorities jurisdiction over, for example, toxic waste disposal. Under such a system of redundant and non-hierarchical authority, different parts of the government are easily able to block one another. In conjunction with the general judicialization of politics and the widespread influence of interest groups, the result is an unbalanced form of government that undermines the prospects of necessary collective action -- something that might more appropriately be called “vetocracy.”
The two dominant American political parties have become more ideologically polarized than at any time since the late nineteenth century. There has been a partisan geographic sorting, with virtually the entire South moving from Democratic to Republican and Republicans becoming virtually extinct in the Northeast. Since the breakdown of the New Deal coalition and the end of the Democrats’ hegemony in Congress in the 1980s, the two parties have become more evenly balanced and have repeatedly exchanged control over the presidency and Congress. This higher degree of partisan competition, in turn, along with liberalized campaign-finance guidelines, has fueled an arms race between the parties for funding and has undermined personal comity between them. The parties have also increased their homogeneity through their control, in most states, over redistricting, which allows them to gerrymander voting districts to increase their chances of reelection. The spread of primaries, meanwhile, has put the choice of party candidates into the hands of the relatively small number of activists who turn out for these elections.
Polarization is not the end of the story, however. Democratic political systems are not supposed to end conflict; rather, they are meant to peacefully resolve and mitigate it through agreed-on rules. A good political system is one that encourages the emergence of political outcomes representing the interests of as large a part of the population as possible. But when polarization confronts the United States’ Madisonian check-and-balance political system, the result is particularly devastating.
Democracies must balance the need to allow full opportunities for political participation for all, on the one hand, and the need to get things done, on the other. Ideally, democratic decisions would be taken by consensus, with every member of the community consenting. This is what typically happens in families, and how band- and tribal-level societies often make decisions. The efficiency of consensual decision-making, however, deteriorates rapidly as groups become larger and more diverse, and so for most groups, decisions are made not by consensus but with the consent of some subset of the population. The smaller the percentage of the group necessary to take a decision, the more easily and efficiently it can be made, but at the expense of long-run buy-in.
Even systems of majority rule deviate from an ideal democratic procedure, since they can disenfranchise nearly half the population. Indeed, under a plurality, or “first past the post,” electoral system, decisions can be taken for the whole community by a minority of voters. Systems such as these are adopted not on the basis of any deep principle of justice but rather as an expedient that allows decisions of some sort to be made. Democracies also create various other mechanisms to expedite decision-making, such as cloture rules (enabling the cutting off of debate), rules restricting the ability of legislators to offer amendments, and so-called reversionary rules, which allow for action in the event that a legislature can’t come to agreement.
The delegation of powers to different political actors enables them to block action by the whole body. The U.S. political system has far more of these checks and balances, or what political scientists call “veto points,” than other contemporary democracies, raising the costs of collective action and in some cases making it impossible altogether. In earlier periods of U.S. history, when one party or another was dominant, this system served to moderate the will of the majority and force it to pay greater attention to minorities than it otherwise might have. But in the more evenly balanced, highly competitive party system that has arisen since the 1980s, it has become a formula for gridlock.
By contrast, the so-called Westminster system, which evolved in England in the years following the Glorious Revolution of 1688, is one of the most decisive in the democratic world because, in its pure form, it has very few veto points. British citizens have one large, formal check on government, their ability to periodically elect Parliament. (The tradition of free media in the United Kingdom is another important informal check.) In all other respects, however, the system concentrates, rather than diffuses, power. The pure Westminster system has only a single, all-powerful legislative chamber -- no separate presidency, no powerful upper house, no written constitution and therefore no judicial review, and no federalism or constitutionally mandated devolution of powers to localities. It has a plurality voting system that, along with strong party discipline, tends to produce a two-party system and strong parliamentary majorities. The British equivalent of the cloture rule requires only a simple majority of the members of Parliament to be present to call the question; American-style filibustering is not allowed. The parliamentary majority chooses a government with strong executive powers, and when it makes a legislative decision, it generally cannot be stymied by courts, states, municipalities, or other bodies. This is why the British system is often described as a “democratic dictatorship.”
For all its concentrated powers, the Westminster system nonetheless remains fundamentally democratic, because if voters don’t like the government it produces, they can vote it out of office. In fact, a vote of no confidence can bring a government down immediately, without waiting, as in a presidential system, for the end of a fixed term. This means that governments are more sensitive to perceptions of their general performance than to the needs of particular interest groups or lobbies.
The Westminster system produces stronger governments than those in the United States, as can be seen by comparing their budget processes. In the United Kingdom, national budgets are drawn up by professional civil servants acting under instructions from the cabinet and the prime minister. The budget is then presented by the chancellor of the exchequer to the House of Commons, which votes to approve it in a single up-or-down vote, usually within a week or two.
In the United States, by contrast, Congress has primary authority over the budget. Presidents make initial proposals, but these are largely aspirational documents that do not determine what eventually emerges. The executive branch’s Office of Management and Budget has no formal powers over the budget, acting as simply one more lobbying organization supporting the president’s preferences. The budget works its way through a complex set of committees over a period of months, and what finally emerges for ratification by the two houses of Congress is the product of innumerable deals struck with individual members to secure their support -- since with no party discipline, the congressional leadership cannot compel members to support its preferences.
The openness and never-ending character of the U.S. budget process gives lobbyists and interest groups multiple points at which to exercise influence. In most European parliamentary systems, it would make no sense for an interest group to lobby an individual member of parliament, since the rules of party discipline would give that legislator little or no influence over the party leadership’s position. In the United States, by contrast, an influential committee chairmanship confers enormous powers to modify legislation and therefore becomes the target of enormous lobbying activity.
Of the challenges facing developed democracies, one of the most important is the problem of the unsustainability of their existing welfare-state commitments. The existing social contracts underlying contemporary welfare states were negotiated several generations ago, when birthrates were higher, lifespans were shorter, and economic growth rates were robust. The availability of finance has allowed all modern democracies to keep pushing this problem into the future, but at some point, the underlying demographic reality will set in.
These problems are not insuperable. The debt-to-GDP ratios of both the United Kingdom and the United States coming out of World War II were higher than they are today. Sweden, Finland, and other Scandinavian countries found their large welfare states in crisis during the 1990s and were able to make adjustments to their tax and spending levels. Australia succeeded in eliminating almost all its external debt, even prior to the huge resource boom of the early years of this century. But dealing with these problems requires a healthy, well-functioning political system, which the United States does not currently have. Congress has abdicated one of its most basic responsibilities, having failed to follow its own rules for the orderly passing of budgets several years in a row now.
The classic Westminster system no longer exists anywhere in the world, including the United Kingdom itself, as that country has gradually adopted more checks and balances. Nonetheless, the United Kingdom still has far fewer veto points than does the United States, as do most parliamentary systems in Europe and Asia. (Certain Latin American countries, having copied the U.S. presidential system in the nineteenth century, have similar problems with gridlock and politicized administration.)
Budgeting is not the only aspect of government that is handled differently in the United States. In parliamentary systems, a great deal of legislation is formulated by the executive branch with heavy technocratic input from the permanent civil service. Ministries are accountable to parliament, and hence ultimately to voters, through the ministers who head them, but this type of hierarchical system can take a longer-term strategic view and produce much more coherent legislation.
Such a system is utterly foreign to the political culture in Washington, where Congress jealously guards its right to legislate -- even though the often incoherent product is what helps produce a large, sprawling, and less accountable government. Congress’ multiple committees frequently produce duplicate and overlapping programs or create several agencies with similar purposes. The Pentagon, for example, operates under nearly 500 mandates to report annually to Congress on various issues. These never expire, and executing them consumes huge amounts of time and energy. Congress has created about 50 separate programs for worker retraining and 82 separate projects to improve teacher quality.
Financial-sector regulation is split between the Federal Reserve, the Treasury Department, the Securities and Exchange Commission, the Federal Deposit Insurance Corporation, the National Credit Union Administration, the Commodity Futures Trading Commission, the Federal Housing Finance Agency, and a host of state attorneys general who have decided to take on the banking sector. The federal agencies are overseen by different congressional committees, which are loath to give up their turf to a more coherent and unified regulator. This system was easy to game so as to bring about the deregulation of the financial sector in the late 1990s; re-regulating it after the recent financial crisis has proved much more difficult.
CONGRESSIONAL DELEGATION
Vetocracy is only half the story of the U.S. political system. In other respects, Congress delegates huge powers to the executive branch, which allow the latter to operate rapidly and sometimes with a very low degree of accountability. Such areas of delegation include the Federal Reserve, the intelligence agencies, the military, and a host of quasi-independent commissions and regulatory agencies that together constitute the huge administrative state that emerged during the Progressive Era and the New Deal.
While many American libertarians and conservatives would like to abolish these agencies altogether, it is hard to see how it would be possible to govern properly under modern circumstances without them. The United States today has a huge, complex national economy, situated in a globalized world economy that moves with extraordinary speed. During the acute phase of the financial crisis that unfolded after the collapse of Lehman Brothers in September 2008, the Federal Reserve and the Treasury Department had to make massive decisions overnight, decisions that involved flooding markets with trillions of dollars of liquidity, propping up individual banks, and imposing new regulations. The severity of the crisis led Congress to appropriate $700 billion for the Troubled Asset Relief Program largely on the say-so of the Bush administration. There has been a lot of second-guessing of individual decisions made during this period, but the idea that such a crisis could have been managed by any other branch of government is ludicrous. The same applies to national security issues, where the president is in effect tasked with making decisions on how to respond to nuclear and terrorist threats that potentially affect the lives of millions of Americans. It is for this reason that Alexander Hamilton, in The Federalist Papers, no. 70, spoke of the need for “energy in the executive.”
There is intense populist distrust of elite institutions in the United States, together with calls to abolish them (as in the case of the Federal Reserve) or make them more transparent. Ironically, however, polls show the highest degree of approval for precisely those institutions, such as the military or NASA, that are the least subject to immediate democratic oversight. Part of the reason they are admired is that they can actually get things done. By contrast, the most democratic institution, the House of Representatives, receives disastrously low levels of approval, and Congress more broadly is regarded (not inaccurately) as a talking shop where partisan games prevent almost anything useful from happening.
Viewed as a whole, therefore, the U.S. political system presents a complex picture in which checks and balances excessively constrain decision-making on the part of majorities, but in which there are also many instances of potentially dangerous delegations of authority to poorly accountable institutions. One major problem is that these delegations are seldom made cleanly. Congress frequently fails in its duty to provide clear legislative guidance on how a particular agency is to perform its task, leaving it up to the agency itself to write its own mandate and hoping that if things don’t work out, the courts will step in to correct the abuses. Excessive delegation and vetocracy thus become intertwined.
In a parliamentary system, the majority party or coalition controls the government directly; members of parliament become ministers who have the authority to change the rules of the bureaucracies they control. Parliamentary systems can be blocked if parties are excessively fragmented and coalitions unstable, as has been the case frequently in Italy. But once a parliamentary majority has been established, there is a relatively straightforward delegation of authority to an executive agency.
Such delegations are harder to achieve, however, in a presidential system. The obvious solution to a legislature’s inability to act is to transfer more authority to the separately elected executive. Latin American countries with presidential systems have been notorious for gridlock and ineffective legislatures and have often cut through the maze by granting presidents emergency powers -- which, in turn, has often led to other kinds of abuses. Under conditions of divided government, when the party controlling one or both houses of Congress is different from the one controlling the presidency, strengthening the executive at the expense of Congress becomes a matter of partisan politics. Delegating more authority to President Barack Obama is the last thing that House Republicans want to do today.
In many respects, the American system of checks and balances compares unfavorably with parliamentary systems when it comes to the ability to balance the need for strong state action with law and accountability. Parliamentary systems tend not to judicialize administration to nearly the same extent; they have proliferated government agencies less, they write more coherent legislation, and they are less subject to interest-group influence. Germany, the Netherlands, and the Scandinavian countries, in particular, have been able to sustain higher levels of trust in government, which makes public administration less adversarial, more consensual, and better able to adapt to changing conditions of globalization. (High-trust arrangements, however, tend to work best in relatively small, homogeneous societies, and those in these countries have been showing signs of strain as their societies have become more diverse as a result of immigration and cultural change.)
The picture looks a bit different for the EU as a whole. Recent decades have seen a large increase in the number and sophistication of lobbying groups in Europe, for example. These days, corporations, trade associations, and environmental, consumer, and labor rights groups all operate at both national and EU-wide levels. And with the shift of policymaking away from national capitals to Brussels, the European system as a whole is beginning to resemble that of the United States in depressing ways. Europe’s individual parliamentary systems may allow for fewer veto points than the U.S. system of checks and balances, but with the addition of a large European layer, many more veto points have been added. This means that European interest groups are increasingly able to venue shop: if they cannot get favorable treatment at the national level, they can go to Brussels, or vice versa. The growth of the EU has also Americanized Europe with respect to the role of the judiciary. Although European judges remain more reluctant than their U.S. counterparts to insert themselves into political matters, the new structure of European jurisprudence, with its multiple and overlapping levels, has increased, rather than decreased, the number of judicial vetoes in the system.
NO WAY OUT
The U.S. political system has decayed over time because its traditional system of checks and balances has deepened and become increasingly rigid. In an environment of sharp political polarization, this decentralized system is less and less able to represent majority interests and gives excessive representation to the views of interest groups and activist organizations that collectively do not add up to a sovereign American people.
This is not the first time that the U.S. political system has been polarized and indecisive. In the middle decades of the nineteenth century, it could not make up its mind about the extension of slavery to the territories, and in the later decades of the century, it couldn’t decide if the country was a fundamentally agrarian society or an industrial one. The Madisonian system of checks and balances and the clientelistic, party-driven political system that emerged in the nineteenth century were adequate for governing an isolated, largely agrarian country. They could not, however, resolve the acute political crisis produced by the question of the extension of slavery, nor deal with a continental-scale economy increasingly knit together by new transportation and communications technologies.
Today, once again, the United States is trapped by its political institutions. Because Americans distrust government, they are generally unwilling to delegate to it the authority to make decisions, as happens in other democracies. Instead, Congress mandates complex rules that reduce the government’s autonomy and cause decision-making to be slow and expensive. The government then doesn’t perform well, which confirms people’s lack of trust in it. Under these circumstances, they are reluctant to pay higher taxes, which they feel the government will simply waste. But without appropriate resources, the government can’t function properly, again creating a self-fulfilling prophecy.
Two obstacles stand in the way of reversing the trend toward decay. The first is a matter of politics. Many political actors in the United States recognize that the system isn’t working well but nonetheless have strong interests in keeping things as they are. Neither political party has an incentive to cut itself off from access to interest-group money, and the interest groups don’t want a system in which money won’t buy influence. As happened in the 1880s, a reform coalition has to emerge that unites groups without a stake in the current system. But achieving collective action among such out-groups is very difficult; they need leadership and a clear agenda, neither of which is currently present.
The second problem is a matter of ideas. The traditional American solution to perceived governmental dysfunction has been to try to expand democratic participation and transparency. This happened at a national level in the 1970s, for example, as reformers pushed for more open primaries, greater citizen access to the courts, and round-the-clock media coverage of Congress, even as states such as California expanded their use of ballot initiatives to get around unresponsive government. But as the political scientist Bruce Cain has pointed out, most citizens have neither the time, nor the background, nor the inclination to grapple with complex public policy issues; expanding participation has simply paved the way for well-organized groups of activists to gain more power. The obvious solution to this problem would be to roll back some of the would-be democratizing reforms, but no one dares suggest that what the country needs is a bit less participation and transparency.
The depressing bottom line is that given how self-reinforcing the country’s political malaise is, and how unlikely the prospects for constructive incremental reform are, the decay of American politics will probably continue until some external shock comes along to catalyze a true reform coalition and galvanize it into action.
American Political Decay or Renewal?
Two years ago, I argued in these pages that America was suffering from political decay. The country’s constitutional system of checks and balances, together with partisan polarization and the rise of well-financed interest groups, had combined to yield what I labeled “vetocracy,” a situation in which it was easier to stop government from doing things than it was to use government to promote the common good. Recurrent budgetary crises, a stagnating bureaucracy, and a lack of policy innovation were the hallmarks of a political system in disarray.
On the surface, the 2016 presidential election seems to be bearing out this analysis. The once proud Republican Party lost control of its nominating process to Donald Trump’s hostile takeover and is riven with deep internal contradictions. On the Democratic side, meanwhile, the ultra-insider Hillary Clinton has faced surprisingly strong competition from Bernie Sanders, a 74-year-old self-proclaimed democratic socialist. Whatever the issue—from immigration to financial reform to trade to stagnating incomes—large numbers of voters on both sides of the spectrum have risen up against what they see as a corrupt, self-dealing Establishment, turning to radical outsiders in the hopes of a purifying cleanse.
In fact, however, the turbulent campaign has shown that American democracy is in some ways in better working order than expected. Whatever one might think of their choices, voters have flocked to the polls in state after state and wrested control of the political narrative from organized interest groups and oligarchs. Jeb Bush, the son and brother of presidents who once seemed the inevitable Republican choice, ignominiously withdrew from the race in February after having blown through more than $130 million (together with his super PAC). Sanders, meanwhile, limiting himself to small donations and pledging to disempower the financial elite that supports his opponent, has raised even more than Bush and nipped at Clinton’s heels throughout.
The real story of this election is that after several decades, American democracy is finally responding to the rise of inequality and the economic stagnation experienced by most of the population. Social class is now back at the heart of American politics, trumping other cleavages—race, ethnicity, gender, sexual orientation, geography—that had dominated discussion in recent elections.
The gap between the fortunes of elites and those of the rest of the public has been growing for two generations, but only now is it coming to dominate national politics. What really needs to be explained is not why populists have been able to make such gains this cycle but why it took them so long to do so. Moreover, although it is good to know that the U.S. political system is less ossified and less in thrall to monied elites than many assumed, the nostrums being hawked by the populist crusaders are nearly entirely unhelpful, and if embraced, they would stifle growth, exacerbate malaise, and make the situation worse rather than better. So now that the elites have been shocked out of their smug complacency, the time has come for them to devise more workable solutions to the problems they can no longer deny or ignore.
THE SOCIAL BASIS OF POPULISM
In recent years, it has become ever harder to deny that incomes have been stagnating for most U.S. citizens even as elites have done better than ever, generating rising inequality throughout American society. Certain basic facts, such as the enormously increased share of national wealth taken by the top one percent, and indeed the top 0.1 percent, are increasingly uncontested. What is new this political cycle is that attention has started to turn from the excesses of the oligarchy to the straitened circumstances of those left behind.
Two recent books—Charles Murray’s Coming Apart and Robert Putnam’s Our Kids—lay out the new social reality in painful detail. Murray and Putnam are at opposite ends of the political spectrum, one a libertarian conservative and the other a mainstream liberal, yet the data they report are virtually identical. Working-class incomes have declined over the past generation, most dramatically for white men with a high school education or less. For this group, Trump’s slogan, “Make America Great Again!” has real meaning. But the pathologies they suffer from go much deeper and are revealed in data on crime, drug use, and single-parent families.
Back in the 1980s, there was a broad national conversation about the emergence of an African American underclass—that is, a mass of underemployed and underskilled people whose poverty seemed self-replicating because it led to broken families that were unable to transmit the kinds of social norms and behaviors required to compete in the job market. Today, the white working class is in virtually the same position as the black underclass was back then.
During the run-up to the primary in New Hampshire—a state that is about as white and rural as any in the country—many Americans were likely surprised to learn that voters’ most important concern there was heroin addiction. In fact, opioid and methamphetamine addiction have become as epidemic in rural white communities in states such as Indiana and Kentucky as crack was in the inner city a generation ago. A recent paper by the economists Anne Case and Angus Deaton showed that the death rates for white non-Hispanic middle-aged men in the United States rose between 1999 and 2013, even as they fell for virtually every other population group and in every other rich country. The causes of this increase appear to have been suicide, drugs, and alcohol—nearly half a million excess deaths over what would have been expected. And crime rates for this group have skyrocketed as well.
This increasingly bleak reality, however, scarcely registered with American elites—not least because over the same period, they themselves were doing quite well. People with at least a college education have seen their fortunes rise over the decades. Rates of divorce and single-parent families have decreased among this group, neighborhood crime has fallen steadily, cities have been reclaimed for young urbanites, and technologies such as the Internet and social media have powered social trust and new forms of community engagement. For this group, helicopter parents are a bigger problem than latchkey children.
THE FAILURE OF POLITICS
Given the enormity of the social shift that has occurred, the real question is not why the United States has populism in 2016 but why the explosion did not occur much earlier. And here there has indeed been a problem of representation in American institutions: neither political party has served the declining group well.
In recent decades, the Republican Party has been an uneasy coalition of business elites and social conservatives, the former providing money, and the latter primary votes. The business elites, represented by the editorial page of The Wall Street Journal, have been principled advocates of economic liberalism: free markets, free trade, and open immigration. It was Republicans who provided the votes to pass trade legislation such as the North American Free Trade Agreement and the recent trade promotion authority (more commonly known as “fast track”). Their business backers clearly benefit from both the import of foreign labor, skilled and unskilled, and a global trading system that allows them to export and invest around the globe. Republicans pushed for the dismantling of the Depression-era system of bank regulation that laid the groundwork for the subprime meltdown and the resulting financial crisis of 2008. And they have been ideologically committed to cutting taxes on wealthy Americans, undermining the power of labor unions, and reducing social services that stood to benefit the less well-off.
This agenda ran directly counter to the interests of the working class. The causes of the working class’ decline are complex, having to do as much with technological change as with factors touched by public policy. And yet it is undeniable that the pro-market shift promoted by Republican elites in recent decades has exerted downward pressure on working-class incomes, both by exposing workers to more ruthless technological and global competition and by paring back various protections and social benefits left over from the New Deal. (Countries such as Germany and the Netherlands, which have done more to protect their workers, have not seen comparable increases in inequality.) It should not be surprising, therefore, that the biggest and most emotional fight this year is the one taking place within the Republican Party, as its working-class base expresses a clear preference for more nationalist economic policies.
The Democrats, for their part, have traditionally seen themselves as champions of the common man and can still count on a shrinking base of trade union members to help get out the vote. But they have also failed this constituency. Since the rise of Bill Clinton’s “third way,” elites in the Democratic Party have embraced the post-Reagan consensus on the benefits of free trade and immigration. They were complicit in the dismantling of bank regulation in the 1990s and have tried to buy off, rather than support, the labor movement over its objections to trade agreements.
But the more important problem with the Democrats is that the party has embraced identity politics as its core value. The party has won recent elections by mobilizing a coalition of population segments: women, African Americans, young urbanites, gays, and environmentalists. The one group it has completely lost touch with is the same white working class that was the bedrock of Franklin Roosevelt’s New Deal coalition. The white working class began voting Republican in the 1980s over cultural issues such as patriotism, gun rights, abortion, and religion. Clinton won back enough of them in the 1990s to be elected twice (with pluralities each time), but since then, they have been a more reliable constituency for the Republican Party, despite the fact that elite Republican economic policies are at odds with their economic interests. This is why, in a Quinnipiac University survey released in April, 80 percent of Trump’s supporters polled said they felt that “the government has gone too far in assisting minority groups,” and 85 percent agreed that “America has lost its identity.”
The Democrats’ fixation with identity explains one of the great mysteries of contemporary American politics—why rural working-class whites, particularly in southern states with limited social services, have flocked to the banner of the Republicans even though they have been among the greatest beneficiaries of Republican-opposed programs, such as Barack Obama’s Affordable Care Act. One reason is their perception that Obamacare was designed to benefit people other than themselves—in part because Democrats have lost their ability to speak to such voters (in contrast to in the 1930s, when southern rural whites were key supporters of Democratic Party welfare state initiatives such as the Tennessee Valley Authority).
THE END OF AN ERA?
Trump’s policy pronouncements are confused and contradictory, coming as they do from a narcissistic media manipulator with no clear underlying ideology. But the common theme that has made him attractive to so many Republican primary voters is one that he shares to some extent with Sanders: an economic nationalist agenda designed to protect and restore the jobs of American workers. This explains both his opposition to immigration—not just illegal immigration but also skilled workers coming in on H1B visas—and his condemnation of American companies that move plants abroad to save on labor costs. He has criticized not only China for its currency manipulation but also friendly countries such as Japan and South Korea for undermining the United States’ manufacturing base. And of course he is dead set against further trade liberalization, such as the Trans-Pacific Partnership in Asia and the Transatlantic Trade and Investment Partnership with Europe.
All of this sounds like total heresy to anyone who has taken a basic college-level course in trade theory, where models from the Ricardian theory of comparative advantage to the Heckscher-Ohlin factor endowment theory tell you that free trade is a win-win for trading partners, increasing all countries’ aggregate incomes. And indeed, global output has exploded over the past two generations, increasing fourfold between 1970 and 2008, as world trade and investment have been liberalized under the broad framework of the General Agreement on Tariffs and Trade and then the World Trade Organization. Globalization has been responsible for lifting hundreds of millions of people out of poverty in countries such as China and India and has generated unfathomable amounts of wealth in the United States.
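For readers who want to see the arithmetic behind that win-win claim, here is a minimal, purely illustrative sketch of the Ricardian logic. The unit labor costs, endowments, and trading price below are hypothetical numbers chosen only to make the textbook result visible; they are not drawn from any study cited in this essay.

```python
# A minimal, illustrative sketch of Ricardian comparative advantage.
# All numbers are hypothetical and chosen only to show the textbook
# result that specialization and trade can leave both countries with
# more of both goods than they had in autarky.

# Unit labor requirements (hours of labor per unit of output).
home = {"cloth": 1, "wine": 2}       # Home is relatively better at cloth
foreign = {"cloth": 6, "wine": 3}    # Foreign is relatively better at wine

labor = {"home": 100, "foreign": 300}  # total labor endowments

def autarky(requirements, hours):
    """Each country splits its labor evenly between the two goods."""
    return {good: (hours / 2) / req for good, req in requirements.items()}

home_autarky = autarky(home, labor["home"])          # 50 cloth, 25 wine
foreign_autarky = autarky(foreign, labor["foreign"])  # 25 cloth, 50 wine

# Under free trade each country specializes in its comparative advantage.
home_output = {"cloth": labor["home"] / home["cloth"], "wine": 0}          # 100 cloth
foreign_output = {"cloth": 0, "wine": labor["foreign"] / foreign["wine"]}  # 100 wine

# Trade 45 units of cloth for 45 units of wine; a 1:1 price lies between
# the two countries' opportunity costs of wine (0.5 and 2.0 units of cloth).
trade = 45
home_consumption = {"cloth": home_output["cloth"] - trade, "wine": trade}
foreign_consumption = {"cloth": trade, "wine": foreign_output["wine"] - trade}

print("Home:    autarky", home_autarky, "-> with trade", home_consumption)
print("Foreign: autarky", foreign_autarky, "-> with trade", foreign_consumption)
# Both countries end up with more of both goods than under autarky. Nothing
# in the model, however, says how those gains are shared within each country,
# which is the distributional point the essay turns to next.
```

The aggregate gain is real in this toy model, but, as the next paragraphs argue, the model is silent about who inside each country captures it.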
Yet this consensus on the benefits of economic liberalization, shared by elites in both political parties, is not immune from criticism. Built into all the existing trade models is the conclusion that trade liberalization, while boosting aggregate income, will have potentially adverse distributional consequences—it will, in other words, create winners and losers. One recent study estimated that import competition from China was responsible for the loss of between two million and 2.4 million U.S. jobs from 1999 to 2011.
The standard response from trade economists is to argue that the gains from trade are sufficient to more than adequately compensate the losers, ideally through job training that will equip them with new skills. And thus, every major piece of trade legislation has been accompanied by a host of worker-retraining measures, as well as a phasing in of new rules to allow workers time to adjust.
In practice, however, this adjustment has often failed to materialize. The U.S. government has run 47 uncoordinated federal job-retraining programs (since consolidated into about a dozen), in addition to countless state-level ones. These have collectively failed to move large numbers of workers into higher-skilled positions. This is partly a failure of implementation, but it is also a failure of concept: it is not clear what kind of training can transform a 55-year-old assembly-line worker into a computer programmer or a Web designer. Nor does standard trade theory take account of the political economy of investment. Capital has always had collective-action advantages over labor, because it is more concentrated and easier to coordinate. This was one of the early arguments in favor of trade unionism, which has been severely eroded in the United States since the 1980s. And capital’s advantages only increase with the high degree of capital mobility that has arisen in today’s globalized world. Labor has become more mobile as well, but it is far more constrained. The bargaining advantages of unions are quickly undermined by employers who can threaten to relocate not just to a right-to-work state but also to a completely different country.
Labor-cost differentials between the United States and many developing countries are so great that it is hard to imagine what sorts of policies could ultimately have protected the mass of low-skilled jobs. Perhaps not even Trump believes that shoes and shirts should still be made in America. Every industrialized nation in the world, including those that are much more committed to protecting their manufacturing bases, such as Germany and Japan, has seen a decline in the relative share of manufacturing over the past few decades. And even China itself is beginning to lose jobs to automation and to lower-cost producers in places such as Bangladesh and Vietnam.
And yet the experience of a country such as Germany suggests that the path followed by the United States was not inevitable. German business elites never sought to undermine the power of their trade unions; to this day, wages are set across the German economy through government-sponsored negotiations between employers and unions. As a result, German labor costs are about 25 percent higher than their American counterparts. And yet Germany remains the third-largest exporter in the world, and the share of manufacturing employment in Germany, although declining, has remained consistently higher than that in the United States. Unlike the French and the Italians, the Germans have not sought to protect existing jobs through a thicket of labor laws; under Chancellor Gerhard Schröder’s Agenda 2010 reforms, it became easier to lay off redundant workers. And yet the country has invested heavily in improving working-class skills through its apprenticeship program and other active labor-market interventions. The Germans also sought to protect more of the country’s supply chain from endless outsourcing, connecting its fabled Mittelstand, that is, its small and medium-size businesses, to its large employers.
In the United States, in contrast, economists and public intellectuals portrayed the shift from a manufacturing economy to a postindustrial service-based one as inevitable, even something to be welcomed and hastened. Like the buggy whip makers of old, manufacturing workers would supposedly retool themselves, becoming knowledge workers in a flexible, outsourced, part-time new economy, where their new skills would earn them higher wages. Despite occasional gestures, however, neither political party took the retooling agenda seriously as the centerpiece of a necessary adjustment process, nor did either invest in social programs designed to cushion the working class as it tried to adjust. And so white workers, like African Americans in earlier decades, were on their own.
A voter arrives to cast their ballot in the Wisconsin presidential primary election at a voting station in Milwaukee, Wisconsin, April 2016.
The first decade of the century could have played out very differently. The Chinese today are not manipulating their currency to boost exports; if anything, they have been trying recently to support the value of the yuan in order to prevent capital flight. But they certainly did manipulate their currency in the years following the Asian financial crisis of 1997–98 and the dot-com crash of 2000–2001. It would have been entirely feasible for Washington to have threatened, or actually imposed, tariffs against Chinese imports back then in response. This would have entailed risks: consumer prices would have increased, and interest rates would have risen had the Chinese responded by not buying U.S. debt. Yet this possibility was not taken seriously by U.S. elites, for fear that it would start a slide down the slippery slope of protectionism. As a result, more than two million jobs were lost in the ensuing decade.
A WAY FORWARD?
Trump may have fastened onto something real in American society, but he is a singularly inappropriate instrument for taking advantage of the reform moment that this electoral upheaval represents. You cannot unwind 50 years of trade liberalization by imposing unilateral tariffs or filing criminal indictments against American multinationals that outsource jobs. At this point, the United States’ economy is so interconnected with that of the rest of the world that the dangers of a global retreat into protectionism are all too real. Trump’s proposals to abolish Obamacare would throw millions of working-class Americans off health insurance, and his proposed tax cuts would add more than $10 trillion to the deficit over the next decade while benefiting only the rich. The country does need strong leadership, but by an institutional reformer who can make government truly effective, not by a personalistic demagogue who is willing to flout established rules.
Nonetheless, if elites profess to be genuinely concerned about inequality and the declining working class, they need to rethink some of their long-standing positions on immigration, trade, and investment. The intellectual challenge is to see whether it is possible to back away from globalization without cratering both the national and the global economy, with the goal of trading a little aggregate national income for greater domestic income equality.
Clearly, some changes are more workable than others, with immigration being at the top of the theoretically doable list. Comprehensive immigration reform has been in the works for more than a decade now and has failed for two reasons. First, opponents object to “amnesty,” that is, giving existing undocumented immigrants a path to citizenship. But the second reason has to do with enforcement: critics point out that existing laws are not enforced and that earlier promises to enforce them have not been kept.
The idea that the government could deport 11 million people from the country, many of them with children who are U.S. citizens, seems highly implausible. So some form of amnesty appears inevitable. Immigration critics are right, however, that the United States has been very lax in enforcement. Doing this properly would require not a wall but something like a national biometric ID card, heavy investment in courts and police, and, above all, the political will to sanction employers who violate the rules. Moving to a much more restrictive policy on legal immigration, in which some form of amnesty for existing immigrants is exchanged for genuine efforts to enforce new and tougher rules, would not be economically disastrous. When the country did this before, in 1924, the way was paved, in certain respects, for the golden age of U.S. equality in the 1940s and 1950s.
It is harder to see a way forward on trade and investment, other than not ratifying existing deals such as the Trans-Pacific Partnership—which would not be extremely risky. The world is increasingly populated with economic nationalists, and a course reversal by Washington—which has built and sustained the current liberal international system—could well trigger a tidal wave of reprisals. Perhaps one place to start is to figure out a way to persuade U.S. multinationals, which currently are sitting on more than $2 trillion in cash outside the United States, to bring their money home for domestic investment. U.S. corporate tax rates are among the highest in the Organization for Economic Cooperation and Development; reducing them sharply while eliminating the myriad tax subsidies and exemptions that corporations have negotiated for themselves is a policy that could find support in both parties.
Another initiative would be a massive campaign to rebuild American infrastructure. The American Society of Civil Engineers estimates that it would take $3.6 trillion to adequately upgrade the country’s infrastructure by 2020. The United States could borrow $1 trillion while interest rates are low and use it to fund a massive infrastructure initiative that would create huge numbers of jobs while raising U.S. productivity in the long run. Hillary Clinton has proposed spending $275 billion, but that number is too modest.
But attempts to accomplish either goal would bump into the more routine dysfunctions of the American political system, where vetocracy prevents either tax reform or infrastructure investment. The American system makes it too easy for well-organized interest groups to block legislation and to “capture” new initiatives for their own purposes. So fixing the system to reduce veto points and streamline decision-making would have to be part of the reform agenda itself. Necessary changes should include eliminating both senatorial holds and the routine use of the filibuster and delegating budgeting and the formulation of complex legislation to smaller, more expert groups that can present coherent packages to Congress for up-or-down votes.
This is why the unexpected emergence of Trump and Sanders may signal a big opportunity. For all his faults, Trump has broken with the Republican orthodoxy that has prevailed since Ronald Reagan, a low-tax, small-safety-net orthodoxy that benefits corporations much more than their workers. Sanders similarly has mobilized the backlash from the left that has been so conspicuously missing since 2008.
“Populism” is the label that political elites attach to policies supported by ordinary citizens that they don’t like. There is of course no reason why democratic voters should always choose wisely, particularly in an age when globalization makes policy choices so complex. But elites don’t always choose correctly either, and their dismissal of the popular choice often masks the nakedness of their own positions. Popular mobilizations are neither inherently bad nor inherently good; they can do great things, as during the Progressive era and the New Deal, but also terrible ones, as in Europe during the 1930s. The American political system has in fact suffered from substantial decay, and it will not be fixed unless popular anger is linked to wise leadership and good policies. It is still not too late for this to emerge.
Israel Among the Nations
In 1996, Ehud Barak, who was then Israel’s foreign minister and would later serve as prime minister, characterized Israel as “a modern and prosperous villa in the middle of the jungle.” Twenty years later, as political turmoil and violence engulf the Middle East, that harsh metaphor captures better than ever the way most Israelis see their country and its place in the region. Their standard of living has never been higher. Their country’s economy is robust, and Israel’s entrepreneurial spirit remains the envy of the world. In 2015, Israel ranked as the planet’s fifth-happiest country on the Organization for Economic Cooperation and Development’s Better Life Index, topped only by Denmark, Finland, Iceland, and Switzerland. In its first half century of existence, Israeli soldiers fought a war virtually every decade against well-armed conventional Arab armies. Today, the threat of such a war has vastly diminished, and the Israeli military has never been stronger, both in absolute terms and relative to its neighbors.
Now, however, it is Israeli civilians, not soldiers, who are the primary targets of Israel’s enemies. They are vulnerable to rockets fired by Hamas from Gaza and by Hezbollah from Lebanon, which have killed over 100 Israelis since 2004. And in the past year, new forms of violence have emerged, as Palestinians have targeted Israelis in over 150 seemingly uncoordinated stabbings and more than 50 attacks in which drivers have intentionally rammed pedestrians with their cars. Israel’s citizens feel more vulnerable in a personal sense, walking their streets, than they have since perhaps the 1948 War of Independence. Even during the second intifada, the Palestinian revolt that lasted from 2000 until 2005 and claimed the lives of more than 1,000 Israeli civilians, Jews believed they knew where it was safe to go and where it wasn’t. That’s not true today: in a recent poll conducted by the Israel Democracy Institute, nearly 70 percent of Israeli Jews surveyed said they greatly or moderately feared that they or people close to them would be harmed by the wave of violence that has swept the country since last October.
Meanwhile, chaos appears to loom across almost every border. A bloody and devastating civil war rages in Syria, where the regime of Bashar al-Assad and the jihadists of the Islamic State (also known as ISIS) seem intent on outdoing each other in brutality. Neighboring Jordan has long served as a buffer of sorts to Israel’s east, but it is now struggling under the burden of hosting more than a million Syrian refugees. And ISIS and other jihadist organizations roam the virtual no man’s land of the Sinai Peninsula, which the somewhat wobbly Egyptian government has struggled to secure.
Confronted with threats at home and disorder all around, many Israelis have come to feel that the idealistic aspirations of earlier eras—all those dreams of peaceful coexistence with the Palestinians and with the greater Arab world—were naive at best and profoundly misplaced at worst. A sense of bitterness, resignation, and hopelessness now prevails. Many Israeli politicians seem to see greater advantage in stoking, rather than countering, such sentiments. For example, rather than point to the benefits that peace agreements and negotiated territorial concessions have produced, Israeli Prime Minister Benjamin Netanyahu emphasizes how other territorial withdrawals—ones that were unilateral and unaccompanied by peace agreements—have resulted in further attacks against Israel.
Yet inside Israel’s defense establishment, headquartered at the Kirya military complex in Tel Aviv, the picture is more nuanced. Israel’s security chiefs share their compatriots’ sense that the Middle East has become chaotic and that today’s threats are more diffuse and inchoate than those Israel used to face. But these officials also recognize that their country is far from defenseless and that the threat of a conventional conflict has virtually disappeared. As the army’s recently leaked National Intelligence Estimate for 2016 concluded, Israel faces no current threat of war and only a low probability of war in the coming year. In fact, the analysts who prepared the document argue that the turmoil sweeping the Middle East may even have improved Israel’s strategic position.
The disconnect between public attitudes, political rhetoric, and military risk assessments reflects a kind of sensory overload. Israeli strategic planners can agree on a long list of threats and challenges but not on how to prioritize them. Like Israel’s political leaders, they suffer from a deep sense of strategic confusion. So far, their response has been to hunker down and ride out the turbulence. That is a natural reaction. But it’s also a risky one, which could lead Israel to forgo the kind of subtle, clever approaches it has adopted in the past when faced with complex threats. For all the danger Israel faces today, the current turmoil has also created real opportunities for Israel to improve its strategic position. But these will come to naught unless the government can see them clearly—and find the strength to take advantage of them.
FRIENDS OLD AND NEW
Although the chaos and violence currently tearing apart the Middle East is deeply unsettling, the changes that have swept the region in recent years have actually led to a closer alignment and stronger relations between Israel and its only official partners in the Arab world, Egypt and Jordan. The peace treaty that Egypt and Israel signed in 1979 removed Israel’s single largest military threat and effectively ended the era of all-out war between the Arabs and the Israelis. It remains one of the most important contributors to Israel’s security, since it ensures that the country will not be attacked by multiple armies on multiple fronts simultaneously, as it was in 1948, 1967, and 1973. Despite the tumult of the 2010–11 Arab uprisings, including an Egyptian revolution that briefly brought the anti-Zionist Muslim Brotherhood to power, the peace treaty has proved durable and critical for both countries. Even the Islamist Egyptian president Mohamed Morsi acknowledged the treaty’s importance and never sought to challenge or abrogate it. When the military deposed Morsi in July 2013, Egyptian-Israeli ties grew stronger than ever, with both sides firmly aligning against Hamas in Gaza, which is sandwiched between them. Egyptian and Israeli national security interests have converged to such a degree that in 2014, when Hamas rocket attacks provoked an intense 50-day Israeli military campaign in Gaza, Egypt clearly sided with Israel and even waved off U.S. efforts to bring an early halt to the fighting.
In the post–Arab Spring period, Israel has also drawn closer to Jordan, the country with which it shares its longest border. The open cooperation facilitated by the peace treaty that the two countries signed in 1994 has proved crucial to Israel’s domestic and regional security interests. Jordan has played an instrumental role in helping defuse tensions at the Jerusalem holy site known to Muslims as Haram al-Sharif, or the Noble Sanctuary, and to Jews as the Temple Mount. Jordan is also helping absorb some spillover from the unrest roiling Iraq and Syria. Security cooperation between Israel and Jordan is flourishing, particularly since both share a common interest in securing Jordan’s border with Syria and in countering Islamists across the region.
Farther afield, Israel has also made some new friends and strengthened ties with old ones. In a sense, it has developed a new version of the “periphery doctrine” that the country pursued in the 1950s, when it established warm ties with important non-Arab states on the outer edges of the Middle East, such as Ethiopia, Iran, and Turkey. Since Israel’s strategic relationship with Turkey broke down in 2010, Israel has forged new partnerships with Cyprus and Greece, both bitter foes of the Turkish government. Israel has also developed closer ties with a number of African countries, which has allowed it to increase its influence on the continent and to interdict arms flows to militants in the Sinai and Gaza. And India—which, as a leader of the Non-Aligned Movement, once kept Israel at arm’s length—has developed extensive commercial, military, and diplomatic ties with the Jewish state in recent years.
Relations with Russia have also improved markedly: indeed, Netanyahu and Russian President Vladimir Putin clearly enjoy a better relationship with each other than either does with U.S. President Barack Obama. Washington and Moscow have argued viciously over the civil war in Syria; Israel, in contrast, appears to have established some clear rules of the road with Russia for operations there. According to press reports, Russia even temporarily transferred some military officers to Israel’s military headquarters in Tel Aviv in order to improve coordination and prevent accidental clashes in the skies above Syria.
UNCLEAR AND PRESENT DANGERS
Despite such gains, Israel still faces many threats and potential dangers, and the country’s leaders can’t seem to agree on which are most pressing. President Reuven Rivlin, currently one of the country’s most popular and widely respected officials, recently suggested that ISIS might be the greatest present danger. Yet few in Israel’s defense establishment—which comprises Israel’s military, intelligence, and national security agencies—agree with that position. They largely see ISIS as an indirect problem, one that represents a bigger threat to regional stability and the viability of Israel’s neighbors than it does to the country’s own security.
The more direct and urgent danger, most believe, comes from Iran and its two main militant allies: Hamas and Hezbollah. Indeed, in January, then Defense Minister Moshe Yaalon declared that he would rather face ISIS in the Golan Heights than see Iranian troops or their proxies occupy that area. Israeli leaders see Iran as a rising revisionist power and have watched nervously as it has built significant influence, if not quite dominance, in Iraq, Lebanon, Syria, and Yemen.
Yet underneath this general consensus, Israeli leaders don’t agree on the precise nature of the danger Iran represents. In recent years, Netanyahu has warned that Iran (or at least a nuclear-armed Iran) could constitute an “existential threat” to Israel. Yet that formulation has been vigorously disputed even by other security hawks, such as Barak—despite the fact that Barak reportedly advocated a military strike on Iran’s nuclear facilities as recently as 2012. To them, a nuclear-armed Iran would represent an intolerable threat but not an existential one.
Netanyahu continues to object to the deal Iran struck last year with the United States and other major powers that requires Iran to significantly curtail its nuclear program in exchange for relief from international sanctions. Yet many of Israel’s security professionals have adopted the view that the agreement, although flawed, has pushed the Islamic Republic further away from acquiring a bomb—even further, perhaps, than an Israeli military strike would have. They believe that Tehran has significantly reduced its stockpile of enriched uranium and the number of centrifuges it operates and that Iran’s ability to produce plutonium has been eliminated, for the time being.
Still, virtually all Israeli officials view Iran as implacably hostile and expansionist. And Israel has taken it upon itself to act as the most stringent international monitor of Iran’s compliance with the nuclear agreement, vigilantly pointing out every infraction. But Israel is struggling to determine what, if anything, to do with the additional time—somewhere between five and 15 years—that the nuclear agreement with Iran has put on the clock.
YOU'LL NEVER WALK ALONE
For many decades, Israel enjoyed a high degree of freedom when considering how to respond to the various threats it faced. David Ben-Gurion, the country’s founding father, pursued a delicate strategy of “nonidentification,” courting support from global powers but avoiding the constraints of formal alliances. Today, Israelis still ferociously cling to this idea of independence and to the need for the country to be able to “defend itself, by itself,” as the popular phrase goes.
Yet the reality has long since shifted. Like other medium-size powers, Israel cannot match every possible threat by itself. Most Israelis recognize that truth, and the state has grown increasingly dependent on its only reliable friend, the United States, with which it has developed a de facto strategic partnership over the last 30 years or so.
Israel’s lack of complete independence was demonstrated most dramatically during the standoff between Netanyahu and Obama over Iran. Israel had mobilized its formidable military and intelligence resources to prevent Iran from developing a nuclear breakout capacity. Even as the United States and other great powers initiated talks with Iran, Israel’s air force stepped up its training, and its officials began planning a preventive attack. But faced with stiff opposition from the Obama administration, Israel’s government ultimately stood down. Israel had been deterred—not by Tehran but by Washington.
Still, that episode has created little if any new distance between the two allies; on the contrary, the Israelis have sought to move even deeper into the American embrace. Despite the sour personal relations between Netanyahu and Obama, their two countries are now negotiating a new ten-year military assistance program that will replace and expand an expiring agreement that has ensured over $3 billion in annual U.S. military assistance for the past decade. And it is almost certain that whoever moves into the White House next year will seek to improve U.S. relations with Netanyahu’s government.
A FORMAL ALLIANCE
Improving relations with Washington and perhaps changing the structure of the U.S.-Israeli relationship represent one of the best ways for Israel to take advantage of this uncertain moment—not by merely seeking a return to the state of affairs before Obama but by forging an even stronger bond with the United States. Israelis regularly refer to the Americans as allies. Yet the United States and Israel have no formal, treaty-based alliance. There have been times when Israel seriously contemplated pushing for such an arrangement. But in each instance, it decided against doing so, fearing that the price Washington would likely demand—territorial concessions to the Arabs—would prove too high.
Today, Israel’s ambivalence stems from different factors. First, the Israelis fear that an alliance with the United States would force them to relinquish even more of their military independence, potentially preventing them from conducting certain military actions, ones along the lines of the 2007 Israeli air strike against an incipient Syrian nuclear facility, which the Israelis undertook after extensive consultations with the United States but without American participation. An alliance would also challenge the idea of Israeli self-reliance, which is central to the country’s defining ethos.
But as the dispute over Iran’s nuclear program showed, when push comes to shove, Israel is already willing to constrain itself and accept a high level of dependence in order to protect its close relationship with the United States. And other U.S. allies, such as Turkey, have initiated military actions when they believed their national interests were threatened, regardless of Washington’s views. A formal U.S.-Israeli alliance, therefore, would not necessarily have a significant practical effect on Israeli freedom of maneuver. Israel’s other major reservation regarding an alliance stems from a belief that the United States backs Israel partly because the Americans know that the Israelis will never ask U.S. soldiers to fight on Israel’s behalf. But a formal alliance would still allow Israel to maintain its commitment to not ask for American boots on the ground.
An alliance would offer significant benefits to Israel. First and foremost, it would provide an ironclad security guarantee: any attack on Israel would be met and rebuffed by the United States. During the Iran imbroglio, Obama repeatedly pledged that the United States “will always have Israel’s back.” But he never specifically, publicly promised to protect Israel against an Iranian attack. A treaty with Washington would ensure a lasting commitment of exactly that kind.
A formal alliance would also allow the Israelis to stop worrying, as they frequently do, about the contingent nature of their partnership with the United States. How much longer, they wonder, can Jerusalem safely rely on Washington to maintain their informal, quasi alliance? Many Israelis worry that the two countries will drift further apart as each undergoes demographic, political, and social changes. This may be happening already. A poll recently conducted by the Pew Research Center indicated that each U.S. generation is less sympathetic toward Israel than its predecessor. There is no guarantee that the strong pro-Israel consensus that has long been a bipartisan feature of U.S. politics will endure forever. Now is therefore the time for Israel to lock in the existing benefits of its relationship with Washington.
TAKE THE INITIATIVE
Closer to home, a second extremely important opportunity for Israel to consider involves its relationships with a number of Arab states that have historically wanted nothing to do with it. In ways unforeseen and largely unintended, Obama may have made a greater contribution to improving these relationships than he ever thought possible. His efforts to pivot the United States away from the Middle East while negotiating with Iran highlighted a number of interests that Israel shares with the Sunni Arab countries—the very same states Israel battled ferociously during the first 50 years of its existence.
In the last decade, the centuries-old Sunni-Shiite divide has grown into a chasm, fueled by—and, in turn, fueling—the rivalry between the Sunni Arab powers and an Iranian-led Shiite bloc. The sectarian split has replaced the region’s traditional fault line—the Arab-Israeli conflict—and has begun to reorder the Middle East in surprising ways. Israel and the Sunni Arab states now more clearly share a chief foe, in Iran, and a sense of concern over U.S. retrenchment.
Israel should leverage this change to shape a better future for itself among its neighbors. Some Israelis worry that the Sunni Arab states may be too unstable or unreliable to act as partners. But Israel should seize on their sense of weakness and their openness to explore a formal peace initiative.
In September 1967, following the Arabs’ devastating defeat in the Six-Day War—during which Israel captured all of Jerusalem and the west bank of the Jordan River—the Arab League convened in Khartoum, Sudan, and issued its now-infamous declaration of what came to be known as “the three no’s”: no peace with Israel, no recognition of Israel, and no negotiations with Israel. Israel responded by casting itself as the reasonable party, willing to trade territory for peace, and took every opportunity to portray the Arabs as inexorably hostile and belligerent.
But the Arab wall of rejection cracked a decade later, when Egyptian President Anwar al-Sadat traveled to Jerusalem and made peace. And the wall arguably crumbled altogether in 2002, when the Arab League collectively endorsed a proposal put forward by Saudi Crown Prince Abdullah (who was king from 2005 until his death last year) that offered Israel the prospect of peace, security, and normal relations in exchange for a complete Israeli withdrawal to the pre-1967 borders, a move the Arab states see as the only way to begin resolving the Israeli-Palestinian conflict.
The Israelis had ample cause for skepticism. First, the timing was poor. One day prior to the Arab League’s endorsement of the plan, Israel suffered a massive terrorist attack in which 30 Israelis in the coastal city of Netanya were killed at a Passover Seder; the bloodshed left the country in no mood to negotiate with its enemies. More substantively, the Israelis doubted that the Arabs could ever be flexible enough on their demand for a “right of return” for Palestinian refugees. And the Israelis also believed that the Arabs were only pretending to reach out to them in order to curry favor with Washington so as to gain leverage in the run-up to an anticipated U.S. invasion of Iraq, which the Arab states opposed.
But the Arab Peace Initiative has proved to be more than a tactical ploy: for the past 14 years, the Arab League has stood by it, even in the face of intense public anger in the Arab and wider Muslim world over Israel’s military actions in Lebanon and Gaza. On the “right of return,” the Arabs have called for “a just and agreed solution,” suggesting there may be some room for flexibility. And in 2013, the league even made modifications to the plan to make it more attractive to Israel: for example, the proposal now incorporates the notion of negotiated land swaps between Israel and the Palestinians, which shows that it is not a take-it-or-leave-it proposal. Emissaries from Egypt and Jordan have traveled to Israel on behalf of the Arab League to allay Israeli apprehensions. Prince Turki al-Faisal, a former head of Saudi intelligence and former ambassador to the United States, has met publicly with prominent Israelis and reached out to the Israeli public through interviews with various Israeli media outlets. Throughout, however, Turki has made it clear that there can be no progress in broader Arab-Israeli relations without addressing the Palestinian issue.
The Israeli government has yet to offer an official response to the plan, and Israel’s leaders have essentially ignored it. There have been a few exceptions: Dan Meridor, a former Likud deputy prime minister, and Yair Lapid, who leads the center-right party Yesh Atid, have both supported the idea of considering the Arab initiative under certain conditions. And a number of former chiefs of the Mossad, the Israeli foreign intelligence service, including Danny Yatom and Meir Dagan, have decried Israel’s lack of a positive response. But for the most part, the Arab plan has been met with Israeli silence. After decades of bemoaning Arab rejectionism, Israel now finds itself branded the rejectionist party itself—by the Arabs.
The staunchest Israeli critics of the Arab Peace Initiative argue that given the chaos and instability plaguing the region, it’s not even clear how long the current Sunni Arab governments will stay in power: Why negotiate with them when they are so weak? Critics also point out that the Palestinians seem unwilling or unable to conclude a deal—so why give them a veto over Israel’s regional relations? The answer is that talking with the Arabs might have strategic benefits even if it fails to unlock the stalemate with the Palestinians. Better contacts between Israel and the Sunni Arab states, particularly Saudi Arabia, could help forge a more united front against Iran. Israel could test the Arab plan’s sincerity and in doing so open up a channel to the broader Arab world by expressing a desire to negotiate with Saudi Arabia and other Arab League states, while maintaining certain Israeli reservations about some of the plan’s elements. As one senior Israeli official recently told me, “Never before have we been offered so much while being asked for so little in return.”
NOTHING VENTURED...
If Israel prefers not to deal with the Arab Peace Initiative, then it should consider offering up its own regional peace initiative, which Netanyahu has declined to do. Many Israelis, even within the prime minister’s camp, have been frustrated by their leader’s passivity on this front. Indeed, Netanyahu’s tenure has been defined not by right-wing extremism, as many of Israel’s detractors claim, but by risk aversion. In his more than seven years in power, Netanyahu has neglected to articulate a vision—much less offer a clear plan—for how Israel could achieve peace and consolidate its security and economic gains. Given the narrow right-wing base on which his government rests, Netanyahu is understandably reluctant to hint at the types of concessions he would be prepared to make for peace. But in adopting a wait-and-see attitude toward the political changes that are roiling the Middle East, Israel is forfeiting a chance to help set the international agenda in a way that would be favorable to it.
Every previous Israeli prime minister has recognized that when it comes to statecraft, Israel can play either offense (initiating peace negotiations on its own terms) or defense (resisting attempts by its friends and adversaries alike to force it to the table on terms Israel dislikes). Offense—taking the battle to its adversaries—is far more consonant with the traditional Israeli political ethos. Israel would gain considerable support from its friends and allies by outlining a vision for peace and an approach toward realizing it. And the country will continue to pay a price if it fails to do so.
Israelis rightly point out that their conflict with the Arabs no longer defines the region’s politics. But that condition will not last forever: an almost inevitable future outbreak of violence in Gaza, the West Bank, or Lebanon will surely return the world’s attention to Israel, and the major powers will once again call on it to try to make concessions. What is more, while Israel sits on its hands, the other parties to the conflict are pushing forward with their own agendas. Israel’s friends, including the United States, are weighing plans to propose new peace efforts before the end of this year. Meanwhile, Palestinian officials are seeking new ways to confront or isolate Israel, by gaining ever more official recognition at the UN and by mobilizing international boycotts of Israeli goods and scholarship.
By outlining a plan for peace now, precisely when the Middle East is experiencing unrest and turmoil, Israel has an opportunity to explore the possibility of new relationships in its neighborhood and better ones in the rest of the world. Israel ought to apply to its foreign relations the same innovative, entrepreneurial spirit that has allowed the country to thrive in the technological and military realms. Laying out a vision would not imply a naive denial of harsh realities. Instead, Israel would improve its standing by deciding, after many years of inaction, to simply try.
Israel’s Second-Class Citizens
When the world focuses on the Arab-Israeli crisis today, the plight of the 4.6 million Palestinians living in the Gaza Strip and the West Bank gets most of the attention. But another pressing question haunts Israeli politics: the status and future of Israel’s own Arab citizens, who number around 1.7 million and make up around 21 percent of its population. Over the past few decades, Arabs in Israel have steadily improved their economic lot and strengthened their civil society, securing a prominent place in the country’s politics in the process. But since 2009, when Benjamin Netanyahu began his second term as prime minister, they have also seen their rights erode, as the government has taken a number of steps to disenfranchise them. Israeli policymakers have long defined their state as both Jewish and democratic, but these recent actions have shown that the government now emphasizes the former at the expense of the latter.
This onslaught has triggered a debate among the leaders of the Arab community in Israel over how to respond. One camp wants Arab citizens to deepen their integration into mainstream society and join forces with the Israeli left to push for equality on the national stage. The other urges Arabs to withdraw from national politics altogether, creating autonomous cultural, educational, and political institutions instead. At the moment, Arab political leaders seem to favor the former approach. But the best strategy would be for Arabs to synthesize these competing visions into a unified program: one that calls on the Israeli government to integrate Israel’s Arab citizens into existing political structures even as it demands greater autonomy in such areas as educational and cultural policy. The goal would be a system that grants Jews and Arabs equality in shared institutions and protects the rights of both to shape their own communities.
LEFT OUT AND MOVING UP              
Israel’s Arab citizens are the descendants of the approximately 150,000 Palestinians who stayed in the country following the expulsion of the majority of their brethren around the time of Israel’s establishment in 1948. Over the two decades that followed, Israel’s remaining Arabs suffered from high rates of poverty and low standards of living, had few opportunities for education, and were governed by martial law, which imposed various restrictions on them, from limitations on domestic and international travel to constraints on setting up new businesses. To prevent the emergence of independent Arab centers of power, the Israeli government also closely supervised the activity of Arab municipal and religious institutions and arrested many Arab activists.
Since 1966, when martial law was lifted, the situation of Arab citizens has improved greatly. Consider education: in 1960, only 60 Arab students were enrolled in Israeli universities; today, there are more than 20,000 Arab university students in the country, two-thirds of whom are female, and around 10,000 Arab Israelis study abroad. Living standards have also risen, as has the status of women, and a strong middle class has emerged.
In 2014, the most recent year for which data are available, 66 of the 112 towns in Israel with more than 5,000 residents had virtually all-Arab populations. And thanks to high birthrates and a young population—half of Israel’s Arab citizens are under the age of 20, whereas only 30 percent of Jewish Israelis are—the Arab Israeli population is likely to keep growing fast, with or without more support from the government. (Some Israeli officials have described the growing Arab population as a threat to the Jewish majority; in fact, since the Jewish population is also growing, it is likely that Arabs will continue to make up only around 20 percent of Israel’s population over the next three decades.)
In short, Arabs in Israel are wealthier, healthier, and more numerous than ever before. Yet by most measures of well-being, they still lag behind their Jewish counterparts. In 2013, the most recent year for which data are available, the median annual income of Israel’s Arab households was around $27,000; for Jewish households, it was around $47,000, nearly 75 percent higher. The infant mortality rate is more than twice as high among Arabs as it is among Jews. Arabs are also underrepresented in Israel’s bureaucracy and academic institutions, making up less than two percent of the senior faculty in the country’s universities. And Arabs remain deeply segregated from Israel’s Jewish population: 90 percent of Arabs live in almost exclusively Arab towns and villages, and with just a few exceptions, Arab and Jewish children attend separate schools. (Nevertheless, Arabs and Jews remain relatively open to integration: a 2015 survey by the Israeli sociologist Sammy Smooha found that more than half of Israel’s Arabs and Jews supported the idea of Arabs living in Jewish-majority neighborhoods.)
What is more, when it comes to government support in such areas as the allocation of land for new construction, financing for cultural institutions, and educational funding, Arabs suffer from ongoing discrimination, despite some recent progress. Arabs make up around 21 percent of Israel’s population, but according to the Mossawa Center, a nongovernmental organization that advocates for Israel’s Arab citizens, Arab communities receive only seven percent of government funds for public transportation, and only three percent of the Israeli Ministry of Culture and Sport’s budget is allocated for Arab cultural institutions; Arab schools are also significantly underresourced. (Toward the end of 2015, the Israeli government approved a five-year economic development program for Israel’s Arab community, worth up to $4 billion, that will increase funding for housing, education, infrastructure, transportation, and women’s employment. Although the plan represents a step in the right direction, the exact amount of funding that will be allocated to each of these areas remains unclear, as does the process by which its implementation will be monitored.) And then there is the fact that Israel defines itself along ethnonationalist lines that exclude the Arab minority—from a national anthem that famously describes the yearning of a Jewish soul for a homeland in Zion to a flag that displays a Star of David. In these ways, the Israeli government has maintained the dominance of the Jewish majority and denied Arabs genuine equality.
Arabs in Israel thus confront a frustrating confluence of factors: on the one hand, they enjoy a rising socioeconomic position; on the other, they face a government that in many respects has prevented them from achieving true equality. How they respond to this frustrating dynamic, and how the Israeli government reacts, will have an enormous impact on the future of Israeli society, politics, and security.
THE INTERNAL DIVIDE
Arabs in Israel are not politically monolithic, and their goals vary. Their civic organizations, political activists, and public intellectuals offer competing visions for both the community’s internal development and its relationship with the state.
Broadly speaking, however, their agendas tend to fall into one of two frameworks, each based on a different understanding of Arab Israelis’ split identity. The first—call it a “discourse of difference”—suggests that Arabs’ ethnocultural identity, rather than their Israeli citizenship, should be the starting point of their demands for change. By this logic, the Israeli government should empower Arabs to autonomously govern their own communities, by, for example, encouraging Arab officials to reform the curricula of Arab schools. The second—a “discourse of recognition”—takes Israeli citizenship, rather than Arab identity, as its starting point. This framework suggests that equality will be achieved when the state recognizes Arabs as equal Israeli citizens and equitably integrates them into existing institutions.
For now, the latter approach seems to be dominant among Arabs in Israel. But even across this divide, there are a number of areas of consensus. Arabs of all political tendencies tend to condemn the government’s current policies as segregationist and discriminatory; many also contend that the government’s professed commitments to democracy and to the Jewish character of the state are irreconcilable. Nor are these the only points on which most Arabs agree: around 71 percent of Arabs in Israel support a two-state solution to the Israeli-Palestinian conflict, according to a 2015 survey, and only 18 percent reject the coexistence of Arabs and Jews in Israel.
The various strains of Arab political thought were brought together in December 2006, when a group of Arab activists and intellectuals published a declaration, The Future Vision of the Palestinian Arabs in Israel, that sought to define Arabs’ relationship with the state and their hopes for the country’s future. The document, which I co-authored, called on the Israeli government to recognize its responsibility for the expulsion of Palestinians around the time of Israeli independence and to consider paying reparations to the descendants of the displaced; to grant Arab citizens greater autonomy in managing their cultural, religious, and educational affairs; to enshrine Arabs’ rights to full equality; and, perhaps most striking, to legally define Israel as a homeland for both Arabs and Jews—a direct challenge to the historically Jewish character of the state.
Ratified by the National Committee for the Heads of the Arab Local Authorities in Israel (a body that represents all of Israel’s Arabs), the document was embraced by the Arab public: a poll I conducted in 2008 with the sociologist Nohad Ali found that, despite their many differences, more than 80 percent of Arab Israelis supported its main proposals. In the years since its release, politicians representing some of Israel’s major Arab political parties have repeatedly called on the government to act on its demands. But Jewish leaders in the Israeli government, media, and academia have largely opposed the document. The board of the Israel Democracy Institute, a think tank, produced a statement in January 2007 arguing that the Future Vision report, as well as two other documents released by Arab activists in 2006, “den[ied] the very nature of Israel as a Jewish and democratic state” and declaring that the institute “reject[ed] this denial and its implication that there is an inescapable contradiction between the state’s Jewish and democratic nature.”
Bedouin children play in Umm el-Hiran, an unrecognized Bedouin village near the southern Israeli city of Beersheba, March 2016.
PARLIAMENTARY PREJUDICE
Arab-Jewish relations got even worse in the years after 2009, when Netanyahu returned to the premiership. Since then, the Israeli government has taken numerous steps to further hold back Arab citizens, from rules that limit the rights of Arabs to live in certain Jewish villages to a law that restricts the ability of Palestinians in the West Bank to obtain Israeli citizenship if they marry an Arab citizen of Israel. (Foreign Jews of any nationality, meanwhile, can become Israeli citizens without establishing family ties to Israelis.) In the Negev desert, home to most of Israel’s Bedouins, the government has introduced projects that aim to cement Jewish control of the land, by, for example, demolishing unrecognized Bedouin settlements and establishing planned Jewish towns in their place. More generally, the Netanyahu government has stepped up the official rhetoric affirming the need to strengthen the Jewish character of the state.
In March 2014, the Knesset passed a law raising the threshold for representation in the legislature from two percent to 3.25 percent of the popular vote. The move threatened to strip the four so-called Arab parties—Balad, Hadash, Ta’al, and the Islamic Movement in Israel’s southern branch—of their seats in the election of 2015. It was a reminder that the Israeli government’s anti-Arab policies derive as much from the calculation on the part of the Netanyahu government that weakening the political position of Arabs might keep left-wing parties from regaining power as from the prejudices of some Israeli officials.
Largely to prevent their exclusion from the Knesset, the Arab parties banded together in January 2015 to create the Joint List, a big-tent political party that ran on a single ticket in the election held that March. On election day, Netanyahu sought to boost Jewish turnout by making the racially charged claim that Arab voters were “streaming in droves to polling stations.” The Joint List was remarkably successful nevertheless. Some 82 percent of Israel’s Arab voters cast a ballot in support of it. With 13 seats, it emerged as the third-largest political party in the Knesset after Netanyahu’s Likud Party and the center-left Zionist Union. Even more impressive, the Joint List managed to increase turnout among Arab voters by seven percentage points, from 56.5 percent in the 2013 election to 63.5 percent in 2015. This surge suggests that Arabs in Israel have become more confident that their elected representatives can overcome their differences and act as an effective united force in the Israeli establishment—in short, that national politics offer a path toward change. At least when it comes to parliamentary representation, right-wing efforts to impede the progress of the country’s Arabs have not succeeded.
Rather than accept this show of strength, Netanyahu’s coalition responded with further measures meant to weaken Arabs’ political position. In November 2015, his government outlawed the northern branch of the Islamic Movement, an Islamist organization that has rallied a substantial portion of the Arab community around opposition to what it describes as Jewish threats to Muslim holy sites in Jerusalem. And in February of this year, after three Arab parliamentarians visited the families of Palestinians who were killed after attacking Israelis, Jewish lawmakers introduced a so-called suspension bill that would allow a three-fourths majority of the Knesset to eject any representative deemed to have denied the Jewish character of the state or incited violence. The Arab population views the proposed law as a direct attempt to sideline their representatives on the national stage. “Despite the delegitimization campaign against us and the raising of the electoral threshold, we decided to remain part of Israeli politics,” Ayman Odeh, an Arab parliamentarian who heads the Joint List, said during a debate on the proposed rule in the Knesset in February. “Yet we continue to be harassed.”
CITIZENS, UNITED
These developments have intensified the search for a new approach among Arab elites. Two main alternatives have emerged. The first, headed by Odeh, argues that Arab Israelis should work with the Israeli left to unseat the Netanyahu government and replace it with a center-left coalition that is willing to resume the peace talks with the Palestinians and consider major steps to advance the equality and integration of Arab citizens. The second, led by the northern branch of the Islamic Movement, as well as those Knesset members on the Joint List who represent Balad, opposes forming a coalition with the Israeli left. Both camps support the creation of a separate political body to represent Arab citizens, but whereas the former believes that such a body should supplement Arab voters’ current representation in the Knesset, the latter believes it should replace it.
These competing platforms have split the Arab public. In the 2015 survey conducted by the sociologist Smooha, 76 percent of Arab Israelis polled supported the Joint List’s cooperation with Jewish parties in the Knesset. But 33 percent of Arab respondents voiced support for a boycott of Knesset elections; 19 percent supported the use of any means, including violence, to secure equal rights; and 54 percent said that a domestic intifada would be justified if the situation of Arabs does not substantially improve.
The future of the Arabs in Israel depends in part on their ability to overcome these internal divisions, which have hindered the ability of the Arab leadership to achieve progress. Disagreement among Arab leaders as to whether a directly elected Arab political institution should replace or supplement Arabs’ representation in the Knesset, for example, has so far left the Arab population without an elected body of its own. In fact, it should be possible to synthesize these competing visions into a unified program that pushes for equal representation in existing institutions and greater autonomy when it comes to educational and cultural policy. No matter what shape such a platform takes, however, it should commit Arab activists to nonviolence, and it should clearly demand that the Israeli government abolish discrimination in the allocation of state resources. Finally, since broad support for Arabs’ demands for change will make them more effective, Arabs should invite Jews in Israel, Jewish organizations outside the country, Arabs and Palestinians in the region, and others in the international community that are sympathetic to their cause to endorse the platform.
But in many ways, the future of the Arabs in Israel hinges on developments over which they have little control. The first is how the Netanyahu government and its successors manage Israel’s conflict with the Palestinians in the Gaza Strip and the West Bank: whereas open violence between Israel and the Palestinians tends to exacerbate anti-Arab sentiment among Israel’s Jewish majority, a solution to the conflict could set the stage for reconciliation among Arabs and Jews in Israel. The second, of course, is how the Israeli government treats its own Arab citizens. Regardless of the state’s choices, however, Arabs in Israel can still shape their own fate—but that will require settling on a unified political program.
0 notes
sufredux · 5 years
Text
Israel’s Second-Class Citizens
When the world focuses on the Arab-Israeli crisis today, the plight of the 4.6 million Palestinians living in the Gaza Strip and the West Bank gets most of the attention. But another pressing question haunts Israeli politics: the status and future of Israel’s own Arab citizens, who number around 1.7 million and make up around 21 percent of its popu­lation. Over the past few decades, Arabs in Israel have steadily improved their economic lot and strengthened their civil society, securing a prominent place in the country’s politics in the process. But since 2009, when Benjamin Netanyahu began his second term as prime minister, they have also seen their rights erode, as the government has taken a number of steps to disenfranchise them. Israeli policymakers have long defined their state as both Jewish and democratic, but these recent actions have shown that the govern­ment now emphasizes the former at the expense of the latter.
This onslaught has triggered a debate among the leaders of the Arab commu­nity in Israel over how to respond. One camp wants Arab citizens to deepen their integration into mainstream society and join forces with the Israeli left to push for equality on the national stage. The other urges Arabs to withdraw from national politics altogether, creating autonomous cultural, educational, and political institutions instead. At the moment, Arab political leaders seem to favor the former approach. But the best strategy would be for Arabs to synthesize these competing visions into a unified program: one that calls on the Israeli government to integrate Israel’s Arab citizens into existing polit­ical structures even as it demands greater autonomy in such areas as edu­cational and cultural policy. The goal would be a system that grants Jews and Arabs equality in shared institutions and protects the rights of both to shape their own communities.
LEFT OUT AND MOVING UP             
Israel’s Arab citizens are the descen­dants of the approximately 150,000 Palestinians who stayed in the country following the expulsion of the majority of their brethren around the time of Israel’s establishment in 1948. Over the two decades that followed, Israel’s remain­ing Arabs suffered from high rates of poverty and low standards of living, had few opportunities for education, and were governed by martial law, which imposed various restrictions on them, from limitations on domestic and inter­national travel to constraints on setting up new businesses. To prevent the emergence of independent Arab centers of power, the Israeli government also closely supervised the activity of Arab municipal and religious institutions and arrested many Arab activists.
Since 1966, when martial law was lifted, the situation of Arab citizens has improved greatly. Consider education: in 1960, only 60 Arab students were enrolled in Israeli universities; today, there are more than 20,000 Arab uni­versity students in the country, two-thirds of whom are female, and around 10,000 Arab Israelis study abroad. Living standards have also risen, as has the status of women, and a strong middle class has emerged.
In 2014, the most recent year for which data are available, 66 of the 112 towns in Israel with more than 5,000 residents had virtually all-Arab popula­tions. And thanks to high birthrates and a young population—half of Israel’s Arab citizens are under the age of 20, whereas only 30 percent of Jewish Israelis are—the Arab Israeli popula­tion is likely to keep growing fast, with or without more support from the government. (Some Israeli officials have described the grow­ing Arab population as a threat to the Jewish majority; in fact, since the Jewish population is also growing, it is likely that Arabs will continue to make up only around 20 percent of Israel’s popula­tion over the next three decades.)
In short, Arabs in Israel are wealthier, healthier, and more numerous than ever before. Yet by most measures of well-being, they still lag behind their Jewish counterparts. In 2013, the most recent year for which data are available, the median annual income of Israel’s Arab households was around $27,000; for Jewish households, it was around $47,000, nearly 75 percent higher. The infant mortality rate is more than twice as high among Arabs as it is among Jews. Arabs are also underrepresented in Israel’s bureaucracy and academic institutions, making up less than two percent of the senior faculty in the country’s universities. And Arabs remain deeply segregated from Israel’s Jewish population: 90 percent of Arabs live in almost exclusively Arab towns and villages, and with just a few exceptions, Arab and Jewish children attend separate schools. (Nevertheless, Arabs and Jews remain relatively open to integration: a 2015 survey by the Israeli sociologist Sammy Smooha found that more than half of Israel’s Arabs and Jews supported the idea of Arabs living in Jewish-majority neighborhoods.)
What is more, when it comes to government support in such areas as the allocation of land for new construction, financing for cultural institutions, and educational funding, Arabs suffer from ongoing discrimination, despite some recent progress. Arabs make up around 21 percent of Israel’s population, but according to the Mossawa Center, a nongovernmental organization that advocates for Israel’s Arab citizens, Arab communities receive only seven percent of government funds for public transportation, and only three percent of the Israeli Ministry of Culture and Sport’s budget is allocated to Arab cultural institutions; Arab schools are also significantly underresourced. (Toward the end of 2015, the Israeli government approved a five-year economic development program for Israel’s Arab community, worth up to $4 billion, that will increase funding for housing, education, infrastructure, transportation, and women’s employment. Although the plan represents a step in the right direction, the exact amount of funding that will be allocated to each of these areas remains unclear, as does the process by which its implementation will be monitored.) And then there is the fact that Israel defines itself along ethnonationalist lines that exclude the Arab minority—from a national anthem that famously describes the yearning of a Jewish soul for a homeland in Zion to a flag that displays a Star of David. In these ways, the Israeli government has maintained the dominance of the Jewish majority and denied Arabs genuine equality.
Arabs in Israel thus confront a frustrating confluence of factors: on the one hand, they enjoy a rising socioeconomic position; on the other, they face a government that in many respects has prevented them from achieving true equality. How they respond to this frustrating dynamic, and how the Israeli government reacts, will have an enormous impact on the future of Israeli society, politics, and security.
THE INTERNAL DIVIDE
Arabs in Israel are not politically monolithic, and their goals vary. Their civic organizations, political activists, and public intellectuals offer competing visions for both the community’s internal development and its relationship with the state.
Broadly speaking, however, their agendas tend to fall into one of two frameworks, each based on a different understanding of Arab Israelis’ split identity. The first—call it a “discourse of difference”—suggests that Arabs’ ethnocultural identity, rather than their Israeli citizenship, should be the starting point of their demands for change. By this logic, the Israeli government should empower Arabs to autonomously govern their own communities, by, for example, encouraging Arab officials to reform the curricula of Arab schools. The second—a “discourse of recognition”—takes Israeli citizenship, rather than Arab identity, as its starting point. This framework suggests that equality will be achieved when the state recognizes Arabs as equal Israeli citizens and equitably integrates them into existing institutions.
By most measures of well-being, Israel's Arab citizens still lag behind their Jewish counterparts.
For now, the latter approach seems to be dominant among Arabs in Israel. But even across this divide, there are a number of areas of consensus. Arabs of all political tendencies tend to condemn the government’s current policies as segregationist and discriminatory; many also contend that the government’s professed commitments to democracy and to the Jewish character of the state are irreconcilable. Nor are these the only points on which most Arabs agree: around 71 percent of Arabs in Israel support a two-state solution to the Israeli-Palestinian conflict, according to a 2015 survey, and only 18 percent reject the coexistence of Arabs and Jews in Israel.
The various strains of Arab political thought were brought together in December 2006, when a group of Arab activists and intellectuals published a declaration, The Future Vision of the Palestinian Arabs in Israel, that sought to define Arabs’ relationship with the state and their hopes for the country’s future. The document, which I co-authored, called on the Israeli government to recognize its responsibility for the expulsion of Palestinians around the time of Israeli independence and to consider paying reparations to the descendants of the displaced; to grant Arab citizens greater autonomy in managing their cultural, religious, and educational affairs; to enshrine Arabs’ rights to full equality; and, perhaps most striking, to legally define Israel as a homeland for both Arabs and Jews—a direct challenge to the historically Jewish character of the state.
Ratified by the National Committee for the Heads of the Arab Local Authorities in Israel (a body that represents all of Israel’s Arabs), the document was embraced by the Arab public: a poll I conducted in 2008 with the sociologist Nohad Ali found that, despite their many differences, more than 80 percent of Arab Israelis supported its main proposals. In the years since its release, politicians representing some of Israel’s major Arab political parties have repeatedly called on the government to act on its demands. But Jewish leaders in the Israeli government, media, and academia have largely opposed the document. The board of the Israel Democracy Institute, a think tank, produced a statement in January 2007 arguing that the Future Vision report, as well as two other documents released by Arab activists in 2006, “den[ied] the very nature of Israel as a Jewish and democratic state” and declaring that the institute “reject[ed] this denial and its implication that there is an inescapable contradiction between the state’s Jewish and democratic nature.”
Bedouin children play in Umm el-Hiran, an unrecognized Bedouin village near the southern Israeli city of Beersheba, March 2016.
PARLIAMENTARY PREJUDICE
Arab-Jewish relations got even worse in the years after 2009, when Netanyahu returned to the premiership. Since then, the Israeli government has taken numerous steps to further hold back Arab citizens, from rules that limit the rights of Arabs to live in certain Jewish villages to a law that restricts the ability of Palestinians in the West Bank to obtain Israeli citizenship if they marry an Arab citizen of Israel. (Foreign Jews of any nationality, meanwhile, can become Israeli citizens without establishing family ties to Israelis.) In the Negev desert, home to most of Israel’s Bedouins, the government has introduced projects that aim to cement Jewish control of the land, by, for example, demolishing unrecognized Bedouin settlements and establishing planned Jewish towns in their place. More generally, the Netanyahu government has stepped up the official rhetoric affirming the need to strengthen the Jewish character of the state.
In March 2014, the Knesset passed a law raising the threshold for representation in the legislature from two percent to 3.25 percent of the popular vote. The move threatened to strip the four so-called Arab parties—Balad, Hadash, Ta’al, and the southern branch of the Islamic Movement in Israel—of their seats in the election of 2015. It was a reminder that the Israeli government’s anti-Arab policies derive as much from the calculation on the part of the Netanyahu government that weakening the political position of Arabs might keep left-wing parties from regaining power as from the prejudices of some Israeli officials.
Largely to prevent their exclusion from the Knesset, the Arab parties banded together in January 2015 to create the Joint List, a big-tent political party that ran on a single ticket in the election held that March. On election day, Netanyahu sought to boost Jewish turnout by making the racially charged claim that Arab voters were “streaming in droves to polling stations.” The Joint List was remarkably successful nevertheless. Some 82 percent of Israel’s Arab voters cast a ballot in support of it. With 13 seats, it emerged as the third-largest political party in the Knesset after Netanyahu’s Likud Party and the center-left Zionist Union. Even more impressive, the Joint List managed to increase turnout among Arab voters by seven percentage points, from 56.5 percent in the 2013 election to 63.5 percent in 2015. This surge suggests that Arabs in Israel have become more confident that their elected representatives can overcome their differences and act as an effective united force in the Israeli establishment—in short, that national politics offer a path toward change. At least when it comes to parliamentary representation, right-wing efforts to impede the progress of the country’s Arabs have not succeeded.
Regardless of the state’s choices, Arabs in Israel can still shape their own fate.
Rather than accept this show of strength, Netanyahu’s coalition responded with further measures meant to weaken Arabs’ political position. In November 2015, his government outlawed the northern branch of the Islamic Movement, an Islamist organization that has rallied a substantial portion of the Arab community around opposition to what it describes as Jewish threats to Muslim holy sites in Jerusalem. And in February of this year, after three Arab parliamentarians visited the families of Palestinians who were killed after attacking Israelis, Jewish lawmakers introduced a so-called suspension bill that would allow a three-fourths majority of the Knesset to eject any representative deemed to have denied the Jewish character of the state or incited violence. The Arab population views the proposed law as a direct attempt to sideline their representatives on the national stage. “Despite the delegitimization campaign against us and the raising of the electoral threshold, we decided to remain part of Israeli politics,” Ayman Odeh, an Arab parliamentarian who heads the Joint List, said during a debate on the proposed rule in the Knesset in February. “Yet we continue to be harassed.”
CITIZENS, UNITED
These developments have intensified the search for a new approach among Arab elites. Two main alternatives have emerged. The first, headed by Odeh, argues that Arab Israelis should work with the Israeli left to unseat the Netanyahu government and replace it with a center-left coalition that is willing to resume the peace talks with the Palestinians and consider major steps to advance the equality and integration of Arab citizens. The second, led by the northern branch of the Islamic Movement, as well as those Knesset members on the Joint List who represent Balad, opposes forming a coalition with the Israeli left. Both camps support the creation of a separate political body to represent Arab citizens, but whereas the former believes that such a body should supplement Arab voters’ current representation in the Knesset, the latter believes it should replace it.
These competing platforms have split the Arab public. In the 2015 survey conducted by the sociologist Smooha, 76 percent of Arab Israelis polled supported the Joint List’s cooperation with Jewish parties in the Knesset. But 33 percent of Arab respondents voiced support for a boycott of Knesset elections; 19 percent supported the use of any means, including violence, to secure equal rights; and 54 percent said that a domestic intifada would be justified if the situation of Arabs does not substantially improve.
The future of the Arabs in Israel depends in part on their ability to overcome these internal divisions, which have hindered the ability of the Arab leadership to achieve progress. Disagreement among Arab leaders as to whether a directly elected Arab political institution should replace or supplement Arabs’ representation in the Knesset, for example, has so far left the Arab population without an elected body of its own. In fact, it should be possible to synthesize these competing visions into a unified program that pushes for equal representation in existing institutions and greater autonomy when it comes to educational and cultural policy. No matter what shape such a platform takes, however, it should commit Arab activists to nonviolence, and it should clearly demand that the Israeli government abolish discrimination in the allocation of state resources. Finally, since broad support for Arabs’ demands for change will make them more effective, Arabs should invite Jews in Israel, Jewish organizations outside the country, Arabs and Palestinians in the region, and others in the international community that are sympathetic to their cause to endorse the platform.
But in many ways, the future of the Arabs in Israel hinges on developments over which they have little control. The first is how the Netanyahu government and its successors manage Israel’s conflict with the Palestinians in the Gaza Strip and the West Bank: whereas open violence between Israel and the Palestinians tends to exacerbate anti-Arab sentiment among Israel’s Jewish majority, a solution to the conflict could set the stage for reconciliation among Arabs and Jews in Israel. The second, of course, is how the Israeli government treats its own Arab citizens. Regardless of the state’s choices, however, Arabs in Israel can still shape their own fate—but that will require settling on a unified political program.
Dictators’ Last Stand
It has been a good decade for dictatorship. The global influence of the world’s most powerful authoritarian countries, China and Russia, has grown rapidly. For the first time since the late nineteenth century, the cumulative GDP of autocracies now equals or exceeds that of Western liberal democracies. Even ideologically, autocrats appear to be on the offensive: at the G-20 summit in June, for instance, President Vladimir Putin dropped his normal pretense that Russia is living up to liberal democratic standards, declaring instead that “modern liberalism” has become “obsolete.”

Conversely, it has been a terrible decade for democracy. According to Freedom House, the world is now in the 13th consecutive year of a global democratic recession. Democracies have collapsed or eroded in every region, from Burundi to Hungary, Thailand to Venezuela. Most troubling of all, democratic institutions have proved to be surprisingly brittle in countries where they once seemed stable and secure.

In 2014, I suggested in these pages that a rising tide of populist parties and candidates could inflict serious damage on democratic institutions. At the time, my argument was widely contested. The scholarly consensus held that demagogues would never win power in the long-established democracies of North America and western Europe. And even if they did, they would be constrained by those countries’ strong institutions and vibrant civil societies. Today, that old consensus is dead. The ascent of Donald Trump in the United States, Matteo Salvini in Italy, and Jair Bolsonaro in Brazil has demonstrated that populists can indeed win power in some of the most affluent and long-established democracies in the world. And the rapid erosion of democracy in countries such as Hungary and Venezuela has shown that populists really can turn their countries into competitive authoritarian regimes or outright dictatorships. The controversial argument I made five years ago has become the conventional wisdom.

But this new consensus is now in danger of hardening into an equally misguided orthodoxy. Whereas scholars used to hope that it was only a matter of time until some of the world’s most powerful autocracies would be forced to democratize, they now concede too readily that these regimes have permanently solved the challenge of sustaining their legitimacy. Having once believed that liberal democracy was the obvious endpoint of mankind’s political evolution, many experts now assume that billions of people around the world will happily forgo individual freedom and collective self-determination. Naive optimism has given way to premature pessimism.

If the past decade has been bad for democracy, the next one may turn out to be surprisingly tough on autocrats.

The new orthodoxy is especially misleading about the long-term future of governments that promise to return power to the people but instead erode democratic institutions. These populist dictatorships, in countries such as Hungary, Turkey, and Venezuela, share two important features: first, their rulers came to power by winning free and fair elections with an anti-elitist and anti-pluralist message. Second, these leaders subsequently used those victories to concentrate power in their own hands by weakening the independence of key institutions, such as the judiciary; curtailing the ability of opposition parties to organize; or undermining critical media outlets.
(By “populist dictatorships,” I mean both outright dictatorships, in which the opposition no longer has a realistic chance of displacing the government through elections, and competitive authoritarian regimes, in which elections retain real significance even though the opposition is forced to fight on a highly uneven playing field.)

According to the new orthodoxy, the populist threat to liberal democracy is a one-way street. Once strongman leaders have managed to concentrate power in their own hands, the game for the opposition is up. If a significant number of countries succumb to populist dictatorship over the next years, the long-term outlook for liberal democracy will, in this view, be very bleak.

But this narrative overlooks a crucial factor: the legitimacy of populist dictators depends on their ability to maintain the illusion that they speak for “the people.” And the more power these leaders concentrate in their own hands, the less plausible that pretense appears. This raises the possibility of a vicious cycle of populist legitimacy: when an internal crisis or an external shock dampens a populist regime’s popularity, that regime must resort to ever more overt oppression to perpetuate its power. But the more overt its oppression grows, the more it will reveal the hollowness of its claim to govern in the name of the people. As ever-larger segments of the population recognize that they are in danger of losing their liberties, opposition to the regime may grow stronger and stronger.

The ultimate outcome of this struggle is by no means foreordained. But if the past decade has been depressingly bad for democracy, the next one may well turn out to be surprisingly tough on autocrats.
In North America and western Europe, populist leaders have gained control of the highest levers of power over the course of only the past few years. In Turkey, by contrast, Recep Tayyip Erdogan has been in power for nearly two decades. The country thus offers an ideal case study of both how populist dictators can seize power and the challenge they face when increasingly overt oppression erodes their legitimacy.

Erdogan became prime minister in 2003 by running on a textbook populist platform. Turkey’s political system, he claimed, was not truly democratic. A small elite controlled the country, dispensing with the will of the people whenever they dared to rebel against the elite’s preferences. Only a courageous leader who truly represented ordinary Turks would be able to stand up against that elite and return power to the people.

He had a point. Turkey’s secular elites had controlled the country for the better part of a century, suspending democracy whenever they failed to get their way; between 1960 and 1997, the country underwent four coups. But even though Erdogan’s diagnosis of the problem was largely correct, his promised cure turned out to be worse than the disease. Instead of transferring power to the people, he redistributed it to a new elite of his own making. Over the course of his 16 years in power—first as prime minister and then, after 2014, as president—Erdogan has purged opponents from the military; appointed partisan hacks to courts and electoral commissions; fired tens of thousands of teachers, academics, and civil servants; and jailed a breathtaking number of writers and journalists.

Even as Erdogan consolidated power in his own hands, he seized on his ability to win elections to sustain the narrative that had fueled his rise. He was the freely elected leader of the Turkish republic; his critics were traitors or terrorists who were ignoring the will of the people.
Although international observers considered Turkey’s elections deeply flawed, and political scientists began to classify the country as a competitive authoritarian regime, this narrative helped Erdogan consolidate support among a large portion of the population. So long as he won, he could have his cake and eat it, too: his ever-tightening grip on the system tilted the electoral playing field, making it easier for him to win a popular mandate. This mandate, in turn, helped legitimize his rule, allowing him to gain an even tighter grip on the system.

More recently, however, Erdogan’s story of legitimation—the set of claims by which he justifies his rule—has begun to fall apart. In 2018, Turkey’s economy finally fell into recession as a result of Erdogan’s mismanagement. In municipal elections this past March, Erdogan’s Justice and Development Party (AKP) lost Ankara, Turkey’s capital, and Istanbul, its largest city. For the first time since taking office, Erdogan was faced with a difficult choice: either give up some of his power by accepting defeat or undermine his story of legitimation by rejecting the results of the election.

Erdogan chose the latter option. Within weeks of Istanbul’s mayoral election, the Turkish election board overturned its results and ordered a rerun for the middle of June. This turned out to be a massive miscalculation. A large number of Istanbulites who had previously supported Erdogan and his party were so outraged by his open defiance of the popular will that they turned against him. The AKP candidate suffered a much bigger defeat in the second election.

Having tried and failed to annul the will of the people, Erdogan now faces the prospect of a downward spiral. Because he has lost a great deal of his legitimacy, he is more reliant on oppressive measures to hold on to power. But the more blatantly he oppresses his own people, the more his legitimacy will suffer.

The implications of this transformation extend far beyond Turkey. Authoritarian populists have proved frighteningly capable of vanquishing democratic opponents. But as the case of Erdogan demonstrates, they will eventually face serious challenges of their own.
It is tempting to cast the stakes in the struggle between authoritarian populists and democratic institutions in existential terms. If populists manage to gain effective control over key institutions, such as the judiciary and the electoral commission, then the bell has tolled for democracy. But this conclusion is premature. After all, a rich literature suggests that all kinds of dictatorships have, historically, been remarkably vulnerable to democratic challenges.

Between the end of World War II and the collapse of the Soviet Union, for instance, dictatorships had a two percent chance of collapsing in any given year. During the 1990s, the odds increased to five percent, according to research by the political scientists Adam Przeworski and Fernando Limongi. Clearly, the concentration of power that characterizes all dictatorships does not necessarily translate into that power’s durability.

Instead of assuming that the rise of populist dictatorships spells an end for democratic aspirations in countries such as Hungary, Turkey, and Venezuela, therefore, it is necessary to understand the circumstances under which these regimes are likely to succeed or fail. Recent research on autocratic regimes suggests that there are good reasons to believe that populist dictatorships will prove to be comparatively stable. Since most of them are situated in affluent countries, they can afford to channel generous rewards to supporters of the regime. Since they rule over strong states with capable bureaucracies, their leaders can ensure that their orders are carried out in a timely and faithful manner. Since they control well-developed security services, they can monitor and deter opposition activity. And since they are embedded in efficient ruling parties, they can recruit reliable cadres and deal with crises of succession.

On the other hand, many of the countries these regimes control also have features that favored democratization in the past. They usually have high levels of education and economic development. They contain opposition movements with strong traditions and relatively established institutions of their own. They often neighbor democratic nations and rely on democracies for their economic prosperity and military security. Perhaps most important, many of these countries have a recent history of democracy, which may both strengthen popular demands for personal liberties and provide their people with a template for a democratic transition when an autocratic regime does eventually collapse.

All in all, the structural features on which political scientists usually focus to gauge the likely fate of authoritarian regimes appear finely balanced in the case of populist dictatorships. This makes it all the more important to pay attention to a factor that has often been ignored in the literature: the sources and the sustainability of their legitimacy.

BROKEN PROMISES

In the twentieth century, democratic collapse usually took the form of a coup. When feuds between political factions produced exasperating gridlock, a charismatic military officer managed to convince his peers to make a bid for power. Tanks would roll up in front of parliament, and the aspiring dictator would take the reins of power.

The blatantly antidemocratic nature of these coups created serious problems of legitimacy for the regimes to which they gave rise. Any citizen who valued individual freedom or collective self-determination could easily recognize the danger that these authoritarian governments posed.
Insofar as these dictatorships enjoyed real popular support, it was based on their ability to deliver different political goods. They offered protection from other extremists. They vowed to build a stable political system that would dispense with the chaos and discord of democratic competition. Above all, they promised less corruption and faster economic growth.

In most cases, those promises were hard to keep. Dictatorships frequently produced political chaos of their own: palace intrigues, coup attempts, mass protests. In many cases, their economic policies proved to be highly erratic, leading to bouts of hyperinflation or periods of severe economic depression. With few exceptions, they suffered from staggering levels of corruption. But for all these difficulties, their basic stories of legitimation were usually coherent. Although they often failed to do so, these dictatorships could, in principle, deliver on the goods they promised their people.

Populist dictatorships are liable to suffer from an especially sudden loss of legitimacy.

This is not true of populist dictatorships. As the case of Erdogan illustrates, populists come to power by promising to deepen democracy. This makes it much easier for them to build dictatorships in countries in which a majority of the population remains committed to democratic values. Instead of accepting an explicit tradeoff between self-determination and other goods, such as stability or economic growth, supporters of populist parties usually believe that they can have it all. As a result, populists often enjoy enormous popularity during their first years in power, as Russia’s Vladimir Putin, Hungary’s Viktor Orban, and India’s Narendra Modi have demonstrated.

Once they consolidate their authority, however, populist dictators fail to live up to their most important promise. Elected on the hope that they will return power to the people, they instead make it impossible for the people to replace them. The crucial question is what happens when this fact becomes too obvious for large segments of the population to ignore.

THE VICIOUS CYCLE

At some point during their tenure, populist dictators are likely to face an acute crisis. Even honest and competent leaders are likely to see their popularity decline because of events over which they have little control, such as a global recession, if they stay in office long enough. There are also good reasons to believe that populist dictatorships are more likely than democracies to face crises of their own making. Drawing on a comprehensive global database of populist governments since 1990, for example, the political scientist Jordan Kyle and I have demonstrated that democratic countries ruled by populists tend to be more corrupt than their nonpopulist peers. Over time, the spread of corruption is likely to inspire frustration at populists’ unfulfilled promises to “drain the swamp.”

Similarly, research by the political scientist Roberto Foa suggests that the election of populists tends to lead to serious economic crises. When left-wing populists come to power, their policies often lead to a cratering stock market and rapid capital flight. Right-wing populists, by contrast, usually enjoy rising stock prices and investor confidence during their first few years in office. But as they engage in erratic policymaking, undermine the rule of law, and marginalize independent experts, their countries’ economic fortunes tend to sour.
By the time that right-wing populists have been in office for five or ten years, their countries are more likely than their peers to have suffered from stock market crashes, acute financial crises, or bouts of hyperinflation.

Once a populist regime faces a political crisis, the massive contradictions at the heart of its story of legitimation make the crisis especially difficult to deal with. Initially, the political repression in which populist regimes engage remains somewhat hidden from public view. Power grabs usually take the form of complicated rule changes—such as a lower retirement age for judges or a modification of the selection mechanisms for members of the country’s electoral commission—whose true import is difficult to grasp for ordinary citizens. Although political opponents, prominent journalists, and independent judges may start to experience genuine oppression early in a populist’s tenure, the great majority of citizens, including most public-sector workers, remain unaffected. And since the populist continues to win real majorities at the ballot box, he or she can point to genuine popularity to dispel any doubts about the democratic nature of his or her rule.

This equilibrium is likely to be disrupted when a shock or a crisis lowers the leader’s popularity. In order to retain power, the leader must step up the oppression: cracking down on independent media, firing judges and civil servants, changing the electoral system, disqualifying or jailing opposition candidates, rigging votes, annulling the outcome of elections, and so on. But all these options share the same downside: by forcing the antidemocratic character of the regime out into the open, they are likely to increase the share of the population that recognizes the government for what it truly is.

This is where the vicious cycle of populist legitimacy rears its unforgiving head. As support for the regime wanes, the populist autocrat needs to employ more repression to retain power. But the more repression the regime employs, the more its story of legitimation suffers, further eroding its support.

Populist dictatorships are therefore liable to suffer from an especially sudden loss of legitimacy. Enjoying a broad popular mandate, their stories of legitimation initially allow them to co-opt or weaken independent institutions without oppressing ordinary citizens or forfeiting the legitimacy they gain from regular elections. But as the popularity of the populist leader declines due to internal blunders or external shocks, the vicious cycle of populist legitimacy sets in. Custom-made to help populist leaders gain and consolidate power, their stories of legitimation are uniquely ill adapted to helping them sustain an increasingly autocratic regime.

Anti-government protesters in Budapest, March 2019.

A CRISIS OF POPULIST AUTHORITY?

Many populist dictatorships will, sooner or later, experience an especially serious crisis of legitimacy. What will happen when they do?

In The Prince, Niccolò Machiavelli warned that the ruler “who becomes master of a city accustomed to freedom” can never sleep easy. “When it rebels, the people will always be able to appeal to the spirit of freedom, which is never forgotten, despite the passage of time and any benefits bestowed by the new ruler….
If he does not foment internal divisions or scatter the inhabitants, they will never forget their lost liberties and their ancient institutions, and will immediately attempt to recover them whenever they have an opportunity.”

Populist dictators would do well to heed Machiavelli’s warning. After all, most of their citizens can still recall living in freedom. Venezuela, for example, had been democratic for about four decades by the time Hugo Chávez first ascended to power at the end of the 1990s. It would hardly come as a surprise if the citizens of countries that have, until so recently, enjoyed individual freedom and collective self-determination eventually began to long for the recovery of those core principles.

But if populist dictators must fear the people, there is also ample historical evidence to suggest that autocratic regimes can survive for a long time after their original stories of legitimation have lost their power. Take the twentieth-century communist dictatorships of Eastern Europe. From their inception, the communist regimes of Czechoslovakia and East Germany, for example, depended on a horrific amount of oppression—far beyond what today’s populists in Hungary or Poland have attempted so far. But like today’s populists, those regimes claimed that they were centralizing power only in order to erect “true” democracies. In their first decades, this helped them mobilize a large number of supporters.

Eventually, the illusion that the regimes’ injustices were growing pains on the arduous path toward a worker’s paradise proved impossible to sustain. In Czechoslovakia, for example, cautious attempts at liberalization sparked a Soviet invasion in 1968, followed by a brutal crackdown on dissent. Virtually overnight, the regime’s story of legitimation went from being an important foundation of its stability to a hollow piece of ritualized lip service. As the Czech dissident Vaclav Havel wrote in his influential essay “The Power of the Powerless,” it was “true of course” that after 1968, “ideology no longer [had] any great influence on people.” But although the legitimacy of many communist regimes had cratered by the late 1960s, they were able to hold on to power for another two decades thanks to brutal repression.

Populist dictatorships in countries such as Turkey or Venezuela may soon enter a similar phase. Now that their stories of legitimation have, in the minds of large portions of their populations, come to be seen as obvious bunk, their stability will turn on the age-old clash between central authority and popular discontent.

Recently, a series of writers have suggested that the rise of digital technology will skew this competition in favor of popular discontent. As the former CIA analyst Martin Gurri argued in The Revolt of the Public and the Crisis of Authority in the New Millennium, the Internet favors networks over hierarchies, the border over the center, and small groups of angry activists over bureaucratic incumbents. These dynamics help explain how populists were able to displace more moderate, established political forces in the first place. They also suggest that it will be difficult for populists to stay in power once they have to face the wrath of the digitally empowered public.
This argument, however, fails to take into account the differences in how dictatorships and democracies wield power. Whereas dictatorships are capable of using all the resources of a modern state to quash a popular insurgency, democracies are committed to fighting their opponents with one hand tied behind their back. Dictators can jail opposition leaders or order soldiers to fire into a crowd of peaceful protesters; democratic leaders can, at best, appeal to reason and shared values.

This imbalance raises the prospect of a dark future in which digital technology allows extremist networks to vanquish moderate hierarchies. Once in power, these extremist movements may succeed in transforming themselves into highly hierarchical governments—and in using brute force to keep their opponents at bay. Technology, in this account, fuels the dissemination of the populists’ stories of legitimation when they first storm the political stage, but it fails to rival the power of their guns once their stories of legitimation have lost their hold.

It is too early to conclude that the populist dictatorships that have arisen in many parts of the world in recent years will be able to sustain themselves in power forever. In the end, those who are subject to these oppressive regimes will likely grow determined to win back their freedom. But the long and brutal history of autocracy leaves little doubt about how difficult and dangerous it will be for them to succeed. And so the best way to fight demagogues with authoritarian ambitions remains what it has always been: to defeat them at the ballot box before they ever set foot in the halls of power.