Doubling Down on Democracy

A protester holds up a U.S. flag in Hong Kong on September 8, 2019, as demonstrators march from Chater Garden to the U.S. Consulate to call for support. (Photo by Vernon Yuen via Getty Images)

Americans are beginning to feel some relief from the worst of the political pressures of the past twelve months. Nevertheless, a global pandemic, nationwide protests over social justice, a bitterly contested election, the incumbent’s refusal to accept his loss, and the storming of the U.S. Capitol by his angry mob—all exacerbated by the wretched excesses of social media and traditional media’s substitution of self-serving speculation for skeptical, factual reportage—have taken a sharp toll on our individual and collective psyches. It doesn’t help that the backdrop is growing Chinese power and Russia’s ongoing efforts to create exploitable chaos.

Even the good news adds to the confusion: the stock market is seemingly in a rush to price in strong future economic growth and America’s undeniable comparative advantage in tech innovation. And, really, how many trillion dollars in federal stimulus spending is too many?

In times like these, when everything seems new and invites getting caught up in the passions of the moment, it might not be a bad idea to step out of the maelstrom and review what we think is really important.

Such, I think, was the purpose of Freedom House, the venerable monitor of the tides of freedom and democracy around the world, in convening, together with the Center for Strategic and International Studies and the McCain Institute, a nonpartisan task force on democracy and its authoritarian challengers, which I joined.

Our final report, Reversing the Tide: Towards a New U.S. Strategy to Support Democracy and Counter Authoritarianism, came out today. The first thing to say about the report is that it is full of detailed recommendations that will be useful to the Biden Administration, future administrations, and Congress in their efforts to think about promoting democracy abroad and shoring it up at home.

Many of these recommendations have the potential to drive significant change in policymaking. The report calls on President Biden to issue a Presidential Decision Directive identifying “support for democracy at home and abroad as a core value and core national interest.” It calls for democracy to become the “fourth D” of our national security strategy, joining defense, diplomacy, and development. The report calls for the publication of a National Democracy Strategy alongside the traditional National Security Strategy and for establishing an interagency National Democracy Council to oversee its implementation.

The government’s interagency processes may not be the sexiest subject in town, but if Biden follows these recommendations, he will motivate serious activity within the government. He will also find many individuals who care deeply about these issues and about the people around the world, from Hong Kong to Belarus, who are fighting for their freedom.

In a broader sense, however, the significance of this report comes not primarily from its specific recommendations but from the general claim it stakes: Democracy really matters to the United States and to Americans. We are now going through a rough patch from which we have yet to fully emerge, and one reason for the roughness is our rediscovery of the profound ways in which our practice of democracy fails to live up to the ideals that animate it. But the problems lie in the practice, not the ideals themselves. Regardless of our structural problems, there aren’t many Americans who urge the replacement of practices like popular elections with governing structures modeled on those of the Chinese Communist Party (CCP). They want a better American democracy, not Xi Jinping. The task force report unequivocally affirms this fact.

This normative conclusion about how a free people should govern its affairs has application abroad, as well. Democratization by force of arms is not part of the program, but standing with those who seek freedom and democracy for themselves definitely is. So is assisting these seekers of democracy in ways they see fit.

The top-down and externally driven approaches of the past must give way to assistance and support designed in close consultation with the people who actually have to live with the results. This is a worthy project not just for American democracy but for democracy more generally. The democracy summit that President Biden has proposed is an opportunity for like-minded countries to explore what can be done collectively along these lines. Like Americans, the people of, say, Japan, South Korea, and Australia have shown no appetite for chucking their democracies in favor of one-party rule, let alone dominance from abroad by the CCP.

As the Cold War ended and especially in the decade that followed it, democracy seemed to be traveling globally with the wind at its back. That is no longer the case. But this fact does not warrant the conclusion that we should give up on the role of democratic aspirations—including the hopes for human, civil, and political rights—in the conduct of our foreign policy.

In the largest sense, in fact, the Freedom House report arrives as a rebuke to those who conclude from democracy’s current challenges either that we should put all such considerations aside, approaching the world solely from the perspective of national interests narrowly construed, or that we should stay home with our heads hung in shame at our own deficiencies.

We should do neither. It would be wrong in principle, bad for our country and our friends, and a comfort to our enemies; it’s not who we are as a people.

This article was originally published on April 14th, 2021 in American Purpose

Weiner’s Folly

As we travel down the rabbit hole that is Tim Weiner’s The Folly and the Glory: America, Russia, and Political Warfare 1945–2020, let us begin at the beginning, with the first sentence: “For seventy-five years, America and Russia have fought for dominion over the earth.”

The aging Cold War hawks among us might read a sentence like that with some degree of satisfaction: We told you it was a long, twilight struggle; we told you the Communists in the Kremlin sought global domination; we told you the free world was imperiled; we told you the United States had to contain them and contest their dominance. So it is that Weiner’s acknowledgement of the nature of the struggle and the stakes involved, much at odds with the fashionable anti-anti-Communism that took hold during the Vietnam war, smells of vindication, if not napalm and victory, to the hawks in the parallel ideological struggle at home.

But wait, let’s read it again: “For seventy-five years America and Russia *have fought*…” (Emphasis added.) So apparently this struggle for “dominion over the earth” is ongoing. But the Cold War ended in November 1989 with the fall of the Berlin Wall—certainly by December 1991, with the dissolution of the Warsaw Pact and the breakup of the Soviet Union, which collapsed in shambles. The United States and the West won—though “dominion over the earth” surely overstates the jackpot. And that was 30 years ago.

So it turns out this is not the story Weiner is telling here—at most a part of it, and a misleading part at best. Though during the Cold War “Russia” was a casual way of referring to the Soviet Union, the Soviet Union of the Cold War is for Weiner a lesser included case of an ongoing competition in political warfare between Russia and the United States. It did not reach its culmination in 1991, but in 2016.

When Russia won.

If you do read Weiner’s book, the paragraph in The Folly and the Glory you will feel you have long been waiting for appears on page 231. All becomes clear at last: “But Putin had another kind of weapon at the ready, and its long fuse was about to be lit. He wanted to undermine democracy in America, and how better to achieve that aim than to elect a dangerous demagogue as president?”

So what we have here, really, is not quite a disinterested history of the often-dirty, sometimes dangerous political contestation short of war between the United States and the Soviet Union and then, yes, Russia since the end of the Second World War. Rather, it is the projection backward onto history of a conclusion about recent American politics, namely, that Vladimir Putin delivered the White House in 2016 to a Russian “agent of influence” named Donald J. Trump. The Folly and the Glory is the inquiry that follows a dumbfounded smack of one’s palm to one’s forehead and the question, how the hell could that happen?

And strangely enough, at least as far as Cold War history and the talents of Putin go, it’s a pretty good showing. It retains a certain conventional left-leaning tendentiousness, but Weiner did not win his Pulitzer Prize and his National Book Award (in 2007 for Legacy of Ashes: The History of the CIA) simply because he reflects that conventional wisdom. He is also a thorough researcher and a good writer. And here, the author’s prior inclination to torch the CIA’s clandestine activities comes into tension with his conviction that political warfare is serious, and he doesn’t want the United States to lose.

The Folly and the Glory has a lot more to say about the American role in political warfare than about our rival’s. That’s in part because less digging was necessary. The Church Committee’s 1975 Senate investigation into U.S. intelligence operations opened a window onto 30 years’ worth of clandestine activity. Researchers also have tools such as the Freedom of Information Act to pry loose documents classified and unclassified. Add to them the massive leaks that came from Chelsea Manning via Julian Assange and Wikileaks and from Edward Snowden, and Weiner had a lot to work with.

Although KGB activities during the Cold War and its follow-on agencies’ activities in Russia since then aren’t entirely a black box, there is far less available information about them. This contributes to the impression that through the 1970s, most of the folly was on the American side, whereas most of the glory was on the Soviet side. American officials in and around this shadowy world were mostly serious people trying to do their best for their country. But they got a lot wrong, occasionally fell prey to monomania, made pledges on which they were unable to deliver, and engaged in brutal infighting over turf. Young Tim Weiner would surely disapprove of men who routinely refer to their adversaries as “Commies.” The Folly and the Glory manages to accord them ambivalent respect.

At the same time, under the aegis of the Brezhnev Doctrine, according to which once a country turned Communist, it would stay Communist, Soviet gains throughout the world were mounting; 1979 saw the Soviet invasion of Afghanistan and Marxist revolution in Nicaragua. Some of what transpired was military action or armed revolutionary activity, but Weiner is right to highlight the importance of “spying and subversion, subterfuge and sabotage, stolen elections and subtle coups, disinformation and deception, repression and assassination.” At the start of the Cold War, he notes, the United States was something of an ingenue in such matters, whereas intelligence services in Russia had been relying on such tactics since Peter the Great.

The United States built considerable capacity for political warfare after World War II, however, and in Weiner’s judgment “it sped the collapse of the Soviet Union”—before its skills began to wither due largely to entropy and apathy. Meanwhile, in Russia, an ex-KGB operative with a thorough understanding of these dark arts was consolidating his grip on a state down but by no means out. Putin’s list of grievances was long, and his determination to deploy the techniques of political warfare against the United States was firm.

And then—well, Trump happened. Although Weiner’s title bills the temporal period of his book as 1945–2020, May 17, 2017, seems to be its spiritual endpoint—the date of the appointment of special counsel Robert S. Mueller III, whose investigation was supposed to blow the lid off Russian collusion with the Trump campaign. To Weiner, the view embodied in all the hopes and aspirations of Trump’s “collusion” critics remains pristine and unimpeachable to this day.

IN THIS TELLING, Christopher Steele, author of the notorious “dossier” leveling collusion and other allegations against Trump and his associates, is still a “veteran British spy” and “a highly reliable reporter on Russia, his field of expertise,” rather than the peddler of unverified and demonstrably false secondhand gossip he actually is. Steele was “working for private eyes hired by the Clinton campaign”—a fact not publicly known when the dossier surfaced and denied subsequently by Clinton people who knew they were lying. While “no known evidence proved [Trump] had been bribed with cash or blackmailed, . . . he had made himself an attractive target for the Russians for thirty years” and “was surely a mark.” So there.

As for the known Russian agent of influence here, the Internet Research Agency, which was indeed targeting U.S. politics, workers there toasted themselves the morning after the election, many spontaneously saying, “We made America great again.” Or so goes the story Weiner accepts uncritically. If you think what made the difference in the 2016 election was a Russian troll farm, $15 million worth of Facebook ads, and obvious hoaxes quickly identified as such, like the one that had Pope Francis endorsing Trump—if you think such mice as these out-influenced nonstop saturation media coverage of the candidates, $2.4 billion in expenditures, Hillary Clinton’s failure to campaign in Wisconsin, and FBI Director James Comey’s decision to announce days before the election that he was reopening an investigation into Clinton’s email server—then the high-fives the day after in Saint Petersburg are just more confirmation.

But wait, what has happened since May 2017? Didn’t the Mueller investigation come up dry on Trump campaign collusion with Russia? Weiner quotes Mueller saying, after the fact, that “a thorough FBI investigation would uncover facts about the campaign and the president personally that the president could have understood to be crimes.” But wasn’t that investigation Mueller’s job? Ah, but Mueller never got a charter to investigate Trump’s previous financial affairs—plus there was Trump’s flagrant obstruction, notwithstanding that Mueller’s own report couldn’t conclude it was criminal. Collusion springs eternal.

Viewed from the present moment, however, an even more basic question about reality is in order: If Putin elected Trump in 2016, what happened in 2020? Did Russia, having mastered political warfare to the point of victory then, lose interest four years later? Was it thwarted in 2020 by—Trump-administration countermeasures? Fact checks on Twitter? Whatever else may be said of 2020, on the positive side of the ledger is how it has dispelled the notion that the Russians have subverted and now dominate American politics.

If you start with the unshakeable conviction that Trump is a Russian agent of influence, then no inability of the special counsel to demonstrate criminal collusion will persuade you otherwise. Never say no evidence. Say “no known evidence.”

But if this conviction drives you to revisit the entire postwar era and conclude that the Soviet Union then was and Vladimir Putin now is doing everything possible to undermine American democracy through “political warfare,” I’ll take it. Now do China.

This article was originally published on January 7th, 2021 in Commentary

Keeping the Republic

Since the 1787 adoption in Philadelphia of the Constitution that established the form of government of the United States of America, this country has been committed to democratic, rights-regarding self-governance. “What do we have,” a lady asked Benjamin Franklin afterwards, “a republic or a monarchy?” Franklin replied with a quip for the ages: “A republic, madam, if you can keep it.”

That American habit of conditionality has persisted. “O say,” Francis Scott Key asked in 1814 after watching the British shell Fort McHenry in Baltimore, “does that star-spangled banner yet wave?” And there was Lincoln at Gettysburg in 1863, explaining that the Civil War should be seen as a question of whether America, built on principles of liberty and equality, could “long endure.”

We keep asking. And so we should, not least to remind ourselves that democratic self-governance is neither automatic nor easy. But some 230 years after Franklin’s impish reply, this much we seem entitled to say about our Republic: We have kept it.

The arduous road

The keeping of the American Republic hasn’t been an easy or gentle task. It entailed our willful accommodation of the odious institution of slavery for more than seventy-five years after the ratification of the Constitution, then a Civil War in which more than 2 percent of the country’s population died, then more than a hundred years and counting of regional segregation and nationwide discrimination. Keeping the Republic also entailed our encroachment on what we called a frontier, displacing and decimating the Native American population.

It entailed the struggle to recognize that “all men are created equal” applies to all human beings. It entailed navigating the disruptions of the industrial age and now the digital age, two world wars, assorted lesser wars, and a Cold War against a nuclear-armed superpower convinced of its inevitable worldwide dominion—not to mention a Great Depression, a Great Recession, the disruptions globalization has wrought, and numerous lesser economic travails. It entailed the creation and, then, the difficulties of managing a vast administrative state meant to reduce the risks attendant on a modern economy far removed from its largely agrarian origins.

Keeping the Republic has entailed coping with not just the advantages but the problems of being first a rising power, then a great power, then a superpower, then a hegemonic power, and now a power facing other rising powers. Moreover, the path hasn’t just been long; it has also been marked by often overheated political moments in which great numbers of Americans viewed their political opponents as morally illegitimate or even worse. In such moments, preservation has entailed the need to cope with the human ambition to have its way whatever the cost.

Through it all, we’ve kept our Republic—our democracy, as we call it today, the ability of free people to govern themselves. The commitment hasn’t wavered for nearly 250 years, through wave after wave of crisis and response. It stands as the longest-lasting such commitment in the world, and there is reason to believe that it will persist.

American exceptionalism

An intermittent elite disdain for the idea of American exceptionalism has always been part of the mix of American opinion. One need not embrace doctrinal exceptionalism, though, to recognize that the United States is indeed exceptional.

How exceptional? With the ravages of a pandemic, the polarizing figure of President Trump, the digitally promoted tribalism, the coarseness of expression, and the increasing insistence that you’re entitled to a little violence if your side is losing despite its righteousness, 2020 will definitely rank among America’s toughest peacetime years.

But does even this year show that Americans are willing to abandon our constitutional arrangements? In favor of what, exactly? Membership in some upper Midwestern militia? Swearing allegiance to a woke clerisy dedicated to shutting down dissent? A divorce, velvet or otherwise, between red states and blue states?

Even this year, the democratically and legally constituted authorities, with all their equivocations, have not really shown themselves prepared to let hotheads prevail over the Constitution and the Bill of Rights. To put it another way, if “buy low” is generally good advice, 2020 would be an excellent time to invest in American democracy.

Warts and all

The failure of the totalitarian regimes of the 20th century, and of Orwell’s chilling vision in 1984 of permanent one-party rule to come to pass, reflects the reality that mutually respected freedoms are more in accord with human desire than is total control by one human being or one advantaged group. The conceit of totalitarianism is that with enough force, one can eliminate the possibility of disagreement once and for all. But in fact one cannot do so, even if the force is thoroughgoing and brutal. That is because disagreement over at least some things is inevitable whenever two or more people gather. Even relatively successful exercises in totalitarian control can suppress only the expression of disagreement, not the disagreement itself.

The question of politics, writ large, is how to resolve disagreements. The totalitarian answer is to resolve them by force from the top down—inevitably to the primary advantage of those closer to the top. But the dissatisfaction generated by such an arrangement can never be eradicated, nor can the desire for better arrangements.

Peaceable, rights-regarding, democratic self-governance is simply better than any other form of government at filtering and satisfying people’s competing desires and disagreements. The United States literally pioneered this form of government; it is the founding member of a club now populated on every continent by democratic states peaceably inclined toward one another.

The condition of the American Republic has never conformed to an ideal type, nor will it. It arose under a particular set of circumstances and has evolved within constitutional limits as circumstances have changed. The result is often unattractive. For an example, take security: The United States had the geographic advantage of maturing politically between two oceans. But beginning in the late 19th century, American security has focused on the development of formidable military capabilities. In fact, these capabilities radiate outward through alliance relationships and less formal understandings to provide security for many other countries, democratic countries most prominent among them. To varying degrees, this security umbrella relieves other countries of much of the burden of providing for their security themselves.

The American ability to take care not only of its own security but also that of others warrants commendation. Unfortunately, it also creates certain democratic debilities that many other democratic countries don’t encounter precisely because of their lesser security responsibilities: the excesses of the surveillance state, for example, and criminal misbehavior in wartime. Some deficiencies of American democracy, in other words, have structural origins.

Others are constitutional. The United States went first in crafting a republican constitution—and today bears the scars of the struggle to bring thirteen states together in one nation. No country setting out on a democratic path today would create a mechanism such as the electoral college. But we have it, and we are stuck with it until such time as there is sufficient support to amend the Constitution (or for some extra-constitutional workaround).

Other democratic deficiencies are entirely of our own making. In the post-Cold War period of American “hyperpower” (hyperpuissance, a coinage of then-French Foreign Minister Hubert Védrine in 1999), some U.S. politicians and policymakers became wildly overconfident about their ability to hasten the adoption of democratic, rights-regarding self-governance around the world. American exceptionalism was transformed into a universal principle and mission. That was a mistake: Self-governance advances only against resistance, and it remains a difficult enterprise even when broadly supported. (In fact, American exceptionalism might better be viewed as an alternative to the expectation of universal application of this form of governance.)

Notwithstanding overreach and resistance, however, the example of the United States still stands—and that of the United Kingdom, France, Germany, Japan, and many more. The hills are now alive with shining cities, though their brightness varies from time to time. Democracies should work singly and together to bolster the case for democratic, rights-regarding self-governance. A little competition among them for bragging rights on aspects of democratic practice is hardly a bad thing. In fact, it’s excellent.

The authoritarian challenge

Meanwhile, authoritarians of all kinds, especially those with sufficient resources to support clandestine active measures abroad in addition to the secret police that keep them in power at home, don’t like the power of democratic example. They like even less advocacy on behalf of the superiority of democracy. Worse still are efforts by the United States and others to promote democracy and provide support for democratic forces abroad. So authoritarians use whatever capacity they can muster to undermine the legitimacy of democracy itself.

The Chinese Communist Party (CCP) today, Putin’s Russia (like the Communist Party of the Soviet Union before it), and the mullah-ruled theocracy in Iran are quick to point to the faults of the United States as the leading democratic power and our supposed failure to live up to our democratic ideals. The purpose is twofold: secondarily, to persuade others that the American example is not worth following; primarily, to persuade Americans that the American example is fundamentally flawed and not worth advocating for.

How do we respond to this challenge? First, we should note that the CCP, Putin, and the mullahs do not share our republican ideals. Their criticism is insincere. They loathe and fear the United States for its ideals as the most powerful democracy and will do so still more intensely the more fully we live up to them. Russia and Iran, for example, persecute individuals who are not heterosexual. They would prefer a United States with anti-sodomy laws still on the books as justification for their own persecution to a United States that implicitly rebukes this persecution abroad through constitutional protection for same-sex marriage. Similarly, Chinese government media outlets enjoy calling out the United States for racism, while the CCP perpetrates genocide against the Uighurs in Xinjiang.

Such persecution is indeed perfectly consistent with these regimes’ authoritarian sensibilities: no hypocrisy there. It’s just that their authoritarian sensibilities are disgusting. The United States can and should accept criticism over its practices from those who share our self-governing ideals. In fact, Americans themselves provide abundant such criticism. The United Kingdom, France, and others among the democratically like-minded have standing to offer criticism. We should take criticism from the likes of China, Russia, and Iran as an invitation to explain the superiority of our political arrangements.

The second response to the challenge from these non-democracies is something of a corollary. We should not confuse two different questions: how the world should be and how it measures up to those ideals. Deficiency on the latter does not refute the normative prescriptions of the former.

If our ideals were truly impossible to achieve, the perpetual failure to measure up or even make progress toward them would be grounds for rethinking their validity. In fact, however, the United States and many other countries have made substantial progress toward democratic rights, from self-governance on the political side to individual sufficiency in the face of inevitable scarcity on the economic side. China, Russia, Iran, and many others have not. There is no doubt that the United States and others could improve on their performance. Indeed, our normative principles—our ideals about how the world should be—themselves provide the guideposts for improving our practice. For many other countries, interest in pursuing these ideals consists only in pretending to support them.

There is a third response to these regimes: to push back, with moral clarity as well as humility. It may be true that China would collapse into anarchy without a strong one-party state. If so, that’s sad for the Chinese people (though good for members of the Chinese Communist Party). But our people and the people of other countries practicing democratic, rights-regarding self-governance do not require one-party rule to avoid such a collapse, and that is simply a better position for a government or a society to be in. We should say that when we can, and certainly in response to charges of hypocrisy from autocrats and would-be totalitarians. If they act to undermine systems of self-governance by taking advantage of the freedom of our societies, we should find creative ways to return the favor. We are not without capabilities, and our adversaries have weaknesses—most of them involving their fears for their long-term success and their personal physical security.

We shouldn’t expect the rest of the world to conform anytime soon to principles of democratic, rights-regarding self-governance. But we can affirm that such a system is the best answer human beings have devised to the problem of politics. Or to tone that claim down a bit, we can put it in Churchillian terms: our system is the worst—except for all the others.

Either way, we’re keeping it; and, where we can, we should help others who aspire to pursue something similar.

This article was originally published on November 23rd, 2020 in American Purpose

Ideology Divides Americans on Concern Over Coronavirus

The first six months of 2020 presented Americans with a baffling and dangerous public health crisis as well as varied and evolving policy responses from governments at the local, state, and federal level — all of which directly affected their lives in real time.

Research into COVID-19 has been growing exponentially. A mid-May article in Science put the number of published papers at 23,000 and doubling every 20 days. Yet much remains unknown about the virus, and we still have no certain knowledge of how the crisis is going to end. Nevertheless, self-confident public recriminations over policy decisions are as bitter as one might expect in a hyper-partisan political environment during a presidential election year. It’s prudent to keep in mind that the criticisms come without benefit of dispassionate hindsight and, indeed, in the middle of ongoing uncertainty.

The Democracy Fund + UCLA Nationscape survey provides important insight into how Americans have viewed the crisis as it has unfolded. The survey has solicited views on the coronavirus pandemic, interviewing approximately 6,300 people each week beginning March 18, 2020. It reveals views of the crisis that differ across the ideological spectrum, but with commonalities of behavior that resist the ideological divergence.

In some respects the results are akin to a dog that isn’t barking: some of the questions in the survey ask people about their own behavior, and the results vary only modestly across demographic groups including sex, age, and income, as well as party affiliation, 2016 presidential vote, and ideology. For example, the July 9–15 wave asked respondents whether in the past week they had “worn a mask when going out in public.” Overall, 92 percent said yes and 8 percent no, with small differences across demographic groups.

Table 1


Modest differences emerge in political categories, but in the context of a high degree of consensus: 90 percent of Republicans versus 96 percent of Democrats said they had worn a mask.

A behavioral question with a more uneven result asked respondents whether they had left their house in the past week “for non-essential goods or services.” Here, the overall split was 54 percent yes to 46 percent no. Men were somewhat more likely to say they had left home, at 58 percent versus 51 percent of women. Young people were more likely than older people to say they had stepped out for non-essentials, with 55 percent of 18- to 29-year-olds reporting that they had done so, compared to 46 percent of people over 65. On the question of going out in public, Democrats emerge as notably more cautious than Republicans: only 47 percent of Democrats said they left the house in the past week for non-essentials, compared to 64 percent of Republicans.

One pandemic “dog” has been barking consistently through all weekly waves of the Nationscape survey: the considerable difference in level of concern over the virus based on ideological point of view. The survey asks: “How concerned are you about coronavirus here in the United States?” It gives respondents a choice between very, somewhat, not very, and not at all concerned. Those who identify as “very liberal” are much more likely to say they are very concerned than those who identify as “conservative” or “very conservative.” In the July 9–15 wave, 85 percent of the “very liberal” are very concerned — compared to 47 and 49 percent respectively in the two conservative categories.

In the unweighted Nationscape sample of 6,319 individuals who answered this question, those who identify as “very liberal” make up a little over 10.5 percent of all respondents, about the same percentage as those identifying as “very conservative.” People calling themselves “liberal” account for another 16.5 percent, “moderate” a little over 33 percent, and “conservative” a little over 18.5 percent.

Of those identifying as “liberal,” 71 percent are very concerned, with moderates at 61 percent. So apart from the slightly greater propensity toward being very concerned among the “very conservative” compared to the “conservative,” we have a fairly straightforward ideological progression: the farther left you lean, the more likely you are to be very worried about the coronavirus.
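For readers who want to reproduce this kind of breakdown from a Nationscape data release, the short sketch below shows one way the tabulation could be done. It is a minimal illustration only, not the survey team’s actual pipeline: the CSV file name and the column names (“ideo5” for five-point ideology, “concern_us” for the coronavirus-concern question) are assumptions for the sake of example, and the published estimates may apply survey weights that this simple unweighted tabulation omits.

```python
# A minimal sketch, assuming a hypothetical individual-level extract of one
# Nationscape wave saved as a CSV. The file name and column names
# ("ideo5" for five-point ideology, "concern_us" for the coronavirus-concern
# question) are illustrative assumptions, not the survey's documented schema.
import pandas as pd

df = pd.read_csv("nationscape_2020-07-09_wave.csv")  # hypothetical extract

# Unweighted composition of the sample by ideological self-identification,
# as a percentage of all respondents who answered the question.
composition = df["ideo5"].value_counts(normalize=True).mul(100).round(1)
print(composition)

# Share of each ideological group answering "Very concerned" to
# "How concerned are you about coronavirus here in the United States?"
very_concerned = (
    df.assign(very=df["concern_us"].eq("Very concerned"))
      .groupby("ideo5")["very"]
      .mean()
      .mul(100)
      .round(1)
)
print(very_concerned.sort_values(ascending=False))
```

A weighted version of the same calculation would multiply the “very concerned” indicator by each respondent’s survey weight and divide by the sum of the weights within each ideological group.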

Figure 1


Between 8 and 11 percent of all respondents each week say they are “not sure” where they fall ideologically. In the July 9–15 wave of the survey, just over 10 percent were “not sure.” Forty-seven percent of these “not sure” respondents were very concerned about the coronavirus — the same as the proportion of those identifying as “conservative” who expressed the same level of concern. In previous waves, however, the percentage of this “not sure” group saying they are very concerned about the coronavirus has fallen between the percentage of those identifying as “moderate” and those identifying as “conservative” who said the same. In other words, if level of concern is a predictor of political ideology, this group generally lands slightly to the right of “moderate.”

It is also noteworthy that for the “very liberal,” the rate of very concerned has barely changed since the survey began asking the question. In the first 17 times the question was posed in weekly surveys, only once did the share of very concerned fall as low as 72 percent among the “very liberal.” In all other weeks, 75 percent or more of this group were very concerned, with nine weeks at 80 percent or more.

By contrast, those identifying as “conservative” have experienced changing levels of concern. For the first five weeks of the survey question, the very concerned in this group landed in the 53–55 percent range. At the end of April, conservatives dipped to 45 percent and continued to fall to a low of 35 percent in the June 4–10 survey, after which they began to rise again, reaching 47 percent in the July 9–15 wave. Also notably, by the time of the survey wave that began May 7, one in five of those identifying as “conservative” said they were not very concerned or not at all concerned. At no time have those two responses combined accounted for more than 8 percent of those calling themselves “very liberal” or “liberal.”

In addition to being more divided on concern about the virus, conservatives are more divided on restrictive government policy measures in response. For example, one survey question has asked about support for state and local government action to “close schools and universities.” In the July 9–15 wave, respondents who either strongly support or somewhat support school closings account for 84 percent of those identifying as “very liberal,” 81 percent of “liberal” respondents, 59 percent of “moderate” respondents, 51 percent of “conservative” respondents, and 45 percent of “very conservative” respondents. And within the “very liberal” category of those who strongly or somewhat support closing schools, nearly 80 percent strongly support such school closings.

The Nationscape survey has not asked why people are concerned about the coronavirus to the extent that they are. It seems likely that most people’s first thoughts go to their own health and that of their families and friends, considering the risk of contracting COVID-19 and how sick one might get. But many Americans are also concerned about second-order effects; for example, the economic downturn in general or their personal finances, or added stress in areas such as child or elder care. For example, FiveThirtyEight’s tracking shows that every day starting March 22, at least 50 percent of Americans have said they are very concerned about the effect of coronavirus on the economy, with just under an additional third saying they are somewhat concerned. The level of concern about the economy is in fact considerably higher than the concern that “they, someone in their family, or someone they know will become infected with coronavirus”: FiveThirtyEight shows 36 percent very concerned, 33 percent somewhat concerned, 17 percent not very concerned, and 11 percent not at all concerned.

It therefore seems very likely that disaggregating health from other concerns among the very concerned and somewhat concerned would reveal an even more pronounced ideological dimension to the response.

This article was originally published on July 31, 2020 in Voter Study Group

‘Life, Liberty and the Pursuit of Happiness’ — These Words Set America on Path to Progress

Reports by government commissions aren’t generally known for their insight into basic questions about the human condition, nor can they typically be read for pleasure. The report of the State Department’s Commission on Unalienable Rights, in circulation as of last week, is perhaps the exception that proves the rule: a lively and serious inquiry into the basic ideas that animated the founding of the United States and provided impetus to the global pursuit of human rights.

Secretary of State Michael Pompeo chartered his commission well before the tough six months America has just been going through, from pandemic to lockdown to protests, some of them violent. Yet current conditions make its message all the more timely. 

Demonstrators demand justice and rail against past and present injustice. And whether they are aware of it historically or not, they mostly rely on claims introduced into the political world in the American Declaration of Independence. George Floyd had a right not to be slain by a police officer. Government is supposed to protect people’s lives and liberty. They should govern themselves as equals and be free to pursue happiness as they see it, without fear of capricious force under color of law.

Declaring independence, gradually

“Life, Liberty and the pursuit of Happiness,” that is: Before the Declaration, ideas were brewing along the lines of the “unalienable rights” of all human beings, but the political world was the sport of kings and barons, chieftains and the strong. To most of them, the idea that government should be “of the people, by the people and for the people,” as Lincoln described it 87 years later, had never occurred. Yet the idea of these rights was so powerful and so liberating that it became not only a global beacon against oppression, but also the means by which Americans began to free themselves from the constraints of the times in which it arose.

Declaration of Independence, July 4th, 1776, painted by J. Trumbull and engraved by W.L. Ormsby, N.Y. (Library of Congress)

That’s because in saying “all men are created equal,” Jefferson and the other signers of the Declaration of Independence both meant it and did not mean it. Clearly, they didn’t mean “all men and women are created equal,” a formulation that would come to the fore with the Declaration of Sentiments drafted by Elizabeth Cady Stanton at the Seneca Falls Convention in 1848. Nor did it apply to the men, women and children who were enslaved, some by the very signers. Nor did they mean it with regard to Native Americans being driven from their ancestral lands.

Nor did all the abolitionists and early women’s rights advocates themselves necessarily believe in universal human equality. Nevertheless, those five words formed the basis of 244 years’ worth — and counting — of demands for equality in the United States and beyond our own national borders.

The work our founders began isn’t over

The Founders did not finish the job of political equality with the Declaration and the Constitution, nor did Lincoln with the Civil War and Emancipation Proclamation, nor did Susan B. Anthony when she illegally cast a ballot in the 1872 presidential election, nor the Supreme Court in 1954 in Brown v. Board of Education by reversing its previous holding and declaring that the “separate but equal” justification for segregation was not equality.

But the Founders did start the project of political equality by risking their necks on independence in the name of those five words. And the others mentioned here, and many more, continued the project against resistance, by relying on a history tracing back continuously and directly to “all men are created equal” as they demanded justice.

This is a story the Commission on Unalienable Rights tells with clarity and erudition. Likewise compelling is its account of the resonance of the principles of the American founding in the United Nations’ 1948 Universal Declaration of Human Rights and the little-remembered context in which other countries drew on their own national traditions in pursuit of the universal rights they delineated in the UDHR.

The American story is woefully incomplete without an account of the injustice perpetrated here and the suffering it has caused. But it is also woefully incomplete in the absence of an account of how ideas about unalienable rights articulated at the time of the founding became an engine driving the pursuit of justice here and throughout the world.

This article was originally published on July 24th, 2020 in USA Today

What Guyana Needs Now

Any victory for democracy and the rule of law is worth celebrating in these tumultuous times, no matter how small the country. Guyana, an English-speaking country of 780,000 on the north Atlantic coast of South America, is now facing a moment of truth. Will its governing party acknowledge that it lost in a free and fair election and proceed to a peaceful transition? Or will it resist, plunging the country into chaos? 

Guyana is one of the poorest countries in South America, but that could change dramatically thanks to the discovery in 2015 of vast oil reserves off its coast. ExxonMobil is leading a consortium that hopes to be pumping 750,000 barrels a day from Guyana’s waters by 2025. 

Oil revenue on such a scale, however, can be either a great blessing or a curse. The key question is whether the political leadership of such a country directs the financial benefit to the people, or uses it to line the pockets of cronies and rent-seekers. 

Indeed, the “natural resource trap” is one of the four major obstacles to development that economist Paul Collier identified in his seminal book The Bottom Billion. Following independence from the United Kingdom in 1966, Guyana’s miserable economy had all the characteristics of post-colonial dependency on agriculture and natural resources under an increasingly authoritarian leader. 

The best way to ensure that the oil windfall benefits the Guyanese people is the presence of an accountable, democratic government. Guyana has been conducting free and fair elections since 1992. Freedom House ranked it 75 on its 100-point scale for 2019, a free country with the same score as India, praising Guyana’s “regular elections, a lively press, and a robust civil society.” 

In December of 2018, the government of President David Granger suffered a no-confidence vote in the National Assembly, and Guyana’s politics started to go a little wobbly. It took until September of last year, following a court battle, for Mr. Granger to call a new election, which took place on March 2 this year. 

The balloting and counting proceeded freely and fairly, in the view of election observers from the Organization of American States and others. But the tabulation of results did not. Observers noted lopsided margins reported in favor of Granger’s party in Guyana’s populous District 4. 

The opposition, led by Irfaan Ali, cried foul and was backed by international observers. Guyana’s High Court tossed out the tabulated results and ordered a recount. In May, the recount finally got underway, with the District 4 results coming in on June 8.

It turned out that the initial reporting was clearly fraudulent. Granger’s APNU+AFC party was credited with nearly 20,000 more votes than it actually received, and Ali’s PPP with about 3,600 fewer. Correcting the fraud makes Ali the close but clear winner — by about 15,000 votes, in an election in which 456,000 were cast nationwide.

The result is now clear, and earlier this week, the Guyana Elections Commission rightly announced its willingness to declare the winner based on the recount tabulations. Notwithstanding new and baseless complaints by the ruling party about the balloting back in March, there is no need for the Commission to delay the formal declaration. 

Granger must then proceed to an orderly transition, for which he will receive credit both at home and internationally. If Granger refuses, his government will certainly face international condemnation and sanctions that could jeopardize Guyana’s oil windfall. 

Guyana’s two main political parties have an ethnic component. APNU+AFC is largely made up of Guyanese of African descent, whereas the PPP is mostly Guyanese whose ancestors came from India. There is always potential for conflict in such circumstances. But Guyana’s democratic ways have gone a long way toward keeping tensions under control. 

Granger must act now in a way that upholds this peace. The people of Guyana stand to gain much in the years ahead, and it will be to their own immense credit as citizens of a peaceful, democratic country.

This article was originally published on June 18th, 2020 in Real Clear World

‘The Abandonment of the West’ Review: How a Civilization Ends

For all its achievements, the West was increasingly faulted for its deficiencies at home and abroad.

The very title of Michael Kimmage’s work of intellectual history—“The Abandonment of the West: The History of an Idea in American Foreign Policy”—comes with a shock of recognition. Why, yes, who in the realm of foreign policy now speaks of “the West”? It’s gone. Where did it go? Come to think of it, we more or less abandoned it, didn’t we?

Intellectual history is a tricky genre. In addition to describing what human beings have done, it attempts to discern what people have thought about what they were doing as they did it: how their conceptualization of the world around them shaped them. To try to make sense of this, historians examine what people have said. But there’s no escaping the problem of things that go without saying: the unspoken context of the times, often little understood by those operating within its confines.

Checkpoint Charlie in 1961. Photo: Alamy

Mr. Kimmage rightly believes that he has hold of one of the most important concepts of the previous century, the idea of the West, and capably traces its evolution and context. He purports to limit himself to its role in shaping U.S. foreign policy, but in truth he ranges more widely. He writes with keen observation, for example, on the proliferation of neoclassical and neo-Gothic architecture in the United States after the 1893 World’s Columbian Exposition in Chicago—part of America’s renewed involvement with Europe. He also draws on such African-American thinkers as W.E.B. Du Bois and James Baldwin, not primarily for their critique of American foreign policy but for the insight arising from their sense of being in the West but not entirely of it. The history of the idea of the West is also, as Mr. Kimmage shows, a history of the critique of the West.

The book proceeds more or less chronologically, charting first the rise of the modern idea of the West through its Cold War heyday; then the emerging critique of the West; and finally its dissolution into the universalism of the “liberal international order.” 

In what Mr. Kimmage calls the Columbian Republic—the period that followed what Frederick Jackson Turner described as the closing of the American frontier at the end of the 19th century—the U.S. began to turn away from its own westward expansion and actively cultivate its European connections, including the shared inheritance from Greece and Rome. The U.S., at last a global power, was at the forefront of Western civilization, or perhaps of civilization as such.

The second period takes us from 1919, when Woodrow Wilson failed to win congressional approval for American membership in the League of Nations, through 1945, the point at which the global dominance of the United States became apparent to everyone. World War II, Mr. Kimmage argues, was fought not only to defeat Hitler but to expunge the fascist blight that had overtaken two historical centers of Western civilization, Berlin and Rome. The war that the U.S. waged on Nazi Germany was fierce and brutal—yet not as brutal, Mr. Kimmage notes, as the war against Japan, a land not of the West. 

In the years before and after the war, America’s leading universities designed programs in Western civilization, unapologetically designating the great books to be read by students so they could understand their place in it. Mr. Kimmage bookends the next period of his history with the publication of William McNeill’s influential “The Rise of the West” in 1963. By that date, the West also stood in opposition to the East, the communist bloc behind Moscow’s Iron Curtain. This East-West dimension persisted through the end of the Cold War.

But, notes Mr. Kimmage, McNeill’s book actually appeared “at the end of an era in American politics and foreign policy.” The triumphalist view of the West found itself being increasingly interrogated for its deficiencies at home and abroad—for racism, imperialism, colonialism and what Columbia University’s Edward Said, in the late 1970s, would identify as “Orientalism,” the patronizing and dismissive Western view of other cultures. The Vietnam War began under the guidance of “the best and the brightest” (as David Halberstam dubbed them) from elite universities that were on board with the progressive purposes of the U.S. government. By Said’s time, the universities had emerged as bastions of a thorough critique of the West and its leading power, the United States.

Partly because of the increasing weight of this critique, and partly because of the breakup of the Soviet Union in the early 1990s, the utility of speaking of “the West” in foreign policy reached an end. Mr. Kimmage finds it telling that President George W. Bush felt a need to walk back his use of the term “crusade” to describe the coming U.S. response to the 9/11 attacks. It was deemed insensitive, and even the leading architects of the “Global War on Terror” drew a line at insensitivity of this sort.

It is to Mr. Kimmage’s immense credit that he manages to maintain a firm hold on two ropes pulling in opposite directions: First, critics of the West were right about many matters that had previously been ignored or played down: the history of racism and disregard for women, the settler genocides and imperialism. Second, the West got a number of big things right as well: in the realm of foreign policy, the need to defeat fascism, to resist communism, and to promote (however inconsistently and imperfectly) the spread of freedom. There was as well the emergence of a vision of political life based on mutual respect—a proposition that contains within it a basis for the criticism of existing practice and therefore self-improvement. 

Racism and conquest have been ubiquitous in politics. The political wherewithal to call them out and try to overcome them has not. A frank acknowledgment of Western shortcomings, past and present, as Mr. Kimmage demonstrates so persuasively, makes sense only in the context of an appreciation of the singular Western contribution to human flourishing.

This article was originally published on April 24th, 2020 in The Wall Street Journal

The Return of the State

The main narrative lines about trends in international politics in the past 40 years, and especially since the end of the Cold War, have converged around the proposition that the scope of independent state action has been diminishing. “Interdependence” is the word used to describe the condition of international politics, and improvement in “global governance” is considered the normative objective. Now, however, as COVID-19 has become top priority for political leaders worldwide, it has become apparent that action by national governments and their local jurisdictions is very nearly the sole vector of meaningful response to the pandemic. The state is front and center once again.

Let’s undertake a brief survey of ideas about international politics in which the scope of state action was seen to be in decline and see how well they are faring in this, our plague year.

Globalization is real, and the theorizing about “complex interdependence” that has accompanied it has substantial validity. Globalization has mostly been supported by states in the interest of economic growth—which in turn increased the global middle class from 1.8 billion people in 2009 to 3.8 billion, half the world’s population, before the arrival of COVID-19.

Yet the interdependence of states in a globalized world offers little going forward in response to the pandemic. While globalization offers efficiencies in the satisfaction of aggregate demand in good times, in a bad time such as this come the revelations that supply chains are long and fragile, and that the pursuit of comparative advantage comes at the expense of self-sufficiency in critical materials such as protective masks and medications. States now compete with each other to acquire such goods and look to national means to boost production within their borders.

The liberal internationalism of the post-World War II era produced such institutions as the United Nations system (including the World Health Organization) and the General Agreement on Tariffs and Trade, the precursor to the World Trade Organization—some of the earliest entities of what would come to be called global governance. But who now travels in pursuit of effective action against the pandemic to United Nations headquarters in New York City? That’s not just because COVID-19 is raging in New York; it’s because whatever the strengths of the United Nations may be, they are irrelevant here. Instead, in March, U.N. Secretary-General Antonio Guterres issued a statement calling on the Group of Twenty (G-20) governments of leading global economies to develop a coordinated strategy, what he called “a ‘war-time’ plan.” To call on others to develop a war-time plan is to acknowledge that one is not oneself a war-time planner.

Many have shared Guterres’s aspiration for effective leadership from the G-20 and the Group of Seven (G-7), the world’s largest advanced economies. The two organizations did meet virtually to address the crisis. While the Trump administration blew up a joint communique of the G-7 by insisting that it refer to the “Wuhan virus,” which offended the other members, even more noteworthy was how little a joint communique would actually have done to spur action, let alone coordinated action. As noted by Barbara Martin, a fellow at the Canadian Global Affairs Institute and a strong proponent of an increased role for the G-7 and G-20 in global governance, the March G-7 meeting “clearly focused on itself”—not the needs of others, especially the developing world, which at that point had mostly yet to feel the fury of COVID-19.

As for the World Health Organization, it is a repository of data and is supposed to be the global whistleblower when a threat to international public health arises. And so it has been, for example in the 2009 case of the H1N1 outbreak. On April 15 that year, a California patient took ill with a new influenza virus. On April 17 came a second case. The day after, April 18, the federal Centers for Disease Control (CDC) notified the World Health Organization (WHO) of the outbreak. On April 25, a mere 10 days after the first case emerged, the WHO declared a public health emergency of international concern.

Not this time. The first Chinese victims of 2019’s novel coronavirus began showing symptoms in the Wuhan region as early as November. It was already December 30 by the time a doctor at Wuhan Central Hospital blew the whistle on a new coronavirus in a chat room. Only the day after word leaked out in this way did the Chinese government inform the local WHO office. Through much of January, the WHO accepted and repeated the false Chinese claim that there was no evidence of human-to-human transmission. Not until January 31, when human-to-human transmission was obvious and undeniable, did the WHO declare an emergency—more than six weeks, rather than 10 days, after the outbreak.

Whether or not the WHO acted corruptly under China’s undue influence, as some charge, it was certainly useless in raising a timely warning as some 7 million people left Wuhan in January, the month of Chinese New Year celebrations. And who now looks to the WHO for guidance on what to do? The conduct of the WHO here is an example of the failure of global governance, but it serves even more to illustrate its limits. It’s not that there is no such thing as global governance, nor that global governance is doomed to be ineffective. But national health organizations, not the world’s, are now driving the response.

For many Europeans, as shocking as COVID-19 itself has been the absence of value-added from the European Union to European countries fighting it. The EU has faced significant challenges before, from the Eurozone sovereign debt crisis in 2009 to Brexit. It has weathered them, and has continued to deliver an extraordinary set of benefits to members: a common market with common regulations; free movement of goods and people, including the right to live and work in a country not one’s own; opportunity to join a currency union; and for the less prosperous of the EU’s members, typically in Central and Eastern Europe, transfer payments from Brussels (meaning from wealthier EU countries).

The new coronavirus is so vastly greater a challenge that it has obliterated many of these supranational advantages. With commerce largely shut down, a common market means little. With “stay at home” orders, freedom of movement has been curtailed within countries, and national border controls to contain the spread of the virus are up again throughout Europe. Whatever transfer payments some countries are receiving (which run no higher than 3 percent of GDP anywhere in the EU; that’s for Lithuania, about $500 per person per year) are now being overwhelmed by national economic shutdowns and national fiscal responses, such as Germany’s trillion-euro stimulus. A common currency is most useful when there is travel and commerce among the states that are members, and these have shut down as the European Central Bank is busy pumping liquidity into the system. A bidding war broke out among EU member-states seeking medical supplies as the European Commission, the EU’s executive authority, watched from the sidelines. In all, Brussels seems largely irrelevant to what is most on the minds of presidents and prime ministers in Europe these days.

The same is true of the Atlantic alliance. Although some have proposed to enlarge NATO’s writ to encompass pandemic response, this seems a fanciful project for what is, at its core, a military alliance grounded in American power. Among NATO allies, the ones who were meeting (or on the way to meeting) the common commitment to devote at least 2 percent of GDP to defense are the ones who place high value on the American security commitment. Likely, they still do. And while post-Cold War NATO has often characterized itself as an alliance of “shared values,” some of its newer members, especially, have always seen their membership primarily in terms of security of their own countries against the potential threat from another state, namely, Russia.

But notwithstanding the UN Secretary-General’s talk of “war-time” planning, the virus is not an armed attack triggering treaty obligations. The United States is going to take care of (sorry) America first, Germans Germany, Lithuanians Lithuania. U.S. allies are a great asset in general, but with regard to COVID-19, they matter little. The North Atlantic Council, NATO’s decision-making body, will have nothing to contribute on the pandemic ravaging NATO’s territory, and were Barack Obama or George W. Bush president now, it’s hard to imagine either seeing NATO as a useful vehicle for dealing with it.

Indeed, around the world, the biggest topic of conversation besides what is happening locally is exactly about the role of one particular state: China. Whether it centers on recriminations for China’s early handling of the new coronavirus, or on how China got COVID-19 effectively under control (or whether it really did), or on the assistance China is providing to other countries, it’s a conversation that befits a rising power in a system of states.

Rather than looking upward from the national perspective to international institutions, the attention of most Americans seems to be focusing downward—to state and local governments and how they are responding. What’s true in Louisiana is true in Lombardy. One lives one’s lockdown locally.

The most noteworthy area of effective international cooperation has been among central bankers trying to ensure that the pandemic does not produce a global financial crisis as well. The World Bank and IMF will also be relevant in delivering aid and financial facilities as the pandemic spreads in the global South. Modern central banks, from the U.S. Federal Reserve to the European Central Bank to the Bank of England, have authority to operate largely independently of the political leadership of their states (or of eurozone governments). Finance is government at its most technocratic, and this is certainly to the good of national economies and the global economy.

But globalization as trade in goods and services, “complex interdependence,” most international institutions, supranational and global governance structures, alliances—all of these are receding in salience as national governments scramble to muster their own resources to fight COVID-19 within their borders. The rise of populist and nationalist sentiment around the world has usually been associated, correctly, with right-wing politics. The COVID-19 “re-nationalization” of international politics is anything but. It enjoys full backing from mainstream politicians and technocrats alike. The administrative capacity to tend to the needs of populations residing in the developed world exists at the state level and no other.

Indeed, one way of looking at the competition among states since the Russian Revolution in 1917 has been as a contest between the administrative state associated with market-based democracies and the one-party state of authoritarian or totalitarian governments. It’s bureaucracy versus party. This competition persists, and while many analysts have lately been giving China’s one-party state governance the edge in efficiency at delivering public goods and services, such as responding to a pandemic, it’s far too easy to dismiss the advantages that arise from the relative openness of administrative-state governance in market-based democratic countries.

States with weaker national governance are likely to suffer greatly in this pandemic—though perhaps their generally younger demographic profiles will provide them some relief from a disease that has been more lethal to older people. The median age of the EU population is just under 43; in Africa, 18. But it’s certain that weak health care systems will be overwhelmed. And it’s an open question whether the traditional donor conferences among wealthy states will convene, given the demands on the home front.

This dramatic return of the salience of the state does not imply that there is no place for international institutions and internationalist ideas and normative aspirations in a world of sovereign states. There certainly is. The central bankers prove as much right now. And once the crisis passes, patterns of cooperation will re-emerge. This international cooperation has helped and will again help keep the peace and increase global prosperity. But international organizations and ideals are not, in fact, supplanting the state as the preeminent form of political organization, and it would be wise to tailor our expectations accordingly.

Some things only states can do. Mustering a response to a global pandemic is one of them. Whether the response goes well or badly is a separate question.

This article was originally published on April 15th, 2020 in Commentary

Moral Responsibility and the National Interest

In his 2011 Presidential Study Directive 10, Barack Obama declared, “Preventing mass atrocities and genocide is a core national security interest and a core moral responsibility of the United States.” He sought, in this area of humanitarian concern at least, the unity of moral responsibility and national security interest in a policy of prevention. He briefly elaborated his reasoning as follows: “Our security is affected when masses of civilians are slaughtered, refugees flow across borders, and murderers wreak havoc on regional stability and livelihoods. America’s reputation suffers, and our ability to bring about change is constrained, when we are perceived as idle in the face of mass atrocities and genocide.”

It seems unlikely that Obama’s rhetoric here did much to persuade anyone who was not already convinced about the importance of humanitarianism—that taking action to prevent mass atrocities is a sound priority for U.S. policy, indeed, a “core” priority. To say “our security is affected when masses of civilians are slaughtered” is merely to restate the proposition that prevention is a national security interest, which it may or may not be. That is the question. When “refugees flow across borders,” presumably fleeing violence, this could clearly affect U.S. national interests in some cases—but not necessarily in all cases. Atrocities certainly “wreak havoc” on local stability and livelihoods, but they may or may not have regional effects the United States is obliged as a matter of national interest to care about.

As for the reputational damage that idleness in the face of mass atrocities supposedly causes the United States, that would seem to hold mainly for those who already believe the United States should take action. The moral authority of the United States as an opponent of genocide and mass atrocities would indeed be compromised as a result of a failure to take preventive action when possible. But whether the United States has or should seek such moral authority is another question. If you have concluded that the United States has neither a moral responsibility to act to prevent atrocities nor a national interest in doing so—either in general or in a specific case—then you are likely to be willing to ignore claims about damage to your reputation coming from those who disagree with you. As for idleness constraining the ability of the United States “to bring about change,” how does it do so? One could argue—in fact, many do argue—that refraining from unnecessary humanitarian military intervention keeps America’s powder dry for those occasions when the use of force is necessary according to criteria of national interest.

President Obama, in short, was preaching to the choir—those who share his view about the “moral responsibility” of the United States to take action. That’s not necessarily a bad thing to do, but it offers little to those who would like to understand why the prevention of atrocities by and against others is something the United States must undertake as a matter of national interest. Obama offered no more than an assertion of national interest, leaving us with a humanitarian imperative one could accept for moral reasons or decline for practical ones—reasons of state, i.e., national interest. 

Worse, Obama formulated his statement in such a way as to evade perhaps the hardest question arising out of this consideration of American moral responsibility and national interests: What happens when taking action to prevent atrocities actually conflicts with perceived U.S. national interests? This contradiction was nowhere more apparent than in Obama Administration policy toward Syria, where the prevention of atrocities came in second to the national interest Obama perceived in avoiding American involvement in another Mideast war.

Now, one could perform a rescue mission on Obama’s rhetoric by noting that he claimed preventing atrocities was “a” core national security interest and “a” core moral responsibility—not “the.” His statement thus implicitly acknowledges other such “core” interests—which, of course, he left unspecified. Presumably, these core interests and responsibilities may at times conflict with each other, and in such cases, Obama has provided no guidance on how to resolve the conflict. A cold-eyed realist such as John Mearsheimer could say that national security trumps or should trump moral responsibility in all such cases: There is nothing “core” about moral responsibility when the chips are down. If that’s what Obama was really saying, then one could chalk it up to posturing—a president claiming moral credit for seeming to take a position he has no real intention of backing up with action.

But this is a willfully perverse reading of Obama’s statement. He did not say what he said in order to relieve the United States of all responsibility for taking preventive action with regard to mass atrocities. On the contrary, his intention was plainly to elevate the importance of such preventive action within the government. His statement came in the context of the establishment of a new Atrocities Prevention Board, an interagency body that would meet periodically to assess risks in particular countries and develop policies to mitigate them.

Meanwhile, the signal contribution of the Trump administration to date on the relationship of moral responsibility to national interest has been what one might describe as the cessation of moralizing. President Trump himself stands apart from his modern predecessors in generally eschewing appeals to morality in his public comments, preferring to justify himself by recourse to a kind of callous pragmatism. Yet this tough-guy act entails a bit of posturing of its own. Recall that Trump was visibly moved by children suffering in Syria from a chemical attack by the Bashar al-Assad regime—to the point of authorizing a punitive military strike. Even a document as interest-focused as his 2017 National Security Strategy gestures at strengthening fragile states and reducing human suffering. And at Trump’s National Security Council, Obama’s Atrocities Prevention Board is being rebranded as the more modestly named Atrocity Warning Task Force, but its function appears to be substantially the same.

While putting “America first” seeks to subordinate moral responsibility to national interest, the moral aspect of policy choices never entirely goes away. In fact, the president seems to take the view that U.S. moral authority has its true origin in putting America first—and being very good at it. Without the strength that comes from serious American cultivation of its security interests, moral authority means nothing. And while many argue that Trump is sui generis, it is hard to miss that the current Commander in Chief has tapped into a kind of popular moral fatigue that was already brewing under Obama. 

The tension between moral responsibility and national security interests is real, and it cannot be resolved either by seeking an identity between the two or simply chucking out moral considerations in their entirety. The key to making sense of Obama’s sweeping statement is to view policies of prevention not as either a matter of moral responsibility or national security interest, but always as a matter of both. There is no separable national security argument about how to handle cases in which large numbers of lives are at risk without considering the moral implications of doing so or failing to do so, nor is there a moral argument that can govern action apart from national security interests. As a practical matter for policymakers, the question of what to do always takes place at the intersection of moral responsibility and national security interests.  

Generally speaking, there are three broad categories of ethical reasoning in deciding what one should do. As applied to the United States or other governments, a “consequentialist” perspective asks whether the likely outcome of what we do is good for our country and our friends and bad for our enemies; a “deontological” or rule-based ethical perspective tells us to do the right thing; and a “virtue-ethics” perspective asks us to take action that reflects our values and will reflect well on us as a country. Moral authority, and with it “moral responsibility” of a “core” nature or otherwise, is mainly a matter of the latter two categories of normative reasoning. Perpetrating atrocities is wrong, and those with the capacity to stop it should do so: That’s a general rule of conduct. Because the Declaration of Independence founded the United States on the principles of the “unalienable Rights” of all human beings to “Life, Liberty and the pursuit of Happiness,” Americans should take action where possible to secure those rights for others when they are being violated on a mass scale; our self-respect requires us to take action rather than turn away: That’s a virtue-ethics perspective.

But something else is already implicit in these imperatives, at least at the extreme where the United States contemplates taking military action to halt a genocide in progress. Even in response to genocidal activity, no doctrine or rule can oblige the United States to take action that would have suicidal consequences. Nor does self-respect properly understood ever demand suicidal policies. The Latin maxim “Fiat justitia et pereat mundus”—“Let justice be done, though the world perish”—may be an appropriate point of view for an advocacy group working to promote such desired ends as human rights and accountability for abusers with as little compromise as possible. But the principle is no basis for government policy. The United States would not invade Russia to halt genocide against the Chechens, for the simple reason that the possible if not likely consequence of doing so, nuclear war, would be worse. So consequentialism is already present in the rule-based and virtue-based normative declarations about supposed American moral responsibilities; in the preceding paragraph, it arises in such phrases as “those with the capacity to stop” atrocities and “where possible.”

The implicit consequentialism in even a “core moral responsibility” also rescues the rules- or virtue-based normative arguments from a frequently voiced criticism, namely, that they cannot be consistently applied in the real world. Of course they can’t! Once again, we are reasoning about the most extreme form of political violence: mass atrocities currently under way and whether to take action to halt them. The fact that a military response in one such hypothetical could lead to nuclear war does not mean that a military response in all such circumstances would be devastating. Therefore, the fact that one must refrain from responding in certain circumstances out of prudential considerations does not mean one must refrain in all circumstances. The moral reasoning that leads to the conclusion that one should take action is not rebutted by the conclusion that taking action in a particular case is too dangerous. One should take action where one can take action with due regard for prudence. The moral position does not become hypocritical just because it can only be realized imperfectly.   

Critics often deride “idealism” of the Wilsonian sort (or the Kantian sort) as wildly impractical. So it can be. But if we consider consequences at the opposite end of the spectrum from nuclear war—that is, where the risks of taking action are negligible to the vanishing point—then we still face a moral question about whether or not to act. And in such circumstances, why wouldn’t the United States act? 

Here, a useful analogy is the response to a natural disaster abroad: an earthquake or a tsunami. Offering humanitarian assistance through the U.S. military, for example, is not entirely without risk, but in many circumstances the risk is minimal—perhaps little more than the risk accompanying routine training missions. Because we place a value on preserving human lives, we extend the offer. This is “idealism” in action as well. An American president would be hard-pressed to give a speech to the American people explaining why the United States has no reason to care about lives lost abroad to a natural disaster and no reason to extend assistance to survivors to prevent further loss of life.

Most policy questions, of course, arise in the context of risk to the United States that falls between “negligible” on one hand and “nuclear war” on the other. When atrocities are “hot”—ongoing or imminent—the United States must and will always weigh the risks of taking action to stop them. President Obama’s statement in no way obviated such a necessity. Nor will the calculation ever be strictly utilitarian in the sense of the greatest good for the greatest number: Any American president will put a premium on the lives of Americans, including members of the armed services. To make these common-sense observations is to lay the groundwork for a conversation about whether the United States may be prepared to intervene in a particular situation involving atrocities. There is no “doctrine” that can provide the answer to the question of risks in particular circumstances. Policymakers must and will assess risk on a case-by-case basis.

Moreover, there is no reason to believe policymakers will all arrive at the same conclusion in a given situation. There will likely be disagreement over most cases between the extremes, and some of it will be the product not of differing technocratic assessments of risk but of the predispositions policymakers bring to their positions. These predispositions may be ideological in character or may simply reflect different tolerances for risk affecting individuals’ calculations. Observers often say that the decision to take action is a matter of “political will.” That’s true, but it’s not typically a matter of conjuring political will out of a climate of indifference. Rather, it entails acting against the resistance of opposing political will. When it comes to questions of humanitarian intervention, there will almost always be stakeholders in favor of inaction. Their justification is unlikely to be articulated on the basis of indifference to the lives at risk. Rather, it will be based on the contention that the likely cost is too high—a consequentialist argument based on national interest.

In practical cases, the only thing that can overcome such consequentialist calculation is a moral case. We have seen that such a moral imperative is implicit in the modern reading of the founding principles of the United States—the “unalienable Rights” human beings have, starting with life, liberty, and the pursuit of happiness. It is no refutation of these principles to note that governments often fail to respect them. Governments should respect them and should enact laws and enforce policies that protect individuals from abuse. Ultimately, the utility of Obama’s statement was not in establishing the “national security interest” of the United States in preventing atrocities as such, but in reminding policymakers of the moral questions involved in and underlying national interests. 

Steps toward the institutionalization of this moral perspective—to create within the U.S. government redoubts where officials begin with the presumption that lives are worth saving—are a welcome addition to policymaking processes that often find it easier to pretend that national security interests are indifferent to moral considerations. Obama’s Atrocities Prevention Board represents such an addition. Another addition was Obama’s requirement for the Intelligence Community to make assessments of the risk of atrocities in other countries. Congress has weighed in with the near-unanimous passage of the Elie Wiesel Genocide and Atrocities Prevention Act, which President Trump signed into law in January. The legislation included support in principle for a dedicated interagency prevention process such as the APB, mandated training for select Foreign Service Officers, and required the executive branch to report to Congress on prevention efforts. All of these initiatives are ways of affirming that the human element matters in considerations of national interest, just as the imperatives of national interest in some cases circumscribe acting out of the best of moral intention.  

And it is here, finally, that the most important aspect of the prevention/protection agenda comes into sharper relief. So far, we have mostly been looking at hard cases: where atrocities are ongoing, perhaps on a mass scale, and the only plausible way to halt them is through military intervention, with all the risk it entails, and perhaps also with no certainty of success. We have contrasted such an extreme “nuclear war” scenario with nearly risk-free humanitarian disaster relief, while noting that most “hot” atrocity situations fall somewhere in between.

In all these cases, we have been analyzing situations involving ongoing atrocities. A “prevention” policy in such circumstances entails at best prevention of further loss of life—a worthy moral goal subject to prudential calculations of risk. But a “prevention” agenda is much broader. Its purpose is to prevent atrocities, or conflict more broadly, from breaking out in the first place. This process has two elements: first, identifying countries at risk; second, devising and implementing policies to reduce that risk.

Once again, consideration of the matter through the perspective of a natural disaster is illustrative—in this case, an epidemic. Ebola is an often-deadly infectious disease, an uncontrolled outbreak of which could consume lives on a large scale. Everyone knows this. We also have a general idea of where the disease might break out. Medical doctors and epidemiologists have studied the problem and devised strategies to contain an outbreak. They have briefed public health officials in countries at risk and have sought to overcome any self-interested resistance to acknowledgment of the problem and the need for planning and training to cope with an outbreak. 

Political violence is, of course, more a matter of human volition than an epidemic is, but some of the structure of the problem is similar. To the extent political conflict has the potential to turn violent but has yet to do so, it resembles a known deadly pathogen to which human beings will fall prey, potentially in large numbers, should it break out. But human beings are not helpless in dealing with such a problem. They need not wait idly by, leaving the possibility of an outbreak to fate. There are strategies for containment that have proven effective in past cases. There are best practices in terms of the standard of medical care, and there are protocols for caregivers to reduce the possibility that they will contract the disease from the patients they are treating. What is more, all involved are aware of the acute danger of the problem, the need to address it seriously, and the importance of learning lessons from past successes and failures.

All of these elements are present in cases of potential conflict, including conflict of the worst sort—mass atrocities and genocide. A fundamental requirement is expertise—first, the equivalent of epidemiologists, individuals skilled in identifying potential sources of and precursors to conflict and political violence, along with strategies to prevent or contain them; second, the equivalent of doctors, those who implement and further refine policies designed to address the sources of risk and to disrupt the precursors to violence. This requirement entails the commitment of resources to developing and coordinating expertise as well as to implementing prevention policies.

A second major requirement is the willingness to put the expertise to use—the political question we have been assessing throughout. With regard to “prevention” in the sense in which we are now taking the term, however, we are no longer looking at the possibility of putting Americans and others in harm’s way, or at least no more so than the risk faced by U.S. diplomats operating in challenging countries. The “risk” is simply the opportunity cost of resources devoted to the cause of prevention—the sacrifice of whatever else might be paid for with dollars going to prevention efforts.

Moreover, just as public health officials look at the dangers of failing to contain an outbreak of disease—another aspect of risk—so policymakers dealing with prevention must look at the potential costs of the failure of prevention efforts. The worst toll of violent conflict comes in the form of human lives lost, but that is hardly the only cost. The resources required to halt a conflict once it is under way and promote reconstruction and reconciliation in its aftermath are vastly greater than the sums required to assess risk and devise and implement prevention policies—a point well understood with regard to epidemics.    

Epidemics don’t respect national borders, and neither do some conflicts and potential conflicts. Many potential conflicts, however, are specific to particular countries, and it is unsurprising that policymakers tend to view them through this prism. The history of international politics is more than the sum of all bilateral interactions between governments, but that sum represents the largest component of that history. Organs of the U.S. government that deal with foreign affairs—from the State Department to the Defense Department to USAID to the Intelligence Community to the National Security Council and beyond—are typically organized by region and then by country, as is the interagency process by which they seek to coordinate their efforts to create government-wide policy. The cumulative effect of this mode of organization is to create substantial reservoirs of country expertise inside the government, and this is a very good thing.

But to deal with a potential epidemic, you need not only experts on countries in which epidemics may break out. You need a means of identifying which countries are at greater or lesser risk of epidemic—which means you need experts on epidemics and their causes, or epidemiologists. And you need experts on what to do if an epidemic does break out, public health experts and responders—expertise that cannot wait to be assembled until an outbreak actually occurs. And you need experts in how to treat the victims of the epidemic—doctors. In the case of political violence and conflict, you need expertise in identifying potential steps to address drivers of conflict and to disrupt precursors, a task conceptually similar to the development of a vaccine. And of course, once you have these expert human resources in place, you really do need country experts to provide the necessary local context and to adapt general principles and policies to specific cases.

These are the prevention resources the U.S. government needs—and has taken some strides to develop, however incompletely and perhaps without as much systematic attention as the magnitude of the challenge requires. The question of a “national security interest” in prevention in this sense is really nothing other than the question of why we have a government that deals with foreign capitals, regional and international organizations, and other entities abroad at all. The United States operates about as remotely from Star Trek’s “prime directive” as is imaginable. “Observe but do not interfere” is not the American métier (nor, for that matter, was Captain Kirk much good at observing the prime directive on behalf of the United Federation of Planets; his moral sense kept coming into play).

There are numerous ways of conceptualizing aspects of or the totality of the problem of political violence and its prevention. Some of them, in fact, have developed into rich policy niches offering great practical insight and guidance. We began here with an area I have spent some time on over the past 15 years, namely, trying to improve the ability of the U.S. government to prevent genocide and atrocities, as well as to improve international coordination of such efforts. But that’s just one aspect. To name a few more: conflict prevention, post-conflict stabilization and reconciliation, peace-building and promoting positive peace, pursuing the Millennium Development Goals and promoting sustainable development more generally, promoting resilience, capacity-building in fragile states or failed or failing states, promoting human rights, promoting effective governance, halting gender-based violence and promoting gender equity, countering violent extremism, human protection, the responsibility to protect (R2P), and promoting accountability as deterrence.

Practitioners in these areas may object, but I would submit that none of them is quite as advanced in its mastery of the policy questions on which it focuses as epidemiologists, doctors, and public health officials are on the problem of preventing an outbreak of a deadly infectious disease. This is not a criticism, but an acknowledgment of several major differences: First, humanity has had a clear idea of the problem of epidemics for centuries, whereas many of the concepts under the “prevention” umbrella listed in the paragraph above are of fairly recent origin as policy matters. Second, as noted previously, though human and governmental responses to a potential epidemic are volitional in much the same way prevention policies are, the diseases themselves are not; political violence or conflict is always volitional, and human motivations are complicated. Third, the downside risk of an outbreak of a deadly infectious disease is vivid in the public imagination and therefore obviously something worth the serious attention and resources of governments, international and regional organizations, nongovernmental organizations, and expert individuals. Conflict and political violence, prior to their eruption in actual killing, are murkier possibilities.

The prevention agenda is also hobbled by what we must acknowledge as an epistemological problem: How do you prove something did not occur as a result of a given policy choice? The failure of prevention is easy to discern, from Rwanda to Syria. Success is more elusive.

From my point of view, a serious examination of the evidence demonstrates that NATO’s military intervention in Kosovo prevented ethnic cleansing, mass atrocities, and possibly genocide. But this was a case where the atrocities were already under way and subsequently stopped. Likewise, a dozen years later, NATO’s Security Council-authorized bombing campaign in Libya prevented the forces of ruler Muammar Qaddafi from wiping out the opposition in Benghazi and exacting the reprisals he threatened on its civilian supporters.

But some scholarship has questioned whether Qaddafi really intended to engage in mass atrocities, and thus whether the intervention was necessary on its own terms. And, of course, following the fall of Qaddafi, Libya rapidly sank into violent chaos, at considerable human cost, a situation neither NATO nor its member countries, including the United States, were prepared to prevent. Interventions on this scale are rarely simple and one-off: Save the civilians in Benghazi. Intervening sets in motion a chain of events whose end is rarely known at the beginning.

These, of course, are the “hot” cases. The broader prevention project we are considering here aims to address causes of and derail precursors to political violence well before it breaks out. Its activities take place farther “upstream” from such potential violence. That just makes the success of prevention harder to demonstrate.

Harder, but not impossible. And the farther upstream one is from actual violence, the more closely prevention policies seem to resemble each other across the broad conceptions of the prevention of political violence enumerated above. Once violence has broken out, perspectives may diverge; debate still persists over whether bombing Nazi death camps in 1944-45 would have saved lives in sufficient number to justify diverting resources from the military objective of winning the war as quickly as possible. But in the context of a list of countries ranked by risk of bad potential outcomes in the future, the policy interventions from differing perspectives are similar. Potential ethnic conflict, for example, has shown itself open in some cases to amelioration by programs fostering dialogue between relevant groups, and this is true whether the stated priority is conflict prevention, human rights, gender, or societal resilience. 

The fundamental point with regard to a prevention agenda is that a policy toward country X is incomplete without one. The U.S. government, as a matter of national interest, conducts assessments of political risk worldwide, typically at the country and regional level. Two conclusions follow: First, the risk assessment should draw on expertise across the full spectrum of the ways in which political violence manifests. Second, the purpose of risk assessment isn’t merely to avoid surprises; it’s to try to avert potential bad outcomes that are identifiable now. 

That we are so far able to do so only imperfectly, with regard both to our processes for identifying and ranking risk and to our policies for mitigating it, is no reason to pretend we don’t care about this moral aspect of our national security interests. It’s reason to work to get better at the task.  

This article was originally published on August 29th in The American Interest

George F. Will and Testament

Review of ‘The Conservative Sensibility’ by George F. Will

The publication of George F. Will’s new book, his 15th, took place one month to the day after his 78th birthday. He has been writing his syndicated column for the Washington Post, for which he won a Pulitzer Prize in 1977, for 45 years. He has been a regular feature on public-affairs television programs since the days of This Week with David Brinkley, which premiered in 1981. He follows baseball closely enough to have written two bestsellers on the subject. He finished a Ph.D. at Princeton in 1968 and is deeply steeped in the canonical works of political philosophy and Western culture as well as in American history. He has enjoyed the company of Washington political figures from Daniel Patrick Moynihan to Ronald Reagan. Though an adherent of no particular school within the spectrum of conservative opinion, he has long been one of America’s best-known conservatives.   

Columnist, pundit, television personality, scholar, author, newspaperman, bon vivant, aphorist, baseball fan, conservative—in a span that began in Richard Nixon’s America and continues through Donald Trump’s: One eagerly awaits the memoirs of such a man, or so one should.