
Tod Lindberg


The Case for Trump’s War Is the Case for Bush’s War

Tuesday, March 17, 2026

Posted by Tod Lindberg in Commentary


“It’s not 2003.” So say some fervent Donald Trump supporters who are desperate to distinguish the U.S. attack on Iran from the U.S. invasion of Iraq 23 years ago. And since it’s not 2003, “this is not a time for neocons to be spiking the football.” So said Heritage Foundation President Kevin Roberts and, by saying so, inadvertently made the case that the “neocons” he takes to have been responsible for the Iraq War do have excellent reason to spike the football over Iran. The Trump of Roberts’s imagination would never do anything like what the “neocons” of his wild and convenient imaginings cooked up for Iraq. Except that Trump just did.

The United States is now at war with a country whose leaders have been gathering mobs to chant “Death to America” since the Islamic revolution of 1979 and have made good on it ever since by killing Americans within reach of Iran’s power when practical. Let us set aside for now analysis of the curious need of so many to wrench Trump’s bold and surprising decision into alignment with their historically uninformed quest for Marvel-style neocon villains. While we’re at it, let’s set aside consideration of those on the other side of the aisle whose desire for Trump to fail seems more powerful than their desire for their country to succeed. We’re at war, and if it’s too much to ask that the nation unify around the American cause, that’s sad—but such are the times in which we live, and fortunately for all of us, Iran is now in the fearsome hands of the U.S. military.

In truth, it is 2003 again. History rhymes. An American president has had to decide, on the basis of information he has at hand, how to cope with a grave threat to American interests and values. And again, a president has chosen war. Then, it was Iraq. Now, it is Iran. The real surprise is that, geopolitically, the Iraq example has turned out to be a good one for Trump to follow. The dubious origin of that war a generation ago and the epic failure of American-style liberal values to take root in that country have obscured significant aspects of the positive outcome of the conflict. And confusion about the security issues that faced the United States then and that are facing it now is distorting public and elite perceptions of the Iran problem Trump has taken the country to war to address.

When a recent poll of American historians ranked the best and worst American foreign policy decisions of all time, it was a foregone conclusion that going to war in Iraq in 2003 would rank as the worst. And the conclusion was correct. True, 58,000 Americans died in Vietnam, and the American military nearly collapsed in its aftermath; estimates of direct civilian deaths in Vietnam ran as high as 2 million and nearly double that regionally; the United States lost Indochina to Communism, and Saigon fell ignominiously. Contrast that with Iraq, where we lost 4,500 killed, after which the U.S. military emerged more capable than ever; where estimates of direct civilian deaths range to 200,000, a tenth of the toll in Vietnam. We tried withdrawing in 2011 but had to go back after the emergence of ISIS, which killed perhaps 100,000 civilians before we and allies destroyed it and saved Iraq and the world from the deadliest innovation of the 2010s. But there you are. It is difficult to talk about Iraq in polite company—on both left and right—in any way other than with acknowledgments of how disastrous, how calamitous, how ruinous it was.

And we all know why.

To review, the Bush administration erroneously believed that Saddam Hussein possessed large stocks of chemical and other weapons—and that he still harbored an intention to develop nuclear weapons, an intention that dated back to the construction of the Osirak nuclear reactor, which Israel destroyed in 1981. We should not waste time on those who claim the Bush administration was consciously lying when it led us into war. This groundless slander actually works to obscure a complex truth. Leaders can make decisions only on the basis of the information they have at the time. But events that follow those decisions compel us to pass judgment on them in light of additional information gleaned in their wake. “If you knew then what you know now” is an inevitable question in retrospect—but it is meaningless when it comes to real-time decision-making. For Bush, in the post-9/11 context, Saddam’s supposed capabilities and ambitions made him too dangerous going forward not to confront and depose.

The second Iraq problem was the failure to anticipate—and once it was underway, to acknowledge—the gathering insurgency inside the country working in opposition to the U.S. occupation and its efforts to install a democratic government. Vice President Dick Cheney said in 2005 that the insurgency was in its “last throes.” Unfortunately, it was not, and the U.S. military was increasingly vexed by its inability to solve the lethal problem of improvised explosive devices planted along roads. Those bombs accounted for about half of all American casualties.

Like many, I supported going to war in Iraq in 2003 on the grounds that Saddam, his weapons stocks, and his ambitions posed an unacceptable risk. Had we known then that the WMD fears were the product of American and allied intelligence failures, which Saddam could have dispelled but chose not to, most of us would have supported the continuation of the Clinton-administration policy of slapping Saddam back even as he probed the determination of the West to hold fast to the limitations on his actions and choices to which he had agreed as a condition of ending the 1991 U.S.-led war against him. But we would have worried about the long-term viability of keeping him in that box. Support at the UN Security Council for the sanctions regime had begun to erode only a few years after it had been imposed. After 9/11, it was also all too easy to imagine him making common cause with, or being an active participant in, the new and indeterminate terrorist threat to the United States (and of course there were those who believed he had been involved in some way with the attack). He certainly had more than sufficient motive.

At the time, many of us embraced the view that America and its allies would be liberating Iraq from Saddam, a vicious tyrant, and, once liberated, that America would have a responsibility to try to establish a decent government for Iraq’s people to replace the malevolent one we took down.

Critics on both left and right have claimed ever since that we went to war for the misbegotten purpose of bringing democracy to Iraq. This view—in part the product of the foolhardy ex post facto utopianism that flowed from the feckless pens of the Bush White House’s talented but overenthusiastic rhetoricians—gets the sequence of events wrong. In the absence of serious security concerns about the Saddam regime, there would have been no war and hence no “democracy promotion.” If there is no decision to topple the regime first, there are no questions about what you will replace that regime with. Some hoped the Middle East was ready for a wave of change from autocracy to liberalization and democracy. It wasn’t. But the war aim of the military power the United States deployed was first to oust Saddam, not to democratize the region and the world.

The decision to go to war would have been forever vindicated had the U.S. military indeed turned up large stockpiles of chemical and biological weapons—though some might disagree in light of the ensuing insurgency and the cost it inflicted on our troops. In policy circles, a significant number of the war’s initial supporters were ready for a withdrawal by mid-decade as U.S. casualties mounted. They had no stomach by 2007 for Bush’s counterinsurgency “surge.” Yet the surge—in conjunction with the “Anbar awakening,” in which Sunni sheikhs starting in mid-2006 turned against the insurgents of al-Qaeda in Iraq and allied instead with U.S. forces—was a clear success by summer 2008. And the war ended two years later.

If you are George W. Bush, and you took the country to war on the basis of a mistake on the scale of the Iraq WMD intelligence failure, you cannot expect the judgment of history to be other than negative—even though you can honestly claim you made the decision on the basis of what you considered the best information available at the time. At the same time, this negative retrospective judgment offers no real counsel to presidents and policymakers assessing future dangers and making decisions about them. They, too, will have to make incredibly hard choices without perfect information. Trump just did.

And here is the key point when it comes to reassessing our fight in Iraq. Saying history’s judgment of a decision is negative is not the same as saying that nothing positive came of the decision. In Iraq, the United States sought militarily to establish with certainty that Saddam Hussein would no longer be a factor in global politics and that Iraq would have no chemical and biological weapons or the ambition or prospect of developing a nuclear weapon. To return to a notorious phrase, “mission accomplished.” Saddam was out, Iraq was free of WMD, and since the surge brought the war to an end, the government in Iraq has posed no threat of any kind to anyone outside its borders. It is a functioning state, though it is shot through with corruption and tribal tangles and internal squabbling that have so far prevented it from securing a bright future. But it’s off the map when it comes to geopolitical hot spots—following a 30-year period during which Iraq was one of the most destabilizing forces for evil on the globe.

It’s impossible to say what would have happened if the United States had left Saddam in place. Clearly, in the short run, he did not pose a threat as a supplier of dangerous weapons to terrorist actors, as we had feared, because he didn’t actually have those weapons. But that is something we came to know only as a result of toppling the regime; the perceived threat would still have been a huge preoccupation for American and Western leaders. That’s not enough to justify the war, but it adds to an honest understanding of why the war came to be.

Saddam in the longer run would have been an entirely different matter. Having played a malevolent role in Iraq and regionally for decades, he would certainly have sought to continue in it to the extent possible. In a grab for oil, he invaded Iran in 1980. He used chemical weapons extensively from 1983 to 1988 in the Iran–Iraq war, and he used them against his own Kurdish population during his Anfal campaign of 1988. In another grab for oil, he invaded and conquered Kuwait in 1990, and he threatened to use chemical weapons (but didn’t) against the U.S.-led coalition that ejected him in 1991. During that first Gulf War, he also launched dozens of Scud missiles against Israel and civilian targets in Saudi Arabia; because he had used chemical weapons before, Israelis spent the war putting on gas masks in case he had loaded chemical warheads onto his missiles. The fact that, by 2003, Saddam had no stores of chemical or biological weapons was unknown to anyone but himself and whatever “inner circle” he had. The question, which had taken on a new coloration in the aftermath of 9/11, was whether he was too dangerous to ignore, especially if the Security Council allowed the sanctions imposed on him to lapse, giving him more resources.

Saddam was 65 years old when Baghdad fell to the United States. The problems he posed in international politics might have persisted for decades had Baghdad remained in his hands. Instead, those specific problems ended with his regime and his death by hanging three years later. Good riddance.

The Iraq War also had implications beyond Iraq. It was intended in part to scare others out of pursuing and possessing nuclear, chemical, and biological weapons. Under some circumstances, the United States has the military capability to prevent hostile states from acquiring especially dangerous military capabilities. The question of whether such an adversary would actually use such capabilities once it possesses them doesn’t arise if the state doesn’t possess them.

Did this added deterrent dimension of the Iraq War work? There’s evidence to suggest it did.

At the end of 2003—that is, after the fall of the Saddam regime—Libya’s Muammar Qaddafi made the decision to abandon his nuclear, chemical, and biological weapons programs as well as long-range missile development. International inspectors verified their termination in 2004.

Syria also had a nuclear program underway when war broke out. Construction had begun in 2001 on a facility at al-Kibar modeled on a reactor in North Korea that could produce enough plutonium for one or two nuclear weapons per year. Syria insisted the project was not a nuclear reactor at all but a conventional military facility. When reports of a contract with Russia to build a reactor in Syria surfaced in February 2003, during the buildup before the U.S. attack on Iraq, both Russia and Syria hastily denied any such arrangement—which was likely an indication of newfound caution in the wake of America’s declared determination that it would intervene if nuclear threats began to gather rather than wait until it was too late to do so. As the al-Kibar facility neared completion in 2007, Israel bombed and destroyed it. Subsequent inspections found incontrovertible evidence of al-Kibar’s nuclear nature. After its destruction, Syria’s nuclear ambitions went dark, perhaps in keeping with a sense of the heightened risk of proceeding. (Syria did use chemical weapons against its own people in 2013, which Barack Obama had declared a “red line” requiring intervention—a line from which he hastily retreated when the test came.)

Meanwhile, Iran’s nuclear-weapons-development program underwent a shift in 2003. The mullahs dispersed its elements and moved it underground. A 2007 U.S. National Intelligence Estimate found that 2003 was a turning point—a conclusion subsequently confirmed by the release in 2008 of internal Iranian documents by the chief inspector of the International Atomic Energy Agency. Those documents offered information about Iran’s pre-2003 “Project Amad,” a detailed plan to develop nuclear weapons and configure them as missile warheads. In 2018, Israel’s Mossad seized another cache of nuclear records further describing the Amad project’s ambition to produce a small arsenal by the early 2000s. That didn’t happen. In short, while Iran by no means gave up its nuclear ambitions and programs after the U.S. took down Saddam’s regime, Iran’s leaders understood that their pursuits entailed greater risk in light of Bush’s determination to deal with the nuclear threats before they took full root. The shifts they felt they had to make likely slowed their progress.

Then there is the case of North Korea. The Kim regime’s pursuit of a nuclear arsenal had been underway for decades by 9/11, and Pyongyang was getting very close. The United States made that clear in 2002, when Washington openly announced we had been played for suckers—that a decade of Western bribes (called the “Agreed Framework”) paid to North Korea to prevent nuclearization had failed. Given that fact, and the fact that in 2003, the United States had successfully ousted Saddam, North Korea did the opposite of Libya. It rushed ahead, and by 2005 openly announced it had achieved nuclear-weapons capability—then conducted a successful underground test in 2006. The Bush administration did not act. It had its hands full with Iraq. But there was a unique feature of the situation on the Korean peninsula: the thousands of conventional artillery pieces and rockets the North has kept at the ready for decades, aimed at South Korea’s capital, Seoul, which is less than 40 miles from North Korean territory. The United States was thus conventionally deterred from military action to halt or slow North Korea’s nuclear-weapons program.

The prospect of conventional weapons deterring the United States from attacking an aspiring nuclear-weapons state is a good vantage point from which to return to the Iran of 2026. Iran’s ability in the days following the U.S.-Israeli attack to fire off barrages of missiles and drones is an indication of where the problem of the Iranian nuclear-weapons program was headed: in the direction of a conventional Iranian deterrent to the ability of the United States or Israel to do anything about it. Both Secretary of State Marco Rubio and Prime Minister Benjamin Netanyahu specifically said that Israel had determined it had to strike when it did because Iran’s increasing conventional short-range missile capacity would soon make such an attack too dangerous—the North Korea problem.

The United States had considerably damaged the Iranian nuclear-weapons program with its June 2025 attack, in conjunction with Israel, on Fordow and other nuclear facilities. And the United States and Israel could perhaps have continued to strike as necessary while Iran built replacement facilities over time. But not indefinitely if Iranian conventional capabilities continued to increase rapidly. As of late 2025, the path to an Iranian nuclear arsenal no longer ran underground but through the skies, in the form of missiles and drones.

That Iran is pursuing nuclear weapons is not in doubt—on the strength of vastly more evidence than the intelligence case against Saddam Hussein. Iran’s single-mindedness in its quest is unique in international politics. Its threats to wipe out Israel have been nonstop, and they extend to the “Great Satan,” the United States. Through its direct and indirect actions against U.S. interests—whether supplying sophisticated roadside bombs to insurgents in Iraq or its support for numerous Middle East malefactors from Hamas to Hezbollah to the Houthis—the regime in Iran has conclusively demonstrated that it is as dangerous as a nonnuclear-weapons state can be, and there is every reason to doubt that an Iranian nuclear weapon would be useful to the regime only as a defensive deterrent. Given Iran’s embrace of Shiite millenarianism, it’s an open question whether the nuclear weapons Israel and the United States possess would deter Iranian use.

Trump has not been alone in saying Iran can’t have a nuclear weapon. The “international community” says so as well. But such declarations are largely performative in the absence of the power to back them up. This Trump commands. Once among the harshest critics of the decision to go to war in Iraq, Trump has found that the information he has at his disposal has obliged him to take the country to war over a threat from Iran—a threat that is analogous to, but far more serious than, the one George W. Bush perceived in Iraq.

No, “it’s not 2003.” It’s a generation later, and the problem of the worst weapons in the hands of the worst state actors persists. Iraq under Saddam Hussein was one. It is no longer. Donald Trump has made it clear he is determined to make sure the Islamic Republic follows the Baath regime into the dustbin of history. It’s likely that one person in America who is rooting him on, based on his own complex and rueful experience, is George W. Bush.

This article was originally published on March 17, 2026 in Commentary.

The Age of Trump: A Sobering Return to Reality

Thursday, February 19, 2026

Posted by Tod Lindberg in Commentary


A decade after Donald Trump’s 2015 escalator descent in his New York City apartment building, it can no longer be denied, either by friend or foe, that we are living in the Age of Trump, and that his shadow will be cast over the first half of the 21st century for as long as historians write their chronicles. But what does this even mean? Trump makes it difficult to discern. We cannot tell what, for him, is a core conviction rather than a negotiating point. He pivots so rapidly between seemingly contradictory positions that his policy framework has become a Rorschach test for the various factions within his coalition. Nevertheless, as we enter the second decade of the Age of Trump, we can begin to define the fundamental values that are undergirding his administration, especially in the realm of foreign policy, even if it often seems as though Trump is allergic to any kind of core principle. But even that, if true, is a matter of values. It’s just a question of what he values and what he is willing to put on the line for it.

Trump began as a candidate in revolt against Democrats and Republicans and all the niceties and rituals that had been established to help mediate the spaces between the parties. Trump-era values are therefore, at least in part, a critique of the animating principles of the past—but how far back in the past?

The predecessor to the Age of Trump was the “post–Cold War era consensus,” and the critique Trump and his supporters make of it is, to put it mildly, robust. The collapse of the Soviet Union brought with it a generation of American hegemonic dominance across the globe that seemed, on balance, quite satisfactory to those involved in creating and perpetuating it. But it was unsatisfactory to Trump and many of those he represents. They rail against the consensus’s supposed preference for “endless wars” and against an economics seen as favoring the interests of shareholders and great wealth over the concerns of the working class and Main Street.

But Trump’s doings and undoings are more than merely a reaction to the triumphalism of the period, including the notion that we had reached the “end of history.” The objections extend back to the basic elements of the post–World War II liberal order itself. Though this order was largely American in origin and a product of the unprecedented global dominance of the United States across all measures of power in the aftermath of World War II, for many it has become a euphemism for a system that allowed our allies a free ride on our defense dollar and the entrenchment of trade rules that allowed foreign countries to place barriers to entry on American-made products while the United States opened itself up to a flood of imports grounded in cheap labor abroad. Even after the Cold War, the United States maintained a disproportionate security burden, while NATO allies shirked defense commitments to boost their domestic welfare programs. American-led interventions in Kuwait and the former Yugoslavia went off smoothly in the earliest post–Cold War years, but the failures in Iraq and Afghanistan created a crisis of confidence and fueled debates about American military presence abroad.

Meanwhile, the economic model that emerged at the end of the 1970s—with Margaret Thatcher’s ascendancy in the UK, the beginnings of U.S. deregulation in the late Carter administration, and finally the election of Ronald Reagan in 1980—is viewed with deep skepticism despite the fact that the American economy has grown sevenfold over the past four decades and remains the worldwide engine of innovation and productivity. The model, some in the Trump camp argue, led to American manufacturing moving offshore in pursuit of low-cost labor. That produced cheaper goods for American consumers but shuttered U.S. factories and thereby hollowed out middle- and working-class lifestyles across the country.

During that same time, they point out, American strength was being degraded from within. Progressive elites have grown increasingly committed to a worldview that rejects classical liberal and Judeo-Christian values in favor of a self-loathing disrespect toward Western heritage and culture. The very notion of “human rights” went from serving as an international bulwark against another Holocaust and a rallying cry against Communist totalitarian oppression to a weapon used to advance progressive policy preferences—from new forms of marriage to radical notions of gender identity, as well as twisted conceptions of “oppressed versus oppressors” used to justify or excuse anything from antiwhite bigotry to Pakistani grooming gangs in the UK to the heinous attacks of October 7. The values-based case for preserving the postwar liberal order rings hollow when Christians are arrested in the United Kingdom for praying silently outside abortion clinics at the same time that Islamists march freely down British streets chanting anti-Semitic and anti-Western hate, or when free speech is censored under the guise of fighting disinformation and “hate speech” as defined by leftist NGOs.

For these and other, less seemly reasons, more radical elements of the Trump coalition claim that anyone who speaks in favor of maintaining the “postwar foreign policy consensus” is just part of a shameful and entropic “uniparty”—members of a camp pushing for an international order determined to constrain U.S. freedom of action abroad and diminish American sovereignty in favor of the interests and values of a global and “globalist” class.

But even if, as its defenders argue, this order still manages to provide more benefits than any available alternative, it is hard to dispute that its returns have begun falling short relative to the investment of American blood and treasure. How did we manage to reach a point where the nation that established and has led this order is now seeing such diminishing returns? The answer lies in the underlying animating value at the heart of America’s grand strategy for the past century—and ultimately at the heart of the Age of Trump’s critique.

_____________

The United States has treated its role as a global superpower much differently than past hegemons. For nearly a century, a fundamental assumption underpinning American grand strategy has been the belief that it was possible (and desirable) at some level to replicate on the international stage what the American experiment aims to do domestically—“to form a more perfect Union.”

For all of its very real triumphs, American foreign policy throughout much of the 20th century and into the 21st century suffered from a misguided, idealistic hubris—certain that our American way was establishing the conditions for permanent peace and stability across the globe. It was within reach; we had only to pave the road. After defeating existential threat after existential threat at significant cost, from Nazi Germany and Japan to the Soviet Bloc, our strategic priority in victory was not to prioritize our own sovereignty and enlightened self-interest but instead to look for ways to foster global cooperation and harmony. Rather than concentrating on identifying and preparing for the inevitable rise of the next great threat, our time and energy were spent trying to create a world in which new threats would not emerge.

Woodrow Wilson was the first to begin advancing this vision of a glorious future—believing that our World War I victory had created an opportunity to secure world peace by creating collective security arrangements grounded in binding multilateral commitments, with the aid of a new international body. Wilson hoped an elite expert class could help set international rules and standards to enable countries to transcend the messy notions of national interests and balances of power in the joint pursuit of the greater global good. Should any threat to this new order arise, each country was expected to jump to its defense, regardless of where the threat originated. Of course, Wilson and fellow idealists believed there would be little need for any such enforcement, because states would adhere to it, being rational actors who wanted good things. Wilson envisioned a self-sustaining order whose foundation lay in the power of institutions and law, rather than what we have come to call “hard power.”

In the end, Wilson’s vision was a resounding failure. Nations were not amenable to being told by an international bureaucratic elite working at his League of Nations what their interests should and should not be, nor were they interested in enforcing multilateral collective security commitments that did not take their concrete national interests into consideration. Wilson’s idealism was no match for the hard realities of power and conflict, and critics like Senate Majority Leader Henry Cabot Lodge were rightly skeptical of the proposition that open-ended universal commitments had an automatic claim on precious American blood and treasure.

Two decades later, as World War II was coming to its end, Franklin Roosevelt tried a different approach. Rather than trying to avoid the problem of national interests, Roosevelt bet that the victorious Allied powers would all see it was in their interest to maintain a stable, peaceful global order. Recognizing this required actual power, he came up with the “Four Policemen” idea, according to which four of the most powerful nations emerging from World War II—the United States, the Soviet Union, the United Kingdom, and China—would work together as enforcers, a concept later echoed in the formation of the United Nations Security Council.

The problem this time was that the United States and the Soviet Union had vastly different views on what that global order should look like, given their fundamentally incompatible ideologies and core values. It took the likes of Republican Senator Arthur Vandenberg—who as a young newspaper editor championed Lodge’s opposition to Wilson’s League of Nations idea—to find common ground with President Harry Truman in shifting American foreign policy to deal with the Soviets as the adversaries they were rather than the permanent allies Roosevelt naively hoped they could be.

By the time Francis Fukuyama put forth his “end of history” thesis in 1989, however, it did seem to many that this time could be different. With the United States emerging as the sole superpower, great-power competition seemed relegated to the past, and thanks to the triumph of democratic capitalism over Communism, it appeared there also was finally an answer as to how nations could organize their affairs in a universally satisfying manner, one capable of unlocking the full potential of the postwar liberal order.

Capitalism and free trade made it possible to envision the interests of nations playing out in a constant series of win-win interactions, fostering strong incentives for peace as a means of maintaining economic prosperity and encouraging the transition of the likes of Russia and China into liberal democracies and responsible global partners. And since, according to the “democratic peace” thesis, mature democratic states do not make war on each other, this would only further reinforce a permanent global peace. American post–Cold War strategy, then, was to ensure this progression continued apace. That it would happen was rarely questioned; the only real doubts were how quickly it would happen and how much work it would take to convince holdouts.

Yet just as with Wilson and Roosevelt, the post–Cold War promise of a universally accepted democratic capitalist system solidifying a permanent global peace came crashing down, in part due to the machinations of a radical Islamist terrorist and the 19 hijackers who brought the fantasy of universal democratic and Western consensus to a fiery end on a sunny September morning. The war that began in 2001 came to an ambiguous end two decades later with our pullout from Afghanistan, an event many think gave the Soviet Union’s dictatorial successor in Russia an implicit green light to start a war on the European continent for the first time in nearly 80 years.

And then there’s China. For not only has the liberal order failed to meet the expectations of Trump and his supporters, but as was true in the aftermath of World War I and World War II, another great-power threat has emerged in Beijing from a nation with the desire and increasing capability to significantly harm our interests—ironically, and inexcusably, thanks in large part to our help.

In a misguided effort to push China toward political liberalization, the United States went to great lengths to bring China into the international economic system. But far from following the rules, China went to great lengths to cheat and steal to gain every economic and technological advantage possible. At the same time, it began conducting what is widely believed to be the largest peacetime military buildup in history, all while significantly ramping up information warfare and malign influence operations aimed at the United States and our allies. Underlying all of this is a desire not just to gain a competitive market advantage or achieve regional hegemony, but to recast the global order in Beijing’s favor—and to the detriment of American interests and values.

In spite of the best intentions of American politicians, history returned with a vengeance—great power confrontation in a fight for global dominance, wars of aggression, economic uncertainty, competition for critical resources. And with the return of history came the Age of Trump.

_____________

At its core, the Age of Trump’s foreign policy is in part a rebuke of the idea that history will end, that the universal principles that animate our nation will be universally accepted, and that peace and stability will be everlasting. The question is what to do about this reality. How should this dark and skeptical view inform American foreign policy and America’s place in the world going forward? And how, if at all, do our founding values fit into this future?

A small but vocal Trump faction seeks an Age of Trump that eliminates all vestiges of the postwar liberal order and looks instead to the isolationist, or at least anti-interventionist, spirit that existed prior to World War II. Given how history actually played out, it is easy to forget how strong that current of thought was. Even after having been attacked by Imperial Japan on December 7, 1941, with war in the Pacific a certainty, it was not clear until Hitler declared war on the United States a few days later that we would join the fight against Nazi Germany. From its foundation in 1940, the America First Committee, which claimed 850,000 members—and whose chairman, Robert Wood, was a former general and then-chairman of Sears, Roebuck and Co.—held large rallies against going to war in Europe. It dissolved the day Hitler declared war. The committee’s present heirs seem perfectly comfortable letting American power diminish if doing so furthers the cause of a new Age of Trump characterized by non-intervention.

However, there are two significant reasons why even attempting to make the Age of Trump an isolationism redux will fail. First, Trump’s own actions and policies have made clear at this point that, while he views nearly everything as negotiable, he is not an isolationist and is perfectly willing to use American power to intervene abroad in service of American interests. The narrative that the muscular foreign policy of his first term was just the product of secret Never Trumpers in his administration has been resoundingly crushed by his actions in the second term. One does not send stealth bombers to obliterate Iran’s nuclear facilities or conduct a major military operation to arrest Venezuela’s illegitimate dictator in his bed and bring him to trial in the United States on narco-terrorism charges if there is any squeamishness about the use of American power.

Second, and every bit as important given that Trump has just a few more years in power, his voters overwhelmingly reject a United States that has accepted decline and isolation. Polls consistently show that Trump voters are far more hawkish and supportive of a strong U.S. presence on the global stage than the isolationist faction has sought to delude us into believing. Trump voters, including those within his most loyal MAGA camp, have no problem recognizing China, Russia, and Iran as adversaries, and they continue to recognize the value of allies, like Israel, that pull their own weight and provide a benefit to American security and prosperity. Trump supporters overwhelmingly prefer a United States willing to confront adversaries rather than a United States that has accepted a supposedly inevitable decline. And even as they may dislike elements of the current postwar order, they have no desire to see a Chinese global order take its place or see someone else’s values and principles dictate global norms. There is a reason why Trump campaigned on slogans like “Make America Great Again” and “peace through strength”—that’s what his voters actually want.

So two things appear to be simultaneously true. Yes, there is a real discontent in the Age of Trump with how the postwar order has evolved in the post–Cold War era. There are real frustrations that the current order has required too much of the United States and that many of its most influential thinkers are now advancing principles and values fundamentally contradictory to those upon which our nation was founded and that form the bedrock of Western civilization. At the same time, neither Trump nor the majority of his supporters wants to forswear U.S. global leadership in favor of a simplistic pre–World War II isolationism that meekly accepts the decline of American power.

In the end, what the Age of Trump’s protagonists seem to want is for the United States to start actually acting like a global power. That means ensuring that any global order we lead and sustain definitively serves the interests of the American people and reflects our founding values and principles. They have no problem with American intervention per se—they simply (and quite reasonably) want American power to be used successfully and in furtherance of America’s enlightened national interest. The goal is not retreating from the world or destroying all vestiges of the order that we helped build, but to remake it as necessary to ensure it is consistent with our national purposes. And if that is indeed the kind of foreign policy this era will pursue, the Founding Fathers provide a worthwhile blueprint for the future—and a bridge back to the moral core of our nation’s founding.

The truth is that the Founding Fathers would have felt far more at home in the rough-and-tumble Age of Trump than the heady early days of the post–Cold War period with all its unrealistic wishcasting. While they were animated by the belief that each person possesses unalienable rights flowing from an intrinsic, God-given dignity, the Founding Fathers did not share the impractical idealism of Wilson or Roosevelt. These 18th-century men refused to harbor unrealistic expectations about human beings and the way politics and power work. While they espoused principles universal in nature, the Founding Fathers were under no illusion that their principles would ever be universally accepted. They knew that their claims would meet resistance; most kings, including the colonists’ lawful sovereign, George III, had little use for dignity-grounded arguments that undermined the legitimacy of royal authority. The question of the vindication of the founding principles of the United States of America was therefore never separate from the need to defend them—and win them—by force.

The Declaration of Independence was not a suicide pact. Revolution is a risky business for those rebelling. Failure means a date with the hangman. But those who signed the Declaration had a plan. The Declaration was not merely a statement of principle and a catalogue of the abuses of the colonies by the crown. It was a strategic document as well.

The commander of the Continental Army, George Washington, had in mind a drawn-out war for independence, one that would avoid a decisive engagement between his force and the formidable British army and its German hireling auxiliaries, the Hessians. Washington sought to make use of the vast territory of the colonies to wear down the British to the point that they’d give up.

But that was not the only aspect of the American power-based strategy for independence. The United States needed, and through the Declaration sought and soon obtained, a willing ally capable of assisting with “boots on the ground” and substantial naval power, of which the United States had none.

France was the key. French strategists anticipated that the power balance in their long-running rivalry with Britain would tilt decisively in Britain’s favor if it retained its colonies in the New World. Assisting the colonies in their struggle for independence would have the short-term benefit of tying up British forces there and, in the long run, if successful, prevent the British crown from making use of its assets and resources in America in the struggle for position in the Old World. For France, the future of Europe ran through the American Revolution.

The problem was that France couldn’t overtly support the Continentals in the sovereign territory of its British rival so long as the conflict remained at the stage of the tiny 1775 battles of Lexington and Concord. As the historian Larrie D. Ferreiro argues in his 2016 book Brothers at Arms, this was the problem the Declaration of Independence solved. Once the Continental Congress took decisive action, there was no turning back. The equation for France changed. Providing military aid to an independent country was a different proposition from interfering in internal disputes on someone else’s sovereign territory.

While France immediately started providing clandestine support to the Continental Army, the formal French-American alliance against Britain awaited proof from the Continental Army that the military endeavor was viable. That proof came in fall 1777, with the Battles of Saratoga in New York, which ended with the surrender of a surrounded and outnumbered British force of more than 5,000. The French would go on to play a critical role in the war’s final battle at Yorktown in 1781, where their naval forces deprived the British of their anticipated access to the Chesapeake Bay, and where the Comte de Rochambeau’s French troops fought alongside Washington’s Continental Army, including American forces under the Marquis de Lafayette, to victory over British General Charles Cornwallis. His surrender effectively ended the war and vindicated the Declaration.

France was not acting altruistically in support of the American Revolution. It was deploying its power in pursuit of its interests, namely, a weakened British Empire humiliated by the loss of its American colonies. The Continental Army had something bigger to strive for, not only independence and survival but also the principles Jefferson set forth in the Declaration. Without the power to defend them by prevailing against the Crown, the principles by themselves might have lived on to inspire others to take them up and fight for them. But with power, they marked the beginning of the United States and its advance to the pinnacle of global power in support of ideas grounded in equal God-given human dignity and the rights that flow from it.

_____________

This combination of power and principle, present at the creation of the United States and continuing to animate its growth and vitality for 250 years and counting, remains a reliable guide for American leaders and policymakers in the Age of Trump and beyond. It’s a legacy Americans have made for themselves. The nature of politics is to produce ugly outcomes. What’s unusual is a good outcome, and the United States by 2026 has produced more of them than history has recorded for any other polity, not merely because of our values but also because of the way our power sustains them.

The Age of Trump’s protagonists are right to vehemently reject the voluntary and unnecessary erosion of American power. The challenge is whether they can build something positive—whether they can retain the needed emphasis on power to secure American interests while remaining true to the founding principles that have made and continue to make our nation great.

Doing so will require clarity on several fronts. First, the United States does not merely face strategic competitors, but enemies. These enemies do not need to be manufactured—they have made themselves and their intentions clear. China is leading an anti-American bloc that includes Russia, Iran, North Korea, and (at least until his arrest) Maduro’s Venezuela, all united around a single goal, which is to bring the United States to its knees. China is ultimately not interested in securing a better trade deal or being placated with a sphere of influence, as, ironically, Trump and some of his advisers seem to believe. China wants a Washington subservient to Beijing, and it knows it can count on its revanchist partners in a campaign to harm American interests and standing.

Second, while American foreign policy must be completely oriented toward denying and degrading the threat from this Chinese bloc, we must be realistic about what success means. While America’s 20th-century experiences with great-power clashes resulted in outright victories, history shows us that this is not necessarily the norm. We instead should expect decades, or even centuries, of the kind of long struggles seen throughout European history, where success more often looks like consistently tipping the scales in one’s favor rather than a decisive defeat that catapults us back into the status of uncontested global hegemon. This means steeling the American people and orienting our defense and economic policies on a timeline lasting decades while unabashedly employing hybrid-warfare tactics to weaken and undermine the enemy regimes—as they are doing to us now.

Relatedly, even if we did secure a more decisive victory reminiscent of World War II or the Cold War, we should not make the mistake of assuming that such a victory will be permanent. For every Japan that becomes a useful ally, there is the Soviet Union that simply morphs into the same adversary in a different form.

Third, our interests are best served when we both set and enforce the rules. The postwar order’s failures are lessons that must be learned and not repeated. We should not allow our adversaries into an order we lead. We should require even our allies to shoulder a fair burden, and we should hold them to account when they abandon shared values and principles. Preserving an order in which America remains predominant will require a lot of work. It will be far harder than throwing up our hands and walking away, as our enemies would like and as the isolationists among us dream of doing. But our order is far preferable to a world dominated by the Chinese Communist Party.

And fourth, the Age of Trump must be one that faces up to the “clash of civilizations” framing articulated by Fukuyama’s great antagonist, Samuel P. Huntington. It’s not just that our allies sometimes need cajoling to recommit to shared civilizational values; we also need to remind ourselves why we fight our enemies. Our national interests are morally superior to those of our adversaries because the values that inform them are morally superior. The principle animating our nation from the beginning is the unshakeable belief in the dignity of every human, and it is fundamentally incompatible with the values that animate the Chinese Communist Party, Putin, or any of our other adversaries. We know from history that our values will never be universally accepted but will always be under various forms of attack. Rather than running from this reality, the Age of Trump can and should use it as the glue that again marries principle with power.

We did not know it then, but Trump’s escalator entrance was the start of a sobering return to reality. History is clear: No peace is permanent, and human beings are incontrovertibly imperfectible. Conflict and war between states will never be relegated to the ash heap of history, and international relations will always be a nasty fight for supremacy, one in which the winner gets to shape the future according to its interests and values. The test for the Age of Trump is whether it ultimately will repeat past mistakes and abandon either principles or power (or both), or whether it will reconnect power to America’s founding values and lay to rest the dangerous delusion that power is unnecessary or self-sustaining.

This article was originally published on February 19, 2026 in Commentary by Tod Lindberg and Corban Teague.

The Invidious NVIDIA Deal

20 Tuesday Jan 2026

Posted by Tod Lindberg in Commentary


President Trump’s style is such that he would portray himself as a master of the foreign-policy game, like all other games, even in the absence of noteworthy successes in that realm. Yet both in his first term and in the first year of his second, he has put together a string of wins. One was his first-term “maximum pressure” reversal of Barack Obama’s dealmaking course on Iran, which culminated in Trump’s second-term destruction of Iran’s nuclear-weapons facility at Fordow. Related were the U.S.-brokered Abraham Accords improving relations between Israel and its Arab neighbors in a de facto alliance against Iranian regional influence. Another was the elimination of the Islamic State in Syria and Iraq. And how about his first-term decision to supply lethal aid, including Stinger missiles and sniper rifles, to Ukraine—which, along with robust covert U.S. intelligence engagement with Kyiv, probably saved the government from collapse in the early days of the full-scale Russian invasion in 2022?

The most important among Trump’s successes, however, was to crystallize, from 2017 on, an emerging view in Washington of China as a strategic rival in a return to global great-power competition. The National Security Strategy released that year rightly described China as an aspiring peer competitor, aiming to erode U.S. influence not only in the Pacific but also globally: “For decades, U.S. policy was rooted in the belief that support for China’s rise and for its integration into the post-war international order would liberalize China.” Contrary to that hope, the strategy argued, “China seeks to displace the United States in the Indo-Pacific region, expand the reaches of its state-driven economic model, and reorder the region in its favor.”

Trump’s revised view of China had implications across a range of policy areas—from military requirements to global supply chains and technology transfer. But the key to unlocking necessary reform is, first, the recognition that the strategic context has changed. The complacent view of China as a peacefully rising power that would soon settle into the role of “responsible stakeholder” in the American-led global order—the dominant Washington forecast for China since the Clinton administration—crumbled under the reality of a Chinese Communist Party determined to use all the resources at its command to maintain its exclusive grip on political power domestically and to increase Chinese influence regionally and globally.

These observations about Trump’s foreign-policy successes will be deeply offensive to almost everyone whose inability to stand him is now entering its second decade. And they will meet fierce resistance from those whose biggest concern is the foreign-policy damage Trump has done, especially to relations with European allies. What’s more, it turns out that the fatigue that often begins to gather during the fifth year of a two-term presidency is a factor whether the terms are consecutive or not. Trump’s high-velocity second term is exhausting not only because of the pace and breadth of policy change but also because Trump’s approach to the actions he takes seems to be premised on the belief that he has vast popular support, which he doesn’t.

Nevertheless, a more realistic view of the “China Challenge,” as a State Department Policy Planning document from 2020 dubbed it, was Trump’s most significant course change, and the new perspective (though not Trump himself) has won substantial bipartisan support. Today, that phrase “China challenge” seems if anything too mild a description of the danger Beijing poses to American-led global order.

But if consistency is the hobgoblin of little minds, Trump’s is capacious enough to encompass inconsistencies great in range and grand in scale. So against the rare constancy of the Trumpian view of China, he presented a breathtaking contradiction in December 2025. He proclaimed his willingness to allow American chipmaker Nvidia to sell to China its high-end H200 GPU, potentially providing a boost to Beijing’s effort to catch and surpass U.S. companies in pursuit of artificial intelligence.

This accommodation seemed wildly at odds with pretty much everything Trump has done or said about China going back to his pre-presidential years. His announcement produced a broad-based “What the hell?” moment—well, the actual word being used is not “hell”—among all those who have spent a decade getting more and more concerned about China, if not at Trump’s behest then at least in seeming accordance with his sympathies.

Why the apparent reversal? The search for explanations for Trump’s actions often brings trouble down upon the seeker. In many cases, no sooner does a plausible-sounding explanation emerge than events, often generated by Trump himself, overtake and obviate it. Thus, for example, at first blush, overriding the ban on the sale of the H200 was a massive boon to Nvidia, which one might either applaud or abhor in accordance with one’s view of Big Tech in general, Nvidia itself, or the weight of its valuation in one’s 401(k). So perhaps the announced deal was the latest installment of Trump’s deal-making, pro-business streak. But the United States government also stands to benefit fiscally from Trump’s deal, whose terms apparently call for 25 percent of the billions in proceeds from chip sales to flow to the Treasury. The legal basis and policy soundness of the government’s taking a direct cut on the sale of a product seem dubious—in effect, an excise tax beyond the power of the president to impose without congressional authority. But in the Age of Trump, it’s always full speed ahead, since the Republican-controlled Congress provides no blowback and creates no friction.

That leaves the courts to act, but if they don’t like it, Nvidia could presumably just make a voluntary contribution to the Treasury anyway according to Trump’s formula. True, that would give the company the discretion to welsh on the deal, but we have also reached the point at which CEOs have good reason to be concerned about incurring the president’s wrath. Certainly, the Nvidia CEO, Jensen Huang, has been heavily courting Trump this year, including at a meeting on December 3, mere days before Trump’s December 8 announcement. So maybe the administration is operating squarely in the tradition of “the chief business of the American people is business,” in the words of Calvin Coolidge. Billionaire CEOs are people, too, including Huang, a man who has contributed hundreds of millions to such public-spirited projects as Trump’s inauguration and Trump’s White House ballroom.

But maybe the Nvidia go-ahead isn’t so much about the company and the Treasury as it is the latest gambit in Trump’s pursuit of a mega-deal on tariffs and other economic matters with Chinese dictator Xi Jinping. The zigs and zags of Trump’s tariff maneuvering are maddeningly difficult for outsiders to follow—as they apparently are even for senior administration officials. While the latter have more access to Trump, they aren’t mind readers, and even if they were, Trump’s mind changes with some frequency for reasons known at most to himself. To Trump stalwarts who thought they knew his mind on China, the Nvidia announcement must have come as an even greater shock than it did to the Trump-curious and neutral Trump-watchers who wish success upon his presidency for the sake of the country. Trump-despisers, for their part, gravitate toward the view that whenever Trump does something of which they disapprove, he reveals his true colors. Here they had the option of classifying the decision as Trump coming under the sway of domestic billionaires kowtowing to him, or as Trump reverting to his supposed affinity for foreign dictators or strongmen. Or both.

To view Trump’s move as a bargaining ploy is to put Trump back into a comprehensible Trumpian context. Selling our biggest adversary our excellent chips doesn’t sound like an element of making America great again, but if the real goal is to butter up Xi for a deal that rectifies all Trump’s trade grievances with China, that sounds more MAGA-compliant.

But maybe that’s not what’s going on either. Maybe—or so emerged another line of interpretation—the Nvidia green light was actually Trump setting a trap for China. Next-generation GPU chips such as Blackwell are already available from Nvidia, and still-more-powerful GPUs like Rubin units are on the runway. So perhaps Trump was opening the way to get China hooked on an obsolescent chip. Widespread Chinese adoption of the H200 might lock in a chip gap with the United States in the lead. Easy access to the H200 would also slow the imperative for Chinese tech companies to develop competitive or possibly superior chip technology. In effect, Trump would be flooding China with American-made goods in the expectation that doing so would undermine China’s indigenous capacity to innovate and manufacture—a karmic high-tech turning of the tables on how China supposedly hollowed out ordinary American manufacturing by flooding the United States with goods produced by cheap Chinese labor.

We live in a golden age of speculative prognostication—not for its accuracy, of course, but for sheer volume and speed. It’s not quite right to say that the posters on X/Twitter foresee every possibility and every conceivable set of consequences flowing from each one. But there’s a lot bouncing around out there. So naturally, the possibility that Trump has set a trap for Xi has generated the second-order argument that Xi is on to him. In the end, the argument goes, China will buy very few H200 chips, precisely in order to avoid stunting the growth of Chinese chip development. Accordingly, the big deal will almost certainly be a bust, both for Nvidia and the Treasury. Or, in the telling of others, China will buy only enough H200s to retro-engineer them to steal the tech, as it has with so many other innovative American products—although it’s rather fanciful to suppose, given the sophistication of Chinese espionage efforts in this area, that export controls have hitherto been successful in preventing China from obtaining sufficient H200s to steal the tech already. But the chip design by itself is not enough. Manufacturing copycats, we are reliably told, is also beyond China’s current capabilities.

Still more esoteric is the rumor making the rounds that a joint effort Google and Meta are about to unveil will undercut Nvidia’s chip dominance with a system that will allow the products of others to run easily on CUDA, Nvidia’s currently exclusive software platform, which is now the standard for AI development. Though Nvidia is famous for chip-making, a huge component of its market valuation is a product of its software “moat” exclusivity, an exclusivity that, if the rumor is true, will soon end. And Trump either knows this or he doesn’t. The ensuing possibilities: He’s either supremely well-informed (because billionaires talk to billionaires in the 19th-century manner of the Cabots talking only to the Lodges), or he’s a complete ignoramus. Whichever is true, the China deal is an example of either great dealmaking or supreme perfidy, depending on your prior outlook on him.

_____________

So to sum up, we don’t know and may never know why Trump made this decision. We don’t know whether it will go through in the end, and if it does, how many Nvidia GPUs will end up in China and with what effect on AI development there. And we don’t know how damaging the implications of such sales will be to U.S. national security. Though many claim otherwise, no one has a Magic 8 Ball. Few have seen the intelligence assessments of the effect of the sale of H200s, and they aren’t talking (and may be wrong). And few of us are privy to the group-chat banter of the Billionaire Boys’ Club, for what that’s worth.

What we do know, with a high degree of confidence, is that if there is indeed a China challenge—and there is—a presidential directive clearing the way to provide Beijing a boost in its effort to outpace us on artificial intelligence is not part of the way to meet it.

The reason that’s true has less to do with the technological ins and outs of the H200 question than with questions related to American seriousness of purpose, moral clarity, and resolve on China more broadly. Since the end of the Cold War, the United States has had a relatively easy time presiding over what the Chinese have come to call “hegemonic civilization.” Credit the Chinese Communist Party for recognizing the reality of U.S. power—that’s the “hegemonic” element—as well as its ideational element, the “civilization” that we have used our power to preserve and expand through such means as encouraging indigenous democrats working to liberalize governments of varying degrees of authoritarianism; calling out human rights abuses such as China’s slow-rolling genocide of the Uyghur people; and entering security partnerships or alliances with countries menaced by their neighbors.

We have our values and the power to back them up. China has different values and the power to maintain its grip at home. Increasingly, China seeks to flex and extend its influence abroad, with emphasis at present on intimidation tactics directed against our Asian allies, including military provocations. What, in China’s view, should come after “hegemonic civilization”? At first, a global order in which China is the dominant power in Asia, with U.S. influence there drastically diminished. In the long run, perhaps a return to hegemonic civilization, the problem with which all along may have been that the United States, not China, is hegemon.

That places the desire of the United States to remain on top of the global order on a collision course with Chinese ambition. In many gray-zone areas, that clash is already underway. It’s important to note, for example, that China thinks it has every right to help itself to the fruits of technology developed in the United States and “the West,” broadly construed. That’s because of the supposed illegitimacy of the self-serving global order that “the West” has been imposing on the world since about 1500, and especially during the “Century of Humiliation” from the First Opium War in 1839 through Mao’s revolution in 1949. This outside imposition kept China down, an outrage against a nation with thousands of years of continuous history. China is catching up by all available means and is unlikely to stop at parity.

The George W. Bush administration’s 2005 National Defense Strategy declared that the United States would not allow a “peer competitor” to rise to rival the United States. Some critics called this vow hubristic. Democratic administrations since then have sought to manage the relative decline in American power through adroit navigation of international law and institutions they hoped would buttress a rules-based order with widespread buy-in, including from China. The result wasn’t good. Now we have the Trump 2025 National Security Strategy vowing, like Bush’s, to maintain U.S. military dominance without expiration. It states: “We want to recruit, train, equip, and field the world’s most powerful, lethal, and technologically advanced military to protect our interests, deter wars, and—if necessary—win them quickly and decisively, with the lowest possible casualties to our forces.” That’s fine, but making good on it is not solely an American question. China seems not to accept this American ambition, and Beijing gets a say in whether we achieve it and at what cost.

A third world war, this time primarily between the United States and China, is not inevitable. But protracted conflict with China is indeed inevitable, and managing it requires both strategic clarity and moral clarity. China is not our friend, nor is China going to become our friend, because our ambitions and our values clash. That doesn’t mean we can’t have mutually beneficial trade relations, in the ordinary comparative-advantage sense. We can welcome China’s ideological challenges to the superiority of our system as an opportunity to argue in its favor. We can hold to our view that our “China problem” lies not with the Chinese people but with the Chinese Communist Party. In Trumpian terms, we can acknowledge and welcome the desire in Beijing to make China great again insofar as it can be peacefully reconciled with great-again America. But our relations with China will also have a darker side. To pick a mild example, we need a covert capability to steal China’s technological advances in areas where they surpass us—if we don’t already have one, which would surprise me.

The strategic and moral clarity we need to be effective in maintaining our position as China continues to rise is not just a matter for policymakers and elected officials. It includes the American people as well. Some Trump acolytes have been doing their best to persuade Americans to turn wholly inward—or perhaps more accurately, to persuade American leaders that the people have turned inward. All the talk of “endless wars,” which claims to reflect public opinion, is more an attempt to influence elite opinion against exercising American leadership in the world. It’s having its moment, though Trump himself has had no qualms about bombing Iran, the Houthis, Venezuelan drug-runners, and Islamists in Nigeria—and has enjoyed substantial public support for such actions.

These are not sideshows, but China is the main problem. To navigate it, Trump and his successors will need support in American public opinion. With the proper framing, they will have it. It entails a clear articulation of the value to Americans of the American way of life and the threat the global ambition of the Chinese government poses to it. The proper framing is “what we stand for versus what they stand for.”

Selling advanced American chips to China does not fit with that framing. It’s a rebuke to the proposition that our security interests and China’s are not aligned, a case of business as usual in an area where most Americans can plainly see the potential for peril. Whatever the American ambivalence, or worse, about the coming of AI, it is certain that Americans prefer American dominance in AI over Chinese dominance. The same is true for all other tech areas of consequence. Trump’s H200 decision arises in the context of this competition. It invites the conclusion that this tech competition is no big deal. The next time a proposed tech sale with national security implications arises, it invites the remark, Even Trump thought selling high-end GPUs to China was fine. China will cheerfully exploit this precedent, as will U.S. commercial interests when opportunities arise. Also, the argument that we need to buttress our military capabilities to counter China’s growing power at the same time as we’re selling them chips that can contribute to their growing power doesn’t exactly roll trippingly off the tongue. We’re not necessarily at the Cold War level of worry that “the capitalists will sell us the rope with which to hang them,” in the pithy statement misattributed to Lenin. But we shouldn’t act to compromise the proposition that we ought not sell our adversary the rope with which to hang us.

_____________

Late in 2025, I played a war game set from 2028 to 2032 involving an attempt by China to take Taiwan by force. The purpose of the game was not to find out how such a move would turn out, but rather to test the effect of a military capability China is developing that the United States currently has no plans to meet: a conventionally armed (not nuclear-tipped) intercontinental ballistic missile capable of striking anywhere in the United States. But the game was illuminating on the broader question nonetheless.

I played on the China team. This turned out to be a relatively straightforward proposition. China's objectives, as our team articulated them, were clear. First, obtain Taiwan. Second, do so at the lowest possible cost militarily. Third, reduce U.S. influence in East Asia. The China team understood that achieving the third objective would flow by itself from achieving the first objective. The best path toward achieving the second objective was to do everything possible to avoid provoking the United States into a full-scale war over Taiwan. So no initial Chinese attack on U.S. bases, ships, and military personnel. China's pretext, in the game, was that Beijing was resolving an internal Chinese dispute over "splittist" tendencies on Taiwan, which Beijing asserts is part of China, a claim the United States, as a diplomatic matter, does not dispute.

The China team observed no similar clarity of purpose from the team playing the United States. As the U.S. war-gaming team sent U.S. carriers steaming with uncertain purpose toward the conflict zone, American diplomats busied themselves seeking to reassure U.S. allies of the American commitment to their security. China's diplomats were themselves busy, reminding U.S. allies that an internal Chinese dispute over Taiwan had nothing to do with them, and that they should stay out—not, by the way, that the United States was actually urging allies to mount a common defense of Taiwan. The U.S. team seemed to think Washington could thwart China's third goal, reducing American influence in the region, while remaining equivocal about how far the U.S. should or could go to thwart China's first goal, conquering Taiwan. It's hard to reassure treaty allies while abandoning a de facto ally under attack. China would not hesitate to draw allies' attention to this contradiction and the questions it raises about the U.S. commitment to them.

The problem is that “strategic ambiguity”—we might just defend Taiwan, our current declared intent—is a peacetime posture designed to deter. It’s not a policy that directs action if deterrence fails and shooting starts. I think China understands this. In coming years, the most effective deterrent to a Chinese military move may not be the prospect of the U.S. Navy riding the waves to Taiwan’s rescue, its Pacific allies sailing in the wake; it may be a (nonnuclear) Taiwanese capability to inflict harm on China within the power of Taipei to direct.

I would hate to think that the abandonment of the U.S. position in the Pacific, including our allies and our commitment to keep sea lines of communication open, began with Trump’s announcement about H200 sales to China. But such is the possibility that has arisen in his glaring departure from clarity on the China challenge.

This article was originally published on January 20, 2026 in Commentary.

The Disease of Presentism

24 Monday Nov 2025

Posted by Tod Lindberg in Commentary

≈ Leave a comment

Review of ‘Violent Saviors’ by William Easterly 

William Easterly’s Violent Saviors is a libertarian tract on global economic development and political economy. But as its subtitle—The West’s Conquest of the Rest—demonstrates, this is an opportune moment and angle for such a polemic. Easterly presents Violent Saviors as an economic history, but it is equally a work of intellectual history. Violent Saviors tells the story of bad ideas running amok, and the good ideas that warred with the bad.

Easterly, a professor at New York University, begins with European powers and their colonies and early imperial conquests, including the destruction or removal of local populations typically described as “savages” by the newcomers. The slave trade of the 17th and 18th centuries looms prominently, as does slavery after the Revolution and Jim Crow after the Civil War. He recounts the Belgian King Leopold’s atrocities in late-19th-century Congo as harrowingly as anyone ever has, as well as other bad scenes from the British Empire in India and the Caribbean. The American victory over Spain in 1898 yielded the spoils of the Philippines, which the United States proceeded to despoil. He also recounts the coercive depredations of Lenin and Stalin as they remade Russia into the Soviet Union, resulting in the death of tens of millions, including in the Holodomor, the vast Stalin-induced starvation in Ukraine. Hitler, for his part, saw the conquest of the lands of the inferior “race” of Slavs to the east as essential to German development, and of course, the Jews had to die. The Communist revolution in China led to still more scores of millions of deaths, which Mao Zedong regarded as an acceptable price for the modernization of China.

Easterly connects the dots of this history by citing the recurring justifications in the words of the perpetrators and conquerors, what he calls the “Development Right of Conquest.” The powerful and prosperous countries of Europe, eventually encompassing “the West,” with the United States in the lead, justified their expansionism either in the name of bringing development to the benighted locals or—if the benighted locals were unable or unwilling to advance—in the name of making better use of the land and resources of the territory in question. In all cases, the colonizers and conquerors proceeded entirely without the consent of local populations, especially over the question of whether they actually wished to develop. Often, these powers assigned themselves the role of civilizing the savages and spreading true religion. The same was true of the Soviet Union and China, which brought Communist ideology into the mix in pursuit of their own visions of progress. The Hitler regime proceeded on the basis of supposed Aryan racial superiority.

Human “agency” or “dignity” is precisely what those acting on the Development Right of Conquest denied to those in their way. Easterly notes that “extermination” used to have the additional meaning of “driving out.” This, the developers often did, though sometimes they resorted to enslavement (often rationalized as an improvement in the living conditions of those enslaved) or “extermination” in the modern sense of mass killing and genocide.

Easterly’s story is mostly one of bad actors. In the New World of North America, he starts with the Puritan John Winthrop, the first governor of the Massachusetts Bay Colony. Before he set sail for America in 1630, he set forth his justification for those who questioned the righteousness of his plans. Easterly writes:

Winthrop argued that the conquerors [had] a right to the land because of their ability to improve it. Surely, God had not intended “a whole Continent as fruitful and convenient for the use of man to lie waste without any improvement.” … The natives in New England had failed to fulfill God’s mission. “This savage people” did not develop the land or themselves. “They enclose no land, neither have they any settled habitation, nor any tame cattle to improve the land by.” … Winthrop in 1629 reassured his audience that the English seizure of Indian lands was actually beneficial for the Indians, because the English would teach them the arts of improvement…. A fateful us and them had entered the lexicon of progress. The idea of “us” conquering “them” for their own good—the imaginative and fateful mixture of coercion, paternalism, and superiority—was destined for a momentous career for the next four centuries.

Violent Saviors does not lack for additional examples along Winthrop’s lines, and Easterly relishes skewering the purveyors.

But there are heroes as well. They are the economists and other thinkers who upheld the essential elements of freedom and consent at the heart of classical economic thinking, starting with Adam Smith. Voluntary consent, not coercion, should be the basis on which human beings interact with one another, in the marketplace and in all other respects.

To good effect, Easterly juxtaposes Smith and the Marquis de Condorcet, the 18th-century French philosophe for whom political and economic decision-making should be the province of experts. The expert governance for which he advocated was for the good of those governed, whether they liked it or not. Most of the major problem areas Easterly explores—colonialism, slavery, forced migration, etc.—also produced, in opposition, classically liberal thinkers in the mold of Smith. These figures were willing to cut against the grain of their times to deplore the deplorable, even if they often lost their arguments to contemporaneous forces of coercion. The liberals would be vindicated in time—through such developments as the end of slavery and Jim Crow, the extension of property and voting rights to women, and self-determination or national liberation for the colonized (though independence often served to usher in a new crew of oppressors).

_____________

The word “libertarian” appears only twice in Violent Saviors, and only once as a description of the school of thought that informs its perspective on historical events. Easterly prefers the term “liberal” for those heroically aligned in their time with his ideal of noncoercive policy action that accords with equal human dignity by putting freedom or liberty first. And indeed, the individuals he elevates warrant that label. But they are not alone, and unfortunately, this is where the book loses its way—and finds its odd congruence with the “presentism” of our times. With just a few grudging asides to the contrary, Easterly joins the mighty chorus of dismissal of the past and its people as morally and intellectually indefensible—because their views are so out of sync with the wiser opinions of today.

Now, to be fair, many of the popularly held opinions of today are indeed wiser than those of yesteryear. The argument in favor of slavery was just as bad when slavery was a matter of current controversy as it would be if you could find anyone propounding it today. One must note that Thomas Jefferson was a slave owner as well as the author of the Declaration of Independence. The fact that he didn’t personally exemplify the principles he espoused does not negate the validity of the principles, or their historical impact on the spread of liberty.

This is the general point that Easterly leaves out of Violent Saviors. If the only true liberals of the past were those whose views turned out to be sufficiently in accord with the views of the present, it’s hard to see how liberalism could have managed to attain its dominance in the modern world. Easterly in the end calls for a resolution of the “us-versus-them” problem through an expanding sense of “us.” That’s fine, but it ignores the extent to which the status of “us” has already expanded historically.

Perhaps Easterly’s answer as to why and how it expanded is that the truth of liberal principles, including those of neoclassical economics, is enduring. But to attain a purchase in the world, these principles must have a purchase on human beings—most of whom, like Jefferson, have additional and often contrary drivers behind the actions they take. Should Jefferson have abstained from the Louisiana Purchase because of its implications for Native Americans and the westward expansion of slavery? The implication of Easterly’s effort to put noncoercion first would seem to be yes. And indeed, in good libertarian fashion, he quotes John Quincy Adams on the foreign policy aims of the United States, which “goes not abroad in search of monsters to destroy. She is the well-wisher to the freedom and independence of all. She is the champion and vindicator only of her own.” Easterly then cuffs Adams for staying silent on forced Indian migration, which demonstrates how he has established a purity test no politician in history has ever passed.

The past is monstrous yet great, harrowing yet inspiring. In no sense is it merely the motion of ideas—though ideas both good and bad have animated those who made history. To the extent there has been real-world “progress” in economics or politics—and there has been—it has never been unsullied by wickedness.

On North Sentinel Island in the Bay of Bengal lives a tribe of 400 to 500 indigenous people who have had next to no contact with the wider world. They are among a small number of isolated tribes that have no record of violence or conquest (though the Sentinelese do not take well to visitors and in 2018 murdered a Christian missionary who had the effrontery to step onto the sand of their beach). With such possible exceptions, all the rest of us are the sons and daughters of conquerors who extinguished the bloodlines of the conquered. No one has a rightful claim to a smug superiority to history.

This article was originally published on November 24, 2025 in Commentary.

The Assassination Fan Base

21 Tuesday Oct 2025

Posted by Tod Lindberg in Commentary

≈ Leave a comment

Eras creep in and taper off without clear demarcation; only in retrospect can we classify a single event as the beginning of one or the end of another. With the two assassination attempts on Donald Trump as well as the successful hits on UnitedHealthcare CEO Brian Thompson and conservative activist Charlie Kirk, we must now ask whether a new era of assassinations is upon us, an era comparable to the one that gripped the country between 1963 and the early 1980s.

The assassination of JFK in November 1963 shocked America to its core. The America of 1963 did not need a “visual” to be shocked; it would be nearly 12 years before the public got a chance to see the “Zapruder film,” the grainy home movie of Kennedy’s last moments as his motorcade passed the Texas School Book Depository in Dallas and an assassin’s bullet tore through his skull. The mere notion that anyone might kill the president of the United States was itself borderline unthinkable—in a way, perhaps, even for those charged with the safety of the president. Riding in the back of a limo open to the air was as normal for presidents and politicians in its day as it has been unthinkable ever since.

That kind of weird innocence persisted in the immediate wake of the assassination. The authorities quickly located the assassin and arrested Lee Harvey Oswald. They could not imagine that the open way they disclosed plans about Oswald’s movements in custody would provide an opportunity to a man with a gun and murderous intent to get so close. Photographers were on hand to capture Jack Ruby firing a single shot at close range. The best-known image of Lee Harvey Oswald is the one in which he is already dying—a split second after being hit, a stunned expression on his face and his mouth slightly agape.

With a president and his assassin both dead, the conclusion of investigative commissions that Oswald was “a lone gunman acting alone” instantly had to vie with numerous other scenarios that emerged from elaborate chains of speculation. And does, to this day. We are used to writing off such speculation by invoking the term “conspiracy theory,” which is a way of dismissing those who challenge widely accepted accounts of the supposed facts of a situation. But throughout history, assassinations have more often than not been conspiracies. While some American killers—like “disappointed office seeker Charles Guiteau,” who shot President James Garfield because he didn’t get a patronage job—did the job themselves, John Wilkes Booth was not “acting alone” when he assassinated Lincoln, just as Brutus was the leader of a conspiracy to murder Julius Caesar.

Only 49 years before JFK was killed, numerous conspiring individuals with bombs and guns had stationed themselves on Archduke Franz Ferdinand’s path through Sarajevo in 1914 before Gavrilo Princip got him, setting World War I in train. Puerto Rican nationalists worked together to try to assassinate Harry Truman in 1950. Thus it was hardly irrational to inquire into the possibility of a conspiracy, especially since Oswald was a known Communist who had defected to the Soviet Union four years earlier before giving up and returning to the United States. Law enforcement always considers the possibility that more than one person is involved in a difficult-to-solve murder and sometimes finds a conspiracy at work. When the conclusion is otherwise, as it was with the Warren Commission’s finding in the Oswald case, it’s an easy leap for conspiracy-hunters to conclude that law enforcement must have been in on it.

The impact of the JFK assassination and its presence in our common cultural conversation did not wane over time, in part because assassinations and political violence started to become commonplace in its wake. It was the first in a series of high-profile murders or assassinations, or attempts thereof, that persisted for more than two decades.

The Kennedy assassination marked the turn as well to a period of volatility in American politics in a bizarre conflation of the civil rights movement, campus protest, early feminism, a new intellectual radicalism, and the escalation of and mounting opposition to the war in Vietnam—as well as resistance to all these trends.

There had even been a prologue to the Kennedy assassination some months before in 1963: the assassination of civil rights activist Medgar Evers, the NAACP’s field secretary in Mississippi. Evidence pointed to a member of the Ku Klux Klan, who in 1964 was charged and brought to trial. All-white juries twice failed to reach a verdict, and he went free. (In a controversial retrial in 1994, a mixed-race jury convicted Byron De La Beckwith of the murder.)

After Kennedy, the next high-profile American assassination was that of the militant black nationalist Malcolm X, in 1965. This was indeed the product of a conspiracy. Multiple gunmen opened fire on him as he was about to give a speech. In this case, however, the deed was a product of an internecine struggle, since the perpetrators were members of the Nation of Islam, from which Malcolm X had grown increasingly estranged in the preceding years.

The impression of the 1960s as an assassination spree solidified with the slayings of civil rights giant Martin Luther King Jr. in April 1968 and, mere months later, President Kennedy’s brother and former Attorney General Robert F. Kennedy, then himself a presidential candidate.

James Earl Ray, whose racist views were unconcealed, shot King with a high-powered rifle from a building across from King’s Memphis motel room. King and his colleagues had stepped outside onto the walkway of their second-floor room. A photographer who was staying in a room nearby heard the shot and rushed onto the walkway, where he captured an image of the mortally wounded King collapsed on the floor as members of his retinue, arms outstretched, pointed in the direction from which the shot came.

Riots broke out across the country, wreaking devastation in urban areas. Ray, who fled the scene but was quickly identified as the prime suspect, was apprehended abroad, traveling on a counterfeit passport, in June 1968. He confessed and was sentenced to 99 years, though he later recanted and unpersuasively alleged a conspiracy. In 1975, however, Americans learned that J. Edgar Hoover’s FBI had been surveilling King as part of its COINTELPRO (Counterintelligence Program) activities, which let loose a fresh torrent of conspiratorial speculation.

Bobby Kennedy was a senator from New York and, by June 1968, a leading candidate for the 1968 Democratic presidential nomination. On June 4, he celebrated his primary victories in California and South Dakota at the Ambassador Hotel in Los Angeles. As Kennedy and his entourage made their way out of the hotel through its kitchen shortly after midnight, Sirhan Sirhan, 24 years old, rushed RFK, shooting the senator three times, including once at close range in the head. Sirhan wounded several others before he was subdued. Photographers captured iconic images of a busboy kneeling next to the fallen RFK trying to comfort him. Kennedy died in a hospital 26 hours later.

Sirhan was a Palestinian Christian who had emigrated with his family from Jordan to the United States after Israel’s War of Independence. He was blunt about his anti-Semitic motive. As Sirhan saw it, RFK’s support for Israel in the Six-Day War in 1967 and for sending Phantom fighter jets to the Jewish state in its aftermath warranted his murder. Convicted at trial, he received a sentence of death, later commuted to life in prison. Though eligible for parole, he has been denied every time, most recently by Governor Gavin Newsom in 2023. His repeated motions for a new trial, in which he alleged that he had been drugged or brainwashed as part of a conspiracy, were likewise denied.

In May 1972, Alabama Governor George Wallace was on the presidential campaign trail in Laurel, Maryland. With television cameras rolling, Wallace took off his suit coat and began to work the crowd. Arthur Bremer, 21, stepped up and fired multiple times, gravely wounding Wallace, who survived but remained paralyzed from the waist down. The television footage, captured at close range, is graphic. Wallace falls to the blacktop on his back, and blood spreads on his white shirt. Bremer’s diary, which Harper’s published to substantial controversy as a self-portrait of a sociopath living in troubled times, claimed he had shot Wallace in pursuit of notoriety. Once again, conspiracy theories abounded, including one advanced by the left-wing literary provocateur Gore Vidal. He claimed the diary had been a plant by the Nixon White House. The jury rejected Bremer’s insanity defense, and he spent 35 years in prison.

Assassinations were only one part of the broader story of political violence in the United States and abroad in this period. U.S. troop deployment in Vietnam peaked at more than 530,000 in 1968, and protests began to accelerate. During the Democratic National Convention in 1968, the streets and parks of Chicago saw violent clashes between police and thousands of demonstrators protesting the war. The revolutionary Black Panther Party, which espoused a doctrine of armed resistance, was involved in shoot-outs with police in Oakland, Chicago, Los Angeles, and New Orleans. Members were also charged with plotting to plant bombs in public buildings. To “bring the war home,” the Weather Underground, a revolutionary spin-off of the left-wing Students for a Democratic Society, launched a bombing campaign targeting police stations and government buildings, including the Pentagon and the Capitol. Police who found themselves the target of rocks generally broke up protests with tear gas, but in the case of Kent State University in 1970, members of the National Guard opened fire on student protesters, killing four.

Nor was the United States alone in political violence. At the 1972 Summer Olympics in Munich, the Palestinian group Black September took Israeli athletes hostage and killed 11 with the world watching. “Bloody Friday” in Northern Ireland involved more than 20 separate bombings orchestrated in Belfast by the Irish Republican Army in little more than an hour. Prime ministers of Jordan and Spain were among the more prominent victims of assassins in 1971 and 1973, respectively. The first president of Bangladesh was slain alongside most of his family in a coup in 1975.

Meanwhile, in the course of less than three weeks in September 1975, there were two attempts on the life of President Gerald R. Ford. The first was by a follower of the notorious cult leader and convicted murderer Charles Manson. Lynette “Squeaky” Fromme pointed a gun at Ford but didn’t fire it. She said she wanted to draw attention to environmental causes. The second would-be assassin, Sara Jane Moore, who later said she sought to spark a violent revolution, got a shot off but missed. A man nearby grabbed her arm as she fired a second time, deflecting the shot, which wounded a bystander. Film crews captured both attempts, and the first impression the footage leaves, when viewed 50 years later, is of a sudden outburst of confusing motion. If one didn’t know what one was seeing, one wouldn’t. Fromme and Moore each received life sentences and won parole after serving more than 30 years. (Moore died in September at the age of 95.)

In the mid-to-late 1970s, the Red Army Faction in Germany murdered 34 politicians and industrialists, while the Red Brigades in Italy kidnapped and slaughtered leading Italian politician Aldo Moro. In the United States, following the resignation of President Nixon, the brief Ford administration, and the 1976 election of Jimmy Carter, American history journeyed through a truly dismal period, one that prominently featured the assassination of San Francisco Mayor George Moscone by political rival Dan White in 1978. Moscone had won the election only with the support of a radical minister named Jim Jones, who later fled to Guyana along with nearly 1,000 members of his People’s Temple. When Representative Leo Ryan went to the Jones compound to make sure his constituents weren’t being held captive, he was murdered on Jones’s orders. Jones then coerced his flock into consuming a poisoned fruit drink—a mass murder-suicide that took more than 900 lives.

The sense that America had been spinning out of control helped put Ronald Reagan in the White House by a staggering margin of 10 points and 44 states in 1980. Though a victory of such magnitude indicated an electorate deeply fatigued by the period’s malaise, there would be no instantaneous exit. Barely three months after Reagan took office, John W. Hinckley shot Reagan as he was leaving an event at the Washington Hilton. Network news cameras captured the shooting, and the footage aired within minutes. Reagan recovered, but his injuries were far more grave than initially reported. A jury found Hinckley not guilty by reason of insanity (he had committed the crime to attract the attention of the teenage actress Jodie Foster), and he was institutionalized at Saint Elizabeth’s Hospital in Washington and released in 2016. Federal law at the time of the shooting required the government to prove the defendant was compos mentis rather than requiring the defendant to prove he wasn’t. After the Hinckley verdict, lawmakers reversed the burden.

Less than two months later, Mehmet Ali Agca shot and critically wounded Pope John Paul II in Vatican City’s St. Peter’s Square. Video captured John Paul II collapsing in the open-air Popemobile as it sped off. Agca, a Turkish national, had previously been imprisoned for the 1979 murder of a Turkish newspaper editor. He then escaped. Agca told multiple conflicting stories about the motive behind the assassination attempt. Italian authorities quickly determined that Agca did not act alone. His lengthy stay in a luxury hotel in Sofia established a “Bulgarian connection” that pointed back through Bulgarian intelligence and perhaps the East German Stasi to the KGB—and thus to the highest levels of the Soviet Union. The danger the Polish pope posed to the Soviet bloc was undeniable, but Soviet apologists denied any such connection, of course, and the evidence was pooh-poohed or simply ignored by many on the grounds that it would aggravate U.S. relations with Moscow. The Pope, for his part, forgave Agca, met him in prison, and urged his release.

One more stop abroad will suffice in this account: In 1984, the Irish Republican Army set off a massive bomb targeting UK Prime Minister Margaret Thatcher in her hotel at a Tory party gathering in Brighton. It killed five people, and Thatcher herself narrowly escaped. Images of the hotel in the aftermath of the blast show a ragged V-shaped crater in the upper floors of the hotel and just to the left of the center of the façade. Patrick Magee, the IRA bomber, had planted the bomb and its timer during a stay at the hotel four weeks before. In this case, neither the perpetrators nor their motive was in doubt: The IRA issued a statement claiming responsibility and promising to try again. Police arrested Magee and other IRA members in London in 1985.

_____________

And then the assassination era came to an end, after two decades in which it was one of the dominating facts of our common life. Of course, political violence didn’t end altogether, nor will it ever. Consider the anti-government bombing of the Alfred P. Murrah Federal Building in Oklahoma City in 1995, which claimed 168 lives and injured hundreds more. Horrific it was, but thankfully, it proved to be a one-off. (The 9/11 attack six years later belongs in a separate category.)

The new source of recurring violent shock to the American psyche was the mass shooting, especially school shootings, which are distinctive not for high-profile victims but for the random ordinariness of the mise-en-scène. The Columbine High School shooting in Colorado in 1999 brought the matter home to the suburbs, where it remains. Anti-Semitic violence is a more recent recurring disruption.

Now, however, we are at least several attempts, some of them successful, into what may be a new era of assassinations. The dramatic near miss against Trump at a campaign rally in Butler, Pennsylvania, in July 2024 was Exhibit A. Next was a second, fortunately bullet-free, attempt on Trump at his golf course in Florida. Third was the slaying of UnitedHealthcare’s Thompson in midtown Manhattan in December 2024. Finally, and most dramatically, was the assassination of Charlie Kirk at a college campus event in Utah in September. Other noteworthy recent entries include the slaying of the Minnesota state house’s Democratic majority leader in June 2025, an aborted attempt on Justice Brett Kavanaugh in June 2022, and an arson attack in April 2025 on the governor’s mansion in Harrisburg, Pennsylvania, intended to kill the state’s governor, Josh Shapiro, as he and his family slept. At a further remove, mass shootings took place at a GOP congressional baseball practice in 2017 and at a constituent meeting in Arizona with Democratic Representative Gabby Giffords in 2011. Though some were wounded in these events, the lawmakers survived.

If a new era of assassinations is underway, it has not supplanted but rather overtaken the era of mass shootings. These have continued, with churches and Jews increasingly prominent among the targets.

But why assassinations then? And why now?

The potential victims of assassins haven’t changed. They are prominent individuals whom assassins have targeted specifically. (Political violence in the form of terrorism typically doesn’t have a particular individual as a target; its design is to terrify large populations.) Among the would-be assassins themselves, certain commonalities also emerge: a desire for notoriety, to leave an otherwise unattainable mark on history, and to pursue a political agenda.

On the latter, it’s worth noting that animus among the killer or killers toward the victim is about as close to an inescapable feature of assassination attempts as one gets. This is true of necessity in the case of a conspiracy. “Loners seeking notoriety” don’t work for groups operating secretly. But it must hold true for the loners as well. The prominence of the victim has specific qualities, and the murder, or attempt, can’t be separated from animus related to what has made the intended victim famous. Supposedly, John Hinckley was willing to try to kill Jimmy Carter, but he actually did try to kill Reagan. Bremer said he would kill Wallace or Nixon—but not George McGovern or Hubert Humphrey, the top two Democrats in the race for their party’s nomination. The efforts to deny the leftward orientation of the political motivation in the assassination of Charlie Kirk would be laughable were they not a symptom of our current era. In general, it’s hard to find a would-be assassin who professed undying love and support for the individual he was attempting to kill. The will to annihilate is specific—the target is not a president but this one.

If assassins are trying to change the course of history, which of course many are, they are attempting to do so by eliminating an obstacle that stands in the way of their vision, whatever it may be. The living JFK was an obstacle Oswald could and did overcome, leaving an indelible stamp. But how did history change? In ways we can never really know, and certainly not in ways that could be known in advance by an assassin. What if Lincoln or Kennedy had lived? The question invites those reflecting on it to project onto the past their current-day political preferences for how history might have been different. The deed may have been undertaken in pursuit of sweeping change, but in most cases, we are left with only the deed itself and the consequences that flow from it directly: better presidential security after JFK, the extension of Secret Service protection to presidential candidates after RFK, a national holiday and memorial on the National Mall for MLK. But would the Vietnam War or race relations have turned out differently? No one can know. The melodramatic assertion that the assassination of Franz Ferdinand caused World War I doesn’t survive the reality of a chain of decisions that could have gone differently after the assassination.

That political violence in the form of assassination has political motives, and that they are often wildly out of sync with what the assassination will achieve, are constants not just in the recent American experience but throughout history. The big difference between the late-20th-century era of assassinations and the present is that the former was largely a story of the targets and the perpetrators (whether an individual or a conspiracy). Now, however, the story is about the targets on one side—and the perpetrators (alone or in conspiracy) and the supporters of the perpetrators on the other.

Consider the JFK assassination. This is high history, an individual inserting himself indelibly into the nation’s story via the act of assassinating the president. The nation is an onlooker (which is the reason I made so much, in my brief catalogue of the previous period, of the visuals we have from these assassinations and attempts). We, the people, were not involved. We absorbed the information about events, and we responded accordingly, typically and normally with distress and outrage. Now, we mustn’t be naive. There were, no doubt, Americans whose black hearts welcomed the death of one or both Kennedys, and that’s likely all the truer in the case of King. But if so, they mostly kept it to themselves or articulated it only in the presence of intimates. You could say that the public square, notwithstanding the First Amendment and broader commitments to free speech, placed a cordon sanitaire around permissible opinion, keeping out such noxiousness as assassination celebration and consigning it to a fringe communicating through the mails with mimeograph sheets, and to private homes. A public culture of good manners also has the effect of cultivating well-mannered people and perhaps as well a moral sensibility of actual decency.

In the previous era of assassinations, Americans also had at their disposal a social resource that went largely unappreciated at the time—the ability to ignore. If you were the craziest person out of a million Americans in the 1980s, when there were 250 million Americans, you were pretty socially isolated from the 250 or so people who were just as crazy as you. Or make it the craziest in 100,000: isolated from your 2,500 peers nationwide. The latter might have proved sufficient for a gathering in a windowless big-city room. But that’s not quite enough to make a revolution.

Now, through social media of all kinds, the 2,500 worst among us can easily find and interact with each other on a regular basis, exchanging views on whom to hate and perhaps who constitutes the gravest peril to the life they want to live. But now this is not a matter of just a single set of 1-in-100,000 sociopaths, nor is it obvious that sociopathy becomes dangerous only as it affects the 1-in-100,000 worst. Perhaps 8 million to 10 million people in America have been or are incarcerated for violent crimes. Out of 260 million adults, that’s at least 1 in 50. Meanwhile, there are multiple overlapping and non-overlapping sets of sociopathic individuals based on the particulars of the sociopathy. In addition, the term “sociopath” may not describe a fixed quality, in the sense that one either is or is not sociopathic—or evil. Someone on the fence can be cultivated by a sociopath to turn sociopathic. One can even imagine an individual who has no intention of personally killing a member of some specified “out group” nevertheless encouraging someone else to kill through the mere addition of a “like” click on social media. In the context of terrorism, this process is generally known as “radicalization.” In the context of American polarization and the ways in which we increasingly dehumanize those with whom we disagree, we might call this “sociopathization.” I think, given recent examples, these processes do produce would-be assassins, including successful ones. But I also think they have produced something of significantly broader importance—in fact, the defining characteristic of the new era.

It’s the assassination fan base.

The wounded Reagan quipped to the lead doctor on his trauma team, “I hope you’re all Republicans.” What made the quip amusing is that both Reagan and the team knew it mattered not in the least whether its members were Republicans. The doctor, a Democrat, replied, amusingly but perhaps a bit solemnly, “Today, Mr. President, we’re all Republicans.”

I think most Americans would like to live in a world where such an exchange is still possible. I’m not sure it is.

A significant number of Americans took to Bluesky, TikTok, Reddit, and the streets to express their regret that Trump’s would-be assassins had been unsuccessful and to praise the assassins of Charlie Kirk and UnitedHealthcare’s Thompson. In the case of the latter two, many asked or offered their opinion on who should be next. (I won’t cite any examples. If you are at all online, you have seen them in abundance, and if not, you may want to spare yourself.)

At present, the assassination fan base is pretty much a left-wing subculture. So far, it has applauded attempts on the lives of a former president, a conservative activist, a corporate CEO, and a conservative Supreme Court justice. The closest thing on the right is the online coterie claiming that Trump supporters who stormed the Capitol on January 6, 2021, did nothing wrong, either because they were let in or were duped into entering by a government plot. But to speak up on behalf of J6 defendants, even to the point of alleging conspiracies, is not the same as celebrating the assassinations of Kirk and Thompson and lamenting the misses on Trump. I hope no comparable figure on the left becomes a target that thereby allows us to ascertain whether there is a comparable fan base for assassination on the right.

We should also note that even “lone gunmen, acting alone” have to get their ideas about whom to target from somewhere. They, too, have social networks, which likely traffic in in-group suggestions about who in the out-group are the worst of the worst. So we are now living in a political culture in which a would-be assassin can count on a social network for inspiration and an outpouring of public support after the fact. This is fertile ground for evil, perhaps because assassins always believe they are doing good. And we may be cultivating more and more of them.

This article was originally published on October 21, 2025 in Commentary.

The Good Books

26 Tuesday Aug 2025

Posted by Tod Lindberg in Commentary

≈ Leave a comment

Review of ‘13 Novels Conservatives Will Love (but Probably Haven’t Read)’ by Christopher J. Scalia

Christopher J. Scalia has written a book that takes a valiant stand against the self-obsessed screen-culture spirit of our times. It’s called 13 Novels Conservatives Will Love (but Probably Haven’t Read). Scalia, now a senior fellow at the American Enterprise Institute, is a former English professor, and he has a deep and abiding love for literature as well as an evangelical streak that compels him to spread the joy. “Why read fiction?” he asks, and replies, “Simple: great fiction is a source of beauty, and beauty is good.”

This sentiment stands sharply in contrast to the milieu in which Scalia’s book appears. For years now, it seems that every day has brought a new story about how young people find it hard to read at book length, so thoroughly steeped are they in other media, especially short-form video content on their phones. While some enter the rejoinder that old people have been complaining about the declining literacy of youth for generations, this observation doesn’t rule out the possibility that the literacy of youth actually has been declining for generations, with TikTok the impetus for the latest fall-off.

The arrival of AI in 2023 has made matters worse. Now you can get a summary of any book you want out of ChatGPT or Grok (though there is substantial risk AI will tell you something that isn’t true). Beauty aside, whatever utility once came from reading a book—say, the ability to write an assigned paper—can now be had by more efficient means. Nowadays, one can also bypass the summary and prompt AI to write the whole paper.

Under such circumstances, what is the utility function of reading? This is the first problem Scalia is up against. The second, which paradoxically points toward a solution to the first, is our gaping political polarization. With regard to literature, the leftward extreme has little to no use for works from the past, the authors of which suffer from the fundamental deficiency of bad character in the view of their modern readers—actually, their modern nonreaders—who feel themselves to be undeniably superior in sense and sensibility. At best, the malignant dead offer passages that can be pressed into service in support of one’s position in current controversies—a use of literature that is hardly novel, though rarely admirable.

A recent example is an article in the New York Times that recasts Jane Austen’s Mansfield Park as an extended hidden polemic against slavery. I’d say the urge to read Mansfield Park as an abolitionist tract begins with the recognition that Austen’s genius is undeniable—and therefore deserving of a context, however stretched, in which it can resonate with today’s bien-pensant opinion. Although this reading, if true, would reduce Mansfield Park to a middlebrow problem novel—which it isn’t—the bright side here is that a new reader looking for a self-affirming anti-slavery allegory might make the pleasant discovery of something different and better.

The ransacking of the past in search of good material has at least the virtue of proceeding from a core hypothesis that literature has intentional meaning accessible to careful readers. The denial of this proposition is an even worse practice of the literary left. I have a higher degree of tolerance for critical theory than most in my political demographic—but not to the point at which authorial intention and meaning get dismissed as inaccessible and irrelevant as reader confronts text. Yet that view began its takeover of mainstream academia more than two generations ago, starting with Roland Barthes’s 1967 proclamation of “The Death of the Author.”

Within a decade, deconstruction and other fashions of theory had become so dominant that two UK novelists and professors, David Lodge and Malcolm Bradbury, could put forth a years-long tag-team procession of campus novels hilariously lampooning the phenomenon and its practitioners. Theory fully flowered as an object of satire with John L’Heureux’s 1996 The Handmaid of Desire, in which an ambitious professor at a school resembling Stanford (where L’Heureux taught) plots to replace the English Department with the “Department of Discourse and Theory”—even as he keeps locked in a desk drawer a copy of Austen’s Emma, to which he secretly repairs in times of stress.

The combination of screen culture, runaway presentism, and the triumph of theory over author gives Scalia the opportunity he has seized: making the case that conservatives should devote some of their time and intellectual energy to conserving the literary tradition of the novel. Ruling out already well-known candidates such as the Austen novels, 1984, and Bonfire of the Vanities, Scalia has picked 13 entries for his list, offering for each a summary interpretive essay, including relevant biographical details, and consideration of how the work resonates with conservative sensibility. He spells out the elements of the conservative disposition he sees reflected in his selections as follows:

They include the preference for gradual social and political change over sudden innovation and revolution; the recognition of the imperfectability of mankind and the consequent dangers—and inevitable doom—of utopian projects; an inclination toward time-tested traditions over abstract theory and untested innovation; a respect for religious belief, particularly in the Judeo-Christian tradition; and an emphasis on the institutions of civil society, especially the family.

It’s off to the books, then, with chapters starting with Samuel Johnson’s 1759 Rasselas, Prince of Abyssinia, and proceeding chronologically to Christopher Beha’s 2020 The Index of Self-Destructive Acts. And here’s where a certain competitive streak, as well as a certain modesty, must kick in.

In Lodge’s 1976 novel Changing Places, which kicked off the duet with Bradbury, a young British professor of English, on exchange at a university resembling Berkeley, introduces his American colleagues to a party game from home. It’s called “Humiliation.” Players take turns naming a work they haven’t read, and they get one point for each player in the group who has read it. So to win, the humiliation one inflicts is upon oneself. In the Changing Places installment of the game, one assistant professor, seized by competitive spirit, blurts out “Hamlet!” No one believes him, but he swears an oath that he’s telling the truth, eventually storming out of the room over colleagues doubting his veracity. So, yes, a member of the English faculty who hasn’t read Hamlet. For his department colleagues, this is a bit much. He unexpectedly flunks his tenure review three days later and is driven into exile.

Scalia’s list immediately causes one to do a conservative literacy tally in line with “Humiliation.” To play this version, just give yourself a minus-one for each unread novel from the two above on Scalia’s list and the 11 following: Fanny Burney’s Evelina, Walter Scott’s Waverley, Hawthorne’s The Blithedale Romance, George Eliot’s Daniel Deronda, Willa Cather’s My Ántonia, Zora Neale Hurston’s Their Eyes Were Watching God, Evelyn Waugh’s Scoop, Muriel Spark’s The Girls of Slender Means, V.S. Naipaul’s A Bend in the River, P.D. James’s The Children of Men, and Leif Enger’s Peace Like a River.

I won’t tell you my score. But I will admit that I hadn’t read Johnson’s Rasselas or Beha’s Index before I agreed to review Scalia’s book. To check his work, I then did read them. It turns out that he’s a reliable and entertaining guide.

“Johnson wrote Rasselas over the course of a week in 1759 quite simply because he needed the money,” Scalia notes. And indeed, it reads like something written in a week by someone who needed the money—provided the someone in question was the towering literary figure of the eponymous Age of Johnson. In it, a young Abyssinian prince and his sister break out of their elegant captivity in the “Happy Valley” in the company of an older and wiser man, Imlac, who has traveled the world. He guides Rasselas in the search for his “choice of life.”

Rasselas is at times very funny. For example, Imlac tells the travelers about the time he once spent with one of the greatest astronomers in the world. The astronomer’s studies of the movement of celestial bodies have forced him to the conclusion that through his influence on them, he has the power to control the weather—though he has concluded it best not to do so. Seeking inner peace, the troubled astronomer solemnly transfers his unique power to Imlac. At the conclusion of Imlac’s tale, his auditors are amused to varying degrees by the madness of the astronomer. Imlac upbraids them: “Few can attain this man’s knowledge and few practice his virtues, but all may suffer his calamity. Of the uncertainties of our present state, the most dreadful and alarming is the uncertain continuance of reason.”

Scalia notes that the travelers’ journey becomes a stage for Johnson’s depiction of his ceaseless “belief in a universal human nature.” In the end, the journey is one toward an understanding of human potential and its perils, not a preparation for a culminating “choice of life.”

Of The Index of Self-Destructive Acts, Scalia notes that “Beha bristles when reviewers and interviewers compare him to Tom Wolfe.” But the family resemblance to Bonfire is unmistakable: money and influence in upper-crust New York City, ambition, selfishness, bad choices leading to the inexorable pressing in of fearsome consequences.

Though Wolfe’s powers of observation are keener, Beha has more psychological depth and wider intellectual range. That includes an exceptionally well-rendered character, Margo, a sometimes-aspiring poet whose interior monologues brim with unattributed passages from Wordsworth. She has set herself to the slow-moving task of seducing the novel’s married protagonist, Sam, for whom the affair is a close-run thing: “He was trying to do something impossible. He wanted to become someone else, but to do it while staying himself. He wanted to be the person who slept with Margo Doyle while remaining the person who was faithful to Lucy. It contradicted the foundational laws of Boolean logic.” Sam is a data journalist.

Scalia rightly calls Index “a novel about endings,” which he relates to George F. Will’s contention that the “foundational conservative insight” is that “nothing lasts.”

One could raise principled objections to Scalia’s project in its entirety. The appeal of great or even good literature is universal and should not be contingent on its consanguinity with the political preferences of today’s readers. We should read novels for their beauty and insight, not in search of affirmation of our pre-existing convictions. The problem is that while everybody used to think that, it’s now a view that many reject. Its remaining supporters are almost by definition culturally conservative.

Scalia didn’t pick this fight with progressive presentism, or with the threat screen culture poses to art. The fight began with an assault on the beauty and insight of the great “content creators” and “influencers” of the past. It’s ongoing, and Scalia is right to join it.

This article was originally published on August 26, 2025 in Commentary.

Reason to Believe

15 Thursday May 2025

Posted by Tod Lindberg in Commentary

≈ Leave a comment

Review of ‘Believe’ by Ross Douthat 

The title of Ross Douthat’s new book, Believe, is a verb in command form referring to God. Yet the ambition of the author is hardly on par with that of a revivalist preacher or a biblical prophet warning of God’s coming justice. Rather than a command, Douthat offers an invitation—to set aside the modern secular prism through which most of our assessment of contemporary morals, manners, and politics refracts and to reopen our eyes to the possibility of reasonable belief that God created our world and ourselves, has intervened in it from time to time in the past, may be doing so now, and may again.

Discourse that isn’t avowedly religious these days is instead thoroughly secular. Douthat, a believer and a columnist for the New York Times whose work often takes up religious matters, is the exception who proves the rule. I wouldn’t proffer the claim that no one else at the Times believes in God. But if they do, they certainly don’t let it affect their work. The world the journalists describe is one of natural causes exclusively, straight back to the Big Bang or some other “forever.” They believe human existence itself, including the thoughts we harbor and the actions we choose to take, emerges from an evolutionary process that began with life far less than human. History is one thing after another, perhaps bending toward progress, but certainly not providential. The Times will report as needed on opinions human beings have about supernatural or God matters—the fact that people believe. But the truth or falsity of any such belief is a nullity with regard to explaining how the world works.

For example, as Mark Lilla remarks in a discussion of modern religious revivalism in his recent book Ignorance and Bliss, “A national poll in 2012 revealed that well over half of all Americans believe in the possibility of demonic possession.” He notes, “The exorcist for the Archdiocese of Indianapolis told a journalist in 2018 that he had received seventeen hundred phone or email requests for exorcisms in that year alone.” He adds: “Which is madness.” I have little doubt Lilla is mostly correct in his judgment. But entirely? It falls to Douthat to inquire whether one reason the numbers are rising is that demonic possession is real. He holds that possibility open—and even cites the profusion of such beliefs and cases in a supposedly secular age as evidence that we are missing something. That something would be the ongoing “enchantment” of a world supposedly “disenchanted,” that is, done with God and the supernatural.

_____________

Douthat asks us to conduct a thought experiment: Imagine living in a world in which pretty much everybody believed in God. This is not necessarily a closed-off and benighted Dark Age in which all secular activity must defer to ecclesiastical authority. It’s also the predecessor to our world now. You could pray to God for a good harvest, but doing so would not relieve you of the responsibility of being a competent farmer. You could wonder at God’s creation while rigorously investigating how it works, from the movement of celestial objects to the workings of the human body—in order to more fully appreciate and give thanks for God’s handiwork.

His conclusions from this thought experiment are twofold. First, there is nothing essentially incompatible about a world of belief and a world in which science and technology proceed apace. Second, the conclusion that secularization is an irreversible process that has permanently supplanted belief in God among rational human beings is nonsense. God remains a distinct possibility.

The first chapters of Believe spell out Douthat’s case for why believing is reasonable and indeed preferable to nonbelief. What caused the Big Bang, that light in the void? What caused a lifeless world to sprout vegetation and, a couple of days or eons later, to teem with living creatures? How is it that humans have consciousness and minds, including free will? Douthat is neither a physicist nor a biologist nor a neurologist investigating the workings of the brain. Rather, he is a journalist of an endangered species, endowed with seemingly limitless skeptical curiosity to find out as much as he can about subjects that really matter. He has read widely and deeply enough to bring others up to date on the latest science and its compatibility with belief. In other words, he has updated the maps of the various cul-de-sacs in which science and reason find themselves in their search for a godless explanation for all that is—the final ruling out of the Almighty.

At times, however, Douthat wants to go further, inferring the necessity of a designer from the appearance of design in the makeup of the universe, life, and mind—that is, a rational human necessity to believe. Science has shown that the universe is constructed according to such rigorous specifications that if even one step in the instruction manual had varied infinitesimally, the whole thing would be impossible. Without an omniscient and omnipotent creator, there could be no creation or universe at all. This argument attempts to fill with a logical leap the gap between the limit of what science can aspire to know and a creator-god.

The problem here, as I see it, is that it makes no sense to speak of our universe in terms of its probability or improbability. It’s here, and we’re in it. Though those of us so inclined should investigate its workings as thoroughly as possible, its factuality is self-evident, requiring no explanation by us. William James, in Varieties of Religious Experience, recounts how the 19th-century transcendentalist philosopher Margaret Fuller declared, “I accept the universe”—to which her contemporary Thomas Carlyle immortally responded, “Gad, she’d better.” Even the transcendentalists understood they had little choice in the matter. We may believe the universe exists because God created it, but knowledge in the sense of empirically verifiable science eludes us, and the universe rolls on.

_____________

Thus we have the broad contour of Douthat’s case that “the basic justifications for a religious worldview are readily accessible to a reasonable human being.” But his book also has a subtitle: Why Everyone Should Be Religious. That raises the thorny question of which if any religious tradition to embrace beyond one’s personal faith in God. Douthat is Roman Catholic, but he states in his introduction that “my aim for this book is to be useful to readers who might take many different religious paths.” He saves his Catholic apologetics for the last chapter and presents it as a case study of the broader faith formation for which he advocates. He allows for readers who will “choose to close the book just before that chapter.”

At this point, Believe becomes something of a self-help book. Its hypothetical audience comprises those readers Douthat has successfully persuaded of the reasonableness of belief to the point of their actual belief. What then? Douthat imagines a “Spirituality” section of a secular bookshop. Side by side are works on Judaism, Christianity, Islam, Buddhism, Hinduism, Mormonism, the occult, demonology, Wicca, astrology, and more. His advice for pilgrims in choosing a book is to look at those representing the biggest religions first—simply because their success in winning adherents suggests that they’re on to something. And because his case for God opens the door to demons that Mark Lilla keeps shut, Douthat warns off those embarked on his self-help journey from dabbling in the occult and other forms of asking for trouble.

Douthat explicitly acknowledges that he is at risk of “perennialism” here—that is, of judging all major religions to be converging on “permanent truths about God and the cosmos” and perhaps as well on a set of common moral and ethical teachings about the good life and how to lead it, quite apart from the particular revelations at the core of each tradition. His book is at its least attractive with this guidance for seekers: “If you find the general case for faith convincing but Islam’s traditional attitude toward women retrograde or the Catholic Church’s teaching on, say, masturbation ludicrous, then you should seek out the forms of religion that agree with you, build them up and let them try to build you up; become the change you seek in the religious world.”

Taking this statement more seriously than it deserves, I’d say it smacks a bit of reserving for man the right to show God who’s boss—which invites the divine rejoinder “Where were you when I laid the foundations of the earth?” And, of course, promoting belief in the premise underlying God’s question to Job is, after all, Douthat’s main point. He’s just too nice to say: Disobey God’s law and his prophets if you wish, but be prepared for the possibility of consequences.

As to where the foundations of the earth came from, that’s not something science or logic can tell us. But apropos of Douthat’s primary contention, it’s entirely reasonable to believe the answer is God.

This article was originally published on May 15, 2025 in Commentary.

A Theory of Rawls

10 Tuesday Dec 2024

Posted by Tod Lindberg in Commentary

Review of ‘Liberalism as a Way of Life’ by Alexandre Lefebvre

Alexandre Lefebvre’s Liberalism as a Way of Life belongs to the school of Anglo-American political philosophy whose defining figure was Harvard’s John Rawls, author of A Theory of Justice and Political Liberalism. The Rawls school views the pursuit of justice as the cornerstone of a liberal society. But Lefebvre’s insightful account is also something of a departure, an original and at times exciting contribution to our understanding of liberalism—in the classical as opposed to the partisan political sense.

A professor at the University of Sydney, Lefebvre describes himself as a “liberal all the way down.” For him, as for Rawls, in deciding on laws and social arrangements that can perpetuate a just society, we must place ourselves behind a “veil of ignorance” in “the original position” of a member without attributes of such a society—that is, without knowledge of one’s place in it, whether one is rich or poor, favorably endowed with genetic and environmental gifts or encumbered by their absence. From this position, Rawls argues, reasonable people—knowing nothing about where they would fall in such a society—would write its rules in such a way as to favor the least advantaged among them, because once the veil is lifted, they could find they occupy exactly the least advantaged position.

No society has ever been created from such a premise, of course, and from a certain angle, it could look as if Rawls was rejecting all claims of justice on behalf of societies—or nation-states—that fail his test of putting the least advantaged first. Thus, one could read Rawls—and many did—as calling for radical reform and repudiating the legitimacy of states organized according to other priorities. But that’s not how Rawls saw it. His liberal theory of justice didn’t encompass a corresponding theory of injustice, according to which all societies reflecting deviations from reasonable conclusions behind the veil of ignorance were so disconnected from justice as to warrant condemnation. He thought they could improve.

In any event, Lefebvre notes, Rawls himself became somewhat dissatisfied with A Theory of Justice—notwithstanding its colossal success in his field and its standing as perhaps the most influential work of political philosophy of our time ever since its publication in 1971—on the grounds that it was “unrealistic.” So he turned to another question in his later work, Political Liberalism (1993). “How is it possible,” Rawls asked, “for there to exist over time a just and stable society of free and equal citizens, who remain profoundly divided by reasonable religious, philosophical, and moral doctrines?”

The answer is that competing but reasonable “comprehensive doctrines” at work among people could yield to a “liberal political conception” in which big-picture doctrines would be respected insofar as they were reasonable. For Rawls, a comprehensive doctrine is anything that spells out the details of how to live a good life: as an Orthodox Jew, say, or an Opus Dei Catholic, or a Communist, or a cultivator of Aristotelian virtue. Political liberalism would never seek the status of a “comprehensive doctrine,” but it could be the organizing and limiting principle according to which adherents of various doctrines could live in a stable society of free and equal citizens.

Lefebvre writes about Rawls’s evolution as a thinker very well. But his distinctive achievement is to note that nowadays Rawls needs turning around. That’s because, he says, “decade by decade, year by year, and day by day, liberal ideals and sensibilities have spread to every nook and cranny of the background culture of liberal democracies.” Further, “so ubiquitous is liberalism that it has performed that special trick of disappearance achieved only by omnipresence: to have become invisible by infiltrating everything.” Lefebvre’s conclusion: “Love it or hate it, we all swim—we positively marinate—in liberal waters. And here is my critique: the firewall that political liberalism draws between comprehensive doctrines and a liberal political conception obscures this changed landscape.” Lefebvre doesn’t quite say that liberalism has become a “comprehensive doctrine,” indeed the defining comprehensive doctrine, of modern democracies. But he ought to have.

_____________

It is true that liberalism offers no single formula for how to live a good life of the sort that once characterized states and their gods or ideologies. In that sense, it is not “comprehensive,” leaving to individual judgment or conscience many important questions about how to live. But liberalism does include at least one doctrinal element that overrides any and all presumptions of any and all comprehensive doctrines that might contradict it. That is the “reasonableness standard.” Liberals insist that adherents of comprehensive doctrines, whether they count themselves liberal or not, be “reasonable” in their adherence. Indeed, the word is central to Rawls’s research question—so much so that one could say he slipped his answer into the question itself. An unreasonable “comprehensive doctrine”—that is, a coercive or violent doctrine—cannot be part of a “just and stable society of free and equal citizens.” It is up to these contending comprehensive doctrines, almost all of which have historical associations with coercion and violent propagation, to modify themselves as necessary to become “reasonable.” Whether their adherents profess allegiance or opposition to a liberal political conception, their behavior must conform to it, or there will be adverse consequences for them.

Lefebvre has the acuity to see that, generally speaking, the behavior of individuals and organized groups in modern democratic societies is liberalism-compliant. He addresses his book mainly to those who identify themselves as through-and-through liberals in the sense of both the Rawls of A Theory of Justice and the Rawls of Political Liberalism—people who are, if not “liberals all the way down” like himself, then most of the way. Unfortunately, this nudges him into two related observational errors. As Peter Berkowitz notes, Lefebvre is too stingy in recognizing the genuineness of the liberalism of people who, for whatever reason, don’t identify themselves as liberals. As an exercise, Lefebvre would like liberals to imagine themselves in the “original position” when thinking about the fair distribution of social goods. That might be sufficient for a certain subset of liberals of the left-progressive sort, but if those were the only people practicing “liberalism as a way of life,” there would be no book to write about a society imbued with liberalism. The point is that most conservatives, most Orthodox and other Jews, most devout conservative and liberal Catholics, most evangelical Protestants, and many other non-progressives in Western societies, are nevertheless practicing liberals in daily life. They follow their “comprehensive doctrines” in a reasonable way—which is to say, within the overriding noncoercive liberal comprehensive doctrine. Although it may be tough for those steeped in the ways of Anglo-American political philosophy to accept, even the vast majority of Trump supporters are functionally liberal—not those who stormed the Capitol or those who think storming capitols is a good idea, but most everyone else. It may also be tough for people who spend their lives theorizing politics to accept that many Americans and other denizens of modern society don’t care very much about politics at all, and that’s fine.

His second observational miscue lies in his characterization of the gap between liberalism as the pursuit of justice or fairness and the actuality of the liberal world we live in. Rawlsian justice is what “liberals all the way down” want; but “liberaldom,” in Lefebvre’s coinage, is what we (all of us, whether we want it or not) actually have. The liberalism of liberaldom falls far short of what sweet reason would yield behind the veil of ignorance. In his own characterization, Lefebvre’s liberaldom is to liberalism as Kierkegaard’s “Christendom” is to Christianity—a complacent world in which we are all failures unable to live up to our professed ideals. In his view, liberals in liberaldom have much to answer for. He admits that he and his wife “spend a lot of money to send our daughter to private school” for the additional opportunity it provides, even though that might betray the egalitarianism required by Rawls’s conception of justice. “Now it’s your turn” to start your self-help program by making your own necessary admissions, he admonishes his liberal readers.

That is refreshingly honest, even though what such admissions really serve to illuminate is a key problem with Rawls’s approach to the pursuit of justice—which is that we will always strive to do more for those we love and who are in our own personal care, and that there is nothing unjust or immoral in that. Lefebvre is accordingly a little too hard on himself and on liberalism as we live it.

This article was originally published on October 15, 2024 in Commentary.

Moyn v. World

10 Tuesday Dec 2024

Posted by Tod Lindberg in Commentary

Review of ‘Liberalism Against Itself’ by Samuel Moyn

Samuel Moyn was born in 1972, which was, in its way, perfect timing. There can be no doubting his youthful precocity; his writing bears traces of it to this day. But even a precocious child of the 1970s and ’80s couldn’t have had much in the way of direct contact with the social and political upheaval that gripped the United States and the West in the 1960s and 1970s—to say nothing of the real-time controversies and choices in the aftermath of the Second World War.

The world emerged from that war horrified not only by its devastation but also by the stark realization of just how awful were the possibilities of man’s inhumanity to man. True, the right side won. But the war itself, the Holocaust, the rapid dissolution of a wartime alliance with the Soviet Union into a Cold War in which the Soviet side pursued a totalitarian form of global ideological and political dominion—all this left serious people wondering whether the horror of mid-20th-century Europe was past, or merely prologue to something worse. The Soviet Union, having established its dominance in Central and Eastern Europe at the end of the war, sent dissidents to the Gulag at home, smashed an uprising in Hungary in 1956, put missiles in Cuba in 1962, and crushed the Prague Spring in 1968.

Moyn, now the Chancellor Kent Professor of Law and History at Yale University, knows this history—as history. But he was about seven years old when the Soviet Union invaded Afghanistan, the Marxist Sandinistas came to power in Nicaragua, and the revolutionary regime of Ayatollah Khomeini took American diplomats hostage in Iran. Did he get to stay up late in the last year of the Carter administration to watch ABC’s America Held Hostage at 11:30 Eastern? I don’t know. What I do know is that by the time Moyn got to college, the Berlin Wall had fallen. And by the time he graduated, Germany was reunited—a geopolitical fact that troubled the sleep only of those on the Soviet side of the wartime alliance that had defeated German fascism less than half a century before. Also during Moyn’s college years, the Warsaw Pact dissolved, the Soviet Union broke up, and the Baltic states and Ukraine (among other former Soviet Socialist Republics) became independent.

Now, if you lived through much or any of what transpired en route to the amazing collapse of the Soviet Union, you might have said something along the lines of “Whew, close call.” Or even, “Thank God.” But if, like the Chancellor Kent Professor of Law and History at Yale University, you missed all that and only read about it later, you can simply take as a given the victory of freedom, democracy, the West, whatever. That’s what happened, after all. And without a glimmer of gratitude or even apparent awareness of what you’re doing, you can move on to your self-admiring excoriation of the supposed intellectual and moral failings of those who took the side of freedom, democracy, the West, whatever.

Liberalism Against Itself, Moyn’s new book, presents the story of how a group of intellectuals—the “Cold War liberals”—struggled to grasp the situation of the world in the two decades after the Second World War and ended up betraying liberalism and the principles of the Enlightenment in a way that fundamentally narrowed the vision of and ambition for human political action in pursuit of progress.

In Moyn’s estimation, Enlightenment-inspired liberalism has yet to recover from this Cold War betrayal and may never do so. Nor does liberalism necessarily deserve to recover, such were the hideous and unnecessary transfigurations the Cold War liberals wrought. All this took place, by the way, even before the Cold War liberal tendency split and “collapsed” into what Moyn views as the sibling depravities of neoliberalism and neoconservatism. The only real hope is that a new generation of thinkers (oh, I see Samuel Moyn has his hand up) will repudiate the narrow vision of Cold War liberalism and attach us to the Enlightenment’s radical faith in human possibility via politics.

Moyn is well-read. But he is less interested in understanding the thinkers he analyzes than in prosecuting the case against them—or rather, against the Weberian “ideal type,” the “Cold War liberal,” he has dragooned them into representing. Many elements of this ideal type will be familiar to anyone with even passing acquaintance with what intellectuals were arguing about in the postwar period. Yes, it is true, many of the thinkers of the day developed a deep antipathy toward collectivism and a regard for individual liberty as the best of the liberal tradition. Yes, among the Cold War liberals, there developed a philosophical “anti-canon” that generally began with Rousseau and extended through Hegel and Marx into its real-world manifestation in Soviet Communism and, especially, Stalinism.

Yes, the Nazi regime, though drawing on different sources as well, did have “totalitarianism” in common with the Soviet Union, in the view of the Cold War liberals. Yes, the preservation of individual liberty against the danger of this totalitarianism looked to be Job One. Yes, the United States was the locus of resistance. And yes, the Cold War liberals saw this as a struggle between good and evil.

They rejected (though I think it’s fair to say they nevertheless feared) the historicist claim that Communism was the inevitable victor in the contest with democratic capitalism or democratic socialism—more broadly between totalitarianism and the “Free World.” They also rejected the relativist tendency of the strain of historicism incapable of drawing a distinction between good and bad in politics. The West, they believed, really was worth preserving on the merits. And they rejected the view that the human being was perfectible through political or any other means. On the question of whether the human was permanently and inescapably dark, they differed. But they had in mind, above all, preserving whatever good there is in the human.

The Cold War liberals, like every generation of intellectuals before or since, also had the intellectual fashions of their times to contend with, as well as ample personal vanities generally stemming from the conviction, not wrong, that they were smarter than everybody else. Moyn pretty much has the same bias in his own favor, which is not supported by the text he has produced. Indeed, Liberalism Against Itself is a shambles in many, many ways—literary, intellectual, political, and especially moral. It’s organized into chapters featuring the names of (I presume) Moyn’s eccentric short list of leading Cold War liberals: Judith Shklar, Isaiah Berlin, Karl Popper, Gertrude Himmelfarb, Hannah Arendt, and Lionel Trilling. In the hands of a deft writer, this approach of connected intellectual profiles can work well—it does in Mark Lilla’s The Reckless Mind, for example. But here, why he chooses these six and not others is murky, and Moyn’s consideration of them spills from chapter to chapter often seemingly on the basis of when something pops into his head. I have edited numerous books, and at around page 25, I found myself grumbling about what a shame it is that nobody edits books anymore. By page 50, I was struck with the harrowing thought that the published version of the book appears after, not absent, heavy editing.

He is a scrutineer of ephemera par excellence. Does it really matter that Hannah Arendt may never have read Judith Shklar’s After Utopia, something Moyn deduces because Arendt’s library contained a copy of the book with no handwritten notations? He includes a reproduction of the typescript contents page of Shklar’s doctoral dissertation, for example. He does so, I think, in an effort to vivify his discussion of how the structure of her dissertation changed from its submission to its publication as After Utopia. There is a kind of filial piety here with regard to Shklar, but in the case of all those subject to his criticism, his scrutiny seems so small-minded that it all but rehabilitates their weaknesses. By the time Moyn is done attacking Lionel Trilling for his embrace of Freud’s dark view of human nature, for example, I was almost ready to give Civilization and Its Discontents a fresh hearing.

But the infamia Moyn pronounces on the Cold War liberals is not, in the main, related to the trivialities that manage to bog down a book of merely 170 pages plus notes and index. Moyn’s indictment is that their fear of the collectivism of Soviet Communism was so exaggerated that they were willing to abandon and attack the more ambitious Enlightenment project of human perfectibility through political action in favor of acquiescence to and defense of an American and Western individualist status quo shot through with injustice.

Let me reframe, as the structuralists say. What this book actually argues, though its author does not know it or want it to be so, is that the Cold War liberals grasped the most pressing moral problem and political challenge of their lifetime with unwavering clarity. They understood that Communism, like Nazism, was evil, and that freedom, which starts with individual liberty, is good in itself, but fragile. They recognized that the ambition on the other side was total—that is, totalitarian—and in hot pursuit of global victory, both ideologically and politically. They sought to thwart this victory as best they could in their area of comparative advantage, the life of the mind. They did so in part by defending the values of individual liberty embodied in the United States and the West but not the Soviet bloc.

Most of them recognized that the actualization of liberty in the West was incomplete, but that its opponents were out to crush it in its entirety. They argued all this out among their intellectual peers while such characters as Dean Acheson and John Foster Dulles were busy elsewhere. And they did all this without knowing whether freedom would persist in the West against a permanent adversary (the optimistic view) or would fall to decadence and a radical onslaught at home or in a nuclear holocaust.

But Moyn knows how the Cold War turned out, and to him, the outcome seems so obvious that everybody at the time should have been able to see it coming. Facing down an opponent bent on remaking politics into a collectivist enterprise under, say, Stalin’s dictatorship, why stick to a defense of individual liberty against the collective when you could embrace a more positive Rousseauian project of collectively removing the chains in which the human birthright finds itself?

Moyn can imagine no intellectually or morally satisfactory answer to this question. His foray into the writings of his subjects is for the purpose of framing the inadequacy of their stance, not to understand it. Yet there is an answer common to his subjects, one that makes sense in their times and ours. It’s that freedom is often first on the chopping block among those who presume to know and speak in the name of the “general will.” The intellectuals of Cold War liberalism got that right, and their greatest legacy ought to be an awareness of the need to preserve individual freedom while pursuing political improvement, lest the “improvement” take an oppressive, totalitarian, or even genocidal turn. Whatever they got wrong, they were right about that.

A Russian invasion of Ukraine and a Hamas massacre in Israel perhaps serve as a reminder, to those born too late for the last round, that the defense of freedom under attack is a permanent political challenge. Anything resembling progress in politics is contingent on human beings, including intellectuals, rising to the occasion. Surely this should not be beyond the grasp or beneath the amour propre of the Chancellor Kent Professor of Law and History at Yale University.

This article was originally published on January 15, 2024 in Commentary.

What It Means to Be Better

10 Tuesday Dec 2024

Posted by Tod Lindberg in Commentary

Values are central to American foreign policy, and there’s no use pretending otherwise

A consistent point of contention in the debate over American foreign policy has to do with the respective roles of American interests and American values. On the center-left in the United States, it’s common practice simply to assert that American interests and values are, if not one and the same, at least in substantial accord. It is a view held by many on the center-right as well. But it has come under challenge in recent times by those on the right who are seeking to clarify and simplify matters by chucking values (viewed as sentimental self-indulgences) out of the debate in favor of strict calculations of national interest. In dueling manifestos released over the past year and a half, “national conservatives” and “freedom conservatives” have laid out contrasting visions for the future of the United States. But though they differ in many ways, both make the advancement of U.S. national interests the top American priority in its relations with the rest of the world.

Now, the national interest is, of course, something every state pursues by definition. But the course of global events sometimes imposes choices on countries with an inescapable moral or values component—choices that have no less urgency than questions of national interest. One such event was the unprovoked Russian invasion of Ukraine in February 2022. Another was the Hamas massacre of civilians in Israel on October 7. In both cases, opposing sets of values were clearly on display. One set seeks the obliteration of an enemy and is more than willing to attack civilians in pursuit of that end. The other seeks an end to such wanton aggression. Those not directly involved in these conflicts are forced to decide whether to take a side, and if so, which one. This is a values question as much as a question of national interest. Opinion polls show that Americans support Ukraine and Israel rather than Russia and Hamas. Moreover, the situations of Ukraine and Israel, as victims of barbarous attacks, more closely align with American sympathies than those of Russia and Hamas as perpetrators. The question of what practical policy choices and real-world involvement the country should engage in when it comes to these matters is one thing. But the values choices Americans have made—and not just Americans—are inescapably part of the calculation.

Even when we think about American interests, the modifier “American” carries a lot more weight than it would in any other case where we’re simply describing a place on a map. Yes, American interests should be framed around the country to which the interests belong, namely, the United States. But more than just a geographical or sovereign tag, “American” also refers to a set of values that shape what American interests are and how to pursue them.

Contrary to what one typically hears from today’s self-styled “realists,” recognizing both the centrality and usefulness of values in American foreign policy is in no way a new thing, or a post-9/11 fantasy, or confined to starry-eyed liberal internationalists. The great classical realist Hans Morgenthau, who died in 1980, believed that national interests needed to connect to a national purpose and stated that a nation should “pursue its interests for the sake of a transcendent purpose.” Even Morgenthau’s most famous disciple, the realpolitik master Henry Kissinger, closed his 1994 masterwork, Diplomacy, by making the case—surprising, coming from him—that American foreign policy needed to remain grounded in national values and ideals. Kissinger declared that America “must not abandon the ideals which have accounted for its greatness” and that “for America, any association with Realpolitik must take into account the core values of the first society in history to have been explicitly created in the name of liberty.”

For these and others like them, it never was the case that values had no place in American foreign policy. Indeed, as these examples suggest, they believed values were foundational. Rather, the complication involves what our “values” are and, in a world with finite resources and capabilities, how far we can go in trying to spread them to other places and how to weigh that effort against competing priorities. While we may (and do) dispute how policymakers come down on these questions in particular instances, any attempt to avoid or evade these essential questions is preposterous. We can’t decide what to do without thinking about what we should do, and the values we hold will by definition figure in this task.

_____________

So what, then, are the values that the modifier “American” implies? Or to borrow from Morgenthau, what is that “American purpose” to which our interests should be connected? While Morgenthau suggested “equality in freedom,” the truth is that we should look to an even more fundamental element from which equality and freedom both spring—and that is, simply, human dignity. At our nation’s very beginning, the Founding Fathers articulated in the Declaration of Independence what would become our nation’s core value proposition: that “all men are created equal, that they are endowed by their Creator with certain unalienable rights.” By this, the Founders were expressing the (quite literally) revolutionary idea that each person has an intrinsic dignity, something given to them “by their Creator” rather than another human, and therefore a quality inherent to their very being.

Because this dignity is not bestowed by any other person, it cannot be taken away by any other person. In the view of the Declaration, it belongs inherently and equally to all as a gift from God. Many secular accounts of equal dignity belonging to all human beings have also been proffered over the years, offering those who prefer one a this-worldly ground for ideas about rights. Whether God-given or otherwise, from this equal dignity flows to each person a set of “unalienable rights,” at the core of which are “life, liberty, and the pursuit of happiness.” Rather than the state being the force that gives worth and meaning to each person, it is instead the individual that gives purpose to the government, which is here to protect those unalienable rights owed to each because of their equal dignity.

It is particularly telling that the Founders chose to start with this values proposition, and that only after establishing it did they move on to discussions of issues more commonly associated with core interests. Far from seeing values as a liability or an afterthought, our Founders rightly understood that values served as our greatest strength, and that our interests, both personal and political, flow from them, not the other way around.

In doing so, the Founders illuminated a profound but underappreciated truth—that national interests are intricately intertwined with national values. It is not, then, just the United States that must grapple with questions of values. Every nation—and every non-state actor with political aspirations—must do so as well. Each such state or actor must define its interests and the methods through which it chooses to pursue them in the context of some values framework. The inclination of many in the foreign-policy establishment to bypass this central fact creates the unfortunate tendency to assume a “moral equivalence” in the pursuit of national interests. They seem to believe that each country is just doing what every other country is doing.

The truth is, not every nation’s values, and thus the interests it chooses to pursue, are of equal moral standing. Some are better than others. While it is not unique to the United States for national values to affect national interests, what is uniquely (or at least distinctively) American is to have the values framework grounded so definitively in the principle of human dignity. And this very particular American values framework is in fact a superior one compared with the values frameworks held by such threatening geopolitical competitors as China, Russia, Iran, and Hamas.

Take, for example, three principles commonly associated with core national interests—security, freedom, and prosperity, the protection of which is a core responsibility of the state. These principles correspond at the state level to the Declaration’s enumeration of “life, liberty, and the pursuit of happiness” as rights belonging to individuals.

At an abstract level, all states pursue security, freedom, and prosperity. But what Americans mean by security, freedom, and prosperity is very different from what China’s Xi and Russia’s Putin mean. “Security,” for example, can mean merely the security from the “state of nature” that Hobbes found in a Leviathan state. Or it can mean the security of the citizens of a state against invaders through a strong military capable of deterring or defeating an invader. Security in either or both of these senses is something Putin or Xi would have no difficulty embracing. But security in the sense of a set of rights that inhere in the person and that the state is bound to intervene to protect when someone seeks to violate them—that indeed, the purpose of the state is to protect the security of individuals in exactly this sense—takes us to a richer place, one where Xi and Putin cannot go.

Similarly, “freedom,” in international-relations terms, means that a state should be able to pursue its own course without interference in its internal affairs from others. This is a matter of “sovereign right,” and Xi and Putin claim to be leading defenders of this aspect of statehood against meddlesome outsiders. Freedom in this sense is not just a matter of principle; it requires a nation to possess the strength that will prevent outsiders from interfering. The United States would agree. But though we have here reached the limit of what Putin and Xi mean by freedom, we have not exhausted its meaning to the United States as something to preserve. Once again, the United States values freedom as the condition of individual liberty Americans enjoy by right—and which the state has a constitutional obligation to protect. In Xi’s China, ethnic and religious minorities are rounded up and subjected to unthinkable atrocities. In Putin’s Russia, political opponents are poisoned with lethal nerve agents and citizens are conscripted into a war of aggression. To Hamas, “freedom” appears to be impossible without the destruction of Israel and the elimination of Jews from the Middle East. While it is a truism that all states, including the United States, can improve on their human-rights records, it is simply true that some states have far more improving to do than others.

Even in the narrow sense of sovereign freedom, it is noteworthy that Russia and China demand it for themselves but deny it to their neighbors, whom they seek to dominate. But doesn’t the United States seek, as a global hegemonic power, to do the same, and not just by flexing its muscle, but by enticing others to embrace its values? Perhaps—but not all hegemonic powers have the same values, and the substance of the rights America champions is distinctive. The United States is of the view that “freedom” is something other states should choose to protect and preserve among their own people, not only in the collective sense of freedom from dominance by others, but in the pursuit of individual liberty free of state control.

Finally, “prosperity.” That the United States, China, and Russia wish to be nationally prosperous is not in question—not least to pay for the security that protects their freedom, whether in a narrow sovereign sense or in the richer American sense. True, Putin has turned out to be rather self-destructive in this regard, inflicting economic misery not only on the mass of the Russian people but also on his “oligarch” elite through sanctions following the invasion of Ukraine. In China’s case, however, one can truly marvel at how much increased economic freedom over several decades has done to improve the lives of hundreds of millions of Chinese people (while disproportionately enriching a favored Chinese elite as well, to be sure). So it is that securing the ability to pursue “prosperity” is, once again, not merely a national aim but, aspirationally, also an individual endeavor.

We have entered a new era of great power competition and violent challenge, which once again is at its core a dispute between fundamentally incompatible values frameworks. Rather than seeking refuge in an abstract neutral standpoint that ignores major moral differences, conservatives should unashamedly make the case that American foreign policy should protect and advance American values, and that these are not the same values all other countries—particularly our chief competitors—seek to protect and advance. While the United States has been guilty at times of a flawed application of its values, the values of Xi, Putin, Iran’s leaders, and Hamas are simply fundamentally flawed.

We are better.

This article was originally published on December 15, 2023, in Commentary by Tod Lindberg and Corban Teague.
