Liberty News

  • Why the Hawks Are Wrong about China Too    (Doug Bandow, 2019-11-14)
    Some opponents of U.S. interventions in the Middle East don’t mind endless wars. They just think America needs to undertake a genuine Asian “pivot” or “rebalance” to counter China. For them, it is only a matter of which enemy must be fought.

Certainly the Sino-American relationship has become more fractious, with the Trump administration plotting geopolitical as well as economic confrontation. Every new dispute seems to lead to calls in Congress for additional sanctions. Some policymakers imagine a new Cold War and perhaps even military conflict. Both at home and abroad, the government of the People's Republic of China (PRC) is doing much harm. However, confrontation for the sake of confrontation, seemingly Capitol Hill's policy toward numerous nations, is counterproductive. Challenging the PRC will achieve little if Washington does not have clear and realistic objectives.

Yes, we need to pivot towards Asia, but that doesn't mean we should treat Beijing as an enemy.

China is an ancient civilization, spurred onward and upward today by lingering anger and resentment caused by centuries of oppression and humiliation. The formation of the PRC 70 years ago inaugurated a new era. Nevertheless, for the first three decades or so, China's potential was merely theoretical: Mao Zedong and the Chinese Communist Party (CCP) were perpetually at war with their own people. Once Mao passed from the scene in 1976, however, Beijing moved onto a path of growth. To the good, hundreds of millions of people escaped immiserating poverty. To the bad, a still-authoritarian regime gained strength and resources.
The PRC's sharpest critics have developed a steadily expanding number of grievances on myriad topics: trade practices, North Korea, religious liberty, domestic economic policy, Hong Kong, regional territorial disputes, mistreatment of the Uighurs, other human rights abuses, Chinese overseas investment, intellectual property theft, Taiwan, investment access, discrimination against foreign firms, cyber warfare, and more. The issues are serious and the list is daunting. Demanding satisfaction on all of them guarantees failure.

It is worth considering how Americans would respond to a China that made a similar set of demands, with threats of confrontation, retaliation, and even war. We would not be inclined to compromise, let alone surrender. Beijing's serious weaknesses, of which there are several, might even increase its intransigence. Washington must set priorities and consider carefully how pursuing too many competing objectives risks failing to achieve any of them.

Economics tops President Donald Trump’s list. Structural reform matters more than his misplaced quest to arbitrarily cut the trade deficit, essentially an accounting fiction. More serious are barriers to access by U.S. firms, discrimination against foreign enterprises, intellectual property theft, organized corporate espionage, and state-backed cyberattacks. Moreover, attempting to restructure Chinese economic policy, such as subsidies to inefficient but politically sensitive state enterprises, is far more problematic. Economic intervention is significant and varied even in the U.S. and Europe, despite their pretensions as market-oriented systems. Washington and assorted state capitals do much to underwrite their favored interests. The political trade-offs go to basic sovereignty. Compromise will be necessary to win concessions: Beijing is not simply going to accept American dictates.

Hong Kong has gone from a seeming success to a potential disaster in a few months.
Over the administration’s objections, Congress is pushing for sanctions. But good policy should seek to change behavior, not highlight outrage, and this requires an understanding of Beijing’s core interests. There is no circumstance under which the PRC will grant full democracy to the special administrative region. None. Nor is there any chance that China will offer Hong Kong independence. Demanding that it do so might generate sanctimonious feelings of self-satisfaction in some legislators, but little more. In contrast, there is at least some hope that Beijing might continue to respect Hong Kong’s relative autonomy and avoid a violent crackdown. Even in these cases, however, ostentatious threats are likely to prove counterproductive; no rising nationalistic power can afford to be seen as yielding to foreign dictates about its political system. A private yet firm approach by the U.S., backed by friendly Asian and European countries and emphasizing the risk to relations with the PRC, might offer the best hope of influencing Chinese policy.

Similar is the challenge posed by Taiwan. Its 24 million people have made a nation, but Beijing is no more inclined to accept separation than was Washington willing to tolerate the South’s secession in 1861. Threats of war lack credibility since, as a Chinese general once argued, the U.S. will not risk Los Angeles for Taipei. Nor should it. Nevertheless, one could imagine some discreet understandings that might help defuse tensions. For instance: the U.S. and Taiwan promising that no American military forces will be stationed on the island; China reducing missile forces directed at Taipei; Taiwan maintaining its ambiguous status rather than claiming independence; and the PRC accepting the land’s continuing autonomy rather than seeking to resolve its claim.

Both the U.S. and China desire the denuclearization of North Korea.
However, Beijing also seeks stability—hardly an unreasonable concern, given the potential for national implosion, resulting in loose nuclear weapons, factional war, and mass refugee flows. Imagine Washington’s reaction to a similar prospect in Mexico: today’s ongoing drug violence is bad enough. Moreover, the PRC has good reason not to simply hand over its one military ally to America, advancing Washington’s objective of containing Chinese power. Assurances of U.S. support in dealing with future developments on the peninsula, as well as a commitment to withdraw American forces in the event of reunification, might encourage greater cooperation by Beijing.

Human rights have also become a matter of great contention. Under President Xi Jinping, the PRC is slouching toward totalitarianism. Xi increasingly looks like a new Mao, dedicated to amassing untrammeled power for himself and the CCP. The abuses are legion and growing: a million Uighurs in reeducation camps, widespread religious persecution, ruthless crackdowns on any hint of dissent, restrictions on academic exchanges, tighter internet censorship, an intrusive “social credit” system, and much more.

Washington should certainly affirm that all governments must respect the lives, dignity, and liberty of their peoples. However, America’s ability to enforce this commitment is extraordinarily limited. The most vital interest of every authoritarian government is preservation of its rule, and when it comes to China, the U.S. has few tools to force change. Sanctions have become Washington’s all-purpose remedy, but the broader the objective—release all political prisoners, create a free press, hold democratic elections—the less credible the effort. Beijing might release a particular prisoner or change a specific process under pressure, but it will not forgo repression or abandon communism. That would require a much more far-reaching internal transformation, as occurred with the collapse of the Soviet Union.
Even more contentious are the multiple territorial disputes throughout the South China Sea and other Asia-Pacific waters. Washington has no claim of its own and no direct stake in the controversies. Nevertheless, the U.S. has a strong interest in the peaceful resolution of regional disputes and would prefer that its allies and friends control strategic waters rather than the PRC. Yet these concerns do not justify war with China, which could result since Beijing has a compelling interest in controlling its surrounding waters. Imagine America’s reaction if a hostile power’s navy routinely sailed down the East Coast, around Florida, and into the Gulf of Mexico. Washington’s strongest interest is in preserving the independence of friendly states—most notably Japan, the Philippines, Australia, and New Zealand—not in backing their claims to various uninhabited rocks littering contested waters.

It is essential that American policymakers recognize that Washington’s geopolitical dispute with the PRC is over influence, not survival. Yet the usual hawkish suspects rarely make that distinction. Although Chinese leaders might imagine global domination in the distant future, their nation’s current ambitions are much more circumscribed. And whatever their intentions, they almost certainly will lack the capability to challenge America in its neighborhood for a very long time, if ever. The tyranny of distance combined with the high cost of projecting power—which currently hampers U.S. operations in East Asia—would work against Beijing. Washington’s sophisticated nuclear deterrent will long offer the U.S. a final defense.

At issue is America’s continued dominance along China’s border. However desirable that may be in theory, it is very different than protecting America’s independence, territory, population, liberties, and constitutional system. Moreover, battling China in its neighborhood would be extraordinarily costly.
It is easier for China to sink a carrier than for America to build a new one. The U.S. should spend whatever is necessary to protect the homeland. But to maintain an ability to assault the Chinese mainland? That’s another story.

Finally, these many objectives are sometimes in tension, if not outright conflict. The greater the number of goals, the harder it is to achieve any one of them. Washington has variously threatened sanctions or war over human rights, geopolitical disputes, trade, governance of Hong Kong, Taiwan’s status, economic policy, and North Korean relations. Does anyone realistically imagine that China’s nationalistic leaders will agree to even some of these demands, let alone all of them?

In the coming years, the PRC will pose an ever-tougher challenge to America. Yet the threat is very different than that posed by the USSR during the Cold War. China is far more integrated into the global community. Shifting focus from the Middle East to Asia makes sense. However, though Beijing is a tough competitor, it is not yet an enemy, and Washington should not treat it as one. Neither country would win a war with the other, hot or cold.

Doug Bandow is a senior fellow at the Cato Institute and a former special assistant to President Ronald Reagan. He is the author of Foreign Follies: America’s New Global Empire.
  • FDA Should Make Anti-HIV Drug over the Counter    (Jeffrey A. Singer, 2019-11-14)
    Secretary of Health and Human Services Alex Azar announced last week a federal suit against Gilead Sciences, maker of pre-exposure prophylaxis against HIV, also known as PrEP, claiming the company is infringing on government patents while selling these prescription-only drugs at high prices. The case could devolve into a protracted legal battle hinging on arcane patent law and the validity of government patents. But both parties could avoid that if the Food and Drug Administration would simply make PrEP and post-exposure prophylaxis, also known as PEP, available over the counter.

The Centers for Disease Control and Prevention reports sexual activity is the predominant cause of HIV transmission in the U.S. Studies show PrEP, when taken daily, can reduce the risk of HIV transmission from sex by 99 percent and from needle sharing by 74 percent. PEP is also effective in preventing HIV transmission, but must be taken within 72 hours of exposure and continued for 28 days, and should be followed by repeated HIV testing from a health care provider.

Experts recommend that regular users of PrEP get semiannual blood tests to check their kidney function, because long-term use can cause renal impairment. But that doesn’t mean the drug shouldn’t be available over the counter. People taking nonsteroidal anti-inflammatory drugs long-term, such as ibuprofen, can harm their kidneys and should probably periodically check their kidney function. And people on long-term acetaminophen can harm their liver. These drugs are already available over the counter.

In October, recognizing the need to make these drugs more available, California Gov. Gavin Newsom signed into law a bill allowing pharmacists to prescribe PrEP and PEP.
While the FDA decides whether a drug is classified as prescription-only or over the counter, states get to determine the scope of practice of their licensed health care practitioners. State legislatures have increasingly expanded pharmacists’ scope of practice, allowing them to prescribe a prescription-only drug, as a means of working around federal prescription requirements in order to improve access to (and decrease the cost of) the medications and vaccinations their residents want and need.

Making PrEP and PEP over the counter will greatly increase access to these lifesaving products. At-risk individuals won’t have to spend the time and money going to a doctor’s office to get a prescription. Their access won’t be limited to retail pharmacies, because over-the-counter drugs can be obtained at thousands of other retail and convenience stores. They might even become available in vending machines, as is the case with Plan B, also known as “the morning after pill” for birth control.

Numerous studies show substantial price reductions when drugs move from prescription-only to over the counter. It is reasonable to expect the same to happen with PrEP and PEP.

In his last State of the Union Address, President Trump set the laudable goal of eliminating new transmissions of HIV by the year 2030. Increased access to PrEP and PEP is critical to reaching that goal. The U.S. Department of Health and Human Services oversees the Food and Drug Administration. If HHS Secretary Azar is serious about making HIV prophylaxis cheaper and more available, he doesn’t have to get bogged down in a multi-year lawsuit with a pharmaceutical manufacturer, with an uncertain outcome, while new cases of HIV mount up every day. He can press the FDA to make PrEP and PEP over the counter.

Jeffrey Singer, M.D., practices general surgery in Phoenix and is a senior fellow at the Cato Institute.
  • The Case for Free-Market Liberalism in Africa    (2019-11-13)
    Perceived as a by-product imported from the West, liberalism — often known in the English-speaking world as “classical” liberalism — has been rejected as an ideological model to define the political culture and systems of the African continent. It was rejected primarily because liberalism, like its related economic system, capitalism, is seen as the system of the oppressor: the European colonizer who seeks to maintain his domination over the African people by sustaining his order. Second, it was rejected because most African political leaders argued that the emphasis liberalism puts on the individual is not compatible with African political culture, in which the collective is valued over the individual.

As a result, socio-politically, the majority of African governments were autocratic and oppressive, and the freedom of the citizens in each African country was significantly restrained. Economically, the majority of African countries that embraced a more government-controlled economy impoverished their people and made their economies stagnant.

Liberalism was certainly born in Europe, and popularized in the seventeenth century through John Locke’s Second Treatise of Government; but it is far from being an inherently Western ideological by-product, because the countries that have embraced liberalism — to various degrees — have met with a commensurate amount of economic and political prosperity. One example of the success of liberalism outside of Europe and North America is Japan: politically, a Western-style rule of law has been integrated into its political system, while economically, the living standard of the Japanese people is significantly higher due to a relatively high degree of economic freedom. Yet Japan is neither a Western country nor a Western culture. If liberalism has been so beneficial to Japan, why can’t it work in Africa as well?
Private Property: The Secret to Economic Growth

Liberalism can have a serious positive impact in Africa if Africans embrace the concept of private property. As a matter of fact, economic freedom is the ability to retain private property. Private property is what determines the growth of capital. And that’s essential for a higher standard of living. The African continent is exceedingly rich in natural resources, yet the living standard of Africans is very low. The most plausible explanation for this discrepancy is the lack of a system that protects private property. Indeed, natural resources have no value unless human beings use their knowledge in combination with these resources to create value from them. A resource is not a resource unless it is used to generate value. And the best way to create value through the use of a scarce resource is to privately own that resource. The retention of private property generates capital, creates economic growth, stimulates economic incentives, and produces wealth.

For example, South Africa is today the most prosperous country in Africa because, compared to much of the continent, its people are economically and politically free. They have access to private property because the government plays a lesser role in economic activities. Rwanda is another great example of economic advancement in Africa. Rwanda retains an authoritarian political system, but it is relatively economically free. And economic freedom is, as Milton Friedman noted, an important first step toward increasing political freedom as well. Today, Rwanda is ranked as one of the best countries in Africa in which to do business. Yet it has been only 25 years since the nation was swept up in a bloody genocide. If Rwanda has become one of the most economically advanced countries in Africa, it is because the rate of retention in private ownership has considerably increased since the end of the genocide.
In fact, the rate of retention in private ownership increased from 10 percent in 1997 to 72 percent in 2019. This means that more people have had access to private property and were therefore able to create capital, and the creation of capital stimulated the Rwandan economy. The Republic of Kenya is another African nation with striking economic growth, posting more than 5 percent annual GDP growth in recent years. This would not be possible if Kenyans were denied the economic freedom necessary to acquire and build capital and wealth.

The Rule of Law: The Source of Political Stability

One of the most important conditions for an economic system that works is a reliable legal system that will protect economic freedom and civil liberties. This is a key component of liberalism. If private property is not legally secure from both neighbors and the political system, it is not really secure. This is often referred to as “the rule of law.” But the rule of law has often not prevailed in most African countries. In fact, most African countries, at the dawn of independence, established one-party states. These states could set rules and seize property arbitrarily, without regard to established law. The fundamental reason for establishing such a political system was to forestall any potential rebellion of the masses against the political authority. Consequently, the civil liberties of the African people were drastically restricted, and the rulers ruled their respective states autocratically, and sometimes oppressively. As a result, the rule of law was substantially undermined, and the political systems were unsurprisingly illiberal. For example, Zaire under Mobutu was ruled despotically through a one-party state that decided everything in civil policy.
The lack of liberalization of political institutions and the absence of the rule of law led to several coups d’état and political instability: notably the one in Liberia, with Samuel Doe overthrowing President William R. Tolbert in the early 1980s, and the one in Togo, with Gnassingbé Eyadema overthrowing Sylvanus Olympio in the mid-1960s.

The rule of law, however, is an essential factor that forms the foundation of economic prosperity and political stability. Without the rule of law, a society cannot adequately function economically or politically. Unfortunately, the African political leaders of the post-colonial era failed to develop the concept of the rule of law within African political culture and African political systems. A society cannot be economically advanced if it has no political stability, and political stability is rooted in the rule of law.

Liberalism, like capitalism, is not an imported Western by-product, as some Pan-Africanists would have it seem. It is a by-product of human nature. Liberalism can be implemented in African political culture if the African people are willing to accept it, not as a “Western by-product,” but as a social antidote that guarantees the improvement of their political and economic well-being.
  • Mises Institute in Orlando    (2019-11-13)
  • Why Friedman Is Wrong on the Business Cycle    (2019-11-13)
    According to an article in Bloomberg on November 5, 2019, Milton Friedman’s business cycle theory seems to be vindicated. According to Friedman, strong recoveries are just natural after particularly deep recessions. Like a guitar string, the harder the string is plucked down, the faster it should come back up. Bigger recessions should lead to faster growth rates during the recoveries, to get the economy back to the pre-recession level of activity. In Friedman’s model, the size of the recession predicts the growth rate in the recovery.1

The Bloomberg article refers to a study by Tara Sinclair that employs advanced mathematical techniques that supposedly confirmed Friedman’s hypothesis that in the US bigger recessions are followed by faster recoveries — but not the other way around. According to Bloomberg, some other researchers found similar results for other countries. On this way of thinking, the view presented by Ludwig von Mises, that the magnitude of an economic bust follows from the magnitude of the previous boom, is false. Contrary to Mises, a common view is that the bust is caused by various mysterious factors that have nothing to do with the previous boom. But the main problem with Friedman’s model is the lack of a coherent definition of what a boom-bust cycle really is.

Defining Boom-Bust Cycles

In a free, unhampered market, we could envisage that the economy would be subject to various shocks, but it is difficult to envisage a phenomenon of recurrent boom-bust cycles. According to Rothbard:

Before the Industrial Revolution in approximately the late 18th century, there were no regularly recurring booms and depressions. There would be a sudden economic crisis whenever some king made war or confiscated the property of his subjects; but there was no sign of the peculiarly modern phenomena of general and fairly regular swings in business fortunes, of expansions and contractions.
The boom-bust cycle phenomenon is somehow linked to the modern world. But what is the link? The source of the recurring boom-bust cycles turns out to be the alleged "protector" of the economy — the central bank itself.

A loose central bank monetary policy, which results in an expansion of money out of “thin air,” sets in motion an exchange of nothing for something, which amounts to a diversion of real wealth from wealth-generating activities to non-wealth-generating activities. In the process, this diversion weakens wealth generators, and this in turn weakens their ability to grow the overall pool of real wealth. The expansion in activities that are based on loose monetary policy is what an economic "boom" (or false economic prosperity) is all about. Note that once the central bank's pace of monetary expansion strengthens, the pace of the diversion of real wealth is also going to strengthen.

Once, however, the central bank tightens its monetary stance, this slows down the diversion of real wealth from wealth producers to non-wealth producers. Activities that sprang up on the back of the previous loose monetary policy now get less support from the money supply; they fall into trouble, and an economic bust or recession emerges. Contrary to Friedman, it is not possible to have an economic bust without a previous boom.

Again, the subject matter of boom-bust cycles is the various activities that emerged on the back of central bank policies. These activities, which we label bubble activities, are preceded by the monetary policies of the central bank, which in turn tend to manifest themselves in the yearly growth of the money supply and the height of interest rates. Loose monetary policy is likely to manifest in a strengthening of the annual growth of money supply and a decline in the policy interest rate.
A tighter monetary stance is likely to manifest in a decline in the annual growth of money supply and an increase in the policy interest rate. Again, a loose monetary stance and a subsequent increase in the momentum of money supply result in an increase in bubble activities, while a tighter stance leads to their demise. On this way of thinking, an increase in bubbles cannot emerge during a tighter monetary stance. On the contrary, a tighter stance will lead to the demise of the bubbles. Obviously, then, it does not make much sense to suggest, as Friedman’s business cycle model does, that an economic boom is caused by the previous bust.

How then are we to respond to the various sophisticated empirical studies that support Friedman’s theory? To suggest that the boom-bust cycle can be depicted by a guitar string — the harder the string is plucked down, the faster it should come back — is no different from a hypothetical case in which a recession is preceded by a dog barking four times and prosperity by a dog barking two times. The dog-barking example is as ridiculous an explanation of boom-bust cycles as the guitar string. To explain a phenomenon one needs to identify the key factors that are responsible for it. Obviously, a guitar string has nothing to do with the emergence and the demise of bubble activities. Hence, regardless of the mathematical sophistication, if the underlying logic of the analysis is flawed, then the outcome of the analysis must be rejected.

1. See Economic Inquiry, April 1993, 171-77.
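To make the disputed claim concrete, here is a minimal, hypothetical sketch in Python (not taken from the article or from Sinclair's study; all numbers are invented for illustration). It simulates Friedman's "plucking" idea: each recession pulls output some random distance below trend, and the recovery then closes a noisy fraction of that gap, so recession depth and recovery growth correlate strongly by construction. That is exactly the pattern the empirical studies report, which illustrates the author's point that such a correlation, by itself, says nothing about what caused the bust.

```python
import random

random.seed(1)

def simulate(n=2000):
    """Toy 'plucking' model: each cycle gets a random recession depth
    (how far output is plucked below trend), and first-year recovery
    growth closes a noisy fraction of that gap."""
    depths, recoveries = [], []
    for _ in range(n):
        depth = random.uniform(0.02, 0.10)   # recession: 2-10% below trend
        rebound = random.uniform(0.3, 0.7)   # fraction of the gap closed
        depths.append(depth)
        recoveries.append(rebound * depth)   # growth during the recovery
    return depths, recoveries

def corr(xs, ys):
    """Pearson correlation, computed from scratch to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

depths, recoveries = simulate()
print(corr(depths, recoveries))  # strongly positive, by construction
```

Deep plucks mechanically produce fast rebounds here, so a finding that "bigger recessions are followed by faster recoveries" cannot by itself distinguish this mechanism from the Austrian boom-bust account the article defends.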
  • Transatlantic Alliance Mistake: Turkey Isn't Worthy of NATO Membership    (Doug Bandow, 2019-11-13)
    It is hard to imagine a less appropriate visitor or time. Turkey’s President Recep Tayyip Erdogan is journeying to Washington. He has guided his nation, a one-time valued ally, far from America’s principles and practices. President Donald Trump’s view of Erdogan as a “friend” makes Ankara’s drift more dangerous.

Turkey would not be invited to join the transatlantic alliance today. It has abandoned even the pretense of liberal democracy. Once viewed as a responsible Islamic model, the Turkish republic is turning into a soft dictatorship. Ankara always was the odd man out in NATO: poor, Islamic, and at best quasi-free and -democratic. However, during the Cold War the United States was willing to overlook Turkey's limitations and failings to bolster Western Europe's southeast flank. That nation also offered a convenient outpost in the Middle East. The hyper-nationalistic population proved hostile to Washington, but the military, which wielded a not-so-subtle veto over the country's politicians, ensured that policy remained on Washington's course.

However, the secular nationalist doctrine known as Kemalism, named after the country's founder, gradually broke down in the face of increasingly well-organized Muslims determined to live out their faith more publicly. As the economy stalled and exhausted establishment parties imploded, the Justice and Development Party, or AKP, rose along with its founder, former Istanbul Mayor Erdogan. The AKP won a dramatic national victory in 2002 and has ruled ever since.

For a time, Prime Minister Erdogan followed a seemingly liberal agenda: he kept the military in its barracks, aimed for entry into the European Union, and ended many nationalist strictures. Even liberals—in the broadest sense—and feminists lauded progress under his leadership. However, as the 2000s ended a new Erdogan emerged.
Years before, he reportedly said that democracy was like a streetcar: you get off when you arrive at your destination. Once he secured power and neutered the military, he turned authoritarian. That in part reflected his fear of rising evidence of corruption: once on the outs, AKP activists now had an opportunity to reward themselves for their hard work and good fortune. Tax authorities were turned against opposition businessmen. Critical media were targeted and seized. Perceived opponents, especially followers of the Islamic cleric Fethullah Gulen, were identified and purged from government posts. Some critics were swallowed by vast conspiracy prosecutions directed at those supposedly plotting to overthrow Erdogan.

As time progressed the modern sultan's ruthlessness, paranoia, and grandiosity grew. The failed 2016 coup acted like the Reichstag fire did for Adolf Hitler, providing an excuse and opportunity to crush his opponents. Tens of thousands have been arrested, imprisoned, fired, and/or persecuted. The press has mostly been stolen, cowed, coopted, or bought. Earlier this year, for the first time, Erdogan refused to accept a valid election result, the AKP loss of the mayoralty in Istanbul, Turkey's largest city. He forced a rerun, which the AKP lost by a huge margin. However, while thought to be on the electoral ropes, his recent invasion of Syria gained plaudits from across the political spectrum.

If only Turks were suffering under Erdogan’s misrule, Washington might look the other way. However, Ankara’s fantastic dragnet has caught Americans, too. For instance, Serkan Golge, a Turkish-American and NASA scientist, was arrested and sentenced to 7.5 years in jail for possessing a $1 bill, supposedly the sign used by Gulen’s followers. Thankfully, he was unexpectedly released in May, after three years of imprisonment. Others still languish in custody. At least one other American, evangelical minister Andrew Brunson, was detained as a hostage.
Erdogan was shockingly blunt, saying: “They say, ‘give us the pastor.’ You have another pastor in your hands. Give us that pastor and we will do what we can in the judiciary to give you this one.” Last year, Ankara released him after two years in prison under strong U.S. pressure. Still, U.S. officials never let the brutality of the House of Saud or numerous other strongmen get in the way of a practical if not beautiful friendship. Saudi Arabia and Egypt both have arrested numerous dual citizens, yet those relationships remain largely unimpaired.

However, Ankara's orientation has changed dramatically. Erdogan has done more than remove state debilities on religious practice. He has begun to press more fundamentalist Islamic values on those with a more liberal disposition. That does not make Turkey the same as Iran, but Erdogan's conduct has created fears that Turkey could become Iran-lite.

Worse, at least from a geopolitical perspective, Turkey has arrayed itself against several important American objectives. For instance, in the early stages of the Syrian civil war Ankara allowed ISIS fighters transit across Turkish territory. Erdogan's son appeared to traffic in Islamic State oil. Instead of combating Islamic fundamentalism, the regime dallied with the ideology for a profit. Even today, the regime is more determined to crush Syrian Kurdish forces than Islamic State fighters. Turkey twice invaded northern Syria, conquering land occupied by Kurdish forces allied with Washington. The result has been to seriously disrupt U.S. plans which already were disastrously unrealistic. Turkey and its Syrian insurgent allies engaged in ethnic cleansing. Washington removed its forces under pressure. Kurds reengaged with the Syrian government. Moscow negotiated with Ankara. The Turkish assault caused the president to threaten to wreck Turkey's economy; members of Congress are demanding sanctions against Ankara.
There is little reason to expect Turkey to become more cooperative. Moreover, after courting war by shooting down a Russian warplane, Turkey moved sharply toward Russia. Ankara cooperated with Moscow in Syria and purchased Russian S-400 missiles despite sharp opposition from Washington. Here, too, Congress pressed a reluctant administration to impose economic sanctions, as required by statute, on Turkey, a NATO ally. Although the administration refused, the bilateral relationship is in tatters. Other than President Trump, a passel of Turkish lobbyists, and Turkish-Americans loyal to a distinguished heritage, Ankara is left with few friends in America. The problem is not just with America. Although Turkey and Russia still disagree over important issues, no European NATO member can feel certain that Ankara would be on its side in the event of conflict—the most important contingency for which the alliance was formed. Yet the Europeans must tread carefully, given Turkey's control over potential mass migration into the continent. The deterioration in the relationship was captured by Erdogan's egregious visit in 2017, when his security detail physically attacked peaceful protestors on U.S. soil. Erdogan treats Americans with the same contempt and brutality with which he treats Turkish citizens in his homeland. Trump should not have invited Erdogan to America. More substantively, Washington should respond in kind to Turkish hostility. If Ankara activates the S-400 missiles purchased from Russia, then the United States should embargo the F-35s originally slated for Ankara. Imposing sanctions seems overkill; refusing to trust the Turkish military is the right remedy. Washington also should remove its estimated fifty nuclear weapons from Incirlik Air Base. Although they are theoretically secure, Erdogan could take nuclear hostages to coerce the United States. 
Indeed, given his express desire to make Turkey into a nuclear power, he might decide that grabbing America's tactical nukes would offer a short-cut. If the U.S. cannot trust Turkey with F-35s, Washington cannot trust Ankara with possession of nuclear weapons, however restricted. Certainly, the United States should drop Turkey from any military planning for the Middle East or elsewhere. Washington still should attempt to cooperate with Ankara when possible. However, Erdogan cannot be relied upon irrespective of the particular issue. The right objective is damage limitation. More controversially, the United States and other NATO members should create procedures allowing for the suspension and ouster of errant members—and apply the measures to Turkey. Given Turkey's proximity, the Europeans are likely to be cautious in supporting expulsion. However, suspension, at least, seems in order, to limit Ankara's access to sensitive intelligence and its ability to do organizational mischief. Suspension also would leave open the possibility of restored membership under an eventual new government with a different perspective. Trump should give his Turkish counterpart a call. There's no need for Erdogan to come to America. And certainly not to bring along his brutish bodyguards. Whatever needs to be communicated could be done by phone. The president and Erdogan might be friends, but Washington and Ankara are not. Doug Bandow is a senior fellow at the Cato Institute and a former Special Assistant to President Ronald Reagan. He is the author of Foreign Follies: America's New Global Empire.
  • Who Gets Buried at the Kremlin? Time for a Post-Revolutionary Purge    (Doug Bandow, 2019-11-13)
    Doug Bandow On November 1, 1961, Lenin's tomb disgorged Joseph Vissarionovich Stalin's embalmed remains. After his death in March 1953, Stalin's body was displayed next to that of Bolshevik founder Vladimir Ilych Lenin. , Stalin's death, perhaps a murder orchestrated by secret police head Lavrentiy Beria, ended a reign marked by promiscuous and arbitrary mass murder. Stalin's chief lieutenants, all implicated in his manifold crimes, sang his praises post-mortem. Many Soviet citizens, on the receiving end of decades of propaganda as part of an all-encompassing personality cult, were genuinely disconsolate, even hysterical. Five days after his death from a cerebral hemorrhage or poison, his coffin was carried into the small building adjacent to the Kremlin holding Lenin's carefully preserved body. Speaking on the occasion were Beria, who carried out Stalin's executions; Vyacheslav Molotov, the foreign minister who negotiated the infamous Hitler–Stalin pact; and Georgy Malenkov, Stalin's henchman, who initially succeeded the dead Red Czar. In November, after careful preparation of his body, Stalin was placed next to Lenin. , It is far past time to rebury the bodies and ashes elsewhere, starting with Lenin and Stalin. , It was a singular honor for a man who competes with Mao Zedong for the title of bloodiest dictator in human history. (As architect of the Holocaust and initiator of World War II, Adolf Hitler stands alone, but the other two directly killed more people, especially their own.) To his credit, Nikita Khrushchev, almost a liberal at that time within the Soviet Union, began a tortured process known as “destalinization.” In February 1956, he made the famous “Secret Speech,” entitled “On the Personality Cult and Its Consequences,” to the 20th Communist Party Congress, which denounced Stalin’s crimes. The text soon circulated, causing an uproar among party faithful worldwide while creating hope for an easing of the Cold War. 
Left unexplored was the responsibility of those, like Khrushchev, who had served the infamous “Man of Steel,” the meaning of the surname adopted by Stalin as a revolutionary in 1912. (His birth name was Dzhugashvili.) Stalin mixed guile and finesse with brutality and treachery as he gradually eliminated all his rivals after Lenin’s death in 1924. Every step of the way he was aided by the ambitious, foolish, and fearful. He purged other rivals, decimated the ranks of party members, slaughtered Red Army officers, eliminated foreign communist leaders, starved to death millions of Ukrainians, arrested and murdered the unfortunate and unlucky alongside critics and class enemies, kidnapped and eliminated foreign opponents such as the anti-communist “White Russians” and members of Poland’s wartime government-in-exile, allied with Nazi Germany, imprisoned and murdered returning Russian POWs, exterminated dissent, undertook mass deportations from suspect populations, imposed the infamous “Iron Curtain” across Europe, and constructed the vast gulag system dramatically described in Alexander Solzhenitsyn’s shocking The Gulag Archipelago. No one man, no matter how competent, determined, and powerful, can wreak such havoc alone. With the support of military chief Georgy Zhukov, Khrushchev defeated his rivals to lead the USSR. The dangerous, blood-soaked Beria was eliminated. But not Malenkov, Molotov, and the others. After failing to oust Khrushchev, they expected a trip to the gulag, if not death, but he merely “retired” them, a dramatic change from the era of deadly choreographed show trials initiated by Stalin. Domestic repression eased. One result was the publication of Solzhenitsyn’s One Day in the Life of Ivan Denisovich. Still, Khrushchev could be ruthless: a few months after the Secret Speech he OKed using the Red Army to crush the Hungarian Revolution. 
Khrushchev steadily eliminated Stalin’s name, which had been strewn promiscuously about on buildings, landmarks, and even cities. The Stalinist era officially ended on November 1, 1961. At the 22nd Party Congress that October, Khrushchev announced the reburial of Stalin, then still on public display. In a choreographed speech, an 80-year-old Bolshevik and famed disciple of Lenin, Dora Abramovna Lazurkina, opined that “Comrades, I could survive the most difficult moments only because I carried Lenin in my heart, and always consulted him on what to do. Yesterday I consulted him. He was standing there before me as if he were alive, and he said: ‘It is unpleasant to be next to Stalin, who did so much harm to the party.’ ” Yes, he had damaged the party. And a lot more, like carrying out mass murder and imprisonment. But never mind; the symbolism remained dramatic. Khrushchev followed with a decree ordering the removal of Stalin’s corpse, which was reburied in front of the Kremlin wall, with a modest granite marker that announced “J. V. Stalin 1879–1953.” (A small bust was added in 1970.) Khrushchev explained, “The further retention there of the sarcophagus with the bier of Stalin shall be recognized as inappropriate, due to the serious violations by Stalin of Lenin’s precepts, abuse of power, [and] mass repressions against honorable Soviet people.” Stalin’s new resting place was encased in cement to hinder renewed veneration. A few years later, Khrushchev was ousted by his protégé, Leonid Brezhnev, who proved to be the perfect apparatchik: decrepit, cautious, unimaginative, dull, and stultifying. Soviet society again closed as dissent was forbidden. Dissidents were punished. The Czech Prague Spring was suppressed. Solzhenitsyn was expelled from the USSR. But Brezhnev was no Stalin. The Red Army did not roll westward, and détente eased superpower tensions. Nor was there any revival of domestic Stalinism. 
The new Communist Party leader did not have the backbone of a mass murderer. Khrushchev lived out his life in comfortable retirement. By the 1980s, the system broke down. When Mikhail Gorbachev took over in 1985 after Brezhnev’s two enfeebled, short-lived successors, the new leader’s efforts to reform the system only sped its demise. Thirty years ago the Berlin Wall fell. On Christmas Day 1991, the Soviet flag was lowered from the Kremlin for the last time. The Evil Empire, as President Ronald Reagan termed it, dissolved. Yet Stalin remains buried in a place of honor. And not just him. Many communist unworthies remain alongside him in the Kremlin Wall Necropolis, a protected landmark since 1974. Lenin’s tomb — the first one was constructed of wood in 1924, followed by today’s granite structure, which was completed in 1930 — is the most dramatic burial place in Red Square. (The Bolshevik leader said he wanted to be buried next to his mother, and his widow, Nadezhda Krupskaya, opposed preservation of his body, but personal preferences did not matter in the new workers’ paradise.) He was not the first to be interred there, however. In November 1917, the Bolsheviks buried 240 dead from the October Revolution in Red Square. Notable people also were buried there: revolutionary leader Yakov Sverdlov and American journalist John Reed, for instance. The practice of mass burials continued for only a few years, though rank retained its privileges in the classless society: Sverdlov and two other Communist Party leaders, Bolshevik leader and Red Army commander Mikhail Frunze and Secret Police Chief Felix Dzerzhinsky, a Pole by birth, were buried individually in front of the wall before the practice was stopped in 1926. Next came the interment of ashes in the Kremlin wall. Since the Orthodox Church forbade cremation, the practice was viewed as an affirmation of atheism. Occasional burials resumed in 1946, however, starting with Mikhail Kalinin, who had served as head of state. 
Most interments continued in the wall; occasionally cosmonauts and generals muscled their way into the atheistically sacred ground. The last such honored burial occurred in 1984, that of Defense Minister Dmitry Ustinov. He demonstrates the fickleness of fame: a bureaucrat in a collapsing system whose 15 minutes of fame are long past, he continues to occupy valuable real estate in one of the world’s most recognizable and symbolic locations. While there is an understandable reluctance to disturb the dead, Russians today have good reason to initiate a process of Red Square “cleansing.” It would make sense to begin with Stalin, who deserves the treatment accorded the body of Oliver Cromwell after the restoration of the British monarchy. Indeed, there has been some discussion about sending his body back to the now independent country of Georgia, where he was born — and almost half the population expresses its admiration for him — but so far he remains unmoved. The residents of the other 11 individual graves also deserve disinterment. They were discreditable brutes, though some were more monstrous than others. In addition to Stalin, they include Sverdlov, Frunze, Dzerzhinsky, and Kalinin, mentioned earlier. Then came Andrei Zhdanov, the chief propagandist who once was viewed as a potential successor to Stalin. Next came Stalin’s own reburial, followed by Kliment Voroshilov and Semyon Budyonny, both Bolshevik politicians and military commanders. Last to be buried were Leonid Brezhnev, Mikhail Suslov, Yuri Andropov, and Konstantin Chernenko; the first, third, and fourth were party general secretaries, while the second was the party’s unofficial chief ideologist. The wall, too, should be cleared. Doing so should be easy, since removing urns filled with ash is a relatively uncomplicated process. Among those who should be thus disposed of are Leonid Krasin, Sergei Kirov, Vyacheslav Menzhinsky, Sergey Kamenev, Alexei Kosygin, and Ustinov. 
Kirov, the influential party leader of Leningrad, was notable as a friend of Stalin whose assassination, some believe on the orders of a paranoid Stalin, was the excuse for the great purge. Menzhinsky chaired the secret police. Kosygin originally shared leadership with Brezhnev before losing the ensuing power struggle. As noted earlier, a few non-politicians (though none who are nonpolitical, since nothing in the USSR was nonpolitical) reside in the Kremlin’s walls, such as Georgy Zhukov, the great World War II army commander; Anatoly Serov, a noted fighter pilot; and Maxim Gorky, writer and Nobel Prize in Literature nominee. The U.S. remembers and commemorates plenty of discreditable and unattractive figures who are historically significant. But few played so malign a role as those who served the Soviet Union as it crushed its own people and spread murderous repression abroad. Of course, the ultimate target should be Lenin. After communism collapsed, there was serious discussion about closing the tomb. The first democratically elected president of Russia, Boris Yeltsin, removed the honor guard from the mausoleum and urged Lenin’s burial. Although a majority of the public appears to back holding a funeral for the top Bolshevik, nationalist impulses have blocked the move. Proposals to close the entire commie necropolis have met with less favor. Communism no longer governs Russia, authoritarian though today’s ruling regime is. But the ideology lives on, symbolically, at least, as a gaggle of misanthropes, creeps, thugs, and murderers remain interred in historic Red Square and the Kremlin’s walls. It is far past time to rebury the bodies and ashes elsewhere. Then the space created could commemorate the multitude of victims and legion of true Russian heroes. Surely there is a long list of people far more worthy to reside in death in the Kremlin Wall Necropolis than the likes of Ustinov, Andropov, Brezhnev, Dzerzhinsky, Stalin, and Lenin. 
Doug Bandow is a Senior Fellow at the Cato Institute and former Special Assistant to President Ronald Reagan. He is the author of Foreign Follies: America's New Global Empire.
  • U.S. Military Assistance Cannot Fix Mexico's Cartel Mayhem    (Ted Galen Carpenter, 2019-11-13)
    Ted Galen Carpenter President Donald Trump’s response to the massacre of an American ex-pat family by drug cartel gunmen in northwest Mexico was both emotional and suggestive of a policy response that could have far-reaching implications for both Mexico and the United States. Trump reacted to the incident with a tweet that stated “this is the time for Mexico, with the help of the United States, to wage WAR (sic) on the drug cartels and wipe them off the face of the earth. We merely await a call from your great new president!” He added: "If Mexico needs or requests help in cleaning out these monsters, the United States stands ready, willing & able to get involved and do the job quickly and effectively." , It was not exactly clear as to what Trump had in mind regarding the nature of such “help.” Perhaps it was merely an offer for enhanced sharing of information about the cartels from the FBI, the Drug Enforcement Administration (DEA) and other U.S. law enforcement agencies. Such an order from the president would be merely a modest increase in the assistance that those agencies already provide to Mexico and other drug-source countries. Also, it is possible that Trump was offering to use the CIA and other U.S. intelligence agencies to help the Mexican government track and disrupt the drug cartels. Even that move would not constitute a dramatic increase in Washington’s participation in Mexico’s longstanding war on drugs. There is another possibility, though, that cannot be ruled out. Does the Trump administration now contemplate direct U.S. military participation in the worsening conflict between the Mexican government and several major drug cartels? Such a role could take two forms. One initiative would entail drone strikes and other applications of airpower against targets in areas of Mexico under the de facto control of a cartel because government security forces are ineffective or have withdrawn entirely. 
The other possibility is that Washington would deploy Special Forces personnel on the ground to attack armed cartel units and help the Mexican government regain control over areas in which the drug gangs have run amok. Either move would be fraught with multiple negative consequences. , Drone strikes or U.S. special forces missions will not solve Mexico's cartel and drug problems. Trump needs to resist the temptation to adopt a futile, counterproductive military option in the country. , Some of Trump’s closest supporters in Congress certainly are lobbying for a tougher policy, including a military component, against the cartels. In two interviews on Fox News, Sen. Tom Cotton (R-AR) warned ominously: “If the Mexican government cannot protect American citizens in Mexico, then the United States may have to take matters into our own hands.” Cotton emphasized that “our special operations forces were able to take down [ISIS leader Abu Bakr] al-Baghdadi in Syria a couple weeks ago,” and they did the same “to Osama bin Laden in Pakistan eight years ago.” He added, “I have every confidence that if the president directed them to do so, that they could impose a world of hurt on these cartels.” Clearly, he had something more in mind than increased sharing of intelligence information with the Mexican government. Indeed, he scorned President Andres Manuel Lopez Obrador’s policy of “hugs, not bullets,” and countered that the only effective way to fight the cartels was with “more bullets and bigger bullets.” A mid-October incident in Mexico had already alarmed Cotton and other national security hardliners. Armed enforcers of the powerful Sinaloa drug cartel battled units of Mexico’s National Guard on the streets of Culiacan, a city of eight hundred thousand people, for more than eight hours to free two sons of former drug lord Joaquin “El Chapo” Guzman. In a stunning development, they defeated the government troops. 
Writing in the Federalist, conservative analyst John Daniel Davidson described the horrifying scene. “Armed with military-grade weapons and driving custom-built armored vehicles, cartel henchmen targeted security forces throughout Culiacan, launching more than one dozen separate attacks on Mexican security forces.” The scene, Davidson said, “could have been mistaken for Syria or Yemen. Footage posted on social media showed burning vehicles spewing black smoke, heavily armed gunmen blocking roads, dead bodies strewn across the streets, and residents fleeing for cover amid high-caliber gunfire.” The battle ended only when trapped government forces received an order directly from Obrador to cease fighting.  Trump’s offer of assistance to Obrador is not the first time that he has hinted about possibly involving the U.S. military in Mexico’s drug war. Just weeks after entering the White House, Trump adopted a similar stance in a session with then-president Enrique Pena Nieto—and did so in even less cordial terms. “We are willing to help you,” Trump stated. “But they [the cartels] have to be knocked out, and you have not done a good job of knocking them out.” Trump affirmed that he knew “how tough these guys are—[but] our military will knock them out like you never thought of.” The U.S. president assured Pena Nieto that he preferred to assist the Mexican military rather than take direct action, but the implied, menacing alternative was apparent. Obrador thanked Trump for the more recent offer of military assistance, but he quickly rejected it. Indeed, Obrador’s administration seems committed to addressing the underlying causes of drug violence rather than trying to solve it through military force, as his predecessors attempted. Trump’s “offer” of help is a direct challenge to that new approach and risks causing serious tensions in bilateral relations. 
Even worse, if the United States pursued airstrikes or other military options in the name of “national security” over the Mexican government’s objections, that action would create an alarming crisis. Mexican officials and the Mexican people have long memories, and they recall clearly the numerous bullying episodes by the “Colossus of the North” in both the nineteenth and twentieth centuries. Even a limited U.S. military involvement in Mexico would revive those memories and feelings of resentment. In addition to the danger of alienating Mexico’s government and population, using U.S. military power against the drug cartels could well lead to another unwinnable, seemingly endless war for the United States. We don’t need another quixotic adventure to go along with those in Afghanistan, Iraq, Syria and Yemen. The cartels are powerful because there is a sizable consumer market for drugs in the United States and other countries. The prohibition policy to which Washington and its allies stubbornly cling drives up prices, thereby enriching and empowering the organizations that control such a lucrative commerce. Drone strikes or U.S. special forces missions will not solve that dilemma. Trump needs to resist the temptation to adopt a futile, counterproductive military option in Mexico. Ted Galen Carpenter, a senior fellow in security studies at the Cato Institute and a contributing editor at the National Interest, is the author of twelve books and more than 850 articles on international affairs. His books include Bad Neighbor Policy: Washington’s Futile War on Drugs in Latin America (2003) and The Fire Next Door: Mexico’s Drug Violence and the Danger to America (2012).
  • After Years of Decline, Competition in Banking Finally Grows Again    (2019-11-13)
    US banks are seeing a growing number of new entrants into the industry. Chime, a mobile-only bank, has opened two million online checking accounts and is adding more customers each month than Wells Fargo or Citibank. Firms from outside traditional consumer banking, including Square, Goldman Sachs (Marcus), and Robinhood, are entering the industry as well. The consulting firm CG42 said in a recent report on the vulnerability of retail banking that it expects the ten largest banks to lose $344 billion in deposits over the next year. Applications for and approvals of FDIC deposit insurance are at a recent high, with fifteen approvals in 2018 and eight so far this year, as shown in the chart below. Despite a two-decade decline in the number of banks, the floodgates seem to be open for a new wave of digital-first banks to pursue new licenses. While the new banks may not outnumber the 1,500 banks that have closed since 2009, their appeal to a new wave of consumers represents a substantial threat to the 0.2% of megabanks that hold more than two-thirds of industry assets.
Source: FDIC
Paving the Way for New Banks
As the demographics of the United States shift younger, customers have started to move away from reliance on traditional brick-and-mortar branches and instead prefer app-based services with lower fees. This has led to an erosion of entry and exit costs, as the need to build branches and pay tellers is eliminated. Additionally, venture capitalists are setting records with funding of “neo-banks,” investing $2.5 billion through the second quarter of 2019 — for reference, the previous high was only $2.3 billion in all of 2018. Both the shift in preferences away from physical branches and the availability of funding have paved the way for new banks to enter the market. 
A 2004 paper by Carol Ann Northcott, Competition in Banking, lists the ways that banks differ, including reputation, product offerings, and the extensiveness and location of their branch networks. Scandals such as Wells Fargo’s “Eight is Great” phony-account scandal and the financial crisis of 2008 have tarnished the reputation of big banks and, along with the disappearing significance of branches, removed the mystique of the incumbent banks.
Disappearing Tax Advantage for Incumbent Banks
Under Section 172 of the Internal Revenue Code, corporations are able to carry forward Net Operating Losses (NOLs) indefinitely, minimizing tax liability. In the chart below, using data from the World Bank, before-tax (red line) and after-tax (blue dots) return on assets have been diverging, and the tax burden (purple, calculated as the difference between before-tax and after-tax ROA) has been growing since the Great Recession of 2008. Furthermore, in 2017 the Tax Cuts and Jobs Act (TCJA) put additional limitations on NOL deductions, serving to increase the tax liability of banks. With NOL tax exemptions rolling off for incumbent banks, their cost advantage is reduced, putting them on a more even playing field with new entrants. While the rules surrounding NOLs exist to provide protection from “excessive hardships from tax based upon an arbitrary annual accounting,” they can artificially prop up inefficient firms and stifle competition. As banks become profitable again following the Great Recession and lose their tax advantage over new banks, the opportunity for new entrants grows.
Source: World Bank
Implications for Consumers
Banks contribute greatly to growth by facilitating production in other industries and promoting capital accumulation through the supply of credit. As competition for customers grows, banks often are forced to lower their profit margins or lose market share to rival banks. 
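To make the NOL mechanics concrete, here is a minimal arithmetic sketch. It illustrates the TCJA's 80% taxable-income limitation on NOL deductions at the current 21% corporate rate; the bank's income and loss figures are invented for illustration, and real-world NOL rules (carryback, pre-2018 loss vintages, state taxes) are ignored.

```python
# Hypothetical illustration: how capping the net operating loss (NOL)
# deduction at 80% of taxable income raises a recovering bank's tax bill.
# All dollar figures are invented.

def tax_due(pre_tax_income, nol_carryforward, rate=0.21, nol_cap=None):
    """Tax owed after applying an NOL carryforward deduction.

    nol_cap: fraction of taxable income the NOL may offset
             (None = uncapped, as for pre-TCJA-era carryforwards).
    """
    if nol_cap is None:
        deduction = min(nol_carryforward, pre_tax_income)
    else:
        deduction = min(nol_carryforward, nol_cap * pre_tax_income)
    return rate * (pre_tax_income - deduction)

income, nol = 100.0, 500.0  # bank earns 100 while carrying 500 of crisis-era losses

uncapped = tax_due(income, nol)               # losses fully shield income
capped = tax_due(income, nol, nol_cap=0.8)    # only 80% of income may be offset

print(uncapped)  # 0.0 -> no tax while large losses remain
print(capped)    # 4.2 -> 21% tax on the 20 that cannot be offset
```

The point of the sketch is the transition the article describes: under the uncapped regime a bank with large carryforwards pays nothing until the losses are exhausted, while the 80% cap forces some tax in every profitable year, shrinking the incumbent's cost advantage over a new entrant with no losses to deduct.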
In this situation, a larger quantity of credit will be supplied at a lower price. A paper by Besanko and Thakor examines loan and deposit markets and finds that loan rates decrease and deposit rates increase as more banks are added to the market. These findings support the theoretical prediction that a more competitive environment results in a larger quantity of credit supplied at a lower price. As more banks enter the market and capture market share, incumbent firms lose their market power. Shining a light on a potential pitfall of a more competitive market, Northcott finds that a banking system that exhibits some degree of market power may improve credit availability for certain firms and provide incentives for banks to screen loans, which aids the efficient allocation of resources. In addition, she finds that market power may contribute to stability by providing incentives that mitigate risk-taking behavior and encourage banks to screen and monitor loans. Guzman also finds that the problems of the loan-borrower relationship may be exacerbated by more competitive market structures when information is not costlessly obtainable by the bank. Northcott finds no consensus in the literature on the optimal competitive structure, but it is clear that, for the moment, entry costs for retail banks are falling and competition is growing in the digital age.
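The comparative-static claim — more banks, lower loan rates — can be illustrated with a stylized textbook Cournot model of the loan market. This is not Besanko and Thakor's actual model, and the demand and cost parameters are invented: with inverse loan demand r = a − b·Q and N symmetric banks facing funding cost c, the equilibrium loan rate is r* = (a + N·c)/(N + 1), which falls toward c as N grows.

```python
# Stylized Cournot loan market: N symmetric banks, inverse loan demand
# r = a - b*Q, constant marginal funding cost c. The Cournot-Nash loan
# rate is r* = (a + N*c) / (N + 1), declining toward c as N grows.
# Parameters a (demand intercept) and c (funding cost) are invented.

def equilibrium_loan_rate(n_banks, a=0.12, c=0.03):
    """Cournot-Nash loan rate with n_banks symmetric lenders."""
    return (a + n_banks * c) / (n_banks + 1)

for n in (1, 2, 5, 20):
    print(n, round(equilibrium_loan_rate(n), 4))
# The rate falls from 7.5% under monopoly toward the 3% funding cost
# as entry increases competition.
```

Since demand slopes down, the falling rate implies a larger equilibrium quantity of credit, matching the prediction in the text; the richer results on screening incentives and stability require the asymmetric-information machinery of the cited papers, which this sketch deliberately omits.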
  • A Deeply Flawed History of the Austrian School    (2019-11-12)
        The Marginal Revolutionaries: How Austrian Economics Fought the War of Ideas
by Janek Wasserman
Yale University Press, 2019
xiii + 354 pages

Janek Wasserman, who teaches history at the University of Alabama, has written a useful but deeply flawed book. Useful, because Wasserman has brought to light substantial archival material on the background of the Austrian school, but deeply flawed on two counts. First, Wasserman is beyond his depth when he writes about theoretical issues. In particular, he does not understand Mises, but his lack of knowledge is apparent elsewhere as well. Second, he obtrudes his political opinions on readers in a way that must generate skepticism about his presentation of his archival research. Wasserman distinguishes a number of stages in the history of the Austrian school. I do not propose to discuss these in detail but will mention only a few highlights. In general, Wasserman stresses the networks among the leading Austrians. They all knew each other and, though often at odds, they tended to support one another in times of crisis. Further, the cultural ferment of Vienna affected them: “The exchange of ironical barbs and clever repartee reflected the mode of the Austrian School specifically and modernist Vienna in general. The famed literary critic and cultural icon Karl Kraus best embodied this spirit. … Good polemics demanded satire and unfairness. It also was not enough to win one’s dispute with intellectual foes: one had to best adversaries in style. Schumpeter and Böhm[-Bawerk] excelled in these arts and used the tools of the Gymnasium and coffeehouse to great effect.” Schumpeter and Mises are often, and correctly, viewed as rivals who had little use for each other, but one of Wasserman’s most valuable insights is that they sometimes worked together. 
“Schumpeter encouraged Mises to speak out on Austrian monetary problems in the Austrian Political Society, where the two made common cause against the wartime government.” Wasserman rightly notes that, despite his deviations from classic Austrian theory, Schumpeter’s Capitalism, Socialism, and Democracy is best read as a defense of capitalism: “While capitalism in its current, desiccated form seemed destined for collapse, this need not transpire. Deploying a satirist’s wit and an ironist’s pen, Capitalism revealed that Schumpeter believed just the opposite. Capitalism may sow seeds of its own destruction, but it still constituted the surest guarantee of prosperity and democracy. … Schumpeter also leveled a hearty criticism against his economist colleagues, whose static models of perfect competition and complete information, of partial and general equilibria, possessed little explanatory power for a dynamic world. … Capitalism, Socialism, and Democracy is one of the greatest and subtlest apologia for capitalism and elitist liberalism ever written.” If Wasserman deserves praise for his treatment of Schumpeter, unfortunately the same is not true of his account of Mises. He adopts uncritically the perspective of Hayek, who varied in his estimation of Mises, and Gottfried Haberler about Nationalökonomie: “Hayek conceded that the book showed a glaring ignorance of recent developments. … Hayek’s critique followed the lead of Haberler, who had argued for years that Mises was no longer a significant economist and that his work offered no insights for anyone who had learned economics since the Great War: ‘If one had studied the classics and Marshall in 1912, then one would have learned nothing from Mises.’” Had Wasserman consulted the book itself, he would have found that it includes references to Haberler’s then-contemporary work on international trade theory and to Hayek’s work, also then recent, on the business cycle and the socialist calculation argument. 
Matters become even clearer if one examines Human Action, the English expansion and revision of the German treatise. In it, Mises responds to Haberler’s criticism of Austrian business cycle theory and dissents from Hayek on the Ricardo effect. Even more important, though, are Wasserman’s mistakes about praxeology. He says, “Mises’s most controversial assertion was his insistence on the a priori quality of the praxeological axiom. … This unremitting stance, which denied explanatory power to inductive reasoning or empirical observations, left many scholars cold. … Moreover, it did not seem that praxeology was supple enough to address contemporary problems.” Incredibly, Wasserman appears to attribute to Mises the odd view that every statement about economics can be deduced from the action axiom. Instead, of course, Mises developed praxeology as a deductive science that economists could use to help explain particular events. Doing so does not preclude empirical investigation but rather requires it. An even worse misunderstanding is this: “Mises’s elevation of economics to the status of logic had great seductive power. If all of Mises’s economic assertions could be deduced from his core tenet — ‘Human action is purposeful behavior’ — then decisions that impeded the smooth functioning of human action violated scientific law and human will.” This does not follow at all, and only someone bereft of ability to reason logically could think it did. If all actions are purposeful, then actions that impede other actions are also purposeful. Wasserman’s incompetence in theoretical issues is not confined to mistakes about Mises. He rightly says that The Theory of Games and Economic Behavior is difficult, but at one point he quotes a long sentence, which I shall not reproduce here, and says of it: “As a further example take one of von Neumann’s more straightforward explanations from early in the book, the elements of a game. 
… [Then follows proposition 6.2.1] Virtually no economists at the time were familiar with set notation or group theory, rendering this passage incomprehensible to its intended audience.” In fact, the proposition is easy to understand and requires no knowledge of group theory or set notation. It says no more than that a game consists of a fixed number of moves, where a “move” is a choice among given alternatives, and provides symbols for these points. Here is another example of Wasserman’s ignorance, though here I am captious. He says, “Röpke attracted the support of Hayek and the Italian éminence grise social scientist Benedetto Croce…”. To call Croce a “social scientist” is jarring. Croce was a leading light of Italian Idealist philosophy, as well as a historian and man of letters, not a social scientist. Wasserman has strong political opinions and, as I have said earlier, he obtrudes these on readers in a way that arouses mistrust about his presentation of archival material. He says, “In this spat, the Austrians of the LvMI [Mises Institute] renewed their ongoing feud with the Kochs, GMU [George Mason University], and Cato. The Misesians rejected the separation of economics and politics: Austrian economics implied libertarianism — of a conservative stripe. The GMU Austrians were consistently anti-interventionist and pro-market not just in their scholarship but in their politics, and many of them identified ideologically with libertarianism. They nevertheless believed that one could keep one’s scholarship and politics separate. Rejecting the ‘value-free’ pretensions of the left-leaning libertarians — and the longer wertfrei tradition of the Austrian School — the LvMI bloc reached out to other marginal right-wing groups, such as states’ rights organizations, historical revisionists, and neo-Confederates.” Murray Rothbard did not reject value-freedom in economics.
To the contrary, he insisted on it, and a principal theme in his writings about policy is that economists should make clear their value-commitments. In this he has been followed by Joseph Salerno, whom Wasserman assails. A grosser misunderstanding of Rothbard could hardly be imagined. As Dante long ago said, “non ragioniam di lor.” Let us look at this ill-thought-out book and pass on.
  • Rick Rule: Deep Understanding of Markets Opens a Pathway to Entrepreneurial Leadership    (2019-11-12)
    Rick Rule is CEO at Sprott US Holdings. His lifetime focus on natural resources finance enabled him to carve a unique pathway to entrepreneurial success. Like many entrepreneurial journeys, Rick’s had some twists and turns. Here are some of the key stages. Key Takeaways and Actionable Insights Find out early what you love. Rick enjoyed the outdoors, nature and therefore natural resources, the associated science of efficient and effective use of natural resources, and finance. All of us have a combination of likes and preferences that may stimulate us but may not initially appear to present us with an entrepreneurial recipe. But as Curt Carlson explained in Episode #34, combining knowledge from different people and fields can result in compounding insights. Combine Knowledge in New Ways. Rick combined natural resource science with principles of corporate finance, specifically debt and equity finance for extractive industries. As a result of the special properties of natural resource markets, and firms’ needs for customized financing, an opportunity niche emerged. Rick’s application of his special combination of knowledge placed him in a competitively advantaged position. Learn By (Hard) Experience. Rick learned not to confuse a bull market with brains, as he puts it. He did business through a complete commodity market cycle in the 1970s through the early 80s, experiencing volatility and ups and downs first hand. Theory is no substitute for experience. Nevertheless, his knowledge of Austrian Business Cycle Theory and Austrian Price Theory (“the cure for high prices is high prices, and the cure for low prices is low prices”) granted him a superior perspective in interpreting market signals. Develop Deep Market and Customer Understanding. In his focus market, Rick developed a business segmentation that focused on participant firms of a defined size. Identify a Need You Can Fill For Your Carefully Selected Audience in Your Carefully Selected Market Segment.
The business model came together in a way that Rick describes as “lender of last resort to high quality management teams in high quality companies that were not popular” and were therefore capital constrained. In addition, Rick’s understanding of business cycles and commodity prices further strengthened his confidence in lending when others would not, the market rewards for which turned out to be high. Combine Empathy, Trust and Courage. Rick confirmed the E4E emphasis on empathy as an important skill for entrepreneurs — primarily, in his case, empathy for the customers whom he financed. He sought to combine empathy with trust: in a market where information is scarce, it is imperative to have trust in the sources. “Without trust,” says Rick, “I have no information, and therefore I cannot make decisions.” The third emotional attribute he identified is courage — the courage to have the conviction that your model indicating a future upcycle or price rise is well constructed, and not to second-guess it during the time that the trade is underwater. Additional Resource "Rick Rule's Path to Entrepreneurial Leadership" (PDF): Mises.org/E4E_39_PDF
  • The FDA Wants to Control Your Stem Cells    (2019-11-12)
    There’s an escalating government assault on our stem cells and we should all be very concerned about it. Most people probably associate stem cells with religious debates over embryos and fetuses. However, we all have stem cells inside of us that many contend can be extracted, processed and re-administered as a medical service. These are called autologous or “personal” stem cells. Current Food and Drug Administration (FDA) guidance, however, essentially classifies our stem cells as “drugs,” thus preventing us from freely using them as we wish. The FDA magically turned our cells into “drugs” in 2006 by changing one word in 21 CFR 1271, the regulatory framework that governs stem cells. It was an act that occurred without public commentary and conferred an authority upon the FDA that Congress never intended it to have. In the Journal of Translational Medicine, Michael Freeman and Mitchell Fuerst referred to the word change as a “semantic sleight of hand.” In their 2012 paper, those same authors warned us that the FDA wasn’t done expanding its regulatory authority over our stem cells. Given recent FDA actions like its restrictive 2017 stem cell therapy (SCT) guidance, the issuance of warning letters to clinics using autologous SCT and the litigation against SCT clinics, Freeman and Fuerst’s words ring eerily prophetic. Indeed, at the urging of the House Energy and Commerce Committee, as well as others in the private sector, the FDA has recently increased its enforcement actions against clinics offering SCT. With the assistance of certain corporate media outlets, a distinct cadre of vocal SCT regulationists have methodically deployed disinformation campaigns, unsubstantiated invective and pejorative terms such as “rogue,” “unproven,” “wild west,” “unregulated,” “illegitimate,” “dangerous” and “snake oil” in an effort to transform the term “stem cell clinic” into a scary slur.
The SCT regulationist assault comes principally from the FDA, whose regulations and guidance unduly restrict access to personal SCT and hurt more people than they purport to protect. In fact, the actual reported cases of harm from SCT are remarkably few, while many patients express satisfaction with treatment. Likewise, academic publications like Stem Cell Reports have joined the regulationist phalanx, publishing a pseudo-scientific study suggesting that online platforms such as YouTube should summarily suppress or eliminate content containing positive testimonials about SCT. Google has recently joined the authoritarian online censorship campaign against SCT. Similarly, the Canadian Medical Association Journal published a SCT fearmongering case report with unsubstantiated generalizations about the “dangers” of SCT. Bioethicists and scientists also play active roles in the regulationist cadre, arguing for SCT crowdfunding censorship by GoFundMe and baselessly asserting how unscrupulous, bad-actor stem cell clinicians use deceptive marketing practices to exploit desperate patients. Even a recent Pew Charitable Trust report endorsed an increased FDA crackdown on autologous SCT, which is hypocritical given that its recommendations were clearly biased against SCT clinics and informed in part by a former Johnson & Johnson employee. That’s the same Johnson & Johnson that is currently embroiled in a maelstrom of controversy over its toxic, cancer-causing “baby” powder and that just reached a $20.4 million settlement in an opioid epidemic liability lawsuit. The SCT regulationist campaign is effectively propelled by a paternalistic government-media-industry-academic quadplex that issues policy decrees with an insularity and condescension that regards patients as chattel, or a veritable medical untermensch.
This results in a non-consensual usurpation of our own health autonomy, an infantilization of a crippled sub-stratum of Americans who are excluded from any serious discussion of SCT regulatory policy, and who are presumably too stupid to make their own medical decisions. This power asymmetry within the SCT policymaking dynamic is incongruent with the fundamental precepts of justice, inclusion and egalitarianism that undergird our American democracy. And appurtenant to our democracy are certain fundamental civil rights, one of which is the right to privacy. Arguably, there is some legally cognizable privacy interest in cells that are extracted from us with the intention of being infused back into us. Our right to privacy is premised upon the idea of personal autonomy and extends to the right to bodily integrity. In Union Pacific Railway Co. v. Botsford (1891) the U.S. Supreme Court opined that “no right is held more sacred, or more carefully guarded … than the right of every individual to the possession and control of his own person, free from all restraint or interference of others, unless by clear and unquestionable authority of law.” The FDA and the SCT regulationists would do well to recognize this right.
  • Two Parties, Should We Care?    (2019-11-11)
    Includes an introduction by Lew Rockwell. Recorded on November 9, 2019, in Lake Jackson, Texas.
  • Ron Paul, Hero    (2019-11-11)
    Includes an introduction by Jeff Deist. Recorded on November 9, 2019, in Lake Jackson, Texas.
  • How "Meaningless Words" Create the Narrative    (2019-11-11)
    Recorded on November 9, 2019, in Lake Jackson, Texas.
  • How Not to be a CIA Propagandist    (2019-11-11)
    Recorded on November 9, 2019, in Lake Jackson, Texas.
  • Propaganda and the 2020 Foreign Policy Debate    (2019-11-11)
    Recorded on November 9, 2019, in Lake Jackson, Texas.
  • Welcome to the 2019 Ron Paul Symposium    (2019-11-11)
    Includes an introduction by David Gornoski. Recorded on November 9, 2019, in Lake Jackson, Texas.
  • 19th-Century Americans Didn't "Support the Troops"    (2019-11-11)
    Were an American from the mid-nineteenth century to time-travel to modern America, he'd be truly amazed to find that Americans are often expected to thank soldiers "for your service" and to act as if the military were doing the taxpayer a favor. The lionizing of government employees in uniform has become standard fare in the post-9/11 world, with special discounts for members of the military, early boarding on airplanes, and free meals at restaurants. It's quite a contrast from the attitude of Americans during the first century of the republic, however. Of this, the examples are numerous. For example, in his memoirs, Ulysses S. Grant recounts how he trotted out into the streets of Cincinnati after first receiving his uniform as an officer. According to Grant: I donned [the uniform] and set off for Cincinnati on horseback. While I was ... imagining that everyone was looking at me ... a little urchin, bareheaded, barefooted, with dirty and ragged pants ... turned to me and said "Soldier! will you work? No sir-ee: I'll sell my shirt first."1 This attitude, Richard Bruce Winders explains in his history of James K. Polk's army, "illustrates the image of soldiers, common in the 1840s, as slackers on the public dole."2 Indeed, even as late as 1891, a speech published in the Christian journal The Churchman recounted Grant's anecdote and concluded "the national contempt" for the army was based on the fact "it is 'such a lazy life.'"3 Nor did such attitudes begin in the 1840s. In his biography of George Washington, Mason Locke Weems notes the lack of concern over American casualties suffered under Anthony Wayne in a battle with the Shawnee in 1794: However, after the first shock, the loss of these poor souls was not much lamented.
Tall young fellows, who could easily get their half dollar a day at the healthful and glorious labor of the plough, to go and enlist and rust among the lice and itch of a camp, for four dollars a month, were certainly not worth their country's crying about. [Emphasis in the original.]4 The Militia vs. The Federal Military This general contempt for soldiering wasn't applied to all soldiers. In nineteenth-century America, it was considered honorable to be a militiaman — a part-time soldier tasked with protecting one's community from raiding Indians and gangs and thugs. It was something else entirely, however, to be a professional, full-time soldier. Those people, it was commonly felt, were indeed what we today would call "welfare queens" living off the hard work of American taxpayers. In other words, for Americans of the time, it was laudable to take up arms in defense of one's community. But one was also expected to get a real job. [RELATED: "Why We Can't Ignore the 'Militia' Clause of the Second Amendment" by Ryan McMaken.] Put another way, the militias were one thing. The "standing army" was something else entirely. This isn't surprising given the general disdain for standing armies in early America. The fear and contempt for standing armies was the primary motivation for the Second Amendment — an amendment designed to encourage and protect local militias and the ownership of firearms outside of federal control. According to historian Marcus Cunliffe, this attitude goes back to the days of American resentment over British regulars being quartered and fed using the housing and food of American civilians.
In many cases, Americans begrudgingly tolerated the imposition, but after the Revolution, this attitude toward professional soldiers was simply transferred from British troops to American federal troops.5 Indeed, even members of the military were aware of their semi-pariah status. In a speech to military cadets, officer Benjamin Butler concluded in 1849 that "large standing armies" are "productive of needless expenditure; injurious to the habits and morals of the people." Even within military families full of federal officers, one brother might advise another in 1845 that "I by no means desire that my sons should ever wear a sword. I would certainly prefer that they should become honest, industrious mechanics."6 This would echo a common sentiment of the period that even for those who did spend some time as a professional soldier, it was best to use "every kind of negative and positive inducement" to encourage a soldier "to turn himself back into a civilian before it was too late."7 In practice, of course, few Americans ever had to deal in person with any of these federal officers they so disdained. As Cunliffe notes, Americans in many areas had no idea what a regular officer looked like. Regular soldiers existed for them only as caricatures — the enlisted men as drunkards and "mercenaries," the officers as haughty "aristocrats."8 Decades later, the rarity of federal officers remained a point of pride for critics of the standing army up until the First World War. In a 1914 semi-humorous editorial in Collier's magazine titled "Why We Cannot Have a Standing Army," George Fitch is grateful the United States is not like the Old World where "working people of Europe must" support millions of soldiers "in idleness." In America, Fitch happily reminds his readers, "millions of people live and die without seeing a member of the regular Army."
When forced to deal with a standing army, however, Fitch suggests the soldiers be equipped with rickshaws and "plac[ed] at the service of the public" so that the taxpayers might go "joyriding" and thus more easily endure the burden imposed by the "armed hordes."9 Such rhetoric in the mid-nineteenth century would have been extremely commonplace. But in the decades following the Civil War, the sheer number of military veterans — combined with the fact that their state militia units had been mostly federalized in the conflict — meant soldiers more commonly came to be regarded as objects of reverence rather than suspicion. It was just one of the ways the United States became "federalized" after the Civil War. Military service became less about service to one's particular community, and more about national service. This change was helped along by federal legislation which blurred — and eventually all but abolished — the line between state militias and the federal military. Today, the militias have been transformed into the National Guard and made de facto permanent instruments of federal military policy. The distinction between the "citizen soldier" and the professional hireling has been almost totally erased, and there is no longer any cultural mandate to suspect federal troops of wasting the taxpayers' hard-earned cash. Indeed, the taxpayers are often now expected to thank the soldiers for doing a service the taxpayer already pays handsomely for. Our nineteenth-century time traveler would find this state of affairs to be very odd indeed.
1. Corson, O.T., "Birth, Boyhood and Education of General Grant," The Ohio Educational Monthly, Volume 71, p. 9.
2. Winders, Richard Bruce, Mr. Polk's Army: The American Military Experience in the Mexican War. Texas A&M University Press, 2000, p. 51.
3. Address by Rev. William Langford, The Churchman, November 14, 1891 (Volume 64), p. 637.
4. Weems, M.L., The Life of George Washington: With Curious Anecdotes, Equally Honourable to Himself and Exemplary to His Young Countrymen. Lippincott, Philadelphia, 1800, p. 151. (https://archive.org/details/lifeofgeorgewashweem/page/n6)
5. Cunliffe, Marcus, Soldiers and Civilians: The Martial Spirit in America 1775-1865. Little, Brown and Co., Boston, 1968.
6. Ibid., p. 130.
7. Ibid., p. 129.
8. Ibid., p. 103.
9. Collier's, Volume 52, March 14, 1914, p. 9.
  • The Decline of the Rule of Law    (2019-11-11)
    Political wisdom, dearly bought by the bitter experience of generations, is often lost through the gradual change in the meaning of the words which express its maxims. Though the phrases themselves may continue to receive lip service, they are slowly denuded of their original significance until they are dropped as empty and commonplace. Finally, an ideal for which people have passionately fought in the past falls into oblivion because it lacks a generally understood name. If the history of political concepts is in general of interest only to the specialist, in such situations there is often no other way of discovering what is happening in our time than to go back to the source in order to recover the original meaning of the debased verbal coin which we still use. Today this is certainly true of the conception of the Rule of Law which stood for the Englishman's ideal of liberty, but which seems now to have lost both its meaning and its appeal. There can be little doubt about the source from which the Englishmen of the late Tudor and early Stuart period derived their new political ideal for which their sons fought in the 17th century; it was the rediscovery of the political philosophy of ancient Greece and Rome which, as Thomas Hobbes complained, inspired the new enthusiasm for liberty. Yet if we ask precisely what were the features in the teaching of the ancients which had that great appeal, the answer of modern scholarship is none too clear. We need not take seriously the fashionable allegation that personal freedom did not exist in ancient Athens: whatever may have been true of the degenerate democracy against which Plato reacted, it certainly was not true of those Athenians whom, at the moment of supreme danger during the Sicilian expedition, their general reminded above all that they were fighting for a country in which they had "unfettered discretion to live as they pleased." 
But wherein did this freedom of the "freest of the free countries," as Nicias called it on the same occasion, appear to consist – both to the Greeks themselves and to the Elizabethans whose imagination it fired? I suggest the answer lies in part in a Greek word which the Elizabethans borrowed from the Greeks but which has since fallen into disuse; its history, both in ancient Greece and later, provides a curious lesson. Isonomia, which appears in 1598 in John Florio's World of Wordes as an Italian word meaning "equalitie of lawes to all manner of persons," two years later, in its Englished form "isonomy," is already freely used by Philemon Holland in his translation of Livy to render the description of a state of equal laws for all and of responsibility of the magistrates. It continued to be used frequently throughout the 17th century, and "equality before the law," "government of law," and "rule of law" all seem to be later renderings of the concept earlier described by the Greek term. Equal Laws for All The history of the word in ancient Greek is itself instructive. It was a very old term which had preceded demokratia as the name of a political ideal. To Herodotus it was "the most beautiful of all names" for a political order. The demand for equal laws for all which it expressed was originally aimed against tyranny, but later came to be accepted as a general principle from which the demand for democracy was derived. After democracy had been achieved, the term continued to be used as a justification and later, as one scholar suggests, perhaps as a disguise of the true character of democracy; because democratic government soon proceeded to destroy that very equality before the law from which it derived its justification. The Greeks fully understood that the two concepts, although related, did not mean the same thing.
Thucydides speaks without hesitation of an "isonomic oligarchy," and later we find isonomia used by Plato quite deliberately in contrast to, rather than in vindication of, democracy. In the light of this development the celebrated passages in Aristotle's Politics in which he discusses the different kinds of democracy, even though he no longer uses the term isonomia, read like a defense of this old ideal. Readers will probably remember how he stresses that "it is more proper that law should govern than anyone of the citizens," that the persons holding supreme power "should be appointed only guardians and servants of the law," and particularly how he condemns the kind of government under which "the people govern and not the law." Such a government, according to him, cannot be regarded as a free state: "for when the government is not in the laws, then there is no free state, for the law ought to be supreme over all things"; he even contends that "any such establishment which centers all power in the votes of the people can not, properly speaking, be called a democracy, for their decrees can not be general in their extent." Together with the equally famous passage in the Rhetorics, in which he argues that "it is of great moment that well-drawn laws should themselves define all the points they can and leave as few as may be for the decision of the judges," this provides a fairly coherent doctrine of government by law. How much all this meant to the Athenians is shown by the account given by Demosthenes of a law introduced by an Athenian under which "it should not be lawful to propose a law affecting any one individual, unless the same applied to all Athenians," because he was of the opinion that, "as every citizen has an equal share in civil rights, so everybody should have an equal share in the laws." Although, like Aristotle, Demosthenes no longer uses the term isonomia, the statement is little more than a paraphrase of the old concept. 
17th-Century Rediscovery A characteristic dispute between Hobbes and Harrington, from which, I believe, the modern use of the phrase "government by laws and not by men" derives, indicates how alive these views of the ancient philosophers were to the political thinkers of the 17th century. Hobbes had described it as "just another error of Aristotle's politics that in a well-ordered commonwealth not men should govern but the law." Harrington countered that the "art whereby a civil society is instituted and preserved upon the foundation of common right or interest" is "to follow Aristotle and Livy … the empire of laws, not of men." To the 17th-century Englishmen, it seems, the Latin authors, particularly Livy, Cicero, and Tacitus, became increasingly the more important sources of political philosophy. But, even if they did not go to Holland's translation of Livy where they would have found the word, it was still the Greek ideal of isonomia which they met at all the crucial points. Cicero's Omnes legum servi sumus ut liberi esse possumus [we are all servants of the laws in order that we may be free] (repeated later, almost word for word, by Voltaire, Montesquieu, and Kant) is perhaps the most concise expression of the ideal of freedom under the law. During the classical period of the Roman law, it was once more understood that there was no real conflict between freedom and the law; the law's generality, certainty, and the restrictions it placed on the discretion of the authority were the essential condition of freedom. This condition lasted until the strict law (ius strictum) was progressively abandoned in the interest of a new social policy. As a distinguished student of Roman law, F. Pringsheim, has described this process which started under the Emperor Constantine: The absolute empire proclaimed together with the principle of equity the authority of the imperial will unfettered by the barrier of law.
Justinian with his learned professors brought this process to its conclusion. Struggle for Economic Freedom When it comes to showing what the Englishmen of the seventeenth and eighteenth centuries made of the classical tradition they had rediscovered, any brief account must inevitably consist mainly of quotations. But many of the most telling and instructive expressions of the central doctrine as it developed are less well known than they deserve. Nor is it generally remembered today that the decisive struggle between King and Parliament which led to the recognition and elaboration of the Rule of Law was fought mainly over the kind of economic issues which are again the center of controversy today. To the 19th-century historians the measures of James I and Charles I which produced the conflict seemed antiquated abuses without topical interest. Today, some of these disputes have an extraordinarily familiar ring. (In 1628 Charles I refrained from nationalizing coal only when it was pointed out to him that it might cause a rebellion!) Throughout the period it was the demand for equal laws for all citizens by which Parliament opposed the King's efforts to regulate economic life. Men then seem to have understood better than they do today that the control of production always means the creation of privilege, of giving permission to Peter to do what Paul is not allowed to do. The first great statement of the principle of the Rule of Law, of certain and equal laws for all and of the limitation of administrative discretion, is contained in the Petition of Grievances of 1610; it was caused by new regulations for building in London and the prohibition of the making of starch from wheat which the King had made.
On this occasion the House of Commons pleaded: Among many other points of happiness and freedom which Your Majesty's subjects of this kingdom have enjoyed under your royal progenitors, Kings and Queens of this realm, there is none which they have accounted more dear and precious than this, to be guided and governed by the certain rule of law, which giveth both to the head and the members that which of right belongeth to them, and not by any uncertain and arbitrary form of government…. Out of this root hath grown the indisputable right of the people of this kingdom, not to be subject to any punishment that shall extend their lives, lands, bodies, or goods, other than such as are ordained by the common law of this land, or the statutes made by their common consent in Parliament. The further development of what contemporary Socialist lawyers have contemptuously dismissed as the Whig doctrine of the Rule of Law was closely connected with the fight against government-conferred monopoly and particularly with the discussion around the Statute of Monopolies of 1624. It was mainly in this connection that that great source of Whig doctrine, Sir Edward Coke, developed his interpretation of Magna Carta which led him to declare (in his second Institutes): If a grant be made to any man, to have the sole making of cards or the sole dealing with any other trade, that grant is against the liberty and freedom of the subject … and consequently against this great charter. We have already noticed the characteristic positions taken on the critical point of executive discretion by Hobbes and Harrington respectively. We are not interested here in tracing the further steps in the development of the doctrine and shall pass over even its classical exposition by John Locke, except for the rarely noticed modern justification which he gives it. 
Its aim is to him what contemporary writers have called the "taming of power": Laws made and rules set … to limit the power and moderate the dominion of every part and member of society. The form in which the doctrine became the common property of Englishmen was determined, however, as is probably always true in such cases, more by the historians who presented the achievements of the revolution to later generations than by the writings of the political theorists. Thus, if we want to know what the tradition in question meant to the Englishman of the late eighteenth or early 19th century, we can hardly do better than turn to David Hume's History of England which indeed is to a large extent an interpretation of political progress from "government of will" to "government of law." There is particularly one passage, referring to the abolition of the Star Chamber in 1641, which shows what he regarded as the chief significance of the constitutional developments of the 17th century: No government, at that time, appeared in the world, nor is perhaps found in the records of any history, which subsisted without a mixture of some arbitrary authority, committed to some magistrate; and it might reasonably, beforehand, appear doubtful whether human society could ever arrive at that state of perfection, as to support itself with no other control, than the general and rigid maxims of law and equity. But the Parliament justly thought that the King was too eminent a magistrate to be trusted with discretionary power, which he might so easily turn to the destruction of liberty. And in the event it has been found that, though some inconveniencies arise from the maxim of adhering strictly to law, yet the advantages so much overbalance them, as should render the English forever grateful to the memory of their ancestors who, after repeated contests, at last established that noble principle. 
Later, of course, this Whig doctrine found its classic expression in many familiar passages of Edmund Burke. But if we want a more precise statement of its content we have to turn to some of his lesser contemporaries. A characteristic statement, which occurs in the Junius letters (commonly attributed to Sir Philip Francis), is the following: The government of England is a government of law. We betray ourselves, we contradict the spirit of our laws, and we shake the whole system of English jurisprudence, whenever we entrust a discretionary power over the life, liberty, or fortune of the subject to any man, or set of men, whatsoever, on the presumption that it will not be abused. The fullest account of the rationale of the whole doctrine which I know occurs, however, in the chapter "Of the Administration of Justice" in Archdeacon Paley's Principles of Moral and Political Philosophy: The first maxim of a free state is, that the laws be made by one set of men, and administered by another; in other words, that the legislative and the judicial character be kept separate. When these offices are united in the same person or assembly, particular laws are made for particular cases, springing oftentimes from partial motives, and directed to private ends: whilst they are kept separate, general laws are made by one body of men, without foreseeing whom they will affect; and, when made, must be applied by the other, let them affect whom they will…. Parliament knows not the individuals upon whom its acts will operate: it has no case or parties before it: no private designs to serve: consequently, its resolutions will be suggested by the consideration of universal effects and tendencies, which always produce impartial and commonly advantageous regulations. Here, I suggest, we have nearly all the elements which together produce the complex doctrine which the 19th century took for granted under the name of the Rule of Law. 
The main point is that, in the use of its coercive powers, the discretion of the authorities should be so strictly bound by laws laid down beforehand that the individual can foresee with fair certainty how these powers will be used in particular instances; and that the laws themselves are truly general and create no privileges for class or person because they are made in view of their long-run effects and therefore in necessary ignorance of who will be the particular individuals who will be benefited or harmed by them. That the law should be an instrument to be used by the individuals for their ends and not an instrument used upon the people by the legislators is the ultimate meaning of the Rule of Law. Since this Rule of Law is a rule for the legislator, a rule about what the law ought to be, it can, of course, never be a rule of the positive law of any land. The legislator can never effectively limit his own powers. The rule is rather a meta-legal principle which can operate only through its action on public opinion. So long as it is generally believed in, it will keep legislation within the bounds of the Rule of Law. Once it ceases to be accepted or understood by public opinion, soon the law itself will be in conflict with the Rule of Law. As the establishment of the Rule of Law in England was the outcome of the slow growth of public opinion, the result was neither systematic nor consistent. The theorizing about it was mainly left to foreigners who, in explaining English institutions to their compatriots, had to try to make explicit and to give the appearance of order to a set of seemingly irrational traditions which yet mysteriously secured to the Englishman a degree of liberty scarcely known on the Continent. These efforts to embody into a definite program for reform what had been the result of historical growth at the same time could not but show that the English development had remained curiously incomplete. 
That English law should never have drawn such obvious conclusions from the general principle as formally to recognize the principle nulla poena sine lege, or to give to the citizen an effective remedy against wrongs done him by the state (as distinguished from its individual agents), or that English constitutional development should not have led to the provision of any built-in safeguards against the infringement of the Rule of Law by routine legislation, seemed curious anomalies to the Continental lawyers who wished to imitate the British model. The demand for the establishment of the Rule of Law in the Continental countries also became to some extent the conscious aim of a political movement, which had never been the case in England. Indeed, for a time in France and for a somewhat longer period in Germany, this demand was the very heart of the liberal program. In France it reached its height during the July monarchy when Louis Philippe himself proclaimed it as a basic principle of his reign: "Liberty consists only in the rule of laws." But neither the reign of Napoleon III nor the Third Republic provided a favorable atmosphere for the further growth of this tradition. And although France made some important contributions in adapting the English principle to a very different governmental structure, it was in Germany that the theoretical development was carried furthest. In the end it was the German conception of the Rechtsstaat which not only guided the liberal movements on the Continent but became characteristic of the European governmental systems as they existed until 1914. This continental development is very instructive because there the efforts to establish the Rule of Law met from the very beginning conditions which arose in England only much later – the existence of a highly developed central administrative apparatus. This had grown up unfettered by the restrictions which the Rule of Law places on the discretionary use of coercion. 
Since these countries were not willing to dispense with its machinery, it was clear that the main problem was how to subject the administrative power to judicial control. It is a matter of comparative detail that in fact separate administrative courts were created to enforce the elaborate system evolved to restrain the administrative agencies. The main point is that the relations between these agencies and the citizen were systematically subjected to legal rules ultimately to be applied by a court of law. The German lawyers indeed, and with justice, regarded the creation of administrative courts as the crowning achievement of their efforts toward the Rechtsstaat. There could hardly have been a more grotesque and more harmful misjudgment of the Continental position by an eminent lawyer than A. V. Dicey's well-known contention that the existence of a distinct administrative law was in conflict with the Rule of Law.

Limits to Coercion

The real flaw of the Continental system, which English observers sensed but did not understand, lay elsewhere. The great misfortune was that the completion of the Continental development turned on a point which to the general public inevitably appeared merely a minor legal technicality. To guide all administrative coercion by rigid rules of law was a task which could have been solved only after long experience. If the existing administrative agencies were to continue their functions, it was evidently necessary to allow them for a time certain limited spheres within which they could employ their coercive powers according to their discretion. With respect to this field the administrative courts were therefore given power to decide, not whether the action taken by an administrative agency was such as was prescribed by the law, but merely whether it had acted within the limits of its discretion. 
This provision proved to be the loophole through which, in Germany and France, the modern administrative state could grow up and progressively undermine the principle of the Rechtsstaat. It cannot be maintained that this was an inevitable development. If the Rule of Law had been strictly observed, this might well have caused what David Hume had called "some inconveniencies," and might even substantially have delayed some desirable developments. Although the authorities must undoubtedly be given some discretion for such decisions as to destroy an owner's cattle in order to stop the spreading of a contagious disease, to tear down houses to prevent the spreading of fire, or to enforce safety regulations for buildings, this need not be a discretion exempt from judicial review. The judge may want expert opinion to decide whether the particular measures were necessary or reasonable. There ought to be the further safeguard that the owners affected by such decisions are entitled to full compensation for the sacrifice they are required to make in the interest of the community. The important point is that the decision is derived from a general rule and not from particular preferences which the policy of the government follows at the moment. The machinery of government, so far as it uses coercion, still serves general and timeless purposes, not particular ends. It makes no distinction between particular people. The discretion conferred is a limited discretion in the sense that the agent is to carry out the sense of a general rule. That this rule cannot be made wholly explicit or precise is the result of human imperfection. That it is in principle, however, still a matter of applying a general rule is shown by the fact that an independent and impartial judge, who in no way represents the policy of the government of the day, will be able to decide whether the action was or was not in accordance with the law. 
No Permanent Achievement

The suspicion with which Dicey and other English and American lawyers viewed the Continental position was thus not unjustified, even though they had misunderstood the causes of the state of affairs which existed there. It was not the existence of an administrative law and of administrative courts which was in conflict with the Rule of Law, but the fact that the principle underlying these institutions had not been carried through to its conclusion. Even at the time when, in the later part of the last century, the ideal of the Rechtsstaat had gained its greatest influence, the more deliberate efforts made on the Continent had not really succeeded in putting it into actual practice as fully as had been the case in England. There still remained there, as an American observer (A. Lawrence Lowell) then described it, much of the kind of power which "most Anglo-Saxons feel … is in its nature arbitrary and ought not to be extended further than is absolutely necessary." And before the principle of the Rechtsstaat was completely realized and the remnants of the police state finally driven out, that old form of government began to reassert itself under the new name of Welfare State. At the beginning of our century, the establishment of the Rule of Law appeared to most people one of the permanent achievements of Western civilization. Yet the process by which this tradition has been slowly undermined and eventually destroyed had even then been underway for nearly a generation. And today it is doubtful whether there is anywhere in Europe a man who can still boast that he need merely keep within the law to be wholly independent, in earning his livelihood, from the discretionary powers of arbitrary authority.

Socialist Inroads

The attack on the principles of the Rule of Law was part of the general movement away from liberalism which began about 1870. It came almost entirely from the intellectual leaders of the socialist movement. 
They directed their criticism against practically every one of the principles which together make up the Rule of Law. But initially it was aimed mainly against the ideal of equality before the law. The socialists understood that if the state was to correct the unequal results which in a free society different gifts and different luck would bring to different people, these had to be treated unequally. As one of the most eminent legal theorists of Continental socialism, Anton Menger, explained in his Civil Law and the Propertyless Classes (1890): By treating perfectly equally all citizens, without regard to their personal qualities and economic positions, and admitting unlimited competition between them, it was brought about that the production of goods was increased without limit, but also the poor and weak had only a small share in that increased output. The new economic and social legislation attempts therefore to protect the weak against the strong and to secure for them a moderate share in the good things of life. We know today that there is no greater injustice than to treat as equal what is in fact unequal. A few years later, Anatole France was to give wide circulation to the similar ideas of his French socialist friends in the much quoted gibe about "the majestic equality of the laws, which forbids the rich as well as the poor to sleep under bridges, to beg in the streets, to steal bread." Little did the countless well-meaning persons who have since repeated this phrase realize that they were giving currency to one of the cleverest attacks on the fundamental principles of liberal society. 
The systematic campaign which during the last sixty years has been conducted against all the constituent parts of the tradition of the Rule of Law mostly took the form of alleging that the particular principle in question had never really been in force, that it was impossible or impracticable to achieve it, that it had no definite meaning, and, in the end, that it was not even desirable. It may well be true, of course, that none of these ideals can ever be completely realized. But, if it is generally held that the law ought to be certain, that legislation and jurisdiction ought to be separate functions, that the exercise of discretion in the use of coercive powers should be strictly limited and always subject to judicial control, etc., these ideals will be achieved to a high degree. Once they are represented as illusions and people cease to strive for their realization, their practical influence is bound to vanish rapidly. And this is precisely what has happened. The attacks against those features of the Rule of Law were directly determined by the recognition that to observe them would prevent an effective control of economic life by the state. The economic planning which was to be the socialist means to economic justice would be impossible unless the state was able to direct people and their possessions to whatever task the exigencies of the moment seemed to require. This, of course, is the very opposite of the Rule of Law.

Concept of Justice Abandoned

At the same time, another and perhaps even more fundamental process helped to speed up that development. Jurisprudence abandoned all concern with those metalegal criteria by which the justice of a given law can alone be determined. For legal positivism the concrete will of the majority on a particular issue became the only criterion of justice applicable in a democracy. On this basis it became impossible even to argue about – or to persuade anybody of – the justice or injustice of a law. 
To the lawyer who regards himself as a mere technician intent upon implementing the popular will, there can be no problem beyond what is in fact the law. To him the question whether a law conforms to general principles of justice is simply meaningless. The concept of the Rechtsstaat, which originally had implied certain requirements about the character of the laws, thus came to mean no more than that everything the government did must be authorized by a law – even if only in the sense that the law said that the government could do as it pleased. Years before Hitler came to power German legal scholars had pointed out that this complete emptying of the concept of the Rechtsstaat had reached a point where what remained no longer formed an obstacle to the creation of a totalitarian regime. Today it is widely recognized in Germany that this is exactly where that development led. But if there is now a healthy reaction under way in German legal thinking, the state of British discussion on this crucial problem seems to be very much where it was in pre-Hitler Germany. The Rule of Law is generally represented as either meaningless or requiring no more than legality of all government action. According to Sir Ivor Jennings, the Rule of Law in its original sense, "is a rule of action for Whigs and may be ignored by others." In its modern sense, he believes, it "is either common to all nations or does not exist." In Professor W. A. Robson's opinion it is possible to "distinguish 'policy' from 'law' only in theory" and "it is a misuse of language to say that an issue is 'nonjusticiable' merely because the adjudicating authority is free to determine the matter by the light of an unfettered discretion; and equally incorrect to say that an issue is 'justiciable' when there happens to be a clear rule of law available to be applied to it." Professor W. 
Friedmann informs us that in Britain "the Rule of Law is whatever Parliament, as the supreme lawgiver, makes it" and that therefore, "the incompatibility of planning with the Rule of Law is a myth sustainable only by prejudice or ignorance." Yet another member of the same group even went so far as to assert that the Rule of Law would still be in operation if the majority voted a dictator, say Hitler, into power: "the majority might be unwise, and it might be wicked, but the Rule of Law would prevail. For in a democracy right is what the majority make it to be." In one of the most recent treatises on English jurisprudence it is contended that in the sense in which the Rule of Law has been represented in the present discussion, it "would strictly require the reversal of legislative measures which all democratic legislatures have found essential in the last half century." That may well be. But would those legislatures have regarded it as essential to pass those measures in this particular form if they had understood that it meant the destruction of what for centuries, at home and abroad, had been regarded as the essence of British liberty? Was it really essential for social improvement that law after law should have given ministers powers for "prescribing what under this Act has to be prescribed"? About one thing there can be no doubt: this is essential to the progress of socialism. [This article was originally published in The Freeman: April 20, 1953 (part I), May 4, 1953 (part II).]
  • My First Time to East Berlin    (2019-11-11)
    This is the 30th anniversary of the fall of the Berlin Wall — one of the most glorious moments in the modern history of freedom. I first passed through that wall while hitchhiking around Europe in the summer of 1977. After camping in West German woods within mortar range of the Iron Curtain, I headed out at sunrise to catch a ride into East Germany. A quick hitch with an affable French businessman put me on the Autobahn and into West Berlin before noon. Rambling around the city, I met a young Dutch lesbian who was also surveying the scene. Hendrika slightly resembled a wheel of Gouda cheese but she had bright cheeks and mischievous eyes. She and I were both planning to visit East Berlin, and we figured there would be safety in numbers in enemy territory. We caught up the next morning and passed through Checkpoint Charlie into the Soviet Bloc’s premier “showplace of communism.” Traveling from West Berlin to East Berlin was like passing into the mirror image of Disneyland. Instead of ticket takers at the entrance of rides, there were undercover cops enticing visitors to exchange currency on the black market — after which they would be fined or jailed. The police were backstopped by civilian informers lurking everywhere, waiting to get paid for tidbits or smears. Hendrika’s fluent German helped keep us from being arrested as we traversed semi-forbidden parts of the city. On Alexanderplatz, near the city center, we saw elite East German soldiers goosestepping down the street. West Germany banned the goosestep after the Third Reich was destroyed. But that particular march naturally appealed to the Stalinist regime that the Russians imposed on East Germans after World War II. The goosestep perfectly captures the relation of the State to the people: anyone who did not submit would be crushed. As George Orwell wrote, “The goose-step is one of the most horrible sights in the world, far more terrifying than a dive-bomber. 
It is simply an affirmation of naked power.” Walking through East Berlin, I saw many bullet holes in apartment buildings and other structures. I didn’t know whether the damage remained from street battles between fascists and communists in the late 1920s and early 1930s, or from the Red Army’s bloody conquest of the city in 1945, or from the Soviets’ brutal repression of a popular uprising in 1953. Perhaps the bullet holes were left to remind people of the futility of resistance. On the other hand, almost everything looked gray and shabby, so it may have been simply on a long list of blemishes that never got fixed. Almost all the people I saw on East Berlin streets looked utterly browbeaten. I noted in my journal: “Perhaps despotism drains the soul. It would be difficult to have a high opinion of one’s powers if one was constantly being coerced and subjected to others’ wills.” The East German regime insisted that the Berlin Wall was to keep fascists out of their workers’ paradise. I wasn’t aware of the East German government taking any polls to determine how many of their subjects swallowed that hokum. Regardless, that regime felt entitled to inflict endless delusions on people in order to secure the victory of the proletariat or whatever. Hendrika and I visited the ghoul-like, cavernous main library in East Berlin. It was necessary to purchase a pass to enter, and only East Berlin students and visitors from non-communist nations were permitted inside. Every room had a guard, and we had to sign a registry and show our passes before entering it. There were vast empty spaces inside, perhaps symbolizing the vast Yukon Territory of knowledge beyond the pale. The local folks using that library didn’t seem to be emitting any mental sparks — maybe intellectual curiosity was considered a thought crime, too? 
The East Germans toiling with books and papers probably knew that anyone who raced across No Man’s Land to some forbidden idea might be terminated with extreme prejudice. Why have libraries where thinking was a crime? Any government terrified of ideas must be doing something wrong. As we gallivanted around East Berlin, Hendrika rattled on about life on her commune in southern Holland. She talked of being a “free spirit,” and I soon realized that the Dutch translation meant “also does animals.” She bragged of frolicking with any and all types of mammals with no reservations or prejudices. She was the first person I met who viewed anthrax as a sexually-transmitted disease. (And no, I didn’t.) Exiting Berlin, I caught a ride to Frankfurt with a friendly young leftist German university student. He said that there was not much difference between “freedom” in West and East Berlin because some workers in West Berlin lived in an area on the edge of the city with no subway station and only one supermarket. He lamented that it took them half an hour via a bus line to get to the center of the city. Hence, they had no freedom — just like the people in East Berlin. He stressed that East Germany had many advantages over West Germany, such as free health care and zero unemployment. Maybe he wasn’t aware that the D.D.R. government dictated the occupation each young person must follow? Did this guy not notice the food in the East German markets was utterly grim — almost zero fresh fruits aside from apples? I was puzzled why someone who seemed quite intelligent was utterly oblivious to the catastrophic consequences of destroying economic freedom. I did not catch the guy’s name and have no idea what happened to him. Maybe I should check to see if there are any German academics on Bernie Sanders’ economic policy team. Maybe this guy helped inspire Bernie’s denunciation of “23 underarm spray deodorants” in capitalist stores? 
A decade later, I crossed the Berlin Wall plenty of times while getting dirt for articles I wrote for the New York Times, Reader’s Digest, the Wall Street Journal, and other publications. I caught hell a number of times at East Bloc border crossings but never had a problem in Berlin.
  • Warren's Ginormous School-Choice Hypocrisy    (Corey A. DeAngelis, 2019-11-10)
    Corey A. DeAngelis Elizabeth Warren proudly opposes school choice — for your kids. She apparently could afford to take her own son out of the public-school system and enroll him in a private school, yet her education policies would deny that choice to less wealthy Americans. Sen. Warren’s schools plan, which she released last month, is radically anti-choice. She promises to end private-school vouchers and tax credits. She’d block new programs that give families choices and work to shut down existing ones. She’d also ban for-profit charter schools, end federal funding for new charters and add more regulatory barriers to opening them. The blueprint sounds like it came straight from the teachers’ union playbook: It calls for boosts in funding to government-run schools and more red tape for their competition. Warren and her daughter, Amelia, offered support for a limited form of public-school choice in their 2003 book, “The Two-Income Trap.” But her campaign insists that “Elizabeth Warren never supported private-school vouchers.” Warren reiterated this position in September’s Democratic presidential debate when she said, “Money for public schools should stay in public schools, not go anywhere else.” It is anything but “progressive” to limit educational options for the least-advantaged students in the US. Yet it’s the peak of hypocrisy to exercise school choice for your own kids while fighting to prevent other families from doing the same. One of Warren’s main talking points on education is that she attended traditional public schools as a child. But that decision was likely more her parents’ than hers. The more relevant question is where she chose to send her own kids. Until now, no one has been able to answer that question. 
Although Education Week tried, writers Alyson Klein and Maya Riser-Kositsky reported that Warren’s campaign “did not respond to inquiries about where she sent her children.” I e-mailed the Warren campaign the same question and similarly didn’t receive a response. Why the mystery? Warren regularly reminds us that she attended and taught at public schools. She brags that she is “#PublicSchoolProud.” Yet not proud enough, it seems, to stop her from exercising school choice by sending her kid to an expensive private school. There was little information regarding her children's K-12 education on the Internet. But using her son Alex Warren's full name and birth year (1976), I searched for school yearbooks on the premium version of Ancestry.com and found one record from 1987. The fifth-grade yearbook picture indeed appears to be Warren's son, matching family photos online. The record was from Kirby Hall School, a private school only a half-mile from the University of Texas at Austin, where Warren taught. It's not clear if Alex attended Kirby or any other private school during any other year, but if Warren was willing to pay the cost, she certainly had the option. Kirby's current tuition is $17,875 per year. The school also boasts a student-teacher ratio of only 5 to 1; that ratio in public schools is three times as high. And every one of Kirby's graduates goes to college. People who can afford it also exercise choice by moving to communities with good public schools, as Warren appears to have done. A similar search for her daughter turned up just one record, which showed Amelia attended Anderson High School in 1987. Although it's not a private school, US News and World Report ranks Anderson among the top high schools in the country. Warren's family's educational situation is vivid proof of the need for school choice. In the same year, one child went to a private school, the other to a public one. One size does not fit all. 
I don't blame Alex Warren one bit for attending private school when he was a child or Sen. Warren for sending him there. I'm happy for him and happy Warren had that option. But it's beyond hypocritical for the senator to try to deny less advantaged families educational choice after exercising it for her own kid. Corey DeAngelis is the director of school choice at Reason Foundation and an adjunct scholar at Cato Institute.
  • Media Outlets Turn Syria into Their Latest Melodrama    (Ted Galen Carpenter, 2019-11-10)
    Ted Galen Carpenter Mainstream media outlets inundated President Donald Trump with shrill denunciations when he announced the withdrawal of U.S. troops from Syria’s northern border with Turkey. The most frequent, emotionally laden criticism was that he had betrayed America’s Kurdish allies who had done much of the fighting against ISIS, giving Turkey a “green light” to launch a brutal military offensive against them. Most opinion pieces—and editorials masquerading as news stories—used such terms as loyal, noble, faithful and democratic to describe the Kurds. Once again, news coverage turned a complex geopolitical issue into a simplistic melodrama featuring admirable protagonists confronting odious villains. Trump's handling of the withdrawal issue was indisputably clumsy, and the Turkish government has many unsavory features. Indeed, as I've written elsewhere, it should be an embarrassment to Washington to have the duplicitous, autocratic Turkish government as a NATO ally. But Ankara's concerns about the impact of Kurdish separatist campaigns in Syria and Iraq on Turkey's own festering Kurdish minority problem are not without merit. The Kurdistan Workers Party (PKK) in Turkey has waged a secessionist war intermittently since the early 1980s. Turkish leaders worry about ties between the PKK and their ethnic brethren in Syria and Iraq. The nature and extent of the relationship is the subject of considerable uncertainty and controversy. Writing in Time, Jared Malsin, a Middle East correspondent for the Wall Street Journal, notes that both Turkey and the U.S. list the PKK as a terrorist organization, “but the U.S. insists that its militia partners in Syria are a separate group from the PKK.” Yet, Malsin emphasizes, “the two organizations have direct ties, and Kurdish citizens of Turkey are among the YPG’s [People’s Protection Units] fighters.” The Kurdish Democratic Union Party of Syria, which controls the YPG, originated in 2003 as an offshoot of the PKK. 
, The press has a history of playing loose with the facts when it comes to covering foreign wars. The Syrian drama is only the latest episode. , The Kurdish political movements in both Syria and Iraq also fall considerably short of being models of Western-style democracy. Kurdish fighters in Syria have committed a significant number of atrocities against prisoners of war and even noncombatants. Moreover, the Syrian Kurds did not assist the United States against ISIS out of a sense of altruism. Instead, they saw an unprecedented opportunity to achieve a long-standing goal: the establishment of an autonomous (if not outright independent) Kurdish-ruled region similar to what their counterparts in Iraq attained after the 1991 Persian Gulf War. Defeating ISIS was a prerequisite to establishing such an enclave. The stark oversimplifications that characterize much of the media treatment of Trump’s decision to sever ties with the Syrian Kurds have typified coverage of other foreign policy issues. Indeed, for most portions of the media, the entire Syrian civil war since 2011 has been a stark melodrama between good and evil akin to the old animated television feature Dudley Do-Right. In the standard media narrative, Syrian dictator Bashar al-Assad was arch-villain Snidely Whiplash, while Syrian insurgents were his innocent democracy-seeking victims—the equivalent of Sweet Nell, about to be tied to the railroad tracks. The United States, of course, was the noble, idealistic Dudley Do-Right, riding to the rescue. Such accounts also ignored or minimized the crucial religious dimension of the Syria conflict: the insurgency was overwhelmingly Sunni, while Assad’s supporters were a coalition of religious minorities consisting mainly of Christians, Druze and Assad’s own Alawites—a Shiite offshoot—who feared their likely fate under a Sunni-dominated successor government. 
Most media outlets also denied or ignored the radical Islamist orientation of several rebel factions, even going so far as to simply regurgitate Obama administration propaganda that they were “moderates.” On some occasions, especially the siege of Aleppo, media accounts were almost the opposite of the actual situation. A small minority of more skeptical journalists sharply criticized their colleagues for their naïve or biased conduct. According to the Boston Globe’s Stephen Kinzer, “for three years, violent militias have run Aleppo. Their rule began with a wave of repression. They posted notices warning residents: ‘Don’t send your children to school. If you do, we will get the backpack and you will get the coffin.’” A handful of other analyses confirmed the extent of abusive Islamist rule in rebel-held Aleppo, something the mainstream media sanitized. In early 2016, the Syrian army, backed by Russian air support, launched an offensive to push the militants out of Aleppo. But, Kinzer fumed, “much of the American press is reporting the opposite of what is actually happening. Many news reports suggest that Aleppo has been a ‘liberated zone’ for three years but is now being pulled back into misery.” He noted that Washington-based reporters used terminology that attempted to portray even the staunchly Islamist faction, Jabhat al-Nusra, as being composed “of ‘rebels’ or ‘moderates,’ not that it was the local al-Qaeda franchise.” Georgetown University senior fellow Paul R. Pillar likewise was critical of much of the Aleppo coverage, finding it excessively emotional and one-sided. British journalist Patrick Cockburn contended that there was far more propaganda than genuine news coming out of Aleppo. He charged that most outlets were relying on locals provided by rebel factions to provide information, and such a process guaranteed blatant bias. 
“The foreign media has allowed—through naivety or self-interest—people who could only operate with the permission of al-Qaeda-type groups such as Jabhat al-Nusra and Ahrar al-Sham to dominate the news agenda.” The same pattern of oversimplifying complex disputes and designating one side as virtuous and the other as villainous was a striking characteristic of media accounts of the Balkan Wars (Bosnia and Kosovo) in the 1990s. Proponents of advocacy journalism, such as Samantha Power, who would later serve on the National Security Council and become U.S. ambassador to the United Nations during Barack Obama's administration, openly asserted—and in her case, continues to assert—that it was the media's duty to sell the public on the need for Western military intervention to save the victims of odious Serb aggressors. Key nuances, including that some of the most ferocious fighting in Bosnia occurred between Muslims and Croats, not between Muslims and Serbs, were largely ignored. So, too, were war crimes that Muslim forces committed in both Bosnia and Kosovo. In the latter theater, such atrocities included murdering civilians and prisoners of war to sell transplantable organs on an international black market. Media coverage of the Balkan and Syrian conflicts was horribly simplistic and one-sided, but the worst example of media malpractice occurred in the lead-up to the Iraq War. Not only did most major media outlets largely exclude opinion pieces that challenged the Bush administration's case for war, but prominent journalists lionized Ahmed Chalabi, the corrupt leader of the main exile group, the Iraqi National Congress. Worse, they served as channels for his disinformation. The most egregious offender was the New York Times and its lead reporter, Judith Miller. Chalabi and his colleagues continuously fed the Times bogus information about two developments that were certain to alarm the American people and generate public pressure for war. 
One intelligence stream highlighted Saddam Hussein’s alleged ties to Al Qaeda, including reported clandestine meetings between Iraqi officials and leaders of the terrorist organization. Another stream featured an alleged defector who provided “evidence” that Saddam’s government was vigorously expanding an arsenal of “weapons of mass destruction”—specifically, chemical and biological weapons. Most worrisome were allegations that Baghdad was actively pursuing a nuclear-weapons program and already had made substantial advances. Chalabi made certain that his good friend, Judith Miller, broke that story. The problem, of course, is that both streams of intelligence were totally bogus. Yet Miller would later deny any culpability for building a false case for war. She protested that “relying on the mistakes of others and [making] errors of judgment are not the same as lying.” One critic astutely responded: “It’s almost as though she is saying that if Ahmed Chalabi told her that in Iraq the sun rises in the west, and she duly reported it, that would not be ‘the same as lying.’” However, she should be judged “by her authoring studies for the ‘newspaper of record’ that were questionably sourced and very often misleading.” Despite her post-facto denials, Miller “played a pivotal role in building the public case for an attack on Iraq based upon shoddy reporting . . . including over-reliance on a single source of easy virtue and questionable credibility—Ahmed Chalabi.” One might think that after the inept coverage of the Balkan Wars—and the utter fiasco of the Iraq coverage—journalists would have learned not to place unquestioning trust in foreign factions that profess to be champions of democracy and loyal friends of the United States. Foreign conflicts are almost always complex struggles between murky adversaries with an abundance of ulterior motives. They are not Manichean contests between good and evil. 
Unfortunately, media treatments of the Syrian conflict, especially the pervasive hero-worship of the Kurds, suggest that most journalists have learned little or nothing from their profession's previous cases of malpractice. Ted Galen Carpenter, a senior fellow in security studies at the Cato Institute and a contributing editor at the National Interest, is the author of twelve books and more than 850 articles on international affairs.
  • Three Decades After the Fall of the Berlin Wall    (Tanja Porčnik, 2019-11-09)
    Tanja Porčnik The fall of the Berlin Wall on 9 November 1989 was not only the beginning of the reunification of the Germans, as the people of Berlin brought down a monstrous physical barrier that had cut through their city since 1961; it was also one of several events that, in the months and years to come, would see more than 100 million people turn their backs on communism, in part because of their fortitude in steering their economies out of socialism toward the market, writes Tanja Porčnik, a Senior Fellow of the Fraser Institute specialising in economic and human freedom studies. , Prior to the fall of the Iron Curtain, the former socialist economies — to the east of the now infamous barrier dividing Europe — varied considerably in their degree of openness, the soundness of their institutions, economic growth and the development process. Similarly, these countries opted for different paths of market liberalisation, some of them moving rapidly and with great strides to reform and liberalise their economies, while others undertook only a few gradual transitional steps. Today, thirty years later, the public policies and political institutions of the former socialist economies unsurprisingly do not support economic freedom equally. Notably, however, they support it to a greater extent than they did before the 1990s. , A high pace of economic liberalisation , Providing a quantitative assessment of the degree of market liberalism, the Fraser Institute's Economic Freedom of the World index shows that the highest levels of economic freedom in Eastern Europe, the Caucasus and Central Asia in 2017, the most recent year for which data are available, were in Georgia, Estonia, Lithuania, the Czech Republic and Latvia, while the lowest levels were in Ukraine, Tajikistan, Azerbaijan, Belarus and Moldova. , As the data show, all the former socialist economies in Eastern Europe, the Caucasus and Central Asia have strengthened their market features since the fall of the Iron Curtain. 
This sizeable and widespread transformation reflects the region’s wholehearted embrace of private property, the rule of law, entrepreneurship, free trade, foreign direct investment and globalisation. Indeed, in recent decades economic liberalisation has spread across the former socialist region at a faster pace than in the world as a whole, with the average degree of market liberalism in the former socialist economies increasing from 5.47 in 1995 to 7.20 in 2017, while the average level of economic freedom in the world went from 6.06 in 1995 to 6.59 in 2017. The notable transition of the former socialist economies is also reflected in the fact that while in 1995 none of them ranked in the top economic freedom quartile, in 2017 ten of them (Georgia, Estonia, Lithuania, the Czech Republic, Latvia, Armenia, Romania, Albania, Bulgaria and the Slovak Republic) were in the top quartile. By contrast, in 2017 only two countries (Tajikistan and Ukraine) ranked in the fourth quartile of economic freedom, while in 1995 six (Romania, Albania, Bulgaria, Croatia, Tajikistan and Ukraine) out of fourteen former socialist countries did. , Continue to reform and liberalise , Several of the former socialist economies, such as Georgia and Estonia, have embraced economic freedom to such an extent and with such robustness that they have become well-known success stories of market liberalisation, by way of opening their markets, decreasing barriers to trade, lowering the tax burden, stabilising the monetary system, engaging in deregulation and strengthening the legal system. By contrast, other former socialist economies have been resistant to change, finding it challenging, or being unwilling, to increase their level of economic freedom. As an example, Hungary made the smallest move from socialism toward the markets during the 1995-2017 period; even so, it still increased its economic freedom score by 0.90. 
At a time when nationalism and protectionism are emerging in many countries across the world, adding downward pressure on the global economy, countries in Eastern Europe, the Caucasus and Central Asia should draw on their own experience and continue to reform and liberalise their economies. Doing so will not only have a positive impact on economic growth, foreign direct investment and the wellbeing of their citizens, but will also reduce poverty and economic inequality more successfully than any other economic system. Tanja Porčnik is a Senior Fellow of the Fraser Institute specialising in economic and human freedom studies. She is also an adjunct scholar at the Cato Institute in the United States and the president of the Visio Institute in Slovenia.
  • The Austrian Theory of Efficiency and the Role of Government    (2019-11-09)
    Introduction Orthodox public goods theory and its corollary — the standard economic justification for government intervention — have both been based on particular definitions of efficiency and optimality. According to the orthodox approach, if a market is not operating "efficiently," some sort of government intervention to correct the inefficiency may be warranted. But this view of efficiency is derived directly from a neoclassical view of market structures and in particular from the notion of perfect competition. The point to be emphasized in this paper is that if one starts with a different view of efficiency and market optimality, an entirely different set of conclusions relative to government intervention can be reached. In particular we will examine the approach to economics taken by the Austrian School and detail how that approach is applied to arrive at the Austrian theory of efficiency. In addition, we will examine how Austrians view government interventions into the market and their ultimate conclusions on the role of government in society. The Neoclassical Approach to Efficiency: An Overview1 Before beginning a discussion of the Austrian model, a brief examination of the orthodox, neoclassical perspective is necessary. This examination will help sharpen our understanding of the major differences in both methodology and final policy conclusions that separate the two points of view. There are two cornerstones that provide the foundation for the traditional discussion of efficiency. These are the concepts of Pareto optimality and perfect competition. When viewed in its most basic form, a Pareto optimum represents a static state of affairs within which no possible change can be made that would result in one person being made better off without another person being made worse off. This notion is important to our discussion because it has been adopted by most economists as the state of perfect "efficiency" in economic affairs. 
In other words, to achieve a perfectly efficient market, all economic transactions in society should be such that no one is made better off at the expense of another. In addition, the final equilibrium should represent a situation where no further transactions can be made without violating the Paretian rule. It is at this point that the neoclassical notion of perfect competition comes into play. It can be demonstrated that the equality of marginal cost and price that is inherent in the perfectly competitive model is sufficient to insure Pareto optimality, and, therefore, "efficiency" in the market. When price equals marginal cost, the marginal benefit received by the consumer (reflected by the price) equals the marginal value of the alternate uses of the factors that went into the production of the output (given by marginal cost). Under these circumstances if output were increased, the value to the consumer of the added product would be less than the value given up from other uses. On the other hand, if output were reduced the value lost would be greater than the value to be gained in some alternative use. In both instances one sector is being made better off at the expense of another. Hence this state, where marginal cost equals price or marginal benefit, is Pareto optimal and efficient, and any deviation from this equality is always less efficient. We have now reviewed the standard against which the relative efficiency of a market is measured and, consequently, by which the necessity of government intervention into the market (to correct "inefficiency") is determined. From a neoclassical perspective, market inefficiency is an indication of "market failure" and may call for government intervention to make the market succeed, i.e., be efficient. Certain classic situations exist where, by employing these neoclassical standards, markets inherently fail, and the orthodox view is that intervention is necessary. 
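The neoclassical surplus logic just reviewed can be made concrete with a small numerical sketch. The linear marginal-benefit and marginal-cost curves below are assumed purely for illustration (they do not come from the text): total surplus is maximized exactly at the quantity where marginal benefit equals marginal cost, and any deviation from that quantity reduces it.

```python
# Illustrative sketch (assumed curves): MB(q) = 10 - q, MC(q) = 2 + q.
# Total surplus is the area between the two curves up to quantity q;
# it peaks where MB = MC, i.e., at q* = 4.

def total_surplus(q, mb=lambda x: 10 - x, mc=lambda x: 2 + x, step=0.01):
    """Approximate the integral of (MB - MC) from 0 to q by a Riemann sum."""
    s, x = 0.0, 0.0
    while x < q:
        s += (mb(x) - mc(x)) * step
        x += step
    return s

q_star = 4.0  # MB(4) = 6 = MC(4): the Pareto-optimal output
assert total_surplus(q_star) > total_surplus(q_star - 1)  # underproduction loses surplus
assert total_surplus(q_star) > total_surplus(q_star + 1)  # overproduction loses surplus
```

This is exactly the sense in which a deviation from the price-equals-marginal-cost equality is "less efficient" in the neoclassical account.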
For illustrative purposes I will briefly examine two of these: pure public goods, and the externalities "problem." Definitionally, a pure public good is one in which benefits to additional consumers can be provided without additional costs to the producer. The most commonly given example of a pure public good is national defense. Because the marginal cost of producing additional "defense" is assumed to be zero, price would have to equal zero for the market to work "efficiently" in a neoclassical sense. Since no one in the private market would provide this type of good at the "efficient" price, it is argued that the government's responsibility is to step in and provide such outputs. The second situation is the "problem" of externalities. Here costs and benefits external to both the buyer and the seller are being produced, and these externalities are not being considered when price and quantity are determined. Therefore, the real marginal cost and marginal benefit are not being equated and the result is "market failure." The typical solution suggested here is subsidization, taxation, or direct regulation, in order to insure the efficient price-output combination. The most common example of this problem is pollution, where the costs incurred by a community because of polluted air are not considered by the firm creating the pollution. It should be emphasized at this point that this neoclassical notion of market externalities brings into play the idea of costs and benefits to society as a whole and the expanded concept of social efficiency. This concept is usually presented as being distinct from the efficient actions of individuals within the society. I note this for one reason — in the following discussion on the Austrian theory of efficiency we will see that from its perspective there can be no rational explanation of "efficiency" apart from the individual actors that make up society. 
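The externality "problem" described above can likewise be put in numbers. In the hypothetical sketch below (the curves and the per-unit external cost are assumptions, not from the text), a firm that equates marginal benefit only with its private marginal cost produces more than the quantity that accounts for the full social cost: this wedge is what the orthodox view calls "market failure."

```python
# Toy externality sketch: assumed linear curves and a $2/unit external cost.
# The market settles where MB = private MC; the neoclassical "efficient"
# output is where MB = social MC (private MC plus the external cost).

def q_where_equal(mb, mc, lo=0.0, hi=100.0, eps=1e-6):
    """Bisect for the quantity where MB(q) = MC(q), assuming MB falls and MC rises."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mb(mid) > mc(mid):
            lo = mid
        else:
            hi = mid
    return lo

mb = lambda q: 10 - q            # marginal benefit to buyers
private_mc = lambda q: 2 + q     # cost the firm actually pays
external = 2.0                   # assumed per-unit cost borne by third parties
social_mc = lambda q: private_mc(q) + external

q_market = q_where_equal(mb, private_mc)   # firm ignores the pollution cost
q_social = q_where_equal(mb, social_mc)    # full-cost optimum
assert q_market > q_social                 # the market "overproduces"
```

A Pigouvian tax equal to the external cost would, in the orthodox telling, close the gap between the two quantities; the Austrian critique that follows rejects the premise that such an "efficient" output can be defined apart from individual plans.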
The Methodology of Austrian Economics Individual valuation is the keystone of economic theory.2           M. N. Rothbard The importance of the work of the Austrian School for the history of ideas finds perhaps its most suggestive expression in the fact that here, acting man stands in the center of economic events.3           Ludwig M. Lachmann It is this consistent focus on the actions and subjective valuations of individuals that distinguishes the methodology of the Austrian School from all other approaches to economic theory. This approach, sometimes referred to as "methodological individualism" or "radical subjectivism," stems from the fact that Austrians see economics as a branch of the more general science of human action or praxeology.4 To truly understand the Austrian point of view, it is necessary to understand the concept of human action as the Austrians define it. Simply stated, human action is viewed as "purposeful behavior."5 In other words, it is the application of specific means to achieve desired ends. This concept of human action has been developed, with respect to economics, most thoroughly in the writings of economist Ludwig von Mises, and the notion might best be summed up and clarified in his words: No sensible proposition concerning human action can be asserted without reference to what the acting individuals are aiming at and what they consider as success or failure, as profit or loss.6 Due to the nature of their existence all humans act and all economic activity is based on action. It therefore follows that Austrians see praxeology as the logical foundation for economic science. The question for Austrians then becomes, how does the purposive behavior of all individuals and the means they choose to accomplish those purposes interact in a market economy? 
As one observer put it in explaining the views of Ludwig Lachmann:7 Economic phenomena cannot be explained unless they are related, either directly or indirectly, to subjective states of valuation as manifested either in choice or in expectations about the market.8 This notion of subjective valuation and the purposes and choices of individuals permeates every aspect of Austrian economic analysis. For example, the concept of cost is defined completely in terms of privately perceived foregone opportunities,9 the market rate of interest is the expression of the individual time preferences of the members of society,10 and as we shall see in detail below, efficiency is expressed in light of the success or failure of individual plans.11 The Austrian Theory of Efficiency12 A. Efficiency and the Individual. Consistent with their approach to all economic analysis, Austrians begin their discussion of efficiency by first focusing on the individual. The problem then becomes, what constitutes efficient activity for the individual actors in society? In answering this question the Austrians again turn to the praxeological roots of their analysis. From this they conclude that efficiency must be seen in terms of the purposeful behavior of individuals, and more specifically, whether that behavior is consistent with attaining the purposes and goals that are being sought. To the Austrian economist, then, an efficient course of action would be to apply means that are consistent with attaining the desired goal or program of goals. Inefficiency arises when means are chosen that are inconsistent with the desired goals. It should be made clear that the particular nature of the goals being pursued has no bearing on the analysis. These are taken as given. They are derived from the subjective valuations and preferences of each individual. It is not the ends whose efficiency is under question, but the means used to attain them. 
I point this out because, very often, obtaining something for the smallest available monetary cost or for the smallest possible input of time is considered "efficient." But, if these aspects are considered as part of a program of goals by the individual, they need not be of concern to the economist. For example, suppose a person set out to spend an entire afternoon mowing a lawn that he could possibly finish in an hour. Because it was part of his goal, the fact that he took the extra time could not be seen as inefficient. In fact, if he finished mowing the lawn in an hour in spite of the fact that he had planned to spend the entire afternoon, it then could be said that he acted inefficiently. Assuming he did not change his mind during the process, his methods would be inconsistent with his goals. To the Austrian, this notion of efficiency plays an important part in all economic analysis, for it is the crux of the economic problem facing the individual. The degree to which an individual acts efficiently will determine success and failure in his economic life. (The word "success" is used in its subjective sense; i.e., success stems from the achievement of individually determined goals and not from what any observer sees as successful.) B. Society and Efficiency. With the above analysis of efficiency for the individual in mind, we can now proceed to examine how Austrians view the concept of social efficiency. As with the individual, Austrians see the economic problem facing society to be that of securing efficiency. But, the important point to be made is that Austrians do not see societal efficiency apart from the efficiency of the individuals that comprise it. In other words, they recognize that society cannot have goals apart from those of the individuals within it. This notion might best be expressed in the words of Professor Israel Kirzner: Society is made up of numerous individuals. 
Each individual can be viewed as independently selecting his goal program … and each individual adopts his own course of action to achieve his goals. It is therefore unrealistic to speak of society as a single unit seeking to allocate resources in order to faithfully reflect "its" given hierarchy of goals. Society has no single mind where the goals of different individuals can be ranked on a single scale.13 From this Kirzner goes on to conclude that: Efficiency for a social system means the efficiency with which it permits its individual members to achieve their several goals.14 Given this concept of social efficiency it is easy to understand why Austrians generally agree that a free market is the most efficient system. With its emphasis on voluntary cooperation, the market economy ensures that each individual is allowed to pursue his goals in the most efficient manner available, given his knowledge of the situation. C. Determinants of Efficiency: Knowledge and Coordination. The key to economic efficiency, for both the individual and society, is knowledge. The extent to which an individual acts efficiently will be determined by the amount of knowledge he possesses regarding the appropriate means for attaining his desired ends. A brief example can illustrate this point. Suppose Mr. Jones has established as a goal the purchase of a new car. But, because of extremely limited knowledge, he decides to go to a department store to make his purchase. It is obvious that because of ignorance, he has chosen a very inefficient course of action with respect to his desired goal. Through trial and error his knowledge will improve, and as it improves so will the efficiency of his actions. For example, someone in the department store may tell Mr. Jones that he needs to go to an automobile dealer, thus improving his knowledge of the situation and therefore the efficiency of his subsequent acts. 
Efficiency for the market as a whole is also dependent on individual knowledge of market conditions. In a market economy it is the mutually beneficial nature of voluntary exchange that allows all individuals to simultaneously pursue their goals. The key to the efficient pursuit of goals in society then becomes a question of coordination between buyers and sellers, and the extent to which this coordination exists will reflect the knowledge of opportunities within the market held by its participants. To have efficiency in an economy, there must be more than just the opportunity to exchange, there must be knowledge of these opportunities on the part of buyers and sellers. To illustrate this notion of coordination, let's go back to Mr. Jones' shopping for an automobile. Suppose he has now gained the knowledge it takes to realize that, in order to find a car at an acceptable price, he must go to various automobile dealers and make comparisons. The problem Mr. Jones now faces is this: he is willing to pay a maximum of $4,000 for a car and no dealer he knows of is willing to sell him one for that low a price. The fact is, though, that a dealer on the other side of town is willing to sell a new car for $3,500. Without the two parties knowing about each other there is no coordination of plans, and inefficiency arises in the market. For Austrians, then, it is only when all market participants have perfect knowledge and foresight of the availability of means, that market plans will be perfectly coordinated and "perfect" efficiency will exist. To the Austrian, this notion of perfect knowledge in a market is the distinguishing feature of equilibrium. According to Kirzner: The state of equilibrium is the state in which all actions are perfectly coordinated, each market participant dovetailing his decisions with those which he (with complete accuracy) anticipates other participants will make. 
The perfection of knowledge which defines the state of equilibrium ensures complete coordination of individual plans.15 From this we can conclude that a market in equilibrium is a market working with perfect efficiency. This concept of equilibrium should not be confused with the notion of a perfectly competitive equilibrium and the neoclassical state of "perfect efficiency." The Austrian notion of perfect efficiency and market equilibrium sets no restrictions on market structure, the heterogeneity of products, or the relationship between marginal cost of production and the price of the output. It is simply a situation where "all acts are coordinated," where there are no shortages or surpluses in the market. D. Inefficiency and the Coordinating Process. Now that we have examined the concept of efficiency, we can take a closer look at inefficiencies in a market and the process that occurs to correct them. It should be apparent that a state of perfect efficiency, i.e., perfect knowledge, cannot be achieved completely in an economy. At any given point in time the available information will be scattered throughout the market. Some plans will be uncoordinated, and inefficiencies will arise. But, it is the "natural forces" in the market itself which act to correct for these inefficiencies. It is the market concepts of price and entrepreneurial activity that ensure the diffusion of knowledge and the tendency toward efficient use of resources, i.e., "means," in a market economy. Simply stated, it is the price system that makes available the pertinent information, and the entrepreneur — motivated by potential profits — who takes the information and uses it in a manner that tends to improve efficiency. The price system lets it be known that inefficiencies exist through discrepancies in the price for undifferentiated goods within the market. This is true because, everything else being equal, people will buy at the lowest prices available. 
With perfect knowledge of all prices, the movement toward the lower prices and away from the higher ones would, under conditions of perfect efficiency, result in a uniform market price for the good. Therefore, price discrepancies would represent the existence of imperfect knowledge, i.e., inefficiency in the market. It should be made clear that this uniformity of price under conditions of perfect efficiency holds only for goods that are homogeneous in the mind of the consumer. For goods that are differentiated in the consumer's mind, the price discrepancies may simply reflect the relative values placed on the goods that arise from perceived differences. The point to be emphasized is that, contrary to the implications of the neoclassical model of perfect competition, homogeneous products are not more efficient to society than relatively heterogeneous products. The degree to which products are differentiated in an economy reflects individual desires and preferences and, as stated previously, the Austrian model analyzes the efficiency of the means used and not the ends desired. Under the given conditions, then, when inefficiencies (i.e., price discrepancies) occur, the opportunity for profit will present itself to the alert entrepreneur. As Kirzner puts it: A profit opportunity exists wherever a given resource or a given product can be bought in the market at one price and sold again for a higher price. [Therefore,] a possibility for profit exists wherever there is a price discrepancy.16 It is these opportunities for profits and the entrepreneurial activity they stimulate that tend to promote coordination and therefore efficiency in the market. Our previous example can be used to illustrate this point. As we recall, Mr. Jones is in a position where he is willing to spend $4,000 on a car and no dealer he knows of is willing to sell for that low a price. Let's say that the lowest price he's been offered is $5,000. At the same time a dealer Mr. 
Jones is not aware of is willing to sell the car for $3,500. Now a price discrepancy exists and, along with it, a chance for entrepreneurial profit. Into the picture comes Mr. Smith, a profit-seeking entrepreneur, who's always on the lookout for a "fast buck." Seeing the opportunity for profit, Mr. Smith buys the car at the lower price and sells it to Mr. Jones for $4,000. What Mr. Smith has effectively done is coordinate the plans of Mr. Jones and the dealer selling at the lower price, thus improving efficiency in the market. We can conclude from this that, in a free market, inefficiencies promote their own corrective action. Again in the words of Israel Kirzner: A price discrepancy means a chance to make profits. By definition entrepreneurs seek profits; thus the very situation that symptomizes the need for a correction creates the force capable of inducing such actions. Moreover … the entrepreneurial search for profits implies a search for situations where resources are misallocated.17 One might protest that we have no assurance that entrepreneurs will recognize every inefficiency in the market or correctly perceive the ones that do exist, and this is true. But the fact remains that the market will reward successful entrepreneurs and penalize unsuccessful ones. 
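The arbitrage in the Mr. Jones example is, at bottom, simple arithmetic. A minimal sketch in Python, using the dollar figures given in the text (the function name is my own, added for illustration):

```python
def arbitrage_profit(buy_price: float, sell_price: float) -> float:
    """Entrepreneurial profit from buying at one price and reselling at a higher one."""
    return sell_price - buy_price

# The prices from the example: the dealer Jones does not know about
# asks $3,500, and Jones is willing to pay up to $4,000.
dealer_ask = 3_500
jones_bid = 4_000

# Mr. Smith spots the price discrepancy and coordinates the two plans.
print(arbitrage_profit(dealer_ask, jones_bid))  # 500
```

The $500 profit just is Kirzner's "price discrepancy": as entrepreneurs compete it away, buyer and seller become coordinated and both the discrepancy and the profit opportunity disappear.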
Therefore, "the market process itself … attracts only those most able and competent to direct the future course of the process."18 As Kirzner concludes: "If the best entrepreneurial talent is insufficient to remove all misallocations, even with the inducement of the profit motive, then the remaining misallocations must simply be undetectable."19 (Kirzner uses the term "misallocation" to refer to a situation caused by discoordination of plans and therefore inefficiency in the market.20) The Role of Government From our discussion up till now, it is clear that the neoclassical notion of market failure, discussed in the first section of this paper, cannot be used to justify government intervention in order to correct inefficiencies as defined from an Austrian perspective. Even though a market can never attain perfect efficiency, the corrective forces which arise from the market's own mechanism will make it as efficient as possible. In fact, any notion of market failure from the Austrian perspective would have to arise, not from the free market, but from government interventions that would distort market prices and allocate resources toward ends other than those being pursued by market participants. In his book Market Theory and the Price System, Kirzner makes his conclusions about interference in the market perfectly clear. He states: Interference with the webs and forces that are woven through the market process limits the attempts of participants to coordinate their activities through an engine of remarkable efficiency — the market. 
The analysis of the market process can clarify the costs involved through such interference, making it possible for market participants to decide, through the political process, on the extent to which they are willing to lay aside their engine of efficiency for the sake of special purposes of possibly overriding importance.21 It is clear from the first part of this statement that Kirzner feels government intervention into a market can never be justified on the basis of improving efficiency. This is both consistent with the Austrian view of efficiency and generally accepted by contemporary Austrian economists. The second part of Kirzner's statement implies that there may be a justification for government intervention on other than efficiency grounds; for "special purposes of possibly overriding importance." This leads us into the area of welfare economics and brings in considerations of utility and equity which are beyond the scope of this paper. But it should be noted that many Austrians feel that judgments on these concepts can never be made by society as a whole and can only be made by individuals. This leads to the conclusion that there is no justification for any form of government interference. This view might best be summed up in the words of the noted Austrian economist Murray N. Rothbard: No government interference with exchanges can ever increase social utility … whenever government forces anyone to make an exchange which he would not have made, this person loses in utility as a result of the coercion. But taxation is just such a coerced exchange. … Since some lose by the existence of taxes, therefore, and since all government actions rest on its taxing power, we deduce that: no act of government whatever can increase social utility.22 This may appear to be an extreme position, but it is consistent with the radical subjectivist nature of Austrian methodology. 
The question might now arise as to how the problems in society that have been traditionally taken care of by government would be handled. What about the externalities "problem" and all the "public goods" that governments have traditionally provided? A full explanation of how the free market would take over all of the functions of government would, again, be beyond the scope of this paper. This subject has been covered in quite some detail in a number of volumes.23 But briefly, "public goods" such as roads, education, parks, and, in a Rothbardian system, courts and defense, would be services provided by the market as demand conditions warranted. The fact that these services could not be priced where marginal cost equals marginal benefit would have no bearing on efficiency from an Austrian point of view. It also must be realized that a completely free-market economy implies a clearly defined system of property rights to all resources in society. It is this system of property rights that would act as the general regulator of all social and economic acts. To be more specific, the problem of spillovers and externalities would be nothing more than a problem of property rights violation, and would be handled through the courts just as for any other act of aggression. It should be noted here that most neoclassical economists also view externalities, such as pollution, as a problem of unenforced property rights. 
The crucial difference is that the neoclassicist sees property rights as variable and to be granted, presumably by the state, on the basis of who stands to benefit most or to lose least from the particular rights assignment.24 This is consistent with the neoclassical notion of social efficiency mentioned in the first section of this paper, the logic being that if property rights are assigned to the party with the most to gain or least to lose as a result of the externality, the net benefit to society will be increased, and social efficiency will be improved.25 The Austrian approach is quite different. Along with the objection to interpersonal cost-benefit analysis and social efficiency implied by the subjectivist nature of Austrian methodology,26 there is a major difference in the Austrian view of property rights in general. It should be clear that in order to pursue goals and make plans it is necessary to have a system of property rights that is clearly defined and that each individual can count on into his foreseeable future. Any involuntary alteration of a given property rights structure will necessarily interfere with plans being made by some owners of property with respect to the pursuit of their goals. Because of this, Austrians take the particular property rights system as given and examine the efficiency of actions within the confines of the rights arrangement. As one Austrian economist has put it: A property rights system lays down the rules, it defines the freedoms and restrictions according to which we evaluate alternatives and make choices, but as such it is conceptually distinct from alternatives among which we choose.27 On what basis, then, do Austrians believe property rights should be assigned? The answer to this might best be expressed by Prof. Rothbard. He states that: We cannot decide on … rights or liabilities on the basis of efficiencies or minimizing of costs. But if not costs or efficiency, then what? 
The answer is that only ethical principles can serve as criteria for our decisions. Efficiency can never serve as the basis for ethics; on the contrary, ethics must be the guide and touchstone for any consideration of efficiency.28 In other words, it is felt that the choice of a particular property rights structure is beyond the realm of economic science, and has no place in positive discussions of efficiency. Dr. Rothbard goes on to conclude that: Economists will have to get used to the idea that not all of life can be encompassed by our own discipline. A painful lesson no doubt, but compensated by the knowledge that it may be good for our souls to realize our own limits and, just perhaps, to learn about ethics and about justice.29 Concluding Remarks This paper has brought to light the fact that there is more than one approach to the concept of efficiency in the economic literature. Furthermore, depending on which theory of efficiency is adopted, one can arrive at far different conclusions concerning the role of the state both in the economy and in society in general. It should be apparent that all methodologies within economics deserve full consideration by scholars and analysts. It is only after the alternatives have been considered that intelligent decisions can be made concerning the role economics should play in policy analysis. 1. The discussion in this section has been generalized entirely from H. T. Kolin, Microeconomic Analysis, Welfare and Efficiency in Private and Public Sectors (New York: Harper and Row, 1971), pp. 10–14, 25–60. 2. Murray N. Rothbard, "Toward a Reconstruction of Utility and Welfare Economics," Occasional Papers Series, #3 (New York: Center for Libertarian Studies, 1977), p. 1. 3. Lawrence H. White, "Methodology of the Austrian School," Occasional Papers Series, #1 (New York: Center for Libertarian Studies, 1977), p. 1. 4. Ibid., p. 9. 5. Rothbard, Man, Economy, and State (Los Angeles: Nash Publishing, 1972), p. 1. 6. 
Ludwig von Mises, The Ultimate Foundations of Economic Science, with a Foreword by Israel Kirzner, 2nd ed. (Kansas City: Sheed, Andrews, and McMeel, 1978), p. 80. 7. Ludwig M. Lachmann, along with F. A. Hayek, is one of the elder statesmen of currently active Austrian economists. He is presently Visiting Professor at New York University. 8. Walter E. Grinder, "In Pursuit of the Subjective Paradigm," Introduction to Ludwig M. Lachmann's Capital, Expectations and the Market Process: Essays on the Theory of the Market Process (Kansas City: Sheed, Andrews, and McMeel, 1977), p. 3. 9. Israel Kirzner, Market Theory and the Price System (Princeton, N.J.: D. Van Nostrand Co., 1963), p. 184. 10. Grinder, "In Pursuit of the Subjective Paradigm," p. 4. 11. Kirzner, Market Theory, pp. 34, 35. 12. The major points in this section (A–D) have been extrapolated from Kirzner's Market Theory, pp. 33–44, 297–310; and also his Competition and Entrepreneurship (Chicago and London: University of Chicago Press, 1973), pp. 13–17, 212–31. All examples used are my own. 13. Kirzner, Market Theory, p. 35. 14. Ibid. 15. Kirzner, Competition, p. 218. 16. Kirzner, Market Theory, pp. 302–303. 17. Ibid., p. 303. 18. Ibid., p. 304. 19. Ibid. 20. Ibid., p. 301. 21. Ibid., p. 309. 22. Rothbard, "Toward a Reconstruction," p. 29. 23. See Rothbard, For New Liberty (New York: Collier MacMillan, 1978); David Friedman, Machinery of Freedom: Guide to a Radical Capitalism (New Rochelle, N.Y.: Arlington House, 1973); and Jarret B. Wollstein, "Public Services Under Laissez-Faire," (publisher not given). 24. Harold Demsetz, "Ethics and Efficiency in Property Rights Systems," in Mario J. Rizzo, ed., Time, Uncertainty, and Disequilibrium: Exploration of Austrian Themes (Lexington, Mass.: Lexington Books, D.C. Heath and Co., 1979), pp. 102–104. 25. Ibid., p. 101. 26. John B. Egger, "Comment: Efficiency is Not a Substitute for Ethics," in Rizzo, Time, Uncertainty, p. 121. 27. Ibid., p. 121. 28. 
Rothbard, "Comment: The Myth of Efficiency," in Rizzo, Time, Uncertainty, p. 95. 29. Ibid.
  • The Cost of Government Is Rising Much Faster than Housing and Healthcare    (2019-11-09)
    This year, as it has for several consecutive years now, the Tax Foundation reported that “Americans will collectively spend more on taxes in 2019 than they will on food, clothing, and housing combined.” That the combined federal, state and local tax bill for the American populace is larger than the amount we spend on essential costs of living like food, clothing and housing certainly is an attention-grabbing snapshot of the federal government leviathan. But what about the trendline? In other words, is the cost of funding government growing at a faster clip than other costs of living? Indeed, for decades now, politicians have promised to make items like housing and healthcare more “affordable,” but such efforts seem to have made those problems worse. What about making government more “affordable”? Unsurprisingly, as rapidly as the cost of living has been rising over the last quarter century, particularly in housing and healthcare, the cost of the federal government has been rising significantly faster. We can examine the cost of the federal government by tracking per capita federal government tax receipts and spending from 1993 to 2018.1 Tax receipts serve as a reasonable — albeit incomplete — proxy for the direct cost to citizens of financing the federal government. Moreover, federal outlays — including deficit spending — are a reflection of the costs imposed on citizens and the economy because they represent the amount of scarce resources being controlled by the government instead of private citizens. Both measures are important components in evaluating what Murray Rothbard described as “the parasitic burden of government taxes and spending upon the productive activities of the private sector.” As shown in the graph below, nominal per capita federal receipts ballooned by 122 percent between 1993 and 2018, while nominal per capita federal government outlays climbed by an even more dramatic 132 percent. 
Compare this growth rate to the cumulative growth rate of the Consumer Price Index (CPI) – a measure intended to reflect the overall cost of living for the average citizen – of 73 percent during that time.2 In other words, the cost of the federal government to citizens grew at a rate two-thirds faster than the overall price index. The Federal Reserve has aided in an alarming increase in the overall cost of living, but that pales in comparison to the growth rate of the costs imposed by the federal government over the last two-and-a-half decades. We can also evaluate the federal government’s growth rate against the rising cost of common items like housing and utilities, clothing, groceries — and yes, even healthcare. Think your dollar doesn’t go as far at the grocery store as it did in the 1990s? Of course it doesn’t. But the price index for groceries rose at a rate only half of the rate of growth of federal government tax receipts since 1993, and less than half the rate of federal government outlays. Here we see that, despite federal policies to inflate and then re-inflate housing bubbles, the cost of housing & utilities rose by 97 percent since 1993. A concerning rate to be sure, but well short of the federal government’s growth rates. Healthcare costs, the focus of so much political consternation, including the significant overhaul in 2010 known as Obamacare, grew by 83 percent between 1993 and 2018 — a rate roughly 40 and 50 percentage points lower than the growth of per capita federal government tax receipts and outlays, respectively.3 It’s interesting to note that prominent Democratic presidential candidates like Bernie Sanders and Elizabeth Warren decry the rising cost of healthcare and housing as virtually criminal but fail to acknowledge that the cost of the federal government has been rising at a far higher pace. 
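The growth-rate comparisons above can be reproduced from index numbers. A rough sketch, assuming each series is indexed to 100 in 1993 and applying the cumulative percentages reported in the article (the index values are illustrative, not the underlying dollar amounts):

```python
def cumulative_growth_pct(start: float, end: float) -> float:
    """Cumulative percentage growth between two index values."""
    return (end / start - 1) * 100

# Index values implied by the article's reported 1993-2018 growth rates.
base = 100.0
receipts_2018 = base * (1 + 1.22)  # per capita federal receipts, +122 percent
outlays_2018 = base * (1 + 1.32)   # per capita federal outlays, +132 percent
cpi_2018 = base * (1 + 0.73)       # CPI, +73 percent

receipts_growth = cumulative_growth_pct(base, receipts_2018)
cpi_growth = cumulative_growth_pct(base, cpi_2018)

# "Two-thirds faster": the ratio of the two growth rates, 122 / 73.
print(round(receipts_growth / cpi_growth, 2))  # 1.67
```

The ratio of roughly 1.67 is where the "two-thirds faster" claim in the text comes from.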
Moreover, it is telling that the cost of clothing/footwear, by far the least regulated of the items listed, has actually fallen by 14 percent since 1993. Many people lament the rising costs of living over time, especially of items like housing and health care, and they are right to do so. But so many simply take for granted the largesse of the federal government, and fail to recognize the dramatically increasing fiscal burden being imposed by Uncle Sam. 1. Total federal outlays taken from the Tax Policy Center chart, accessed here: https://www.taxpolicycenter.org/statistics/federal-receipt-and-outlay-summary Total nominal annual federal tax receipts obtained from the St. Louis Fed data series, accessed here: https://fred.stlouisfed.org/series/W006RC1A027NBEA#0 U.S. population figures used to calculate per capita amounts taken from US Census Bureau data aggregated here: https://www.multpl.com/united-states-population/table/by-year 2. CPI measure taken from Jan. of each year, within each given fiscal year. Data accessed online June 6, 2019 at https://inflationdata.com/Inflation/Consumer_Price_Index/HistoricalCPI.aspx?reloaded=true 3. Price indexes for specific goods accessed from: Federal Reserve of St. Louis, Table 2.3.4. Price Indexes for Personal Consumption Expenditures by Type of Product. Available at https://fred.stlouisfed.org/release/tables?rid=53&eid=43831&od=1988-01-01#
  • Why Government Should Not Fight Deflation    (2019-11-09)
    For most experts, deflation is considered bad news since it generates expectations of a decline in prices. As a result, they believe, consumers are likely to postpone their buying of goods at present since they expect to buy these goods at lower prices in the future. This weakens the overall flow of spending and in turn weakens the economy. Hence, such commentators hold that policies that counter deflation will also counter the slump. Will Reversing Deflation Prevent a Slump? If deflation leads to an economic slump, then policies that reverse deflation should be good for the economy. Or so it is held. Reversing deflation will simply involve introducing policies that support general increases in the prices of goods, i.e., price inflation. With this way of thinking, inflation could actually be an agent of economic growth. According to most experts, a little bit of inflation can actually be a good thing. Mainstream economists believe that inflation of 2 percent is not harmful to economic growth, but that inflation of 10 percent could be bad for the economy. There’s good reason to believe, however, that at a rate of inflation of 10 percent, it is likely that consumers are going to form rising inflation expectations. According to popular thinking, in response to a high rate of inflation, consumers will speed up their expenditures on goods at present, which should boost economic growth. So why then is a rate of inflation of 10 percent or higher regarded by experts as a bad thing? Clearly there is a problem with the popular way of thinking. Price Inflation vs. Money-Supply Inflation Inflation is not about general increases in prices as such, but about the increase in money supply. As a rule the increase in money supply sets in motion general increases in prices. This, however, need not always be the case. The price of a good is the amount of money asked per unit of it. For a constant amount of money and an expanding quantity of goods, prices will actually fall. 
Prices will also fall when the rate of increase in the supply of goods exceeds the rate of increase in the money supply. For instance, if the money supply increases by 5 percent and the quantity of goods increases by 10 percent, prices will fall by 5 percent. A fall in prices, however, cannot conceal the fact that we have inflation of 5 percent here on account of the increase in the money supply. The reason why inflation is bad news is not because of increases in prices as such, but because of the damage inflation inflicts on the wealth-formation process. Here is why. The chief role of money is as the medium of exchange. Money enables us to exchange something we have for something we want. Before an exchange can take place, an individual must have something useful that he can exchange for money. Once he secures the money, he can then exchange it for the goods he wants. But now consider a situation in which the money is created "out of thin air," increasing the money supply. This new money is no different from counterfeit money. The counterfeiter exchanges the printed money for goods without producing anything useful. He in fact exchanges nothing for something. He takes from the pool of real goods without making any contribution to the pool. Note that as a result of the increase in the money supply what we have here is more money per unit of goods, and thus, higher prices. What matters however is not that prices rise, but the increase in the money supply that sets in motion the exchange of nothing for something, or "the counterfeit effect." The exchange of nothing for something, as we have seen, weakens the process of real wealth formation. Therefore, anything that promotes increases in the money supply can only make things much worse. Why Falling Prices Are Good Changes in prices are just a symptom, as it were — and not the primary causative factor — of a falling growth momentum.  
Thus attempts to reverse price deflation by means of a loose monetary policy (i.e., by creating inflation) are bad news for the process of wealth generation, and hence for the economy. On the other hand, in order to maintain their lives and well-being, individuals must buy goods and services in the present. So from this perspective a fall in prices cannot be bad for the economy. Furthermore, if a fall in the growth momentum of prices emerges on the back of the collapse of bubble activities in response to a softer monetary growth, then this should be seen as good news. The fewer non-productive bubble activities we have, the better it is for the wealth generators, and hence for the overall pool of real wealth. Likewise, if a fall in the growth momentum of the CPI emerges on account of the expansion in real wealth for a given stock of money, this is obviously great news since many more people could now benefit from the expanding pool of real wealth. We can thus conclude that contrary to the popular view, a fall in the growth momentum of prices is always good news for the wealth generating process and hence for the economy.
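The money-and-goods arithmetic used in this article can be made explicit. A small sketch assuming the simplest quantity-theory relation, with the price level proportional to the money supply divided by the quantity of goods; under this strict ratio the fall is about 4.5 percent, which the article rounds to 5 percent by taking the simple difference of the two growth rates:

```python
def price_level_change_pct(money_growth: float, goods_growth: float) -> float:
    """Percentage change in the price level when P is proportional to M / Q."""
    return ((1 + money_growth) / (1 + goods_growth) - 1) * 100

# The article's example: money supply up 5 percent, goods up 10 percent.
print(round(price_level_change_pct(0.05, 0.10), 1))  # -4.5
```

Either way the sign is the point: prices fall even though, on the Austrian definition, there is still 5 percent inflation of the money supply.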
  • The Berlin Wall Reminds Us of What Happens After We "Smash Capitalism"    (2019-11-08)
    This week marks the thirtieth anniversary of the fall of the Berlin Wall. Decades later, the wall remains a symbol of the violence employed by socialist states, and a reminder that the egalitarian workers' paradise of East Germany was so hated by its residents that the state had to build a wall to keep residents in. It is ironic, then, that only a generation later, Americans are becoming increasingly enamored with socialism. According to a recent Gallup poll, 43 percent of Americans say socialism is a "good thing." It's unclear how many of those respondents can actually define socialism. Some believe socialism to simply be policies that promote equality. Others define it using the more historically orthodox view: government ownership of the means of production. There is no doubt, however, that a vocal and not-insignificant minority — of the sort represented by Jacobin magazine, for example — advocates for the total destruction of capitalism. When American democratic socialists who want to "smash capitalism" say they like "socialism," of course, they are likely to add that they don't want the sort of socialism they had in East Germany. They want kindly, happy, well-lit socialism. Not the gray, dour, socialism of the Eastern Bloc. I have no doubt this is indeed what they want, although that's what the founders of East Germany and the Soviet Bloc thought they would get too. Many of them no doubt truly believed they were leading the way to a kinder, gentler, more equal society. After all, up until the 1980s, the socialists of the Eastern Bloc were still entertaining the idea that they could deliver a higher standard of living to ordinary people than could the "decadent" economies of the West. In 1959, of course, Richard Nixon and Nikita Khrushchev literally debated whether the West or the Communist world could deliver the best kitchen appliances to the general public. Obviously, the West won that debate, although many Western socialists failed to get the memo. 
Right up until the end (of the Soviet Bloc), the highly influential American economist Paul Samuelson maintained that communist economies worked perfectly well. As David Henderson noted in 2009: Samuelson had an amazingly tin ear about communism. As early as the 1960s, economist G. Warren Nutter at the University of Virginia had done empirical work showing that the much-vaunted economic growth in the Soviet Union was a myth. Samuelson did not pay attention. In the 1989 edition of his textbook, Samuelson and William Nordhaus wrote, "the Soviet economy is proof that, contrary to what many skeptics had earlier believed, a socialist command economy can function and even thrive." As it turned out, the socialist economies — designed to deliver an easier life to consumers and workers — were really vehicles of impoverishment, not to mention environmental degradation. A Lasting Legacy of Poverty To this day — thirty years after re-unification — the standard of living is lower in the parts of Germany that were once part of East Germany. In 2014, for example, the Washington Post reported that the former East Germany has lower levels of disposable income, higher unemployment rates, and is generally less prosperous. This in turn has led to the old East Germany having fewer young people, many of whom move west for better jobs. Fortune's Chris Matthews went on to observe "If you look at statistics such as per capita income or worker productivity, they also point to the large disparity in economic development between east and west." And Claudia Bracholdt further notes: "Today, Germany’s east has many structural problems similar to those of countries like Greece and Spain, though on a much smaller scale." During the Cold War, numerous opponents of Communism pointed to Germany as the perfect example of how Soviet-style communism destroyed economic prosperity. But that was then. 
Nowadays, the East German regime is gone, and Germany is, relatively speaking, one of the most market-oriented economies on earth. Eastern Germany shares a government with western Germany. So, why is eastern Germany still poor compared to its western German neighbors? The answer lies in the fact that even though the legal and political systems in eastern Germany are the same as in the West, the East suffers from the fact that it lost out on decades of capital accumulation and growth in worker productivity while under the boot of the Soviets. The German case offers an especially clean comparison, of course, because prior to World War II, western and eastern Germans enjoyed similar political systems for many decades. Moreover, the western and eastern Germans were similar both ethnically and culturally. Thus, the comparison allows us to focus on regime differences in the age of the Cold War. We can look beyond just the East Germans as well. We might ask ourselves, for example, why Poland, with its Western orientation and long tradition of parliamentary and decentralized governments, remains so relatively poor. The same might be said of the Czech Republic as well, where the principal city, Prague, was once the second city of the Austrian Empire and was a center of European wealth and culture. The Czechs, too, have never regained their relative place in terms of European wealth. Part of the explanation lies in the fact that the legacy of an abandoned political system can live on for decades even after regime change. As Nicolás Cachanosky has observed in the context of South American regimes: Institutional changes ... define the long-run destiny of a country, not its short-run prosperity. ... For example, as China opened parts of its economy to international markets, the country started to grow, and we are now seeing the effects of decades of relative economic liberalization. 
It is true that many areas in China continue to lack significant freedoms, but it would be a much different China today had it refused to change its institutions decades ago. Clearly, the fact that the old Eastern Bloc countries have moved toward liberalization has set those countries on a path toward greater economic prosperity. That by itself, however, cannot put them on a par with countries that never suffered the effects of decades of communism. Smash Capitalism: And Replace it With What? The experience of the Eastern Bloc should serve to inoculate us against the idea that a market-based system can be replaced wholesale, and that a decent standard of living can still be achieved. It is one thing to advocate for a five-percent increase in government spending on the pension system. It's another to advocate for the nationalization of the banking sector or — even worse — expropriating every major industry. Yet, the smash-capitalism crowd thinks they want the latter. But the US isn't as far from the socialist end of the spectrum as many think. After all, the United States is itself already far down the road of the typical Western welfare state. Contrary to the persistent myth that the United States is some sort of laissez-faire free-for-all, the US welfare state in terms of social spending is already comparable to that of Canada, Australia, the Netherlands, and Switzerland. If the Netherlands is "socialist," then so is the United States. Yet we're being told the US needs to just move a little more to the left to be like its European "peers." Except the US is already there. So how much further must it be moved in the direction of even more government control of its economy? The socialists give no answer beyond "we'll let you know when we get there." But it is not necessary to completely destroy capitalism to ensure a less prosperous future. That is, we need not become a clone of East Germany to share at least a portion of its fate. 
Suffice it to say, the further a regime moves in the direction of the "egalitarian" states of the old communist world, the worse the impoverishment will be.
  • FDR and the Collectivist Wave    (2019-11-08)
    In granting official diplomatic recognition to the Soviet Union in November 1933, Franklin Roosevelt was "unintentionally," of course, returning to the traditions of American foreign policy. From the early days of the Republic, throughout the 19th century and into the 20th — in the days, that is, of the doctrine of neutrality and nonintervention — the US government did not concern itself with the morality, or, often, rank immorality, of foreign states. That a regime was in effective control of a country was sufficient grounds for acknowledging it to be, in fact, the government of that country. Woodrow Wilson broke with this tradition in 1913, when he refused to recognize the Mexican government of Victoriano Huerta, and again a few years later, in the case of Costa Rica. Now "moral standards," as understood in Washington, DC — the new, self-anointed Vatican of international morality — would determine which foreign governments the United States deigned to have dealings with and which not. When the Bolsheviks seized power in Russia, Wilson applied his self-concocted criterion, and refused recognition. Henry L. Stimson, Hoover's secretary of state, applied the same doctrine when the Japanese occupied Manchuria, in northern China, and established a subservient regime in what they called Manchukuo. It was a method of signaling disapproval of Japanese expansionism, though there was no doubt that the Japanese soon came into effective control of the area, which had been more or less under the sway of competing warlords before. In later years, Roosevelt would adopt the Stimson doctrine of nonrecognition and even make Stimson his secretary of war. But in 1933 all moral criteria were thrown overboard. The United States, the last holdout among the major powers, gave in, and Roosevelt began negotiations to welcome the model killer state of the century into the community of nations. 
Recognizing Soviet Russia

To the Soviet negotiator, Foreign Minister Maxim Litvinov, FDR presented his two chief concerns. One had to do with the activities of the Comintern. This worldwide organization is often ignored or slighted in accounts of the interwar years, but the fact is that the history of the period from 1918 to the Second World War cannot be understood without a knowledge of its purpose and methods. With his seizure of power in Russia, Lenin turned immediately to his real goal, world revolution. He invited members of all the old socialist parties to join a new grouping, the Communist International, or Comintern. Many did, and new parties were formed — the Communist Party of France (CPF), the Communist Party of China (CPC), the Communist Party of the United States (CPUSA), and so on, all under the control of the mother party in Moscow (CPSU). The openly proclaimed aim of the Comintern was the overthrow of all "capitalist" governments and the establishment of a universal state under Red auspices. Hypocrisy was not one of Lenin's many vices: the founding documents of the Comintern explicitly declared that the member parties and movements were to use whatever means — legal or illegal, peaceful or violent — that might be appropriate to their situations at any given time. This was the stark specter facing the non-Communist nations in the decades before World War II: a power covering one-sixth of the earth's surface had at its command a global movement that was fighting to wrest control of organized labor everywhere, fomenting revolutions in the colonial regions, vying for the allegiance of the western intelligentsia, and planting spies wherever it could — all with the goal of bringing the blessings of Bolshevism to all of the world's peoples. The first commitment FDR asked of Litvinov was that the Comintern should cease subversion and agitation within the United States. This the Soviet minister readily agreed to. 
When, less than two years later, Washington complained that Russia was not living up to its agreement, Litvinov, in true Leninist fashion, denied that any such pledge had been given. The second major point brought up in the negotiations involved freedom of religion in Soviet Russia. Ever the politician, Roosevelt was worried about Catholic hostility to the Red regime, a hostility based on the murder of thousands of priests, the wholesale destruction of churches, and the ongoing crusade to stamp out all religious faith. In discussing the issue with Litvinov, FDR caused the foreign minister acute embarrassment. He brought up Litvinov's parents, who, Franklin supposed, had been pious, observant Jews. They must have taught little Maxim to say his Hebrew prayers, the president averred, and deep down Litvinov could not be the atheist he, as a good Communist, claimed to be. Religion was very important to the American people, and many would oppose recognition unless the regime ceased its persecutions. "That's all I ask, Max — to have Russia recognize freedom of religion." It was Franklin at his most fatuous. In the end, Roosevelt got Litvinov to concede that Americans in the Soviet Union would have religious freedom, which was never in doubt anyway, and palmed this off as a major Communist concession. FDR had won the public-relations contest once again. When Ukrainian-Americans tried to hold protest rallies in New York and Chicago, they were broken up by Communist goons. Roosevelt's strange bias toward the Stalinist regime continued to the end of his life. The massive documentation accumulating in the hands of the State Department on the real events in Russia was never made public, although it could have affected the great debate going on, in the United States and throughout the world, on the relative merits of communism and capitalism. 
Nor did FDR's State Department ever issue any complaints on Soviet crimes, not on the terror famine, not on the Gulag, not on the purge trials, not on the never-ending executions, including the Katyn massacre of Polish POWs. Yet before the United States entered the war, Secretary of State Cordell Hull frequently called the German envoy on the carpet for the Nazi persecution of the Jews. The grotesque double standard in judging Communist and Nazi atrocities, which Joseph Sobran keeps pointing out and which continues to this day, originated with the administration of Franklin Roosevelt.

The Collectivist Wave

There was a peculiar affinity between Roosevelt's New Deal and the European dictatorships that on occasion extended even to fascism and national socialism (the correct term, incidentally, for which "Nazism" is a nickname). Early on, FDR referred to Benito Mussolini as "the admirable Italian gentleman," stating to his ambassador in Rome, "I am much interested and deeply impressed by what he has accomplished" (though Franklin's praise of the founder of fascism stopped far short of Winston Churchill's gushing admiration of Il Duce at this time). Mussolini, in turn, was flattered by what he saw as the New Deal's aping of his own corporate state, in the NRA and other early measures. When Roosevelt "torpedoed" the London Economic Conference of June 1933, Reichsbank President Hjalmar Schacht smugly told the official Nazi newspaper Völkischer Beobachter that the American leader had adopted the economic philosophy of Hitler and Mussolini. Even Hitler had kind words at first for Roosevelt's "dynamic" leadership, stating that "I have sympathy with President Roosevelt because he marches straight to his objective over Congress, over lobbies, over stubborn bureaucracies." What linked the New Deal to the regimes in Italy and Germany, as well as in Soviet Russia, was their fellowship in the wave of collectivism that was sweeping the world. 
In an essay published in 1933, John Maynard Keynes observed this trend and expressed his sympathy with the "variety of politico-economic experiments" under way in the continental dictatorships as well as in the United States. All of them, he gloated, were turning their backs on the old, discredited laissez-faire and embracing national planning in one form or another. It goes without saying that the New Deal was a much milder form of the collectivist plague. (Italian fascism, too, never remotely matched the brutality and oppression of Nazi Germany and Communist Russia.) It is a matter of family resemblances. All of these systems tilted the balance sharply towards the state and away from society. In all of them, government gained power at the expense of the people, with the leaders seeking to impose a philosophy of life that subordinated the individual to the needs of the community — as defined by the state. The inner affinities of the New Deal with the continental dictatorships are well illustrated by a program that was one of FDR's favorites.

The Civilian Conservation Corps

One of the first measures passed during FDR's first hundred days was the act establishing the Civilian Conservation Corps (CCC). Young men were enrolled as amateur forest rangers, marsh drainers, and the like, on projects designed to improve the countryside. The recruits were given room and board, clothing, and a dollar a day. More than two and a half million of them passed through the camps of the Civilian Conservation Corps, until the program was abolished in 1942, when the men were needed for the draft. In 1973, John A. Garraty published an important article on the CCC in the American Historical Review. Garraty was Gouverneur Morris Professor of American history at Columbia and later general editor of the American National Biography, a distinguished historian, and a pillar of the historical establishment. 
By no stretch of the imagination could he be considered one of the wretched band of Roosevelt haters. Yet, while a warm admirer of FDR, Garraty was compelled to note the striking similarities between the CCC and parallel programs set up by the Nazis for German youth. Both were essentially designed to keep young men out of the labor market. Roosevelt described work camps as a means for getting youth "off the city street corners," Hitler as a way of keeping them from "rotting helplessly in the streets." In both countries much was made of the beneficial social results of mixing thousands of young people from different walks of life in the camps. … Furthermore, both were organized on semimilitary lines with the subsidiary purposes of improving the physical fitness of potential soldiers and stimulating public commitment to national service in an emergency. Garraty listed many other similarities between the New Deal and National Socialism. Like Roosevelt, Hitler prided himself on being a "pragmatist" in economic affairs, trying out one panacea after another. Through a multitude of new agencies and mountains of new regulations, both in Germany and America, owners and managers of enterprises found their freedom to make decisions sharply curtailed. The Nazis encouraged working-class mobility through vocational training, the democratizing youth camps, and a myriad of youth organizations. They usually favored workers as against employers in industrial disputes and, in another parallel to the New Deal, supported higher agricultural prices. Both FDR and Hitler "tended to romanticize rural life and the virtues of an agricultural existence" and harbored dreams of the rural resettlement of urban populations, which proved disappointing. 
Characteristically for the collectivist movements of the time, "enormous propaganda campaigns" were mounted in the United States, Germany, and Italy (as well, of course, as in Russia) to fire up enthusiasm for the government's programs. It is no wonder, then, as Professor Garraty writes, that "during the first years of the New Deal the German press praised him [Roosevelt] and the New Deal to the skies. … Early New Deal policies seemed to the Nazis essentially like their own and the role of Roosevelt not very different from the Führer's." America under FDR did not, of course, follow Germany and Russia on that fateful road to the bitter end. The main reason for this lies, as scholars such as Seymour Martin Lipset and Aaron L. Friedberg have recently written, in our deeply rooted individualist and antistatist tradition, dating back to colonial and Revolutionary times and never extinguished. Try as he might, Franklin Roosevelt could bend the American system only so far. This article is excerpted from "FDR — The Man, the Leader, the Legacy," The Future of Freedom Foundation, 1998–2001.
  • Locke vs. Cohen vs. Rothbard on Homesteading    (2019-11-08)
    Last week in my article The Power of Self-Ownership, I discussed how uncomfortable self-ownership made the great Marxist political philosopher G.A. Cohen. Cohen saw that self-ownership leads to libertarianism; although he found self-ownership plausible, he rejected libertarianism. To save his socialism, he gave up self-ownership, but his reasons for doing so are weak. Even if self-ownership survives Cohen's half-hearted assault, though, the free market is not yet out of the woods. Cohen has another argument against libertarians, this one directed at Lockean theories of property acquisition. According to the Lockean theory, individual self-owners may, by mixing their labor with unowned land and other natural resources, come to acquire them. (Some people don’t like the phrase “mixing your labor,” but Lockean accounts don’t depend on accepting it. The important notion is that you have to occupy unowned land, or do something to it, in order to acquire it.) Cohen maintains that this theory fails just by itself to support property rights in land. It is, as it stands, incomplete. For the justification of property rights to be successful, an additional premise is needed. The premise in question is that land is initially unowned. If everyone starts off with rights to an equal share of the earth's surface and resources, the Lockean theory has nothing on which to operate. We may grant Cohen his point, but it avails him nothing. Why should we assume that people begin with property rights of the kind he wants? He gives no argument that they do; and the assumption that property is at the start unowned is a reasonable one. Murray Rothbard with characteristic insight dissected the equal-shares position: If every man has the right to own his own person and therefore his own labor, and if by extension he owns whatever property he has “created” or gathered out of the previously unused, unowned state of nature, then who has the right to own or control the earth itself? 
In short, if the gatherer has the right to own the acorns or berries he picks, or the farmer his crop of wheat, who has the right to own the land on which these activities have taken place? Again, the justification for the ownership of ground land is the same for that of any other property. For no man actually ever “creates” matter: what he does is to take nature-given matter and transform it by means of his ideas and labor energy. But this is precisely what the pioneer — the homesteader — does when he clears and uses previously unused virgin land and brings it into his private ownership. The homesteader — just as the sculptor, or miner — has transformed the “nature-given” soil by his labor and his personality. The homesteader is just as much a “producer” as the others, and therefore just as legitimately the owner of his property. As in the case of the sculptor, it is difficult to see the morality of some other group expropriating the product and labor of the homesteader. (And, as in the other cases, the “world communist” solution boils down in practice to a ruling group.) Furthermore, the land communalists, who claim that the entire world population really owns the land in common, run up against the natural fact that before the homesteader, no one really used and controlled, and hence owned the land. The pioneer, or homesteader, is the man who first brings the valueless unused natural objects into production and use. (Ethics of Liberty, p. 49) Cohen, of course, dissents. But what happens if we grant him his assumption of an equal initial division of the earth's surface? The upshot, as our author recognizes, would not be socialism but a variety of libertarianism. Since the people with the initial endowments are by hypothesis self-owners, they would be free to carry on whatever “capitalist acts between consenting adults” they wished. 
Hillel Steiner, a British political philosopher much esteemed by Cohen, has devised a libertarian system of precisely this kind; and Cohen says nothing against it. Cohen has another objection to Lockean property acquisition. Robert Nozick, for Cohen the main libertarian, included an undemanding version of the “Lockean proviso” in his account of property acquisition. As Nozick saw matters, if you acquire property, you can’t make others “worse off,” but it is easy to meet this requirement. Cohen objects that Nozick’s proviso would allow a single person to control all the property in a society. He may do so provided everyone else is slightly better off than he would have been in a society without any private property. Cohen’s student, the philosopher Michael Otsuka, explains Cohen’s objection: “Nozick’s version of the Lockean proviso is too weak, since it allows a single individual in a state of nature to engage in an enriching acquisition of all the land there is if she compensates all others by hiring them and paying them a wage that ensures they end up no worse off than they would have been if they had continued to live the meager hand-to-mouth existence of hunters and gatherers on non-private land.” This objection rests on a complete misunderstanding of how libertarians believe that property is initially acquired. Cohen reduces the libertarian principle of initial acquisition to the proviso. In point of fact, the proviso is only a modification of the principle. You cannot acquire vast amounts of property just by your say-so, if you follow the principle; you must combine your labor in the appropriate way with unowned land in order to acquire it. If this is taken into account, it seems next-to-impossible that the nightmare Cohen has conjured up could in practice arise. 
Cohen eliminates the limits on property acquisition contained in the libertarian principle; and, having done so, triumphantly proclaims that libertarians recognize practically no limits to property acquisition. If Cohen had studied Murray Rothbard, he wouldn’t have fallen into his mistake. Rothbard doesn’t include the proviso at all in his system. Why is it necessary? It is just a source of trouble.
  • More Dictator Than God: Kim Jong-Un's Cult of Personality Is Going Strong    (Doug Bandow, 2019-11-08)
    Doug Bandow Key point: North Koreans have faced a severe level of psychological indoctrination. North Korea without doubt is unique. If nothing else, its claimed accomplishments rival the faux Russian achievements cited by Pavel Chekov in the original Star Trek. Yet only the Democratic People's Republic of Korea has established a Communist monarchy, now reaching the third generation. The real question for Kim Jong-un is whether he, like his father and grandfather, will follow the Ottoman practice of producing multiple children from multiple consorts. That always makes a succession fight much more interesting. Even so, royal baby sightings still are rare in the DPRK. But the North Korean regime has gone a step further in claiming that Great Leader Kim Il-sung, grandfather to Cute Leader Kim Jong-un, as I call the latter, is not only god, but recognized as such by America’s legendary evangelist Billy Graham. On Kim Il-sung’s birthday last week, wrote Adam Taylor in the Washington Post, the DPRK paper Rodong Sinmun reported that Graham, who traveled several times to North Korea, praised the senior Kim’s rule. Indeed, “said” Graham: “Having observed the Supreme Leader Kim Il-sung’s unique political leadership, I can only think that he is God.” Moreover, “if God is the leader of another world, savior and ruler of the past and future life that exists in our imagination, I acknowledge the Supreme Leader Kim Il-sung is the God who rules today’s human world.” The man who raised the Bible at thousands of crusades then “said,” according to the Rodong Sinmun, “Kim is this world’s God. Why would a country like this need the Holy Bible?” Graham is long retired, in ill health, and out of public view. Officials at the Billy Graham Evangelistic Association dismissed the claims as not reflecting “Mr. 
Graham’s theology or his language.” Certainly there’s no evidence that in thrall to the Great Leader the evangelist tossed aside his life’s calling to worship the modern equivalent of Baal. Kim, whose parents reportedly were believers, offered the faith no favors. To the contrary, he became one of the last century’s great religious persecutors. Unauthorized faith activities could lead to prison and death. And Kim’s pernicious policies continued under his son and grandson. If anything, the repression has worsened. While most North Korean refugees returned to the DPRK are treated atrociously, the greatest punishment is inflicted on those thought to have fallen under the sway of foreign Christians. The regime’s frantic fear of Christianity suggests inner doubts. The Kim dynasty is premised on loyalty to the top leader, who long has been treated with reverence and said to be capable of superhuman feats. Indeed, the biography of Kim Jong-il, father of the Cute Leader, was rearranged to have the former born on sacred Mt. Paektu rather than in Siberia. That buttressed Kim’s claim to semi-divinity as well. No mere mortal was he. While Graham was not quoted talking about this Kim, presumably the son of a god also is a god—and the latter’s son as well. Yet it would behoove the North Koreans to be careful before so cavalierly claiming divine status. One of the great Old Testament face-offs came during the reign of Ahab, one of Israel’s more benighted rulers. The prophet Elijah challenged four hundred and fifty advocates of Baal, rather like the retainers surrounding the Kims today. Both sides prepared a bull for sacrifice on Mount Carmel. Baal’s prophets danced, shouted, and “slashed themselves with swords and spears, as was their custom,” but still there was no response. 
Then Elijah called upon the one true God and “the fire of the Lord fell and burned up the sacrifice, the wood, the stones and the soil.” At Elijah’s instruction, the people seized the false prophets and slaughtered them. (1 Kings 18:19-40) Then there was King Herod, his hands washed in the blood of John the Baptist. Herod spoke to the people of Sidon and Tyre, who were seeking to reconcile with the king. They responded: “‘This is the voice of a god, not of a man.’ Immediately, because Herod did not give praise to God, an angel of the Lord struck him down, and he was eaten by worms and died.” (Acts 12:21-23) It seems there is a price to be paid for claiming to be god. The penalty might be especially great for sullying the name of Billy Graham, claiming him to be a prophet of the modern Baal. Christianity once flourished in the north as well as south of the Korean peninsula. Despite extraordinary persecution under the Kim dynasty, the faith survived. Its continued existence challenges the current Kim's rule. Perhaps he believes that he can bolster the regime's credibility by claiming an improbable endorsement from Graham. But that's not likely to work. Too much information passes over the DPRK's borders for anyone any longer to believe the fantasies propagated in Pyongyang. Moreover, North Korean officials should beware of what happened to past prophets of Baal. A little fire from heaven just might be in store. Doug Bandow is a Senior Fellow at the Cato Institute and former Special Assistant to President Ronald Reagan.
  • City Wasting Money on Buses Few Residents Ride    (Randal O'Toole, 2019-11-08)
    Randal O'Toole VIA, San Antonio's transit agency, is in trouble. According to Federal Transit Administration data, the agency has spent tens of millions of dollars of your money to increase transit service by 17 percent since 2012, yet transit ridership (measured through the end of fiscal year 2019) has dropped by 24 percent. Bexar County Judge Nelson Wolff thinks he has a solution: Throw more money at it. He wants to shift a sales tax now dedicated to protecting the Edwards Aquifer to VIA. This would give the transit agency additional funds to squander as it watches ridership continue to drop. Transit is already one of the most heavily subsidized industries in the country, costing taxpayers an average of $5 every time someone steps aboard a public transit bus or train. Despite these subsidies, ridership is declining nationwide, though not nearly as fast as it is falling in San Antonio. In 2017, VIA collected $23.6 million in fares but spent more than $205 million operating transit. It also spends an average of $40 million a year on maintenance and capital improvements (mainly new buses). Transit advocates will point out that driving is subsidized, too. Those subsidies should end, but they average only about a penny per passenger mile. By comparison, VIA subsidies average well over $1 per passenger mile. There’s a good reason why VIA ridership is plummeting: Almost everyone today has a car. Census data reveal that, in 2018, only 2.7 percent of San Antonio workers lived in households that had no cars, well under the national average of 4.3 percent. Moreover, just 27 percent of workers without cars took transit to work in 2018, down from 42 percent in 2012. In fact, more people who lived in households without cars drove alone to work — probably in employer-supplied vehicles — than took transit to work. 
Although San Antonio’s population has grown by 11 percent since 2012, the number of people who take transit to work has declined by 14 percent. In 2018, just 19,600 people in the San Antonio urban area relied on transit to get to work. People have a good reason to shift from transit to cars. VIA buses average just 16 mph and don’t always go where people need to go. Driving speeds in San Antonio average 33 mph, and cars can take you exactly where you want to go when you want to get there. No wonder University of Minnesota researchers found the typical San Antonian can reach more than three times as many jobs in a 20-minute auto drive as in a 60-minute transit ride. Wolff thinks it is more important to prop up a transit system that carries just 2 percent of the region’s employees to work because, he says, spending more on transit “will do more for San Antonio environmentally.” In fact, the opposite is true. In 2017, VIA buses emitted more than twice as much greenhouse gas for every passenger mile as the average car and 80 percent more than the average SUV. That’s because VIA buses carried an average of just five passengers — that is, they carried five passengers for every vehicle mile they operated. It’s possible VIA could take some actions to reverse, or at least slow, the decline in transit ridership. In 2015, Houston rerouted its bus system, increasing frequencies on routes that had the most riders and putting more routes on a grid so people don’t have to go downtown every time they want to go from one neighborhood to another. The result was a 6 percent increase in riders, compared with an 8 percent decline in the rest of the nation. Moreover, it didn’t cost taxpayers anything because Houston merely rerouted existing buses. Ending subsidies to VIA won’t mean an end to transit. Instead, either VIA or private operators will continue to provide transit services where they can cover their costs. 
Census data say that about 13,700 San Antonians who earn less than $25,000 a year rely on transit to get to work. It would be far less expensive to give these people vouchers they could use for transit, taxis, Uber, Lyft or other transportation services than to keep subsidizing VIA. People like Wolff seem to think that taxpayers exist to serve the transit system when, in fact, agencies such as VIA are supposed to serve us. If they aren’t serving us anymore, it’s time to stop throwing money at them. Randal O'Toole is a transportation policy analyst with the Cato Institute and author of "Romance of the Rails: Why the Passenger Trains We Love Are Not the Transportation We Need."
  • 4 Reasons Why Socialism Is Becoming More Popular    (2019-11-08)
    The newfound openness of large numbers of Americans to socialism is, by now, a well-documented phenomenon. According to a Gallup poll from earlier this year, 43% of Americans now believe that some form of socialism would be a good thing, in contrast to 51% who are still against it. A Harris poll found that four in ten Americans prefer socialism to capitalism. The trend is particularly apparent in the young: another Gallup poll showed that as recently as 2010, 68% of people between 18 and 29 approved of capitalism, with only 51% approving of socialism, whereas in 2018, while the percentage among this age group favoring socialism was unchanged at 51%, those in favor of capitalism had dropped precipitously to 45%. The same poll showed that among Democrats, the popularity of socialism now stands at 57%, while capitalism is only at 47%, a marked departure from 2010, when the two were tied at 53%. A YouGov poll from earlier this year showed that unlike older generations, which still preferred capitalist candidates, 70% of millennials and 64% of gen-Zers would vote for a socialist. The question is why socialism now? At a time when the American economy under Trump seems to be chugging along at a nice clip, why are so many hankering for an alternative? I would suggest four factors contributing to the situation.

Factor #1: Ignorance of History

The first cause of socialism’s popularity, especially among the young, is an obvious one: having grown up after the end of the Cold War, the collapse of Europe’s Eastern Bloc and China’s transition to authoritarian capitalism, “these kids today” — those 18- to 29-year-olds who were born around the last decade of the 20th century — don’t know what socialism is all about. When they think socialism, they don’t think Stalin; they think Scandinavia. Americans’ — and especially young Americans’ — ignorance of history is well-documented and profound. 
As of 2018, only one in three Americans could pass a basic citizenship test, and of test-takers under the age of 45, that number dropped to 19%. That included such lowlights as having no clue why American colonists fought the British and believing that Dwight Eisenhower led the troops during the Civil War. Speaking of the war during which he actually led the troops, many millennials don’t know much about that one either. They don’t know what Auschwitz was (66% of millennials in particular could not identify it). Twenty-two percent of them had not heard of the Holocaust itself. The Battle of the Bulge? Forget it. Go back further in time, and the cluelessness just keeps deepening. Only 29% of seniors at U.S. News and World Report’s top 50 colleges in America — the precise demographic that purports to speak with authority about America’s alleged history of white supremacy — have any idea what Reconstruction was all about. Only 23% know who wrote the Constitution. So much for any notion that this is the most educated generation ever. Closer to the theme — socialism — the same compilation of survey results includes the attribution of The Communist Manifesto’s “from each according to his ability; to each according to his needs” to Thomas Paine, George Washington or Barack Obama. Moreover, among college-aged Americans, though support for socialism is pretty high, when these same young adults are asked about their support for the actual definition of socialism — a government-managed economy — 72% turn out to be for a free-market economy and only 49% for the government-managed alternative (yes, it looks from those numbers like there are a lot of confused kids who are in favor of both of the mutually exclusive alternatives). As compared to about a third of Americans over 30, only 16% of millennials were able to define socialism, according to a 2010 CBS/New York Times poll. 
And though I haven’t seen polling on this, I’d be willing to bet that a good bunch of these same students, if asked to say what the Soviet Union was, would have no clue or peg it as some sort of vanquished competitor of Western Union. Compounding the problem still further is that the history that students are being taught increasingly falls into the category of “woke” history: America’s history of oppression as imagined by the influential revisionist socialist historian Howard Zinn. When socialists are writing our history books, the end result is preordained. Given such ignorance and systematic distortion of history, is it any surprise that millennials who never lived through very much of the 20th century don’t think socialism is all that bad?

Factor #2: Government Bungling

When we try to explain the socialist urge, we cannot lose sight of the fact that our government keeps interfering in the economy in ways that give people every reason to think the system is corrupt and needs to be trashed. Take the skyrocketing cost of college, for instance. On the surface, this looks like greedy capitalist universities just keep on raising tuition, and since most college kids and their parents can’t pay the sticker price, almost 70% take out loans, saddling young people trying to start their careers with a mountain of debt (almost $30,000 on average). This results in all those socialist promises of free college or loan forgiveness sounding dandy. Underneath the surface, however, a huge part of the problem is federal grants and subsidized loans. If the government stopped footing a large part of their bill, more students and parents would be forced to pony up, which would mean, in turn, that colleges would not be able to keep hiking their prices without seeing a precipitous drop in enrollment. 
They would, instead, be forced to price themselves at some level that applicants could realistically pay, making college more affordable for a large segment of the American middle class. Another simple example of the problem is the Emergency Economic Stabilization Act of 2008, colloquially known as the big bank “Bailout.” When kids grow up seeing government tossing out free lifelines to businesses that get themselves into dire straits, cause a massive financial crisis and, in the process, lose ordinary folks lots of jobs and homes, we can’t blame them for concluding that the system is rigged. There are many more examples where these came from — our government frittering away trillions on foreign wars that increase instability throughout the world and end up costing us even more as we scramble to clean up our own messes is one expenditure that comes readily to mind — but the point is this: the more the government interferes in the economy to help out vested interests, the more reason many of us will see to ask government to interfere in the economy to help out the rest of us. The more reason we give anyone to think that capitalism means crony capitalism, the more they’ll clamor for socialism.

Factor #3: Universities’ Ideological Monoculture

The supporters of socialism are not simply the young, but rather, disproportionately those among the young who are college-educated. And the more college they have, the hotter for socialism they get. According to a 2015 poll, support for socialism grows from 48% among those with a high school diploma or less to 62% among college graduates to 78% among those with post-graduate degrees. Those on the left probably stop thinking hard about now and jump immediately to the conclusion that support for socialism is just a natural outgrowth of big brains and elite educations. 
But there is, in fact, a less obvious but ultimately far more compelling explanation that also manages to account for the general fact that more education correlates with more leftism: something — something bad — is happening at universities themselves to pull students toward the (far) left. We have already seen above that what’s not happening at universities, even elite universities, today is a whole lot of education in important subjects like history. What we are getting instead is a lot of groupthink and indoctrination. Universities have always skewed a bit left. But beginning in the early to mid 1990s (for reasons I’ve explained in some detail elsewhere), ideological diversity began to vanish entirely, as the leftward deviation turned tidal. As documented in a 2005 paper from Stanley Rothman et al., as of 1984, 39% of university faculty were left/liberal, and 34% were right/conservative. By 1999, those numbers had undergone a seismic shift: faculty was now 72% left/liberal and 15% right/conservative. Since 1999, the imbalance has become starker still. A comprehensive National Association of Scholars report from April 2018 by Prof. Mitchell Langbert of Brooklyn College, tracking the political registrations of 8,688 tenure-track, Ph.D.-holding professors from 51 of U.S. News & World Report’s 66 top-ranked liberal arts colleges for 2017, found that “78.2 percent of the academic departments in [his] sample have either zero Republicans, or so few as to make no difference.” Predictably, given the composition of the professoriate, survey data also indicates that students’ political views drift further leftward between freshman and senior year. In light of this data, it should not be a surprise to us that students who have gone to college in this age of ideological extremism have come out radicalized and … socialized. 
Factor #4: Coddled Kids

The young have always been more inclined to embrace pipe dreams — a lack of familiarity with the complicated way in which the world actually works, coupled with the college fix described above, will do that to most anyone — but there is a reason the mindset of today’s young’uns is particularly susceptible to the red menace. In last year’s The Coddling of the American Mind, the prominent social psychologist Jonathan Haidt and FIRE’s Greg Lukianoff describe the species of overprotective parenting and instilling of baseless and uncritical self-esteem by parents and educators alike that came to prevail as kids were growing up in the 90s and 00s. When we are raised in the belief that we are wonderful just as we are, we never learn the critical life skills of self-soothing, working through anxiety, facing obstacles and overcoming adversity. The predictable result, as Haidt and Lukianoff observe, is a demand to be safeguarded — safe spaces, free speech crackdowns and so on. The state appears to many as the appropriate institution to provide this sort of “safety.”

If these four are the primary causes of socialism’s rapid surge in our midst, then the next logical question is what to do about it. There is no easy answer, of course, but I would suggest that the radicalization of academia is the linchpin issue. If we could succeed in reversing that tsunami, many dominoes would fall: we would be addressing the university monoculture that systematically distorts research, sends students veering hard left and graduates generations of left-orthodox clones who find their way into journalism, government, education, entertainment and other influential sectors driving public opinion and shaping the other three downstream issues factoring into socialism’s rise: government policy, educational philosophy and the manner in which history is taught. 
Many have observed that our universities are in crisis, but that crisis also represents an opportunity to avert the much larger socialist cataclysm that threatens to engulf us all.
  • Ocasio-Cortez is Wrong: We're Not Working 80-Hour Weeks Now    (2019-11-07)
    It has become nearly commonplace for pundits and politicians to claim that Americans are working more than ever before; that they're working more jobs, and working longer hours — all for a lower income. During the Democratic debates this summer, for instance, Rep. Tim Ryan of Ohio claimed  "the economic system now force[s] us to have two or three jobs just to get by.” Kamala Harris made similar comments. These claims echo statements from Elizabeth Warren and Alexandria Ocasio-Cortez. In a July 2018 interview, Ocasio-Cortez insisted "Unemployment is low because everyone has two jobs. Unemployment is low because people are working 60, 70, 80 hours a week and can barely feed their family." That same month, Warren stated "people" are "working two, three, or four jobs to try to pay the rent and keep food on the table." Ocasio-Cortez was the only one in this group unwise enough to claim "everyone" is working incredibly long hours, but the general sentiment is clear enough: a lot of people are working harder and longer just to attain even a basic standard of living.1 Fortunately, this doesn't appear to be the case at all. While it is no doubt true that some people work multiple jobs, and many work long hours, it is not clear that this situation is new, or that it has become worse in the past decade. In fact, in response to Harris's comments, The Washington Post reported that the number of working Americans with more than one job is lower now than in the mid-90s: In all, there are 7.8 million people who hold more than one job — just 5 percent of Americans with jobs. The percentage has been roughly steady since the Great Recession, and in fact is lower than in the mid-1990s, when it hovered around 6 percent. Nor can we tell from the data why people were working more hours or working more than one job. It cannot simply be assumed that people work more only because, without those extra hours, they would risk hunger and eviction from their homes. 
After all, there is a growing body of research showing that it is high-income workers who are most prone to working longer hours. For example, the authors of one study found: Between 1979 and 2002, the frequency of long work hours increased by 14.4 percentage points among the top quintile of wage earners, but fell by 6.7 percentage points in the lowest quintile. When we see evidence of rising work hours, it's often a safe guess that among those working more are many higher-wage workers. These people are not working "60, 70, 80 hours a week" to "keep food on the table." This reverses the status quo of the past (i.e., the 1970s and before), when lower-income workers tended to work more. In a 2006 study, economists Mark Aguiar and Erik Hurst examined work and leisure trends over the last 40 years. They noticed that, in the 1960s, most men—regardless of their education, which serves as a proxy for income—worked the same number of hours, about 50 per week, and spent about 105 hours dedicated to leisure activities. By 2003, a divergence had emerged that mirrored growing income inequality: Men with less than 12 years of education worked, on average, 37.5 hours a week, while more educated (higher earning) men worked 43.4 hours. Both groups gained more leisure time (socializing, watching TV, playing sports), though the less educated group spent about 6 to 7 more hours a week engaged in leisure activities than their more educated (and presumably higher earning) peers. This, incidentally, has increased measures of income inequality overall. Higher-income workers are electing to work more, while middle- and lower-income workers opt for leisure. Since government measures of money income can't take into account the benefits of leisure, we then see an increased difference between the two groups. Another complicating factor is the fact that many workers choose to work more when the economy is doing well. 
Thus, during periods of significant income gains, we might also see increases in working hours. Using numbers from the University of Groningen, we can see these trends at work in recent decades: Overall, annual working hours have declined in recent decades, although we do find that working hours increased — with some ups and downs — from the early 80s to the end of the Dot-Com boom in 2000. This was also a period during which median incomes increased sizably for most groups. The direction of causality probably goes both ways. As job opportunities grew during this time, many people took advantage of the situation and worked longer hours when they could, so as to increase their purchasing power and their standards of living. As a result, both personal and household incomes went up. We can see how working hours tend to track with the business cycle using the Bureau of Labor Statistics' numbers for average weekly hours: It should not be assumed that most people were working longer hours simply because they were headed toward the poverty line. Nor can it be assumed workers prefer leisure to additional consumption. One example which suggests the American preference for consumption over leisure is the fact that the size of American houses has increased in recent decades even as household sizes have decreased. Presumably a declining household size implies square-footage needs also decline. Yet many Americans continue to opt for more spacious living quarters, which drives a greater need for more money income, and often longer hours. Yet even as working hours have remained largely flat over the past generation, real incomes have increased. From 1980 to 2017, the median income increased 14 percent for men, and 24 percent for women. Meanwhile, according to weekly average hours numbers from the Bureau of Labor Statistics, average weekly hours increased only 0.5 percent over the same period, from 38.5 hours to 38.7 hours. 
Since the end of the Dot-Com boom in 2000, of course, median incomes have largely flattened. There was no change at all for men from 2000 to 2017, while the increase for women was 12 percent. But during that time, average weekly hours fell 0.5 percent for women, and fell 3.7 percent for men. Government income statistics, however, measure only money income, so the increased leisure is measured as zero income.

The Larger Trend

Overall, working hours haven't moved much in the past forty years — although incomes have increased significantly over that period. But working hours are still well down from where they were during the first half of the twentieth century — at least for full-time workers. Weekly work hours plummeted from 60 per week in 1890 to 40.25 in 2000, according to calculations by Michael Huberman and Chris Minns.2 Civilian working hours collapsed between 1929 and 1950 due to the Great Depression and the Second World War, but averages have rarely exceeded 40 hours ever since. Researchers Valerie A. Ramey and Neville Francis, on the other hand, contend this drop-off is overstated, and they opt for a different method that shows a decrease of only 16 percent between 1900 and 2005, dropping from 27.7 hours to 23 hours worked per person. (The real median income likely quadrupled during this period.) But even the more modest gains shown by Ramey and Francis point to broad declines in work time. For example, since 1900, weekly work hours for children in the 10 to 13-year-old range decreased from 5.2 hours per week to zero hours per week in every year since 1940. Weekly work by teenagers (ages 14-17) has also collapsed, dropping from 20 hours per week in 1900 to 2.9 hours in 2005. Meanwhile, Americans are clearly retiring earlier, since weekly work in the over-65 population fell from 19.3 weekly hours in 1900 down to 4.2 weekly hours in 2005.3 These gains appear to have been made possible by the continued labor of workers in the 25-54 subset. 
According to Ramey and Francis, these workers have not seen any decline in total hours since 1900, although their standard of living has certainly increased. From 1900 to 2005, weekly hours for the 25-54 year olds increased from 29.6 to 31.3.

Women in the Workforce

This increase, however, has been largely fueled by women joining the workforce. Weekly hours worked by males — even in the 25-54 group — declined by 25 percent during the twentieth century, and remain down from the alleged "good ol' days" of the 1950s and 1960s. Declines in working hours for males were even larger in all other age groups. For women, however, weekly hours worked increased substantially, but mostly for women in the 25-64 range. For women aged 25-54, weekly hours increased from 7.9 in 1900 to 26.1 in 2005. Seeing this, some bearish observers of American standards of living often claim that everyone is really working much more because prior to the 1970s, women didn't "have to" work. These critics claim that as manufacturing and other presumably high-wage jobs went into decline, women were forced to get wage work to make up the difference. The problem with this claim, however, is that as women began to join the workforce in larger numbers during the 1950s and 1960s, this did not represent a shift from leisure to work. It only represented a shift from "home production" to production through wage work.4 Thanks to a variety of labor-saving devices, expanded schooling for children, and the introduction of part-time work, women were able to seek money income for themselves and their families without reducing overall leisure time. For example, for all women over age 14, the reduction in home production was larger than the increase in "hours worked," leaving more time that could be devoted either to leisure or schooling: From 1900 to 2005, home production for women fell from 42.5 hours to 27.6 hours. That's a drop of 14.9 hours. Meanwhile, "hours worked" increased by only 9.3 hours. 
Meanwhile, weekly time spent on home production overall — including both men and women — declined for all age groups except the over-65 group. As home production and working hours declined, this left more time for leisure and schooling. Thus, Ramey and Francis conclude that even with sizable increases in schooling in recent decades, leisure time increased across all age groups from 1900 to 2005, for men and women combined. The largest gains, not surprisingly, are found with the over-65 age group. Moreover, with growing leisure in the over-65 cohort, combined with growing life expectancies, lifetime leisure has now reached the highest level it's ever been. Ramey and Francis calculate that "cumulative lifetime hours of leisure" increased by 11 percent from 1970 to 2000. Given that working hours as of 2017 were still lower than they were in 2000, it is likely that time devoted to leisure and schooling has increased since then, especially as the population ages and more Americans retire.  1. The minimum standard of living necessary to qualify as "decent" or "basic" is never defined. 2. Michael Huberman and Chris Minns, "The times they are not changin’: Days and hours of work in Old and New Worlds, 1870–2000," Explorations in Economic History 44 (2007): 538–567. 3. Valerie A. Ramey and Neville Francis, "A Century of Work and Leisure," American Economic Journal: Macroeconomics 1, no. 2 (2009): 189–224. 4. In other words, leisure didn't necessarily decline as women entered the workforce. Women simply opted to do different types of work, often because wage work offered more monetary rewards than home production. (Home production includes food preparation, child care, shopping for goods and services, home maintenance, and laundry. This work was overwhelmingly done by women prior to the 1960s.)
  • The Problem of Poverty    (2019-11-07)
    [Chapter One of The Conquest of Poverty.] The history of poverty is almost the history of mankind. The ancient writers have left us few specific accounts of it. They took it for granted. Poverty was the normal lot. The ancient world of Greece and Rome, as modern historians reconstruct it, was a world where houses had no chimneys, and rooms, heated in cold weather by a fire on a hearth or a fire-pan in the center of the room, were filled with smoke whenever a fire was started, and consequently walls, ceiling, and furniture were blackened and more or less covered by soot at all times; where light was supplied by smoky oil lamps which, like the houses in which they were used, had no chimneys; and where eye trouble as a result of all this smoke was general. Greek dwellings had no heat in winter, no adequate sanitary arrangements, and no washing facilities.1 Above all there was hunger and famine, so chronic that only the worst examples were recorded. We learn from the Bible how Joseph advised the pharaohs on famine relief measures in ancient Egypt. In a famine in Rome in 436 B.C., thousands of starving people threw themselves into the Tiber. Conditions in the Middle Ages were no better: The dwellings of medieval laborers were hovels -- the walls made of a few boards cemented with mud and leaves. Rushes and reeds or heather made the thatch for the roof. Inside the houses there was a single room, or in some cases two rooms, not plastered and without floor, ceiling, chimney, fireplace or bed, and here the owner, his family and his animals lived and died. There was no sewage for the houses, no drainage, except surface drainage for the streets, no water supply beyond that provided by the town pump, and no knowledge of the simplest forms of sanitation. 'Rye and oats furnished the bread and drink of the great body of the people of Europe. ... 
Precariousness of livelihood, alternations between feasting and starvation, droughts, scarcities, famines, crime, violence, murrains, scurvy, leprosy, typhoid diseases, wars, pestilences and plagues' -- made part of medieval life to a degree with which we are wholly unacquainted in the Western world of the present day.2 And, ever-recurring, there was famine: In the eleventh and twelfth centuries famine [in England] is recorded every fourteen years, on an average, and the people suffered twenty years of famine in two hundred years. In the thirteenth century the list exhibits the same proportion of famine; the addition of high prices made the proportion greater. Upon the whole, scarcities decreased during the three following centuries; but the average from 1201 to 1600 is the same, namely, seven famines and ten years of famine in a century.3 One writer has compiled a detailed summary of twenty-two famines in the thirteenth century in the British Isles, with such typical entries as: "1235: Famine and plague in England; 20,000 persons die in London; people eat horse-flesh, bark of trees, grass, etc."4 But recurrent starvation runs through the whole of human history. The Encyclopedia Britannica lists thirty-one major famines from ancient times down to 1960.5 Let us look first at those from the Middle Ages to the end of the eighteenth century: 1005: famine in England. 1016: famine throughout Europe. 1064-72: seven years' famine in Egypt. 1148-59: eleven years' famine in India. 1344-45: great famine in India. 1396-1407: the Durga Devi famine in India, lasting twelve years. 1586: famine in England giving rise to the Poor Law system. 1661: famine in India; no rain fell for two years. 1769-70: great famine in Bengal; a third of the population -- 10 million persons -- perished. 1783: the Chalisa famine in India. 1790-92: the Deju Bara, or skull famine, in India, so called because the dead were too numerous to be buried. This list is incomplete -- as probably any list would be. 
In the winter of 1709, for example, in France, more than a million persons, according to the figures of the time, died out of a population of 20 millions.6 In the eighteenth century, in fact, France suffered eight famines, culminating in the short crops of 1788, which were one of the causes of the Revolution. I am sorry to be dwelling in such detail on so much human misery. I do so only because mass starvation is the most obvious and intense form of poverty, and this chronicle is needed to remind us of the appalling dimensions and persistence of the evil. In 1798, a young English country parson, Thomas R. Malthus, delving into this sad history, anonymously published An Essay on the Principle of Population as It Affects the Future Improvement of Society. His central doctrine was that there is a constant tendency for population to outgrow food supply and production. Unless checked by self-restraint, population will always expand to the limit of subsistence, and will be held there by disease, war, and ultimately famine. Malthus was an economic pessimist, viewing poverty as man's inescapable lot. He influenced Ricardo and other classical economists of his time, and the general tone of their writings led Carlyle to denounce political economy as "the Dismal Science." Malthus had in fact uncovered a truth of epoch-making importance. His work first set Charles Darwin on the chain of reasoning which led to the promulgation of the theory of evolution by natural selection. But Malthus greatly overstated his case, and neglected to make essential qualifications. 
He failed to see that, once men in any place (it happened to be his own England) succeeded in earning and saving a little surplus, made even a moderate capital accumulation, and lived in an era of political freedom and protection for property, their liberated industry, thought, and invention could at last make it possible for them enormously and acceleratively to multiply per capita production beyond anything achieved or dreamed of in the past. Malthus announced his pessimistic conclusions just in the era when they were about to be falsified.

The Industrial Revolution

The Industrial Revolution had begun, but nobody had yet recognized or named it. One of the consequences of the increased production it led to was to make possible an unparalleled increase in population. The population of England and Wales in 1700 is estimated to have been about 5,500,000; by 1750 it had reached some 6,500,000. When the first census was taken in 1801 it was 9,000,000; by 1831 it had reached 14,000,000. In the second half of the eighteenth century population had thus increased by 40 percent, and in the first three decades of the nineteenth century by more than 50 percent. This was not the result of any marked change in the birth rate, but of an almost continuous fall in the death rate. People were now producing the food supply and other means to support a greater number of them.7 This accelerating growth in population continued. The enormous forward spurt of the world's population in the nineteenth century was unprecedented in human experience. "In one century, humanity added much more to its total volume than it had been able to add during the previous million years."8 But we are getting ahead of our story. We are here concerned with the long history of human poverty and starvation, rather than with the short history of how mankind began to emerge from it. 
Let us come back to the chronicle of famines, this time from the beginning of the nineteenth century: 1838: intense famine in North-Western Provinces (Uttar Pradesh), India; 800,000 perished. 1846-47: famine in Ireland, resulting from the failure of the potato crop. 1861: famine in northwest India. 1866: famine in Bengal and Orissa; 1,000,000 perished. 1869: intense famine in Rajputana; 1,500,000 perished. 1874: famine in Bihar, India. 1876-78: famine in Bombay, Madras, and Mysore; 5,000,000 perished. 1877-78: famine in north China; 9,500,000 said to have perished. 1887-89: famine in China. 1891-92: famine in Russia. 1897: famine in India; 1,000,000 perished. 1905: famine in Russia. 1916: famine in China. 1921: famine in the U.S.S.R., brought on by Communist economic policies; at least 10,000,000 persons seemed doomed to die, until the American Relief Administration, headed by Herbert Hoover, came in and reduced direct deaths to about 500,000. 1932-33: famine again in the U.S.S.R., brought on by Stalin's farm collectivization policies; "millions of deaths." 1943: famine in Bengal; about 1,500,000 perished. 1960-61: famine in the Congo.9 We can bring this dismal history down to date by mentioning the famines in recent years in Communist China and the war-created famine of 1968-70 in Biafra. The record of famines since the end of the eighteenth century does, however, reveal one striking difference from the record up to that point. Mass starvation did not fall on a single country in the now industrialized Western world. (The sole exception is the potato famine in Ireland; and even that is a doubtful exception, because the Industrial Revolution had barely touched mid-nineteenth-century Ireland -- still a one-crop agricultural country.) 
It is not that there have ceased to be droughts, pests, plant diseases, and crop failures in the modern Western world, but that when they occur there is no famine, because the stricken countries are quickly able to import foodstuffs from abroad, not only because the modern means of transport exist, but because, out of their industrial production, these countries have the means to pay for such foodstuffs. In the Western world today, in other words, poverty and hunger -- until the mid-eighteenth century the normal condition of mankind -- have been reduced to a residual problem affecting only a minority; and that minority is being steadily reduced. But the poverty and hunger still prevailing in the rest of the world -- in most of Asia, Central and South America, and Africa -- in short, even now afflicting the great majority of mankind -- show the terrible dimensions of the problem still to be solved. And what has happened and is still happening in many countries today serves to warn us how fatally easy it is to destroy all the economic progress that has already been achieved. Foolish governmental interference led the Argentine, once the world's principal producer and exporter of beef, to forbid in 1971 even domestic consumption of beef on alternate weeks. Soviet Russia, one of whose chief economic problems before it was communized was to find an export market for its huge surplus of grains, has been forced to import grains from the capitalist countries. One could go on to cite scores of other examples, with ruinous consequences, all brought on by short-sighted governmental policies. More than thirty years ago, E. 
Parmalee Prentice was pointing out that mankind has been rescued from a world of want so quickly that the sons do not know how their fathers lived: "Here, indeed, is an explanation of the dissatisfaction with conditions of life so often expressed, since men who never knew want such as that in which the world lived during many by-gone centuries, are unable to value at its true worth such abundance as now exists, and are unhappy because it is not greater."10 How prophetic of the attitude of rebellious youth in the 1970s! The great present danger is that impatience and ignorance may combine to destroy in a single generation the progress that it took untold generations of mankind to achieve. Those who cannot remember the past are condemned to repeat it. 1. E. Parmalee Prentice, Hunger and History, Harper & Bros., 1939, pp. 39-40. 2. Ibid., pp. 15-16. 3. William Farr, "The Influence of Scarcities and of the High Prices of Wheat on the Mortality of the People of England," Journal of the Royal Statistical Society, February 16, 1846, Vol. IX, p. 158. 4. Cornelius Walford, "The Famines of the World," Journal of the Royal Statistical Society, March 19, 1878, Vol. 41, p. 433. 5. "Famine," Encyclopedia Britannica, 1965. 6. Gaston Bouthoul, La population dans le monde, pp. 142-43. 7. T. S. Ashton, The Industrial Revolution (1760-1830), Oxford University Press, 1948, pp. 3-4. 8. Henry Pratt Fairchild, "When Population Levels Off," Harper's Magazine, May 1938, Vol. 176, p. 596. 9. "Famine" and "Russia," Encyclopedia Britannica, 1965. 10. Hunger and History, p. 236.
  • The Economics Behind the Fall of the Berlin Wall    (2019-11-07)
    Friday marks the thirtieth anniversary of the fall of the Berlin Wall. Like most historical events that are commemorated as if they took place on a single day, the fall of the Berlin Wall on November 9, 1989, was just one of many interrelated events that led to the end of the system of Soviet client states in Eastern Europe, and the end of the Soviet Union itself, in December of 1991. With the fall of the Berlin Wall, East Germans, who had lived under severe restrictions on travel and emigration, were able to freely travel to West Berlin, which continued a chain of events already begun earlier that year in which many anti-Soviet dissidents throughout Eastern Europe became emboldened and met with unprecedented success. Meanwhile, East Germans flooded into neighboring countries by the thousands, seeking refuge in Austria and West Germany from Soviet-sponsored oppression.

Why It Was Different in 1989

Throughout the mid-twentieth century, Eastern Europe was home to numerous anti-Soviet revolts and acts of civil disobedience. In Hungary in 1956, Prague in 1968, and especially in Poland throughout the 1970s and 1980s, resistance flared up, but was reliably crushed with Soviet-sponsored martial law and outright military intervention. But in the summer of 1989, the Poles held an election that essentially overthrew the Soviet-approved regime in Poland. This time, however, instead of sending tanks to crush the Polish agitators, the USSR did nothing. By November of that year, dissidents had become emboldened by Soviet inaction. Hungary and Czechoslovakia haphazardly opened their borders, allowing East Germans to stream into Austria and on to West Germany. East Berliners began to demand free passage to the West. The “fall” of the wall soon followed. 
Americans today, and especially American conservatives, like to claim that the end of the Soviet bloc and the Soviet Union was America’s doing; that the Soviet oligarchs feared American military might, and simply decided to give up and vote themselves out of existence, as they did two years later. This tale makes for nice domestic propaganda in America, but the fact that regimes virtually never just “give up” without firing a shot when faced with a threatening foreign power makes it rather unlikely. We are far more likely to find an answer if we ask ourselves not why the American state was so strong in the 1980s, but why the Soviet state was so weak. If the Soviets were more than capable of maintaining “order” in Eastern Europe during the 50s, 60s, and 70s, why were they unable or unwilling to do the same in the 1980s? An inquiry along these lines quickly leads us to find that by the 1980s, the Soviet economy, and most of the economies of Eastern Europe, were economic basket cases. Housing was in disrepair. Vehicles and appliances were incredibly old-fashioned and unreliable. The standard of living was a fraction of what it was in the “West.” Basic items like soap and women’s pantyhose were often luxuries. In other words, the centrally planned economies of the Soviet bloc produced little actual wealth, and as the regimes siphoned off more and more of what little wealth was being produced, the people, as well as the regimes, became poorer and poorer. This economic weakness meant not only that the legitimacy of the regime was imperiled, but that the Soviets no longer enjoyed a military “surplus” with which they could simply roll into every rebellious neighborhood and re-establish order. In other words, the USSR was too poor to pay the political bills.

Mises and the Calculation Problem

None of this would have surprised Ludwig von Mises. 
Decades before, Mises had shown that a socialist economy (by which he meant a centrally planned economy) could not possibly know what to produce, when to produce it, or for whom to produce. In explaining this, Mises proved that the Soviet Union, regardless of any victories it might have in remolding human nature, was economically impossible. Rothbard explains: Before Ludwig von Mises raised the calculation problem in his celebrated article in 1920, everyone, socialists and non-socialists alike, had long realized that socialism suffered from an incentive problem. If, for example, everyone under socialism were to receive an equal income, or, in another variant, everyone was supposed to produce “according to his ability” but receive “according to his needs,” then, to sum it up in the famous question: Who, under socialism, will take out the garbage? That is, what will be the incentive to do the grubby jobs, and, furthermore, to do them well? ... But the uniqueness and the crucial importance of Mises’s challenge to socialism is that it was totally unrelated to the well-known incentive problem. Mises in effect said: All right, suppose that the socialists have been able to create a mighty army of citizens all eager to do the bidding of their masters, the socialist planners. What exactly would those planners tell this army to do? How would they know what products to order their eager slaves to produce, at what stage of production, how much of the product at each stage, what techniques or raw materials to use in that production and how much of each, and where specifically to locate all this production? How would they know their costs, or what process of production is or is not efficient? Mises demonstrated that, in any economy more complex than the Crusoe or primitive family level, the socialist planning board would simply not know what to do, or how to answer any of these vital questions. 
Developing the momentous concept of calculation, Mises pointed out that the planning board could not answer these questions because socialism would lack the indispensable tool that private entrepreneurs use to appraise and calculate: the existence of a market in the means of production, a market that brings about money prices based on genuine profit-seeking exchanges by private owners of these means of production. Since the very essence of socialism is collective ownership of the means of production, the planning board would not be able to plan, or to make any sort of rational economic decisions. Its decisions would necessarily be completely arbitrary and chaotic, and therefore the existence of a socialist planned economy is literally "impossible" (to use a term long ridiculed by Mises's critics). The Soviet central planners never had an answer to this critique. Indeed, their "answer" only came in 1991 when the USSR finally shut itself down. And even up to the end, American Keynesians never figured it out either, with Paul Samuelson still claiming in 1989 that a "socialist command economy can function and even thrive." Why Did It Take So Long? In response to Mises's claim of the impossibility of central planning, some then ask, "Well, if central planning was impossible, why did it last so long?" The answer can be found in the fact that even in a centrally planned state, capital does not simply vanish overnight. The Soviet planners were not starting with nothing. They had the accumulated capital of centuries of savings and investment by Russians, Ukrainians, Germans, Poles, and others under their control. True, it was not possible for them to correctly plan or determine non-arbitrarily what goods should be produced. 
But they nevertheless had large amounts of capital at their disposal, and even if the centrally planned state produced zero wealth (which was not true, since even the Soviet state produced some things people wanted), the state still had plenty of wealth to redistribute until it was all gone. This is all the more true for regimes that are only partly centrally planned, as in the case of Venezuela, about which Nicolás Cachanosky observed: [I]f one of the wealthiest and developed countries in the world were to adopt Cuban or North Korean institutions overnight ... [t]he wealth and capital does not vanish in 24 hours. The country would shift from capital accumulation to capital consumption and it might take years or even decades to drain the coffers of previously accumulated wealth. In the meantime, the government has the resources to ... enjoy the wealth, highways, electrical infrastructure, and communication networks that were the result of the more free-market institutional realities of the past. Eventually, though, the "reserve fund," as Mises called it, is used up: An essential point in the social philosophy of interventionism is the existence of an inexhaustible fund which can be squeezed forever. The whole system of interventionism collapses when this fountain is drained off: The Santa Claus principle liquidates itself. In addition to this, the Soviets made money for the regime by selling oil (and other goods) in international markets, and high oil prices in the 1970s propped up the regime so well that, had it not been for Soviet oil sales, it's quite possible the regime would have collapsed a decade earlier. Conclusion As the mainstream news outlets cover the anniversary of the Berlin Wall's fall this year, they will surely spend much time discussing the role of various American politicians, military programs, and international relations. It is quite possible that all of these things had non-trivial effects on the regimes of Eastern Europe. 
Nonetheless, such analysis ignores the elephant in the room: the inevitable failure of regimes that are built on central planning and wealth redistribution. Without markets and prices, there can be no planning; without planning, no wealth creation; and ultimately, no political durability. The rebels and demonstrators of Eastern Europe deserve immense credit for courageously standing up to the state. But in the end, those who were successful were helped immensely by good timing and bad economics. [Editor's Note: This article was first published in 2014 to mark the 25th anniversary of the fall of the Berlin Wall. It has been slightly updated for the 30th anniversary.]
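The calculation argument above lends itself to a toy numeric sketch. Everything below is invented for illustration (the goods, quantities, and prices are hypothetical): with money prices for the means of production, heterogeneous input bundles collapse into one commensurable cost figure; strip the prices away and no non-arbitrary comparison remains.

```python
# Toy sketch of Mises's calculation problem (all numbers hypothetical).
# Two techniques produce the same output from different input bundles.
technique_a = {"steel_tons": 4, "labor_hours": 10, "coal_tons": 2}
technique_b = {"steel_tons": 1, "labor_hours": 30, "coal_tons": 5}

# Factor-market prices (money per unit), available only where private
# owners of the means of production actually exchange them.
factor_prices = {"steel_tons": 500.0, "labor_hours": 20.0, "coal_tons": 100.0}

def money_cost(inputs, prices):
    """Sum input quantities weighted by money prices: one common unit."""
    return sum(qty * prices[good] for good, qty in inputs.items())

cost_a = money_cost(technique_a, factor_prices)  # 2000 + 200 + 200 = 2400.0
cost_b = money_cost(technique_b, factor_prices)  # 500 + 600 + 500 = 1600.0
cheaper = "technique_b" if cost_b < cost_a else "technique_a"
print(cheaper)  # the entrepreneur economizes without debate

# Without factor prices, the planner faces only incommensurable physical
# quantities: is 3 extra tons of steel worth 20 labor hours and 3 tons
# of coal? There is no non-arbitrary way to say.
```

The point of the sketch is not the arithmetic but its precondition: the `factor_prices` table exists only where a market in the means of production exists, which is precisely what collective ownership abolishes.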
  • As Gun Owners Look to Nullify Gun Laws, "Sanctuary" Isn't Just for Immigrants    (2019-11-07)
Since President Donald Trump's win in 2016, people have been led to believe that the Trump administration would be one of the most pro-gun administrations ever. However, a bump stock ban courtesy of the ATF and the passage of Fix-NICS legislation have disappointed gun owners who believed Trump would make at least some marginal pro-gun reforms. To Trump's credit, he is reportedly resisting the temptation of pushing "red flag" gun confiscation legislation — for the time being. Not surprisingly, the real progress has been at lower levels of government, where local activists and policymakers have managed to expand laissez faire on the matter of private self-defense. For starters, constitutional carry has had unexpected success in 2019, with states like Oklahoma, South Dakota, and Kentucky making it law in their respective jurisdictions. This continues a decades-long trend in which state governments have scaled back gun regulations. Even more interesting are the county nullification efforts taking place across the country. What started out as Second Amendment Preservation Ordinances in rural Oregon counties tired of gun control coming from Salem has turned into a nationwide movement of local government officials and their supporters. From Oregon to Rhode Island, counties and municipalities have announced they will not enforce various state and federal gun laws. [RELATED: "Make Every State a Sanctuary State" by Ryan McMaken] From the looks of it, some of America's smaller states with Democratic-controlled legislatures are witnessing a rural uprising. But it would be a mistake to believe only small and medium-sized states are joining in the gun control nullification fun. In fact, there is evidence these nullification movements are going beyond Democratic and Republican politics. States like California and Texas are joining the fray by passing their own gun sanctuary resolutions. 
The small town of Needles, California, got the ball rolling in July by becoming a gun rights sanctuary. California is already ranked at an abysmally low 46th place for "best states for gun owners" according to Guns & Ammo magazine. Considering its state politics, it makes more sense for gun owners to set up pro-self-defense enclaves in California's rural areas. Texas is also seeing a rising trend in support for gun sanctuaries. This is occurring as Texas experiences a changing political environment, in which it may no longer be viable for bold pro-gun policies such as constitutional carry to be pursued at the state level. On top of that, the allegedly "pro-gun" Republican Lieutenant Governor Dan Patrick has entertained passing universal background check legislation. Given these circumstances, Texas gun owners should throw in the towel, right? Not so fast. Recognizing that the state and the federal government won't save them, gun activists have taken to rural counties to stand up against gun control. Since the border county of Presidio became a Second Amendment sanctuary in July, several other counties such as Hood County and Parker County in North Texas have passed sanctuary resolutions. The gun sanctuaries passed in Texas are not isolated incidents, and due to the deeply rooted gun culture in the state's rural areas, more rural counties will likely follow. Even states like North Carolina are witnessing counties within their jurisdiction take the initiative on nullifying gun control. There's no question this entire process is a mess. Having a patchwork of Second Amendment zones across a state looks chaotic. But it reflects a politically mature appraisal of the changing political and demographic trends that have made conventional political tactics obsolete. After all, America is a massive country, with a diverse political environment from state to state, and even within localities. In turn, political operatives will have to adapt. 
Importantly, localist action shifts people's mindset away from reliance on centralized, universalism-minded public administration. Some activists are re-learning how decentralized political systems (i.e., confederations and other federal systems) are supposed to work. They're supposed to allow for — and even institutionalize — local opposition to political efforts by the central government. After all, local control allows different jurisdictions to enact the sorts of laws that local residents support — regardless of what some politicians in the national capital might think.
  • Austrian Student Scholars Conference, Feb. 21-22, 2020    (2019-11-07)
    Grove City College will host the sixteenth annual Austrian Student Scholars Conference, February 21-22, 2020. Open to undergraduates and graduate students in any academic discipline, the ASSC will bring together students from colleges and universities across the country and around the world to present their own research papers written in the tradition of the great Austrian School intellectuals such as Ludwig von Mises, F.A. Hayek, Murray Rothbard, and Hans Sennholz. Accepted papers will be presented in a regular conference format to an audience of students and faculty. Keynote lectures will be delivered by Drs. T. Hunt Tooley and Christopher Coyne. Cash prizes of $1,500, $1,000, and $500 will be awarded for the top three papers, respectively, as judged by a select panel of Grove City College faculty. Hotel accommodation will be provided to students who travel to the conference and limited stipends are available to cover travel expenses. Students should submit their proposals to present a paper to the director of the conference (herbenerjm@gcc.edu) by January 15. To be eligible for the cash prizes, finished papers should be submitted to the director by February 1.
  • Evidence-Based Economics: What the Doctor Ordered?    (2019-11-06)
Will the randomized control trial bring more clarity and certainty to economic science? Is "evidence-based economics" something to be hailed as a welcome innovation, or should it be appraised with a more sober attitude? To examine this topic and discuss the relative place of randomized trials in economics and medicine, we have as our guest Peter G. Klein, W. W. Caruth Chair and Professor of Entrepreneurship at Baylor University's Hankamer School of Business. Professor Klein is also the Carl Menger Research Fellow at the Mises Institute. He obtained his PhD in Economics from the University of California, Berkeley, and his BA from the University of North Carolina at Chapel Hill. His research focuses on the economics of entrepreneurship and business organization. He taught previously at the University of California, Berkeley, the University of Georgia, the Copenhagen Business School, and the University of Missouri, and served as a Senior Economist with the Council of Economic Advisers. He is the author of five books and numerous peer-reviewed articles.
  • Family Formation, Fertility, and Failure: A Literature Review on Price Increases and Their Impact on the Family Institution    (2019-11-06)
Quarterly Journal of Austrian Economics 22, no. 2 (Summer 2019) full issue. ABSTRACT: Inflation not only debases the value of currency by lowering purchasing power. It also serves to erode the quantity and quality of marriages while creating distortions in the decision-making processes of those hoping to form marriages and to have children. Furthermore, a loss of purchasing power helps to create relational tension for married couples, contributing to increasing divorce rates throughout the globe. As for the formation of families via marriage, the literature surrounding inflation and the family shows that price increases in higher education and housing both limit the number of first marriages and raise the average age at which they occur. These phenomena are present in Western democracies, Islamic theocratic regimes, and highly developed East Asian economies. Rising prices also impact already-married couples who would procreate but decide to accelerate or nearly eliminate child-bearing based on the inflationary environment in which they live. Finally, the literature shows that a loss of purchasing power leads to marital tension and higher rates of divorce. This trend is exhibited all over the world, occurring across cultural and religious systems as well as differing levels of economic development. While the problem of rising prices is economic in nature, it is shown to have deleterious effects upon the family institution. Keywords: inflation, family. JEL Classification: D10, E34. INTRODUCTION At the centennial gathering of the American Economic Association, Dr. Gary Becker addressed those assembled and described a growing awareness of how macroeconomic forces affect the family institution. 
In his concluding remarks, he noted that the "evolution of the economy greatly changes the structure and decisions of families." (Becker 1988) The aim of this review is to summarize the literature that describes how a variety of rising prices impact family formation, fertility, and failure. Since family institutions exist throughout the varied cultures of the world, the review will observe the heterogeneity of inflation's impact on the family across cultures and national borders. There are two key distinctions that this writer wishes to articulate. The first is with regard to the philosophical framework and definition of the family institution. In my understanding of the role of family in society, I borrow from the Dutch Calvinist philosopher Abraham Kuyper (while rejecting his views on state intervention into various markets, among other views). He describes the family as a divine creation. As such, this 'sovereign sphere' is an institution designed with its own rights, responsibilities, norms, roles, and limits. In addition, the family institution is not subject to the control of other institutions such as ruling authorities, religious institutions, or markets. At the same time, the family social unit freely interacts with all of the other institutions without being absorbed by them or otherwise diminished in its role as the primary way in which children are raised, educated, and socialized. In Kuyper's view, the other divinely inspired institutions such as markets (for goods, services, money creation, and financial assets) and governing authorities also have their own divinely constructed purposes, jurisdictions, and limits. Our world now represents what happens when mankind rejects the divine order and decides to merge the distinct spheres, regardless of the rationale for doing so. The most important example of such an 'unholy marriage' in our time occurs when the market for money creation is combined with the governing institution. 
In such instances both of these institutions have already stepped outside of their divinely-ordained spheres of operation. Furthermore, this new, man-made organization necessarily sets itself up against the other institutions that choose to retain their intended form and function. As such, this man-made entity will inevitably infringe on the proper operations of the other sovereign spheres. This viewpoint provides a narrative for the corrosive effects of modern central banking cartels upon the family institution. In Kuyper’s view, “surely, to centralize all power in the one central government is to violate the ordinances that God has given for nations and families. It destroys the natural divisions that give a nation vitality, and thus destroys the energy of the individual life-spheres and of the individual persons.” (Van Dyke 2015) This observation is quite meaningful in our time as we observe both Central Bank inflation and the ‘deinstitutionalization’ (Cherlin 2004) of the family and of marriage throughout the world. Altered family structures and decisions are seen in the delay of family formation and by increasing divorce rates across the globe. Perhaps unwittingly, Cherlin affirmed that a growing disregard for Kuyper’s definition of the family institution has emerged in 20th century America for the very reasons that Kuyper describes. In light of this deinstitutionalization, this literature review seeks to describe how researchers have linked declining purchasing power to the crumbling institution of marriage. The second point of clarification is that the writer of this review defines inflation as any increase in the supply of money and credit. I reject the notion of ‘inflation’ as an increase in government-measured price levels. In most cases, the literature does not adopt this writer’s definition. Therefore, when some writers refer to “inflation”, I will refer to “price increases”. 
The first reason for this clarification is based on Richard Cantillon's observation that price increases are neither simultaneous nor universal, nor do they occur to the same degree in all places after monetary injections have occurred. (Murphy 1989) In addition, the writer discards the mainstream use of the term 'inflation' on the grounds that it is considered to be synonymous with the US Bureau of Labor Statistics' Consumer Price Index. I reject the CPI and the commonly used term 'inflation' precisely because they do not reflect the experience of rising prices within households, across income levels, races, or even genders. (Michael 1975, Hobijn and Lagakos 2003, Armantier et al. 2012, Sequino and Heintz 2012, Bryan and Venkatu 2002) To provide an overview of the literature on family formation, I first begin with the economic theory of family formation articulated by Gary Becker. In 1974, he described marriage as a process of "positive assortative mating" in which potential spouses seek to improve their overall utility as compared to the utility held by remaining single. The existence of mate selection processes by no means requires one to dismiss marriage as a divinely created institution. The fact that men and women select partners to improve overall utility does not mean that the institution is therefore simply man-made. A person who selects a seat on an airplane based on subjective values that account for price, comfort, and other factors is not thereby responsible for the construction of the plane or for the physical laws that govern its operation. Likewise, in mate-seeking, as potential partners consider factors such as IQ, education level, height, ethnicity, and more in the desire to gain additional marginal utility in a state of marriage, they do not alter the fundamental design of marriage itself. 
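Becker's comparison, as paraphrased above, reduces to a simple inequality: a match forms when the utility of the joint household exceeds the combined utility of remaining single. A minimal sketch follows; the function name and all utility numbers are this editor's, purely illustrative, not Becker's notation:

```python
def marriage_surplus(joint_utility, single_utility_m, single_utility_f):
    """Becker-style gain from marriage: joint utility minus the sum of
    the utilities each partner would retain by staying single."""
    return joint_utility - (single_utility_m + single_utility_f)

# Hypothetical numbers: a positive surplus predicts a match forms.
print(marriage_surplus(25.0, 10.0, 9.0))   # 6.0  -> marry
print(marriage_surplus(17.0, 10.0, 9.0))   # -2.0 -> both stay single
```

On this reading, anything that lowers expected joint utility — student debt brought into the match, unattainable housing — shrinks the surplus without anyone redefining the institution itself.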
Recent scholarship has revealed that in the US, price increases and the ensuing distress borrowing by young people are positively correlated with the average age of first marriage. (Bozick and Estacion 2014, Gicheva 2016) To be more precise, education-based debt levels of the potential spouse have been shown to have a negative impact on the positive assortative mating that Becker describes. These high debt levels are associated with inflation in higher education tuition. However, these high debt levels are more consistent with the educational attainment of middle and upper-middle-class American youths who can afford to delay the necessity of work. Among these more affluent young people, it is obvious that not every couple wishes to live together or to have children. The literature shows that among those with higher incomes and higher levels of educational attainment, many couples avoid traditional marriage and choose to cohabitate simply because they have different costs in view than their poorer counterparts. The literature affirms that these couples are waiting to reach financial milestones, not in terms of nominal cash holdings or income, but in terms of real asset and property accumulation. The achievement of such goals is made more difficult by a lack of purchasing power. (Smock et al. 2005) All told, the literature paints a picture wherein inflationary pressure across geographic, racial, and educational descriptors is linked to delayed marriage formation and non-formation in the case of cohabitating couples. In the case of the poor in the US, rising prices for the goods that they consume have been shown to increase criminality among single males. Males' resort to more lucrative criminal activity, and the incarceration that often ensues, has resulted in low marriageability status across cohorts in the United States. 
Another consequence of male criminality is the increasing number of fatherless children across racial groups, as these men are viewed as reproductive partners but not as traditional husbands or fathers. (Rosenfeld et al. 2018) This situation further erodes the family institution when coupled with the wage inflation that has been more prominent for females in the US, which leads many women (even those with low labor productivity) to eschew husbands as providers in exchange for provision from the state, their own wages, or older family members. (Schneider et al. 2018) The presence of increasing prices has also been shown to impact the fertility rates of married couples over time and in different cultural and economic settings. Robert T. Michael observed that wealthy and poor households experience price increases differently in the modern US economy. Since this is the case, it follows that the poor and the wealthy would approach fertility decisions differently as well. (Michael 1979) This observation foreshadowed Caldwell's work in international family economics, which asserted that families in low-income countries respond to price increases by growing the number of children they bring into the world. They decide to do so because their offspring represent net positive income flows and because the children can contribute to overall family wealth with their low-skilled labor. Conversely, in developed nations, the increasingly high price of educating children for modern economic life leads families to have fewer children, as each child produces negative income flows during their years under their parents' roofs. (Caldwell 1983) In more recent research, Kaplan suggested an update to Caldwell's view by introducing different measures for ascertaining intergenerational wealth flows. Specifically, there is a call to recognize the fact that underdeveloped economies often measure wealth in terms of commodity acquisition rather than nominal monetary amounts. 
(Kaplan 1994) When considering the body of literature on how higher prices affect fertility, it is shown that a lack of wealth across time, culture, and economic standing produces fertility rates that are distortions from a natural state of reproductive supply and demand within households. Ultimately, developed nations tend to have lower fertility rates, leading some to fall short of zero population growth, while developing nations still struggle with the challenges of young, booming populations. When families in these poor nations also experience price increases, fertility rates are shown to increase further. The literature also demonstrates that there is a negative relationship between price increases and fertility in developed nations. Across the world, families also dissolve under the pressure of escalating prices. The literature examining US divorce rates since 1929 has shown that periods of large price increases bear a robustly positive relationship to marriage dissolution. (Nunley and Zietz 2012) This relationship was most powerfully illustrated throughout the 1960s and all the way through the Vietnam Era. The escalating prices caused by 'Great Society' legislation such as Medicare and Medicaid, coupled with the massive expenditures on the war in Vietnam, represented a shift of real resources from American households to the welfare state and to war-making. These macroeconomic realities left the already-married with a loss of purchasing power and all of the relational strains that come along with it. Some literature has asserted that this ongoing increase in divorce rates in the US, and particularly the high divorce rates of the 1970s, was caused by the adoption of no-fault divorce law. (Peters 1993, Friedberg 1998, Rogers et al. 
1999) However, Wolfers finds that while these changes in the legal environment did have an initially positive but weak correlation with divorce rates, these effects did not persist over time. (Wolfers 2006) Meanwhile, in the UK, a similar conclusion was reached by research showing that the liberalization of divorce law simply lowered the cost of the divorce transaction, ensuring the end of marriages that were already "on the rocks" while having no impact on the long-run trend. (Smith 1997) Throughout the European continent, divorce rates have also climbed substantially in the post-WWII era. Some have indicated that the rise of the welfare state (itself a part of the inflationary regime) has encouraged both lower rates of family formation and more frequent divorce. (Balestrino et al. 2013) In southwestern Asia, price increases in the Iranian housing sector and for dowry payments have been shown to drive increased divorce rates from 1982 through 2010. (Farzanegan and Gholipour 2015) The authors note that this is a particularly troubling social trend in such a conservative Islamic state. Also, in Pakistan, connections have been drawn between price increases for the goods that households typically consume and increases in domestic violence and female spousal abuse, clinical depression among both men and women, and, understandably, an increasing divorce rate. (Khanam et al. 2015) Across the planet, central bank inflation has led to price increases in the markets for goods that are important to families everywhere. There is little doubt that local customs surrounding family formation and expectations for familial behavior can produce a wide variety of responses to the loss of purchasing power. In order to capture more of those specific examples, this review will now turn to a more precise look at the realities of family life under the pressure of elevated prices. 
PRICE INCREASES AND FAMILY FORMATION In 1960, the average age of first family formation for US females was 20.1 years and 22.2 for males. Forty years later, the average age of first marriage for US women had jumped to 24.4 and to 26.1 for men. (Schoen and Canudas-Romo 2005) Even greater change has been afoot in England and Wales during the same time period. There, the average age of first marriage for women went from 21.0 to 26.3 and from 23.4 to 28.3 for men. At the same time, these researchers note the increase in cohabitation as an alternative arrangement for adults living together, leading to a decline in the real prevalence of marriage over time. Recent research in the US regarding the later age of first marriage has yielded findings stating that the rising cost of higher education and the accompanying debt load held by both men and women are positively correlated with the average age of first marriage. (Addo et al. 2018) More specifically, Addo's research shows that the greater the student loan debt, the more likely young men and women are to cohabitate, and for longer, as opposed to entering a marriage relationship. Furthermore, with more education, young people expect to marry someone of similar educational status. (Becker 1974) In addition, it has been shown that MBA students not only raise their age of first marriage but also decrease the likelihood of ever being married at all. (Gicheva 2016) This relationship is stronger among female MBAs than among their male counterparts. Research has found that for every $1,000 in student loan debt that women carry, they reduce their odds of first marriage by 2 percent per month after undergraduate graduation. (Bozick and Estacion 2014) Although massive higher education debt loads are peculiar to the US, it is plausible that a person who carries significantly negative net worth into the housing market will seem less marriageable. (Bleemer et al. 
2014) Alongside such credit-based challenges is the lack of affordable housing, a phenomenon that is hardly unique to the US. In Great Britain, price increases in the housing market have also kept young people at home (and single) longer than in past decades, thus delaying the age of first marriage there as well. (Ermisch and Francesconi 2003) The marked increases in East Asian first marriage age have also been driven by housing prices. In Singapore, qualitative studies of the attitudes of young singles make it clear that the male is under considerable social pressure from family and his potential spouse to acquire a flat. (Jones et al. 2012, Quah 2008) Delays in first marriage in Eastern Asia demonstrate trends similar to those in the US, but to an even higher degree. In Japan, the average age of first marriage for men has risen from 26.9 in 1970 to 30.5 in 2010; their female counterparts have seen that figure rise from 24.2 to 28.8. (Raymo et al. 2015) The pattern is similar in South Korea and Taiwan. The question addressed here is whether price increases have anything to do with this phenomenon. The literature does indicate that the later age of family formation in East Asia is largely driven by rising prices, including prices in the housing, education, food, and energy sectors. It is apparent to researchers that these increasing costs of living do have a positive relationship to age at first marriage. (Park and Sandefur 2005) These rising prices are coupled with cultural expectations of aspirational consumption, which also contribute to first marriage delays. (Mu and Xie 2014) Other features that delay first marriages in East Asia include extended family expectations of co-residence, educational mismatches in the marriage market, and extended family expectations regarding fertility. Despite these nuances, the common theme of price increases and their positive correlation to age at first marriage is present both in the East and the West. 
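The Bozick and Estacion figure cited in this section (a 2 percent reduction in monthly first-marriage odds per $1,000 of student debt) compounds rather than adds if read multiplicatively. The function below is this editor's back-of-envelope reading of that figure, not the authors' own specification:

```python
def odds_multiplier(debt_dollars, cut_per_1000=0.02):
    """Multiplicative reading: each $1,000 of debt scales the monthly
    odds of first marriage by a factor of (1 - cut_per_1000)."""
    return (1.0 - cut_per_1000) ** (debt_dollars / 1000.0)

# A graduate with $10,000 of debt keeps about 81.7% of the debt-free
# odds -- roughly an 18% cut, versus 20% under a naive additive reading.
print(round(odds_multiplier(10_000), 3))  # 0.817
```

The gap between the compounded 18 percent and the additive 20 percent is small at these debt levels but widens as balances grow, which matters for the heavily indebted graduates the passage describes.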
PRICE INCREASES AND FAMILY FERTILITY When Robert T. Michael observed that poor families within the US experienced price increases differently than the wealthy, and that their experience was worse than reported CPI measurements suggested, it became clear that the poor would behave differently than those of higher income levels in the face of rising prices. (Michael 1979) This reality inside the US makes Caldwell's views on family behavior in underdeveloped economies versus industrialized nations all the more understandable. His theory of wealth flows explained that parents in underdeveloped parts of the world would respond to their lack of labor productivity and purchasing power with a set of choices distinct from those of their counterparts in the industrialized nations. He observed that because low-skilled labor and wages were attainable by young children, parents would rely on their children's income-generating efforts to combat the family's lack of wealth. By this reasoning, parents would not only expect their children to work at an early age but would also respond to these conditions by having even more children, thus increasing fertility per female in the underdeveloped world. In addition, the US welfare system creates incentives for unwed mothers to have more children and not to educate them beyond their years of free public education. If parents funded higher education, children would begin to represent a negative net income flow. Thus, unwed and poor mothers face an incentive structure that encourages non-education and the immediate (though short-term) benefits of children working in low-skilled labor markets in order to contribute to increased family income. (Caldwell 1983) When it comes to the middle class and wealthy in the United States, declining fertility rates have not only been linked to the reasoning provided by family economists like Caldwell; healthcare economists have weighed in as well. 
The Journal of Medical Economics contends that the delay in first marriage and family formation contributes to overall lifetime fertility decline. With the increase in age of first marriage, and subsequent first conception within marriage, fertility rates are lower among women who have their first child later in life. This observation may seem obvious, even trivial. However, if it is clear that economic realities affect physiological outcomes, it is easy to see why some would describe rising prices and the subsequent loss of fertility as a public health concern. (Tannus and Dahan 2018, Sunderam et al. 2015) Simply put, the literature demonstrates a chain of events in which increases in education costs and housing prices delay first marriage and first childbirth, and ultimately lead to diminished fertility. To quantify the decline in fertility in the US, the average number of children per woman plummeted from 3.65 in 1960 to 1.84 in 2015. (FRED 2019) Further study on the connection between rising prices and falling fertility suggests that parents sense a moral obligation to refrain from having children during periods of money and credit expansion via central bank policy. (Abo-Zaid 2013) In this line of reasoning, parents observe climbing prices and recognize that providing education, nutrition, and general care will be more difficult as they lose purchasing power. Furthermore, this loss of purchasing power leads many married couples to seek more than one income, making child-rearing more difficult as the couple demonstrates a subjective preference for time spent working over time spent raising children. Earlier literature defends a model in which this outcome means that the children of these parents will decrease the next generation's labor supply, driving output per capita higher for women, who will then substitute income earning for child-bearing. (Galor and Weil 1996) This theoretical connection, however, has not been found robust by some (Jones et al. 2012), who assert that the same would be true for males, whose greater earning power would enable women to resume more traditional child-rearing roles. Innovative research from England and Wales has emerged as researchers have sought to distinguish the fertility response to home-price increases between renters and existing homeowners. The findings for renters complement those in the US: higher home prices deter would-be owners from having more children. This negative relationship between housing prices and fertility does not hold, however, for British homeowners from 1995 to 2008. (Washbrook 2018) Although there is a positive relationship between home prices and fertility for homeowners, this effect was found to be temporary. The finding is not necessarily contrary to economic theory, because homeowners believe they will acquire more wealth in the future through the sale of the home, and it is plausible that this anticipated increase in wealth makes them feel able to support more children. This explanation of homeowner behavior is consistent with earlier studies in the US, where renters show a 2.4 percent decline in fertility for every $10,000 increase in average home prices, while homeowners respond with a 1 percent increase in fertility. (Dettling and Kearney 2011) If we use Becker's reasoning to shed light on this outcome, it is reasonable to posit that for renters, the cost of a future home leaves too little to afford the delivery and care of an additional child. The reasoning is reversed for families who already own a home: they may view the potential proceeds of the sale of their home as a financial benefit great enough to allow them another child, and perhaps the purchase of a new home with more space to accommodate additional children. 
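A back-of-the-envelope sketch can make the renter/owner contrast concrete. The following assumes the per-$10,000 effects summarized above from Dettling and Kearney (2011) can be treated as linear over a modest price range; that linearization is an illustrative simplification of this review, not a claim from the paper.

```python
# Illustrative use of the Dettling and Kearney (2011) estimates as summarized
# above: per $10,000 increase in average home prices, fertility falls about
# 2.4% for renters and rises about 1% for homeowners.

RENTER_EFFECT_PER_10K = -0.024  # proportional fertility change per $10,000
OWNER_EFFECT_PER_10K = 0.010

def implied_fertility_change(price_increase_usd: float, owns_home: bool) -> float:
    """Implied proportional change in fertility for a given home-price
    increase, using the linearized per-$10,000 effects."""
    effect = OWNER_EFFECT_PER_10K if owns_home else RENTER_EFFECT_PER_10K
    return effect * (price_increase_usd / 10_000)

# A $50,000 run-up in average prices implies roughly a 12 percent fertility
# decline for renters but roughly a 5 percent increase for homeowners.
renter_change = implied_fertility_change(50_000, owns_home=False)
owner_change = implied_fertility_change(50_000, owns_home=True)
```

The same price shock thus pushes the two tenure groups in opposite directions, which is the asymmetry the British and US studies both report.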
It has been found that for East Asian families, the negative relationship between price increases and fertility is not only present but has even stronger effects than in the West. Japan has been at or below replacement rates since 1957. South Korea has experienced a rapid decline in fertility since the 1970s, and Taiwan's fertility reached an extremely low 0.9 children per mother in 2010. (Raymo et al. 2015) Once again, these lower lifetime fertility rates are associated with a higher age at a mother's first marriage and first birth, spurred by the high costs of education and housing. In fact, the average age at first delivery in Japan reached 29.3 years in 2010. In the same year, the mean age at first birth reached 30.1 in South Korea and 29.6 in Taiwan. Other literature on East Asia explicitly applies Becker's model of fertility behavior in studying the impact of housing prices on fertility rates in Hong Kong from 1971 to 2005. (Yi and Zhang 2009) Using a cointegration analysis, researchers found that every 1 percent increase in housing prices was associated with a statistically significant 0.45 percent decline in fertility rates. Further testing revealed that housing price inflation can account for about 65 percent of the fertility decrease in Hong Kong since the 1970s. The general pattern of the literature paints a picture of middle-class and wealthy families in developed nations who reduce their fertility in response to rising costs of housing and education. In pre-modern economies, as well as among the poor in developed nations with sizeable welfare states, the literature points to a pattern in which parents increase their fertility in order to benefit from the net positive income that children can produce. This is especially the case in low-skilled labor markets within those nations. 
Parents in those situations will often remove their children from schooling because the opportunity costs to the family's standard of living are too great. (Rosenzweig and Evenson 1977) PRICE INCREASES AND FAMILY FAILURE When substantial price increases occur in the markets for goods and services demanded by married couples, the returns on staying married are diminished. This finding by Nunley and Zietz is clarified by observing the dramatic rise in the US new-divorce rate through the 1960s and 1970s. When the stagflation era ended in the early 1980s, they find, the slowing of price-level increases also contributed to a decline in the rate of new divorces that continued through 2005. (Nunley and Zietz 2012) The literature also shows that unexpected macroeconomic shocks, such as rising prices or rising unemployment, produce higher rates of divorce. (Becker et al. 1977) The causal link between the rising price of consumption goods and divorce begins when spouses must increase the quantity of labor they supply in order to maintain the levels of spending and leisure they had previously enjoyed. This leads to a decrease in time spent on leisure and household production, producing relational tension and conflict. Since potential wage increases do not keep pace with price increases, the financial and relational returns on the marriage relationship worsen. (Christiano et al. 2001) Some literature has emphasized increases in women's educational attainment and their higher labor force participation rate as leading causes of increased US divorce rates in the 1960s and 70s, but these findings are not without controversy. (Lombardo 1999) The dispute arises because others have found that it is the higher divorce rate which drives an increase in the labor force participation rate among women who had already received higher amounts of education in the preceding decades. 
(Bremmer and Kesselring 2004, Spitze and South 1986, Mincer 1984) This approach suggests a feedback loop in which more divorce leads to more female labor force participation, which leads to greater earning power, and eventually to more divorce. Yet another explanation for the higher divorce rates of the 60s and 70s is that rising prices required both spouses to work outside the home. This macroeconomic shock accelerated women's entry into the workforce, which created the relational tensions already described. The resulting large-scale female entry into the workforce disrupted familial harmony, child-rearing patterns, and the domestic division of labor. This narrative is substantiated by Nunley and Zietz's empirical methodology, which produced results showing a positive relationship between inflation rates, nominal GDP growth rates, increasing women's educational attainment, and divorce rates. (Nunley and Zietz 2012) However, the study does not establish links among those three determinants, which leaves opportunities for further study. One important caveat to this explanation is that it does not include the liberalization of divorce laws. While some have suggested that liberalization was a driving force behind the high divorce rates (Friedberg 1988), Nunley and Zietz exclude this variable because the literature shows that the advent of no-fault divorce law in the US had a small and short-lived positive impact on divorce rates, but no impact on the long-term frequency of divorce. (Wolfers 2006) In the British context, the literature likewise seeks to establish the multivariate causes of divorce, including the legal framework within the UK as well as macroeconomic triggers. It notes that when reforms in divorce law were introduced in the European context, divorce rates were already climbing, and the supply of these reforms merely met the demand for innovation and lower costs in divorce. 
In other words, the reforms were a response to, rather than a cause of, rising divorce rates. (Becker 1993, Michael 1988) Over the post-WWII period, the UK has experienced trends in divorce rates similar to those in the US. The British saw rapid increases in rates of divorce in the 1960s and 70s, and rates were lower from the rise of Thatcher onwards. (Smith 1997) In Britain, Smith reaches conclusions similar to those Wolfers reached in the US: while there is a positive correlation between divorce liberalization and the divorce rate, the correlation is weak and temporary. However, there is a clear lack of literature identifying the impact of price increases on divorce rates in Great Britain. This gap may result from a lack of concern over the issue in general, as societal values have moved from viewing divorce as a taboo to viewing it with indifference. In the Middle East, the issue of divorce is hardly viewed lightly and is even considered a public health risk due to its negative impact on children and women. (Barikani et al. 2012) A significant body of research has come from Iran in recent years. When examining the causes of a rising divorce rate in the Islamic Republic, both women and men cite economic dependence upon other family members for maintaining an acceptable standard of living as a leading cause of divorce. Fifty-eight percent of men seeking divorce in this literature cite economic dependency upon extended family members as a driving force in the dissolution of their marriages, while 49 percent of women say the same. Furthermore, 53 percent of divorced women specifically cited their former husband's inability to pay for the rising cost of living as a prominent factor in their divorces. Additional research from Iran indicates that from 2002 through 2010, Iran reached the highest divorce rate in the Islamic world. 
Furthermore, the price of housing, both for renters and owners, was directly linked to marital tension and divorce. (Farzanegan and Gholipour 2015) In addition, rising unemployment rates and increases in both public and private spending on education were positively correlated with this change in divorce rates. In a unique urban setting, Tehran was found to have thousands of vacant investment residences, reducing the effective supply and driving rental prices to very high levels. Farzanegan and Gholipour are careful to point out that sudden and unexpected surges in housing prices have even stronger positive effects on the divorce rate. In an interesting note on education spending in Iran, these researchers explain (like Caldwell) that increasing prices for education also imply lower fertility rates among married couples. The lower number of children in turn diminishes the social pressure on families to remain together. In other words, families with fewer children have a higher likelihood of divorce than those with more. A compelling cultural idiosyncrasy in Iran that has been shown to drive increasing divorce rates is the practice of 'Mehrieh,' or dowry. This payment is traditionally required to be delivered in gold coins. (Farzanegan and Gholipour 2018) The price of gold as a reliable measure of the loss of overall purchasing power is one of the most commonly accepted premises in monetary economic theory. The Mehrieh asserts the legal right of the wife to request payment in gold jewelry or coin at the time of marriage or after the marriage has already begun. The ever-increasing nominal price of gold places great financial strain on the male partner, producing further marital tension. The existence of this arrangement has also been found to raise the age of marriage formation by an average of three additional years from 1986 to 2011. The cultural purpose of the Mehrieh is to act as a form of self-insurance for the wife and her family. 
It is used to cushion the financial blow of a divorce and protect women from economic ruin afterward. The Mehrieh thus lowers the cost of divorce for women, making it less likely that women will remain in tense marriages. In addition, young brides who are aware of the diminishing purchasing power of currency versus gold actually plan for an early divorce in order to collect the Mehrieh as an appreciating asset and thereby facilitate their own independent living arrangement. Although this narrative may present a system of perverse incentives to the western mind, it does illustrate the similar effects on families of falling purchasing power against real assets like gold or housing, and the increasing divorce rates that follow. (Conger et al. 1990, Jensen and Smith 1990, Amato and Beattie 2011, Harknett and Schneider 2012, Dehghanpisheh 2014) East Asia, like the Near East and the West, has also experienced rising divorce rates, and the literature points to causes similar to those elsewhere in the world. The general literature on East Asian marriage points out that marriage as an institution has become increasingly less attractive, both to those who would marry and to those already married, and that macroeconomic factors play a significant role in the declining esteem of marriage. (Bumpass et al. 2009, Rindfuss et al. 2004) While many values of East Asian marriage remain intact, some researchers show that it has adopted western values as well. (Cai 2010, Thornton et al. 2012) In light of these changes, projections show that 20 percent of South Korean marriages are expected to fail by 2023. (Park and Raymo 2013) Nearly one-third of Japanese marriages are expected to end in divorce. (Raymo et al. 2004) One important contrast among those who divorce in East Asia is that divorce is clearly more prevalent among lower-income couples than among higher-income families. 
This narrative is similar to the one in Iran, where young and relatively low-income families have difficulty affording suitable housing, and the relational strain placed on marriages has a corrosive effect on their longevity. These lower-income families are also less educated, and researchers have shown a strong negative relationship between education level (and thus earning and purchasing power) and divorce rates. (Chen 2012) CONCLUSION Across the globe, family formation, fertility, and failure are all impacted by rising prices. In the US, the age of first marriage is higher than ever due to high education costs that are manifest in increasing debt loads for young adults. Their incomes are redirected to debt repayment, making already rising housing prices even harder to afford. Throughout Europe and East Asia, housing affordability is likewise leading young people to delay their first marriage. Across the globe, these increasing prices are making cohabitation more financially sensible than marriage. The literature shows that fertility decisions are distorted in the developing world and among the poor in developed countries. A pattern emerges: when prices rise, these parents have more children, as children are viewed as adding to family assets. Such families may later pull children out of school so that the children's low-skilled labor can supplement family income against price increases. This keeps families from investing in their children's education and, eventually, a higher standard of living. Meanwhile, in the developed world, wealthier parents choose to have fewer children in response to increasing costs of housing and of educating their offspring. The body of literature concurs with much of Caldwell's theory of intergenerational wealth flows. 
Research from across the globe shows that when married couples face falling purchasing power, marital tensions rise. In the developed world, couples who face rising prices have less motivation to stay married, as the average number of children is already relatively low. In the underdeveloped world, the literature shows that the relational tension brought on by rising prices is exacerbated by cultural expectations of male provision and by extended-family pressures. This is a recipe for higher divorce rates even among some of the most traditional societies. Together, these findings affirm Kuyper's vision of the family institution under stress from man-made institutions such as central banks that produce debased currency and easy credit. This state of affairs works to the detriment of families, who see the value of their savings, and their purchasing power for the things most important to a suitable standard of living, evaporate. The literature described in this review paints a picture of the 'deinstitutionalization' of marriage and family that Cherlin described. In seeking a common thread in the erosion of the quantity and quality of marriages throughout the world, it is the loss of purchasing power brought on by central bank money supply inflation that drives people away from marriage through delay, cohabitation, and divorce. Children in poor nations suffer under inflation as well, because their parents require them to work so that the family might survive. Meanwhile, in the developed world, children see less of their parents, as two incomes are often necessary to make ends meet, while their parents' marriages are under threat of divorce due to the relational and financial difficulties brought on by rising prices. There is room in the body of literature for a stronger statement on the strength of the correlation between rising prices in the categories important to family formation and the family outcomes described here. 
More work remains to establish how tuition increases in the US lead to higher ages of first marriage. There is also an opportunity to describe the distinctions between fertility decisions in the developed and developing economies of the world. Research could focus on price increases for childcare, education, food, energy, and housing and their impact on fertility rates in both types of economies. The literature on housing and gold price increases and divorce has received a very promising set of studies from Iran, and researchers could attempt the same type of examination in other nations as well. While several attempts have been made to describe liberal divorce laws, changing norms regarding sexuality, and views on cohabitation, there is also room for more specific work to discover which prices in the economy have the most powerful effect on marriages that end in divorce. In the end, we observe that a divinely created institution can have its definition and even its existence malformed and eventually crushed under the weight of man-made institutions like central banking cartels. This description of the deinstitutionalization of the family and marriage should warrant serious attention from Austro-libertarian thinkers, for the simple reason that marriage and family are distinctly non-state institutions that have always been capable of providing wealth, order, and continuity within a framework of peaceful and voluntary cooperation. The family therefore holds a unique place among institutions as a bulwark against the deleterious effects of the welfare state. As such, it is an institution worth defending and strengthening as the Austro-libertarian school aims to abolish man-made central banks and replace them with free-market money production and consumption.
  • Building the Wall Using Eminent Domain Hurts Americans    (David Bier, 2019-11-06)
    David Bier President Trump is hiring attorneys to carry out his mission to build a border wall with Mexico. No, these lawyers will not be tasked with welding steel together or digging holes. Instead, they will focus on filing lawsuits to take private property from Americans who own the land where the president wants to build the wall. This is a misallocation of public resources and a violation of the rights of Americans. The reason is that the border wall will not be constructed on the physical border with Mexico. Border Patrol will build it wherever is most convenient for its purposes. Indeed, some Americans actually live on the "Mexican" side of the wall, even though their homes are located on U.S. soil. They need to pass through a "gate" every day to get to work or school. In Texas, the entire "border" is actually in a body of water, the Rio Grande, meaning it is physically impossible for the government to build a border wall on the actual border line with Mexico. Due to flooding, it is also generally impossible to place the fence anywhere near the banks of the river. The end result is that the government needs to build as far as a mile inland. Land along the Rio Grande in Texas is almost entirely privately owned. Indeed, a third of the border is under non-federal ownership. This means that those lawyers Trump is hiring will be busy, filing hundreds or thousands of actions to take land from the Americans who live where the government will place his wall. In Texas, there are 4,900 parcels of property within 500 feet of the border. Those parcels may have multiple owners, each of whom will lose out from Trump's wall. 
Eminent domain not only deprives the person of the value of the land taken, which could be minimal in some cases and everything in others, but also of the value of the rest of the owner's property. For example, the Bush administration built a fence that effectively put a golf course in "Mexico," which put it out of business in 2015. People left living "in Mexico" get no compensation unless they actually lose their land to the fence. The fact that they now have to knock to enter their own country isn't worth a dime. Likewise, a 10-foot fence built a foot from your property doesn't count either. The entire process of "eminent domain," the legal term for government taking of private property, along the border makes a sham of the Constitution's requirement that "private property (not) be taken for public use, without just compensation." Border Patrol can send a letter requesting that the owner "voluntarily" give up the land in exchange for some minor amount, not based on an appraisal of its value. If owners don't believe that's "just compensation," the government will simply stick the money in an escrow account and take the land anyway. It can build a huge wall through your house while you fight for what you think is fair in court. Of the land already taken, wealthy property owners fought and won settlements triple what the government offered, while poor owners took whatever the government offered. That's a mockery of the Constitution, and multiple bills would fix the issue. The Constitution also requires that takings be done for "public use." Obviously, a border wall based on legitimate security analysis would qualify as a public use, but no independent authority has ever concluded that a border wall will significantly affect illegal crossings. 
The Congressional Research Service concluded that a single-layer “fence, by itself, did not have a discernible effect on the influx of unauthorized aliens coming across the border in San Diego.” The reason that the president is so adamant about building a border wall is not because it is in the public interest, but because it is in his private interest, since he made it the central promise of his campaign in 2016. Building a border wall solely to satisfy a political campaign promise would also render the words of the Constitution a laughingstock. American taxpayers (not Mexican ones, as was also promised) should not pay to steal private property from Americans to help the re-election prospects of one politician. David Bier is an immigration policy analyst at the Cato Institute.
  • Supporters Summit 2020    (2019-11-06)
  • Does Milton Friedman's "Plucking Model" Refute Austrian Business Cycle Theory?    (2019-11-06)
    Noah Smith has a new Bloomberg column titled, "Milton Friedman Got Another Big Idea Right." Specifically, Smith points to a new paper that apparently confirms Friedman's famous "plucking model" of recessions. Here's how Smith contrasts Friedman's theory with others, including the Misesian theory of boom-bust: Some [business cycle] theories hold that booms cause busts, because good times allow bad investments to build up in the financial system. According to these theories, the larger the boom, the larger the crash that follows. Then there’s the so-called plucking model. Proposed by the legendary economist Milton Friedman, it holds that the economy is like a string on a musical instrument — recessions are negative events that pull the string down, and after that it bounces back. Just as a string snaps back faster if you pull it harder, this theory holds that the deeper the recession, the faster the recovery that follows. But you can only pluck the economy in one direction; bigger expansions don’t lead to bigger recessions. Back in 1996, Roger Garrison published an excellent Austrian response to Friedman. But let me give a quick version here: the data by which Noah Smith thinks Friedman's plucking model has been vindicated are perfectly consistent with Mises' theory of what causes recessions. To see why, let's just imagine a simple (and exaggerated) example. Suppose the whole labor force is dedicated to making hardware. In a normal, sustainable period of growth, 10% of the workers make hammers, 40% make nails, 10% make screwdrivers, and 40% make screws. This is healthy, balanced growth, where the underlying capital structure is sustainable. Now suppose that, because of central bank manipulation of interest rates, the price system is screwed up and all the workers start making hammers. This won't boost "total output." There will still be "full employment." 
It just means that the composition of output will be heavily skewed to hammers, and away from nails, screws, and screwdrivers. This is obviously not a sustainable path. Soon enough, the supplies of nails and screws will run out. It does little good to be cranking out hammers, if there are no new nails coming online. What I've just described is a metaphor for what happens during an unsustainable boom, in the Misesian sense. Now once the crisis hits, "total output" does indeed drop, as workers need to be reallocated back to a more sensible niche in the division of labor. Now as a completely separate issue, it's clear that the increase in output measured from the trough is going to be directly related to how deep the trough is. In our example, the "crash" is going to be really bad, because the economy is going to suddenly run out of nails and screws completely. If instead the workers had only been slightly knocked out of balance, then the ensuing crash wouldn't be as bad. But either way, my point is that the wild swings in "total output" will appear to follow a bust-boom pattern, rather than a boom-bust one. This is so, even though by construction, my story fits the Misesian pattern of an unsustainable production period being followed by an inevitable bust. For more details, read Garrison's article, or my earlier "sushi article" that many people found illuminating.
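The hammer-and-nails story above can be caricatured in a few lines of code. This is a deliberately minimal sketch under assumed numbers (a flat output trend of 100 and a randomly drawn boom distortion), not a calibrated model: each boom builds up a hidden misallocation, the bust pulls measured output below trend by exactly that amount, and the recovery climbs back to trend. Measured output alone then exhibits the plucking pattern, with rebound size tracking trough depth, even though every bust was caused by the preceding boom by construction.

```python
import random

random.seed(0)
TREND = 100.0  # sustainable "total output" level; boom output measures the same


def simulate_cycles(n: int):
    """Simulate n toy Misesian cycles; return (trough depths, rebound sizes)."""
    depths, rebounds = [], []
    for _ in range(n):
        # Misallocation built up during the boom is invisible in measured output.
        distortion = random.uniform(5.0, 30.0)
        trough_output = TREND - distortion   # bust: output falls below trend
        depths.append(TREND - trough_output)  # how deep the "pluck" was
        rebounds.append(TREND - trough_output)  # recovery: climb back to trend
    return depths, rebounds


depths, rebounds = simulate_cycles(50)
# The observed series is plucking-consistent: deeper troughs, bigger rebounds.
assert all(abs(d - r) < 1e-12 for d, r in zip(depths, rebounds))
```

The point is observational equivalence: data showing rebounds proportional to trough depth cannot by themselves rule out the Misesian story, since the distortion that caused each bust never shows up in the measured output series.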
  • The Feds Spend More on National-Debt Interest Than You Think    (2019-11-06)
    Recently, the Treasury Department reported a 26% increase in the federal budget deficit, with a 2019 deficit of $984 billion. The reported data on the budget can be misleading. You might think that a budget deficit is the amount of spending that exceeds budget revenue, in other words, the amount of borrowing needed to make up for this shortfall. However, in the world of Washington D.C., not all spending is counted as spending, and it's possible for the government to borrow money from itself. Let's look at the actual Treasury Department budget numbers. The Treasury reports Total Public Debt Outstanding of almost $23 trillion, which is the sum of the Intragovernmental Holdings and the Debt Held by the Public. There is roughly $6 trillion of Intragovernmental Holdings. This is money that the federal government says that it owes to itself. Over the years, the government has earmarked tax revenues for one use, say Social Security spending, and spent those revenues on some other category of spending. So now it owes itself this money. However, this is not truly debt. No business or household is concerned about being in debt to itself. If you promise to spend $100 of your income on a car payment and instead you buy $100 of food, you don't pretend that you owe yourself $100. However, in the feds' budget this is called Intragovernmental Holdings. When looking at the debt numbers, we should ignore these Intragovernmental Holdings. That leaves us with the Debt Held by the Public, what I consider to be the true amount of federal government debt. In your personal life, if you earn $100, you spend $120, and you borrow $20 to cover this shortfall, then your personal deficit is $20. Similarly, if the feds have $100 billion of revenue and spend $120 billion, then they must borrow $20 billion to cover this spending. That $20 billion increase in their debt is the deficit. So the true deficit is the change in the Debt Held by the Public. 
Here is the Treasury Department data for the Debt Held by the Public since 2001. The Congressional Budget Office has reported that the 2019 deficit is the highest it’s been in seven years. As you can see from the numbers above, that report is not quite accurate. The deficit peaked at over $1.7 trillion in 2009, and while the 2019 deficit is distressingly high, the 2018 and 2016 deficits were slightly higher. The deficits of this century under the Bush II, Obama, and Trump administrations should concern all of us. The government’s debt has increased 400% in 18 years. And we’re projected to have trillion-dollar-plus deficits for the foreseeable future. How much interest does the government pay on its debt? Since the government is in debt to itself, it pays itself interest. We should ignore these intragovernmental interest payments for the same reason we should ignore the intragovernmental debt. Fortunately, the Daily Treasury Statements provide us with the Interest on Treasury Securities. This is the actual amount of withdrawals from government accounts for interest payments, so this number excludes intragovernmental interest payments. Here are the numbers. From FY 2001 to 2019, interest payments increased 88%, from $162.5 billion to $305.7 billion. As I previously stated, during that same time, Debt Held by the Public increased 400%. For the last several years, the feds have taken advantage of artificially low interest rates. If interest payments had increased at the same rate as the level of debt, the 2019 interest payments would be $818 billion. For comparison’s sake, payments for Social Security benefits in FY 2019 were $921 billion. As the government continues to pile up trillion-dollar deficits, when interest rates return to their historical norm, interest payments may exceed payments to Social Security recipients. With the coming budget deficits, it’s possible that interest payments could surpass a trillion dollars annually within the next decade. 
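The counterfactual above (interest scaling with debt) is easy to check arithmetically. Using the article's own rounded figures, interest of $162.5 billion in FY 2001 and a 400% increase in debt (a factor of five), a minimal sketch:

```python
# If interest payments had grown at the same rate as Debt Held by the Public,
# FY 2019 interest would be roughly five times the FY 2001 level.
# Figures are the article's rounded numbers, in billions of dollars.

interest_2001 = 162.5     # FY 2001 interest on Treasury securities
debt_growth_factor = 5.0  # a 400% increase means the debt quintupled

scaled_interest_2019 = interest_2001 * debt_growth_factor
print(round(scaled_interest_2019, 1))  # 812.5

actual_interest_2019 = 305.7  # the article's FY 2019 figure
print(round(scaled_interest_2019 / actual_interest_2019, 2))  # 2.66
```

The rounded 400% figure gives roughly $813 billion; the article's $818 billion presumably uses unrounded debt totals. Either way, debt-scaled interest would be more than two and a half times what was actually paid, which is the measure of how much artificially low rates have masked the cost of the debt.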
Generally, the political class appears to be unconcerned about the budget deficits. Those who are troubled about budget issues generally worry that the deficits will spiral out of control in a couple of decades. The 2019 Congressional Budget Office Long-Term Budget Outlook report states that the 2019 federal debt held by the public equals “78 percent of gross domestic product (GDP) — its highest level since shortly after World War II. If current laws generally remained unchanged, growing budget deficits would boost federal debt drastically over the next 30 years, the Congressional Budget Office projects. Debt would reach 92 percent of GDP by the end of the next decade and 144 percent by 2049.” Don’t be fooled. A budget crisis could occur much earlier than 2049 because of the level of borrowing needed to fund the deficit and its debt payments. It’s reported that the federal government spent about $4.75 trillion last year. This ignores the government’s debt payments. According to the Treasury Department, total spending in FY 2019 was nearly $16 trillion. (In the Daily Treasury Statements, this is called Total Withdrawals.) By reporting spending to be $4.75 trillion, the feds are hiding most of their spending from us. The federal government is borrowing a tremendous amount of money to make its payments on its Debt Held by the Public. The final Daily Treasury Statement of 2019 tells the story. In the past fiscal year, they borrowed $11.9 trillion (called Public Debt Cash Issues) and made debt payments of $11 trillion (called Public Debt Cash Redemptions). If we count all borrowing and debt payments as part of the federal budget, then the $11.9 trillion of borrowing constituted 74.5% of federal spending, and debt payments were 68.5% of federal spending. Debt payments in 2019 were over twice as much as all other combined spending. Here is the historical data for the Public Debt Cash Issues and the Public Debt Cash Redemptions. 
Note the skyrocketing amount of borrowing over the past 18 years. Since 2001, Public Debt Cash Issues (total borrowing) increased 375% and Public Debt Cash Redemptions (debt payments) increased 311%. The danger here is that lenders at some point may not be willing to loan our government these trillions of dollars a year. In the last 18 years, Public Debt Cash Issues increased at an average rate of almost 9% per year. This is not sustainable. If the federal government continues to increase its borrowing at 9% annually, then in 2030 the feds will need to borrow over $28 trillion to cover their spending on the deficit and debt payments. The moment lenders become unwilling to fund this budget recklessness, the government’s financial house of cards will collapse.
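The projection above is a straightforward compound-growth calculation, sketched here from the article's figures ($11.9 trillion borrowed in FY 2019, growing 9% per year):

```python
# Project Public Debt Cash Issues (total annual borrowing) forward at the
# 9% average annual growth rate cited in the article.
# Base: FY 2019 borrowing of $11.9 trillion.

def projected_borrowing(base, rate, years):
    """Compound annual growth: base * (1 + rate) ** years."""
    return base * (1 + rate) ** years

base_2019 = 11.9  # trillions of dollars, FY 2019
rate = 0.09       # 9% average annual growth since 2001

for year in (2025, 2030):
    value = projected_borrowing(base_2019, rate, year - 2019)
    print(year, round(value, 1))
# 2030 comes out to roughly $30.7 trillion, comfortably "over $28 trillion"
```

At a steady 9%, annual borrowing roughly doubles every eight years, which is why the projection runs away from revenue growth so quickly.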
  • The American Bar Association Broke Its Own Rules    (Josh Blackman, 2019-11-06)
    Josh Blackman For decades, the American Bar Association has played a unique role in vetting federal judges. Starting with President Dwight Eisenhower, administrations would give the lawyers’ group a heads-up about whom they intended to nominate to the federal bench. A committee would then assess the candidate’s qualifications. In theory, at least, if the organization rated the nominee as “not qualified,” the administration would reconsider the appointment. Conservatives have long alleged that the ABA’s process was biased against conservative nominees. And some data do back this claim up, though the ABA vigorously defends its independence. Unsurprisingly, over the past two decades, the ABA has whipsawed in and out of the White House. In 2001, President George W. Bush opted out of the process, and stopped giving the ABA “such a preferential, quasi-official role.” In 2009, President Barack Obama welcomed the ABA back into the fold. And, like clockwork, in 2017, President Donald Trump fired the ABA. Since then, the group has reviewed Trump’s nominees after they were announced, in its own capacity but not as part of the formal process, and found most of them qualified. Last week, however, there was one notable exception. President Trump nominated Lawrence VanDyke to fill a vacancy on the U.S. Court of Appeals for the Ninth Circuit. He previously served as the solicitor general of Nevada and of Montana. As the top appellate lawyer of two states in the Ninth Circuit, VanDyke argued two dozen cases and briefed scores more. (I worked with VanDyke on several cases over the past few years.) By any objective measure, VanDyke is qualified to serve as a federal judge. The American Bar Association, however, rated him “not qualified.” On the eve of VanDyke’s confirmation hearing, the organization released a two-page letter relaying anonymously sourced criticisms. 
Many of the allegations are simply implausible, and border on misleading. For example, the letter stated, “In some oral arguments [VanDyke] missed issues fundamental to the analysis of the case.” Oral arguments are matters of public record. It should have been easy enough to cite several, or at least one, case in which VanDyke missed a fundamental issue. But the letter offers no such citation. (The law professor Orin Kerr reviewed a few of VanDyke’s arguments, and said he seemed to be a “very good advocate.”) Likewise, the letter asserted that “his preparation and performance were lacking in some cases in which he did not have a particular personal or political interest.” If some objective evidence exists to back up this accusation, none was provided. The letter said VanDyke was “lacking in knowledge of the day-to-day practice including procedural rules.” But it offered no evidence to support this claim, either. Other claims in the letter were quite personal. For example, based on “assessments of interviewees,” the ABA reported that “VanDyke is arrogant, lazy, [and] an ideologue”; “lacks humility”; and “has an ‘entitlement’ temperament.” And it reported “a theme” that he “does not have an open mind, and does not always have a commitment to being candid and truthful.” Who would make such unfounded accusations? The letter states that the ABA’s evaluator conducted “60 interviews with a representative cross section of lawyers (43), judges (16), and one other person” who have worked with VanDyke. Those interviews included “attorneys who worked with him and who opposed him in cases and judges before whom he has appeared at oral argument.” Did all 60 people have the same opinions? 
The letter itself concedes that they did not, stating that “the interviewees’ views, negative or positive, appeared strongly held on this nominee.” Those positive views are not relayed in the letter, though, and it gives no indication of how widely held the negative views actually were. Indeed, there is some evidence that the interviewees who supported VanDyke’s nomination were not asked to rebut such slanderous charges. Former Nevada Attorney General Adam Laxalt told National Review that when he was contacted by the ABA, he’d spoken of VanDyke in glowing terms. (His assessment matches my own.) Laxalt was interviewed by Marcia Davenport, a Montana trial attorney who led the ABA’s evaluation. Laxalt said that the interview was “short and perfunctory,” and that Davenport “did not ask me to comment on anyone else’s critiques of his character or professionalism.” Nor did she ask Laxalt to comment on VanDyke’s most important cases during his tenure as Nevada solicitor general. Laxalt told Fox News that Davenport “seemed completely disinterested.” If people told Davenport that VanDyke was “arrogant” and “lazy” and routinely made errors in his professional dealings, then Laxalt and other interviewees with more positive impressions should have been given a chance to address those accusations. Laxalt says he was not, and he is not alone. Davenport also interviewed Ashley Johnson, who worked with VanDyke at the Gibson Dunn law firm for several years. She wrote on Twitter that “the call lasted fewer than 5 minutes.” Davenport did not tell Johnson “that she had received ANY negative comments or ask if they matched my experience over the 13 years I have known Lawrence. Instead, [Davenport] read through what was clearly a script of questions, thanked me for my time, and hung up,” Johnson wrote. Davenport also interviewed Joseph Tartakovsky, who served as Nevada’s deputy solicitor general for three years under VanDyke. 
Tartakovsky told Fox News his interview also lasted about five minutes, and “it was clear to me that she was going through the motions.” She did not ask follow-up questions, he said. The Regent University law professor Brad Lingo also spoke with Davenport. He offered a similar account, also on Twitter. Lingo tweeted that he told Davenport that VanDyke was “one of the most earnest, humble, kind-hearted, and intellectually engaged lawyers I know.” He added, “I was surprised that the interview lasted all of about 5 minutes.” Indeed, VanDyke himself testified at his Senate confirmation hearing that during his ABA interview, Davenport repeatedly cut him off whenever he attempted to respond. She said they didn’t have enough time to go through all the points. There seems to be a pattern. People who had good things to say about VanDyke, including VanDyke himself, report that they were cut short, and that their opinions did not make it into the letter. How many of these 60 people thought VanDyke was “arrogant” and “lazy”? We have no idea. The most salacious accusation came from Davenport herself. The letter states: “Mr. VanDyke would not say affirmatively that he would be fair to any litigant before him, notably members of the LGBTQ community.” I have watched many confirmation hearings. Often a nominee is asked whether he or she would be fair to a particular group. The nominee invariably replies, “I will be fair to everyone.” It would be improper for a judge to single out any group for particular treatment. When I first read the letter, I simply assumed that Davenport asked VanDyke the same question: Would he be fair to people in the LGBTQ community? No reasonable nominee would admit a bias toward LGBTQ people. During his hearing, VanDyke stated that he would be fair to everyone. But that is not what the ABA reported. During his confirmation hearing, VanDyke rejected the letter’s insinuation: “I did not say that,” he recounted, while holding back tears. 
“I do not believe that. It is a fundamental belief of mine that all people are created in the image of God, and they should all be treated with dignity and respect, Senator.” We now have a situation of “he said, she said.” I believe VanDyke. Davenport’s account is utterly implausible. The Senate should call Davenport to testify under oath about her assertion. She should also be called upon to explain why her investigation appears not to have complied with the ABA’s own procedures in three important regards. First, ABA rules require members to recuse themselves from an investigation if their “impartiality might reasonably be questioned.” In 2014, VanDyke ran for election to the Montana Supreme Court. The race was extremely divisive. According to public records, Davenport donated to VanDyke’s opponent. Based on those standards, Davenport should have recused herself. She should not have been the lead investigator. Second, ABA rules state that when a nominee is rated as “‘not qualified,’ the Chair will appoint a second evaluator” who will conduct “a new interview of the nominee.” VanDyke was never interviewed a second time. The final letter considered only Davenport’s interview with VanDyke. A follow-up discussion could have resolved any doubt about the LGBTQ comment, but none was held. Third, the ABA rules provide that the written statement must be submitted to the Senate Judiciary Committee, as well as the nominee, 48 hours before the confirmation hearing. This gap is designed to address any possible errors, and perhaps to make last-minute corrections. In this case, the letter was released at 7 p.m., in advance of a hearing the next morning. VanDyke was ambushed. At every juncture, the ABA seems to have cut corners. It apparently failed to ask VanDyke’s supporters to respond to charges against him. The letter may have mischaracterized VanDyke’s statements. 
And the investigation was led by a conflicted person who did not even appoint a second person to interview the nominee. The process was flawed from the outset, and should not be afforded any deference. Even if Davenport testifies, and justifies her actions, the damage has already been done—not to VanDyke, but to the ABA. This letter demonstrates that the organization can no longer be trusted to perform a fair assessment of nominees. (William Hubbard, chairman of the ABA committee that conducts judiciary-nominee evaluations, said in a statement, “The evaluations are narrowly focused, nonpartisan, and structured to assure a fair and impartial process.”) What happens next? Nominees, of course, could refuse to meet with the ABA. Though that option includes a risk: The most damning allegations will not be refuted. There is a far more productive approach. These interrogations should be treated as hostile depositions. A court reporter and videographer should be present, as well as privately retained counsel to push back on unfounded accusations. In the event that the nominee is rated as qualified, there would be no need to release the transcript. Going forward, when a nominee is rated as unqualified, the transcript should be released, and the recording should be posted publicly online. There is no reason to rely on disputed accounts of the interview. As originally designed, the confidential nature of this process made some sense. The interviews were not recorded to ensure that members of the bar could candidly critique a potential jurist, and to prevent the nominee from facing public embarrassment if the report was released. But the VanDyke letter turns that practice on its head. He was sandbagged at the last minute, and he was not given a chance to address any of the accusations it contained. This wound was entirely self-inflicted. If the ABA wanted to rate a nominee like VanDyke as unqualified, the organization should have followed its own rules to a T. 
Instead, it ran a slipshod process, led by a person whose objectivity was open to question. This process should no longer be a black box. If reports faithfully reflect the interviews, faith can be restored in the ABA. If the process remains shrouded in secrecy, Americans can safely discount future findings. Josh Blackman is a constitutional law professor at the South Texas College of Law Houston and an adjunct scholar at the Cato Institute.
  • The American Middle Class Isn't Disappearing — But it's Not All Good News    (2019-11-05)
    I'm not of the opinion that the American economy is doing amazingly well. However, I'm also not of the opinion that it is falling apart, or that the American middle class is disappearing before our eyes. Nor is there any one, single, magic statistic we can point to and say "see, we're all worse off — or better off — now." Aggregate economic data is by its very nature lacking in nuance. Moreover, different measures of economic growth and prosperity can be conflicting and woefully incomplete. This doesn't mean all such measures are worthless, but it does mean we can't responsibly point to, say, a single jobs statistic and declare "happy days are here again!" or "the middle class is disappearing!" The fact that the economy may be getting better or worse doesn't necessarily help me as a proponent of laissez-faire one way or the other. Since we live in a very complex economy with both a sizable private sector and an enormous government sector, I could easily spin statistics on income and economic growth either way. For example, we live in a time of ever-increasing government intervention in terms of tax collections, government spending, and regulation of the private sector. Given the enormous size of both state and federal governments, I could claim bad economic news proves big government is ruining the economy. At the same time, markets — that is to say, workers and entrepreneurs — have historically shown an amazing amount of resilience in delivering a higher standard of living in the face of enormous intervention, through gains in productivity and innovation. Thus, I could also claim good economic news is evidence markets are making us better off. [RELATED: Jeffrey Herbener: Are We Richer and Better Off Than We Think?] All else being equal, of course, real wealth and income — i.e., not just wealth and income measured in dollars — go up the more the state leaves people alone. 
But given that all else is not equal, a correlation between two data points doesn't — by itself — prove anything. That's why we need good economic theory. Data by itself is never good enough. We're Becoming Better Off, but Only Very Slowly: Before we can start to blame the state of the economy on any particular thing, we have to try to figure out what the state of the economy is. Over the years, I've concluded the available data strongly suggests the American standard of living for most Americans has slowly increased in fits and starts over the past twenty years. As I've noted here, here, and here, the American standard of living is clearly up considerably from where it was in the days of our grandparents' middle age — i.e., the 1950s and 1960s. The size of homes, the quality and number of automobiles, and the number of hours worked have all sizably improved if we look at a time horizon longer than twenty years. The American standard of living also improved considerably from the late 1980s through the Dot-Com Boom. Since the Dot-Com Bust of 2000, however, improvements are less clear. This is especially the case for men. Household and Personal Income: So, is income going up for ordinary American households? At the household level, the good news is that the median income does indeed appear to be going up.1 The bad news is that it's only been up from 2000's peak level since 2016. In other words, for most of the decade following the Great Recession, the median household income was actually down from both 2007's and 2000's peak levels.2 With the exception of the period since 2016, household income has basically gone sideways over the past eighteen years. As of 2018, household median income was only up 3.6 percent from the 2007 peak. Compare that to the 8.5 percent growth that occurred from 1989 to 1999.3 Some critics of the household measure point out that the composition of households has changed over time, and thus household income is not a great measure. 
That's fair enough, but personal median income doesn't show a very different trend. Median personal income shows that — unlike with household income — 2007's peak is higher than the 2000 peak. However, it is only since 2016 that personal median income has exceeded the 2007 peak. That is, over the past decade the median income has only shown growth in the past three years, and median personal income was up only 4.2 percent from 2007 to 2018. That's compared to 15.8 percent growth from 1989 to 2000. Much of the lackluster growth is due to real declines in median incomes for men. Median income for men began to fall in 2001, and it did not exceed the 2000 level again until 2018. In other words, by this measure, the median income for men went sideways for 17 years. It fell 2.7 percent during the economic expansion from 2002 to 2007. It finally topped 2000's median-income level in 2018, rising 0.4 percent over the 2000 peak.4 What growth we do find in median incomes is being driven in part by gains in incomes for women. From 2007 to 2018, the median income for women rose 6.6 percent, although it only exceeded the 2007 peak after 2015. The median income for women rose an amazing 24.4 percent from 1989 to 2000.5 The Middle Class Isn't Disappearing, So Far Clearly, not every group is getting richer at the same rate. Moreover, median incomes took a long time to recover from the Great Recession. Yes, median incomes are now up from where they were a decade ago. But for eight years after the 2007 peak, the median household found itself worse off in real terms. Not everyone is at or near the median, however, and many contend that larger numbers of households are slipping into lower income levels while median numbers are buoyed by a relatively small number of wealthy individuals and households. In response to this, we should note that median numbers — as opposed to average numbers — are not easily pulled upward by a small number of very wealthy persons or households. 
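The point that medians, unlike averages, are not easily pulled upward by a few very wealthy households is simple to demonstrate with toy data. The incomes below are made up purely for illustration:

```python
# A median is robust to extreme values at the top; a mean is not.
# Toy incomes, in dollars; purely illustrative.
import statistics

incomes = [30_000, 45_000, 60_000, 75_000, 90_000]
print(statistics.mean(incomes))    # 60000
print(statistics.median(incomes))  # 60000

# Replace the top earner with a billionaire: the mean explodes,
# while the median barely moves.
incomes_with_outlier = [30_000, 45_000, 60_000, 75_000, 1_000_000_000]
print(statistics.mean(incomes_with_outlier))    # 200042000.0
print(statistics.median(incomes_with_outlier))  # 60000
```

This is why rising median income is hard to explain away as an artifact of gains at the very top: the median tracks the household in the middle of the distribution, not the total pie.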
Moreover, if we break out households and individuals by income categories, we find little evidence that more people are joining the ranks of the low-income. Indeed, if anything, more and more people are moving into income categories we think of as being toward the high end of the middle class — or even upper-middle class. Over the past forty years, for example, the proportion of households making less than $50,000 per year has continued to decline. The percentage of households with incomes under $50,000 fell from 49.5 percent to 39.9 percent from 1969 to 2018. It's true that the share of households in the $50,000 to $100,000 range also went down, dropping from 38.2 percent to 29.7 percent over the same period. But where did those people go? They weren't going into the under-$50,000 group, because that group was shrinking. In fact, there are now more households in the over-$100,000 category than in the $50,000-$100,000 category. Indeed, it looks like many of them moved up into the over-$100,000 income category, because the size of that group more than doubled, from 12.3 percent of households in 1968 to 30.4 percent of households in 2018. (All of these categories are measured in constant 2018 dollars.) Breaking out the categories further, we find that the under-$35,000 category fell from 34 percent of households in 1970 to 27.9 percent of households in 2018. The middle categories, from $35,000 to $100,000, were largely flat, while the over-$100,000 category more than doubled. The middle-income group isn't disappearing any time soon, but we do find significant growth in the number of households entering the highest income levels. Those households have to come from somewhere, and many are coming from the middle class. Contrary to the narrative that the middle class is becoming impoverished, this suggests the middle class is actually getting richer. Gains have been less pronounced for men, although the same trend still holds. 
The proportion of males in the under-$25,000 income category fell from 52.6 percent in 1969 to 45.5 percent in 2018. Meanwhile, the proportion of males in the over-$75,000 category almost doubled, from 13.1 percent in 1969 to 25 percent in 2018. The proportion of men in the middle-income categories has been largely unchanged in recent decades, although the Great Recession appears to have pushed many men back into the lower-income category in its immediate wake. Since then, the low-income category has shrunk again, and both the over-$75,000 category and the over-$100,000 category rose to all-time highs in 2018. Women gained more than men, however, and showed more of a shift from lower-income categories to higher ones. The proportion of women in the under-$25,000 category plummeted over forty years, dropping from over 73 percent in 1969 down to 46.4 percent in 2018. Meanwhile, the percentage of women workers making over $75,000 increased by more than twelve times over the same period. The Great Recession slowed these trends, but did not end them. Broken out into smaller categories, we find the low-income category is falling while the $25,000 to $50,000 category has flattened. Meanwhile, the three top income categories have increased sizably. For women, the lowest income category has fallen considerably, while the income categories above $25,000 have all increased in recent decades. The highest income categories have increased the most, by far. Despite the many claims made by both leftists and conservatives, the middle class is not disappearing, and basic amenities and comforts are not becoming unaffordable. Nor are Americans separating into a bifurcated population of very-rich and very-poor groups, with only a small middle class in between. The more likely reality is that the lowest-income groups are getting smaller while the "middle class" now more frequently includes people and households in low-six-figures territory. 
This isn't to say no one is becoming worse off. Declining real incomes are a reality for many people who lack schooling, job skills, and proximity to employment. Moreover, a rising cost of living in some parts of the country can be devastating. But expensive markets like California, Boston, and New York are not the entire country, and a great many Americans have managed to realize income increases in real terms in recent years. This is now true even when compared to the peak reached before the Great Recession. The bad news, however, is the fact that incomes took so long to recover from the last recession. More than a decade of income gains were lost in many cases after 2007, and those losses were not fully reversed for nearly nine years in some cases. Moreover, we're now a decade into the current expansion, and even a moderate recession in the near future is likely to set income levels back twenty years. 1. The income measures discussed here measure "money income," which includes transfer payments, such as social security payments. This does not include in-kind benefits such as health-insurance benefits, food stamps, etc. 2. See Table A-2, Households by Total Money Income. (https://www.census.gov/library/publications/2019/demo/p60-266.html)  3. In all these income graphs, I have looked at peak-to-peak growth: specifically from 1989 to 1999 or 2000; from 2000 to 2007; from 2007 to 2018. 4. See Table P-54. Total Money Income of People, by Race, Hispanic Origin and Sex. (https://www.census.gov/data/tables/time-series/demo/income-poverty/historical-income-people.html) 5. See Table P-54. Total Money Income of People, by Race, Hispanic Origin and Sex. (https://www.census.gov/data/tables/time-series/demo/income-poverty/historical-income-people.html)
  • Per Bylund on The Laws of Agile    (2019-11-05)
    The management methods and practices that have been gathered under the term agile claim the status of a Copernican Revolution. Agile reverses the traditional view of business revolving around the firm, instead placing the customer at the center and viewing all other elements as revolving around the customer. This is a welcome development — but just a step towards the Austrian vision of consumer sovereignty and the concept of value as created by the consumer, not the producer. Key Takeaways and Actionable Insights: We examined the three "Laws of Agile" proposed by Stephen Denning in his book The Age of Agile. Per Bylund notes the elements that are useful for entrepreneurs, and the extra insights provided by Austrian economics that can help entrepreneurs perform at a higher level in facilitating value experiences for their customers and consumers. The Law of the Customer: Agile recognizes that the one valid definition of business purpose is to create a customer. The customer — with mercurial thoughts and feelings — is at the center, and demands to be delighted. What the firm thinks it produces is less important than what the customer thinks he or she is buying — what they consider "value." Everyone in the firm must view the world from the customer's perspective, and share the goal of delighting the customer. The firm must have accurate and thorough knowledge of the customer. Continuous innovation is a requirement to delight customers. The firm's structure changes with the marketplace. Speed of response becomes crucial, and time is a strategic weapon. Austrian Enhancements: The Austrian concept of customer sovereignty is even more powerful for entrepreneurs — customers create firms, in the sense that customers decide what is produced by buying or not buying, and therefore which firms are successful. Value is subjective — and so customer preferences can change rapidly and frequently. Responsiveness is not enough — the goal is to imagine the customer's future needs, and 
involve them in the production of future value. The Law of the Network: A collaborative network of competence replaces the hierarchy of authority. The network has no leader, but it does have a shared, compelling goal. The network is the sum of the small groups (rather than individuals) it contains. Each group has an action orientation. The network's administrative framework stays in the background; there is no bureaucratic reporting. Austrian Enhancements: Agile is based on too narrow a view of the economic network — it is still producer-centric. The true network is the market, which includes customers (of which there are many more than firms, and who exert more economic influence than firms). Networking only the production side of the firm is an incomplete act. A fully functioning network includes customers and consumers with equally valid connections to the firm, not just collaborative production partners. The Law of Small Teams: Big and difficult problems are disaggregated into small batches and performed by small cross-functional teams — scaling down the problem. Seven plus or minus two is a good rule of thumb for team size. Each team is autonomous, and works in small batches and short cycles. Each team aims to get to "done" — it's binary: either done or not done, never almost done. No interruptions. Radical transparency. Customer feedback each cycle. Retrospective reviews. Austrian Enhancements: A pure focus on short-term execution can divert attention away from longer-term considerations — especially imagining the future, which is the core component of entrepreneurship. Focus on creating value for the future, while ensuring no loss of current reputation and relationships. Administration — and therefore "bureaucracy" — can't be eliminated entirely without a reduction in customer value. Required services can be a component of value creation — such as compliance, operations management, etc. Additional Resource: "The Laws of Agile Meet Austrian Economics" (PDF): Mises.org/E4E_38_PDF
  • The Berlin Wall: Its Rise, Fall, and Legacy    (Doug Bandow, 2019-11-05)
    Doug Bandow Democratic Party candidates for president advocate socialism. Young adults view collectivism as a serious alternative to capitalism. Yet most anyone under 40 has little memory of the Berlin Wall, probably the most dramatic symbol of the most murderous human tyranny to afflict the world. After decades of oppression, hundreds of millions of people were finally free, something we today take for granted.

    The Soviet Communist or Bolshevik Revolution was an accident of sorts, a tragic consequence of economic and social collapse resulting from World War I. Absent that conflict, Vladimir Ilyich Lenin probably would have lived out his life in Swiss exile spouting radical doctrines and playing chess. His later colleagues would have suffered obscurity in imperial prisons or exile. Russia's Czar Nicholas II would have lived out his reign as his country prospered economically and reformed politically. Wilhelmine Germany, with a franchise broader than that of Great Britain, also would have seen a gradual shift in power toward liberal constitutional rule as Junker conservatism lost influence. Alas, Europeans collectively jumped into the abyss of cataclysmic conflict, leading to a continent dominated by fascism, Nazism, and communism. The USSR concentrated its brutality on its own people until Adolf Hitler took control of Germany. The Fuhrer triggered the convulsion known as World War II, a conflict Hitler began but could not finish. In 1945, he committed suicide in the bunker of the ruined Reich chancellery. And the Soviet Union, led by Joseph Stalin, occupied Berlin.

    A Divided Germany

    Germany was divided among the US, Great Britain, France, and the USSR. The first three combined their zones into what became the Federal Republic of Germany in 1949. 
The Soviet zone became the German Democratic Republic (GDR), unofficially known in the West as the "sogenannte," or "so-called," GDR. The four victorious powers occupied Germany's capital as well, which left West Berlin as an oasis of freedom in the middle of East Germany. In 1948, Moscow blocked land routes to Berlin, hoping to force out the Allies; America refused to risk war by forcing passage, instead responding with the famed airlift. The following year, Stalin dropped the blockade, though relations remained tense.

    The Soviets stripped "their" zone of productive assets and created a dictatorship in their image. Totalitarianism impoverished Germans spiritually as well as economically. The result was an exodus of people, especially younger, better-educated professionals. To help stem the human tide, East Germany's Walter Ulbricht, with Stalin's support, in 1952 turned Winston Churchill's "Iron Curtain" image into a real, fortified border with the West. However, the GDR left Berlin's internal border open. One reason was that the East's rail lines ran through the capital; the Ulbricht regime began to develop a rail network bypassing Berlin, which was only completed in 1961. People and traffic moved freely between the two Berlins, which made defection easy. Worse, noted the Soviet ambassador to the GDR, Mikhail Pervukhin, "the presence in Berlin of an open and essentially uncontrolled border between the socialist and capitalist worlds unwittingly prompts the population to make a comparison between both parts of the city, which unfortunately does not always turn out in favor of Democratic [East] Berlin."

    Actually, the comparison never turned out in favor of the communists. Republikflucht, or "republic flight," was a crime, but the prohibition was largely unenforceable. By 1961, an estimated 1,000 East Germans were fleeing every day. From 1949 to 1961, an estimated 3.5 million people, or fully one-fifth of the GDR's citizens, had left. 
And the productive young were disproportionately represented among those heading West. The share of working-age people in the GDR's population dropped from 71 percent to 61 percent by 1960. If those trends continued, the GDR would cease to exist.

    For some years, Ulbricht pressed the Soviets for permission to seal off Berlin as well. Soviet leader Nikita Khrushchev said no, apparently out of fear of the negative symbolism of walling in the workers for whom the revolution supposedly had been won. However, Khrushchev changed his mind in mid-1961, perhaps because he perceived US President John F. Kennedy, who had indicated he would not oppose construction of such a barrier, to be weak. In any case, during the night of August 12-13, 1961, East German security personnel began constructing what became known as the Berlin Wall. Initially, streets were torn up and wire fences were strung, soon to be replaced with a brick wall, and then much more. The barrier got ever higher, more complex, and deadlier. Eventually, there were two walls with a death strip in between. The Berlin Wall included miles of concrete walls, wire mesh fencing, barbed wire, trained dogs, and anti-vehicle trenches. The boundary was supplemented with watchtowers, bunkers, and mines. Border guards were told to shoot those attempting to escape, the infamous "Schiessbefehl" order. The people's paradise would kill its people to stop them from fleeing.

    A Wall of Death

    The wall did not stop human flight. Instead, it forced people to be more creative. East Germans climbed over, tunneled under, and flew over. They jumped from windows of buildings along the border—buildings that later were demolished. GDR residents used balloons, built submarines, and created secret compartments in cars. An estimated 100,000 people tried to escape, and some 5,000 made it. Many of those who failed in their lunge for freedom paid a high price. Tens of thousands of East Germans were imprisoned for Republikflucht. 
Around 200 people—no one knows exactly how many—were killed challenging the Berlin Wall. Include those murdered while attempting to cross the border elsewhere, and the death toll probably exceeded 1,000.

    The first Berliner to die in an escape attempt was 58-year-old Ida Siekmann, who on August 22, 1961, jumped from a window in her building onto a West Berlin road (the area later was cleared and turned into a "death strip"). Two days later the first Berliner was murdered by the GDR authorities: 24-year-old tailor Guenter Litfin was shot while attempting to swim across the River Spree. The true horror of a system that imprisoned an entire people was most dramatically illustrated almost a year later, on August 17, 1962, when East German border agents shot an 18-year-old bricklayer, Peter Fechter, as he sought to surmount the wall. They left the still-conscious Fechter to bleed out in full view of residents of West Berlin. He was the 27th Berliner to die seeking freedom.

    The carnage continued year in and year out, even as the Soviet Empire began to implode. The GDR government, at this point under ruthless hardliner Erich Honecker, continued to murder people who simply wanted to live free. On February 6, 1989, 20-year-old Chris Gueffroy became the last East German to be murdered while fleeing. He worked in a restaurant but was about to be drafted into the army. He and his friend Christian Gaudian mistakenly thought the order to shoot had been lifted. While climbing the last fence along a canal, Gueffroy was shot and killed. He would have been 51 today. Gaudian was injured, arrested, and sentenced to three years in prison. But he was released on bail in September 1989 and sent to West Berlin the following month. The four border guards who fired on Gueffroy and Gaudian received awards, but they, along with two Communist Party officials, were later tried in a reunited Germany (ultimately spending little or no time in prison). One more Berliner was to die. 
An electrical engineer, 32-year-old Winfried Freudenberg, used a homemade balloon to flee. It crashed on March 8, killing him.

    By then communism was disintegrating in Poland and Hungary. When Hungary began pulling down its border fence with Austria in May, the Iron Curtain had a huge hole. East Germans began flooding out. Demonstrations erupted in the GDR, highlighted by people determined to stay and transform their country. Honecker reportedly wanted to shoot the demonstrators and requested Soviet intervention. Mikhail Gorbachev refused, and Honecker's colleagues retired him in October. But their tepid attempts at reform could not stem the freedom tsunami. On November 4, a million people marched in East Berlin demanding the end of communism.

    On November 9, 1989, decades of oppression were symbolically swept away. There had been other moments of hope: the 1953 East German demonstrations, the 1956 Hungarian Revolution, and the 1968 Prague Spring. But all were crushed with varying degrees of bloody brutality. However, 1989 was different. And it was the result of a mistake. The GDR decided to allow East Germans to apply for visas to travel. Politburo spokesman Guenter Schabowski missed most of the critical meeting but was tasked with announcing the new policy to the international press. He indicated that people could travel now, "immediately, without delay." Crowds gathered at Berlin's crossing points as GDR border guards unsuccessfully sought guidance from above. Receiving none, they opened the gates after 10,316 brutal, sometimes murderous days.

    The euphoria of that evening—with Berliners East and West heading west and east—was not the end of the GDR. But those powerful emotions heralded the regime's end. Nothing, including East German officials' desperate attempts to preserve their state and West European officials' furtive objections to German reunification, could stem popular demand to put the German Humpty Dumpty back together. 
However, liberty was not fully restored until the rest of the Eastern European states defenestrated their communist regimes, including Romania, whose leader, Nicolae Ceausescu, was a crackpot even by communist standards. He and his wife fled by helicopter when demonstrators they had gathered to harangue instead shouted him down. Their pilot observed: "They looked as if they were fainting. They were white with terror." On Christmas Day, soldiers couldn't wait to start shooting to carry out the death sentence of a drumhead court-martial. Most important, the Soviet Union itself ultimately dissolved. Mikhail Gorbachev resigned on Christmas Day 1991; the Soviet flag was lowered for the last time at midnight. On the 26th there was no more USSR.

    After the Soviet Union

    It is impossible to overstate the importance of that moment. There was a unique evil in Nazi Germany, with the attempted extermination of an entire people, a group long scapegoated and persecuted. However, communism's body count dwarfs that of fascism generally and Nazism specifically. The Black Book of Communism estimated the death toll at more than 100 million. R.J. Rummel's figures in Death by Government are similar, though analysts vary in their figures for specific countries. And brutal repression, if not necessarily mass murder, continues in communism's survivors: China, Cuba, Laos, North Korea, and Vietnam.

    Often the murder didn't even make logical sense. Rummel described Stalin's USSR: "[M]urder and arrest quotas did not work well. Where to find the 'enemies of the people' they were to shoot was a particularly acute problem for the local NKVD, which had been diligent in uncovering 'plots.' They had to resort to shooting those arrested for the most minor civil crimes, those previously arrested and released, and even mothers and wives who appeared at NKVD headquarters for information about their arrested loved ones."

    Surely this system was an Evil Empire, as President Ronald Reagan described it. 
On November 9, the Berlin Wall opened, never to close again. The European communist autocracies disappeared, though they found the transition to democratic capitalism more difficult than most analysts predicted and all hoped. Perhaps most tragic has been Russia's retreat into authoritarianism. Nevertheless, the collapse of communism was a magnificent triumph of the human spirit. The commitment to liberty defeated the lust for power.

    There were many heroes in the fight for freedom. Some are famous, such as Alexander Solzhenitsyn, the Soviet novelist who chronicled the horrors of the gulag, and Lech Walesa, the Polish electrician who climbed atop a shipyard wall in Gdansk to challenge his country's rulers. Before them came Imre Nagy and Pal Maleter, who led the Hungarian revolutionaries and were executed by the Soviets and their local lackeys. Particularly important was Mikhail Gorbachev, a reform communist who critically kept Soviet troops in their barracks throughout 1989.

    And, of course, Ronald Reagan. He believed communism could be defeated. On June 12, 1987, he stood in front of the Brandenburg Gate and issued his famous challenge: "General Secretary Gorbachev, if you seek peace, if you seek prosperity for the Soviet Union and Eastern Europe, if you seek liberalization: Come here to this gate! Mr. Gorbachev, open this gate! Mr. Gorbachev, tear down this wall!"

    Most important, however, were the common folk across the continent who made the revolution. They resisted the apparatchiks. They kept the dream alive. They demonstrated for change. They suffered in prison and sometimes were killed. They ultimately ended communism in country after country.

    It has been three decades—the wall has now been down longer than it was up—but we should continue to celebrate the fall of the Berlin Wall and the end of the monstrously evil system behind it. The spirit of liberty survives today. More freedom revolutions should and must be staged in the future. 
Doug Bandow is a senior fellow at the Cato Institute and the author of a number of books on economics and politics. He writes regularly on military non-interventionism.
  • Question and Answer Panel    (2019-11-04)
    A question and answer period with Joseph Salerno and Murray Rothbard. Recorded at "The Federal Reserve: History, Theory and Practice," hosted by the Mises Institute at Jekyll Island, Georgia; September 4-7, 1986.
  • John Law and His Modern Counterparts    (2019-11-04)
    Recorded at "The Federal Reserve: History, Theory and Practice," hosted by the Mises Institute at Jekyll Island, Georgia; September 4-7, 1986.

