Probability and God

Occasionally, rare things happen to us.

You might land a competitive job (3/100), appear on the big screen at a sporting event (1/70,000), or win the lottery (1/12,271,512). It’s also possible for you to get a US green card (1/126), be struck by lightning (1/700,000), or have an idea so good it’s “like getting struck by lightning” (1/???).

Whether it’s good or bad when the improbable becomes actual, there’s always a question lurking in the background: is this evidence of anything? If what seemed impossible is staring us in the face, what can we say about it?

This question is fascinating with respect to life in the universe and God. “God” in this post will not refer to the God of the New Testament, the God of the Old Testament, Allah, Shiva, Mahavira, Zeus, Ra, Spinoza’s God of substance, or any other popular deity. Formal religion aside, we will be interested in the quite general question of whether a being designed the universe to support life. This designer, if s/he exists, will be referred to as “God.” I repeat, there is nothing Judeo-Christian, Muslim, Hindu, Wiccan, etc… about my invocation of “God.” I chose the capital G for ease of reference and because I knew it would grab your attention.

Improbability

Our existence is an anomaly. We can get an intuitive feel for this by gazing at the night sky. Billions of stars, millions of planets, and somehow, we’re alone (so far). We have yet to find evidence of even microbes in the vast expanse of the universe, so the fact that beings as sophisticated as humans came about represents something uncommon and significant.

The improbability goes deeper. As it turns out, even the laws of the universe that allow life to exist are rare and unlikely to come about by chance. If we were to slightly change the basic rules of force and gravity, for instance, the resulting universe would be hostile to life. Philip Goff has examples. The following three bullets are his words.

  • The strong nuclear force has a value of 0.007. If that value had been 0.006 or less,
    the Universe would have contained nothing but hydrogen. If it had been
    0.008 or higher, the hydrogen would have fused to make heavier elements. In
    either case, any kind of chemical complexity would have been physically
    impossible. And without chemical complexity there can be no life.
  • The physical possibility of chemical complexity is also dependent on the
    masses of the basic components of matter: electrons and quarks. If the mass
    of a down quark had been greater by a factor of 3, the Universe would have
    contained only hydrogen. If the mass of an electron had been greater by a
    factor of 2.5, the Universe would have contained only neutrons: no atoms at
    all, and certainly no chemical reactions.
  • Gravity seems a momentous force but it is actually much weaker than the
    other forces that affect atoms, by about 10^36. If gravity had been only slightly
    stronger, stars would have formed from smaller amounts of material, and
    consequently would have been smaller, with much shorter lives. A typical
    sun would have lasted around 10,000 years rather than 10 billion years, not
    allowing enough time for the evolutionary processes that produce complex
    life. Conversely, if gravity had been only slightly weaker, stars would have
    been much colder and hence would not have exploded into supernovae. This
    also would have rendered life impossible, as supernovae are the main source
    of many of the heavy elements that form the ingredients of life.

This is the cosmological equivalent of tweaking the rules of your favorite game and then finding out it is unplayable. If the laws of physics differed slightly from what they are now, life as we know it wouldn’t stand a chance. It appears every law was formulated to lie just inside the narrow range that allows complex organisms like us to exist.

When we consider the fact that life is highly uncommon in our current universe, and the second-order fact that it was incredibly unlikely the fundamental structure of said universe could be compatible with even the potential for life, our existence looks even more astounding. Roger Penrose —winner of a Nobel Prize in Physics— calculated the odds of a universe such as ours being created by chance as one in 10^1,200. Lee Smolin, another physicist, calculates the probability of life arising in the universe as one in 10^229. These estimates differ by about a thousand orders of magnitude, but their point is clear. If left to chance, nature conspires against us.

It’s natural to find these odds unsettling. “But,” someone might say, “we exist! The odds say it’s nearly impossible for us to be around, yet here we are. If something so improbable happens, there has to be some explanation for it that doesn’t appeal to pure chance.” Here, we reach for God. If the universe wasn’t the result of a random process but the product of a creator with life in mind, it’s much easier to believe we exist despite the astronomical odds against us. God is a much more satisfying, and, in a certain sense, simpler, explanation than blind luck. As the reasoning goes, a low probability of life existing in the universe, coupled with the fact that life actually exists, constitutes evidence of a creator.

Necessity

There’s an alternative perspective on probability and God: life being necessary, in some sense, should be evidence of a creator. If God exists, we assume she wants life to come about and will not tolerate the possibility that it could be otherwise. Such a God would make it impossible for a universe to exist that cannot support life like ours.

For instance, if we discovered there was a 99.99% chance any given universe could support life, wouldn’t this mean that possible universes were optimized for our presence? What better evidence of God could there be than odds stacked in our favor? If anything, a low probability of life originating in the universe might be an indication our existence was somehow left to chance. It’s possible we would not have existed, and that is incompatible with there being a God.

These two camps, those who stress the improbability of life and those who stress its necessity, are at odds. One claims a low probability of life arising in the universe is evidence of God, while the other asserts high probabilities are. In other words, they use opposite premises to draw identical conclusions. At first pass, one of the camps has to be wrong. After all, how can low and high probabilities both be evidence for God? Does this mean we get the evidence either way? Did I just prove the existence of God? Definitely not. Probabilistic inference appears to have gone awry, and we’re going to get to the bottom of it.

Interpreting probability (can skip if frequency-type/objective and belief-type/subjective probabilities are familiar)

There are two ways to interpret probability. The first is characterized by statements like:

“There is a 60% chance you draw a red marble out of the urn.”

“The odds of getting pocket aces are 6/1326.”

“You will only win the lottery 1 out of 12 million times.”

Here, we use probability to talk about the outcomes of repeatable chance events. In this sense, a probability tells you, on average, how frequently an outcome occurs per some number of events. “60%” in the first sentence tells us that if we draw 10 marbles out of the urn, on average 6 of them will be red. Likewise, “6/1326” in the second sentence tells us that if we play 1326 hands of poker, we should expect 6 pocket aces. In each case, the probability tells us about the distribution of a certain occurrence over a number of trials. We learn something about how often a chance event yields a specific outcome. This is the frequency-type interpretation of probability.
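
To see the frequency-type interpretation in action, here is a minimal Python simulation (my own sketch, reusing the urn and poker numbers above) that recovers both probabilities by brute repetition:

```python
import random

def draw_marble():
    """Draw one marble from an urn that is 60% red."""
    return "red" if random.random() < 0.60 else "blue"

def is_pocket_aces():
    """Deal a two-card hand from a 52-card deck; check for two aces."""
    deck = [(rank, suit) for rank in range(13) for suit in range(4)]
    hand = random.sample(deck, 2)
    return all(rank == 0 for rank, _ in hand)  # let rank 0 stand for an ace

trials = 200_000
red = sum(draw_marble() == "red" for _ in range(trials)) / trials
aces = sum(is_pocket_aces() for _ in range(trials)) / trials

print(f"red marble frequency:  {red:.3f}   (expected 0.600)")
print(f"pocket aces frequency: {aces:.5f} (expected {6/1326:.5f})")
```

Run it a few times: the frequencies hover around 0.600 and 6/1326 ≈ 0.00452, which is all a frequency-type probability claims.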

The second interpretation of probability is characterized by statements like:

“I am 90% sure I left my keys at home.”

“The odds of getting Thai food tonight are 1/10.”

“What’s the probability Trump wins the election? I say 28.6%”

These statements are similar to the others in that they use fractions and percents to express probabilities. The similarities end there. Rather than describe the outcomes of chance events, they express subjective levels of confidence in a belief. This is called the belief-type interpretation of probability. Higher probabilities correspond to more certainty in a belief, while lower ones express doubt. For instance, saying there is a “1/10” chance of getting Thai food means you are very unsure it will happen. Saying there’s a “90%” chance you left your keys at home means you’re very confident you don’t have them with you.

It’s important to note that the frequency-type and belief-type interpretations apply to different things. We formulate frequency-type probabilities about the outcomes of chance events, like poker hands and lottery drawings. Belief-type probabilities do not apply to chance events. They’re used to describe our subjective degree of confidence in statements about the world, like who is going to win an election or what we are going to eat tonight.

Reconciling the two camps

In the arguments given for God, which interpretation of probability is operative?

A frequency-type interpretation looks unlikely. The creation of a universe does not appear to be the outcome of a repeatable chance event, like drawing a marble from an urn. By most scientific accounts, there was a single “Big Bang” that yielded our universe, and there will never be a similar moment of creation. Because the event is unique, it makes no sense to talk about the frequency of a certain occurrence over a number of trials. We cannot say whether a universe containing life will arise 1 out of 10^10 times, as it’s impossible to create 10^10 universes and observe what happens.

Belief-type probabilities don’t run into these difficulties. It’s coherent to say you have more or less confidence that God created the universe, though a bit unnatural to express the sentiment as a probability. However, philosophers wrestle with how to further interpret belief-type probabilities and discover complications. Many take belief-type probabilities as the odds an individual would set for an event to avoid being Dutch-booked. This view has the advantages of being intuitive and mathematical, but the betting analogy breaks under select circumstances. Individuals might refrain from setting odds (and if we compelled them, would those odds be accurate?), and it’s not clear there’s a single, precise number at which we would set the odds to express our confidence in a proposition.
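
To make the Dutch book idea concrete, here is a toy sketch with hypothetical numbers (mine, not drawn from any particular philosopher): an agent whose credences in “rain” and “no rain” sum to more than 1 can be sold a pair of bets that lose money no matter what happens.

```python
# Incoherent credences: P(rain) + P(no rain) = 1.2 > 1.
# If the agent treats $P(E) as the fair price for a bet paying $1 if E,
# a bookie can sell them both bets and profit in every state of the world.

price_rain = 0.60     # agent's price for a $1 bet on rain
price_no_rain = 0.60  # agent's price for a $1 bet on no rain

for it_rains in (True, False):
    cost = price_rain + price_no_rain  # agent buys both bets
    payout = 1.0                       # exactly one of the bets pays out
    print(f"rains={it_rains}: paid {cost:.2f}, won {payout:.2f}, "
          f"net {payout - cost:+.2f}")
# The agent loses $0.20 whether or not it rains. That guaranteed loss is
# what it means to be Dutch-booked.
```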

While belief-type probabilities appear to be the best choice, I’m going to shamelessly ignore them, because my “solution” to this issue relies on a frequency-type interpretation of probability. We will assume the creation of our universe is the outcome of a repeatable chance event. It’s also true that belief-type probabilities have been critiqued in the context of reasoning about religious hypotheses, but I will not discuss such objections.

Using frequency-type probabilities can also be made somewhat legitimate. We can circumvent the objections to using them in reference to the creation of the universe with a multiverse theory. If you believe multiple universes have been created — perhaps there are 10^10 parallel universes — it’s perfectly acceptable to use a frequency-type probability to describe the odds of life arising. Your statement simply expresses the odds of picking a universe with life at random out of all the created universes. Personally, I have no idea whether multiverse theories are actually plausible, but this is a potential way to justify a frequency-type interpretation.

Given the above, I don’t think the two camps are incompatible with each other. It’s possible for low and high probabilities of life to serve as evidence for God. Both parties are making valid inferences from the probabilistic evidence they have. The caveat is that I believe each can only argue for a certain type of God.

Consistent with our assumption that the creation of the universe is the outcome of a repeatable chance event, imagine it is determined by the spin of a roulette wheel. Each slot in the wheel represents a possible universe, and wherever the ball lands, the universe corresponding to that slot is created. One slot might represent a universe with physical laws like our own, compatible with life. Another slot might represent a universe so different from ours that life could never originate.

Those who think low probabilities of life are evidence of God might imagine the roulette wheel of possible universes to be enormous. There are trillions of possible slots, and only a handful of them correspond to universes that contain life. The probability that the ball lands in a slot that creates a universe containing life is minuscule. Yet, our universe exists and contains life. Since it defied nearly-impossible odds, the reasoning goes, it must have had some assistance beyond pure chance. The “assistance” in roulette wheel terms might be thought of as God picking up the ball and deliberately placing it in a slot corresponding to life. God intervenes in the chance event, making the highly improbable actual.

The high-probability camp’s perspective can also be thought of in terms of the roulette wheel. In this case, a high probability of life would translate into every slot in the wheel corresponding to a universe where life exists. No matter where the ball lands, a hospitable universe will be created. Chance can select any slot it desires, but the outcome will be the same. God enters the picture when we ask ourselves who made the roulette wheel and dictated the nature of possible universes. The wheel being as it is constitutes evidence of an intention to bring life about. In this instance, God creates living things by stacking the odds in our favor.
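
Here is a minimal simulation of the two pictures (my own sketch; the slot counts are invented and scaled down so it runs quickly):

```python
import random

def spin(wheel_size, life_slots):
    """Spin once; True if the ball lands in a life-supporting slot."""
    return random.randrange(wheel_size) < life_slots

spins = 100_000

# Camp 1's wheel: a vast wheel where almost no slots support life.
sparse_hits = sum(spin(1_000_000, 5) for _ in range(spins))

# Camp 2's wheel: every slot supports life.
guaranteed_hits = sum(spin(1_000_000, 1_000_000) for _ in range(spins))

print(f"sparse wheel:     life in {sparse_hits}/{spins} spins")
print(f"guaranteed wheel: life in {guaranteed_hits}/{spins} spins")
```

On the first wheel, a life-bearing spin is so rare that camp 1 reads one as evidence the ball was placed by hand; on the second, camp 2’s question shifts from the spin to whoever built a wheel that cannot miss.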

When we consider how both camps might imagine God, the tensions between them fade. High and low probabilities of life can both constitute evidence of a creator because they support different versions of her. Low probabilities imply the existence of a God that chose our universe out of innumerable alternatives. High probabilities suggest a God that creates life by making a desolate universe metaphysically impossible.

This doesn’t guarantee either type of God exists, though. Individuals may use high or low probabilities of life arising in the universe as evidence for certain types of Gods, but how effective these arguments are is an open question. At any rate, neither line of reasoning abuses probability.

Pandemics and ethnic violence

We begin with two questions:

  1. Under what conditions will a population persecute an ethnic minority during and after a pandemic?
  2. Who is most likely to instigate the violence?

Their answers are important. In case you haven’t heard, a deadly coronavirus has surfaced in Wuhan, China, and has now spread around the world. Some hold Asians responsible for the pandemic and act on their prejudice. A 16-year-old Asian-American boy in the San Fernando Valley was assaulted and sent to the emergency room after being accused of having the virus. In Texas, an assailant stabbed an Asian family in line at a Sam’s Club. Asians in communities like Washington D.C. and Rockville, Maryland are purchasing firearms en masse in an attempt to protect themselves. If general answers to our questions exist they may suggest ways to relieve ethnic tensions and prevent additional violence.

We turn to medieval Europe for guidance. It experienced a horrifying pandemic followed by decades of severe antisemitism. Our best bet to understand these questions is to examine the Black Plague and the subsequent treatment of medieval Jews.

The Black Plague

Without exaggeration, the Black Plague (also called the Black Death) was the worst pandemic in human history. From 1347, when it arrived in Italy, to 1351, between 40 and 60 percent of the entire European population perished. Aggregate figures obscure the losses of individual towns. Eighty percent of residents died in some locales, effectively wiping cities off the map (Jedwab et al., 2018). France was hit so hard that it took approximately 150 years for its population to reach pre-plague levels. Medieval historians attempted to capture the devastation of the plague by describing ships sailing aimlessly on the sea, their crews dead, and towns so empty that “there was not a dog left pissing on the wall” (Lerner, 1981).

[Figure omitted. Source: (Jedwab et al., 2018)]

The statistics are scary, but the plague was also visually terrifying. After catching the disease from an infected flea, an individual develops black buboes (lumps) in the groin and neck areas, vomits blood, and dies within a week. In severe cases, plague-causing bacteria circulate through the bloodstream and cause necrosis in the hands and feet, leading them to turn black and die while still attached to the body. Once these symptoms develop, victims usually die within 24 hours. Seeing most of your acquaintances develop these symptoms and die was arguably more psychologically damaging than contracting the disease yourself.

The scientific response to the pandemic, if it can be called one, was weak. Medieval medicine was still in the throes of miasma theory, which held that disease was spread by “bad air” that emanated from decomposing matter. Still, miasma was deemed an insufficient explanation for a calamity of this size. The most educated in medieval Europe saw the plague not as a natural phenomenon beyond the grasp of contemporary medicine, but as a cosmic indicator of God’s wrath. Chroniclers claim the plague originated from sources as diverse as “floods of snakes and toads, snows that melted mountains, black smoke, venomous fumes, deafening thunder, lightning bolts, hailstones, and eight-legged worms that killed with their stench,” all ostensibly sent from above to punish man for his sins (Cohn, 2002). A common addendum was that these were all caused in one way or another by Jews, but we’ll get to that later.

Some explanations, if squinted at, bear a passing resemblance to the secular science of today. They invoked specific constellations and the alignment of planets as the instigators of the plague, drawing out an effect from a non-divine cause. How exactly distant celestial objects caused a pandemic is unclear from their accounts, though.

In general, medieval explanations of the plague reek of desperate mysticism. They had quite literally no idea where the disease came from, how it worked, or how to protect themselves from it.

Medieval Jews

Jewish communities were widespread in Europe by the time the plague began. In the eighth century, many Jews had become merchants and migrated from what we today call the Middle East to Muslim Spain and southern Europe. Over the next two centuries, they spread northward, eventually reaching France and southern Germany before populating the rest of the region (Botticini & Eckstein, 2003). Jews were overwhelmingly located in large urban centers and specialized in skilled professions like finance and medicine. By the 12th century, scholars estimate that as many as 95% of Jews in western Europe had left farming and taken up distinctly urban occupations (Botticini & Eckstein, 2004).

Map indicating the percent of neighboring towns with resident Jews by 1347 in a limited dataset. In other words, it does not account for every town with a Jewish population in Europe but should give you some idea of settlement patterns. The fact that England is devoid of Jews is not a limitation of the data; they were expelled from the region in 1290. Map source: (Jedwab et al., 2018)

Unfortunately, the medieval period offered Jews little respite from persecution. Antisemitism was constant and institutionalized, as Christianity was the official religion of most states. Restrictions were placed on the ability of Jews to proselytize, worship, and marry. For example, in 1215, the Catholic Church declared that Jews should be differentiated from the rest of society via their dress to prevent Christians and Jews from accidentally having “sinful” intercourse. These rules eventually begot requirements that Jews wear pointed hats and distinctive badges, foreshadowing the infamous yellow patches of the Holocaust.

Medieval antisemitism was violent as well as bureaucratic. Pope Urban II fueled Christian hysteria by announcing the First Crusade in 1095, and shortly after, bands of soldiers passing through Germany on their way to wage holy war in Jerusalem killed hundreds of Jews in what became known as the Rhineland Massacres. These attacks were so brutal that there are accounts of Jewish parents hearing of a massacre in a neighboring town and killing their own children, and then themselves, rather than face the crusaders. Explanations for these massacres vary. Some scholars claim they were fueled by the desire to seize the provisions of relatively wealthy Jews in preparation for travel to the Middle East. Others attribute them to misplaced religious aggression intended for Muslims but received by Jews due to their proximity and status as “non-believers.” While there are earlier recorded instances of antisemitism, the pogroms of the First Crusade are believed to represent the first major instance of religious violence against the Jews.

Strangely, the Medievals seem to vacillate between ethnic loathing and appreciation for Jewish economic contributions. The Catholic Church forbade Christians from charging interest on loans in 1311, allowing Jews to dominate the profession. As a result, they were often the only source of credit in a town and hence were vital to nobility and the elite. This, coupled with Jews’ predilection to take up skilled trades, gave leaders a real economic incentive to encourage Jewish settlement. Rulers occasionally offered Jews “promises of security and economic opportunity” to settle in their region (Jedwab et al., 2018).

The Black Plague and Medieval Jews

As mentioned, the plague was a rapid, virulent disease with no secular explanation, and the Jews were a minority group—the only non-Christians in town—with a history of persecution. Naturally, they were blamed. Rumors circulated that Jews manufactured poison from frogs, lizards, and spiders, and dumped it in Christian wells to cause the plague. These speculations gained traction when tortured Jews “confessed” to the alleged crimes.

The results were gruesome. Adult Jews were often burned alive in the city square or in their synagogues, and their children forcibly baptized. In some towns, Jews were told they were merely going to be expelled but were then led into a wooden building that was promptly set ablaze. I will spare additional details, but these events spawned a nauseating genre of illustrations if one is curious. As the plague progressed, more than 150 towns recorded pogroms or expulsions in the five-year period between 1347 and 1351. These events, more so than the Rhineland massacres, shaped the distribution of Jewish settlement in Europe for centuries afterward.

Black Death Jewish pogroms on a map of the Weimar Republic (early 20th century Germany). “No data” means there are no records of Jewish settlement in the area at the time of the Black Death (Voigtländer & Voth, 2012).

If we can bring ourselves to envision these pogroms, we imagine a mob of commoners whipped up into a spontaneous frenzy. Perhaps they have pitchforks. Maybe they carry clubs. Given what we know about the economic role of medieval Jews, you might impute a financial motive to the villagers. It’s possible commoners owed some debt to the Jewish moneylenders that would be cleared if the latter died or left town. If asked what income quintile a mob member falls into, you might think that’s a strange question, and then answer: the lowest. Their pitchforks indicate some type of subsistence farming, and only the poorest and least enlightened would be vulnerable to the crowd mentality that characterizes such heinous acts, you would think.

If so, it might be surprising to hear the Black Death pogroms were instigated and performed by the elite of medieval society. (Cohn, 2007) writes that “few, if any, [medieval historians] pointed to peasants, artisans, or even the faceless mob as perpetrators of the violence against the Jews in 1348 to 1351.” Bishops, dukes, and wealthy denizens were the first to spread the well-poisoning rumors and were the ones to legally condone the violence. Before even a single persecution in his nation took place, Emperor Charles IV of Bohemia had already arranged for the disposal of Jewish property and granted legal immunity to knights and patricians to facilitate the massacres. When some cities expressed skepticism at the well-poisoning allegations, aristocrats and noblemen, rather than the “rabble,” gathered at town halls to convince their governments to burn the Jews. Plague antisemitism, by most accounts, was a high-class affair.

Mayors and princes recognized the contagious nature of violence. If elites persecuted the Jews, they thought, the masses might join in and the situation could spiral out of control. As a result, the wealthy actively tried to exclude the poor from antisemitic activities. Prior to a pogrom, the wealthy would circulate rumors of well-poisoning. Those of means would then capture several Jews, torture them into “confessing,” and then alert the town government. Its notaries would record the accusations, and the matter would be presented before a court. After a (certain) guilty verdict, patrician leaders would gather the Jews and burn them in the town square or synagogue. Each step of the process was self-contained within the medieval gentry, providing no opportunity for commoners to amplify the violence beyond what was necessary. Mass persecutions often take the form of an entire society turning against a group, but the medieval elites sought to insulate a substantial amount of their population from the pogroms. Ironically, they feared religious violence left unchecked [1].

Persecutions were widespread, but not universal. (Voigtländer & Voth, 2012) report that only 70 percent of towns with a Jewish population in plague-era Germany either expelled or killed their Jews. To be sure, 70 percent is a substantial figure, but the fact that it is not 100 percent demonstrates that there were conditions under which ethnic violence would not ensue. What were these conditions?

In pursuing this question, (Jedwab et al., 2018) observed something strange. As plague mortality increased in some towns, the probability Jews would be persecuted actually decreased. A town where only 15 percent of inhabitants died is somehow more antisemitic than one where 40 percent did. How odd. Common sense tells us that the more severe an unexplained disaster, the stronger the incentive to resolve ambiguity and blame an outgroup. Why reserve judgment when things are the worst?

It turns out economic incentives were stronger than the desire to scapegoat. Jedwab and colleagues only observed the inverse relationship between mortality and probability of persecution in towns where Jews provided moneylending services. The Jews’ unique economic role granted them a “protective effect,” changing the decision calculus of would-be persecutors. It’s true there was still an incentive to persecute Jews, since debts would be cleared if the moneylenders died, but this is a short-term gain with long-term consequences. If all Jews in a town were eliminated, future access to financial services was gone. Everyone in town would be a Christian, and thus forbidden from extending credit. As mortality increased, Jews qua moneylenders became increasingly valuable, since killing or expelling them would exacerbate the economic crisis that accompanies losing a significant fraction of your population. As a result, they were spared [2].
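
The decision calculus can be put in toy numerical terms. The sketch below is mine, with entirely hypothetical numbers, and is not Jedwab and colleagues’ actual model: a pogrom yields a one-time gain from cleared debts but forfeits a stream of credit access whose value grows with mortality.

```python
def net_gain_from_pogrom(mortality, debt_relief=400.0,
                         base_credit_value=30.0, years=10):
    """Net gain of eliminating the town's only lenders (arbitrary units)."""
    # Assume a year of credit access is worth more the worse the crisis is.
    annual_credit_value = base_credit_value * (1 + mortality)
    return debt_relief - annual_credit_value * years

for mortality in (0.15, 0.40):
    gain = net_gain_from_pogrom(mortality)
    verdict = "persecute" if gain > 0 else "spare"
    print(f"mortality {mortality:.0%}: net gain {gain:+.0f} -> {verdict}")
```

With these invented parameters, the calculus flips from “persecute” to “spare” somewhere between 15 and 40 percent mortality, the qualitative pattern the paper reports for moneylending towns.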

This is consistent with the picture we developed earlier of the upper classes undertaking plague pogroms. Often, only the wealthy utilized Jewish financial services, and thus only they were sensitive to the financial implications of killing the sole bankers in town (Cohn, 2007). If commoners were the major perpetrators of Black Death massacres, Jedwab and colleagues would probably not encounter evidence of a protective effect tied to Jewish occupations. Indeed, they looked for, and could not find, the protective effect in towns where Jews were doctors, artisans, or merchants. It only appeared when they provided financial services.

Persecution frequency also fell dramatically after the initial wave of the plague. The disease made semi-frequent appearances in the decades and centuries after 1347, but not one of them sparked as much violence as the first bout. A potential explanation is that many Jews had already been killed or expelled from their towns, leaving nobody to persecute. Plague severity was lower in later recurrences, so there might have been less of an incentive to persecute a minority for a mild natural disaster as opposed to a major one.

Jewish persecutions by date. Source: (Jedwab et al., 2018)

I’ll describe a somewhat rosier phenomenon that could have contributed to this decline in persecutions. Remember that the medieval intellectual elite was clueless when it came to the causal mechanisms of the plague. Among other theories, they believed it spread via bad air, worm effluvium, frogs and toads that fell from the sky, the wrath of God, or Jewish machinations. Because Jews were the only salient member of this list present in most towns, they had borne the brunt of popular frustration and become a scapegoat [3].

Yet science did progress, slowly. By 1389, roughly 40 years after Europe’s first encounter with the plague, doctors noticed that mortality had fallen after each successive recurrence of the disease. Instead of attributing this to fewer toads in the atmosphere or less worm stench, they settled on the efficacy of human interventions. Institutions had learned how to respond —the quarantine was invented in this period— and medicine had progressed (Cohn, 2007). The Medievals had increasingly effective strategies for suppressing the plague, and none of them involved Jews. Blaming them for subsequent outbreaks would be downright irrational, as you would be diverting time and resources away from interventions that were known to work.

I want to be clear that this did not end antisemitism in Europe. For centuries, Jews were —and still are— killed over all types of non-plague-related allegations, like host desecration, blood libel, and killing Jesus. Yet, they enjoyed a period of relative calm after the first wave of the Black Death, in part, I believe, because their persecutors had begun to understand the actual causes of the disease.

Generalizations

1. Under what conditions will a population persecute a minority?

Persecutions are more likely when members of the minority in question don’t occupy an essential economic niche. Jews as moneylenders provided a vital service to members of the medieval elite, so the prospect of killing or expelling their only source of credit may have made the elite think twice about doing so.

2. Who is most likely to instigate the violence?

Wealthy, high-status members of medieval society instigated and undertook the Black Plague pogroms. They were responsible for spreading well-poisoning rumors, extracting “confessions,” processing and ruling on accusations, and attacking Jewish townsfolk. Some medieval elites even conspired to insulate the poor from this process for fear of the violence escalating beyond control.


How applicable are these conclusions? Can they tell us anything specific about ethnic violence and COVID-19?

Probably not. For starters, the world today looks nothing like it did during the 14th century. The Medievals may have discovered how to attenuate the effects of the plague, but it remained more or less a mystery for centuries afterward. We didn’t get germ theory until the 19th century, and it wasn’t until 1898 that Paul-Louis Simond discovered the plague was transmitted to humans through fleas.

Perhaps as a result of scientific progress, we’re also much less religious, or the nature of our religiosity has changed. Very few believe hurricanes are sent by God to punish sinners, and we don’t supply theological explanations of why the ground trembles in an earthquake. We have scientific accounts of why nature misbehaves. As a result, we’re skeptical of (but not immune to) claims that a minority ethnic group is the source of all our problems. In short, we have the Enlightenment between us and the Medievals.

Cosmopolitanism is also on our side. Jews were often the only minority in a town and were indistinguishable without special markers like hats and badges. To this day, parts of Europe are still pretty ethnically homogeneous, but every continent has hubs of multiculturalism. 46 percent of people in Toronto were born outside Canada. 37 percent of Londoners hail from outside the UK. Roughly 10 percent of French residents are immigrants. All this mixing has increased our tolerance dramatically relative to historical levels.

Perhaps most importantly, COVID-19 is not even close to the plague in terms of severity. The medical, economic, and social ramifications of this pandemic are dire, but we are not facing local death rates of 80 percent. We do not expect 40 to 60 percent of our total population to die. COVID-19 is a challenge that is taxing our greatest medical minds, but we have a growing understanding of how it functions and how to treat it. It’s definitely worse than the flu, but it’s no Black Plague.

An investigation into the Black Plague and medieval Jews can provide historical perspective, but its results are not easily generalizable to the current situation. The best we can say is that when things are bad and people are ignorant of the causes, they will blame an outgroup they do not rely on economically. The cynical among us perhaps could have intuited this. Thankfully, things aren’t as bad now as they were in 1347, and we are collectively much less ignorant than our ancestors. We’ve made progress, but intolerance remains a stubborn enemy. What Asians have already endured, and will endure, as a result of this pandemic supports this.

Acknowledgments

Huge thanks to Michael Ioffe and Jamie Bikales for reading drafts.

Footnotes

[1] I draw heavily on (Cohn, 2007) in the preceding two paragraphs. It’s definitely true some pogroms were instigated and undertaken by the poor while the elites sought to protect the Jews. For instance, Pope Clement VI issued an (unheeded) papal bull absolving Jews of blame. Yet, Cohn has convinced me (an amateur) that these cases constitute a minority.

Still, the class demographics of medieval pogroms are a matter of scholarly debate. (Gottfried, 1985) describes members of Spanish royalty unsuccessfully attempting to protect their Jewish denizens. However, he does not specify whether the antisemitism was primarily instigated by the masses or regional leaders. (Jedwab et al., 2018) mentions “citizens and peasants” storming a local Jewish quarter, but whether local antisemitism was spurred by the gentry or not is also unclear. (Haverkamp, 2002) supposedly also argues for the commoner-hypothesis, but the article is written in German and thus utterly inaccessible to me.

[2] A simple alternate explanation for the inverse relation between mortality and probability of persecution is that there are fewer people left to facilitate a pogrom or expulsion at higher levels of mortality. Jedwab and colleagues aren’t convinced. They note that the plague remained in a town for an average of 5 months, and people weren’t dying simultaneously. It’s entirely possible that even at high mortality a town can muster enough people to organize a pogrom. Also, many of the persecutions were preventative. Some towns burned their Jewish population before the plague had even reached them in an attempt to avert catastrophe.

[3] God was also “present” —in the form of churches and a general religious atmosphere— in medieval Europe, so he was another popular figure to attribute the plague to. However, you can’t really blame God in a moral sense for anything he does, so adherents to this view blamed themselves. So-called “flagellants” traveled from town to town lashing themselves in an attempt to appease God’s wrath and earn salvation. This was strange and radical even by medieval standards. Pope Clement thought the movement heretical enough to ban it in 1349 (Kieckhefer, 1974).

What I cited

(These are all the scholarly sources. My guideline is that if I downloaded something as a pdf and consulted it, it’ll be here. Otherwise, it’s linked in the text of the post).

Botticini, M., & Eckstein, Z. (2003, January). From Farmers to Merchants: A Human Capital Interpretation of Jewish Economic History.

Botticini, M., & Eckstein, Z. (2004, July). Jewish Occupational Selection: Education, Restrictions, or Minorities? IZA Discussion Papers, No. 1224.

Cohn, S. (2002). The Black Death: End of a Paradigm. American Historical Review.

Cohn, S. (2007). The Black Death and the Burning of Jews. Past and Present, 196.

Gottfried, R. S. (1985). The Black Death: Natural and Human Disaster in Medieval Europe. Free Press.

Haverkamp, A. (2002). Geschichte der Juden im Mittelalter von der Nordsee bis zu den Südalpen. Kommentiertes Kartenwerk. Hannover: Hahn.

Jedwab, R., Johnson, N. D., & Koyama, M. (2018, April). Negative Shocks and Mass Persecutions: Evidence from the Black Death. SSRN.

Kieckhefer, R. (1974). Radical tendencies in the flagellant movement of the mid-fourteenth century. Journal of Medieval and Renaissance Studies, 4(2).

Lerner, R. E. (1981, June). The Black Death and Western European Eschatological Mentalities. The American Historical Review, 533-552.

Voigtländer, N., & Voth, H.-J. (2012). Persecution Perpetuated: The Medieval Origins of Anti-Semitic Violence in Nazi Germany. The Quarterly Journal of Economics, 1339-1392.

A dialogue in defense of business majors

Sophia Booth and Taylor Hutchins are both freshmen in college. Sophia has just told Taylor she wants to switch her major to business.

Taylor: Why would you do that? How can you learn anything about business in the classroom? They’re only going to teach you theory, and we all know that’s worse than useless when you go out into the real world.

Sophia: What makes you qualified to piss on business majors like that? Just because you’re a mechanical engineering major doesn’t give you a license to demean an entire subject you haven’t even studied.

Taylor: Are you kidding me? The business model canvas? The Boston matrix? You’re going to spend your undergraduate years filling out forms and pointing out farm animals as opposed to learning anything useful. Even if that stuff was the cutting edge of business knowledge at one point, it’s surely going to be outdated by the time you graduate. You think you’re learning something applicable but it’s all empty theory.

Sophia: Woah, Taylor, you have it all wrong. I can see how you think we’re at college to learn eternal truths about how things operate and then apply them, but that’s just not how higher education works.

Taylor: What do you mean? Are you saying my education is as worthless as a SWOT analysis?

Sophia: No. You’re probably going to apply much of what you learn here in the future, but you need to understand you’re in a minority. The rest of us come to college to signal.

Taylor: You’re making less and less sense. What the hell is a “signal?” We’re all here to learn. That’s why our university exists in the first place.

Sophia: The people that are here to learn things for their jobs are future engineers, programmers, scientists, doctors, professors, and researchers. If I work outside any of those fields I will most likely never draw upon anything I learned in undergrad. Firms like Deloitte hire junior consultants from any major and train them on the job. The way most people get employed is not by learning whatever skills might be necessary to actually do their job, but by sending strong signals to the labor market.

Employers want to know I’m sharp, have a good work ethic, and am enthusiastic about working for their company. It’s possible for me to demonstrate these things by acing my classes, maxing out on credit-hours, and radiating excitement during my job interview. These are the signals I’m talking about, and I’ll get employed by sending them.

Taylor: Wait, so you’re on my side now? I hear you saying that what most people learn in college is useless. That gives you a much stronger reason to do mechanical engineering or computer science rather than business.

Sophia: My point is that even if you’re right and nothing in my business degree is applicable to my career, it doesn’t matter. A business degree sends a strong signal to the job market, so getting one is not a total waste of time. I can still show employers I’m sharp (by acing my classes), hard-working (by taking a lot of credits), and enthusiastic about being their employee. Remembering any of the stuff in class just doesn’t matter.

Taylor: So you only care about your signals, right? The actual substance of what you learn doesn’t matter, yes? That sounds pretty cynical.

Sophia: I’m just not willing to delude myself into thinking everyone learns applicable things, including in business. It could be the case what I’m learning is useful, but it really doesn’t matter. Academically, college is only a big obstacle course with employers waiting at the other end to see who gets through first.

Taylor: You said employers want to know you’re sharp, right?

Sophia: Yeah.

Taylor: So you should still switch to computer science. It’s much harder than business, so if you do well you’ll be sending a much stronger “signal” to the job market, as you say. Straight A’s in computer science say much more about your ability than A’s in business, and employers know that. There is no reason to get a business degree.

Sophia: Sure, you’re right that computer science sends a better signal in the “sharpness” category, but there are still two other types of signals I’m trying to send. If I do business, I can still take a lot of credits and show employers I’m hard-working. We’ll say business and computer science are approximately equal on that front. Yet, business sends a much better enthusiasm signal. A business degree tells employers I’ve been thinking about business-related things for four years. Who cares if those things are applicable. My willingness to do that demonstrates a deep commitment to private industry that doesn’t come across in a computer science degree. Employers understand I wanted a job in business when I was 18-19 and had enough conviction to stick with it. Sure, doing well in computer science would show I’m sharp, but business says something deeper about my attitude and commitment, two vital things about any potential employee. Plus, I can still ace my management classes and check the sharpness box.

Taylor: You know you still won’t learn anything about doing business.

Sophia: Maybe I will, maybe I won’t, but who cares? I’m sending a good signal and will be able to get a job in business in the end. Still, we both know I’ll probably learn at least one applicable thing. I might have to take accounting or finance, and everybody agrees those are useful.

Taylor: I still think you’re making a mistake. You can send a good signal in a different major. Just switch to engineering and we can do problem sets together.

Sophia: Too late — I have a meeting with my academic counselor now. See you later!


Scholium

Is Sophia convincing? I think so. She has given a strong argument as to why you should pursue a business degree, but it’s crucial to distinguish between what she is and is not saying.

Sophia is not arguing everything people learn in a business major is inapplicable.

Her argument is agnostic on this point. Maybe she’ll learn applicable things, and maybe she won’t, but it does not matter. To her, there’s no use arguing about applicability. Education is signaling, and all she’s claiming is that doing business sends a good signal regardless of whether applicable learning happens.

This is a strong and important point. It allows someone to say something like this:

“Ok business major skeptic. Let’s assume, for the sake of argument, I learn nothing applicable in the business major. I’m still making a good decision because my signal to the labor market will be strong and I will be hired.”

That’s it. You don’t need to say anything about how management 101 is highly applicable or how accounting is useful. Signaling will justify your decision regardless.

A business major can certainly strengthen her case by saying, “oh by the way, management 101 is great and finance is applicable,” but these are independent points. What Sophia has shown is that you can theoretically concede a lot of ground and have a strong position.

Jewish occupational selection

I came across this paper while researching a forthcoming post on Medieval Jews and the Black Death. The abstract:

This paper documents the major features of Jewish economic history in the first millennium to explain the distinctive occupational selection of the Jewish people into urban, skilled occupations. We show that many Jews entered urban occupations in the eighth-ninth centuries in the Muslim Empire when there were no restrictions on their economic activities, most of them were farmers, and they were a minority in all locations. Therefore, arguments based on restrictions or minority status cannot explain the occupational transition of the Jews at that time. Our thesis is that the occupational selection of the Jews was the outcome of the widespread literacy prompted by a religious and educational reform in the first century ce, which was implemented in the third to the eighth century. We present detailed information on the implementation of this religious and educational reform in Judaism based on the Talmud, archeological evidence on synagogues, the Cairo Geniza documents, and the Responsa literature. We also provide evidence of the economic returns to Jewish religious literacy.

This reminds me of Protestant advantages that accrued due to increased literacy. What would the 21st century equivalent of this be? A religion that mandated all adults teach their children algebra? C++?

Questions to ask people

I like to have a couple of these in my pocket so when I meet someone new I can ask something that hopefully yields an interesting answer. My go-to for years has been “what’s the worst advice you’ve ever received?”

-What’s a story your parents like to tell about you? (credit to Anshul Aggarwal for this. I have only heard this one in a formal interview setting, though).

-What’s the best book you’ve read that you disagree entirely with?

-Who did you worship when you were younger?

-Do you think the rate of technological progress has slowed over the last 50 years? (then you proceed to convince them that it has).

-What’s the worst decision you made that turned out well? (and vice-versa: best decision that went terribly)

-Do you know of any fun online blogs?

-What do you do to remain weird?

-What’s something only people from your hometown do?

-Why do you think people buy expensive things they don’t need?

-What’s something you take seriously?

-What’s your opinion of Los Angeles? (works in any locale)

-What’s the taboo thing to do in [insert person’s hobby]?

-What’s the weirdest joke you find funny?

-What do you think is the most underrated personality trait?

-I don’t think computer science is really a science. Do you? (only works for CS people).

-If you could write an op-ed for the NYTimes, what would it be about?

-Do you trust economists?

-What’s the best city that’s not your hometown?

Educational Signaling

I recently finished Bryan Caplan’s “The Case Against Education” which is a rollercoaster of a book. Caplan basically makes two claims: education is much more effective at signaling worker productivity than imparting practical/employable skills, and as a result of this we should cut state and federal funding for it entirely.

It’s natural to approach these assertions with a healthy dose of skepticism. I’ll withhold judgment on the second claim for now, but I admit I am moved by his arguments for educational signaling. In short, he demonstrates we learn nothing in school beyond basic literacy and numeracy. Take science. Caplan supplies a table with data from the General Social Survey and his own corrections for guessing [table omitted]. He has similar tables for basic political and historical knowledge. Clearly, we retain very little in the form of pure information.

What about the old adage that education is supposed to “teach you how to think”? Caplan has an answer for that as well. He cites studies demonstrating that the entirety of one’s undergraduate education increases general reasoning ability only marginally, improving specific kinds of reasoning depending on the choice of major. “Learning transfer,” or the ability to apply principles learned in one situation to another, is also rare, especially when the initial principles were learned in a formal context. Self-reflection confirms this. How many times have you explicitly used information or a pattern of reasoning developed in class to solve a problem outside of a test?

In fact, my own decision to study philosophy, and expect employment post-graduation, presupposes a signaling theory of education. I do not plan on becoming a philosophy professor, but that is the only occupation where what I learn in class will be relevant. Nobody uses Kant on the job, and I knew that ex-ante. Instead, I have to rely on the fact that choosing philosophy signals something about me that employers value. In Caplan’s terms, I’m hoping it demonstrates my ability, conscientiousness, and/or conformity, as these are the primary signals a college degree functions to send.

I’m a convert. Signaling explains why students cheer when class is canceled but are reluctant to skip, why MOOCs can provide an objectively better educational experience than many brick-and-mortar institutions but pose no threat to the establishment, why many STEM graduates do not work STEM jobs, and why the years of college that see the most payoff are graduation years. The personal and academic evidence for signaling is gigantic. Why ignore it?

The individual implications for this conclusion are Excellent Sheepy. Because your degree + extracurriculars are the only measurements employers have of your productivity, maximize those. Do the minimum amount of work for each class. Cheat on tests. Join as many clubs as possible and try to get a leadership position in each. You’re not going to remember course material, and it’s surely not going to be relevant to your job, so who cares? Even if you’re overcredentialed for your ability and turn out to be a poor employee, you’ll stick around as long as your incompetence isn’t egregious. Firms will keep a marginal employee for years to delay finding a replacement and upsetting other employees.

The societal implications are also gigantic. If education is just signaling, should there be less of it? (Yes, but I’m not a full Caplanian). If education is just signaling, should it not be a human right? If education is just signaling, shouldn’t e-learning companies create better, more informative credentials rather than trying to improve content delivery?

I didn’t check my grades for two years

By the time college rolled around, I had made the decision not to check my grades at all. They had, quite literally, ruled my life during high school, and I was intent on making sure the future was different. This was not (fully) an irresponsible retreat from reality, but the consequence of some positions I began to hold about schools and learning. As a result, up until the end of my sophomore year, I never knew what letter grade I had received in a class or even what my GPA was. I never even got around to learning how to check my grades.

I knew how I did on tests and essays and the like. My policy was that, if the assessment is handed back to me, I’m allowed to know the score. In fact, knowing those types of scores was of the utmost importance to me.

To explain how I arrived at this strange and perhaps irresponsible decision, I am going to outline the general reasoning I used to get there. It will help to define some terms at the start. I’m going to use “marks” to refer to the individual scores one receives on specific assignments. 87/110 is a mark that one might receive on a test. “Grades” refer to numbers that are usually composed of the weighted averages of all of your marks. 88.4% is an example of a grade. “A,” “B,” “C,” etc. are also examples of grades because they are assigned by calculating the weighted average of your marks.
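
In code, the distinction looks like this (a minimal sketch; the marks are made up, chosen so the weighted average lands on the 88.4% from the example above):

```python
marks = {                   # assignment: (points earned, points possible, weight)
    "midterm": (87, 110, 0.30),
    "essay":   (18, 20, 0.20),
    "final":   (140, 150, 0.50),
}

# Each mark is a score on a single assignment; the grade is their
# weighted average.
grade = sum(earned / possible * weight
            for earned, possible, weight in marks.values())
print(f"grade: {grade:.1%}")  # -> grade: 88.4%
```

Letter grades are just coarse bins over this same weighted average.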

Here’s the reasoning:

  1. The objective of school is to learn.
    1.1. You should do things that contribute to this objective.
  2. Learning, roughly, consists of knowing things you did not know before.
    2.1. This is a two-part process. First, you must identify the things you do not know. Next, you learn those things.
  3. Knowing the marks you receive on individual assignments helps identify the things you do not know. Example: if you receive poor marks on a quiz that covers the binomial theorem, this is a good indicator you don’t understand the binomial theorem.
  4. Knowing marks on assignments contributes to the first part of learning. (2.1, 3)
  5. You should know your marks. (4, 1.1)

As the example in (3) suggests, marks are helpful because they give you feedback at a low level. What you receive on a single essay or test can often tell you what you know or don’t know with a fair amount of specificity. This, I admit, mostly happens if you get the actual assignment handed back to you with a certain level of feedback. If you only know the “true mark,” or what percentage of points you received, it can sometimes be difficult to figure out the exact extent of your knowledge. Luckily, individual assignments often cover only a handful of topics, so it can be easier to infer what you missed.

So far, we’ve only established that you should know your marks. Great. How did I arrive at the idea that these are the only things you should check?

This conclusion was established negatively. I couldn’t find any argument that appeals to learning and says I should check my grades. Let’s try to start from the first two premises of the prior argument, incorporate grades, and arrive at the conclusion that we should check them.

  1. The objective of school is to learn.
    1.1. You should do things that contribute to this objective.
  2. Learning, roughly, consists of knowing things you did not know before.
    2.1. This is a two-part process. First, you must identify the things you do not know. Next, you learn those things.

We can’t use the same move as last time. Grades, because they are a weighted average of many individual assignments, can’t carry information about what specifically you don’t know. If you get a C in a Shakespeare class, you can convincingly say “I don’t know Shakespeare,” but how helpful is that? You’ve identified what you don’t know on a superficial level, but that statement doesn’t provide a useful direction towards actually learning the stuff. Are you shaky with Othello? Does the play format give you fits? Can you even penetrate Shakespearean English? A poor grade only says you need to do work. It does a horrible job of specifying exactly what needs to be done.

We’ve seen that grades are ineffective at identifying the things you do not know. The only remaining way to relate them to learning, and establish that you should pay attention to them, is to claim they somehow help you learn those things.

Now, how can grades help you in the act of learning? This is unclear. In what way does knowing the weighted average of your marks allow you to understand academic material better? Knowledge about the points you’ve earned doesn’t seem related to your ability to comprehend the concepts themselves.

I can see a skeptic coming up with a counterargument. They might say that knowing grades can give someone motivation to study and learn. They might want a better grade rather than a worse one, so knowing where they stand pushes them to spend time with the material. An attitude like this violates the first premise, though. A student who is motivated by grades supplants learning with credentialism as the goal of school. The student would have to reject premise 1 and commit themselves to a school experience that makes learning an afterthought. No good.


This reasoning, in some inchoate form, was floating around my head when I decided not to check my grades after my first quarter of college, and the second, and the third, and the fourth, etc… I had a faint idea of what my GPA was since I saw the marks on the tests I got back, but it was very vague. Basically, I knew I wasn’t going to be on academic probation anytime soon.

I must also emphasize this is the literal truth. I didn’t peek here and there or do some back-of-the-napkin math to get an approximate sense of my grades. I was in complete and utter ignorance of my official GPA.

And it was wonderful. For the first time in my life, I felt a deep sense of academic freedom. I understood that I was at university to learn, and my behavior was fully consistent with this fact. I did close readings of texts when I could have skimmed them. I revised essays two, three times when I knew the first draft would do. I read more books in my free time than I had since middle school. I felt comfortable talking candidly with my professors and TAs because I wanted to learn, not grovel.

This doesn’t mean my college experience was/is faultless. An unambiguous focus on learning was somewhat of a compensatory mechanism meant to address the many faults I found in my institution. Nonetheless, the experiment was a success.

Readers at this point are perhaps left with several questions. “Riley,” they ask, “do you still not know your grades? Why submit yourself to such paralyzing uncertainty? Don’t you understand your GPA determines a significant part of your adult life, playing an instrumental role in considerations like, but not limited to, your attainment of internships, full-time jobs, graduate school admissions, scholarships, loans, and other professional opportunities? Are you out of your mind?!?”

In response to the first question, I currently know my GPA. This experiment concluded at the end of my sophomore year, but the effects are, hopefully, permanent. I understand what it’s like to put learning first, and, though it takes much effort, I intend to keep it this way.

An explanation of why the experiment ended addresses the remaining questions. I began to pay attention to my grades again because I realized the argument I gave above is wrong. Premise 1 is faulty. Ask anyone who isn’t a philosophy major what the purpose of school is and they will say anything but learning. Formal education can be about creating a workforce, increasing social mobility, instilling civic knowledge, cultural assimilation, personal maturation, landing a job, proving something to society, or proving something to yourself. All of these are valid ends of the enterprise. Surely, you can find an argument that appeals to one of them and concludes you should check your grades. Instead of reading “the objective of school is to learn,” premise 1 should say “an objective of school is to learn.”

Do I regret this decision? Absolutely not. I acted in full accordance with my values. Were my values those of an educational idealist, dismissive of the many social, cultural, and economic objectives of formal schooling? Sure. Should you ignore values arrived at via ample reflection because you’re unsure if they will change in the future? Almost never.

I’ll maintain that learning should be the principal objective of school, or at least near the top of the list. However, as soon as you introduce other grade-influenced ends into the mix, saying you should not check your grades is indefensible. That’s why I know my GPA now. I don’t obsess over it. I don’t stake my emotional health on whether it twitches in one direction or another. It is a metric that, for one reason or another, people care about. If I want to convince these people I deserve to study with them, work with them, or use their money, I shouldn’t neglect it entirely.

Yet, I remain committed to learning. I’ve seen what wonderful academic experiences are possible if I let it motivate my decision making. I recommend you take this idea seriously. Taking it seriously doesn’t mean ignoring your grades. Just check them a little less often.

 

Did household appliances actually save time? Does this matter?

I used to have this idea that the introduction of electric appliances circa 1960 dramatically reduced the number of hours people, especially housewives, spent on domestic tasks. The thought is pretty intuitive. It’s quicker to wash clothes with a washing machine than with a tub and washboard. Dishwashers are more efficient than washing dishes by hand. In the span of a couple of years, I thought, American housewives had much more free time. This idea fits nicely with the fact that women’s workforce participation rate increased in the 1960s and that the decade marked the beginning of second-wave feminism. Just perhaps, a decrease in the domestic load allowed women time to reflect on unjust norms and mobilize against them.

Unfortunately, the data do not support me. Household appliances are not the fiery instruments of social change I imagined them to be. There is no doubt they altered American domestic life, but they did not reduce the aggregate time spent on household duties.

Several time-use studies from 1925 to 1965 corroborate this.

[Figure: housewives’ weekly hours of home production, 1925–1965]
Source: (Ramey 2009)

As we can see, total home production, or the amount of time housewives spent on domestic tasks, remained approximately constant from the 20s to the 60s. This is surprising considering home appliances diffused rapidly during this period. In 1925, fewer than 20% of American households had a washing machine. By 1950, more than 75% of them did (Bowden and Offer 1994).

However, the allocation of time did change between the decades studied. Hours spent in food preparation and care of clothing decreased and were shifted towards general managerial tasks (purchasing, management, travel, and other). We can imagine appliances, along with a greater reliance on store-ready food, contributing to decreases in food-prep time for the 1960s housewife. We can also imagine frequent trips to the grocer increasing the time spent on travel, for example.

Even the presence of basic utilities didn’t seem to decrease time spent on domestic duties. Time-use studies comparing black and white families in the 1920s reach surprising results. Black housewives at the time, most without the luxury of a kitchen sink or running water, spent just as many hours on housework as their white counterparts. Both parties averaged around 53 hours a week. As it turns out, time spent on household production is not correlated with income (which, most likely, is correlated with the amount of technology in the home). How could this be? (Ramey 2009) explains:

[Since] lower income families lived in smaller living quarters, there was less home production to be done. Reid argued that apartment dwellers spent significantly less time in home production than families in their own houses; there was less space to clean and no houses to paint or repair. Second, there is a good deal of qualitative evidence that lower-income families produced less home production output. A home economist noted during that time “if one is poor it follows as a matter of course that one is dirty.” Having clean clothes, clean dishes, a clean house, and well-cared-for children was just another luxury the poor could not afford.

There are additional historical factors that explain why household production did not decrease with the introduction of appliances. The first views appliances as a substitute for another labor-saving device whose availability was dwindling: servants. In the early 20th century, it was common for middle-class housewives to hire help. Servants and maids assisted with cooking, cleaning, and caring for children, among other things. Foreign-born residents were usually employed in this capacity, but when immigration restrictions were imposed, domestic labor became scarce. The decline in servants and maids coincided with the rise of electrification and home appliances, allowing a single woman to accomplish what used to take a staff of two or three servants to do (Ramey 2009). Appliances, in this sense, compensated for the loss of a maid. Perhaps it used to take an hour for you and two servants to do the laundry. With a washing machine, it’s now possible to do the laundry by yourself in an hour. Time spent in home production remains the same, but there are now fewer “inputs” to the process.
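To make the “inputs” point concrete, here’s a minimal sketch in Python using the hypothetical laundry numbers above (the one-hour figure is my assumption, not a number from the time-use studies):

```python
# Hypothetical illustration of the servant/appliance substitution.
# All numbers are assumed for the sake of the example.

HOURS_PER_LOAD = 1  # wall-clock time to do the laundry (assumed)

# Before appliances: the housewife works alongside two servants.
workers_before = 3
person_hours_before = workers_before * HOURS_PER_LOAD  # 3 person-hours

# After appliances: the housewife works alone with a washing machine.
workers_after = 1
person_hours_after = workers_after * HOURS_PER_LOAD    # 1 person-hour

# The time-use studies measure the housewife's own hours, which are
# unchanged, even though total labor input fell by two-thirds.
print(f"Housewife's hours: {HOURS_PER_LOAD} -> {HOURS_PER_LOAD}")
print(f"Total person-hours: {person_hours_before} -> {person_hours_after}")
```

The housewife’s measured hours don’t budge, which is exactly what the studies record, even though the household’s total labor input has fallen.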

The second explanation appeals to changing standards. While appliances were being introduced to American families, new ideas about sanitation and nutrition were also spreading. Housewives learned they could positively influence the health and well-being of their family through their housework, so they opted to do more of it (Mokyr 2000). Even though they could have cleaned the house or cooked dinner much quicker than before, housewives decided to keep cleaning, or cook a more demanding meal, rather than take the time saved by appliances as leisure.

Do not fear. Hours spent in home production by women did begin to decrease near the end of the 1960s. Yet, this was not due to advances in labor-saving technology.

 

[Figure: weekly hours of home production by men and women over the twentieth century]
Source: (Ramey 2009)

Men began to contribute more. As a result, the average woman devoted only 30 hours per week towards household production in 2005, compared to the ~50 hours she may have expended in 1900. However, total hours spent in home production per week, the sum of male and female hours, has not shifted much over the century. The average person today likely devotes as much time towards domestic tasks as their ancestors did over 100 years ago.


I think there are potential implications here for thinking about the future. Many of us imagine that advances in technology will increase productivity dramatically, and as a result, we will be able to enjoy much more leisure. If we can accomplish in 20 hours what it used to take 40 hours to do, why work the additional 20 hours?

Yet, history suggests that advancements in domestic technology do not necessarily save us time. Their benefits roll over into things like increasing living standards before we see additional leisure. Standards for health and cleanliness may increase steadily as our capacity for nutritious meals and clean homes increases as well. Perhaps something vaguely similar to the immigrant/servant situation could happen: innovations in household technology might decrease time spent in home production, but our jobs become incredibly demanding, eating up any time saved by being able to cook or clean quicker. In both scenarios, leisure loses.
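Here’s a rough sketch of the trade-off (the numbers are mine, purely illustrative): a doubling of productivity can be spent entirely on leisure, entirely on higher standards, or somewhere in between. The history above suggests we land near the standards end.

```python
# Two ways to spend a hypothetical doubling of home productivity.
# All numbers are illustrative assumptions.

hours = 50          # weekly hours of home production (assumed baseline)
productivity = 1.0  # output per hour, in arbitrary units
baseline_output = hours * productivity

gain = 2.0          # productivity doubles

# Option A: take the gain as leisure (hold output constant).
leisure_hours = baseline_output / (productivity * gain)  # 25 hours/week

# Option B: take the gain as higher standards (hold hours constant).
standards_output = hours * (productivity * gain)         # 100 units/week

print(f"Leisure option:   {leisure_hours:.0f} hours/week, {baseline_output:.0f} units")
print(f"Standards option: {hours} hours/week, {standards_output:.0f} units")
```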

I don’t think this is disheartening news. Recall the black and white families studied in the 1920s. Even though housewives of both races spent approximately the same amount of time on household production, the white families were much better off. They enjoyed a higher standard of living due to basic utilities like running water and probably also other bits of technology present in their homes. Even though inputs, in terms of hours expended, are similar, the difference in outputs is astounding.

As a result, a more realistic version of the future might still have ~50 hours of home production for each household, but living standards that are much, much higher. Our health will be fantastic, we will conform to the highest standards of sanitation and hygiene, and we will unequivocally be better off, leisure be damned. We imagine an ideal future as a place with infinite leisure, but a society in which our standard of living is ten times as high while we put in the same amount of effort is still pretty damn good.

 

Works Cited:

Bowden, S., & Offer, A. (1994, November). Household Appliances and the Use of Time: The United States and Britain Since the 1920s. The Economic History Review, 725-748.

Mokyr, J. (2000). Why “More Work for Mother?” Knowledge and Household Behavior, 1870-1945. The Journal of Economic History.

Ramey, V. (2009, March). Time Spent in Home Production in the Twentieth-Century United States: New Estimates from Old Data. The Journal of Economic History, 1-47.

 

What I’ve Learned at 21 (with brief justifications/explanations)

It’s almost obligatory at this point. Around your birthday, you write a post/article detailing what you’ve learned thus far and some thoughts going forward. Yet, just because the “what I’ve learned at x” post is common doesn’t mean it’s without value.

Hopefully, this blog post is the first step of what will become a lifelong project. I already journal every day and record some of what I’ve learned there, but making a public list helps me clarify my thoughts and allows friends to challenge them. In the future, it can also afford me an opportunity to publicly affirm or refute something I said in previous years. It’s certain some things I mention below will turn out to be false. Other things might take on additional significance with the passage of time.

Instances of the “what I’ve learned at x” genre typically proceed as follows: a brief introduction, a flurry of aphorisms, and an optimistic conclusion drawing attention to the next 365 days. I’m keeping the beginning and end, but modifying the middle. Instead of dispensing the things I’ve learned in bullet-point format with bullet-point brevity, I aim to provide some additional justification/explanation for each one. Where I can provide an adequate argument for why something is true, I hope to do that. If not, the least I can do is outline why I think it’s plausible.


Three things matter a lot to me

(1) Having intimate relationships with wonderful people, (2) being interesting to myself, and (3) contributing to progress.

Explanation of (1): My friendships are my most prized possessions. To have people with whom you can speak candidly, who will push you in unexpected ways, is invaluable beyond expression. Some of our most basic human powers can only be fully exercised in friendships like these. For this reason, I consider them essential for making a life go right, and count myself fortunate beyond belief to have already experienced several of these relationships in my short life thus far.

Justification for (2): A thought experiment: you must wear a Secret Service-style earpiece for the rest of your life that relays to you the real-time mental activity of another human being. Every thought they have, every idea that flashes through their mind enjoys the same force inside of your own head. What type of person would you hope to be connected to? Beyond wanting them to be kind and generally nice, chances are, you would also want them to be interesting. You would like them to have varied thoughts about varied things and play with ideas you might not have encountered otherwise. Under these circumstances, this freaky mind-reading scenario might actually be enriching, and you wouldn’t mind having another person occupy your head.

This might be even more convincing when you consider the alternative: having the same bland thoughts piped into your mind every day. I can imagine being neutral towards this possibility in the short run. After all, boring thoughts are inescapable. Yet, years of this might erode you until you are just as uninspired as your mental companion. This is, as it were, death by dullness.

If the possibility of mental poverty caused by foreign thoughts is unacceptable, then the same possibility caused by endogenous ones should be equally terrifying. Thankfully, our thoughts are controllable to a large extent. We can choose who we’re “hooked up to.” As a result, we can aspire to have our heads be exciting places to live rather than arenas of tedium and routine.

Explanation of (3): I don’t have an argument here (at least not yet) but this value stems from something intuitively compelling about the idea of progress. The world today is much better than the world prior to the industrial revolution, and that world was still superior to the world during the middle ages. Doing my part to make sure the future is still better than the past just seems like a reasonable thing to do.

Additionally, I can thank Lincoln High School and the entire city of Portland for instilling in me a desire to be normative. I spent my formative years in a community that publicly valued righting historical wrongs and securing our future from existential threats like climate change. I learned that it’s not only possible to shape the future into a just and prosperous society, but that we’re morally obligated to do so.

Wonderful people are rare and cannot be taken for granted. Do everything possible to maintain close ties even though time and circumstances may pull you and them apart.

Explanation: Whether there actually are few wonderful people out there or the conditions under which we interact make it difficult to recognize the wonderful-ness of others is an open question. Yet, it’s clear their company is not guaranteed. I’ve been fortunate enough to meet wonderful people who no longer belong to the same institutions as I do, and I’ve managed to keep in touch with them. Our relationships have been rich and deeply fulfilling, and life would be much harder without them. The benefits of being around wonderful people need not decrease with distance, though ensuring this requires deliberate effort.

Everything worth doing is difficult, but not everything difficult is worth doing.

Justification: This is best illustrated by an example. Curing cancer is incredibly difficult, but the suffering endured in pursuit of this goal is justified by your service to humankind and the pure pleasure of solving a seemingly impossible problem. Attempting to run up Mt. Everest is also hard, but it’s much more difficult to get a convincing answer as to why it’s worth doing. Even if you find a plausible reason (I must prove something to myself, I enjoy setting absurd goals and achieving them, etc.) it cannot have the same gravity that the reason behind curing cancer has.

Interrogate your goals. They may be ambitious and difficult, but this does not mean they are worth your time.

You are the average of the 5 people you spend the most time with, but this doesn’t give you an excuse to be an asshole.

Justification: The first part is almost a cliché, and I take it most people can recognize the immense influence of your immediate social circle on your thoughts and attitudes. Exercising personal aspiration by controlling your company, however, carries a hint of snobbery that’s difficult to dismiss. Pick your friends wisely, but having high standards is also compatible with being kind and open-minded.

Good roommates are incredibly valuable

Justification: This is intuitive, though it’s tough to appreciate just how valuable until you’ve gone from having a poor roommate to a fantastic one.

The majority of your interestingness is determined by how much you read.

Justification: I’ll claim that your level of interestingness is related to the volume and quality of ideas that go through your head. It’s possible to have a lot of interesting thoughts on your own, but we’re all limited by our experience and expertise. The solution is to maximize exposure to ideas, and this comes either through reading or interacting with interesting people. However, the people you can interact with are also limited by their experiences, and all of you are limited by time and place. The fact you can interface with them and speak the same language means you all live in the same era, within roughly the same culture, and mentally developed with respect to the same dominant ideas.

Reading faces these problems to a minimal extent. Translators alleviate the language barrier, and our compulsion to write and record has given us the opportunity to hear the major ideas of every civilization up to the present, provided the relevant texts survive. The volume of potential ideas you can be exposed to expands dramatically with reading. I’ll also claim reading exposes you to higher quality ideas. Poor thinking is less likely to have survived millennia, or be published in collected essays or anthologies. It’s possible to get your fix of ideas via oral exchange, but reading is generally superior.

Live music is wonderful

Explanation: I always forget this until I see a live performance with someone really good. There’s nothing quite like feeling the bass in your chest or getting chills from a vocalist. We all need a little more of this in our lives.

Girardian Terror is real

Explanation: Very roughly, Girardian terror refers to the idea that our desires are mimetic. We want things that we see other people want, and competition between us and others similar to us who desire the same thing leads to anxiety, conformity, and terror. For a lengthier discussion, see Girard’s Wikipedia page, or his IEP entry. To see how Girard’s theories apply in a business context, check out Zero to One.

The intuitive appeal of this idea is easy to see. If everyone in your community (school/friend group) wants to be a doctor/lawyer/engineer, it takes a substantial amount of awareness and willpower to resist finding yourself aspiring to the same careers. Once desires are standardized, then competition between you and your peers for the limited number of med school/law school slots is fierce. So little differentiates you from the others. Every triumph over them represents a step towards distinction. Every failure is a slide backwards into obscurity.

Dan Wang has an excellent post on how American colleges and universities are perfect incubators of Girardian terror. I highly recommend reading it.

Alcohol is overrated

Justification: Who are you more likely to have a good conversation with, a drunk person or a sober person? Is this more likely to happen when you’re drunk, or when you’re sober?

Always have several uncommon/interesting questions on hand.

Justification: Unless it’s the case that you and another person happen to have much in common, meeting someone new can be painful. Bypassing small talk with pointed, interesting questions can be the first step towards making a mundane interaction interesting, or performing conversational triage. Two of my favorites: What’s the worst advice you have ever received? What’s something true but unpopular? Lama Al Rajih has a fantastic list of such questions here.

I want to die in Portland

Justification: circularity.

I do not want to raise children in Los Angeles or the Bay Area.

Justification: I have a clear bias towards my non-Californian upbringing. Yet, I still think both locales fail in several major areas that are, in my opinion, crucial for healthy development.

Los Angeles is devoid of natural beauty. It’s a great place to be if you’re into highways and overpasses, but I have not once looked around and thought to myself “this is a really beautiful place to be.” Malibu and Pacific Palisades suffer from this problem less, but let’s be real. It is unlikely I will live in either location.

For the Bay Area, if even half of what I’ve heard about its high schools is true, the entire educational experience is psychologically damaging for the average student.

Air quality is also a concern for both locations (the Bay Area less so), and if we continue to discover that air pollution is really bad for you, this will become more of a factor.

Both also suffer from housing and transportation woes that may only increase in severity. Being wealthy solves these problems to a certain extent, but I do not want my children to grow up thinking they need to make at least $117,000 to enjoy a decent life.

Human reason can solve any problem

Explanation: This is a big claim that I can’t defend well. To do so requires an incredible understanding of history and philosophy that I do not have.

What brings me to this view is an intuition. Humanity has solved countless seemingly intractable problems, and there is nothing to say we will not continue to do so. A skeptic mentions the problems we haven’t quite solved yet: P=NP, the Goldbach conjecture, the existence of God, etc. My naïve reply is that these problems are really hard, but solvable given enough time. Perhaps we eventually get there on our own. Perhaps we “solve” these problems indirectly by cooking up an AI that can handle them for us. Who knows? I think it’s a lack of imagination that causes people to believe that our inability to solve something now means we will never be able to solve it.

Ask and ye shall receive

Justification: This is all anecdotal, but cold emails work wonders when you’re a student. When I was running the Stumptown Speaker Series in high school, we booked free event space, got advertising, and brought in fantastic speakers like Kim Malek of Salt and Straw, all pretty much by asking. The strategy worked the same in college. While I was in charge of bringing in speakers for Bruin Entrepreneurs, we booked local entrepreneurs and members of Forbes 30 under 30 all via cold email.

The trick is to never copy/paste the same email template. Every message I sent was “handwritten” and included something specific to the receiver. I wanted them to come in, as opposed to someone else, and the emails demonstrated that I had done my homework. This is why to book three speakers I only had to send four emails.

People aren’t comfortable asking strange/intrusive questions but are perfectly fine answering them.

Explanation: I actually learned this from some social science research that came my way, but I can’t find the exact paper at the moment. The takeaway is that you should ask more questions, regardless of whether you think they go a bit too far. Obviously, there’s a boundary, but it’s not where we think.

Become friends with the strange people you meet. They’re much more interesting.

Explanation: Self-explanatory. A corollary is that if there are no strange people around, you are not in for a good time.

Sexy things are almost always overvalued

Justification: Here’s an example. The global professional sports industry took in revenues of $91 billion in 2017. The global cardboard box industry recorded revenues of $500 billion in 2014.

People like to be applauded for what they do. They like to feel their industry is “hot” or “sexy.” Some industries certainly deserve the hype they generate to a certain extent, but many of the products and services that are absolutely integral to our daily lives and current standard of living are very “unsexy” by mainstream standards. Think public infrastructure and the like.

I’m willing to generalize this beyond the economic sphere, too. Sexy restaurants, institutions, or ideas are probably sexy because there’s at least one thing extraordinary about them. Yet, what they offer is probably small in comparison to the price we pay for them.

Take more risks.

Justification: I’ve noticed people tend to regret the risks they didn’t take more than the consequences of a risk that didn’t work out well. Obviously, this only goes for situations where you can afford to lose what you wager. Still, I believe living an interesting life, stumbling upon new ideas, or learning new things, requires more risk than we think.

A just society is opportunity oriented, not outcome oriented.

Justification: Ensuring outcomes is problematic for two reasons. (1) Tough decisions need to be made about what outcomes are acceptable. Settling on a finite list might exclude outcomes some really desire. This privileges some people’s conceptions of the good life over others’, which is not consistent with a commitment to treating everyone equally. (2) Attempting to guarantee outcomes would require meddling with our lives to an unacceptable extent. Not only would this represent a gross intrusion, but it limits us. Part of our self-respect is founded upon making decisions for ourselves and living with them. To understand our social outcomes are fixed as a result of some gigantic scheme would be disheartening, and deny us an opportunity to exercise the agency that is an essential part of being human.

The best we can do is attempt to ensure everyone has the opportunity and resources at their disposal to pursue the good life as they see it. This idea is borrowed from John Rawls, and a lengthier discussion of it can be found on page 94 of A Theory of Justice.


The plan is to learn more this coming year. The claim I think is most likely to turn out false is that I want to die in Portland. The claim I actually hope is false is that “wonderful people are rare.” The claim I’m most convinced is true at this point is “Girardian terror is real.”

Friends and strangers: hold me accountable! If you want clarification, aren’t convinced by what I’ve said, or want to chat, send a message.

Here’s to another year.

-Riley

 

An overview of online dating

Here are several fun bits and pieces from an article that appeared on MR a bit ago. We’ll start with the funny parts.

[Two screenshots from the article]

The authors’ take on the effects of online dating on consumer behavior:

An additional knock-on effect of online dating is that initial potential mate matching is increasingly visual, leading to secular demand growth in cosmetics and photography products, while fragrance sales remain flat because their value is irrelevant in the current market. This is largely facilitated via Instagram and mobile usage, and while it is a less important point to this thesis overview, it is an area for more detailed research and discussion.

On the forces driving the fundamental structural changes in the dating market:

A conservative estimate of the percentage of new relationships begun online in 2019 is at least 65%, but likely over 75%. So, online dating now produces most new relationships. Why? From the perspective of prime reproductive age individuals, cost structures (safety, monetary, time, social frictions, etc.) have shifted, with many dropping to effectively zero. Because costs (physical safety, social stigma) have been disproportionately impactful to women, their elimination has had the effect of flipping the power dynamic in the market to favor women in prime reproductive age, though the dynamic changes with age.

and an (uplifting) implication:

With the advent of online dating, women in prime reproductive age are in the dominant position in the dating market for the first time in human history.

People tend to be rude and nasty on the internet, especially in romantic contexts if there’s some level of anonymity involved. Yet, online dating seems to be a great improvement for those who have been historically disadvantaged by traditional dating methods.