Consider the ideal

Imagine a young person trying to decide what to do with her professional life. Setting aside financial considerations, she might weigh three principles [1].

  • The principle of approbation
    • Do the thing that earns the most social praise.
  • The principle of altruism
    • Do the thing that increases the well-being of humankind.
  • The ideal principle
    • Imagine the world is perfect and your opportunities are endless. No wars, no famine, no disease, and no obstacles preventing you from pursuing whatever you wish. Do the thing you would choose to do in this alternate world.

If our young adult is independent-minded, she will dismiss or heavily discount the principle of approbation. There are two main reasons it might take priority over the others, but neither is convincing. For one, she could think that a plan satisfying the principle of approbation is a good bargain. Following the principle, she restricts herself to professions that win wide social approval (doctor, lawyer, engineer); in exchange, she gets a lifetime status boost.

The discussion could end here. Different people require different levels of praise to feel content and secure. If our young adult is the type of person who needs affirmation, yielding to the principle of approbation might be a good decision. Yet we will continue with our assumption that she is an independent thinker. Her sense of self-worth is not heavily tied to approval, so she doesn’t find the trade compelling: it diminishes her freedom to choose, and, in return, she receives a good she has no dire need for.

We can also think of the principle of approbation derivatively. Our young adult might actually want to help others, so she gives precedence to the principle of altruism. However, she realizes she is in a poor position to determine how best to go about it. She might reason that others know a lot more about helping people than she does, so listening to popular opinion might be productive. Social approval could serve as a proxy for how effective her altruism is. After all, doctors help people, and people really love doctors. Praise for her professional choices would then be a sign she’s on the right path.

Tempting as this line of thought is, she does not buy it. She acknowledges the limits of her own ability to find the best way to help others, but she has reason to believe the crowd can’t do much better. Doctors have always been popular, but their treatments weren’t always effective. Obviously, doctors now have a much better understanding of the human body, but the point is that praise and occupational prestige may not correlate well with how effective one is at aiding others.

The principle of approbation has been dismissed. Choosing between the remaining principles might look easy. Why not choose the principle of altruism and leave the world better than you found it? It’s inexcusable, the thought goes, to yield to self-indulgence when you can heal the sick and feed the hungry. Those in favor of altruism will say we face moral obligations in our professional lives. Not only must we “do no harm,” as Hippocrates might say, but we must also alleviate the suffering around us. By being of sound mind, able-bodied, and of comfortable means, we are automatically tasked with using our individual talents to help others flourish. From this perspective, the ideal principle is irrelevant. As long as people suffer, we must rush to their aid. There is no need to even think about how you would live in a perfect world, as the world is, in fact, deeply flawed.

Despite this, I think the ideal principle is crucial. We should all have an answer to “what would you do all day in a perfect world?” To be clear, I also believe the principle of altruism should weigh heavily in informing our professional lives, but neglecting where our own preferences and inclinations might lead us in an ideal scenario is a mistake. We suppress our individuality when the question goes unasked. In the same way someone who adopts the principle of approbation is subject to external forces when deciding where to put her professional energies, the principle of altruism ties our professional lives to processes beyond our control.

Note that I am well aware that what we can and cannot do is generally determined by things beyond our control. It is also true that the claims the suffering have on us are more legitimate than those of popular opinion. Together, these two facts make a lack of control normal, even a consequence of being moral. My only point is that the principle of altruism subjects you to additional, exogenous constraints that diminish autonomy [2].

Why should we care about autonomy? The answer is that the people we really are, the alternate selves that are the most “us,” are the people we would be in a state of absolute freedom. Imagine that tomorrow you received the ability to do whatever you want. All restrictions — financial, emotional, cultural, and otherwise — are lifted. The world is perfect (it doesn’t need your help) so you can, in good conscience, choose to do what you wish. This is absolute freedom [3].

In the real world, some decisions are made for us. An authority figure could make you choose from a small selection of alternatives, or natural circumstances might do the same. The problem is we can’t infer much about an individual based on these choices. Can we really say someone is kind if they are forced to be? Is it appropriate to call someone cruel if they had to choose between three cruel alternatives? Our intuition says no [4]. It’s easier to call someone brave and courageous if they could have been otherwise. The choices we make under favorable circumstances reflect who we really are, and the ones we might make in a state of absolute freedom do so the most [5].

Dismissing the ideal principle without even considering it is a failure of self-knowledge. As long as we don’t have an answer to “what would you do all day in a perfect world,” we fall short of understanding ourselves. We might grasp how the world influences us, whether through approbation or altruism, but we are distinct from it and deserve independent consideration. Thinking about how we would behave in absolute freedom illuminates who we are, without the external demands and obligations layered upon us.

Lacking self-knowledge poses a deep difficulty. How can we shape our lives when it’s unclear who lives them? To me, it’s like building a house without knowing the inhabitant or writing a love letter to an unknown recipient. Even if we happen to build or write for the most extraordinary people, what we create will be uninspired. We can only address their most general features (e.g. “put a kitchen in the house” or “you have a beautiful smile”) without touching anything specific to them. Building something wonderful requires knowing who it will serve. In a certain (obvious?) way, a life first and foremost serves the person who lives it.

Thinking about ourselves is natural, but it’s unnatural to embrace it. Yet this does not mean the ideal principle should be the only criterion that guides our lives. It is a thought experiment, necessary to consider and grapple with, but not something that should (or even can) dictate our entire lives. For instance, some of our ideal lives might be out of reach (playing center for the LA Lakers) while others could be mundane and a little abhorrent (watching TV all day?). In either case, we learn something about ourselves by considering the ideal principle. It could prompt conviction (“I really like sports. Perhaps I should work in the Lakers front office”) or reassessment (“Am I really that type of person? Something needs to change”).

This does not settle the question of how a young person should decide where to put her professional energies. I believe the ideal principle is important, but primarily as an exercise in hypothetical reasoning whose results inform a broader decision. Exactly how much weight the ideal principle deserves in guiding real professional conduct is up for debate. At the very least, I think two things are clear. First, the principle of approbation should be ignored in the vast majority of cases. Second, we all need an answer to the question posed by the ideal principle. After all, it might be who you are.


[1] This is far from an exhaustive list of principles we could use to make professional decisions. At varying levels of specificity, potential principles include: the excellence principle (“do the thing you’re best at”), the aesthetic principle (“do the thing that brings the most beauty into the world”), the comfort principle (“do the thing that gives you the highest standard of living”), a smattering of ideological principles (“do the thing that furthers communism/socialism/capitalism/anarchism etc…”), the contrarian principle (“do the thing people will dislike”), the hedonistic principle (“do the thing that gives you the most sensual pleasure”), and variants of a religious principle (“do the thing God/Allah/Yahweh wants you to do”).

There are a lot of principles people might deem relevant. The three I chose are among the most general and, I’ve observed, often crop up in actual conversations young people have about their futures (barring financial considerations).

[2] But if you choose the constraints, are they even constraints at all? I’m the one who brought them about and limited my own behavior, so aren’t they best understood as artifacts of my freedom, rather than obstacles to it?

[3] Absolute freedom is a state without hardship. If you’re concerned about the effects of hardship and struggle on personal identity, remember that we enter absolute freedom tomorrow. All of our past experiences, the things that have made us “us,” are preserved. In fact, I don’t think a person born and raised in absolute freedom would be much of a person at all (or at least not a very good one). Considering absolute freedom is meant to help us understand how we would behave if circumstances changed dramatically, not who we would be if our pasts were different.

[4] It is true we can observe how people react to external circumstances and make inferences that way, though. It’s the mark of a strong and magnanimous soul not to harbor spite when wronged, for instance. I’m on the fence as to whether we can really understand who a person truly “is” solely by looking at these reactions. Our ability to endure discomfort is only a part of who we are (though a very important one), and it has reasonable limits. I’m not sure what we can infer about character from someone irascible who suffers from chronic pain.

[5] There’s a “moral luck” problem here. If we think of absolute freedom as a kind of utopia, then there’s no opportunity for courageous or kind actions, and perhaps, no possibility for courageous or kind people. A solution would be to think of absolute freedom as exactly like the regular world sans obligations. War and famine still exist but somehow our obligations to stop them disappear. Now it’s possible to be compassionate in a state of freedom by deciding to be a peacemaker or go on humanitarian missions, for example.

It gets tricky when we realize what motivates this person must be completely endogenous. Their rationale for their actions won’t appeal to the principle of altruism, because that is an external obligation that cannot exist in a state of absolute freedom. Yet, if we want to call their actions kind, they must accord with some desire for the well-being of others. It looks like we’re forced to posit an “internal” principle of altruism that operates independently of the one that creates external obligations. The strength of this internal force is the measure of how kind and compassionate someone is. Yet, how is this different from the external principle? Is the internal one best understood as a heightened sensitivity to the external principle of altruism?

The other solution is to bite the bullet and say no kind actions are possible in a state of absolute freedom, as these must be motivated by the (external) principle of altruism, which doesn’t exist there. This, taken in conjunction with my claims in the main body, produces strange implications. Does this mean the person who is most “you” is entirely selfish? Does altruism diminish our individuality? Broader questions about utopias are also at stake. Is being an altruist in a utopia redundant? If everyone is doing fantastic, why would you care about someone else’s well-being? Your caring won’t make them do any better (the world is perfect!).


Misidentification of Cliché

Note: this paper was originally submitted for Philosophy 23: Meaning and Communication taught by Sam Cumming and TA’d by Esther Nikbin. Because I’m still not over cliché and think there’s a lot more to it than what I talk about in this paper, I’m currently writing another paper that goes more into cliché’s “thought stopping” capabilities and how this can lead to moral danger, among other things.

My interest in cliché started in middle school, when I thought Shakespeare was a boring author because his plots lacked originality. Of course, murderous Macbeth has to be crushed under his own hubris, and Romeo and Juliet are destined to be together only in the afterlife. In the media age, these clichés had made their way from the children’s books I read to the cartoons on TV, and I felt like a sucker for being told an old, old man in England could get away with using them and even be called the greatest for doing it. I might have been crowned the best literary critic ever to have lived in the 7th grade had I not been criticizing the very origin of the storylines I had grown to recognize, and resent; still, I’ll argue I have hundreds of years of Shakespeare imitators to blame for my misperception.

Still, the question of what is cliché and what is not remains. How is cliché different from idiom? Are all resonant expressions or ideas destined to become cliché? The goal of this investigation is to uncover the bounds that limit the application of the word “cliché” and differentiate it from the other trite expressions that fill our language. To carry out this investigation, I consulted native English speakers, dictionaries, academic papers, and a book on brainwashing.

“Cliché” is not a word like “aunt” or “billionaire” that denotes a clear relationship or quantity of ownership. As a result, we will have to rely on less exact methods to pin down its meaning. Judging from a survey I administered to (relatively few) native English speakers over the internet, a typical cliché begins to take shape. When respondents were asked to define “cliché” as they use it, several parameters stood out. First, an overwhelming majority of answers invoked ideas of overuse through phrases such as “too much,” “too often,” and “too many people.” To qualify as a cliché to native speakers, the phrase in question must be notoriously ubiquitous in common language or in certain contexts. Examples given include phrases like “there’s plenty of fish in the sea” in conversations about relationships, or captioning your old vacation photos on Instagram “take me back.” Second, speakers emphasized how clichés betray a sense of unoriginality in their users. According to one, these phrases become “cop-outs” used in the English classroom to appear thoughtful without having thought. The phrases themselves are undeniably true to a fault, and consequently their use says more about the speaker’s lack of knowledge than about any real understanding.

Dictionaries tend to corroborate these two main aspects of cliché gleaned from the intuitions of native English speakers: overuse and lack of real substance. Definitions range from “a phrase or opinion that’s overused and betrays lack of original thought” to “a trite phrase or expression; also: the idea expressed by it,” but they always invoke the two main pillars of cliché (Oxford English Dictionary; Merriam-Webster). A less authoritative source has also voiced her own opinion on cliché, defining it as “a metaphor characterized by overuse,” and even supplied her own cliché test: if you can begin a sentence, stop halfway, and then know its conclusion, then it is a cliché (Morgan). However, either by accident or design, this test includes swathes of phrases that the general population would normally regard as idiom and not cliché. An idiom, such as “read between the lines,” is defined as a non-compositional phrase, and although common, the phrase does not seem especially trite or meaningless, and thus wouldn’t be called cliché. While this test is inaccurate, it does begin to expose how speakers notice or ignore the frequency of certain words.

This image of cliché in the minds of speakers seems to be at odds with how we actually deploy the word. For example, the phrase “a penny saved is a penny earned” is undeniably a cliché, but you are far less likely to encounter it now, in the 21st century, than in the 18th, when it was coined and presumably more popular in common conversation. Indeed, some common phrases such as the aforementioned “read between the lines” and “bearing in mind” are not considered trite in the slightest, yet appear more frequently than established clichés (Dillon).

[Figure: frequency of the phrase “a penny saved is a,” 1800 to 2000 (Google Books Ngram Viewer)]

[Figure: frequency of the phrase “bearing in mind,” 1800 to 2000 (Google Books Ngram Viewer)]

The data supports this. According to Google’s analysis of books, the popularity of the phrase “a penny saved is a penny earned” has been declining since the 1820s, while the phrase “bearing in mind” reached an all-time high in usage in the 1970s, as seen above. This is evidence against the idea that clichés are overused phrases or ideas, and for a working definition that focuses on how salient the unoriginality, or triteness, of the phrase is to the hearer, regardless of how many times they have encountered it before.
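For anyone who wants to reproduce or extend this comparison, the viewer’s numbers can be pulled programmatically. Below is a minimal sketch in Python, assuming the undocumented JSON endpoint that backs the Ngram Viewer (https://books.google.com/ngrams/json, which takes the same URL parameters as the graph page cited in the bibliography); the endpoint is unofficial and may change, so the helper ngram_series is illustrative rather than a stable API.

    # A minimal sketch, assuming the undocumented JSON endpoint behind the
    # Google Books Ngram Viewer; it mirrors the viewer's URL parameters
    # (content, year_start, year_end, corpus, smoothing) but is unofficial.
    import requests

    def ngram_series(phrase, year_start=1800, year_end=2000, corpus=15, smoothing=3):
        """Fetch the smoothed yearly frequency series for a phrase."""
        resp = requests.get(
            "https://books.google.com/ngrams/json",
            params={
                "content": phrase,
                "year_start": year_start,
                "year_end": year_end,
                "corpus": corpus,  # 15 = English (2012), as in the bibliography URL
                "smoothing": smoothing,
            },
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()  # a list of {"ngram": ..., "timeseries": [...]} entries
        if not data:
            return [], []
        series = data[0]["timeseries"]
        years = list(range(year_start, year_start + len(series)))
        return years, series

    # The viewer handles at most 5-grams, hence the truncated proverb,
    # matching the figures above.
    for phrase in ("a penny saved is a", "bearing in mind"):
        years, freqs = ngram_series(phrase)
        if freqs:
            peak = years[freqs.index(max(freqs))]
            print(f"{phrase!r} peaks around {peak}")

Running this on the two phrases should echo the trends described above, though the exact figures depend on which corpus version Google serves at the time.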

This salience is the defining characteristic of cliché. Consider one of my earlier respondents, who said clichés are “cop-outs,” and the OED, which claims clichés “betray lack of original thought.” The psychiatrist Robert Jay Lifton has researched cliché as a tool of the thought reform carried out by totalitarian regimes, and has supplied his own definition: cliché is when “the most far-reaching and complex of human problems are compressed into brief, highly reductive, definitive-sounding phrases easily memorized and easily expressed” (Lifton). To him, what makes something cliché is not repetition but how thoroughly it reduces huge problems or ideas into simple phrases. This definition gets closer to the essence of cliché: a simple phrase that addresses a large, common problem is useful, and it will probably be repeated as often as its holder encounters the problem.

Given what we know about how we deploy the word and what dictionaries, institutions, and fellow speakers supply to us, the meaning of cliché is slightly different from what we believe it to be. Cliché as we use it leans more toward statements that conspicuously display a lack of thought than toward ones that are often repeated. The two share a large intersection but don’t necessarily encompass the same phrases or ideas. This is why I (wrongly) thought Shakespeare cliché but pay no notice to the banality of phrases such as “bearing in mind.” Perhaps we’re in denial: there is much more cliché in our language than we care to admit, and so we reserve the term for the most egregious offenses against originality.


Bibliography

Dictionary.com. “Cliché.” n.d. Web. February 2018. <http://www.dictionary.com/browse/cliche>.

Dillon, George L. “Corpus, creativity, cliché: Where statistics meet aesthetics.” Journal of Literary Semantics 35.2 (2006): 97-103.

Google Books. Google Books Ngram Viewer. n.d. Web. 12 February 2018. <https://books.google.com/ngrams/graph?content=as+a+matter+of+fact&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cas%20a%20matter%20of%20fact%3B%2Cc0>.

Lifton, Robert Jay. Thought Reform and the Psychology of Totalism: A Study of “Brainwashing” in China. UNC Press Books, n.d.

Merriam-Webster. “Cliché.” n.d. Web. February 2018. <https://www.merriam-webster.com/dictionary/cliche>.

Morgan. “What is a Cliché?” n.d. Web. 12 February 2018. <https://westegg.com/cliche/definition.html>.

Oxford English Dictionary. “Cliché.” n.d. Web.