
A Place for Philosophy

In 2004, the city of Memphis celebrated the “50th Anniversary of Rock ‘n’ Roll,” attributing the roots of rock to Elvis Presley’s 1954 recording of “That’s All Right.” Some have rightly taken issue with this; black artists were playing what sounded like rock music for years before Elvis. Perhaps the origin of rock music is better attributed to Jackie Brenston’s “Rocket 88,” recorded in 1951. This seems to be just one instance in music history of white artists capitalizing on the talent of lesser-known black artists. Still today, some argue that white artists are more successful at pushing boundaries simply because they are white.

Just as the achievements of black artists sometimes go unacknowledged, philosophers are often not acknowledged for their past and ongoing contributions to furthering knowledge (scientific or otherwise). Moreover, scientists rely on philosophical thinking in their work but often don’t acknowledge it as such. In what follows, I hope to convince the reader that philosophical thinking has a central place in modern science and academia and that we ought not marginalize philosophy in favor of pure scientism.

Science as we know it today developed out of philosophy. Galileo and Isaac Newton were known as “natural philosophers.” It was not until the modern period that science began to separate from philosophy. Even after this separation, there was plenty of interaction between the two disciplines. The formalization of calculus was occasioned by a philosopher (Gottfried Leibniz), and many philosophers have made contributions to modern mathematics. However, science does not just owe a tip of the hat to philosophy for its origins; many modern scientific theories, such as string theory, rely on reasoning that goes beyond physically observable phenomena–what I think of as philosophical reasoning.

This past week, I attended Denver’s third Nerd Nite. Kayla Knopp, Rachel Miller, and Lane Nesbitt, clinical psychology students at the University of Denver, spoke about the way in which both biology and culture influence our understanding of sexual behavior. Although they didn’t state their conclusions explicitly (more on this below!), I take one of their main conclusions to be that (C) the set of all morally permissible sexual activities is larger than the set of all culturally accepted sexual activities. After what I think was an honest assessment of their talk, I took the argument for (C) to be the following:

P1. In nonhuman animals, there exists a set of sexual activities that our culture would find unacceptable if practiced by humans (e.g., one member of this set is deer threesomes!).
P2. We have evolved to enjoy (some) activities in this set.
P3. Since these activities are part of our natural evolutionary heritage, we ought to explore them (i.e., they are not immoral).
C. The set of all morally permissible sexual activities–i.e., the set of culturally acceptable activities plus the set of “natural-yet-culturally-unacceptable” activities that we ought to explore–is larger than the set of all culturally accepted sexual activities.
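The set-theoretic shape of (C) can be made explicit (the notation here is mine, not the speakers’). Write A for the set of culturally accepted sexual activities and N for the set of natural-yet-culturally-unacceptable activities that, per P3, we ought to explore. Since N is nonempty (by P1 and P2) and disjoint from A by construction, the permissible set A ∪ N is strictly larger:

```latex
|A \cup N| \;=\; |A| + |N| \;>\; |A| \qquad \text{(since } A \cap N = \emptyset \text{ and } N \neq \emptyset\text{)}
```

This makes clear that the whole weight of the argument rests on P3, which is what licenses putting the members of N into the permissible set at all.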

To be clear, I agree with (C); it is the argument for (C) that I take issue with. The problem with this argument–as many philosophy students may pick up on–is that P3 is fallacious: just because an act is natural doesn’t mean that that act is right, moral, or ethical. There are many examples that illustrate this fact. For one, we have evolved to eat meat—i.e., it is natural—but it’s not at all clear that eating meat is ethical; there are very strong arguments for the claim that eating meat is unethical. To consider a more relevant example, sex between adult and juvenile animals occurs naturally among certain types of weasels and primates. Among humans, however, such acts are considered immoral by most (and for good reason, since these relationships often cause significant harm to at least one party, are often non-consensual, etc.).

I notice this fallacy often in scientific discussion, and it is sometimes committed by scientists themselves. Philosophers have warned against this mistake so often that they’ve given it a name: the appeal to nature fallacy. Generally speaking, my (admittedly anecdotal) experience suggests that philosophers are more careful with the form of their reasoning than scientists are. As further evidence for this claim, philosophers are notorious for writing out their arguments and conclusions explicitly and very carefully; they sometimes even number their premises in the way that I did above. Philosophers follow this practice because it removes a lot of potential ambiguity and misinterpretation. Scientists follow this practice less often, but I think that, in many instances, doing so would make their work clearer. After all, if explicitly outlining arguments were a standard scientific practice, then I wouldn’t have had to create my own (possibly misinterpreted!) outline of the sex-talk argument above. The point here is that, insofar as scientists use arguments in their work, they are doing philosophy in some sense, and scientists may have something to learn from philosophers about how to present their arguments more effectively.

This week’s Radiolab provides another example of how philosophical thinking fails to get the credit that it deserves. It’s worth looking at the description of the episode:

The “mind” and “self” were formerly the domain of philosophers and priests. But in this hour of Radiolab, neurologists lead the charge on profound questions like “How does the brain make me?”

The connotation here (if I’m not being overly sensitive to the science-philosophy tension) seems to be that, for centuries, philosophers have spun their wheels with the question of personal identity, but that no serious work has been done until brain scientists caught on to the question. This kind of thinking is problematic for a number of reasons. For one, in an incredibly complicated area such as personal identity, just formulating the right set of questions is difficult. In many areas, philosophers deserve credit for formulating the right questions. Second, as philosopher Alva Noë has argued, neuroscience operates under the philosophical assumption that consciousness and the self are cognitive phenomena that are to be found by studying neural activity in the brain. This assumption is not empirical or “scientific”; it has roots in Cartesian philosophy (and I don’t think it is at all obviously true). Noë’s position, which builds upon work by Martin Heidegger and Maurice Merleau-Ponty, is that we are missing something of fundamental importance if we do not try to understand the self in terms of our embodied existence. Finally, the idea that discoveries in neuroscience (and many other disciplines) are reliable has been called into question by epidemiologist John P. A. Ioannidis. Ioannidis, in his celebrated paper “Why Most Published Research Findings Are False,” argues that, in “hot” fields where there are small studies, small effect sizes, or great financial interest, many findings are likely to be false (arguably, neuroscience falls into each of these categories). While Ioannidis is not a philosopher, his arguments attack the (poor) statistical reasoning of some scientists. Since statistics is really a formalization of inductive reasoning, it can be understood as a sub-discipline of philosophy. Thus, I think Ioannidis’s criticism of scientific findings is largely a philosophical criticism.

There are many other examples of the importance of philosophy in scientific practice: disputes in the foundations of statistics, the interpretation of probability in modern physics, the definition of life in biology, and the understanding of measurements and mathematical models are all inherently philosophical. We ought to acknowledge that philosophy plays such an important role.





Ethical Reasoning

Experimental philosophers and sociologists have become interested in the way in which people attempt to reason about ethical choices. Some of their studies have found that people, and especially Americans, think about ethical choices from a relativistic framework. Such people are called ethical relativists. In short, ethical relativists believe that ethical standards are a matter of personal opinion or taste. So, for example, when one says that it is wrong to cheat or lie, one is simply expressing one’s opinion about cheating or lying in a particular context.

But ethical relativism is a problematic position. For one, if ethical relativism is true, we are in no position to say that an action performed by others is wrong; after all, if ethical standards are a matter of personal opinion, then it is only one’s opinion that some action is wrong. This has far-reaching consequences. It follows that, if we truly are ethical relativists, then we cannot condemn actions that many believe to be obviously wrong—e.g., the senseless and unjustified torture of another person, rape, etc. For, as relativists, our condemnations are only our personal opinions; and the torturer and the rapist also have opinions on whether these actions are wrong. If relativism is true, each opinion is as good as any other.

Fortunately, most people who assert ethical relativism in haste aren’t really ethical relativists. Most have deeply held ethical intuitions that run counter to ethical relativism. For example, most people do believe that senseless and unjustified torture is wrong for reasons that go beyond their own tastes and opinions: most plausibly, the action is wrong because senseless torture unnecessarily violates the preferences of the person being tortured.

Once we discover that ethical relativism is inconsistent with our deeply held ethical intuitions, we are presented with a choice: either we abandon ethical relativism and choose to act in a way that is consistent with the consequences of our deeply held ethical intuitions, or we choose to accept that, at times, we are inconsistent.

The latter option—accepting that we are inconsistent—can have disastrous consequences. To see this, suppose that we believe some contradiction—for example, that ‘it is permissible to torture and it is not permissible to torture’. Then we must believe that ‘it is permissible to torture or the moon is made of green cheese’. Why must we believe this claim? Well, an ‘or’ statement is true if at least one of its disjuncts (i.e., ‘it is permissible to torture’ and ‘the moon is made of green cheese’) is true. If we assert the first disjunct as true, as we did above, then the ‘or’ statement must be true. But, since we also believe that ‘torture is not permissible’, we can (validly!) deduce that ‘the moon is made of green cheese’.
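This derivation is the classical principle of explosion (ex falso quodlibet), and it can even be checked mechanically in a proof assistant such as Lean. The following is a sketch in my own notation, with P standing in for ‘it is permissible to torture’ and Q for ‘the moon is made of green cheese’:

```lean
-- Principle of explosion: from a contradiction (P and not-P), any Q follows.
theorem explosion (P Q : Prop) (hp : P) (hnp : ¬P) : Q := by
  -- Step 1 (or-introduction): since we believe P, we must believe P ∨ Q.
  have hor : P ∨ Q := Or.inl hp
  -- Step 2 (disjunctive syllogism): from P ∨ Q and ¬P, conclude Q.
  cases hor with
  | inl hp' => exact absurd hp' hnp
  | inr hq  => exact hq
```

The two steps mirror the prose above exactly: first the ‘or’ statement is introduced, then the contradictory belief ¬P rules out the first disjunct, leaving Q.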

So, starting from an inconsistency (a contradiction), we are logically forced to believe anything. The consequences of this are disastrous: if we are inconsistent, and thus hold contradictory beliefs, then we are liable to believe anything whatsoever—e.g., that ‘the moon is made of green cheese’ or, worse, that ‘life is not worth living’. This fact is reason enough for us to try hard not to hold contradictory beliefs. Thus, if we have deeply held ethical intuitions about torture, murder, etc., we are better off if we abandon relativism and consider the logical consequences of those intuitions.
