Blog Archive


Come Reason's Apologetics Notes blog will highlight various news stories or current events and seek to explore them from a thoughtful Christian perspective. Less formal and shorter than the Web site articles, we hope to give readers points to reflect on concerning topics of the day.

Showing posts with label philosophy. Show all posts

Friday, December 15, 2017

Why a Scientific Consensus isn't What it's Cracked Up to Be

A couple of years ago, the Internet blew up over a huge debate—one that captured the attention of popular culture and caused fierce disagreements between friends and family members. I am, of course, talking about the infamous "What color is the dress?" meme portrayed in the accompanying image. One can perceive the dress colors to be either blue and black or white and gold, and it seems for most people once you see the colors a certain way, you simply can't see them from the other perspective.

Now, imagine you want to buy a gift for your mother's birthday, and your father sent you that same picture with the recommendation that since he's buying her a dress, you should purchase the accessories. Would your purchases make sense? We don't know. It all depends on what you see and whether your perception matches reality. Even if the one buying the accessories had the most exquisite fashion sense and was gifted in picking out the most tasteful and appropriate accoutrements, it matters what his or her perception of the dress colors was.

Scientific Consensus is Founded on Paradigms

I offer the thought experiment because it helps us better understand how paradigms influence people. We all make choices based on a specific way of seeing things, and this is as true in the sciences as it is in any other discipline. In fact, the terms "paradigm" and "paradigm shift" were coined by Thomas Kuhn in his earthshaking book The Structure of Scientific Revolutions. There Kuhn demonstrates that scientific knowledge hasn't been acquired in a slow, steady, progressive line. That's a myth.

Kuhn states that what really happens is young scientists accept certain assumptions about how the world works because they've been taught that from those already in the field. He writes that the student studying in whatever scientific discipline he will eventually practice,
joins men who learned the bases of their field from the same concrete models, his subsequent practice will seldom evoke overt disagreement over fundamentals. Men whose research is based on shared paradigms are committed to the same rules and standards for scientific practice. That commitment and the apparent consensus it produces are prerequisites for normal science, i.e., for the genesis and continuation of a particular research tradition.1 (emphasis added)
What this means is that scientists within a particular field of study all start with some basic assumptions and then they rely upon those assumptions to solve problems on details within that model. So, if one were to start with the paradigm that the dress is white and gold, then the answer to the problem of what kind of accessories would complement the dress will come out differently than if one were to hold the paradigm that the dress is blue and black.

The Consensus Can be Influenced by Outside Factors

If you are basing your accessory choices on the paradigm of a white and gold dress, and you find that the majority of those you learn from and those you work with have also accepted this paradigm, you no longer ask about the color of the dress or whether white is a better color for a handbag than black. When someone comes into your fold and suggests black for a handbag, your reaction would be one of incredulity. Certainly any fool can see that black is the wrong color choice! You might even make fun of them and dismiss them as not doing good science. But what they've questioned is the paradigm you have assumed, not the reasoning about the color if the paradigm were true.

Here's the thing, though. These paradigms themselves are frequently shaped by factors beyond dispassionate science. Kuhn himself discovered this when investigating the Ptolemaic and Copernican models of the solar system. Ptolemy's paradigm was first formed by Aristotle, who held to a couple of very Greek ideas, one of which was that some bodies are naturally inclined to move in a circular pattern. In other words, planets by their nature would move circularly because that's what they do. Aristotle's assumption set the paradigm that worked for many centuries and allowed the scientists of those days to come up with accurate predictions.

It's much like another image that takes on conflicting perceptions. Look at the drawing of the animal I have here. Is this a drawing of a rabbit or a duck? Normally, you will perceive one or the other first. Interestingly, outside factors make a difference in what you see. The Independent reports, "At different times during the year, the results of the test seem to change. During the Easter period, people are more likely to see a rabbit first but in October, seeing the duck first is more common."2

Aristotle's assumption about bodies naturally moving in a circular pattern was based on Greek philosophy. Thus it was a philosophical commitment that shaped the science of planetary orbits and our understanding of the solar system for centuries. It was only when instruments became more sophisticated that flaws could be seen in the model. These flaws grew to the point of crisis until those within the community had to abandon their paradigm and adopt a new one. This is what Kuhn labels a paradigm shift.

The Consensus Can Be Wrong

Before a paradigm shift occurs, there is a scientific consensus about whatever point one is discussing. But even though a consensus exists, that doesn't mean those who oppose the consensus are wrong. They may in fact be right, but they are simply offering a different paradigm.

When you read about the contentious scientific issues of our day, like the origin of life, man-caused climate change, and neo-Darwinian evolution, it won't be long before someone claims that since a scientific consensus exists on topic X, anyone who holds a contrary view is anti-science. That's simply wrong. It may be that those who hold the contrary position see the flaws and wish to question the paradigm itself. The bigger question thinking people need to ask is "what are the assumptions implicit in this position, and have they been tested?" The question of the color of the dress can be answered if one enlarges the frame to see more of the picture. Doing this isn't anti-science but what Kuhn calls extraordinary science.

So let's not point to the idea of a scientific consensus as the final card in a debate. The consensus may be the very thing that needs to be questioned.


1. Thomas Samuel Kuhn, The Structure of Scientific Revolutions, 2nd ed. University of Chicago Press, 1970. 11.
2. Chloe Farand. "Duck or Rabbit? The 100-Year-Old Optical Illusion That Could Tell You How Creative You Are." The Independent, Independent Digital News and Media, 14 Feb. 2016.

Tuesday, May 16, 2017

Book Review: Dictionary of Christianity and Science

It should be no secret that science plays an inordinately large role in modern culture. As I've noted before, scientific advancements have allowed human beings to banish diseases that were once fatal, create new materials in the lab that outrival nature, and generally control and command their world in ways that had heretofore been thought impossible. In short, the last 150 years of scientific discovery have changed everything about how humans live and interact with their world.

Because of these great successes, societal attitudes toward science have become distorted. People place science on a pedestal, believing that if a claim is scientific, it will be unbiased and more reliable than other forms of knowledge. Science and faith are seen as foes and atheists will challenge Christians, claiming scientific facts are incrementally undermining Christian beliefs.

In reality, the war between Christianity and science is a myth, and the recently released Dictionary of Christianity and Science goes a long way toward dispelling that myth as the fraud it is. General editors Paul Copan, Tremper Longman III, Christopher L. Reese, and Michael G. Strauss have assembled a strong collection of writings covering a wide range of topics in what would more properly be understood as an encyclopedic volume than a dictionary. With over 140 top scholars writing on over 450 topics, the Dictionary serves as an excellent starting point for researching the issues most Christians will face when studying or discussing science and faith.

Given the breadth of the subject matter, the articles could have all been relegated to short introductory overviews with a list of additional resources at the end of each entry. But the editors wisely chose to include three different types of articles in the Dictionary. For the less controversial and more agreed-upon topics (such as key historical figures in science or specific terms like empiricism), an introductory article is all that's warranted. But for other entries they chose to include longer articles labeled essays that give more background, competing views, and the evidence each view relies upon. The entry on "The Genesis Flood and Geology" is an example of one such essay.

Finally, there are the multiple-view discussions, where different scholars who take up contrary positions are each allowed an extended article within the same entry. For example, if one were to look up the state of creationism, the user would be greeted with an introductory article on the concept of creation, an article entitled "Creation, Intelligent Design and the Courts," and four essays on creationism: one critical and one supportive of old-earth creationism, and one critical and one supportive of young-earth creationism.

I'm really impressed with the level of scholarship and the wide range of topics compiled in the volume. The editors included key figures like Thomas Kuhn and philosophical concepts like inference to the best explanation that are not well known outside the philosophy of science. Further, articles on people like Galileo Galilei seek to separate legend from fact in the tales of his scientific advancement and show why it would be incorrect to see his conflict with the College of Cardinals as a case of science versus religion.

There are a few drawbacks to the book. First, there is no table of contents or topical index. I suspect that is because it is marketed as a dictionary and as such will have its entries placed in alphabetical order. However, if someone looks up the aforementioned creation entry, he would be missing several other articles that focus on the topic, with multiple-view entries on the flood and on the Genesis account in the F and G areas respectively. One would then have to turn to the I section in order to read the Intelligent Design entry. And if someone doesn't know who Thomas Kuhn is and why his work is so important, it may be easy to miss this entry.

Secondly, while it cannot be avoided, the book is a product of this particular time. The articles with the most information are those that are the most debated right now. In ten years, this volume will suffer from its age as some debates change, others are settled, and new discoveries make several of the entries obsolete. I would hope an accompanying online site would be able to provide some kind of resource direction until the inevitable updated volume is released. But these are just quibbles in an otherwise excellent product.

I think every Christian family should have a copy of the Dictionary of Christianity and Science. Anyone who has sought to understand controversial issues of science and faith by searching on Google or looking up the topic on Wikipedia knows that getting solid information from top scholars is challenging, to say the least. I've noted myself that any old fool with a modem and an opinion can post online or edit a Wikipedia entry. The Dictionary of Christianity and Science gives Christians a strong place to start in understanding how their faith does not contradict modern scientific advancement, as well as a deeper understanding of what science actually is and where the state of the debates lies.

Tuesday, April 11, 2017

Unhinging the Extraordinary Claims Require Extraordinary Evidence Mantra

As it is Easter season, skeptic Michael Shermer has an article appearing in Scientific American entitled "What Would It Take to Prove the Resurrection?" Shermer writes that as a skeptic, there are propositions he can accept as true, such as the number of pages in a magazine, the extinction of the dinosaurs, and the origin of the universe by a big bang. Unsurprisingly, however, Shermer can think of nothing that would count as enough evidence for the resurrection for that particular proposition to be considered true. He claims this is due to the "principle of proportionality," something that "demands extraordinary evidence for extraordinary claims. Of the approximately 100 billion people who have lived before us, all have died and none have returned, so the claim that one (or more) of them rose from the dead is about as extraordinary as one will ever find."1

So, Shermer has fallen back on the old canard that extraordinary claims require extraordinary evidence. But what does he mean by "extraordinary evidence"? The phrase sounds good, but it is truly fuzzy when one thinks about it. As I've stated before, evidence is either strong or weak; categories like extraordinary don't really fit here. But it isn't as though we have no evidence. Shermer himself brings up the eyewitness testimony, quickly dismissing the witnesses as possibly being superstitious or seeing "what they wanted to see." But what evidence has Shermer offered for those motivations? He's offered nothing except the claim, "The principle of proportionality also means we should prefer the more probable explanation over less probable ones, which these alternatives surely are."2

Extraordinary claims don't only deal with miracles

One problem with Shermer's use of the "extraordinary claims require extraordinary evidence" trope is that he is inconsistent in applying it himself. Remember, I said that Shermer holds that the universe had a beginning. But ask him who was ultimately responsible for that beginning, and Shermer dismisses the idea of God out of hand. In a previous article, he wrote, "For millennia humans simply said, 'God did it': a creator existed before the universe and brought it into existence out of nothing. But this just begs the question of what created God—and if God does not need a creator, logic dictates that neither does the universe."3

Here Shermer makes an obvious category error, one that has been brought to his attention several times in debates with Christians. Yet, he persists in believing the universe (or possibly some kind of universe-generating machine) has come into existence from nothing. But isn't this an equally extraordinary claim? If his statement "Of the approximately 100 billion people who have lived before us, all have died and none have returned, so the claim that one (or more) of them rose from the dead is about as extraordinary as one will ever find" is the criteria for an extraordinary claim, then the universe beginning from nothing is surely even more extraordinary. In all of human history, there has never even once been anyone who has observed something coming into existence from nothing at all. Not once. Even quantum fluctuation/quantum foam is not nothing, for it has specific attributes and potentials. None of those 100 billion people Shermer points to will bolster his claim for an uncaused universe. Yet, he isn't skeptical about that proposition. In fact, he prefers it.

If the principle of proportionality were to be applied consistently, Shermer would have to admit that the evidence for a personal cause for the origin of the universe is much more probable than an uncaused universe popping into existence out of nothing. Is Shermer guilty of what he claims about the eyewitnesses of the resurrected Jesus? Is he only seeing what he wants to see or perhaps superstitious or credulous? I don't think he would admit to any of these. But if Shermer's principle of proportionality fails here, then perhaps it isn't the last word on how to discern the truth for events like the resurrection, either.


1. Shermer, Michael. "What Would It Take to Prove the Resurrection?" Scientific American. Scientific American, 08 Mar. 2017. Web. 11 Apr. 2017.
2. Shermer, 2017.
3. Shermer, Michael. "Much Ado about Nothing." Michael Shermer. Michael Shermer, May 2012. Web. 11 Apr. 2017.

Monday, August 15, 2016

What if Morality was Based on Empiricism instead of Christianity?

The Western world is what it is because of the enormous influence of Christianity. Without a Christian understanding of human beings as those who bear the image of God, our society would be a far different place.

However, atheists have been pretty vocal in their contention that a society based on empirical morality rather than Christian values would be better for humanity. Neil deGrasse Tyson has recently advocated for such a society with his virtual country "Rationalia." Tyson's proposal is problematic on many grounds, but he isn't the only one advocating for such a world.

New Atheist Sam Harris doesn't believe a Christian worldview is necessary to ground moral principles, either. In his book The Moral Landscape, Harris tries to argue for a secularly based moral framework. He believes that values and morality "translate into facts that can be scientifically understood: regarding negative social emotions, retributive impulses, the effects of specific laws and social institutions on human relationships, the neurophysiology of happiness and suffering, etc. The most important of these facts are bound to transcend culture—just as facts about physical and mental health do."1

Viewing People through Empirical Lenses

Is Harris right? What would happen if a thoughtful, advanced culture viewed individuals through only an empirical framework? Physical and mental health states, as Harris mentions above, would feed into the value society places upon those individuals. This isn't speculation; we have a couple of good examples to show how this happens.

Along with Christianity, ancient Greek thought has significantly shaped Western culture. At its zenith, Greece was one of the most advanced civilizations the world had ever seen, and its philosophers continue to impact how we understand our world. Aristotle sought to scientifically categorize the various relationships between people in his Politics. There, he begins:

As in other departments of science, so in politics, the compound should always be resolved into the simple elements or least parts of the whole. We must therefore look at the elements of which the state is composed, in order that we may see in what the different kinds of rule differ from one another, and whether any scientific result can be attained about each one of them.2

Aristotle then goes on to systematically build his case. There are different kinds of communities to which we all belong: households/families, villages, city-states. He also notes there are two kinds of relationships necessary for the human species to survive: the male-female relationship, which is necessary for the propagation of the species, and the ruler-servant relationship. Of the second, Aristotle's observations lead him to conclude that some people are naturally predisposed to be slaves of other, more capable men:
But is there any one thus intended by nature to be a slave, and for whom such a condition is expedient and right, or rather is not all slavery a violation of nature?

There is no difficulty in answering this question, on grounds both of reason and of fact. For that some should rule and others be ruled is a thing not only necessary, but expedient; from the hour of their birth, some are marked out for subjection, others for rule.3
When reading Aristotle's reasoning, one can see how systematically it moves from empirical observation through reason to its conclusions. Certain people are not smart, or not capable of leadership, or they don't measure up in any one of a myriad of ways. To Aristotle, it makes sense that those individuals are naturally predisposed to be the servants of others—the Gammas and Deltas of Huxley's Brave New World.

Darwinian Theory Leads down a Similar Road

But many people would dismiss this example as an argument against a "scientific approach" to morality simply because it's old. They may be tempted to say something like, "We've learned so much in 2,500 years; no one would come to such conclusions today." Yet the modern eugenics movement, based on Darwinian evolutionary theory, took the United States by storm, classifying certain people as less worthy to reproduce. This even led to a Supreme Court case in which the Court upheld the forced sterilization of Carrie Buck. Justice Oliver Wendell Holmes, Jr. famously ordered Buck's sterilization, concluding:
It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind. The principle that sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes. Jacobson v. Massachusetts, 197 U.S. 11. Three generations of imbeciles are enough.4
Adding to this, just two years ago famous atheist Richard Dawkins held that for a pregnant woman who has discovered her unborn baby has Down's Syndrome, morality means killing the child:
For what it's worth, my own choice would be to abort the Down fetus and, assuming you want a baby at all, try again. Given a free choice of having an early abortion or deliberately bringing a Down child into the world, I think the moral and sensible choice would be to abort. And, indeed, that is what the great majority of women, in America and especially in Europe, actually do. I personally would go further and say that, if your morality is based, as mine is, on a desire to increase the sum of happiness and reduce suffering, the decision to deliberately give birth to a Down baby, when you have the choice to abort it early in the pregnancy, might actually be immoral from the point of view of the child's own welfare.5
Each of these positions begins with a natural or empirical understanding of human beings. They measure people based on their output. But Christianity holds there is more to a person than his or her observable advantages, for each one bears the image of God, which gives each person transcendent value. What other rational basis can one offer for holding that all people, even those with mental disabilities, have inherent worth? There is no empirical measurement that makes us otherwise equal, and at that point Aristotle and Dawkins may well be right.

What would a society without Christianity look like? It looks pretty scary indeed.


1. Harris, Sam. The Moral Landscape: How Science Can Determine Human Values. New York: Free, 2010. Print. 1-2.
2. Aristotle. "Politics." The Basic Works of Aristotle. Ed. Richard McKeon. New York: Random House, 2001. Print. 1127.
3. Aristotle, Pol. 1132.
4. Russell, Thomas D. "BUCK v. BELL, Superintendent of State Colony Epileptics and Feeble Minded, 274 U.S. 200 (1927)." American Legal History – Russell. 18 November 2009. Web. June 24, 2013.
5. Dawkins, Richard. "Abortion & Down Syndrome: An Apology for Letting Slip the Dogs of Twitterwar." Richard Dawkins Foundation for Reason and Science. Richard Dawkins Foundation, 21 Aug. 2014. Web. 15 Aug. 2016.

Wednesday, August 10, 2016

Neil deGrasse Tyson Violates Rationalia's One Principle

I find it fascinating how blinded people can be to their own biases. One recent case in point is astrophysicist Neil deGrasse Tyson and his imaginary country of Rationalia. Originally spawned by a single tweet, Tyson asserted, "Earth needs a virtual country: #Rationalia, with a one-line Constitution: All policy shall be based on the weight of evidence."

It's pretty easy to see the glaring holes in such a proposition, and several commentators were quick to point out a few of them. Practices such as eugenics, abortions for disability or population control, legislating against an unnatural ice age, and other disastrous consequences would easily have followed if Tyson's dream had been a reality in prior decades. Several commentators for organizations like The Federalist, U.S. News and World Report, and New Scientist pointed out the foolishness in his original tweet.

However, Tyson doubled down on his proposition in a recent Facebook post. Linking to those articles before casually dismissing them out of hand, Tyson upped the ante, maintaining that Rationalia would not only solve deep political divisions but would usher in a new era of prosperity for humanity:
Unlike what typically occurs between adversarial politicians, in scientific discourse, when we disagree with one another there's an underlying premise: Either I'm wrong and you're right; you're wrong and I'm right; or we're both wrong. The unfolding argument actually shapes the quest for more and better data to resolve the differences so that we can agree and move on to the next unresolved problem.

In Rationalia, the Constitution stipulates that a body of convincing evidence needs to exist in support of an idea before any Policy can be established based on it. In such a country, data gathering, careful observations, and experimentation would be happening all the time, influencing practically every aspect of our modern lives. As a result, Rationalia would lead the world in discovery, because discovery would be built into the DNA of how the government operates, and how its citizens think.1

The Competitive World of Scientific Theory

Of course, Tyson's Pollyanna-ish assumption that scientists are always objective about the data while politicians are simply adversaries is ridiculous. Thomas Kuhn's The Structure of Scientific Revolutions lays out just how nonsensical such an assumption is. Kuhn argues that the scientific understanding of a certain concept, such as the nature of light, can have "a number of competing schools and sub-schools"2 arguing for their own interpretation even when they are all using the same data. Kuhn states:
Each of the corresponding schools derived strength from its relation to some particular metaphysic, and each emphasized, as paradigmatic observations, the particular cluster of optical phenomena that its own theory could do most to explain. Other observations were dealt with by ad hoc elaborations, or they remained as outstanding problems for further research.3 (emphasis added)
These are not detached, non-emotional observations. Scientists are people and each has a dog in the fight, so to speak. It isn't surprising that they would want to see their own theories succeed, just as politicians would want to see their own legislation pass. It isn't malicious, it's being human. And in modern research, when you add research grant money into the mix, there's a potent motivator to really push to justify one's efforts.

Paradigms and Flaws

Kuhn goes on to tell of other problems that plague scientific discourse. For instance, the "body of evidence" Tyson looks toward may itself be limited by the limits of technology. Scientists may not be able to see how their theories are flawed simply because they have to guess at what data they should measure and where to look for it. Maybe the instrument that would prove their theory false hasn't yet been invented. Charles Darwin couldn't have realized the complexity of living cells, since in his day there were no microscopes capable of displaying the amazing molecular machinery that allows the cell to function.

This "body of evidence" that Tyson references may also be deeply flawed. Researchers at Open Science and at Bayer labs recently found that 65 to 75 percent or more of results reported in some of the most prestigious scientific journals could not be repeated. There was a strong body of evidence for the researchers' conclusions, but no one had previously bothered to check whether the evidence was good or not. In turn, we get biased policies such as the Food and Drug Administration's 60-year ban on dietary fat, when it turned out the scientist pushing for the restrictions was more concerned with his legacy than the facts.

Some of the problem lies in the technicality and specialization of the scientific disciplines themselves. Kuhn notes that as one of the competing concepts gathers a majority, it becomes a consensus and ultimately a paradigm that closes out others.4 Then, as the field becomes more specialized, the paradigm is assumed by the practitioners and "his communiques will begin to change in ways whose evolution has been too little studied but whose modern end products are obvious to all and oppressive to many."5

Tyson Ignores This Body of Evidence

Kuhn's arguments are based on historical observation for how scientific paradigms have developed. He has quite a body of evidence from which to draw: the entire history of the scientific enterprise. Yet, Tyson seems to completely ignore this in his proposal for a country of Rationalia. I find that interesting. If Tyson won't even acknowledge the body of evidence against science being just as flawed as politics or other governing methods, then he is proving the very point his critics are making. Just because a scientist comes to the conclusion of X doesn't make it right, morally good, or unbiased.


1. Tyson, Neil deGrasse. "Reflections on Rationalia." Facebook, 7 Aug. 2016. Web. 10 Aug. 2016.
2. Kuhn, Thomas. "The Structure of Scientific Revolutions." The Philosophy of Science: An Historical Anthology. By Timothy J. McGrew, Marc Alspector-Kelly, and Fritz Allhoff. Chichester, U.K.: Wiley-Blackwell, 2009. 491. Print.
3. Kuhn, 2009. 491.
4. Kuhn, 2009. 492.
5. Kuhn, 2009. 492.

Thursday, June 30, 2016

Why God Doesn't Create a World with Less Suffering

Probably one of the most difficult objections a Christian faces to his or her faith is how an all-powerful, all-loving God can allow so much suffering in the world. I've talked at length about the Free Will Defense (see the short video here), which is the most common response to the problem of evil.

The defense holds that in order for human beings to be free, we must be free to do what is wrong as well as what is right. As I've explained, God cannot do what is logically impossible. That means he cannot make a square circle or a light-filled darkness. To have the ability to freely love God and obey him, an individual must also have the ability to reject God and disobey his laws. If I grant my son the freedom to drive my car while I lock him in his room to prevent him from crashing it, I've certainly not given him freedom. That means if God wants to create free creatures, they simply must be able to do evil acts as well as good ones.

Alvin Plantinga argues in a similar way: if God wanted to create creatures that are significantly free, he must give them the ability to do things that are morally evil. Thus, Plantinga concludes, "The fact that free creatures sometimes go wrong, however, counts neither against God's omnipotence nor against His goodness."1

Why Can't an All Powerful God Make People Only Hurt Themselves?

While the Free Will Defense has convinced philosophers that there is no logical contradiction between an all-loving, all-powerful God and the fact that evil exists in the world, many atheists still object to what they perceive as too much evil, or the wrong kinds of evil, in the world today. Of course, explaining just how that would play out is a much more daunting task, one the atheist is usually incapable of doing.

Richard Norman feels that God should have created a world where people are still free, yet if they do evil things, they will only inflict suffering upon themselves.2 In the abstract, this sounds reasonable, but it really isn't. It strikes me that in order for God to create a world where evil acts hurt only the perpetrator, one of three scenarios must exist. The first is that the perpetrator lives in a world where his or her actions have no significant consequence to any other being. Think of a person taking a sword and slicing another in two, but when he does so the sword passes through the other person's body like an apparition. We have a good model for this kind of world in the video game arena; you may lose your lives and lose the game, but you harm no one else.

But we know that any virtual world isn't as valuable as the real world. We are appalled by those who would shun actual relationships to seek sexual gratification only through virtual reality apparatus. It is because video games don't provide real-world consequences for one's actions that we understand them as an occasional pastime and not what should be driving and informing our humanity.

The second scenario is to reduce the choices people can make to those that are non-meaningful, save self-destruction. In this case, a person would have no choices available to him or her regarding others. We could not choose who we love, if we love, if we walk, if we work. Everything would be run off a program. The only choice we would have is a "harm thyself" button on our chests. That one we can push, knowing that if one did decide to blow himself up, the machine-maker would immediately replace him with another so no other cog in the wheel would fail. This obviously defeats the purpose of creating a significantly free being; a single self-destruct option doesn't amount to much of a choice.

We Need Suffering to Be Human

The last option available in order for no one to harm another is a world that I think God could actually construct. That is simply to create a world where every single human being is isolated from one another. If one has no relationship with others, it becomes impossible for any emotional attachments to develop and for one's actions to produce any suffering upon another. In such a world, we would perceive ourselves to be alone. God would basically be creating billions of worlds of single individuals. But again, this robs humanity of one of its distinguishing features—the ability to develop meaningful relationships, empathy, and love for one another.

The fact that people suffer in this world is difficult to understand. However, the fact that it bothers us is hugely valuable. Empathy is part of what makes us human and it is part of what it means to be made in God's image. To be human means to understand that suffering is a bad thing, which requires us to be relational beings who can make free choices. Take away our freedom to choose to do evil and you are left with beings who are less than human.


1. Plantinga, Alvin. God, Freedom, and Evil. Grand Rapids: Eerdmans, 1977. Print. 30.
2. Dr. Norman explained his position on the Unbelievable? radio program "Why does God allow Evil? Clay Jones vs Richard Norman." 21 June, 2014.
Image courtesy Tripwire Interactive, CC BY 3.0

Tuesday, May 31, 2016

When Does Cultural Insanity Hit the Breaking Point?

The Internet is ablaze with all kinds of opinions about the shooting of Harambe, a seventeen-year-old gorilla zookeepers shot at the Cincinnati Zoo after the beast grabbed a three-year-old child who had fallen into his enclosure. Twitter showed the hashtag #JusticeForHarambe trending over the weekend, and a petition entitled "Justice for Harambe" has garnered over 350,000 signatures urging that the parents of the toddler "be held accountable for the lack of supervision and negligence that caused Harambe to lose his life."1

Obviously, this only proves there are 350,000 people in the world who have never had to watch a toddler for an extended period of time.

Others are decrying the response of the zoo in shooting the ape. NBC News reported "Animal rights activists continued to protest Monday" over Harambe's death. But just what is there to protest? A child's life was in danger and the only way to guarantee his safety was to shoot the animal. This is a no-brainer, yet it has seen a significant amount of coverage and discussion across the various media outlets.

Detaching Desire from Reality

The gorilla protesters aren't a big thing by themselves. However, the event is indicative of a very scary trend that has been developing rather quickly in society. People have basically decoupled themselves from reality. We have seen it in the transgender issue, where people not only wish to believe their desire is enough to change the reality of their biology; they demand that everyone else reinforce their desire. We've seen it in spoiled college kids who think if they only hear opinions and ideas about how they want the world to be, they won't be "triggered" and therefore bad things won't happen to them. We've seen it in every televised police pursuit, where each felon seems to really believe that he or she can single-handedly escape an entire police squad with radios, spike strips, and helicopters to track their every move. How do those always end?

While it's easy to point at each scenario and shake our heads, I'm wondering when enough will be enough. I understand and accept that in any free society one will face competing belief systems. I think that's actually healthy. Everyone should be challenged to understand and produce reasons for the beliefs he or she holds. But that isn't what this is. We've moved from reasoning to reactionary, and from truth to tale. Just as some use edited photos and posts to craft a non-real version of their lives on social media, there are those who now believe they can similarly shape their entire world experience.

The problem is that the real world doesn't play this game. People end up getting hurt. Zookeepers explained that tranquilizers don't work like you see in the movies. They can take up to 30 minutes to take effect. In the interim, you've just angered a 450 lb. gorilla who can crush that toddler like an empty soda can. Is that really a good plan? If it were your child, would you still advocate for it?

Reality can be hard. Ignore it and sooner or later it comes back at you like Glenn Close in Fatal Attraction, coldly asserting "I'm not going to be ignored!" If protesters were there to stand in front of the zoo marksmen, stopping them from shooting and the child died, then what? Who would be to blame then?

I applaud the zoo officials for making the right call in this instance. Human beings are more valuable than animals, full stop. If you must choose between one or the other, choose the human. That's what being civilized is.


1. Hurt, Sheila. "Cincinnati Zoo: Justice for Harambe.", 29 May 2016. Web. 31 May 2016.

Tuesday, May 24, 2016

Morality Must be More than Increasing Happiness

Moral knowledge is something people take for granted. We understand that someone who inflicts pain and torture for recreational purposes is evil. That's why the rapist is so reviled. It shows morality as something objective, not merely a preference to be held. Moral values and duties must be anchored in God to be meaningful.

Yet, people don't like to admit an objective morality means they themselves may be morally culpable for acts they choose. So they try to escape the consequences of objective morality. Some do this by claiming that morality isn't objective but relative. This reduces them to stating absurd conclusions, like rape may be morally justified. Others try to ground objective morality in something other than God. I think those folks don't really have an appreciation for what true morality entails. Still, they offer ideas such as that morality is a byproduct of evolutionary survival benefits.

One of the more popular ways to try to hold to an objective morality while dismissing God comes in the form of utilitarianism. Utilitarianism holds that increasing happiness and diminishing suffering is the measure of what's good. There are many problems with utilitarianism, but one in particular is that its definition is too narrow: it fails to count as moral many questions that are generally agreed to be questions about morality. Richard Swinburne makes this point well:
For some, a belief about moral worth is simply a belief about which actions are important to do. But that use would allow the narcissist, who thinks that is important that he promote his own happiness, to have a moral view, and so it would fail to bring out the distinction which most of us make among the considerations by which we judge the worth of actions, and which I argued to have such importance. For others, a belief about moral worth is a belief about the importance of actions in virtue of their universalizable properties of a certain kind—e.g. those concerned with sex or (more widely) those concerned with the promotion of happiness or unhappiness of other people. On the latter account it would be a moral view that men ought to feed the starving; but not a moral view that men ought to worship God, or that artists who can paint great pictures ought to do so even if those pictures will be seen only by themselves. If you use 'moral' in this limited sense, you can say without contradiction 'I think that religion is more important than morality'; but on my preferred use it would be self-contradictory to assert of anything describable in universal terms that it was more important than morality. A man's morality is (with the qualification that it be not centred on self or any other particular individual) what he believes most important. My grounds for preferring my use are that so many men's beliefs about which actions are important to do are supported or opposed both on grounds which concern the happiness and unhappiness of other people and also on other grounds (e.g. whether the action shows due loyalty, pays honour to whom honour is due, involves keeping a promise or telling the truth), that confining the term to the narrower use would obscure the overlap of grounds of the different kinds in leading to beliefs about overall worth.1
Swinburne is arguing for a view of morality that is objective and prescriptive; moral values and duties are real "oughts" to which all of humanity are beholden. If something like telling the truth is valuable in itself, then morality must be larger than simply adding to the happiness of an individual or individuals. If morality is only about increasing happiness, it begs the question of whether honor has any real meaning since one can bestow false honor on another to make him or her happy. But there is something not quite right with honoring a coward alongside a hero after the battle. And the weighing of the rightness or wrongness of an action identifies it as a moral question, one that defining morality as increasing happiness alone cannot solve.


1. Swinburne, Richard. The Evolution of the Soul. Oxford: Clarendon, 1986. Print. 223-224.

Wednesday, March 16, 2016

The Good, the True, and the Beautiful

Is beauty something that is objective? As I speak with people today, many answer with a quick "no." "Beauty is in the eye of the beholder," they say. However, the thought may be a bit too easily dismissed. Certainly, we all would have serious questions about a person who upon observing a radiant sunset over the Grand Canyon would exclaim "Ew! That's so ugly!" There's something universal in our appreciation for the beauty of that vista.

In his book Beauty: A Very Short Introduction, Roger Scruton tackles this point. He explains:
There is an appealing idea about beauty which goes back to Plato and Plotinus, and which became incorporated by various routes into Christian theological thinking. According to this idea beauty is an ultimate value-something that we pursue for its own sake, and for the pursuit of which no further reason need be given. Beauty should therefore be compared to truth and goodness, one member of a trio of ultimate values which justify our rational inclinations. Why believe p? Because it is true. Why want x? Because it is good. Why look at y? Because it is beautiful. In some way, philosophers have argued, those answers are on a par: each brings a state of mind into the ambit of reason, by connecting it to something that it is in our nature, as rational beings, to pursue. Someone who asked 'why believe what is true?' or 'why want what is good?' has failed to understand the nature of reasoning. He doesn't see that if we are to justify our beliefs and desires at all, then our reasons must be anchored in the true and the good.1
Scruton then begins exploring the question in more depth. He notes that the good and the true would never be at odds with each other, yet someone can be so charmed by a mythical account that they choose to believe it, "and in this case beauty is the enemy of truth."2 However, as he unpacks just what the beautiful entails, Scruton demonstrates that real beauty is more than attraction. It goes deeper, to a deep appreciation for the thing as it is. We appreciate the sunset not because of what it can do for us, but for what it is in itself. "When our interest is entirely taken up by a thing, as it appears in our perception, and independently of any use to which it might be put, then do we begin to speak of its beauty" (emphasis added).3 Scruton defines this appreciation for the thing itself as a "disinterested interest," meaning we are interested not in what the thing can do for us but in its intrinsic essence. In this sense, the sunset is truly beautiful while a myth is not. The myth is a false beauty, for it is not true; its intrinsic essence is built upon falsehood.

There is much more that I could write in this regard, but I will leave it to those interested to grab Scruton's book and explore the ideas further. I do think, though, that the case for the beautiful to be placed beside the true and the good as objective ultimate categories is compelling. As such, we should understand that just as the good and the true are rooted in an all-good and all-truthful God, the ultimate grounding of the beautiful would too be found in a God whose nature is the source of all beauty.


1. Scruton, Roger. Beauty: A Very Short Introduction. Oxford: Oxford UP, 2011. Print. 2.
2. Scruton, 2011. 2.
3. Scruton, 2011. 14.
Image courtesy Todd Petrie and licensed via the Creative Commons Attribution 2.0 Generic (CC BY 2.0) license.

Thursday, February 18, 2016

Why Science Cannot Ground All Knowledge

Is science the best, most assured way of learning about reality? In the minds of more and more people, the answer is "yes." Yesterday, I highlighted a quote from scientist Peter Atkins on how he relies upon science to inform him about the world, dismissing even the consideration of God's existence as "lazy." But relying on science as the only arbiter of truth claims will never work, because science cannot function as one's starting point.

When explaining reality, everyone must have a starting point. For example, one may observe an event, such as a strike of lightning, and ask, "What makes that happen?" A person may respond by describing how a storm cell moving across the land scrapes off electrons until the charge builds to such a degree that they rush back to the ground, which is a reasonable scientific answer. The first person would be justified in asking, "How do you know that?" More conversations could ensue about the structure of atoms, experimental testing and predictions, etc. But each time, the questioner could ask for further justification for the facts being presented. Sooner or later, there must be a starting point for science.

Four Assumptions Scientists Must Hold

Assuming the questioner drives his respondent back further and further (i.e. "But how do you know that?"), one will quickly see the scientific method relies upon several assumptions. The first is that the world will behave consistently. Scientists assume that because electrons have behaved in a certain way in the past, they will also do so tomorrow, and next week, and fifty billion years from now. Science cannot prove this; the scientist must assume it to make predictions.

Secondly, in order to draw any conclusions at all, scientists must assume logic takes us towards the truth. Without logic, one could never infer anything. How can one infer that every electron in the universe will behave in the same manner as the electrons creating the lightning strike if one cannot build an argument? The scientific method is really a logical argument: it offers support for its premises by way of experimentation and concludes with its hypothesis either confirmed or denied. The scientist gives reasons for his conclusion!

Thirdly, the scientist must assume ethics are important. Much research today draws its conclusions not simply from its own findings but from prior research and publication. Falsifying data to arrive at the conclusion one wants is considered wrong. Even unintentional bias and flawed research methods can corrupt results. That's why there's a real concern that so much of what's being published in scientific journals is irreproducible.  Without assuming ethical standards of truth-telling and the importance of solid methodology, scientific endeavors would be a confusing mishmash of conflicting data, with everyone's opinion held as equally valuable.

Lastly, the scientist must assume that his or her own mind is reliable in reporting how the world works. This is a key component to the scientific process and it also poses the biggest problem in cutting God out of the picture. If your brain is the product of mutations whose only benefit to its host is that of survival, then why should you trust it? Survival is not the same thing as truth-telling. In fact, lying can make survival much easier in many circumstances. As long as survival is the outcome, it doesn't matter whether you believe you need to run from the tiger because you're in a race or because it may eat you. If you get away, the same result is achieved. So, if we evolved from some primate species, why trust our "monkey-brains" to tell us the truth? How could one argue that a mindless, random process would even act in an orderly way?

God Grounds the Starting Points

Going back to our first point, one must assume some intentional ordering of the universe in order to ground the assumption of a consistent universe. Christianity teaches that God is a consistent God. He would create his universe in such a way that it would be consistent as well. This gives us a reason to believe in the consistency of the universe, a reason which science cannot offer. Scientists certainly assume the universe is consistent in its laws, but they have no basis for doing so, other than that's what they've seen. But even our dreams have an air of consistency to them until we wake up. Then we realize how inconsistent they are. To assume the universe is consistent, then, requires a grounding that observation alone cannot supply.

Secondly, in the assumption of logic, God also becomes the starting point. If God is the logos—that is Reason itself—then logic and reason are built into the universe as reflections of his nature. Logic works because God is a logical God and we, as rational creatures, bear his image. Thus, we can understand and use reason to discover truths about the created order.

Thirdly, morality must have its grounding in God. The concept of classifying things as right or wrong is central to theology. One cannot have absolute standards of right and wrong without appealing to a being who transcends all of creation. That is God.

Lastly, the fact that a God of reason created us with the capacity to reason gives us grounding for trusting that capacity itself. As part of God's created order, we can experience it in meaningful ways.

Science is a wonderful tool that tells us much about a very small slice of reality: the natural world. But the world is much bigger than its mechanics. Logic, ethics, aesthetics, relationships, mathematics, abstract concepts, and spiritual realities also comprise our lives and our experiences. Not only can science not explain these things, it must assume them before it gets going. It cannot explain its own assumptions, and therefore shows its incapacity for being the proper starting point.

Image courtesy Longlivetheux - Own work, CC BY-SA 4.0

Friday, January 22, 2016

You Need an End Game for the Origin of Life

Antony Flew was one of the more formidable philosophers who argued against Christian theism over the course of his career. Flew was intelligent, a powerful writer, and fair in his argumentation. He never let his ideology get in the way of his investigation. As he said, "My own commitment then as a philosopher who was also an areligious unbeliever was and remains that of Plato's Socrates: 'We must follow the argument wherever it leads.'"1

Even as an "areligious unbeliever" philosopher, Flew had become more and more bothered by certain inherent problems associated with the neo-Darwinist conception of evolution. Primarily, Flew was concerned about the origin of life, or as he later asks in his book, "How did life go live?" Even prior to his announcement that he was renouncing atheism and identifying as a theist, he wrote:
Probably Darwin himself believed that life was miraculously breathed into that primordial form of not always consistently reproducing life by God, though not the revealed God of then contemporary Christianity, who had predestined so many of Darwin's friends and family to an eternity of extreme torture.

But the evidential situation of natural (as opposed to revealed) theology has been transformed in the more than fifty years since Watson and Crick won the Nobel Prize for their discovery of the double helix structure of DNA. It has become inordinately difficult even to begin to think about constructing a naturalistic theory of the evolution of that first reproducing organism. 2

The End Goal of Life Must be there in the Beginning

Flew identifies three key questions about the origin of life that are philosophical in their purpose. He asks, "How can a universe of mindless matter produce beings with intrinsic ends, self-replication capabilities, and 'coded-chemistry'?"3 These are key issues in the debate over the origin of life.

The first concept, that of the goal of an organism, is tied in some ways to the second. Living things reproduce. Without reproduction, evolution is a non-starter. When one discusses the origin of life, one of the goals of that organism's function must be to make more of itself; otherwise we only see a recurring series of dead ends. But we don't see this as a result of any other laws of nature. Just how did this function of organisms that are living and have some kind of end goal (e.g. surviving and reproducing) come about? And how did the DNA, which represents the coded chemistry, become representative of those functions?

Goals and desired ends don't come about by random acts. Neither do codes. Codes are really arbitrary. Flew points to David Berlinski's example of Morse code, noting the connection of dots and dashes to specific letters is the connection a mind makes.4 The codes are a vehicle to carry information, but they aren't the important part of the equation. The message is. Therefore, codes exist first in the minds of the code-builders who construct them for a specific purpose, to communicate messages over a certain medium.
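The arbitrariness Berlinski points to can be made concrete with a small sketch (a hypothetical illustration in Python, not anything drawn from Flew or Berlinski). The table below carries messages only because someone stipulated which pattern stands for which letter; nothing about the dots and dashes themselves contains that meaning.

```python
# A fragment of the Morse code table. The mapping is a convention held in
# minds: nothing about ".-" intrinsically means "A".
MORSE = {"A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
         "O": "---", "S": "..."}

# The reverse table, built from the same convention. A receiver can decode
# the signal only because both parties share the code-builder's stipulation.
DECODE = {pattern: letter for letter, pattern in MORSE.items()}

def encode(message: str) -> str:
    """Translate letters into space-separated Morse patterns."""
    return " ".join(MORSE[ch] for ch in message.upper() if ch in MORSE)

def decode(signal: str) -> str:
    """Recover the message from the patterns, using the shared convention."""
    return "".join(DECODE[pattern] for pattern in signal.split())

print(encode("SOS"))            # ... --- ...
print(decode("... --- ..."))    # SOS
```

Swapping the assignments (say, letting "---" mean "S") would work just as well, so long as sender and receiver agree, which is the point: the information rides on a stipulation, not on the physics of the symbols.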

So, the purpose or design or the end game of an organism—what is known in philosophy as teleology—is crucial not simply to sustaining life but to life's origin. From the very beginning, we see life must have the goal of survival and replication built in. It uses coded DNA to carry out this goal; and the code itself implies a goal-oriented creation.

The very first life requires purpose and cannot be explained away as mere randomness. The question becomes: how can you get goals without a mind?


1. Flew, Antony. "Letter from Antony Flew on Darwinism and Theology." Philosophy Now, Issue 43. October/November 2003. Web. 22 Jan 2016.
2. Flew, 2003.
3. Flew, Antony, and Roy Abraham. Varghese. There Is a God: How the World's Most Notorious Atheist Changed His Mind. New York: HarperOne, 2007. Print. 124.
4. Flew, 2007. 127.

Monday, January 04, 2016

Belief without Evidence is Crucial for Knowledge

Being a reasonable person is a great goal; no one wants to be thought of as foolish or gullible. But does being reasonable mean one needs to have reasons for all of one's beliefs? I've run into many people who would answer "Yes" to that question. I mean, even the word "reasonable" contains the root "reason"! How could one be reasonable without having reasons for one's beliefs?

This kind of thinking is prevalent in the online conversations I have with atheists. I recently offered one in this example. Not only is my interlocutor unreasonable in asking for evidence for rather benign claims (like a person's academic achievements in casual conversation), he is wrong about what constitutes reasonable belief at all.

Principle of Credulity

In the introduction of his book The Evolution of the Soul, philosopher Richard Swinburne lays out some key principles we all use in our reasoning. The first is the Principle of Credulity. Swinburne defines it as "in the absence of counter-evidence probably things are as they seem to be."1 This principle holds that we should basically trust what our senses tell us. While sometimes our senses can be wrong, we trust them to tell us true things about the world, for that's simply how we observe the world. As Swinburne points out:
Without this principle, there can be no knowledge at all. If you cannot suppose things are as they seem to be unless further evidence is brought forward—e.g. that in the past in certain respects things were as they seemed to be, the question will arise as to why you should suppose the latter evidence to be reliable. If ‘it seems to be' is good enough evidence in the latter case, it ought to be good reason to start with. And if ‘it seems to be' is not good enough reason in the latter case, we are embarked on an infinite regress and no claim to believe anything with justification will be correct.2
This is the key point when debating with a person who will only accept something based on evidence, or who holds that evidence only counts if it is scientifically testable.

What Counts as Evidence?

Take a claim like the one Paul made in 1 Cor. 15:5-7 that the resurrected Jesus appeared to Peter, then to the twelve, then to more than five hundred people, then to James, then to all the apostles, and lastly to Paul himself. Paul is offering evidence in the form of eyewitness testimony, both his own and that of others. If one discounts that as evidence, by what criteria is one doing so? If it is because eyewitnesses can get things wrong, then why ever allow them in courts? What about scientists who base all of their research on visual observation of events or instruments? Doesn't it follow that their eyes could deceive them as well?

The objector might claim, "My problem with that testimony is we simply don't observe people rising from the dead!" But that objection really begs the question, as Swinburne notes. If observation cannot be trusted, why should we trust the observation that people don't rise from the dead?  Maybe they have in the past and we missed it!

If you press for evidence before you believe anything, you will never reach a starting point. There is always the question of "What is the evidence that backs up the evidence you're presenting? Why should I believe that to be true?" It becomes as Swinburne said an infinite regress, where one can never justify anything at all.

In the next post, I'll highlight another of these principles, one that explains why, in the absence of any evidence to the contrary, testimony specifically should be believed. Stay tuned.


1. Swinburne, Richard. The Evolution of the Soul. Oxford: Clarendon, 1986. Print. 11.
2. Swinburne, 1986. 12.
Image courtesy jon crel and licensed via the Creative Commons Attribution 2.0 Generic (CC BY 2.0) License

Friday, December 04, 2015

The Logical Incoherence of Arguing God is a Social Construct

Yesterday, I received a message from a Christian student who was frustrated at his professor's dismissal of religious belief as socially constructed. He writes:
Today in my Sociology class, we covered a very controversial topic--Religion. My professor explained to us that his goal was to be as objective as possible, but still, implemented his ideas into the lecture.

Some notable points he brought up, which are straight from the Sociology textbook, is that all religion is "socially constructed" and that faith is "belief without scientific evidence." He then brought up the Council of Nicea, concerning the nature of Christ, which reconciled the two ideas that Christ was both fully man and fully God, but attributed it to maintaining unity in the church. In short, we made this up in order to keep peace.

He stated that religion is constantly evolving and falsely asserted that Christianity was the first to develop monotheism. His final statement was made near the end of the lecture that "we all need to exercise some level of spirituality in order to survive" since religion provides comfort in the case of tragedy.

How does one, especially as a student, respond to such claims? It's apparent the professor has already chosen where he stands concerning religion. When another spoke up during the lecture, it was clear all he wants to do is debate. As Christians, should we speak up or not cast our pearls before swine?
There are really a couple of questions here. On Monday, I'll tackle how Christians should respond when placed in these difficult situations, but first I want to talk about some of the professor's claims, many of which are demonstrably false. The easiest one to dismiss is the one the student already recognized: that Christianity was the first to develop monotheism. Simply put, no one believes this! Judaism had monotheism down well before Jesus ministered on earth, a fact that is widely accepted by sociologists of all stripes. Either the prof misspoke, was misunderstood, or chose to ignore an accepted point of history on this.

Dealing with the "Socially-Constructed Religion" Charge

What about the larger point that religion is "socially constructed"? The charge isn't new. It was probably most famously made by the philosopher Ludwig Feuerbach in his The Essence of Christianity in 1841. There, Feuerbach lays out the argument that human beings will see and interpret their world to reflect their own nature. In other words, God doesn't really exist; he is an expression of understanding the world in human terms, a super-human projected onto the world.1 Freud taught a similar concept: that belief in God, salvation, and the resurrection was simply a form of wish-fulfillment to satiate the desires of humanity's frailty.2

Feuerbach's charge has been offered over the years as the trump card to explain the universality of belief in the divine. There's only one problem; it doesn't follow. Another philosopher named Eduard von Hartmann spotted the logical flaw in Feuerbach's argument and clearly dismantled it. Alister McGrath explains:
At the heart of Feuerbach's atheism is his belief that God is only a projected longing. Now it is certainly true that things do not exist because we desire them. But it does not follow from this that, because we desire something, it does not exist. Yet this is the logical structure of Feuerbach's analysis. Eduard von Hartmann pointed this out nearly a century ago, when he wrote: ‘it is perfectly true that nothing exists merely because we wish it, but it is not true that something cannot exist if we wish it. Feuerbach's entire critique of religion and the proof of his atheism, however, rest upon this single argument – a logical fallacy.'3
The thing von Hartmann realized is that people wish for all kinds of things. Snowboarders in California have been wishing for the drought to end so they can go snowboarding, for example. However, just because people wish for something doesn't mean the thing they wish for is false. If the meteorologists are right, California is in for a very wet winter this year! Similarly, whether or not people wish that God exists has no bearing on whether he does in fact exist. Those are two separate issues, and von Hartmann rightly notes that Feuerbach, and Freud by extension, have staked their dismissal of God on fallacious reasoning. They're being illogical to hold to their position.

Nicaea Was Not About Making Nice

The idea that Nicaea was held to reconcile the divinity and humanity of Jesus so that everyone could, to quote Rodney King, "just get along" is simply untrue. The concept of the Trinity predated Nicaea by some time. In fact, Tertullian used the word to describe God at least a century earlier. By 325, there were the Trinitarians, who held to Jesus's equality with the Father, and the Arians, who held that Jesus was divine but not in the same way as the Father. Both sides held to their views adamantly, and Nicaea was called to determine which view was correct.

The Council at Nicaea clarified the orthodox stance that most Christians already held, but it certainly didn't make everyone get along. The fight continued for another fifty years and got so heated that Pope Liberius, who had supported the Nicene Creed, was exiled by the Arian Emperor Constantius II. He was then pressured to excommunicate the Trinitarian champion Saint Athanasius and ultimately even signed off on a creed that espoused Arianism and rejected Trinitarianism!4 It wasn't until the Council of Constantinople in 381 AD that Arianism was definitively defeated and Trinitarianism was solidified as the orthodox position of the church. So, if the Trinity was invented at Nicaea to maintain unity in the church, it was an incredible failure!

As you can see, when one studies the history and the background of these claims, a much different picture of them emerges. I will address the thorny issue of how to engage in class discussion on these topics next time, but one thing you should consider is that the more you know about the history of your faith, the better prepared you can be when such discussions arise.


1. Feuerbach writes, "Religion is that conception of the nature of the world and of man which is essential to, i.e., identical with; a man's nature. But man does not stand above this his necessary conception; on the contrary, it stands above him; it animates, determines, governs him. The necessity of a proof, of a middle term to unite qualities with existence, the possibility of a doubt, is abolished. Only that which is apart from my own being is capable of being doubted by me. How then can I doubt of God, who is my being? To doubt of God is to doubt of myself." Feuerbach, Ludwig. The Essence of Christianity. London: Trubner, 1881. Print. 20.
2. Holt, Tim. "Sigmund Freud: Religion as Wish-Fulfilment." Philosophy of Religion. N.p., n.d. Web. 04 Dec. 2015.
3. McGrath, Alister. "God as Wish Fulfilment?" UCCF: The Christian Unions, 12 May 2005. Web. 04 Dec. 2015.
4. Pavao, Paul F. "Pope Liberius." Christian History for Everyman. Greatest Stories Ever Told. Paul Pavao, 2009. Web. 04 Dec. 2015.
Image courtesy Maciej Chojnacki and licensed via the Creative Commons Attribution 2.0 Generic (CC BY 2.0) License.

Sunday, September 13, 2015

Is Methodological Naturalism Question-Begging?

Metaphysical naturalists may be inclined to suggest that they cannot be accused of question-begging in endorsing methodological naturalism, since this methodology is simply a logical extension of their metaphysical views. If one has good reason to believe there exist no nonnatural entities, then one can hardly be faulted for adopting a methodology which refuses to countenance nonnatural causes.

What this suggestion ignores is that metaphysical naturalists typically assert the truth of naturalism on the basis of Ockham's Razor. Very few naturalists are willing to argue that it can be demonstrated that the existence of nonnatural entities is logically impossible. Rather, they assert that there is insufficient evidence for the existence of such entities and that one should, therefore, refuse to posit them.

It seems, however, that the existence of physical events which are best explained on the hypothesis of a nonnatural cause would meet the requirements of Ockham's Razor and thus constitute evidence for a nonnatural entity. For the metaphysical naturalist to adopt a methodology which holds that it is never, even in principle, legitimate to posit a nonnatural cause for a physical event, is to guarantee that the requirements of Ockham's Razor will not be met. This begs the question of whether there exists sufficient evidence to justify belief in nonnatural entities and thus disbelief in metaphysical naturalism, since what is being proposed is a methodology that, by its refusal to countenance the legitimacy of ever postulating a nonnatural cause for a physical event, precludes any marshaling of evidence in favor of nonnatural causes.1

-Robert Larmer
Larmer, Robert A. "Is Methodological Naturalism Question-Begging?" Philosophia Christi 5.1 (2003): 113. Print.

Wednesday, September 02, 2015

If You're Skeptical of Miracles, Then Why Not Morality

Is it unreasonable to believe in miracles? Numerous atheists I've spoken to over the years not only don't believe in miracles, they consider any belief in miracles as illogical. Most point to David Hume's Of Miracles in his An Enquiry Concerning Human Understanding as to why belief in miracles should be considered unreasonable.

I don't find Hume's arguments at all convincing. Still, many atheists hold Hume in the highest esteem when it comes to matters of reason and conviction. They believe Hume's skepticism is the model to be followed as a foundation for rationalism. However, there is one area where Hume's reasoning leads to uncomfortable conclusions: morality.

Christians argue for the necessity of God's existence given the fact that objective moral values and duties really exist. If there is no God to ground them, no binding moral values and duties exist. Hume came to a similar conclusion. Book III of his Treatise of Human Nature focuses on the question of morality, and Hume begins by dismissing the concept of morality as being derived from reason at all. He writes:
It has been observed, that nothing is ever present to the mind but its perceptions; and that all the actions of seeing, hearing, judging, loving, hating, and thinking, fall under this denomination. The mind can never exert itself in any action, which we may not comprehend under the term of perception; and consequently that term is no less applicable to those judgments, by which we distinguish moral good and evil, than to every other operation of the mind. To approve of one character, to condemn another, are only so many different perceptions.1
Hume explains that reasoning shouldn't be colored by a man's passions. Whether or not a proposition is true is irrelevant to the feelings one has about that statement. You may be passionate about your hockey team winning the game, but your feelings don't affect the score in any way. As Mark Linville put it, Hume "maintained that belief in objective moral properties is, at best, unwar­ranted, and talk of them is, in fact, meaningless."2 Here's Hume discussing how even murder cannot be considered objectively wrong:
Take any action allowed to be vicious: Wilful murder, for instance. Examine it in all lights, and see if you can find that matter of fact, or real existence, which you call vice. In which-ever way you take it, you find only certain passions, motives, volitions and thoughts. There is no other matter of fact in the case. The vice entirely escapes you, as long as you consider the object. You never can find it, till you turn your reflection into your own breast, and find a sentiment of disapprobation, which arises in you, towards this action. Here is a matter of fact; but it is the object of feeling, not of reason. It lies in yourself, not in the object. So that when you pronounce any action or character to be vicious, you mean nothing, but that from the constitution of your nature you have a feeling or sentiment of blame from the contemplation of it. Vice and virtue, therefore, may be compared to sounds, colours, heat and cold, which, according to modern philosophy, are not qualities in objects, but perceptions in the mind.3
So Hume holds that there really isn't any objective morality; moral laws are simply our feelings projected outwards trying to get people to not do things we feel are disgusting. It's all about what we personally like or don't like. Reason has nothing to do with the matter.

As Linville notes, modern Darwinists such as Edward O. Wilson and Michael Ruse agree with Hume, holding that objective morality is a "useful fiction" that evolution employed in order to increase survivability.4 If it is a fiction, a falsehood, then it isn't reasonable to believe in morality at all.

Both miracles and moral laws make sense if God really exists. Without God, miracles are a contradiction and moral laws are nothing more than the outward voicing of feelings of discomfort or dislike. If atheists are going to be skeptical of miracles, then why wouldn't they be just as skeptical of morality?


1. Hume, David. "Moral Distinctions Not Derived from Reason." A Treatise of Human Nature. The University of Adelaide, 3 July 2015. Web. 02 Sept. 2015.
2. Mark D. Linville. "The Moral Argument." The Blackwell Companion to Natural Theology. By William Lane. Craig and James Porter Moreland. Chichester, U.K.: Wiley-Blackwell, 2009. 393. Print.
3. Hume, 2015.
4. Linville, 2009.
Image courtesy Andreas Schamanek and licensed via the Creative Commons Attribution-NonCommercial 3.0 License.

Friday, August 28, 2015

Discovering God the Way Sherlock Holmes Would

I recently received a comment on my post on how the origin of life creates a significant problem for the naturalist. I was charged with making a "God of the gaps" argument. While a reading of the actual article displays no such breach in logic, it did begin an exchange with my critic that proves all too familiar: any logical argument that ends by inferring a supernatural actor as the best explanation of the facts at hand is easily dismissed as "God of the gaps," while any assumption that "science will one day figure it out" is supposedly rational.

This is an old canard that I've dealt with before (here and here), but I tried to take a different tack in this engagement. I wanted to place the burden on my objector, so I asked, "Can you tell me the distinction between a valid inference for God and what you would classify as a God of the Gaps argument?" His reply is telling:
I'm not sure there is one. Abduction seems to be little more than a guess until a better explanation comes along. Science may well provide an answer to the origin of life in the future. (Which is something we may conclude through induction, a much stronger epistemology than abduction.)
There's so much wrong with this statement that it's hard to know where to begin. First, let's unpack some terms. There are two ways we can draw conclusions based on reasoning: deductive reasoning and inferential reasoning. In deductive reasoning, the conclusion is inescapable from the facts presented. The oft-used example is that given the facts that all men are mortal and Socrates is a man, one is forced to conclude that Socrates is mortal.

Understanding Inferences

While Sherlock Holmes is well known for what Doyle's books called "the science of deduction," he actually didn't deduce things. He used inferential reasoning. An inferential argument takes what is generally understood to be the case and applies it to the specific circumstance at hand. For example, people have observed that like electrical charges repel each other and opposite charges attract. Thus, when English physicist Joseph John Thomson saw that cathode rays would bend certain ways based on whether a positive or negative magnet was placed near them, he inferred that the cathode ray was made up of negatively charged particles. The electron was discovered.1

The argument that Thomson used is known as abduction, which simply means reasoning to the best explanation. We take the facts that we know and try to get at the truth. Usually, that means applying a rule we already understand, such as the laws of magnetism, and seeing if it does a good job of explaining the specific circumstance we see. Your doctor does this all the time, such as when he prescribes penicillin for your bacterial infection. Prescribing penicillin isn't "little more than a guess" but is based on what is most likely, though not necessarily, the case.

Abductive Arguments Drive Science

Because deductive arguments are few and far between in the real world, most of science is built on inference to the best explanation. Ironically, my critic got induction and abduction kind of backwards; induction in this sense is actually the weaker of the two. The Stanford Encyclopedia of Philosophy clarifies the difference:
You may have observed many gray elephants and no non-gray ones, and infer from this that all elephants are gray, because that would provide the best explanation for why you have observed so many gray elephants and no non-gray ones. This would be an instance of an abductive inference. It suggests that the best way to distinguish between induction and abduction is this: both are ampliative, meaning that the conclusion goes beyond what is (logically) contained in the premises (which is why they are non-necessary inferences), but in abduction there is an implicit or explicit appeal to explanatory considerations, whereas in induction there is not; in induction, there is only an appeal to observed frequencies or statistics. 2

Closed to the Best Explanations

I explain all this to make sure you understand that arguments like the one inferring God from the origin of life are not merely guesses or "God of the gaps" claims. They are just like those abductive arguments that are the cornerstone of scientific and medical research. Human beings have observed life throughout our history. Never once in all of that time observing life have we ever seen life come from non-life. In fact, Louis Pasteur's experiments showed that life doesn't spontaneously arise from non-living material. It is therefore reasonable to conclude that all life comes from other living beings, and thus that the first life came from a living being. That's abduction.

Notice that when asked for a distinction as to what would make a valid inference for God's existence, my critic replied "I'm not sure there is one." That answer is as telling as the rest of the conversation. He has rejected any argument that leads to the conclusion that God exists at the outset. That's his prerogative, but doing so is anti-logic, anti-science, and inconsistent.


1. Douven, Igor. "Abduction." Stanford Encyclopedia of Philosophy. Stanford University, 09 Mar. 2011. Web. 28 Aug. 2015.
2. Douven, 2011.
