Come Reason's Apologetics Notes blog will highlight various news stories or current events and seek to explore them from a thoughtful Christian perspective. Less formal and shorter than the articles on the www.comereason.org Web site, these posts aim to give readers points to reflect on concerning topics of the day.

Showing posts with label science.

Thursday, February 18, 2016

Why Science Cannot Ground All Knowledge



Is science the best, most assured way of learning about reality? In the minds of more and more people, the answer is "yes." Yesterday, I highlighted a quote from scientist Peter Atkins on how he relies upon science to inform him about the world, dismissing even the consideration of God's existence as "lazy." But relying on science as the sole arbiter of truth claims will never work, because science cannot function as one's starting point.

When explaining reality, everyone must have a starting point. For example, one may observe an event, such as a lightning strike, and ask, "What makes that happen?" A person may respond by describing how a storm cell moving across the land scrapes off electrons until the charge builds to such a degree that they rush back to the ground, which is a reasonable scientific explanation. The first person would be justified in asking, "How do you know that?" More conversations could ensue about the structure of atoms, experimental testing and predictions, and so on. But each time, the questioner could ask for further justification for the facts being presented. Sooner or later, there must be a starting point for science.

Four Assumptions Scientists Must Hold

Assuming the questioner drives his respondent back further and further (i.e., "But how do you know that?"), one will quickly see the scientific method relies upon several assumptions. The first is that the world will behave consistently. Scientists assume that because electrons have behaved in a certain way in the past, they will also do so tomorrow, and next week, and fifty billion years from now. Science cannot prove this; the scientist must assume it to make predictions.

Secondly, in order to draw any conclusions at all, scientists must assume logic takes us towards the truth. Without logic, one could never infer anything. How can one infer that any electron in the universe will behave in the same manner as the electrons creating the lightning strike if one cannot build an argument? The scientific method is really a logical argument that offers support for its premises by way of experimentation and concludes with its hypothesis either confirmed or disconfirmed. The scientist gives reasons for his conclusion!

Thirdly, the scientist must assume ethics are important. Much research today draws its conclusions not simply from its own findings but from prior research and publication. Falsifying data to arrive at the conclusion one wants is considered wrong. Even unintentional bias and flawed research methods can corrupt results. That's why there's a real concern that so much of what's being published in scientific journals is irreproducible.  Without assuming ethical standards of truth-telling and the importance of solid methodology, scientific endeavors would be a confusing mishmash of conflicting data, with everyone's opinion held as equally valuable.

Lastly, the scientist must assume that his or her own mind is reliable in reporting how the world works. This is a key component to the scientific process and it also poses the biggest problem in cutting God out of the picture. If your brain is the product of mutations whose only benefit to its host is that of survival, then why should you trust it? Survival is not the same thing as truth-telling. In fact, lying can make survival much easier in many circumstances. As long as survival is the outcome, it doesn't matter whether you believe you need to run from the tiger because you're in a race or because it may eat you. If you get away, the same result is achieved. So, if we evolved from some primate species, why trust our "monkey-brains" to tell us the truth? How could one argue that a mindless, random process would even act in an orderly way?

God Grounds the Starting Points

Going back to our first point, one must assume some intentional ordering of the universe in order to ground the assumption of a consistent universe. Christianity teaches that God is a consistent God. He would create his universe in such a way that it would be consistent as well. This gives us a reason to believe in the consistency of the universe, a reason which science cannot offer. Scientists certainly assume the universe is consistent in its laws, but they have no basis for doing so other than that's what they've seen. But even our dreams have an air of consistency to them until we wake up. Then we realize how inconsistent they are. To assume consistency without any grounding for that assumption is simply to take it on faith.

Secondly, in the assumption of logic, God also becomes the starting point. If God is the logos—that is Reason itself—then logic and reason are built into the universe as reflections of his nature. Logic works because God is a logical God and we, as rational creatures, bear his image. Thus, we can understand and use reason to discover truths about the created order.

Thirdly, morality must have its grounding in God. The concept of classifying things as right or wrong is central to theology. One cannot have absolute standards of right and wrong without appealing to a being who transcends all of creation. That is God.

Lastly, the fact that a God of reason created us with the capacity to reason gives us grounding for trusting that capacity itself. As part of God's created order, we can experience it in meaningful ways.

Science is a wonderful tool that tells us much about a very small slice of reality: the natural world. But the world is much bigger than its mechanics. Logic, ethics, aesthetics, relationships, mathematics, abstract concepts, and spiritual realities also comprise our lives and our experiences. Not only can science not explain these things, it must assume them before it gets going. It cannot explain its own assumptions, and therefore shows its incapacity for being the proper starting point.

Image courtesy Longlivetheux - Own work, CC BY-SA 4.0

Wednesday, February 17, 2016

The Unvarnished Bias of Scientism



Recently, an episode of the Unbelievable? program featured a discussion on whether it is reasonable to claim that advances in science somehow undermine the existence of God. It pivoted on the assertion that science and religion are somehow opposed to one another. In other words, once certain scientific explanations for some observed phenomena are found, the need for God to "do that" is removed.

The show pitted mathematician and physicist Dr. David Glass against Oxford Emeritus Professor Peter Atkins and humanist James Croft. Peter Atkins is a physical chemist and a primary example of what it means to believe not simply in science, but in scientism.

Atkins over and over again characterizes any appeal to a divine intelligence for explaining why things are the way they are as "lazy." It's as though the more he repeats the charge, the more believable he thinks it becomes. Then, at about the 44:52 mark, he offers this statement:
I'm just taking the world as it seems to me, from an utterly unprejudiced point of view. Lying here, looking at the evidence, assessing the evidence, accepting that this purported alternative explanation has arisen from sentiment, misogyny, power, hegemony, you name it… fear of personal annihilation, manipulation. All those things don't convince me that it's a better explanation.
Is this really the viewpoint of someone who holds an "utterly unprejudiced point of view?" Such a claim is farcical on its face. This isn't a one-off comment, either. Earlier in the program, he explained why he rejects theism as holding any sort of explanatory power:
I accepted right at the beginning that you can't disprove the existence of God, because as James [Croft] said, it's such a slippery and ill-defined concept. But what you can do is to understand how people came to believe that "God did it." That is, it's driven by sentiment, fear of personal annihilation, and cultural pressures, and history, and power grabbing, and all the things that go into religious belief. But if you discard those and you're left with trying to understand a mechanism by which the world works, a mechanism how it came into existence, then the only answer is through the scientific method, which is a procedure that depends upon evidence and setting theories into a whole network of understanding.
During the conversation, Glass queries Atkins and asks him how he proposes to use science to explain things like objective moral values, mathematics, and logic. Atkins retorts that ethics indeed can be explained via evolutionary survival principles, thus completely missing the distinction between functional outcomes and moral reality.

Who's Lazy Now?

Atkins' dodge should be noted. He cannot discuss how science claims to account for mathematics or the laws of logic. That's because it is impossible to do so, for science must assume these things before it can even start.

Even leaving all that aside, any person who is even half-interested in the truth will recognize that Atkins is anything but unbiased when trying to understand how beliefs are formed. His vitriolic mischaracterization that all cultures across all societies throughout history came to the conclusion of a creator for the material world because of sentiment, power, and hegemony is shameful. Has Atkins bothered at all to look into this matter? Why doesn't he acknowledge that world-class scientists like Francis Collins, who is doing top-notch work, would be deeply offended at such a characterization?

Atkins' statements do serve a purpose. They function as evidence for only one conclusion: Atkins is the one corrupted by bias. He's the lazy one who isn't interested in seeking answers. He simply wants to throw insults, and his opinions on this issue can be ignored.

Saturday, February 06, 2016

Time Is No Longer the Friend of Darwinism



Today's neo-Darwinian scenarios have always relied on an abundance of time as an essential component of diversification through adaptation. We've all heard the canard that enough monkeys with enough typewriters will eventually produce a Shakespearean sonnet. Of course, it's been shown that even that assumption is far too generous. However, now, as we learn more about the complexity and intricacy of encoded DNA instruction sets, we are finding out that time is not the friend of evolution.

In a paper published last year in the journal Theoretical Biology and Medical Modelling, John Sanford, Wesley Brewer, Franzine Smith, and John Baumgardner look at the nuts and bolts of DNA mutation and calculate just how long it would take to get new, biologically meaningful nucleotide strings. In other words, we don't simply need genetic mutations. We need mutations that will be of some benefit to the organism, will be significant enough to be "set" in the population, and will be able to do so within the population size and number of generations projected for the species. Let's look at each of these requirements in more detail.

1. Mutations Must Add to the Fitness of the Organism

In their paper, Sanford et al. looked at strings of code within DNA commonly referred to as genes. These strings of code are the instruction set for building all the biological systems that make up you and me. Just as a book is composed sentence by sentence, the sentences are built word by word, and the words are built from strings of letters, so DNA nucleotide strings are the source of the proteins or RNA machines that build more complex biological functions. These are the foundation of biology, and it is at this level where any meaningful change must happen for evolution to work.

Further, not just any mutation counts. Just as any random recombination or addition of letters doesn't produce new sentences, the mutations must be of a degree that provides "stronger fitness benefits" to the organism according to natural selection.1 This isn't controversial in and of itself; common descent argues this way. But needing specific mutations lengthens the odds and requires more time than just any mutation.

2. Mutations Must Be Set Within the Population

Advantageous mutations are not good enough, though. The mutations must be of a kind to be hereditary. They have to be able to be passed from parent to offspring, otherwise they die with the carrier. Not only do they need to be hereditary but they need to be dominant enough within the population to permeate the species.

Imagine you have an isolated village of blond-haired people. One dark-haired traveler stumbles onto the village and decides to settle there. He marries and has children, some of whom are dark-haired. While dark hair is a dominant trait, not all the dark-haired man's children will necessarily carry his dark hair gene. Dark hair could still be lost within a few generations.

The model used by Sanford et al. takes this factor into account. It also adds to the complexity, since the same mutation sometimes must arise more than once in a population in order for the gene to become "set."
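To get a feel for how easily a new variant is lost even before selection is considered, here is a minimal sketch of genetic drift in a toy Wright-Fisher-style population. This is my own illustration, not the model from Sanford's paper; the population size, generation count, and trial count are arbitrary values chosen only to make the point.

```python
import random

def new_allele_survives(pop_size=500, generations=100):
    """Track one brand-new allele copy in a diploid population of
    pop_size individuals; return True if any copies remain after
    the given number of generations of random sampling (drift)."""
    total = 2 * pop_size            # total allele slots (diploid)
    copies = 1                      # a single mutant copy to start
    for _ in range(generations):
        freq = copies / total
        # Each allele slot in the next generation is drawn independently
        # from the current gene pool -- this random sampling is drift.
        copies = sum(random.random() < freq for _ in range(total))
        if copies == 0:
            return False            # the new variant was lost
    return True

trials = 2000
kept = sum(new_allele_survives() for _ in range(trials))
print(f"New neutral allele still present after 100 generations: "
      f"{kept}/{trials} trials ({kept / trials:.1%})")
# Classical population genetics says a neutral newcomer's chance of
# eventual fixation is only 1/(2N), so most trials end in loss.
```

Running a sketch like this shows the new variant disappearing in the large majority of trials, which is why the "setting" step is such a bottleneck.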

3. The Size and Reproductive Rate of the Population Matters

Lastly, these kinds of mutations happen at different speeds for different types of organisms, as Michael Behe deftly explained in The Edge of Evolution. For example, E. coli bacteria can have a population pool in the millions within a single colony. They can divide roughly every twenty minutes, which allowed researcher Richard Lenski's long-term experiment to pass 50,000 generations in a little over two decades.2 However, for higher primates, the population size is much smaller and they reproduce at a much slower pace. Human beings must reach puberty before they can start having babies. This means that mutations are passed on much more slowly.
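For a rough sense of scale, here is a back-of-envelope comparison of generation counts. It is not a calculation from the paper; the 6.6 generations per day figure is the commonly cited rate for Lenski's daily-dilution protocol, and the 20-year hominid generation time is simply an assumed round number.

```python
# Rough generation-count comparison; all figures are illustrative.
ecoli_gens_per_day = 6.6          # ~100-fold daily dilution: log2(100) is about 6.6 doublings
years_for_50k_ecoli = 50_000 / (ecoli_gens_per_day * 365)
print(f"E. coli: roughly {years_for_50k_ecoli:.0f} years to log 50,000 generations")

hominid_generation_years = 20     # assumed generation time for a hominid population
print(f"Hominids: roughly {50_000 * hominid_generation_years:,} years for the same 50,000 generations")
```

The same 50,000 generations that a bacterial culture racks up in about two decades would take a hominid population on the order of a million years.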

So what happens when you take these various factors and put them all together into a computer model? Just how long is enough time to get a biologically advantageous mutation within a hominid population, that is, the kind of population that would eventually produce Homo sapiens?

Sanford's model shows there simply isn't enough time for enough significant mutations to move from more primitive hominids to human beings. The paper's conclusion states:
Biologically realistic numerical simulations revealed that a population of this type required inordinately long waiting times to establish even the shortest nucleotide strings. To establish a string of two nucleotides required on average 84 million years. To establish a string of five nucleotides required on average 2 billion years. We found that waiting times were reduced by higher mutation rates, stronger fitness benefits, and larger population sizes. However, even using the most generous feasible parameters settings, the waiting time required to establish any specific nucleotide string within this type of population was consistently prohibitive.3 (emphasis added.)
Interestingly, one point noted in the paper was that increasing population size to help point number three actually works against point number two, by making the new genetic variant more diluted within the population and therefore less likely to become "set."
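To see why these factors compound, here is a rough back-of-envelope estimate (again my own illustration, not the paper's simulation) using the classical approximation that a new beneficial mutation with selective advantage s fixes with probability of roughly 2s. Every parameter value below is an assumption chosen only to show how the pieces multiply.

```python
# Back-of-envelope wait for one *specific* beneficial point mutation to
# arise and then go to fixation. All parameter values are assumptions.
N = 10_000              # effective population size
u = 1e-8                # mutations per site per generation
target_fraction = 1/3   # chance a mutation at the site hits the one needed base
s = 0.01                # selective advantage of the mutation
gen_time_years = 20     # generation time in years

new_copies_per_gen = 2 * N * u * target_fraction  # new copies of the needed mutation per generation (diploid)
p_fix = 2 * s                                     # Haldane's approximation for fixation probability
wait_generations = 1 / (new_copies_per_gen * p_fix)
print(f"Expected wait: about {wait_generations:,.0f} generations "
      f"(~{wait_generations * gen_time_years:,.0f} years) for a single specific site")
# Needing a specific string of neighboring sites compounds the wait far
# beyond this single-site estimate, which is what the paper's simulations model.
```

Even with these fairly generous illustrative numbers, a single specific site takes hundreds of thousands of generations, and the paper's point is that specific strings of sites push the waiting times far higher still.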

Overall, the paper is interesting and offers independent verification of the time problem Behe argued for in The Edge of Evolution, while using a completely different methodology. Make sure you take the time to read it.

References

1. Sanford, John, Wesley Brewer, Franzine Smith, and John Baumgardner. "The Waiting Time Problem in a Model Hominin Population." Theoretical Biology and Medical Modelling 12.1 (2015): n. pag. Web. 6 Feb. 2016.
2. Gallup, Dave. "E. Coli: A "Model Organism" From Theodor Escherich's Legacy." The Environmental Reporter. EMLab P&K, 13 May 2010. Web. 6 Feb. 2016.
3. Sanford, 2015.

Monday, January 11, 2016

It's Crippling to Believe Only in Science



I've written several times on how today's culture holds an over-inflated view of science. Science is a great tool that helps us learn about one very specific subset of knowledge: the mechanics behind the natural world. It cannot tell us about other crucial pieces of reality, such as what constitutes knowledge, what makes a relationship meaningful, or how to stop people from being evil. Given its limited scope, science is therefore of correspondingly limited value.

This isn't to say the study of science is of no value or marginal value. Some of our gravest problems do come from mechanical interactions; illness would be one example. But it is wrong to think that because one can claim "science says so," the discussion should end. With politically contentious and highly complex issues, like how modern humanity may be affecting the climate, a large degree of caution is warranted.

The fact is, science doesn't always get it right. Thomas S. Kuhn explained that scientific advancements do not come in a pattern of smooth upward growth, but in a very herky-jerky set of fits and starts, as those holding to old paradigms are hesitant to give their particular views up. Even if there is a strong consensus of opinion on some particular point, scientists are still people, and people are capable of being wrong and of being persuaded by others who are also wrong.

Here are just a few areas where claims based on accepted science were either rushed, fraudulent, or simply wrong:
  • AETHER: Aether was believed to be an element permeating the universe. The view was held by a consensus of scientists for many centuries, including names such as Isaac Newton, Thomas Young, James Clerk Maxwell, and Lord Kelvin. In the 19th century, more and more scientists held to the theory of luminiferous aether as the all-encompassing medium through which waves of light traveled. So strongly was the theory held that published student reference works would claim: "The cumulative evidence for thinking space filled with a ponderable medium of exceedingly minute density grows stronger every day."1

    However, the entire enterprise and the many well-thought-out explanations of how our universe works were completely overthrown after an 1887 experiment couldn't detect the aether2 and Einstein showed the medium wasn't necessary. Aether is now considered scientifically obsolete; however, it took decades for the theory to be completely abandoned, as the 1914 student reference work demonstrates.
  • PILTDOWN MAN: Paleontologists in Britain announced Piltdown Man in 1913 as a find of one of the "missing links" between ape and man. The scientific community generally accepted it for years, but in 1953 Piltdown 'man' was exposed as a forgery. The skull was modern and the teeth on the ape's jaw had been filed down.3
  • ACADEMIC FRAUD: The US National Institutes of Health investigatory panel found the immunologist Thereza Imanishi-Kari had fabricated data in a 1986 research paper authored with the Nobel prize winner David Baltimore. The findings claimed in the paper promised a breakthrough for genetic modification of the immune system.4
  • N-RAYS: A French physicist, René Blondlot, claimed to have discovered a new type of radiation, shortly after Roentgen had discovered X-rays. American physicist Robert Wood, however, revealed that N-rays were little more than a delusion. Wood removed the prism from the N-ray detection device, without which the machine couldn't work.5
None of these events shows that all of science is corrupted or questionable, but they do illustrate that science has no claim on being the only way to really know something. That's why anyone who says they only believe in "science" has crippled his or her search for the truth before it has even begun.

Evolutionist Stephen Jay Gould said, "Scientists cannot claim higher insight into moral truth from any superior knowledge of the world's empirical constitution."6 That is why I always take exception when, in conversation, an atheist claims to "only believe in science."

References

1. Beach, Chandler Belden, Frank Morton McMurry, and Eleanor Atkinson. "Ether." The New Student's Reference Work: For Teachers, Students and Families. Vol. II. Chicago: F.E. Compton, 1914. Online. https://en.wikisource.org/wiki/The_New_Student%27s_Reference_Work/Ether
2. "Michelson–Michelson–Morley ExperimentMorley Experiment." Wikipedia. Wikimedia Foundation, 23 Nov. 2015. Web. 11 Jan. 2016. https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley_experiment.
3. "Piltdown Man." Natural History Museum. The Trustees of the Natural History Museum, London, n.d. Web. 11 Jan. 2016. http://www.nhm.ac.uk/our-science/departments-and-staff/library-and-archives/collections/piltdown-man.html.
4. Research Integrity Adjudications Panel. "Thereza Imanishi-Kari, Ph.D., DAB No. 1582 (1996)." Departmental Appeals Board, Department of Health and Human Services, 21 June 1996. Web. 11 Jan. 2016. http://www.hhs.gov/dab/decisions/dab1582.html.
5. Carroll, Robert Todd. The Skeptic's Dictionary: A Collection of Strange Beliefs, Amusing Deceptions, and Dangerous Delusions. Hoboken, NJ: Wiley, 2003. 63. Print.
6. Gould, Stephen Jay. "Nonoverlapping Magisteria." Natural History 106 (March 1997): 16-22; Reprinted by The Unofficial Stephen Jay Gould Archive. 1998. Web. 11 Jan. 2016.

Wednesday, January 06, 2016

Questioning Our Over-Reliance on Science (video)



Recently, I got to sit down with the One Minute Apologist, Bobby Conway, to discuss several topics. One item that came up was our culture's over-emphasis on science as the last word in knowledge. The role of science does seem to be misunderstood these days, with people giving it more credence than it may deserve.

Interestingly, John Cleese of Monty Python fame also weighed in recently on Twitter.
Cleese went on to offer a couple of other tweets, which could be viewed in different ways, although folks like John Prager at AddictingInfo felt Cleese was slamming "anti-science conservatives." I don't know if that was Cleese's intention. However, I do know that he has seemed to make fun of those who would place an over-emphasis on science and scientists, as in this humorous video from his podcast.

Of course, taking that tweet as it stands, Cleese is right. Science is only one method we use to know about the world and it is a fairly limited one at that. That's what I was able to explain in this short clip with Bobby Conway. You can watch it here:


For more detail on these ideas, check out my previous articles here, here, and here.

Sunday, December 27, 2015

Infertility And Christian Families: What's The Right Thing To Do? (video)



Many Christian couples yearn for a child, but fertility issues have thwarted their efforts. Now the medical community has seen an explosion of new reproductive technologies, promising success where there was previously no hope. But with that promise come new questions about how Christians should approach such techniques.

In this video, Lenny explains various techniques and the potential dangers that accompany them. He also offers specific questions Christians should ask before embarking on in-vitro fertilization. This is the first in a multi-part series on reproductive technologies and the Christian.

Tuesday, December 01, 2015

Did God Make Life on Other Planets?



This month, the seventh Star Wars movie is set to debut. Fans are looking forward to seeing not only the action but the fantastic inhabitants of far off worlds, like those found in the now-famous cantina scene from A New Hope. The sheer number of diverse creatures from a host of worlds pictured there plays on our sense of wonder.

It also leads us to think about the real world and our place in it. Are we alone in the universe, or could there be intelligent life on some planet or galaxy far, far away? In our galaxy alone there exist some 200 billion stars,1 many of which have the potential for planetary systems, and ours is just one galaxy out of billions and billions. If God created such a vast universe, wouldn't it be likely that at least a few other planets would have life on them?

The Bible Doesn't Rule Out Life on Other Planets

First, it is quite possible that some kind of life could exist on other planets. There is nothing in the Bible that says God only created life for the earth. He could have created some kind of life elsewhere, too. Even on earth, when we travel to the harshest environments, such as volcanic vents on the ocean floor, we are surprised to find life in such unforgiving places.2 Microbes have even been found surviving in the stratosphere, miles above the earth. So to find some kind of ecosystem on another planet, even one that could not support human life, is not as inconceivable as it may seem.

However, when this question is asked, most of the time people aren't asking about fungus, moss, or microbes. They want to know whether intelligent life—life capable of communication and abstract thought as humans are—is possible on other planets. I think the answer is that such life is highly doubtful.

If advanced life were to exist on other planets, we begin to run into the same theological issues on free will and sin that have so frequently become a part of our conversations on evil and God's existence. In order to be truly free, alien beings must also be capable of sinning. However, if they were to sin, it would place them in a greater predicament.

The Need for a Redeemer Like Us

In the book of Hebrews, the writer explains why Jesus is greater than the angelic beings, who were held in high esteem by first-century Jewish culture. He quotes Psalm 8, then explains that human beings, not the angels, are the beneficiaries of Jesus's salvific work on the cross:
But we see him who for a little while was made lower than the angels, namely Jesus, crowned with glory and honor because of the suffering of death, so that by the grace of God he might taste death for everyone. For it was fitting that he, for whom and by whom all things exist, in bringing many sons to glory, should make the founder of their salvation perfect through suffering. For he who sanctifies and those who are sanctified all have one source. That is why he is not ashamed to call them brothers...

For surely it is not angels that he helps, but he helps the offspring of Abraham. Therefore he had to be made like his brothers in every respect, so that he might become a merciful and faithful high priest in the service of God, to make propitiation for the sins of the people. (Heb. 2:9-11, 16-17, ESV).
Later, the writer explains that Jesus's sacrifice was a singular event: "He has no need, like those high priests, to offer sacrifices daily, first for his own sins and then for those of the people, since he did this once for all when he offered up himself." (Heb. 7:27, ESV).

Therefore, if alien beings were advanced enough to make free choices for themselves, they would either need to be perfect throughout all eternity (which is highly unlikely) or irredeemable. Given the verses above, one can see why fallen angels cannot be redeemed and why God had to create Hell for them.

Thinking through the Presupposition

I've been asked this question many times, and I think it's a helpful one. It shows that human beings tend to think spatially about our world. If our planet takes up such a small place in the vast universe that God created, certainly he would have placed life elsewhere, right? But God is an immaterial being. He doesn't value us on the basis of our mass. He values us because we bear his image. Therefore, I have no problem believing that God could have created the entire universe just to support life on one single planet, so he could have creatures who know and love him. That's true value.

References

1. Rayman, Marc. "How Many Solar Systems Are in Our Galaxy?" NASA. NASA, n.d. Web. 1 Dec. 2015. http://spaceplace.nasa.gov/review/dr-marc-space/#/review/dr-marc-space/solar-systems-in-galaxy.html.
2. "'Alien' Life Forms Discovered" NOAA. National Oceanic and Atmospheric Administration, n.d. Web. 01 Dec. 2015. http://www.noaa.gov/features/monitoring_0209/vents.html.

Tuesday, November 17, 2015

Three Intractable Problems for Atheism


Is science doing real work while people who posit a creator are being intellectually lazy? That's what atheists like Richard Dawkins would have you believe. In an interview with the Dean of the College of Arts and Sciences at the University of Connecticut, Dawkins claimed pointing to an intelligent designer is "a cowardly evasion, it's lazy. What we should be doing as scientists is rolling up our sleeves and saying, right, Darwin solved the big problem. Now let's take that as encouragement to solve the other big problems, like the origin of life and the origin of the cosmos."1

Is Dawkins right? In fact, he has the whole thing backwards. Darwin had the easier time constructing his evolutionary model because he didn't have the details to worry about. Scientists in Darwin's day didn't know about the complex structures of DNA or how the telltale evidence of the Big Bang proves the universe must have come into existence at a specific point in the past. Darwin could gloss over the biological details. However, atheists today don't have that luxury.

1. What Started the Universe?

The first problem is the most fundamental. Why does our universe exist? Why should it be here at all? Usually when bringing up this issue, you will hear people retreat to talk of the Big Bang and quantum vacuums. But both of those answers assume the very thing being asked about.

You cannot have a bang unless there is something to go bang and something else to trigger the bang. If before the Big Bang there is nothing, then nothing cannot bang. Quantum vacuums, which have become the easy excuse when trying to solve this problem, are not nothing either. As I've explained before, these fluctuations have attributes and potentials. The fact that they fluctuate means they are in time, and they have energy states. Just as an idea isn't nothing, to define quantum states as nothing is to misunderstand what nothing is. "Out of nothing, nothing comes" is foundational to all scientific studies. If you give up on that, you're not doing science any more.

So, instead of starting with nothing, maybe we assume the thing that banged is the eternal thing. But if the singularity that banged existed from all eternity, then why didn't it bang earlier than it did? We know the universe is using up its usable energy, so we know it has only been around a limited amount of time. Why? What was it that changed to make the singularity explode into the universe we see? Whatever it was that changed, it certainly wasn't nothing, because if nothing changed, then the universe would never have come to be.

2. What Started Life?

In 2011, John Horgan wrote an article for the Scientific American web site entitled "Pssst! Don't tell the creationists, but scientists don't have a clue how life began." There, Horgan explains how the search to understand the origin of life from nonliving chemicals has given science exactly zero answers.2 The problems are legion: microorganisms emerged remarkably quickly once the earth was capable of supporting any life at all. That really doesn't give incredibly complex chains of molecules like DNA or RNA much time to "stumble" into the right configurations to start replicating, especially given the harsh environment and the capacity for destruction even after a fortuitous assembly.

Just what those things were that first came together is problematic, too. As David Berlinski pointed out, there is a real chicken-and-egg problem, given the need for proteins to assemble DNA or RNA and the need for DNA or RNA to carry the blueprint for those very proteins. Even the RNA World hypotheses Horgan mentions are not immune to monstrous problems, such as the astronomical odds against assembling any kind of self-replicating chain of RNA. That's why there is no functional model at all for how life came to be; there's merely a bunch of speculation containing an incredible number of holes.

3. Where Did Consciousness Come From?

Even if one were to get chemicals to self-replicate, that wouldn't be the end of the difficulty in explaining how beings like us got here. While reproduction is a defining feature of life, life has different levels. A plant is a living being, but it isn't conscious; it cannot think. Human beings are known as thinking creatures. But just how does this consciousness arise from non-conscious material? What model is there for this? Again, there isn't one.

Consciousness is an incredibly tricky thing. A lot of materialists want to redefine consciousness as the electro-chemical reactions happening in the brain, but that makes no sense. Consciousness is something qualitatively different than electrical connections, otherwise we would have to consider that our tablets and smart phones are conscious right now. Consciousness is qualitatively different from physical processes, which means that it cannot be grounded in only the physical. It requires a completely different explanation, one that science cannot offer.

In his article, John Horgan is honest in reporting that science is completely in the dark concerning the beginning of life. Yet, he balks at one workable explanation available to him, the idea of a creator. At the end of the article he writes that creationists' "explanations suffer from the same flaw: What created the divine Creator? And at least scientists are making an honest effort to solve life's mystery instead of blaming it all on God." Of course, this is as old as it is uninformed. Asking what created the creator is like asking which golfer is going to win the Daytona 500. It's a clear category error and is really Horgan's way of ignoring the only other option out there.

These three problems should offer clear signs that there is more to the world than matter in motion. Science is a field that relies upon observation to draw conclusions. In our entire history, no human has ever seen a thing come from nothing, seen life emerge spontaneously from non-life, or seen consciousness emerge from unconscious matter. It just doesn't happen. So why would anyone think all three happened, and happened without the guidance of any intelligent entity? If you're a golfer in Daytona, you can pull your driver from your bag, but it won't do you much good in this competition. Scientists can continue to talk about these problems, but they won't get any closer to the answer.

References

1. Teitelbaum, Jeremy. "The Dean and Richard Dawkins." UConn Today. University of Connecticut, 10 Apr. 2014. Web. 16 Nov. 2015. http://today.uconn.edu/2014/04/the-dean-and-richard-dawkins/
2. Horgan, John. "Pssst! Don't Tell the Creationists, but Scientists Don't Have a Clue How Life Began." Scientific American. Scientific American, a Division of Nature America, Inc., 28 Feb. 2011. Web. 16 Nov. 2015. http://blogs.scientificamerican.com/cross-check/pssst-dont-tell-the-creationists-but-scientists-dont-have-a-clue-how-life-began/.
Image courtesy rosipaw and licensed via the Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0) license.

Saturday, November 14, 2015

Three Facts Showing the Incredible Fine-Tuning of the Universe for Life



William Lane Craig is one of Christendom's most effective spokesmen arguing for God's existence. One of his famed "five arguments for God's existence" is the fine-tuning of the universe for intelligent life. But the term "fine-tuning" really falls short of conveying just how precise the initial conditions and universal constants that allow us to live actually are. They are infinitesimally fine numbers.

In his short book Does God Exist? Craig offers some examples as well as a couple of comparisons to set the stage. He writes:
Before I share a few examples of fine-tuning by way of physics, here are some numbers to help us to appreciate the delicacy of the fine-tuning. The number of seconds in the entire history of the universe is around 10^17 (that's 1 followed by seventeen zeroes: 100,000,000,000,000,000). The number of subatomic particles in the entire known universe is said to be around 10^80 (1 followed by eighty zeroes). These are simply incomprehensible numbers.

Being mindful of those numbers, consider the following: The force of gravity is so finely tuned that an alteration in its value by even one part out of 10^50 would have prevented a life-permitting universe. Similarly, a change in the value of the so-called cosmological constant, which drives the acceleration of the universe's expansion, by as little as one part in 10^120 would have rendered the universe life-prohibiting. Now here's a corker: Roger Penrose of Oxford University has calculated that the odds of the universe's initial low entropy condition's existing by chance is on the order of one chance out of 10^(10^123), a number which is so inconceivable that to call it astronomical would be a wild understatement.

The fine-tuning here is beyond comprehension. Having an accuracy of even one part out of 10^60 is like firing a bullet toward the other side of the observable universe, twenty billion light years away, and nailing a one-inch target!

The examples of fine-tuning are so many and so various that they aren't likely to disappear with the advance of science. Like it or not, fine-tuning is just a fact of life which is scientifically well-established.

But, you might say, if the constants and quantities had had different values, then maybe different forms of life might have evolved! No, that underestimates the truly disastrous consequences of a change in the values of these constants and quantities. When scientists talk about a universe's being life-permitting, they're not talking about just present forms of life. By "life," scientists just mean the property of organisms to take in food, extract energy from it, grow, adapt to their environment, and reproduce. Anything that can fulfill those functions counts as life. And the point is, in order for life so-defined to exist, whatever form it might take, the constants and quantities of the universe have to be unbelievably fine-tuned; otherwise, disaster results. In the absence of fine-tuning, not even matter, not even chemistry, would exist, much less planets where life might evolve.

The question we face, then, is this: What is the best explanation of the cosmic fine-tuning? Many philosophers and scientists think that the reason that the universe is finely tuned for life is because it was designed to be life-permitting by an intelligent Designer.1
1. Craig, William Lane. Does God Exist?. Pine Mountain, GA: Impact 360 Institute, 2014. Kindle Edition. (Kindle Locations 524-546)

Wednesday, October 28, 2015

Must Science Assume Atheism?



I recently listened to an interesting conversation between Alister McGrath and Jim Al-Khalili on the Unbelievable? podcast. Both guests have an extensive science background and had a very thought-provoking exchange. While McGrath is a Christian apologist, Al-Khalili is a theoretical physicist, radio host, and president of the British Humanist Association.

One key point that McGrath mentioned on the program is the assumptions people take from the scientific enterprise. For example, I've spoken with many atheists who say in order to "do good science" one must assume atheism. They then conclude that science is itself an atheistic enterprise and they believe science and faith are then set against one another. But this is actually sloppy thinking, as McGrath pointed out, and it misses a key distinction.

Methodology versus Ontology

McGrath makes the point that science does adopt a certain methodology in its discipline, what is known as methodological naturalism. In other words, science approaches its exploration of the world as if the answers can all be found by uncovering various natural laws and functions. Scientists take this approach because it forces them to dig deeper; asking "why does this thing function in this way?" helps us investigate the natural world more completely.

However, methodological naturalism is just that: a methodology. It's an assumption the scientist makes as he approaches his work. This assumption, just like any other, has limitations and cannot inform us about other questions that may be equally relevant. As an illustration, think of a forensic scientist. A forensic pathologist can study a body and determine the cause of death. Perhaps the victim's heart gave out under extreme stress. What the pathologist cannot do is say whether the person was under stress because of an emotional crisis at home, because the victim was exercising to try to get into shape, or because the victim was under duress by being held at gunpoint. The mental state of the victim is out of reach for science. Even if it is shown that the death was caused by another party, the motive for the crime cannot be shown scientifically. The detectives must employ methods other than naturalism to uncover those things.

This is where most atheists who claim that science and faith are at odds go wrong. They jump from science's assumption of a naturalistic methodology to conclusions about the existence of God himself. That's an unwarranted leap. Existence is a question of ontology, not methodology; that is, it is a question of what is real. As McGrath stated, "By definition, a research method can uncover some things and not others, and this is the method that science uses. But we have to be very careful we don't conflate that into a view of reality." That would be like a shopkeeper believing that since his inventory shows negative two widgets, he is in possession of widgets made out of anti-matter! The method of inventory is not the same as the reality.

Weighing Science Along With Other Forms of Knowledge

To claim that science is atheistic is to confuse methodological naturalism with philosophical naturalism, a mistake thinking people should never make. A more thoughtful approach to questions of truth and reality is to take the findings we understand through scientific discovery and see how they fit with all the other ways we can know things. Like the detective, we must gather our facts about the world from more than just the science. We must weigh all the evidence we have and see if we can draw an inference to the best explanation from it. Shutting out other forms of knowledge doesn't make one more intelligent; it makes one less so.

Tuesday, September 01, 2015

The Inherent Bias in Science


Last week, I wrote about an online conversation I had with an atheist who accused me of making a God-of-the-gaps type argument for the origin of life, even though all the observational evidence across humanity's history demonstrates that life comes from life. He claimed that "Science may well provide an answer to the origin of life in the future," thereby committing the very fallacy he accused me of committing. While not appealing to a God of the gaps, he is certainly appealing to a "science of the gaps."

In our engagement, I asked for some justification for such an unwarranted claim. He leaned on this explanation:
Apocryphally, Edison learned 999 "wrong" ways to make a light bulb in the process of finding 1 "right" way. (Was he ever really wrong?) Obviously, science has proposed wrong explanations many times as it approaches the truth. The more pertinent inquiry would be "Are there any cases where science has settled on an explanation only to be proven wrong by a theistic explanation?" Because the reverse admits of many, easy historical examples.
His reasoning is misleading in many ways. First, there's a significant difference between a single research project, such as Edison's testing of different materials for light bulb filaments, and the assumption that science can answer every question of origins. That's a simple category error. By using Edison as an example and then saying that the entire discipline of science functions in the same way, he has conflated how an experiment works with how a consensus is built.

Not Counting Wrong Conclusions

In fact, accepting new scientific conclusions works in a much different way than Edison's trial-and-error approach. In his book The Structure of Scientific Revolutions, Thomas Kuhn has demonstrated that science isn't the incremental set of discoveries most think it is. When one really studies the history of scientific discovery, one finds the personal beliefs and biases of scientists themselves color their investigations. Kuhn writes "An apparently arbitrary element, compounded of personal and historical accident, is always a formative ingredient of the beliefs espoused by a given scientific community at a given time." 1 He explains in his book how scientific research is "a strenuous and devoted attempt to force nature into the conceptual boxes supplied by professional education."2
Perhaps science does not develop by the accumulation of individual discoveries and inventions. Simultaneously, these same historians confront growing difficulties in distinguishing the "scientific" component of past observation and belief from what their predecessors had readily labeled "error" and "superstition." 3
Exactly. It's easy to claim science always advances forward if you don't count any of the conclusions that we now reject as science, but instead label them error or superstition.

Kuhn explains that in the enterprise of science, scientists are not readily willing to give up on their preconceptions and biases:
Normal science, the activity in which most scientists inevitably spend almost all their time, is predicated on the assumption that the scientific community knows what the world is like. Much of the success of the enterprise derives from the community's willingness to defend that assumption, if necessary at considerable cost.

Scientists Tend Toward Stasis

All of this means that many scientists will accept their current understanding of the scientific landscape, and a kind of stasis will develop. Students learn their scientific assumptions from their professors, who teach what they themselves learned to be true. Kuhn coined the term "paradigm" to describe this common set of assumptions. It isn't until there are so many problems or deviations from what the prevailing paradigm predicts that a flurry of new research ensues, which may create a paradigm shift—a new idea replacing the old one:
Normal science, for example, often suppresses fundamental novelties because they are necessarily subversive of its basic commitments. Nevertheless, so long as those commitments retain an element of the arbitrary, the very nature of normal research ensures that novelty shall not be suppressed for very long.

… When the profession can no longer evade anomalies that subvert the existing tradition of scientific practice—then begin the extraordinary investigations that lead the profession at last to a new set of commitments, a new basis for the practice of science. 4
This is the common pattern in the history of science. It isn't a smooth slope upward of increasing knowledge. It has fits and starts. It has many dead ends. Scientists get things wrong, such as the alchemists trying to turn lead into gold, but atheists don't count those cases. They claim "that wasn't science, it was superstition." Still, the tree of modern chemistry grows from the roots of alchemy.

Don't Assume Science will Always Succeed

Remember, "science" makes no claims; scientists do. As I've said before, "scientists are not immune to bias, deceit, greed or the quest for fame and power any more than the rest of us. In fact, scientists ARE the rest of us!"5 I've illustrated that even when scientists reach a consensus, it doesn't mean their conclusions are correct.

Thus it is just as likely that science will not find the answer to the origin of life. The search for turning material into life may be like the search for turning lead into gold. To hold to a science-of-the-gaps theory offers no real advance in knowledge; it simply shows one's willingness to defend one's paradigm, if necessary at considerable cost.

References

1. Kuhn, Thomas S. The Structure of Scientific Revolutions. Chicago: U of Chicago, 1970. Print. 2.
2. Kuhn, 1970. 5.
3. Kuhn, 1970. 5.
4. Kuhn, 1970. 5-6.
5. Esposito, Lenny. "Should We Place Our Trust in Science?" Come Reason's Apologetics Notes. Come Reason Ministries, 5 Aug. 2013. Web. 01 Sept. 2015. http://apologetics-notes.comereason.org/2013/08/should-we-place-our-trust-in-science.html.

Sunday, August 30, 2015

One Quote for Naturalist Professors



Most naturalists prefer a more subtle approach. Instead of openly insulting Christianity, they patronize it, paying it the kind of compliments one pays to children and the simple-minded. Or they use "as-we-now-know" statements: "As we now know, there is no life after death." These are often introduced by "it-was-once-thought" statements: "It was once thought that moral laws were given to us by a God or gods, but as we now know, mankind gives moral laws to himself." Whenever a teacher makes an "as-we-now-know" statement, ask "Who do you mean by 'we,' and how do we 'know'?" If you aren't yet ready for public debate, ask the questions inwardly. If you do ask them aloud, be respectful. Your goal isn't to show that your teacher is wrong but merely that he isn't taking seriously the legitimate arguments on the other side.

To get this point across, ask your teacher to read the following words of Harvard geneticist Richard Lewontin. Like every naturalist, Lewontin believes that the material world of nature is all there is, but he also confesses to something many of his fellow naturalists would rather deny. The confession is that they all believe in naturalism in spite of the evidence, not because of it. For example, even though the evidence strongly suggests that living things are the result of intelligent design, naturalists are desperate to prove they can't be. Most of us would call the urge to ignore evidence "prejudice." Strangely, Lewontin calls it "taking the side of science"! See for yourself:
We take the side of science in spite of the patent absurdity of some of its constructs, ... in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counterintuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door.
This amazing confession is important because it shows that what naturalists call "science" isn't really science—at least not if science means following the evidence! Naturalists like to think of themselves as brave defenders of clear reasoning against superstition, but here naturalism itself is the superstition. It isn't supported by reasoning but by blind hostility to the evidence of God. Pray that your professors will finally get tired of their games. As Blaise Pascal wrote long ago, "it is good to be tired and wearied by the vain search after the true good, that we may stretch out our arms to the Redeemer."
— J. Budziszewski
 J. Budziszewski. How to Stay Christian in College (Kindle Locations 417-419). Kindle Edition.

Friday, August 28, 2015

Discovering God the Way Sherlock Holmes Would



I recently received a comment on my post about how the origin of life creates a significant problem for the naturalist. I was charged with making a "God of the gaps" argument. While a reading of the actual article displays no such breach in logic, it did begin an exchange with my critic that proves all too familiar: any logical argument that ends by inferring a supernatural actor as the best explanation of the facts at hand is easily dismissed as "God of the gaps," while any assumption that "science will one day figure it out" is supposedly rational.

This is an old canard that I've dealt with before (here and here), but I tried to take a different tack in this engagement. I wanted to place the burden on my objector, so I asked, "Can you tell me the distinction between a valid inference for God and what you would classify as a God of the Gaps argument?" His reply is telling:
I'm not sure there is one. Abduction seems to be little more than a guess until a better explanation comes along. Science may well provide an answer to the origin of life in the future. (Which is something we may conclude through induction, a much stronger epistemology than abduction.)
There's so much wrong with this statement that it's hard to know where to begin. First, let's unpack some terms. There are two ways we can draw conclusions based on reasoning: deductive reasoning and inferential reasoning. In deductive reasoning, the conclusion is inescapable from the facts presented. The oft-used example: given the facts that all men are mortal and Socrates is a man, one is forced to conclude that Socrates is mortal.

Understanding Inferences

While Sherlock Holmes is well known for what Doyle's books called "the science of deduction," he actually didn't deduce things. He used inferential reasoning. An inferential argument takes what is generally understood to be the case and applies it to the greater whole. For example, people have observed that like electrical charges repel each other and opposite charges attract. Thus, when English physicist Joseph John Thomson saw that cathode rays would bend in certain ways depending on the charged plates and magnets placed near them, he inferred that the cathode ray was made up of negatively charged particles. The electron was discovered.1

The argument that Thomson used is known as abduction, which simply means reasoning to the best explanation. We take the facts that we know and try to get at the truth. Usually, that means applying a rule we already understand, such as the laws of magnetism, and seeing if it does a good job of explaining the specific circumstance we see. Your doctor does this all the time, such as when he prescribes penicillin for your bacterial infection. Prescribing penicillin isn't "little more than a guess" but is based on what is most likely, though not necessarily, the case.

Abductive Arguments Drive Science

Because deductive arguments are few and far between in the real world, most of science is built on inference to the best explanation. Ironically, my critic got induction and abduction kind of backwards; induction in this sense is actually the weaker of the two. The Stanford Encyclopedia of Philosophy clarifies the difference:
You may have observed many gray elephants and no non-gray ones, and infer from this that all elephants are gray, because that would provide the best explanation for why you have observed so many gray elephants and no non-gray ones. This would be an instance of an abductive inference. It suggests that the best way to distinguish between induction and abduction is this: both are ampliative, meaning that the conclusion goes beyond what is (logically) contained in the premises (which is why they are non-necessary inferences), but in abduction there is an implicit or explicit appeal to explanatory considerations, whereas in induction there is not; in induction, there is only an appeal to observed frequencies or statistics. 2

Closed to the Best Explanations

I explain all this to make sure you understand that the arguments like the one inferring God from the origin of life are not merely guesses or "God of the gaps" claims. They are just like those abduction arguments that are the cornerstone of scientific and medical research. Human beings have observed life throughout our history. Never once in all of that time observing life have we ever seen life come from non-life. In fact, Louis Pasteur's science shows life doesn't spontaneously arise from non-living material. Therefore, it is reasonable to conclude that all life comes from other living beings and therefore the first life came from a living being. That's abduction.

Notice that when asked for a distinction as to what would make a valid inference for God's existence, my critic replied "I'm not sure there is one." That answer is as telling as the rest of the conversation. He has rejected any argument that leads to the conclusion that God exists at the outset. That's his prerogative, but doing so is anti-logic, anti-science, and inconsistent.

References

1. Douven, Igor. "Abduction." Stanford Encyclopedia of Philosophy. Stanford University, 09 Mar. 2011. Web. 28 Aug. 2015. http://plato.stanford.edu/entries/abduction/#UbiAbd.
2. Douven, 2011.

Monday, June 15, 2015

What Should We Think About Genetic Engineering?


What does it mean to give your children the best chance at success? Would it include changing their DNA so they would never get sick? Could it include genetically changing them to make them stronger, smarter, and faster than others? Is that even moral?

These questions used to be relegated to the realm of science fiction, but as genetic technologies advance, they have become more and more real. There are already instances of people using genetic screening during in vitro fertilization.1 While this process is currently used to only identify the correct number of chromosomes in an embryo, the Guardian article states, "If doctors had a readout of an embryo's whole genome, they could judge the chances of the child developing certain diseases, such as cancer, heart disease or Alzheimer's."2

While genetic screening itself opens a host of moral questions, even more provocative is the concept of genetic engineering: changing the gene itself either to rid the embryo of a trait or to enhance natural traits such as strength or intelligence. This morning I read two articles from Christians (J.W. Wartick and ElijiahT) who outlined the issue and offered their views. They've done a good job laying out most of the arguments, both pro and con, that you'll find in the debate, so I won't rehash them here. Both are worthy of your time. But there is an aspect that neither touched on which I think is fundamental to the discussion.

Genetic Therapy and Genetic Enhancement

First, I do wish to distinguish between the two goals of genetic engineering. There is a distinction between genetic therapy, which is basically correcting a genetic defect, such as the sickle-cell disease example Wartick offers, and genetic enhancement, which takes a function that would already fall within the normal range and improves it.3 Yet even here the standard isn't so easily discernible. For example, the deaf community even today has significant disagreement over whether deaf children born to deaf parents should receive cochlear implants.4 In fact, one lesbian couple sought out a sperm donor who had five generations of deafness in his family to ensure their IVF child would be deaf.5

I have some problems with the couple's approach, but it does illustrate that distinguishing disability from difference isn't always so clear. In most cases, however, I think a case can be made that genetic therapies fall within a Christian construct. God has given us the ability to learn about His creation and to try to alleviate some of the suffering brought on by the Fall. Current treatments for maladies are invasive (they require surgery), artificial (stents, mechanical devices, etc.), and even happen in utero, as with fetal surgeries. Delivering treatments at the genetic level seems to me to be only a difference in degree, not in kind.

We are More than Our Genes

I have a different concern with genetic enhancements however. In his article, ElijiahT quoted Kurt Baier writing, "The best course of action is… the course of action which is supported by the best reasons. And the best reasons may require us to abandon the aim we actually have set our heart on."6 This is a fair standard and one that I think I can use to expand the debate.

The piece missing in both articles above is that a human being is not simply a product of his or her genetics. Human beings are also living souls, and God is extremely concerned with the development of the soul as well as the ability of the body. Theologians have understood that while eliminating suffering is important and Christians should help those whom they can, God's providential ordering of things is also important. That's why the Psalmist writes, "For you formed my inward parts; you knitted me together in my mother's womb. I praise you, for I am fearfully and wonderfully made."7

Part of our fearful and wonderful makeup is our specific limitations in certain areas. These shape us into who we are as much as our abilities to excel do. While I personally didn't struggle academically, I wasn't a natural athlete growing up. I was very small as a teenager and didn't have much experience with a ball. However, I found sports that stressed endurance, such as cross country and wrestling, and I was able to do very well in both. Striving there taught me perseverance and discipline that I may not have otherwise experienced. If my strength and height had been genetically enhanced in utero, I wouldn't have had the soul-shaping experiences that helped form my spiritual makeup and attitude.

In his post, Wartick opines, "It is unclear, though, whether genetic enhancement would undermine the good of accomplishment and human achievement. Indeed, one could argue that genetic enhancement, in fact, bolsters human achievement by widening the scope of possibility for humans."8 Physically, that may be true, but I am not sure it would be true spiritually. While our culture overburdens the concept of diversity, there are things one can learn from those who have had to overcome varied obstacles. Sometimes those experiences inform the rest of us in new ways. We can learn from Helen Keller.

No Genetic Lottery

So are we to leave our children to what has been deemed the "genetic lottery"? And, to extend the argument, is seeking a child's excellence through genetic enhancement techniques any different from some of the advantages certain children currently enjoy? Outlining this aspect of the pro-enhancement position, ElijiahT writes:
Parents make choices regarding the life and welfare of their children all the time, yet no one claims the autonomy of the child is being violated. Expectant mothers will regularly take vitamins (to enhance the prenatal environment), read or play music to the developing child and alter her diet, all in an attempt to give the child the best environment possible. After birth, parents deliberately choose the child's nutrition, education, entertainment and health. In fact, to neglect these things is often seen as inappropriate parenting.9
I agree. Yet the difference is qualitative; it's one of nature versus nurture. One need look no further than the recent Lance Armstrong scandal. No one would have batted an eye if Armstrong had been reported to be taking the best vitamins, using the best trainers, and following the best exercise and diet regimen. What provoked the outrage was the artificial enhancement of what should be a natural (i.e., "God-given") function of his body. If we are created fearfully and wonderfully by a holy God, it simply may be that our limitations are there to build our character and our spirit.

Escaping the Playing God Dodge

ElijiahT counters with the argument that "playing God" with another's life may be a fallback excuse: "The actions associated with 'playing God' are usually new technologies that alter something about the human condition. In this case, genetic engineering is seen as playing God, but couldn't the same argument be used as a 'catch-all' for anything that makes us uncomfortable?"10

Of course he's right. The objection has been used as a conversation-stopper many times. But that doesn't mean it is always fallacious. A doctor who indiscriminately euthanizes his patients is playing God; he has taken upon himself the mantle of choosing which people are worthy of life—the province of God alone. Similarly, if God is interested in shaping us into mature souls, he may limit certain physical attributes that we would otherwise wish for ourselves or our children. These differences are not defects caused by the Fall, but truly differences that God allows for our good. One shouldn't presume to modify them simply because we believe they are not as worthy as other characteristics.

There's an interesting scene in the 1999 hit movie The Matrix, where Agent Smith tells Morpheus that human beings don't thrive in paradise. The character explains:
Did you know that the first Matrix was designed to be a perfect human world? Where none suffered, where everyone would be happy? It was a disaster. No one would accept the program. Entire crops were lost. Some believed we lacked the programming language to describe your perfect world. But I believe that, as a species, human beings define their reality through suffering and misery. The perfect world was a dream that your primitive cerebrum kept trying to wake up from. Which is why the Matrix was redesigned to this: the peak of your civilization.11
That's an oversimplification, but it does bring up a point. Struggle and hardship may be uncomfortable, but they are not always to be avoided. They can and often do serve to benefit believers. Holding to a "genetic lottery" assumes at the very least a deistic worldview. While we may rightly mitigate the defects brought on by sin, including original sin, we shouldn't be so bold as to assume we can improve physical characteristics that are not defective. The Nazis sought to do that with race, but race isn't a defect. Neither are our lesser or greater physical abilities.

Without a discussion of the soul-shaping nature of bodily limitations, any treatment of the questions raised by genetic modification is incomplete.

References

1. Sample, Ian. "IVF Baby Born Using Revolutionary Genetic-screening Process." The Guardian. Guardian News and Media Limited, 7 July 2013. Web. 15 June 2015. http://www.theguardian.com/science/2013/jul/07/ivf-baby-born-genetic-screening.
2. Sample, 2013.
3. There is also a distinction between treating someone genetically where the modified genes are localized versus recoding the person's entire DNA, as would happen at the first stages of life. Biologists differentiate the two by referring to the first as somatic genetic treatments, where the gene therapy would not be passed on to succeeding generations. Germ-line genetic treatments, however, are passed from parent to child.
4. Ringo, Allegra. "Understanding Deafness: Not Everyone Wants to Be 'Fixed'" The Atlantic. Atlantic Media Company, 09 Aug. 2013. Web. 15 June 2015. http://www.theatlantic.com/health/archive/2013/08/understanding-deafness-not-everyone-wants-to-be-fixed/278527/.
5. Spriggs, M. "Lesbian Couple Create a Child Who Is Deaf like Them." Journal of Medical Ethics 28.5 (2002): 283. Web. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1733642/pdf/v028p00283.pdf.
6. ElijiahT. "Why You Should Genetically Engineer Your Children." ElijiahT. ElijiahT, 07 Dec. 2014. Web. 15 June 2015. https://elijiaht.wordpress.com/2014/12/08/genetic-engineering-and-human-children/.
7. Psalm 139:13-14, ESV.
8. Wartick, J. W. "Genetics and Bioethics: Enhancement or Therapy?" Always Have a Reason. J.W. Wartick, 15 June 2015. Web. 15 June 2015. http://jwwartick.com/2013/02/25/enhance-therapy/.
9. ElijiahT, 2014.
10. ElijiahT, 2014.
11. The Matrix. Written and directed by Andy Wachowski and Lana Wachowski. 1999. Film.

Monday, June 01, 2015

Replying to Science-of-the-Gaps Arguments



I had a commenter named Barry respond to my blog post "Why the Darwinist Version of Life's Origin is Anti-Science." First, he asked whether it is appropriate to couple the origin of life with neo-Darwinian evolution (it is), and then he made the following statements:
You can't say "Well, we don't know how life emerged so God musta done it" simply because scientists don't know (yet). … That we don't know NOW how life began doesn't give anyone intellectual license to say that life has a supernatural cause due to a creative moment by a whimsical Omniscient Being. Relax. So we don't know right now what caused life to emerge. That's just the way it is. We'll understand some day. Maybe not in our lifetimes but it's likely to happen in the next fifty years or so.

In the meantime, God-of-the-gaps arguments aren't arguments from the point of evidence. They're arguments from the point of faith and belief. That's not a persuasive rhetorical tactic for the plain reason that reality is preferable to believing in things simply because you want these things to be true.
You will notice that Barry admits a couple of things. First, he holds that arguments that are not made from the point of evidence are not strong; he refers to these as "intellectually feeble." He also admits that scientists don't know how life began. In fact, they have absolutely no idea: no working models, nor even any controlled lab experiments that show how one can get even a self-replicating RNA molecule from ribozyme components. I also brought this up in my response, pointing him to the enormous odds Dr. David Berlinski has offered.

Barry’s response was telling. He replied:
Odds, shmods. It happened. Life DID emerge when it did and that's that. The only thing we don't understand is HOW life emerged—and there is zip evidence that it was due to some supernatural intervention. Evidence is tying a palm print on the rifle to Oswald. Evidence is collecting DNA from a crime scene and connecting it to a suspect. You? You got nuthin' to link to.
Can you see how this paragraph directly contradicts his previously stated view that arguments without evidence are intellectually feeble? Odds, schmods? It's clear that Barry doesn't care what the evidence (e.g., the mathematics) shows about the possibility of life emerging by chance. He simply wants it to be true. But that's exactly what he decried in the previous exchange! He's not relying on a God-of-the-gaps argument, but on a science-of-the-gaps one. He rejects the actual scientific data indicating that natural laws and chemistry alone could never assemble the first living organism, simply because he doesn't want it to be true!

You'll also notice that Barry claimed I had "nuthin' to link to." In fact, I did link to a couple of articles, one of them containing the Berlinski quote on the odds mentioned above. One of the main tasks of the scientific method is to either validate or falsify a hypothesis. You see, scientists understand that a negative result is still a result. We have data on what is required for life to exist, and it shows more and more that spontaneous self-assembly is not a plausible option. Asserting "we'll understand some day" is a statement of faith that directly contradicts the mounting evidence against the hypothesis.

To trust in science alone is not following the evidence wherever it leads. It is seeking to validate a preconception at any cost, something rational individuals should shun.
Original image courtesy Dale Schoonover, Kim Schoonover [CC BY 3.0]

Saturday, May 30, 2015

The Odds Against a Natural Account of Life's Origin



One of the most fundamental questions human beings have asked is "Where did we come from?" The Christian will respond that we are creations of God. Modern atheism, though, seeks to erase God from the picture by proposing that we came about as the result of a very lucky combination of materials and the laws of science, where short strands of polynucleotides—the stuff that makes up our DNA and RNA molecules—would stick together to form longer chains. The story goes that eventually an RNA molecule would form that could self-replicate, and life would begin.

Just how much luck was involved? Dr. David Berlinski discusses it here:
Was nature lucky? It depends on the payoff and the odds. The payoff is clear: an ancestral form of RNA capable of replication. Without that payoff, there is no life, and obviously, at some point, the payoff paid well. The question is the odds.

For the moment, no one knows precisely how to compute those odds, if only because within the laboratory, no one has conducted an experiment leading to a self-replicating ribozyme. But the minimum length or "sequence" that is needed for a contemporary ribozyme to undertake what the distinguished geochemist Gustaf Arrhenius calls "demonstrated ligase activity" is known. It is roughly 100 nucleotides.

Whereupon, just as one might expect, things blow up very quickly. As Arrhenius notes, there are 4^100, or roughly 10^60, nucleotide sequences that are 100 nucleotides in length. This is an unfathomably large number. It exceeds the number of atoms in the universe, as well as the age of the universe in seconds. If the odds in favor of self-replication are 1 in 10^60, no betting man would take them, no matter how attractive the payoff, and neither presumably would nature.1
Following that description, Berlinski notes that Arrhenius seeks to escape his own dilemma by proposing that such long self-replicating sequences may not have been as rare in the primeval earth as they are today. He then answers:
Why should self-replicating RNA molecules have been common 3.6 billion years ago when they are impossible to discern under laboratory conditions today? No one, for that matter, has ever seen a ribozyme capable of any form of catalytic action that is not very specific in its sequence and thus unlike even closely related sequences. No one has ever seen a ribozyme able to undertake chemical action without a suite of enzymes in attendance. No one has ever seen anything like it.

The odds, then, are daunting; and when considered realistically, they are even worse than this already alarming account might suggest. The discovery of a single molecule with the power to initiate replication would hardly be sufficient to establish replication. What template would it replicate against? We need, in other words, at least two, causing the odds of their joint discovery to increase from 1 in 10^60 to 1 in 10^120. Those two sequences would have been needed in roughly the same place. And at the same time. And organized in such a way as to favor base pairing. And somehow held in place. And buffered against competing reactions. And productive enough so that their duplicates would not at once vanish in the soundless sea.

In contemplating the discovery by chance of two RNA sequences a mere forty nucleotides in length, Joyce and Orgel concluded that the requisite "library" would require 10^48 possible sequences. Given the weight of RNA, they observed gloomily, the relevant sample space would exceed the mass of the Earth. And this is the same Leslie Orgel, it will be remembered, who observed that "it was almost certain that there once was an RNA world."2
This section of Berlinski's article deals with just one step of a multi-step process that would fashion the first life. Other pieces include the advance from a self-replicating RNA to a fully working cell producing the appropriate amino acids and nucleic acids to function, as well as assembling the right nucleotides to construct the polynucleotides to begin with. And we haven't even factored in the problem of chirality. However, looking at Berlinski's numbers alone, it seems clear that a reasonable person would not assume life came about by dumb luck.
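For readers who want to check the scale of these numbers for themselves, here is a short Python sketch. Everything it computes simply restates the quantities quoted above; the figures I use for the age of the universe in seconds, the mass of the Earth, and the average mass of an RNA nucleotide are my own rough assumptions, added only for perspective.

```python
from math import log10

# Possible RNA sequences 100 nucleotides long (four bases per position)
sequences_100 = 4 ** 100
print(f"4^100 is about 10^{log10(sequences_100):.1f}")      # ~10^60.2

# Age of the universe in seconds (~13.8 billion years) -- rough assumption
age_universe_seconds = 4.4e17
print(sequences_100 > age_universe_seconds)                  # True: vastly exceeds it

# Requiring two independent sequences squares the size of the search space
joint_space = sequences_100 ** 2
print(f"(4^100)^2 is about 10^{log10(joint_space):.1f}")     # ~10^120

# Joyce and Orgel's "library" of 10^48 forty-nucleotide RNAs, assuming an
# average nucleotide mass of ~330 g/mol -- rough assumption for scale
avogadro = 6.022e23
grams_per_40mer = 40 * 330 / avogadro
library_mass_kg = 1e48 * grams_per_40mer / 1000
earth_mass_kg = 5.97e24                                      # rough assumption
print(f"library ~ {library_mass_kg:.1e} kg vs Earth ~ {earth_mass_kg:.1e} kg")
```

Run as written, the sketch reproduces the headline figures in the quotes: roughly 10^60 hundred-nucleotide sequences, roughly 10^120 once two independent sequences are required, and a forty-mer "library" that would indeed outweigh the Earth.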

References

1. Berlinski, David. "On the Origin of Life." The Nature of Nature: Examining the Role of Naturalism in Science. By Bruce L. Gordon and William A. Dembski. Wilmington: ISI, 2011. 286. Print.
2. Berlinski, 2011. 286-287.
Image courtesy Toni Lozano [CC BY 2.0], via Wikimedia Commons