Absolutely Nothing Is Absolute

A discussion of radical relativism, the belief that there are no absolute facts and that worldviews shape reality more than reality shapes worldviews.

Tuesday, March 07, 2006

1+1=?

As a kid, I always had a sense that there was something wrong with the way adults talked about the world. They often said things that seemed to me too fantastic to be true. The sun was 93 million miles away and its surface was over 10,000 degrees! Well, to a kid, those numbers just sounded too outrageous to be true. But what bothered me more was that anyone would claim to know. I wanted to see the thermometer they used to take the sun's temperature, and to imagine the sunburn whoever went up there to take it must have gotten.

I was not raised in a religious household (in fact, my parents both seemed to be in revolt against their own oppressively religious upbringings), so I was not taught the concept of blind faith. My father, in particular, was always challenging the conventional wisdom about things and encouraging me to do the same. It made me precocious enough to alienate just about everyone.

There was one summer in particular that was pretty pivotal, though, in launching my intellectual pursuit of all things relative. A cousin was visiting who, while religious, did not spend a lot of time talking about religion. I do not remember how the topic came up, but I started asking him about what he believed and why he believed it. It was obvious in the conversation that he believed what he did because he was told to, and that was good enough for him. He understood, completely, that it was possible to believe other things, and that other people did, and he did not think those other people stupid, or damned, or even, necessarily, wrong. For him, his religious beliefs were where he was at home, philosophically. There was no reason for him to consider anything else because he was comfortable where he was.

This was also one of my first summers where adolescence was really setting in and my first hints of being gay were manifesting themselves in my attempt to keep my attractive and half-naked cousin awake through this conversation so I could keep looking at him. But I digress.

This was, in fact, when I decided that I did not believe in God. Nobody had told me what to believe about this. My mother had always said that she believed in God, though she never talked about it or told me why I should, or even that I should. So it just sort of hung there for me. I would have said I believed in God, or that I probably did, until that summer when, after my conversation with my cousin and a few other observations (including a late-night talk-show guest, whose name and profession I no longer recall, who declared with considerable authority that masturbation was a mortal sin, which at that point in my life was potentially very disturbing), I decided that there was no point in believing in such a thing as a god. There was no evidence for a god. I had not heard a single argument that I felt justified such a belief. I had not ruled out the possibility that I might change my mind, but it seemed to me that if it were so intuitively obvious that God exists (as it seemed to be for everyone else), then everyone would agree on all of the details about him. Clearly they did not, so, even if I were to believe in a god, which one would I believe in?

Sometime later, while discussing my newfound atheism with a friend who was very devoutly Christian, I explained to him why I had decided that I could not believe in God simply because others did. There was no evidence for such a belief, I repeated, and I asked why anyone would believe in something so fantastic without any concrete proof.

This friend, who as I recall was named Chris, met my challenge with another. Why, he asked, did I believe in gravity? At first, I thought the question silly.

Because there is evidence for it, of course, I replied. Things fall down. The planets stay in their orbits. The sun continues to fuse hydrogen into helium because of the pressures caused by its gravitational mass.

Why do you believe that's gravity, Chris continued, and not simply God, dictating that those things behave the way they do? Wouldn't God, if he were designing an elegant universe, design it in consistent and harmonious ways that we could predict and count on? That we can describe a mathematical model that allows us to know how the universe behaves says nothing about the cause of that behavior.

I didn't realize, until I was in college, just how important an observation this was. But it stuck with me, anyway. I was not satisfied, of course, that Chris had rebutted my atheism. That believing in gravity is, hypothetically, no more logical than believing in God is not, to me, proof of the existence of God. But it certainly did cause me to question what all I really do believe. Are the only two choices for understanding the universe to believe in absolute physical laws on the one hand or divine providence on the other?

As I studied philosophy and physics in college, I started to frame, more specifically, the sense that I was having about our view of the world. René Descartes, trying to reduce all knowledge to its most basic elements, asked how we know that we are not merely disembodied brains, or held in suspended animation in some Matrix-like world, with our brains being fed whatever sensory inputs some malevolent entity wants us to have. With the declaration Cogito ergo sum, he decided that, since he had a consciousness, he at least must surely exist, and then, in an amazing series of further leaps of logic, concluded that God and the world all exist exactly as we have always known them to. David Hume, though believing in an objective material world that obeyed natural laws, admitted that there was no real logical reason to do so and posed several logical problems with his own empiricist beliefs.

The problem with any belief, even those for which we can find supporting evidence, is that it requires a chain of reasoning that winds up being circular. This problem, called the Problem of Induction, goes something like this: Gathering evidence about the physical world requires that we interact with it and observe it. This requires reliable senses that actually observe the real world. We believe our senses are reliable because the information that we collect from them seems reliable, in that the world we sense is predictable and generally conforms to the expectations we have developed of it from this sensory data. Since our senses have been reliable in the past, we have evidence that they are reliable generally. Now, the data that we gather about the world from our generally reliable senses seems to suggest that the world behaves in predictable ways. We can abstract from these observations models that reflect physical laws such as gravity, electromagnetism and so on. Since these models, which we call theories, seem, in the past, to have been reliable predictors of the behavior of the world, we have evidence that they will be reliable in the future.

This common theme, that patterns we have experienced in the past are reliable predictors of the future, is called induction: generalizing from specific observations (things fall down) to a general law (all things will fall down). However reasonable this may seem, there is actually no logical justification for it. We know, from our own experience, that lucky streaks come to an end, that a stock that has climbed in value for 12 consecutive months can crash without warning, or that a previously unbeaten sports team can be upset. The only difference between our belief in a bullish stock and our belief in the reliability of gravity is simply a matter of degree. We have, to date, no examples of gravity failing. There is, though, no logical reason to believe that it never will. Only our common sense, whatever that is, tells us that "as far back as humanity can remember" is long enough to believe in gravity's eternal nature.
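To make the point concrete (this example is mine, not one I heard at the time): mathematicians have a classic case of a pattern that holds for a long run and then quietly fails. Euler's polynomial n^2 + n + 41 produces a prime number for every n from 0 through 39, and then breaks at n = 40. A few lines of Python, offered only as an illustrative sketch, find the first counterexample:

```python
# Euler's polynomial n^2 + n + 41 yields a prime for every n from 0 through
# 39, then fails at n = 40 (where it equals 41^2). A long streak, then a
# counterexample -- which is all induction can ever promise us.
def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k**0.5) + 1))

n = 0
while is_prime(n**2 + n + 41):
    n += 1

print(n, n**2 + n + 41)  # 40 1681, and 1681 == 41 * 41
```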

Believers in Biblical Creation like to point out that evolution "is only a theory." They are correct, of course, but then the same is true of the laws of gravitation. They are theories because they cannot be logically proven, but only shown to be consistent, so far. (And only so far as they go, since, for example, we now know that the laws of gravitation, as written by Sir Isaac Newton, are not exactly correct in all cases. Einstein's theory of general relativity is now believed to be a more complete description of the behavior of the force we call gravity, though scientists still work towards an even more general theory that will, they hope, unify gravity with the other perceived forces of the universe.) Unlike mathematical truths, like the Pythagorean Theorem, physical laws cannot be proven but only inferred. They are attempts to explain specific phenomena in terms of general laws, laws which can be shown to be incorrect by a single counterexample but can never be proven correct, even if they have never, in the course of history, been shown to be wrong.

So, there's no logical reason to believe anything we think we know about the world we live in. There is no reason to believe that gravity will not stop tomorrow at 1:32 P.M. There is no reason to believe that the strong nuclear force that holds the nuclei of our atoms together will not turn out to be a fortunate fluke that, because of whatever larger realm our universe might happen to be a part of, just lasted for 13.7 billion years but will come to an end, along with all matter everywhere, sometime next year.

When I came to the realization that there was no logical basis for scientific knowledge, I abandoned my interest in physics and took up computers.

Seriously. Science was messy. Observation involved imprecise measurements, statistical models, and conflicting results from the same experiment performed multiple times. And, of course, science required induction. Nothing you learned was certain, so what was the point?

Computers, though, were different. You could write a computer program that was certain. If you wrote a program to add 1 and 1, and your program was correct, you knew you would get 2. You knew this, not because you tried it several times and got the right answer, but because you could prove it. The program was its own proof. It was logical.
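Here is a minimal sketch of what I mean, written in Python purely for convenience (the language is beside the point): the program is small enough that its correctness follows from the definition of addition itself, not from any number of test runs.

```python
# The whole program: small enough that its correctness follows from the
# semantics of integer addition, not from how many times it has been run.
def one_plus_one():
    return 1 + 1

# Running it merely confirms what the reasoning already guarantees.
assert one_plus_one() == 2
```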

Back in the late 70s and early 80s, when computers were much simpler than they are today, it was still possible, and even expected, to write bug-free code. Algorithms were simple because the computers were not powerful enough to process the kinds of things computers do today. You could print out all of the software that could fit on one computer onto several hundred pages of greenbar paper and sit down with a handful of programmers who would review it, the same way a mathematician would submit a mathematical proof for peer review. If several smart guys all believed that your code was "clean," it probably was. I liked that. It was logical, predictable and certain.

That being said, it was still true that software teams tested their code. Since proving that software was correct was hard, it often fell to a group of people who would run programs through their paces to make sure they did what they were supposed to do. In a sense, these test teams were scientists, testing a theory: the theory that the program was valid. They used a set of random examples of inputs (because the number of possible inputs was infinite, you couldn't simply test them all) and verified the outputs. If any of them turned out to be wrong, you knew you had a bug. But no matter how many times you ran them with correct results, you could never be certain that the program was perfect. Even what I thought was so logical and certain turned out, in the end, not to be.
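In modern terms, that kind of testing looks something like the sketch below; the function names and the particular hidden bug are invented for illustration. The loop can falsify the "theory" that the program is correct, but no amount of passing runs can verify it.

```python
import random

# A rough sketch of testing-as-science: treat the program as a theory and try
# to falsify it. "buggy_add" and its one bad input are hypothetical; random
# sampling can reveal a bug but can never prove its absence.
def buggy_add(a, b):
    # Hypothetical defect: wrong answer on exactly one obscure input.
    if (a, b) == (1048576, 42):
        return 0
    return a + b

def test_add(trials=10_000):
    for _ in range(trials):
        a = random.randint(-10**9, 10**9)
        b = random.randint(-10**9, 10**9)
        if buggy_add(a, b) != a + b:
            return f"counterexample found: ({a}, {b})"
    return "no counterexample found -- which proves nothing"

print(test_add())
```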

It was a frustrating realization, but I considered it a minor inconvenience in my thinking. We tested software not because it was impossible to prove it correct, but because it was impractical. Submitting complex software to committees of peer programmers for review would be horrendously expensive and, besides, programmers, as a rule, would rather spend their time writing their own code than reviewing someone else's. So, even though our application of computer technology was less than perfect, the technology itself still had the potential to be more pure than empirical scientific methods of understanding the universe.

And, of course, math was perfect. You could, indeed, know things about math. The Pythagorean Theorem, that the square of the length of the hypotenuse of a right triangle is equal to the sum of the squares of the lengths of the other two sides, is easy to prove. Even a grade-school student can be made to understand why, with certainty, the theorem is true. I might have been interested in math as a career, had there been much call, in the computer age, for professional mathematicians.
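One classic way to see it, sketched here in a few lines of algebra: arrange four copies of the right triangle (legs a and b, hypotenuse c) inside a square of side a + b, leaving a tilted inner square of side c, and count the area two ways.

```latex
% Rearrangement proof: four right triangles (legs a, b, hypotenuse c)
% tile a square of side a+b, leaving an inner square of side c.
\[
(a+b)^2 \;=\; 4\cdot\tfrac{1}{2}ab + c^2
\]
\[
a^2 + 2ab + b^2 \;=\; 2ab + c^2
\quad\Longrightarrow\quad
a^2 + b^2 \;=\; c^2
\]
```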

Somewhat later in my career, in 1993, I read a remarkable thing on the internet. An obscure mathematical puzzle called Fermat's Last Theorem had apparently been solved. The theorem says that the equation a^n + b^n = c^n (in a notation where x^y means x raised to the power of y) has no solutions in positive whole numbers a, b and c for any value of n greater than 2. This theorem is of no practical significance to anyone. There are no applications of it that I have ever heard of, but what made it interesting was that a 17th-century mathematician, Pierre de Fermat, claimed to have proven it, yet no one since had been able to, until, in 1993, a British mathematician, Andrew Wiles, announced his own voluminous proof. I remember reading several attempts by mathematicians on the internet to describe the incredibly complex methods Wiles used. Unfortunately, later that year, Wiles had to withdraw the proof when a serious gap was discovered in it. After having spent seven years working out what he believed to be a certain proof of the conjecture, it turned out Wiles's argument was incomplete. (Fortunately, he and a former student of his, Richard Taylor, were able to spend another year repairing the problem and finally publish what is now accepted as a valid proof.)
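For contrast with what a proof actually has to accomplish, here is a toy sketch (my own, not anything Wiles did) that searches for counterexamples to the equation over a small range of whole numbers. It finds none, of course, and however far you push the bounds, that kind of checking is only evidence, never proof; that gap is exactly what took Wiles hundreds of pages to cross.

```python
from itertools import product

# A toy search for counterexamples to a^n + b^n = c^n over a small range.
# The bounds are arbitrary; a search that finds nothing is evidence, not
# proof -- exactly the gap a real proof has to cross.
def search_counterexamples(limit=50, max_n=6):
    for n in range(3, max_n + 1):
        for a, b in product(range(1, limit + 1), repeat=2):
            target = a**n + b**n
            c = round(target ** (1.0 / n))
            # Check neighbors of the rounded root to guard against
            # floating-point error.
            for candidate in (c - 1, c, c + 1):
                if candidate > 0 and candidate**n == target:
                    return (a, b, candidate, n)
    return None

print(search_counterexamples())  # None: no counterexample in this range
```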

The point is that, even in mathematics, a field known for its purity and logical certainty, there is still an uncertain element, namely, the brains of the mathematicians who develop it. Just as certainty about the physical world requires a belief that, because things have reliably been a certain way in the past, they will be that way in the future, certainty about the logical world requires a belief that the past reliability of logic, and of those who practice it, is proof that it is valid. We, in fact, induce that our memory will accurately reflect our past experiences. Even our own thoughts, and our thoughts about our thoughts, are suspect.

So where does this leave us? If I cannot believe in God, or in the real world, or in 1+1=2, or even in my own thoughts, what is the point? Why exist at all (if, in fact, we do)?

For now, all I can say is, just because. One of my favorite Confucius quotes is, "To know is to know that you know nothing." To let go of certainty is to let go of arrogance. To stop believing that the world is or ought to be a certain way is to open oneself to the possibilities of what it can be through contribution to and relationship with that world and its other inhabitants.

Only when you let go of the biases that the world has raised you to adopt will you really be able to develop certainty about anything, and that is the great paradox of human existence. And it is a paradox I will discuss in more detail on another occasion.

In the meantime, I challenge you to stop claiming to know. Don't do anything stupid. You can still be pretty sure of things you don't absolutely know. I still recommend living a safe, healthful and generous life. But do it questioningly and without unnecessary judgement. The world may just seem a little more open and the possibilities greater.
