Many people think of the "scientific method" as a matter of white coats, laboratories, microscopes, test tubes, atom smashers, high-powered computers, and similar arcane apparatus far beyond the ken of ordinary mortals. If they are a little more aware of just what makes science science, they think of the "scientific method" as a matter of hypothesis, experiment, logic, theory, and law.
There is less truth to the first of these images than to the second because different sciences use different apparatus, and it is entirely possible to do science with a pencil and a pad of paper, or flat on one's belly staring into the depths of a backyard lawn. The second image comes closer to being true for all sciences and all scientists, for it is indeed what comprises the "standard picture" or "standard model" of the scientific method.
However, this standard picture of the scientific method is also a myth. It has more to do with the way science is presented in scientists' reports of their work than with the way the scientists actually do their work. In practice, scientists are often less orderly, less logical, and more prone to very human conflicts of personality than most people suspect. Some even get their best ideas in dreams and brainstorms, just like artists.
The myth remains because it helps to organize science. It provides labels and a framework for what a scientist does; it may thus be especially valuable to student scientists who are still learning the ropes. In addition, it embodies certain important ideals of scientific thought. It is these ideals that make the scientific approach the most powerful and reliable guide to truth about the world that human beings have yet devised.
The soul of science is a very simple idea: Check it out. Scholars used to think that all they had to do to do their duty by the truth was to say "According to..." some ancient authority such as Aristotle or holy text such as the Bible. If someone with a suitably illustrious reputation had once said something was so, it was so. Arguing with authority could get you charged with heresy and imprisoned or burned at the stake.
This attitude is the opposite of everything that modern science stands for. Scientific knowledge is based not on authority but on reality itself. Scientists take nothing on faith. They are skeptical. When they want to know something, they do not look it up in the library or take others' word for it. They go into the laboratory, the forest, the desert--wherever they can find the phenomena they wish to know about--and they ask those phenomena directly. They look for answers in the book of nature. And if they think they know the answer already, it is not of books that they ask, "Are we right?" but of nature. This is the point of "scientific experiments"--they are how scientists ask nature whether their ideas check out.
This "check it out" ideal is, however, an ideal. No one can possibly check everything out for himself or herself. Even scientists, in practice, look things up in books. They too rely on authorities. But the authorities they rely on are other scientists who have studied nature and reported what they learned. And in principle, everything those authorities report can be checked. Observations in the lab or in the field can be repeated. New theoretical or computer models can be designed. What is in the books can be confirmed.
In fact, a good part of the official "scientific method" is designed to make it possible for any scientist's findings or conclusions to be confirmed. Scientists do not say, "Vitamin D is essential for strong bones. Believe me. I know." They say, "I know that vitamin D is essential for proper bone formation because I raised rats without vitamin D in their diet, and their bones turned out soft and crooked. When I gave them vitamin D, their bones hardened and straightened. Here is the kind of rat I used, the kind of food I fed them, the amount of vitamin D I gave them. Go thou and do likewise, and you will see what I saw."
Communication is therefore an essential part of modern science. That is, in order to function as a scientist, you must not keep secrets. You must tell others not just what you have learned by studying nature, but how you learned it. You must spell out your methods in enough detail to let others repeat your work.
Scientific knowledge is thus reproducible knowledge. Strictly speaking, if a person says, "I can see it, but you can't," that person is not a scientist. This means that psychic phenomena (ESP, or extrasensory perception) are not science. Telepathy, precognition, clairvoyance, and other such "wild talents" are said to work for some people but not for others. Worse yet, ESP partisans often say that if a skeptic is present when they try to demonstrate their ESP, it won't work. It exists only for those who already believe in it. Scientific knowledge, on the other hand, exists for everyone. Anyone who takes the time to learn the proper techniques can confirm it. They don't have to believe in it first.
As an exercise, devise a way to convince a red-green colorblind person, who sees no difference between red and green, that such a difference really exists. That is, show that a knowledge of colors is reproducible, and therefore scientific, knowledge, rather than something more like belief in ghosts or telepathy.
(Here's a hint: Photographic light meters respond to light hitting a sensor. Photographic filters permit light of only a single color to pass through.)
What scientists do as they apply their methods is called research. Basic research seeks no specific result. It is motivated essentially by curiosity. It is the study of some intriguing aspect of nature for its own sake. It has revealed vast amounts of detail about the chemistry and function of genes, discovered ways to cut and splice genes at will, and learned how to insert into one organism genes from other organisms. It has also revealed the structure of the atom and discovered radioactivity. It has opened to our minds the immensity in both time and space of the universe in which we live. It has yielded photos of the surface of Mars. It has shown how to make electrons jump through hoops.
Applied research is more mission-oriented, and most biologists and other scientists who work for government and industry are applied researchers. They seek answers to specific problems. They want cures for diseases, methods for analyzing problems, and ways to control various phenomena. Among other things, they have taken the knowledge and techniques developed by basic research in genetics and molecular biology and created the technology of genetic engineering. With this technology they have made it possible to manufacture in quantity and relatively cheaply numerous chemicals for the treatment of diseases. They are now learning how to replace defective genes, and they may one day learn how to equip organisms with new characteristics. They have already created a new industry with immense potentials for growth and impact on human welfare.
Outside biology, nuclear physicists have used the new knowledge of atomic structure to invent nuclear power and H-bombs. Hoop-jumping electrons have made possible modern desk-top computers. Knowledge of the universe has stimulated plans for warding off impending collisions such as the one that extinguished the dinosaurs 65 million years ago.
Today, applied research receives far more funding than basic research. The reason is clear, for we have many problems that cry for solutions. Yet there is also a need for basic research, for basic research supplies a great many of the ideas, facts, and techniques--including new kinds of microscopes and other instruments--which applied researchers then use in their search for answers. Basic researchers, of course, use the same ideas, facts, and techniques as they continue their probings into the way nature works.
The standard picture of the scientific method, which we will discuss in some detail below, presumes that the scientist is an experimentalist or naturalist, making observations, guessing at what they mean (making hypotheses), testing those guesses with experiments, and constructing theories. This is in fact the common image of the biologist, chemist, or physicist in a laboratory. It applies as well to those biologists and geologists who work outdoors, observing wildlife, collecting fossils, and searching for the signs of ancient earthquakes. There are many ways to be a scientist, and some of those ways bear little apparent relationship to the standard picture.
For example, consider mathematicians. Their business is the manipulation of numbers, equations, and geometric images on computer screens, pads of paper, and blackboards. Some mathematicians take great pride in claiming that the patterns and relationships they find in their numbers, equations, and images have nothing to do with the real world. Indeed, they feel that mathematics is at its best when it is most utterly useless. They call themselves "pure" mathematicians and feel that "applied" mathematicians whose work does have something to do with the real world--with building bridges or plotting satellite orbits, for instance--are in some way stained by their connection to the world.
Traditionally, basic researchers in many fields of science have felt some of this same scorn for applied researchers. Yet we might ask whether mathematicians are researchers--or scientists--at all, at least in the same sense as biologists and chemists.
It is not unreasonable to see pure mathematicians as generating ideas (or hypotheses) but refusing to ask the book of nature whether they make sense (by doing experiments). That is, they do not use the scientific method. They seem quite content to check their ideas only against each other, seeking consistency and fit and then constructing--what? It is hardly fair to accuse mathematicians of constructing castles in the air. No matter how pure they seem, they do not construct mere useless fantasies or pretty pictures.
We can say this because again and again, other mathematicians, computer scientists, biologists, and physicists look at the most "useless" of the pure mathematician's efforts and say, "Mmm. That looks like..." And lo! that useless castle in the air turns out to fit something in the world, to be useful after all. One example comes from the study of how difficult it is to find the prime factors of very large numbers, a line of work that led to a virtually unbreakable way to encode secret messages and delighted both financial institutions and national security agencies. (A prime number is a number divisible only by itself and one, such as 5, 11, 23, and 101. The factors of a number such as 12 are those numbers which, when multiplied together, give the first number; thus 3 and 4 are factors of 12, as are 6 and 2, and 1 and 12. You find prime factors by factoring factors until you can go no further; thus the prime factors of 12 are 2, 2, and 3.)
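The factoring procedure just described is easy to put into code. Here is a minimal Python sketch using trial division; it works fine for small numbers but becomes hopeless for the very large numbers used in encryption, which is precisely why such codes are so hard to break:

```python
def prime_factors(n):
    """Factor n by trial division: divide out each factor until none remain."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(prime_factors(12))  # [2, 2, 3]
```

Trial division must check divisors up to the square root of the number; for a number hundreds of digits long, that is far beyond any computer's reach.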
In a way, when such things happen, pure mathematics does indeed become science. The pure mathematician, all by himself or herself, is far more concerned with coming up with nifty ideas that check out against each other than against some external reality. Those who notice a match-up between those ideas and the world outside the mathematician's office are the ones who fulfill the "check it out" ideal. The pure mathematician produces the hypotheses. Others do the experiments.
Both essentials of the scientific method are therefore there, as well as a reminder that the individual scientist does not have to do everything called for in textbook descriptions of the method in order to be called a scientist. The point of science is that guesses be checked, whether by individuals, by cooperating teams, or by people who never meet or even hear each other's names.
Science is often considered the epitome of rationality, of emotionless logic.
This is nonsense, of course. Scientists do have feelings, both in their personal lives and in their work. In the latter, they have a sense of the beauty and majesty and mystery of nature. Their souls bubble with the juices of creative inspiration. Ideas come to them out of the blue, from the unconscious, from dreams, just as they do for poets, painters, and other artists. Indeed, it is not at all unreasonable to consider science just another art.
Like any art, science demands a certain way of thinking to develop and defend those ideas that come from the blue, the unconscious, and dreams. This is where the logic comes in, in the search for implications, the devising of experiments that leave no opening for chance and delusion, the construction of chains of word and idea that allow a scientist to lead others--via reports and textbooks--to the destination he or she has found.
Scientific logic comes in two forms. Inductive logic is at work when a pattern emerges from a mass of observations. It is therefore the kind of logic that leads to hypotheses. It was this kind of logic Charles Darwin used when he assembled a great mass of data on the geographic distribution of similar creatures (along with other data) and discovered the theory of evolution by means of natural selection.
Deductive logic works the other way around. It moves from the general to the specific, not from the specific to the general. One who uses it begins with a general statement (a rule or theory) such as, "Objects not supported against the pull of gravity fall" (a version of the law of gravity), and says, "If I let go of this rock, it will no longer be supported against the pull of gravity and it will fall." That is, deductive logic is predictive logic. It is the kind of logic scientists use when they design experiments. They say, "If my hypothesis is correct, then if I do this, that will happen."
1. Imagine a city neighborhood that is suffering a rash of burglaries. One day you realize that every time there is a burglary, a black van is parked by the fire hydrant down the block. What conclusion jumps to your mind?
2. Did you use inductive logic or deductive logic to reach that conclusion?
3. How would you check out that conclusion?
4. When you answered the previous question, were you using inductive logic or deductive logic?
5. Why, do you think, are police detectives said to excel at deduction? Don't they ever use induction?
As it is usually presented, the scientific method has five major components: observation, generalization (identifying a pattern), the hypothesis (a tentative extension of the pattern or an explanation for why the pattern exists), experimentation (testing that explanation), and communication, in which the results of the tests are reported to other members of the scientific community, usually by publishing the findings. How each of these components contributes to the scientific method is discussed below.
The basic units of science--and the only real facts the scientist knows--are the individual observations. Using them, we look for patterns, suggest explanations, and devise tests for our ideas. Our observations can be casual, as when we notice that black van parked in front of the fire hydrant on our block. They may also be more deliberate, as what a police detective notices when he or she sets out to find clues to who has been burglarizing apartments in our neighborhood.
After we have made many observations, we try to discern a pattern among them. For instance, after measuring the heights of a great number of males and females, we might notice that males are taller than females on the average. A statement of such a pattern is a generalization.
In the context of the black van and burglary example we started developing above, we would form a generalization when we realized that every time the black van parked by the hydrant, there was a burglary on the block.
Cautious experimenters do not jump to conclusions. When they think they see a pattern, they often make a few more observations just to be sure the pattern holds up. This practice of strengthening or confirming findings by replicating them is a very important part of the scientific process. In our example, the police would wait for the van to show up again and for another burglary to happen. Only then would they descend on the alleged villains.
A tentative explanation suggesting why a particular pattern exists is called a hypothesis. In our example, the hypothesis that comes to mind is obvious: the burglars drive to work in that black van.
The mark of a good hypothesis is that it be testable. Since there is no way to test a guess about past events and be sure of absolute truth in the results, we need a simpler, more direct hypothesis. Fortunately, such a hypothesis is easy to devise: if the burglars do indeed drive to work in that van, it should contain the burglars' tools and loot. If we hypothesize that it does contain these items, the test becomes easy. All we have to do is stop and search the van.
What we are doing here is saying, in effect, "I have an idea that X is true. I cannot test X easily or reliably. But if X is true, then so is Y. And I can test Y."
Unfortunately, tests can fail even when the hypothesis is perfectly correct. That is, the van might turn out to contain no obvious loot and no burglary tools, just two young men protesting their innocence. Anyone familiar with television or the movies might immediately guess that the evidence has been hidden elsewhere or passed to a colleague in another vehicle, but there really are no grounds to arrest the suspects.
Many philosophers of science insist on falsification as a crucial aspect of the scientific method. That is, when a test of a hypothesis shows the hypothesis to be false, the hypothesis must be rejected and replaced with another.
In terms of the X and Y we mentioned above, we have found that Y is not true. Does this mean X is false too? Perhaps, but we must bear in mind that we did not test X. We tested Y, and Y is the hypothesis that the idea of falsification says must be replaced, perhaps with hypothesis Z.
1. Almost any interesting question or hypothesis can be divided into sub-questions or sub-hypotheses. How have we done this in the example of the black van and the burglars?
2. If our test of the "van contains burglar tools and loot" hypothesis fails (is falsified), what can we say about the "van belongs to burglars" hypothesis?
3. What testable hypothesis might we try next?
The experiment is the most formal part of the scientific process. The concept, however, is very simple: An experiment is nothing more than a test of a hypothesis. It is what a scientist--or a detective--does to check an idea out. It is giving a new drug to a sick patient. It is raiding that black van parked by the fire hydrant.
The term "experiment" is not always used with care. Because "doing science" means "doing experiments," anything a scientist does, especially if it involves apparatus or laboratories, tends to be called experimentation. However, the scientist may be engaged in nothing more than that essential part of the process of discovery called observation.
When "experiment" means "observation," a scientist is not really checking anything out, although he or she might say otherwise later on, when writing up the scientific paper that reports the results. He is comparing the heights of boys and girls to see if there is any pattern to the differences. She is mixing chemicals to see what happens.
When such explorers of the world are lucky, they discover patterns and hypotheses. Sometimes, the patterns and hypotheses surprise them, for they are looking for something else entirely, like a detective looking for an embezzler when the black van by the hydrant catches his eye.
In either case, the scientist now has hypotheses to test, and here we find the true experiment. Now the scientist is trying to figure out why the pattern exists, what makes it what it is, and what it might mean.
Unfortunately, no experiment can ever prove anything at all. Experiments can only reveal whether a hypothesis is wrong, not whether it is correct. That is what falsification means.
If the experiment does not falsify the hypothesis, that does not mean it is true. It simply means that the scientist has not yet come up with the test that falsifies it. The more times and the more different ways that falsification fails, the more probable it is that the hypothesis is true. Unfortunately, because it is impossible to do all the possible tests of a hypothesis, the scientist can never prove it is true.
Consider the hypothesis that all cats are black. If you see a black cat, you don't really know anything at all about all cats. If you see a white cat, though, you certainly know that not all cats are black. You would have to look at every cat on Earth to prove the hypothesis. It takes just one to disprove it.
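This asymmetry between proving and disproving can be made concrete in a few lines of code (the function names here are my own, purely illustrative):

```python
def falsifying_instance(hypothesis, observations):
    """Return the first observation that contradicts the hypothesis, or None.

    Surviving every check does not prove the hypothesis true; it only
    means no observation has falsified it yet.
    """
    for obs in observations:
        if not hypothesis(obs):
            return obs
    return None

def all_cats_are_black(cat):
    return cat == "black"

print(falsifying_instance(all_cats_are_black, ["black", "black", "white"]))
# -> white
```

No matter how many black cats the loop passes over, it can only ever return "nothing falsified so far"; a single white cat ends the matter at once.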
This is why philosophers of science say that science is the art of disproving, not proving. If a hypothesis withstands many attempts to disprove it, then it may be a good explanation of what is going on. If it fails just one test, it is clearly wrong and must be replaced with a new hypothesis.
However, researchers who study what scientists actually do point out that the truth is a little different. Almost all scientists, when they come up with what strikes them as a good explanation of a phenomenon or pattern, do not try to disprove their hypothesis. Instead, they design experiments to confirm it. If an experiment fails to confirm the hypothesis, the researcher tries another experiment, not another hypothesis.
Police detectives may do the same thing. Think of the one who found no tools or loot in the black van but arrested the suspects anyway. Armed with a search warrant, he later searched their apartments. He was saying, in effect, "I know they're guilty. I just have to find the evidence to prove it."
The logical weakness in this approach is obvious, but that does not keep researchers (or detectives) from falling in love with their ideas and holding onto them as long as possible. Sometimes they hold on so long, even without confirmation of their hypothesis, that they wind up looking ridiculous. Sometimes the confirmations add up over the years and whatever attempts are made to disprove the hypothesis fail to do so. The hypothesis may then be elevated to the rank of a theory, principle, or law. Theories are explanations of how things work (the theory of evolution by means of natural selection). Principles and laws tend to be statements of things that happen, such as the law of gravity (masses attract each other, or what goes up comes down) or the gas law (if you increase the pressure on an enclosed gas, the volume will decrease and the temperature will increase).
Each scientist is obligated to share her or his hypotheses, methods, and findings with the rest of the scientific community. This sharing serves two purposes. First, it supports the basic ideal of skepticism by making it possible for others to say, "Oh, yeah? Let me check that." It tells those others where to look to see what the scientist saw, what techniques to use, and what tools to use.
Second, it gets the word out so that others can use what has been discovered. This is essential because science is a cooperative endeavor. People who work thousands of miles apart build with and upon each other's discoveries, and some of the most exciting discoveries have involved bringing together information from very different fields, as when geochemistry, paleontology, and astronomy came together to reveal that what killed off the dinosaurs was the impact of a massive comet or asteroid with the Earth.
Scientific cooperation stretches across time as well. Every generation of scientists both uses and adds to what previous generations have discovered. As Isaac Newton said, "If I have seen further than [other men], it is by standing upon the shoulders of Giants" (Letter to Robert Hooke, February 5, 1675/6).
The communication of science begins with a process called "peer review," which typically has three stages. The first occurs when a scientist seeks funding--from government agencies, foundations, or other sources--to carry out a research program. He or she must prepare a proposal describing the intended work, laying out background, hypotheses, planned experiments, expected results, and even the broader impacts on other fields. Committees of other scientists then go over the proposal to see whether the scientist knows his or her area, has the necessary abilities, and is realistic in his or her plans.
Once the scientist has the needed funding, has done the work, and has written a report of the results, that report will go to a scientific journal. Before publishing the report, the journal's editors will show it to other workers in the same or related fields and ask whether the work was done adequately, the conclusions are justified, and the report should be published.
The third stage of peer review happens after publication, when the broader scientific community gets to see and judge the work.
This three-stage quality-control filter can, of course, be short-circuited. Any scientist with independent wealth can avoid the first stage quite easily, but such scientists are much, much rarer today than they were a century or so ago. Those who remain are objects of envy rather than disapproval; they are not frowned upon as are those who avoid the later two stages of the "peer review" mechanism.
Those who use vanity presses to produce pamphlets or books (avoiding the second stage) are seen as crackpots. Those who call press conferences instead of submitting reports to journals (avoiding the third stage) are seen as impatient publicity-hounds, promoters, perhaps also crackpots. In both of these cases, the scientist will face an uphill struggle to get his or her ideas accepted.
On the other hand, it is certainly possible for the standard peer review mechanisms to fail. By their nature, these mechanisms are more likely to approve ideas that do not contradict what the reviewers think they already know. Yet unconventional ideas are not necessarily wrong, as Alfred Wegener proved when he tried to gain acceptance for the idea of continental drift early in the
twentieth century. At the time, geologists believed the crust of the Earth--which was solid rock, after all--did not behave like liquid. Yet Wegener was proposing that the continents floated about like icebergs in the sea, bumping into each other, tearing apart (to produce matching profiles like those of South America and Africa), and bumping again. It was not until the 1960s that most geologists accepted his ideas as genuine insights instead of hare-brained delusions.
Many years ago, I read a description of a wish machine. It consisted of an ordinary stereo amplifier with two unusual attachments. The wires that would normally be connected to a microphone were connected instead to a pair of copper plates. The wires that would normally be connected to a speaker were connected instead to a whip antenna of the sort we usually see on cars.
To use this device, one put a picture of some desired item between the copper plates. It could be a photo of a person with whom one wanted a date, a lottery ticket, a college, anything. One test case used a photo of a pest-infested corn field.
One then wished fervently for the date, a winning ticket, a college acceptance, or whatever else one craved. In the test case, that meant wishing that all the corn-field pests should drop dead.
Supposedly the wish would be picked up by the copper plates, amplified by the stereo amplifier, and then sent via the whip antenna wherever wish-orders have to go. Whoever or whatever fills those orders would get the message, and then... Well, in the test case, the result was that when the testers checked the corn field, there was no longer any sign of pests.
What's more, the process worked equally well whether the amplifier was plugged in or not.
I'm willing to bet that you are now feeling very much like a scientist--skeptical. The true, dedicated scientist, however, does not stop with saying, "Oh, yeah? Tell me another one!" Instead, he or she says something like, "Mmm. I wonder. Let's check this out." (Must we, really? After all, we can be quite sure that the wish machine does not work because if it did, it would be on the market. Casinos would then be unable to make a profit for their backers. Deadly diseases would not be deadly. And so on.)
Where must the scientist begin? The standard model of the scientific method says the first step is observation. Here, our observations (as well as our necessary generalization) are simply the description of the wish machine and the claims for its effectiveness. Perhaps we even have an example of the physical device itself.
What is our hypothesis? We have two choices, one consistent with the claims for the device, one denying those claims: The wish machine always works, or the wish machine never works. Both are equally testable, equally falsifiable.
How do we test the hypothesis? Set up the wish machine, and perform the experiment of making a wish. If the wish comes true, the device works. If it does not, it doesn't.
Can it really be that simple? In essence, yes. But in fact, no.
Even if you don't believe that wishing can make something happen, sometimes wishes do come true by sheer coincidence. Therefore, if the wish machine is as nonsensical as most people think it is, sometimes it will seem to work. We therefore need a way to shield against the misleading effects of coincidence. We need a way to control the possibilities of error.
Coincidence is not, of course, the only source of error we need to watch out for. For instance, there is a very human tendency to interpret events in such a way as to agree with our preexisting beliefs, our prejudices. If we believe in wishes, we therefore need a way to guard against our willingness to interpret near misses as not quite misses at all. There is also a human tendency not to look for mistakes when the results agree with our prejudices. That cornfield, for instance, might not have been as badly infested as the testers said it was, or a farmer might have sprayed it with pesticide, or the field they checked might have been the wrong one.
We would also like to check whether the wish machine does indeed work equally well plugged in or not, and then we must guard against the tendency to wish harder when we know it's plugged in. We would like to know whether the photo between the copper plates makes any difference, and then we must guard against the tendency to wish harder when we know the wish matches the photo.
Coincidence is easy to protect against. All that is necessary is to repeat the experiment enough times to be sure we are not seeing flukes. This is one major purpose of replication.
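A toy simulation makes the point. Assume, purely for illustration, that wishes come true by sheer coincidence about one time in ten; a single trial can easily mislead, but thousands of replications wash out the flukes and reveal the background rate:

```python
import random

def wish_comes_true(rng, chance_rate=0.1):
    """A wish 'granted' only by coincidence, at an assumed background rate."""
    return rng.random() < chance_rate

rng = random.Random(42)

# One trial tells us almost nothing: roughly one wish in ten "works."
one_trial = wish_comes_true(rng)

# Many replications reveal that successes occur at exactly the chance rate.
trials = 10_000
successes = sum(wish_comes_true(rng) for _ in range(trials))
print(successes / trials)  # close to the assumed background rate of 0.1
```

If the wish machine's success rate, measured over enough trials, is no better than this background rate, the "successes" were coincidence all along.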
Our willingness to shade the results in our favor can be defeated by having someone else judge the results of our wishing experiments. Our eagerness to overlook "favorable" errors can be defeated by taking great care to avoid any errors at all; peer reviewers also help by pointing out such problems.
The other sources of error are harder to avoid, but scientists have developed a number of helpful control techniques. One is "blinding." In essence, it means setting things up so the scientist does not know what he or she is doing.
In the pharmaceutical industry, this technique is used whenever a new drug must be tested. A group of patients is selected, and half of them are given the drug. The others are given a dummy pill, or sugar pill, also known as a placebo. The halves are chosen randomly to avoid any unconscious bias--there is no telling what difference, whether sicker or healthier, taller or shorter, male or female, might affect the outcome, so drug (and other) researchers take great pains to be sure groups of experimental subjects are alike in every way but the one being tested. Here that means the only difference between the groups should be which one gets the drug and which one gets the placebo. In all other respects, the two groups are treated exactly the same.
Unfortunately, placebos can have real medical effects, apparently because we believe our doctors when they tell us that a pill will cure what ails us. We have faith in them, and our minds do their best to bring our bodies into line. This mind-over-body "placebo effect" seems to be akin to faith healing.
Single Blind: The researchers therefore do not tell the patients what pill they are getting. The patients are "blinded" to what is going on. Both placebo and drug then gain equal advantage from the placebo effect. If the drug seems to work better or worse than the placebo, then the researchers can be sure of a real difference between the two.
Double Blind: Or can they? Unfortunately, if the researchers know what pill they are handing out, they can give subtle, unconscious clues. Or they may interpret any changes in symptoms in favor of the drug. It is therefore best to keep the researchers in the dark too; since both researchers and patients are now blind to the truth, the experiment is said to be "double blind." Drug trials often use pills that differ only in color or in the number on the bottle, and the code is not broken until all the results are in. This way nobody knows who gets what until the knowledge can no longer make a difference.
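The coding procedure behind a double-blind trial can be sketched in a few lines. Everything here is illustrative, with made-up patient names and group sizes: patients are randomly split into two equal groups, bottles carry only numbers, and the key matching numbers to contents stays sealed until all results are in.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

patients = [f"patient_{i:02d}" for i in range(1, 21)]  # 20 hypothetical subjects

# Randomly split the patients into two equal groups to avoid unconscious bias.
shuffled = patients[:]
random.shuffle(shuffled)
half = len(shuffled) // 2
assignment = {p: ("drug" if i < half else "placebo")
              for i, p in enumerate(shuffled)}

# Each patient gets a numbered bottle; the sealed key maps bottle numbers to
# contents. During the trial, staff and patients see only the numbers.
bottle_of = {p: n for n, p in enumerate(shuffled, start=1)}
sealed_key = {bottle_of[p]: assignment[p] for p in patients}

# Results are recorded against bottle numbers, which carry no hint of
# drug versus placebo.
blinded_view = sorted(bottle_of[p] for p in patients)
print(blinded_view)

# Only after every result is in is the code "broken" and sealed_key consulted.
```

The design choice worth noting is that the randomization and the key live with a third party (or a sealed record), not with the researchers handing out pills, so neither side can leak clues, consciously or not, until the knowledge can no longer make a difference.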
Obviously, the double-blind approach can work only when there are human beings on both sides of the experiment, as experimenter and as experimental subject. When the object of the experiment is an inanimate object such as a wish machine, only the single-blind approach is possible.
How might we blind a researcher experimenting with the wish machine? There are a number of possibilities, all of which require a second experimenter in addition to the one making the wishes. This assistant might flip a coin to determine whether the machine was plugged into a power supply or the copper plates or the antenna was plugged into the amplifier (but in a way that was not obvious to the wish-maker). He or she might prepare a stack of numbered envelopes containing photos to wish on. The wish-maker would then decide what to wish for, pick an envelope at random, slip it between the copper plates, and wish; he or she would be blind because he could never be sure the wish matched the photo (or even whether the envelope held a photo!).
1. Devise an experimental setup that would permit an assistant to keep the wish-making experimenter from knowing whether the amplifier, microphone, or antenna was plugged in.
2. What kinds of error should each of the above controls on the wish-machine experiment help to avoid?
3. What other controls can you devise for the wish-machine test?
With suitable precautions against coincidence, self-delusion, wishful thinking, bias, and other sources of error, the wish machine could be convincingly tested. Yet it cannot be perfectly tested, for perhaps it only works sometimes, when the aurora glows green over Copenhagen, in months without an "r," or when certain people use it. It is impossible to rule out all the possibilities, although we can rule out enough to be pretty confident in calling the gadget nonsense.
Very similar precautions are essential in every scientific field, for the same sources of error lie in wait wherever experiments are done, and they serve very much the same function. However, we must stress that no controls and no peer review system, no matter how elaborate, can completely protect a scientist--or science--from error.
It is normal for scientists to be wrong. In fact, the scientific method is designed to take advantage of error--that is what the "check it out" soul of science is all about. A scientist is supposed to make educated guesses as a first step in figuring out how the universe works. Most such guesses are bound to be wrong. The function of experiment is to tell which ones are wrong, to weed them out of the garden of possibilities, and to correct them. And it has been said that wrong guesses may be more valuable than right ones, for they may stimulate more future observations, hypotheses, and experiments.
Sadly, the normal procedures of scientific research are too slow for the patience of some researchers, and a few therefore fake their results. William Summerlin, a cancer researcher, eager to show his supervisor that the technique he had developed to transplant skin from a black mouse to a white mouse really worked, used a felt-tip marker to color a white mouse black; he was caught when the ink rubbed off on a lab assistant's fingers. Psychologist Cyril Burt, feeling that because he knew the truth he needn't bother with the drudgery of actual research, manufactured reports on the heritability of intelligence while sitting at his desk; he was not found out until after his death. Other researchers, perhaps feeling the pressure to produce that goes with the endless competition for research funding and promotion, have also faked data, plagiarized others' reports, and committed other sins. Even Gregor Mendel, the monk who discovered the basic laws of genetics, seems to have fudged his data!
Some people say that those who get caught in their misdeeds represent only the tip of the iceberg. Others say that fraudulent research and plagiarism are rare, and in any case it doesn't really matter because of the scientific ideal of "check it out," which guarantees that the frauds will eventually be found out and corrected. Unfortunately, this guarantee is only a theoretical guarantee. Because researchers build reputations, get promotions, and win research funding only for new investigations, not for repeating others' work, very few experiments are ever repeated. Lies, once enshrined in the scientific literature, may remain to mislead future scientific workers.
Scientific integrity means adhering to the ideals of science--skepticism, communication, and reproducibility. Scientists of integrity take nothing on faith, and they do not ask others to take their own word for anything. They communicate, and they communicate truthfully, so that their work, like that of others, can be checked against the ultimate authority, reality, the universe, nature's book.
What if a "researcher" claims to have done experiments that show, say, that vitamin C can cure nearsightedness? What if he or she never did the experiments, but later on it turns out that vitamin C does indeed cure nearsightedness? After all, it has to happen sometimes that a liar turns out to be right. But the liar has still failed in his or her responsibility to uphold the ideals of science, and still deserves to be drummed out of the profession. The reason is simple: He or she has demonstrated that he or she cannot be trusted and does not do the work claimed. The methods he or she lies about having used may not even work, and may therefore lead future researchers down unnecessary blind alleys. And future guesses will probably not be right. Luck is not reliable.
Scientific research requires patience and an enduring allegiance to the ideals of skepticism and truth and helping others check your work. Those who lack these qualities should not plan to become scientists.