I think I am done with my experiments. Whew. This summer I was able to run the test loop that my colleagues and I devised to investigate boiling flow in expanding microchannels. A mouthful, eh? True, but the data output is rather a bigger bite. It's no Large Hadron Collider, but my four experiments ran to a 100 MB chunk of text data. What do you do with this? Well, certainly you write programs (in MATLAB) to organize it, do calculations, sniff through it, plot the parts you want, and so on.
But you look at all of it. And you don't delete. And you don't fudge.
This is not to say that you don't analyze it, or ignore the big spike where you turned the power on, or even perhaps use a smoothing routine on the thermocouples (which are as noisy as undergrads), but if you do any of these things you say so. You work only from what the data confidently allow you to say, and the more layers of analysis lie between you and the data, the greater the uncertainty becomes. You don't clip the funny part that doesn't agree with your theory. You don't impose your theoretical curve on the data points and say "see!" You must patiently sift and see what the data say. Only thus can you contribute to science rather than impede it.
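Declared smoothing, for what it's worth, can be as simple as a centered moving average with the window size stated in the writeup, while the raw record stays untouched. A minimal sketch (in Python here rather than the MATLAB used above; the function name and sample values are illustrative, not from the actual data):

```python
# Transparent smoothing of noisy thermocouple readings: a centered
# moving average. Raw samples are never modified or clipped; the
# smoothed copy is reported alongside the declared window size.

def moving_average(samples, window=5):
    """Centered moving average; window must be odd. Near the edges the
    window shrinks so every raw point still appears in the output."""
    if window % 2 == 0:
        raise ValueError("window must be odd")
    half = window // 2
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        smoothed.append(sum(samples[lo:hi]) / (hi - lo))
    return smoothed

# Hypothetical readings in degrees C, with one noisy spike at index 5.
raw = [20.1, 20.9, 19.8, 21.2, 20.4, 35.0, 20.6]
smooth = moving_average(raw, window=3)
```

The point of keeping `raw` and `smooth` as separate arrays is exactly the honesty rule above: the spike is damped in the plot, but it is still sitting in the record for anyone to inspect.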
Van der Waals, the great Dutch thermodynamicist, provides a (perhaps forgivable) illustration. He carefully formulated his equation of state and principle of corresponding states for gases, using (and making up) the brand-new molecular theory in an innovative way and predicting values for the terms in his equation. He was also a fine experimentalist, and at other times he accurately measured these same values. They did not agree with the theory. He did not publish on this discrepancy. By accepting the theoretical values, the world waited for decades before the reasons for the deviations were investigated carefully, an investigation that opened interesting doors to small-scale statistical thermodynamic theories.
Big deal, right? He made a good theory, it was workable, and it served pretty well for pretty long. But did it actually describe reality? Not quite. Was it great science? Well, yes and mostly no: brilliant in theory, but wrong.
Science is about honesty (see earlier post), and honest science is fearless and doesn't give a rip about your hypothesis. If it comes out validated, great, but if it comes out invalidated, great. You still learned something. And it is at the loose ends that science advances. It is where somebody stands up and asks "Well WHY didn't it work?" or says "That's not right! There must be more to that!" And in finding the "more to that", we move forward.
"The first principle is that you must not fool yourself--and you are the easiest person to fool. So you have to be very careful about that. After you've not fooled yourself, it's easy not to fool other scientists. You just have to be honest in a conventional way after that." - Richard Feynman in CalTech commencement speech, 1974 (see http://neurotheory.columbia.edu/~ken/cargo_cult.html or "Surely You're Joking, Mr. Feynman", which is on my shelf at home).