Monday, 22 July 2013

I am slowly doing science (1, 2, 3, 4, 5, 6, drop).

Last week, the pitch dropped. The tar pitch, that is. I won't go into all the details of what happened, but here's a summary: A long time ago some people got in an argument about whether or not tar pitch was an extremely slow-moving liquid (as opposed to a solid). To resolve this argument, they stuck some tar pitch in a funnel, which they stuck in a jar which went in another jar which went in a cupboard, and the green grass grew all around. The idea was that if the pitch ever dropped through the funnel, it would be proof that the pitch was liquid. If we reached the end of time and the pitch hadn't dropped, it was probably solid (turns out proving non-existence is tough...).

So last week, after various events had kept such a drop from ever being recorded (they happen once every ten years or so), a pitch drop was caught on video, 70 years after the experiment was started. Yay science!

The reason I bring this up is that there's a message in all of this that hasn't made any of the news reports about the pitch drop: Science sometimes takes time. Science is sometimes boring and tedious. Science is sometimes boring and tedious even for scientists. If that seems like a strange thing for someone who spends at least some of their time as a science communicator to say, well, it is. But it's also an important one.

First off, "sometimes" is a key word here. Science can be, and often is, exciting. It can blow your mind and change your view of the world in an instant. It can be indescribably cool. And sharing those cool, mind-blowing moments is an important part of inspiring both future scientists and the public at large to learn more about the world around them and what humanity can do with it.

But if that's all we ever focus on, we risk sending the message that doing science is about having a big idea, which is so obviously right that everyone goes, "Wow! You're obviously right," and sees the world in a new way. These moments, though, are few and far between. Far more often, someone proposes an idea that is partially right, and it gets bounced around, and revised, and extended. And, in the most crucial step, it gets tested by experiments. Experiments that can take time, experiments whose results are inconclusive or difficult to interpret, experiments that lead to more questions than answers.

The development of silicon computer chips is a good example of this. Electronic band structure theory, the set of ideas that eventually allowed people to understand the electronic structure of silicon, started development in the early 1930s. Experimentally testing it, though, was a problem; experiments on silicon contradicted each other, and were generally inconclusive. The problem, it later turned out, was that silicon is both exquisitely sensitive to the presence of impurities (which is why it's so useful) and extremely difficult to purify. It took a decades-long effort of progressively refining the techniques for manufacturing pure silicon before its properties could actually be probed. This went hand in hand with refinement of band structure theory. Eventually, the structure was known well enough that the first solid-state transistors could be created, which would lead to the computer revolution--decades later.

Even after the silicon transistor was created in 1954, it took scientists and engineers years to get to the desktop computer and the internet. And much of the development was incremental, rather than coming in revolutionary flashes of insight. Each generation of hardware allowed engineers to refine their techniques and build a better one, which is why the CPU in this computer consists of transistors of largely the same design as the 1954 original, except millions of times smaller and faster.

Ignoring this type of incremental (but no less world-changing) science leads to the type of big-idea, insight-driven reporting so brilliantly excoriated in this extended piece by Boris Kachka in New York Magazine, written in the wake of the Jonah Lehrer scandal. It leads to doubt when climate change science isn't as clear-cut and straightforward as people have come to expect real science to be. And it leads to young potential scientists doubting their ability to be scientists when their ideas aren't right, or are incomplete.

So let's keep telling the mind-blowing stories. But let's also remember to occasionally tell the stories of ideas that weren't quite right, experiments that were confusing, and pitch that took a decade to drop.

Curing cancer in everything but humans

The next time you read an article claiming that a cure for cancer is just around the corner, you will be forgiven if you don't rush out to tell your friends and family the good news. After all, it seems like such announcements are made fairly regularly; meanwhile, oncology wards aren't exactly closing down due to lack of patients.

You could repeat this example indefinitely; we still don't have cures for Alzheimer's or Parkinson's; we don't know what causes autism (or even if there is a cause); we still can't halt aging. All this despite the regular headlines telling us that such things are just around the corner. What gives?

A big part of the problem here comes from the fact that, when it comes to doing medicine in humans, there are two types of studies: the type we would like to perform, and the type we actually get to perform. The type we would like to do goes something like this: take two groups of people with a disease. Treat one group with the drug you want to test, and don't treat the other group. Keep everything else the same, to the point of giving the control group fake treatments so that their experience isn't different. At the end, tally up how everyone did, and see if the drug worked. Or, to take a slightly different formulation: take two groups of people. Expose one group to the agent that you think causes a certain disease. Don't expose the other group, but keep everything else the same, to the point of exposing them to a fake agent. At the end, tally up how everyone did, and see if the agent caused the disease.

The problem is, of course, that it is usually completely unethical to do this type of study with actual people. Obviously you can't go around deliberately exposing people to things that you think might cause terrible diseases in the name of science, and you also can't deny people older, at least partially effective treatments because you need a control group to test your newer, better treatment. So scientists fall back on two ways of getting around this. One, you could do the test on animals. Two, you could look back at records of who had radium watches, or smoked cigarettes, or consumed excess vitamin C, and then correlate that with rates of leukaemia, or lung cancer, or long life.

There are, of course, practical downsides to each. Animal testing, which can in principle be done with our rigorous ideal study design, has the problem that you don't know that humans will react the same way as the animals, and the only way to know is to do another study, which brings us back to the original problem. Correlating patient histories, which involves actual humans, has the disadvantage that you don't have controls; you don't know, for example, that the people who consumed excess vitamin C weren't simply the type of people who followed all kinds of health fads, in which case they may also have been, compared to the general public, less likely to live near power lines, more likely to rub their skin with olive oil, more likely to eat organic foods, etc, etc.

John Ioannidis has looked at statistical issues around both animal studies and "look-back" studies. His research is, to say the least, a little disturbing, considering how often these types of studies are reported in the media. In a now-classic paper, "Why Most Published Research Findings Are False," Ioannidis points out that most look-back studies ignore the roads not taken in their analysis. What he means is that if you do a study on, say, the connection between aspartame and Alzheimer's (which made headlines in the '90s), you need to account for all of the other things that you didn't test that were, in principle, just as likely to be connected. This is because study conclusions are typically reported with what's known as the significance: the probability that the effect observed could have arisen randomly. Significance is reported basically because it's what we can calculate easily. But the problem with look-back studies is that if there were, say, 50 different things that were as likely as aspartame to be connected to Alzheimer's, then a significance of 0.05 (which is a typical value) becomes inconclusive. You have a 1 in 20 chance of getting the effect you saw by chance, but there were 50 different relationships you could have tested, so odds are you'd get 2 or 3 positive results just by randomness. Unfortunately, it's usually pretty difficult to estimate how likely a given connection was to be real before the study was run (this is known as the prior, or prior probability, and estimating it is a pretty endemic problem across science). So most studies don't report it. Which means that they may be drastically overestimating the strength of their conclusions.
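
The arithmetic in that last step is worth seeing in action. Here's a minimal simulation (in Python; the 50-hypotheses setup is just the example above, not anything from Ioannidis's paper): test 50 relationships where nothing real is going on, and count how many clear the 0.05 bar anyway.

```python
import numpy as np

rng = np.random.default_rng(0)

n_hypotheses = 50   # candidate exposures, none actually linked to the disease
alpha = 0.05        # the conventional significance threshold
n_studies = 10_000  # simulated look-back studies

# Under the null hypothesis a p-value is uniformly distributed on [0, 1],
# so each of the 50 tests has a 5% chance of looking "significant" anyway.
p_values = rng.uniform(0.0, 1.0, size=(n_studies, n_hypotheses))
false_positives = (p_values < alpha).sum(axis=1)

print("average false positives per study:", false_positives.mean())            # ~2.5
print("fraction of studies with at least one:", (false_positives > 0).mean())  # ~0.92
```

Even with nothing real to find, the average study in this toy setup "discovers" two or three connections, and more than nine out of ten find at least one.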

Ioannidis has also looked at animal studies. In a paper published last week in PLoS Biology, he asks the following question: if you perform some large number of studies, how many do you expect to come back with a positive result (ie, the medicine worked)? He works out a statistical argument for this expected number, then compares it to available databases in which people have reported both positive and negative results from animal tests. What he sees there is that the observed positive results are way higher than expected. Hence, researchers are, for whatever reason, more likely to report positive results than negative ones. This is a problem for the reason discussed above: in order to get an idea of the prior for a given relationship, we need to know how many similar studies have turned up negative results. If the negative results go unreported, again, studies end up drastically overestimating the strength of their conclusions.
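
This isn't Ioannidis's actual model, but a toy version of the argument is easy to sketch (in Python, with every rate below an illustrative assumption rather than a measured number): simulate a pool of animal studies where only a few treatments really work, publish positives far more often than negatives, and compare the positive rate in the full pool with the rate in the "literature".

```python
import numpy as np

rng = np.random.default_rng(1)

n_studies = 100_000
true_effect_rate = 0.10  # assumption: 1 in 10 tested treatments actually works
power = 0.80             # chance a real effect produces a positive result
alpha = 0.05             # chance a null effect produces a false positive

real = rng.random(n_studies) < true_effect_rate
positive = np.where(real,
                    rng.random(n_studies) < power,   # true positives
                    rng.random(n_studies) < alpha)   # false positives

# Publication filter: positives almost always get written up,
# negatives only rarely (again, illustrative numbers).
published = np.where(positive,
                     rng.random(n_studies) < 0.95,
                     rng.random(n_studies) < 0.20)

print("positive rate, all studies run: ", positive.mean())            # ~0.125
print("positive rate, published only:  ", positive[published].mean()) # ~0.40
```

Anyone estimating priors from the published record in this toy world would conclude that treatments work about three times as often as they actually do.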

This leaves us in rather a bad situation. Not only do look-back and animal studies have built-in limitations, which tend to get glossed over in media reports looking to make an impact, but the studies themselves are overestimating how conclusive the data they report is. The end result is the first paragraph of this post: a relentless stream of articles promising potential breakthroughs that never quite pan out.

What's the solution to this? There may not be a simple one. Better appreciation of study design and its limitations, by both researchers and science communicators, would help. Most importantly, we need to design more integrity into biomedical studies. There are places where research teams can register studies before starting them, thus removing an important source of bias. Such efforts are voluntary at the moment; national governments and healthcare bodies could make them compulsory. And scicomm blogs could work to make sure that each time someone reads a headline that says "Cancer cured in ...", they skeptically ask, "But what type of study was it?"

Tuesday, 16 July 2013

Headlines in Science: Scientists say...

The last post in this series looked at a specific example of a bad headline. This time, I want to zoom out a little and focus on a class of headlines. Specifically, all those that include "Scientists claim," or "Scientists say," or any equivalent to this, in the title.

This is a dangerous construct. The implication is that all scientists, or at least all scientists in a given field, or at least a majority of scientists in a given field, agree with the statement in the rest of the headline. It's understandable when you consider headlines as a way of generating interest in an article. After all, no one cares if "Joe from accounting totally thinks he gained weight when he stopped sleeping." But give it a title like "Lack of sleep can make you fat, scientists claim" and now we've got something.

However interest-grabbing it may be, problems arise when the implied support of the scientific community collides with that effect we've discussed before, wherein an editor summarizes (and sensationalizes) an article whose author was summarizing (and sensationalizing) a research project. A potentially misleading or flat-out wrong statement has now been given the weight of expert consensus.

Take the "lack of sleep can make you fat" example: reading the article reveals that the scientists didn't actually measure weight, or BMI, or waist size. And they didn't run the experiment long enough to see a noticeable weight gain. What they did was measure the levels of a particular chemical in the body that is linked to a desire to eat. There's nothing wrong with doing that, but as with all research it's important to be clear on what was and wasn't shown.

The next big problem with the "scientists claim" construct is that very often it's applied to the findings of one researcher, or at most a small collaboration. While technically it's true that a paper with three authors is "scientists" saying something, headlines using this construct give the impression of consensus, not just a small group.

The example we've already looked at falls squarely in this category; it's a single group reporting one study they performed. This problem, though, appears to be rife. Searching "scientists claim" and "scientists say" in Google news on 16 July 2013 brought up, in addition to the sleep-fat story:


"Singing And Yoga Might Have Same Health Benefits, Scientists Claim"

"Global warming 'can be reversed', scientists claim"

"Earth had two moons, scientists claim"

"'There is no scientific consensus' on sea-level rise, say scientists"

In every one of these stories, actually reading the article reveals that each is based on one paper published by one research group. The research might be borne out, and the headline may actually reflect scientific consensus in a few years (well, except the last one, which is a flatly disingenuous article from the climate-change denial camp). But at the moment they're jumping the gun.

There's one last point I want to make here. Often "scientists claim" headlines do include a qualifier: might, may, can, etc. This in and of itself isn't a bad thing. But I don't think it lets editors off the hook for making the statements they then qualify. Now, I don't actually have any research backing up what I'm about to say (if anyone else knows of any, I'd love to hear about it!). But from personal experience and talking to other people, it seems like qualifiers are the first things forgotten when recalling a headline. I don't generally remember the exact wording, I remember the idea and I paraphrase it, which comes out something like "this study showed that if you get less sleep you gain weight." Qualifier gone.

For this particular problem, there's a pretty easy solution: stop using "scientists claim"! Or any other equivalent construct. At the very least, reserve it for statements that come out of large conferences designed to forge a consensus. But on the whole, editors, please just stop.

Maybe then I can stop losing sleep over terrible headlines. Which I heard was making me fat. It's true; scientists say so.

Monday, 15 July 2013

Why E=mc^2 is actually cool

E=mc^2 may be the most famous physics equation in history. Why it's famous, though, is misunderstood, both by the public at large and even by many physics students (at least the ones I've talked to about this).

So, the Public Understanding: Einstein was a super-genius, and he invented E=mc^2. This has something to do with energy. Einstein used this to invent the atomic bomb and win World War II.

Why this is Wrong: Well, Einstein was actually a super-genius. I kind of have a crush on him, to be honest. And he did derive (an important point we'll come to later) E=mc^2. He did not, though, have much to do with inventing the atomic bomb. What's more, E=mc^2 didn't lead straight to the bomb in the sense that most people think it did.

So let's take a step back. The equation we're talking about says that Energy (E) is equal to (=) mass (m) times the speed of light (c) squared (^2). This tells us that (a) mass can be converted to energy, and energy can be converted to mass, and (b) a little bit of mass converts to an enormous amount of energy, since c^2 is a very big number. Now, it's certainly true that the mass of the final nuclei involved in a nuclear bomb is less than the mass of the initial nuclei, and that this change in mass is proportional to the energy released. But that's true of all processes. When I burn gas in my car, the final products are ever so slightly lighter than the initial ones. But I don't credit E=mc^2 with making my car run. So what's up?
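
To put a number on "enormous", here's the arithmetic for a single gram of mass (plain unit bookkeeping, nothing more):

```python
c = 299_792_458          # speed of light in m/s (exact, by definition)
m = 0.001                # one gram, expressed in kg

E = m * c**2             # energy equivalent in joules
print(f"E = {E:.2e} J")  # ~9.0e13 J

# For scale: burning a kilogram of gasoline releases roughly 4.6e7 J,
# so one gram of mass is worth about two million kilograms of fuel.
print(f"gasoline equivalent: {E / 4.6e7:.2e} kg")  # ~2.0e6 kg
```

That's the "very big number" doing its work: a paperclip's worth of mass, fully converted, is worth millions of kilograms of fuel.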

The reason we associate E=mc^2 with nuclear (ie, a-bomb) processes and not chemical (ie, gas-burning) ones is basically a matter of technical convenience. When I burn gas, it's easy to measure the energy that came out, but hard to measure the change in mass, because it's incredibly tiny. When I split or collide nuclei, it's hard to measure the energy that comes out, partly because there's so much of it and partly because a bunch of the energy gets carried off by neutrinos, which we can't capture very well. But it's (relatively) easy to measure the mass of the initial and final nuclei, so that's what we do. E=mc^2 is always true, it's just sometimes convenient to use, and other times not.
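
You can run the same bookkeeping in reverse to see why nobody bothers weighing chemical reactions. Using the same rough figure of 4.6e7 J released per kilogram of gasoline burned:

```python
c = 299_792_458       # speed of light in m/s

E_per_kg = 4.6e7      # chemical energy per kg of gasoline, in J (ballpark figure)
dm = E_per_kg / c**2  # mass lost per kg burned, from m = E / c^2

# ~5.1e-10 kg: about half a microgram, or one part in two billion
print(f"mass lost per kg of fuel: {dm:.1e} kg")
```

No lab scale sees a part in two billion. Nuclear reactions, by contrast, shift masses by roughly a part in a thousand, exactly the kind of change a mass spectrometer can pin down.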

In any case, most of the effort that went into building the atomic bomb was on rather practical questions like, "How do we separate out the uranium we want from the uranium we don't want?" and "How can we use precision explosives to bring that uranium together in just the right way?" These questions had really nothing to do with E=mc^2.

Now the Common Physics Student Understanding: Einstein was a super-genius, and he derived E=mc^2. This tells us that mass and energy are equivalent, two aspects of the same thing. This changed our view of reality.

Why this is Wrong: Well, it's really not. What it is, though, is incomplete. So to complete it, we have,

Why E=mc^2 is Cool and Important: To understand this, we need to take a look at where the equation came from. Where it came from was two papers Einstein published in 1905 on electricity and magnetism. Einstein starts off this little duology by noting that, at the time, the laws of electricity and magnetism were inconsistent with the laws of motion in a peculiar way. The example he used requires a bit of background, so I'm going to pick a simpler, but equivalent one.

You probably learned at some point that electric current can make magnetic fields--this is how we get electromagnets. In fact, any current, and any electric charge that's moving, creates magnetic fields. This, though, creates a problem. Say I rub a balloon on my head to put some charge on it, then put my charged balloon out in space, a long way away from anything. Now, if the charged balloon is moving, it creates a magnetic field; if it's not, it doesn't. But, how do we know in space what is moving and what is standing still? If one person (normally called Alice) is floating next to the balloon, and another person (normally called Quvenzhané) shoots past, they would disagree on who is moving and who is standing still, and hence they would disagree on whether or not the balloon was producing a magnetic field. But the magnetic field can't both be there and not be there, so we have a problem.

Einstein noted this inconsistency, and found a way to write physics laws in a way that didn't create these disagreements. It was a bit of a weird way, with time slowing down as you sped up, and lengths changing and whatnot, but it worked. And, almost as an aside, it produced the expression E=mc^2.

The details of how that all works aren't really important here. What is important is this: the laws of Electricity and Magnetism (EM), which you can work out with some styrofoam balls and plastic in a high school classroom, imply that every object in the universe has an intrinsic energy that depends only on its mass. Not its internal structure, not what it's made of, just its mass. So E=mc^2, which isn't really about EM, and applies to things that aren't charged or magnetic, and plays a large role in gravity, is embedded in the structure of electricity and magnetism. This should blow your mind. The laws of how electricity works also tell you that everything has an intrinsic energy proportional only to its mass. This is one of the best pieces of evidence so far that there is, in fact, a consistent mathematical structure underlying the universe. That Einstein figured out this implication pretty much cemented his genius status, even if he didn't single-handedly win World War II.

And THAT is why E=mc^2 is cool.

Saturday, 13 July 2013

Headlines in Science: io9 and Objectifying Women

This is the first post in a series on headlines in science communication. This one is from last year, but I'm going to talk about it because it's a particularly egregious example of a terrible headline.

One of the blogs I read is io9, a blog about science, technology, and nerdy things in general. Normally I enjoy frequenting this part of the internet. But in this case they participated in the all-too-common trend of headline inflation - that is, taking some research, adding an interpretation to it that wasn't in the original but that makes it more attention-grabbing, and then making that interpretation the headline. In this case, the research was this article (unfortunately behind a paywall), which sought to quantify the mental processes that drive the objectification of women's bodies.

I won't go into all the details, but basically the study looked at how likely people were to notice changes in specific body parts in pictures of men and women, and how easily they identified pictures they saw earlier from a picture of just one part of the person. It was a study looking at the way the minds of the participants worked, while assuming throughout that a) objectification is a bad thing, and b) studying the way the mind works could help us combat it.

Move on over to the Scientific American blog post covering the story. Here the bare facts are reported, and the post stresses that objectification is harmful, and points to an experiment that could lead to a better understanding of how to put your brain into a non-objectifying mode. But the author also wants to get at more than just what is in the study, so we get this quote:

There could be evolutionary reasons that men and women process female bodies differently, Gervais said, but because both genders do it, "the media is probably a prime suspect."

Note that the quote they got from the study author (Gervais) says "the media is probably a prime suspect," but the post author chose to place it in a context that implies that evolutionary reasons are also a possibility, something not mentioned at all in the research article. Obviously, I have no idea what their conversation was, but it seems like the journalist asked whether evolution could have hardwired us for this, and the study author gave a researcher's version of "no." Bear in mind that scientists get good reputations by not being wrong, so they tend to hedge any statements they make, especially ones on which they don't have conclusive data from multiple sources. The journalist here took that "no" and shaped it into a sentence that comes across as a "maybe..." It's a little sad, but not entirely unexpected.

Now we get to the io9 article. The headline is "Both men and women may be hardwired to objectify women's bodies". At least, it was (I'll get to that in a bit). At this point, my question for io9 is, WHAT THE HELL?!?! With one headline they took an article whose goal was to help reduce objectification, and turned it into something that could be used as an excuse to keep doing it. The comments indicated that a number of people took it exactly as that. (One comment: "I'm staring at your titties because of science baby. Pure science." Presumably not the reaction the study author was hoping for.) I appreciate the fact that io9 wants to increase their page views, since that is how they generate revenue, but completely inverting the subject of research in a way that harms the people the research was trying to help, just to direct more people to their blog, is despicable. There's no way to soften that or justify what they've done here.

Just in case that wasn't bad enough, the article title was later changed, after a number of commenters pointed out that it was completely at odds with the research it was reporting. Now it reads, "Why both men and women's eyes are drawn to women's bodies." Perhaps no more accurate, but at least less offensive. (The original title is still in the web address.) But there's no mention in the article that it has been edited. So now, the numerous people who commented that this was a sexist attempt to increase page views are left sounding like they over-reacted to a straightforward article. If you're going to correct a mistake on a thing like this, the least you could do is fess up to it. Not that it would have undone the damage. Since io9 is a high-volume blog, even someone who checks in once a day won't ever see the article again, unless they scroll down the sidebar looking for it. All in all, this amazing double-whammy might be the biggest science journalism fail I've seen in a long time.

This whole episode is a sad illustration of the process that creates terrible headlines, and the damage they can do. To start with, the science being reported on was exploratory work: an initial study that will presumably be followed by others to give it more nuance and context. Exploratory work often turns out to be incorrect, and when it is correct, the best interpretation is bounced around the research community, often for years, until the work is settled in context with other work and theories, and a story emerges. Any take-home messages suggested before then are, at best, the opinion of one researcher regarding the significance of their own work, or, at worst, the opinion of one editor regarding work in a field they have only a passing knowledge of. This type of science needs to be considered especially carefully, as it is especially prone to bad headlines.

The next step in this sad process is the multiple layers of sensationalizing. Here there were three levels: the original blog at Scientific American, the article at io9, then finally the headline for that article. Each level pushed the conclusions from the original research into more sensational, and less accurate, territory, until the point was completely lost.

Finally, the damage. Though the body of the article never makes the statement the headline does, it's pretty clear from the comments that a number of people assumed the headline was the conclusion of the research being reported on. And why not? Isn't that the point of a headline? It's difficult to quantify this, but it's safe to say that, for each person that took the time to comment on the article, many more simply saw the headline and added it to their internal "facts I know" database.

The process by which this headline went wrong suggests some things that could be done. First, read the original research. I know that paywalls are a problem for many science enthusiasts, but there's really no excuse for a professional journalist to not get a copy of the original article. Reporting on a report on some research is bound to introduce distortions. Second (and this will be a theme), the headline should be tossed back as far as possible. What I mean by this is that in the worst case scenario, the headline is okayed by the journalist writing the article, and in the best case, by the scientist actually doing the research. This won't always be realistic given news timelines, but throwing the headline back as far as possible to double-check it can only help.

So, thank you, io9, for that wonderful illustration of how to be completely terrible at making headlines. I sincerely hope the rest of the articles in this little series have far less to work with.

Scientists say Headlines Generally Terrible

A headline serves as the title for a news story, but obviously it's more than that. It serves as a guide, letting the reader know what to expect; it forms a context for what they are about to read; and it may be the only part of the article many people look at. Which is to say, it's important that a headline be as accurate as one line can be.

Unfortunately, headlines often aren't written by the author of the attached article--they're added by an editor or someone else involved with the publishing process. So there's a double layer of understanding to surmount: the science journalist understanding the material and communicating it clearly in an article, then an editor understanding the article and making a good headline. Sadly, the point of a piece of research or a discovery is often lost in these two translations.

It gets even worse when the subject matter is controversial. Now, instead of just two layers of understanding going into the headline, there are also two layers of sensationalism. I'm not trying to say here that science journalists are irresponsible tabloid writers, just that by nature they look at a story and ask, "What's the excitement here? How can we spice up this story?" Then the editor looks over the story and asks, "What's the excitement here? How can we spice up this headline?" Two layers of this, and it's no wonder that the headline often ends up an extreme exaggeration of the science it purports to describe.

It's helpful to look at some examples to see where headlines can go wrong, and what the damage can be. With that in mind, this post is merely the introduction to what I hope will be an ongoing series looking at headlines in science articles. For the most part it will be focussed on where they go wrong, although if I come across anything that strikes me as a particularly good headline for a subtle or difficult topic, I'll post about that too.

Often I see articles pointing out problems with a format, or institution, or society, without any suggestions as to what can be done. This can be useful, but obviously has its limits. So I'm going to try to think about how headlines can be better generated as I post each article about them. Hopefully I'll leave you with not just "This is a problem," but "This is a problem, and here's how it could have been done better."

So, now for some terrible headlines!

Newton, Gravity, and How People Weren't Total Idiots

There's an unfortunate reality about getting a university degree in science: you end up knowing essentially nothing about the history of science. I was reminded of this recently because there was a relatively large history of science conference happening in the city I live in, sponsored by the school I attend. You would think it might be of some interest to some people in the physics department. You would be wrong. It wasn't mentioned once in the numerous emails I get describing the events of interest going on each day, it wasn't discussed by any of the graduate students I ran into that week, and when I did bring it up, I got strange looks, as if to ask why someone in physics would care about the history of physics.

I'm not saying this state of affairs is all physics' fault; looking over the talks scheduled at this conference made me realize that the academics in the field, like those in all other fields, are mainly concerned with impressing their colleagues in their subfield, rather than building bridges across related disciplines. Still, it's sad, and these types of divisions mean, among other things, that you can get multiple degrees in science while maintaining a complete lack of understanding or appreciation of how your field came to be.

So hopefully I can do a small part, occasionally, to remedy the situation. Starting with Newton and gravity.

Isaac Newton, as everyone knows, invented gravity. Or discovered gravity. Something to do with gravity. There's two versions of the story. The first one, the one that is vaguely in the heads of non-scientists when they are asked about Newton, goes something like this: Newton was sitting on the ground one day when an apple fell on his head. This made him realize that gravity was a thing, so he told other people about it. They then realized that gravity was a thing and so declared Newton to be a genius.

This version of the story seems to imply that people back then were complete and utter idiots; that no one had ever noticed that things fall down, or commented on it, or thought about why this might be. Clearly, a little bit of thought shows that this cannot be a true story.

The version that you get in first year physics is more like this: Ha ha, normal people are dumb, there was no apple. Newton realized that gravity is a force that is proportional to the inverse of the square of the distance between two objects, and also to the objects' masses. That is why he is famous for gravity.

This, while closer to the truth (Newton did propose an inverse square law), doesn't fully explain Newton's fame and lasting influence. Hooke also proposed an inverse square law independently, and neither was the first person to make mathematical statements about gravity and the planets.

Newton's lasting influence arises from a bold claim he made with this theory (and others): that there is a single law of gravity, which applies to apples, and the Earth, and Mars, and Jupiter, and the Sun, and every single body that we can see, regardless of whether it lives in the heavens or the earth. It is this universal nature that sets Newton apart from the people who came before him, and it is that attitude that is perhaps his most influential contribution. Even if you don't remember a single law of motion, or how gravity works in a mathematical way, you know that things on Mars obey the same laws of physics as things on Earth, and you believe, without needing it to be proven, that if we ever sent a probe to a planet in another star system, the same laws of physics would apply there as well. That you believe that is Newton's most lasting contribution to science.
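
Newton's own consistency check, usually called the moon test, shows what taking that claim seriously buys you: the same inverse-square law that gives apples their 9.8 m/s^2 at the Earth's surface should, diluted by distance, reproduce the Moon's orbital acceleration. Here's a quick version with modern textbook values (the numbers, not the idea, are the assumption here):

```python
import math

g = 9.81                 # free-fall acceleration at Earth's surface, m/s^2
R_earth = 6.371e6        # Earth's radius, m
r_moon = 3.844e8         # average Earth-Moon distance, m
T_moon = 27.32 * 86400   # Moon's orbital period (sidereal month), s

# Inverse-square prediction: surface gravity diluted by (R_earth / r_moon)^2
predicted = g * (R_earth / r_moon) ** 2

# What the Moon actually does: centripetal acceleration a = 4 pi^2 r / T^2
observed = 4 * math.pi**2 * r_moon / T_moon**2

print(f"predicted: {predicted:.5f} m/s^2")  # ~0.00269
print(f"observed:  {observed:.5f} m/s^2")   # ~0.00272
```

One law, applied to an apple and to the Moon, agreeing to about one percent. That's the universality claim made concrete.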

Why does that matter? Well, for one thing, it's always worth remembering that people in the past didn't necessarily think the same way we do now, and there's a lot we take for granted that would have been foreign to them. Prior to Newton (and yes, I know I'm simplifying things by implying it was all due to him), the idea that the universe operated under a set of consistent rules that applied everywhere wouldn't have occurred to most people. In fact, if you go back far enough, you lose the distinction between the supernatural and the natural completely.

Secondly, in general I think that the more we educate ourselves about how science has worked, and how it works now, the better able we will be to make decisions about the many, many issues that science touches on today.

So there's the history lesson. For more on Newton, I. Bernard Cohen is a place to start. For more on pre-scientific world-views, the opening chapters of The Evolution of God offer a fantastic description.