Table of Contents
Section 1 –
Introduction
Definition of Terms
Background
Section 2 –
Philosophical Problems for Science in the 20th Century
Demarcation: The Line Between What Is Science and What Is Not
Falsification and Induction
Section 3 –
Theoretical Problems for Science in the 20th Century
Constructivism
Section 4 –
Solutions in Philosophy and Theory
Section 5 –
Conclusion
Section 1 – Introduction
Science and its methods suffered from a full spectrum of extremism in the 20th century. Scientists in the early 1900s operated with an overly austere view of what defined their discipline. The prevailing philosophy of the time, now regarded as the 'empiricist' philosophy, was principally represented by a group called the Vienna Circle. In the decades following the turn of the century, science was forced to deal with attacks directed at the scientific method and doubts about the justification of theories, challenges that affected both the philosophy of science and the social interpretations of the discipline.
The rigid and restrictive grasp of the empiricists was gradually loosened by powerful theories put forth by philosophers who challenged conventional thinking about science, namely the theories championed by Karl Popper, W. V. Quine, and Thomas Kuhn. As these theories gained adherents throughout the scientific fields, the pendulum of sentiment swung away from the strict views held by the Vienna Circle toward a more moderate position, and in some ways closer to the metaphysical principles of earlier centuries, such as those of Francis Bacon and René Descartes. (Descartes felt that even if everyone were to agree on something, like the Ptolemaic theory of the universe, it might still be a deception.)
Eventually some of the looser practitioners focused so intently on the shortcomings of the scientific method, and on whether we should believe that science provides true accounts of our world, that they pushed the pendulum past the point of common sense, swinging beyond the center point of balance and overcorrecting into the other extreme, a range where relativism, realism, and constructivism make very different assertions about science and theory.
The thesis of this essay maintains that humans can understand reality and judge whether theories are adequate by using the best parts of science, which are sufficiently supported by evidence. This allows for the belief that science is and can be empirically successful without automatically requiring the belief that the truths of theories must always be perfect.
Definition of Terms
Ampliative rules: rules of inference that go beyond the given information while still providing justification for the inferred conclusion.
Constructivism: The constructivist concept of rationality involves conscious analysis and the deliberate design of models or rules. The models classify individual behaviors in order to explain general behavior. It is neo-classical, but not inherently inconsistent with or in opposition to Vernon Smith's 'ecological' form of rationality. The two are different ways of understanding behavior that work together.
Empiricism: The benchmark philosophy of science in the years around 1900, under which hypotheses would only be accepted in austere circumstances: cold hard facts, confirmed and verified through deductive testing, were treated as objective observations involving universal laws of nature.
Falsification: Karl Popper suggested that the demarcation line for science could be found by falsifying theories instead of trying to verify them. Scientific theories therefore need to contain something that can actually be disputed. The position "Cherry pie is good" is not falsifiable.
Induction: Considered the biggest problem in finding scientific criteria for theory choice. The problem of induction predates the 1800s; it is deeply philosophical and tricky to comprehend. Technically, induction is a cognitive process that involves statistical laws, or conditional probability. An interesting place to start when setting out to understand induction is the "Monty Hall" problem, where pigeons in laboratory tests learn from experience to switch doors, but humans do not.
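As an aside, the counterintuitive conditional probability behind the Monty Hall problem can be checked empirically, which is itself a small exercise in learning from experience. The following short simulation is an illustrative sketch (the function name and structure are my own, not from the original text); it plays the game many times with each strategy and reports the win rates.

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)    # prize hidden behind one of three doors
        choice = random.randrange(3)   # contestant's initial pick
        # Host opens a door that is neither the contestant's pick nor the prize.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            # Switch to the one remaining unopened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")   # ≈ 1/3
print(f"switch: {monty_hall(switch=True):.3f}")    # ≈ 2/3
```

Running the simulation shows that switching wins roughly two thirds of the time, the result the pigeons converge on by experience and that humans typically resist.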
Realism: an overly loose interpretation of intangible, unobservable things, to the extent that they are considered objective items of evidence in every case. Even if they are independent of accepted concepts, they still make for empirical theory, and belief in them is still required for coherent science. In one version of realism, the success of science is put forth as the proof of its objectivity. Historically, however, science has not been so successful; in fact, it has often been the opposite.
Underdetermination: the Duhem-Quine (D-Q) thesis. D-Q has two components: 1) there are too many unknowns for evidence ever to be sufficient for us to identify what belief we should hold or what decision we should make between rival choices, so theories must remain unsupported; 2) a small theory can never be isolated and tested by itself; if a small theory appears to fail a test, the entire corporate body of theory, or the test, or the scientist must be called into question, not the small theory alone.
Background of Philosophy
As described in the introduction, science held to an extremely narrow concept and rigid interpretation of scientific procedure at the beginning of the 1900s. The indisputability of facts was a paramount virtue of clear-cut reasoning and exacting rationality. Only unmistakable evidence could be used in investigations to discover rules and laws. Laws for prediction and truth are what distinguished science, and the activities of science sat above this line of demarcation. This overly strict philosophy hampered practitioners' efforts to understand the world around them. Skeptics and critics of empiricism claimed that the true nature of testing is limited, since theories never find perfect "truths," and that empiricism failed to detect this very deviation between itself and reality.
Background of Theory
After the Renaissance, human knowledge developed to the point where it established itself as a full and authentic partner to reality. Humankind came to trust that any subject could be credibly understood if the activities of science and technology followed the systematic discovery of evidence. Intellectual communities received increasing support, gradually replacing the old-world way of using the senses as inputs and then haphazardly constructing a belief from there. In this way science and technology eventually became institutionalized in the twentieth century. At the apex of this scientific heyday, the Vienna Circle permitted only the narrowest of definitions of what constituted a valuable hypothesis. Scientists and laypersons could accept them or not; there was no middle ground. Nor was there any need to postulate hidden entities: the Circle did not want the rules of the universe to trail off into an infinite string of explanations.
Popper advocated an innovative way to identify the products of science, and argued that scientific inferences do not use induction. His theory loosened up the structure of what constituted the infamous demarcation point.
Kuhn wrote that everything is relative to the culture or time period in which it exists, and that the one thing we do know for sure is that science will be rewritten in the future. Kuhn proposed that the context of time breaks the line-of-descent model in which old science serves as the foundation for newer science; that two different periods of science are not comparable; and he acknowledged the existence of subjective elements within science.
From there we came to see science's dependency on theory: science can never escape its relationship with theory, because even the laws of science will change over time, or at least be conceived differently from one society to another. From this outlook, science depends on theory as a setup or precursor for the scientific method. In light of this dependency, social scientists highlighted various troublesome issues in scientific practice, such as conflicting evidence, partial evidence, and weird evidence, and used these issues to critique the scientific method.
Larry Laudan proposed splitting the activity of problem solving from the concept of the solution. In this perspective, effective problem solving remains a rational activity, while what counts as a solution is allowed to be relative; in this way Laudan found an answer to a major problem in determining the acceptability of a theory.
Section 2 – Philosophical Problems for Science
The how-to component of justifying a belief is the most important epistemic problem for scientific investigation, and it is equally the problem at the heart of induction. Science entered the 1900s with the pre-existing problem of induction stuck like a thorn in its side, one it had carried around for hundreds of years. David Hume explicated the problem in his mid-eighteenth-century works, and it has been seen as a major obstacle for science ever since.
A second serious challenge for science surfaced as more attention turned to the fact that every theory or at least some parts of theories are eventually found to be inadequate or wrong.
In a third challenge, we came to face the fact that scientific methodology, like every activity with humans as practitioners, works in ways we do not exactly understand. Although the empiricists in the Vienna Circle attempted to deny it, science in practice involves social aspects that are subjective, and a general method for obtaining 'correct' conclusions through objective investigation will not always follow some universal recipe for explaining the world. Every person has a unique set of principles; we can each look at the same data and come to different conclusions, and science has proven unable to escape this 'problem.'
Demarcation: For Rudolf Carnap, Carl G. Hempel, and the Vienna Circle, the demarcation line established a solid baseline for the reputation of scientific methods, separating science's concrete evidence from everything else below it. They were deeply committed to observations and measurements that could be used to formulate laws with predictive power, and these bullet-proof rules were the backbone of their model of science. Empiricists were especially enamored of the predictive power of a rule or law.
Falsification and Induction: Popper's solution to demarcation suggested we not worry about confirmation and instead focus on falsifying a theory. Popper argued that since we are limited to finite sets of observations, anything can technically be confirmed using induction; he held that true scientific critique uses only deduction, not induction. Unfortunately, we cannot simply deny that we use induction. Wesley Salmon writes that with Popper's falsification we would be stuck in a situation with infinite conjectures, and, according to Salmon, Popper's ideas on close examination contain circular runarounds. As summarized by Scott Scheall at Arizona State University: "we cannot use a conjecture's degree of corroboration as a measure of its reasonableness as a basis for prediction. To do so would be to let induction in through the back door and we would again be saddled with the problem of induction. In other words, a conjecture's degree of corroboration tells us how well it has performed with respect to past predictive tests, but it tells us nothing (logically) about how it will perform in future tests."
Thus Popper's falsification, with its contingent sub-premises of conjecture and corroboration, fails to specify a demarcation for science or to formalize the scientific method much, if any, better than past attempts. Laudan brings final clarification to the discussion, however, noting that we never need an assertion to be true in a perfect sense in order to accept it; justification for induction is simply not required.
Section 3 – Theoretical Problems for Science
As described above, Popper proposed falsification as the solution to the problem of induction, but the D-Q thesis of underdetermination shows that falsification is not a workaround for that problem. D-Q declares, first, that the procedures one would use to falsify theories are ambiguous, and second, that we can only falsify an entire corporate body of theory, not a single small theory in isolation. Later theorists then expanded on the weaknesses identified by D-Q, interpreting it as showing that rules or "as-if" rationalities are impossible.
In his attempt to loosen the overly strict grip of empiricist philosophy on science and provide guidance in deciding what theory to follow, Kuhn championed the idea that demarcation is only relevant within normal science, and that what makes a theory scientific is the absence of debate over theories; only when critiques are silent are we experiencing science. Kuhn saw two distinct periods of scientific activity: normal science, which makes up the vast majority of the time, and the very rare revolutionary periods, during which alone Popper's falsification would be useful for demarcation. He also saw any challenge to a theory as necessarily directed at the scientist, not at the paradigm itself. Kuhn agreed with D-Q in this respect, but whereas D-Q underdetermination treats paradigms as more or less static and permanent, for Kuhn neither the standards of evaluation nor the conditions in the field are permanent; they are always changing.
Changing scientific evidence causes problems for anyone who wants to adhere to a particular theory. Imagine a person who, based on the existing knowledge and theories of food science, decides to eat fish for the omega-3 acids that are good for the heart, or to exclude fish from the diet because of its mercury content. To then hear of a new study determining that those same omega-3 acids are apparently bad for the prostate, while the trans fats thought to be bad for the heart are good for the prostate, calls the whole paradigm of food science into question.
People operate with some kind of personal philosophy, whether to believe in no theory at all or in some theory in particular, and might at this point find themselves with a freezer full of fish they no longer wish to eat, because science has decided that "healthy eating may be a much more complicated matter than nutritionists previously realized" (The Week, 2011).
The D-Q principles (which advise that theories must remain unsupported), Bacon's analysis that almost nothing is a full treatment of a subject for everyone (and that there is no single question on which all people can agree on the answer), and various misinterpretations of the critiques of empiricism and of Popper combined to encourage unlicensed promotions of constructivism, realism, or relativism by Bruno Latour, Paul Feyerabend, and several others.
Laudan corrects the D-Q/Kuhn inseparability-of-paradigm pyramid structure by replacing it with a web structure, weakening D-Q nearly to the point of rendering it moot. Laudan also liberalized the standard view of paradigms as static systems: he explained that they are always comparative, subject to change, and dependent on the circumstances of their context. Determining whether certain criteria are more important than others is not a straightforward process, but we have no reason to entertain unbalanced concepts like relativism while we still have common sense at our disposal. Laudan also clarifies that induction is really not such a big problem once ampliative rules of evidence are incorporated.
Constructivism runs into problems in the social sciences because social theories are composites; they assemble constructed parts into wholes and into schemes of relationships that are interpretations, but they are not able to do more than that. The constructed models leave out some of the parts. They are schemes that connect distinct, single things using relationships we understand; they create wholes, but this does not make them factual. We report on them using terms like 'New York City' that do not have sharp, precise definitions, because they may have a variety of properties. For example, a problem for the prominent social science of economics is that, without consciously constructed models, it cannot explain how people go from a starting point, through practice in self-regulated systems, to finding equilibrium in personal exchange. Yet the constructed model does not predict the higher level of cooperation or reciprocity that actually takes place in the market. Studying behavior, we see people use their unconsciously learned experience when they need to make spontaneous moves; they dynamically figure out what car insurance to buy or how to evaluate university ranking matrices, either without, or together with, the instructions in the constructed schemes; so the schemes often have little legitimate purpose or are redundant.
Section 4 – Solutions in Philosophy and Theory
Laudan fixed the problems introduced by loose interpretations of D-Q by clarifying that science is neither as static nor as inseparable as D-Q posits, and he split the overused concept of "theory" into big and little theories, where the big ones function as tools and the little constituents do the solving of problems. Thanks to Laudan's perspective we have a clearer picture of the formality of the scientific method and clarity on how we can choose between theories.
People need to be able to understand reality and judge whether theories are true and whether evidence is real. This is more difficult when the subject of discussion or observation involves something as invisible as the chains of bondage in Stockholm syndrome. At the opposing poles of an ongoing argument over whether to believe in invisible entities before they have been technically verified, realists and empiricists hold firm beliefs on when an unobservable can be considered real. Bas van Fraassen offers an agnostic discourse on particles too small to see, noting that the best available explanation is often good enough as a representation of the truth; most importantly, he recommends taking unobservables on a case-by-case basis. Decisions on invisible particles and unobservables matter when we consider situations involving forensic science testimony, DNA, and other evidence that jurors may not fully appreciate but which has the power to put people in prison. Jurors often expect science to be responsible for solving the case, when in fact forensic evidence is occasionally found to be invalid (Begley, 2010).
In a 1998 scientific paper published by the esteemed medical journal The Lancet, author Andrew Wakefield linked the childhood vaccine MMR to an increased risk of autism in children. Thirteen years later, after much debate, scientific exploration and reexamination, and a plethora of class action lawsuits, the link has been discredited and the author vilified for both “bad science” and for perpetrating a fraud. But the damage caused by the claim is hard to undo. Despite scientific evidence to the contrary, many people still believe that childhood vaccination is a confirmed major cause of autism. While it is acknowledged that vaccines can, on rare occasions, cause severe side effects, the U.S. Institute of Medicine rejects the link between vaccination and autism.
Common sense dictates that we not get hung up on the distinction between truth and what is useful; we can commit to a level just short of literal truth and accept approximation as a weak but necessary value for scientific claims. The position from which science can move forward is simply to be the best at solving problems. Adequacy is fine for this; it is reliable and economical, like the neighborhood play at second base. From this position scientists can referee cognitive practices and judge when invisible entities are acceptable, because they can observe when entities are used in, or for, good theories.
Section 5 – Conclusion
Science is simply a belief, like religion. No one-size-fits-all regulations or broad views work for the man on the street; life is not a carrot-or-stick situation. Science remains the best alternative we have for knowledge and description of the world, and the social aspects of scientific practice and concrete evidence are both factors in determining preferences. If we do not try to take either one too far, technology will continue to pull science into balance, and we might find we have both the carrot and the stick.
Tension remains between followers of Darwinian doctrine and followers of religious doctrines because of differences on conceptual grounds. A young person may try to decide between Darwin and St. Peter, or between industrial progress and environmental protection. Are they to throw up their hands? No: they can understand reality and judge whether theories are true and whether evidence is real, with help from empirically successful science and technology.
Begley, S. (2010). But it works on TV! Forensic 'science' often isn't. Newsweek: Science, p. 26.
Curd, M. & Cover, J. A. (1998). Philosophy of Science: The Central Issues. New York: Norton & Company.
The Week. (2011). Health scare of the week. News: Health & Science. The Week: The Best of the U.S. and International Media, p. 21.