Experiment Research
An experiment is a procedure carried out with the goal of verifying, refuting, or establishing the validity of a hypothesis. An experiment usually tests a hypothesis, which is an expectation about how a particular process or phenomenon works. However, an experiment may also aim to answer a "what-if" question, without a specific expectation about what the experiment will reveal, or to confirm prior results. If an experiment is carefully conducted, the results usually either support or disprove the hypothesis (Nash, 2003). The experiment is a situation in which a researcher attempts to objectively observe phenomena that are made to occur in a strictly controlled situation where one or more variables are varied and the others are held constant. According to some philosophies of science, an experiment can never "prove" a hypothesis; it can only add support. Similarly, an experiment that provides a counterexample can disprove a theory or hypothesis. An experiment must also control for possible confounding factors, so it is important to know which variable(s) you want to test and measure.
In experimental research, the researcher not only manipulates the independent variable; he or she also randomly assigns individuals to the various treatment categories (i.e., control and treatment).
What is an experiment, and what are its significant components? The definition says that we should attempt to make impartial and unbiased observations in the experimental situation. In short, objectivity is the ideal toward which experimenters strive, even though perfect objectivity is impossible to achieve.
In experiments, phenomena are made to occur. The phenomena are observable events; they are the conditions presented to the participants. Specifically, these phenomena or conditions are the levels of the independent variable that are made to occur (e.g., one group is given a pill and another group is given a placebo). The idea is that an experimental researcher does something and then observes the outcome. (Manipulation is the key defining characteristic of an experiment.)
Observations in the laboratory experiment are made under conditions set up and controlled by the researcher; if the experiment has multiple groups, the researcher attempts to standardize the conditions for all groups, with the only difference being that the groups receive different levels of the independent variable. The key idea is that the researcher tries to set up a situation in which the only systematic difference between the groups is that they received different levels of the independent variable.
The researcher attempts to hold all variables other than the independent variable constant. This is best done by, first, randomly assigning participants to the groups (which will "equate" the groups on all known and unknown variables at the beginning of the study) and, second, standardizing the conditions as much as possible so that the only difference that occurs during the experiment is the administration of the levels of the independent variable.
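As an illustration of what random assignment might look like in practice, here is a minimal Python sketch; the participant IDs and group sizes are invented for the example and are not from the original text.

```python
import random

# Hypothetical participant IDs (invented for illustration).
participants = [f"P{i:02d}" for i in range(1, 21)]

# Randomly assign each participant to a condition by shuffling the
# whole list and splitting it in half.
random.shuffle(participants)
midpoint = len(participants) // 2
control_group = participants[:midpoint]
treatment_group = participants[midpoint:]

print("Control:  ", control_group)
print("Treatment:", treatment_group)
```

Because every participant has the same chance of landing in either condition, known and unknown participant characteristics are expected to be spread evenly across the groups at the start of the study.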
Quasi-experimental research: The researcher does not randomly assign subjects to treatment and control groups. In other words, the treatment is not distributed among participants randomly. In some cases, a researcher may randomly assign one whole group to treatment and one whole group to control. In this case, quasi-experimental research involves using intact groups in an experiment, rather than assigning individuals at random to research conditions. Causal-comparative research: In this research, the groups are already formed. It does not meet the standards of an experiment because the independent variable is not manipulated.
A rule of thumb is that physical sciences, such as physics, chemistry, and geology, tend to define experiments more narrowly than social sciences, such as sociology and psychology, which conduct experiments closer to the wider definition.

Aims of Experimental Research
Experiments are conducted to be able to predict phenomena. Typically, an experiment is constructed to be able to explain some kind of causation. Experimental research is important to society; it helps us to improve our everyday lives.

Identifying the Research Problem
After deciding on the topic of interest, the researcher tries to define the research problem. This helps the researcher to focus on a narrower research area and to study it appropriately. Defining the research problem helps you to formulate a research hypothesis, which is tested against the null hypothesis. The research problem is often operationalized to define how to measure it. The results will depend on the exact measurements that the researcher chooses, and the problem may be operationalized differently in another study to test the main conclusions of the study.

Constructing the Experiment
There are various aspects to remember when constructing an experiment. Planning ahead ensures that the experiment is carried out properly and that the results reflect the real situation on the ground in the best possible way.

Sampling Groups to Study
Sampling groups correctly is especially important when we have more than one condition in the experiment. One sample often serves as a control while the others are tested under the experimental conditions. Deciding on the sample groups can be done using many different sampling techniques. Population sampling may be chosen by a number of methods, such as randomization, "quasi-randomization," and pairing. Researchers often adjust the sample size to minimize the chance of random errors. Some common sampling techniques: probability sampling, non-probability sampling, simple random sampling, convenience sampling, stratified sampling, systematic sampling, cluster sampling, sequential sampling, disproportional sampling, judgmental sampling, snowball sampling, and quota sampling.
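To make two of the listed techniques concrete, the sketch below draws a simple random sample and a proportionally stratified sample from a hypothetical population; the population, strata, and sample sizes are invented for illustration.

```python
import random

# Hypothetical population of 100 students split into two strata (invented data).
population = [{"id": i, "stratum": "urban" if i < 60 else "rural"} for i in range(100)]

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, k=20)

# Stratified sampling: sample each stratum in proportion to its size.
stratified_sample = []
for stratum in ("urban", "rural"):
    members = [p for p in population if p["stratum"] == stratum]
    share = round(20 * len(members) / len(population))
    stratified_sample.extend(random.sample(members, k=share))

print(len(simple_sample), len(stratified_sample))
```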
Creating the Design
The research design is chosen based on a range of factors. Important factors when choosing the design are feasibility, time, cost, ethics, measurement problems, and what you would like to test. The design of the experiment is critical for the validity of the results.

Typical Designs and Features in Experimental Design
Pretest-Posttest Design: Check whether the groups are different before the manipulation starts, and then check the effect of the manipulation. Pretests sometimes influence the effect.
Control Group: Control groups are designed to measure research bias and measurement effects, such as the Hawthorne Effect or the Placebo Effect. A control group is a group not receiving the same manipulation as the experimental group. Experiments frequently have two conditions, but rarely more than three conditions at the same time.
Randomized Controlled Trials: Randomized sampling, comparison between an experimental group and a control group, and strict control/randomization of all other variables.
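A minimal sketch of how the pretest-posttest, control-group logic might be checked is shown below. It assumes SciPy is available; the scores are invented, and a real analysis would need an adequate sample size and checks of the test's assumptions.

```python
from scipy import stats

# Invented pretest and posttest scores for a treatment and a control group.
treat_pre  = [52, 47, 55, 49, 51, 50, 48, 53]
ctrl_pre   = [50, 49, 54, 48, 52, 47, 51, 50]
treat_post = [61, 58, 66, 57, 63, 60, 59, 64]
ctrl_post  = [53, 50, 56, 49, 54, 48, 52, 51]

# 1. Were the groups comparable before the manipulation?
pre_t, pre_p = stats.ttest_ind(treat_pre, ctrl_pre)

# 2. Did the groups differ after the manipulation?
post_t, post_p = stats.ttest_ind(treat_post, ctrl_post)

print(f"Pretest difference:  t = {pre_t:.2f}, p = {pre_p:.3f}")
print(f"Posttest difference: t = {post_t:.2f}, p = {post_p:.3f}")
```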
Solomon Four-Group Design: The Solomon four-group design is an experimental design that assesses the plausibility of pretest sensitization effects, that is, whether the mere act of taking a pretest influences scores on subsequent administrations of the test. For example, if respondents complete a questionnaire measuring their knowledge of science as a pretest, they might then decide to seek answers to a few unfamiliar questions. At the posttest they might then score better on the science test compared to how they would have scored without taking the pretest. The design uses two control groups and two experimental groups; half of the groups take a pretest and half do not. This makes it possible to test both the effect itself and the effect of the pretest.
Double-Blind Experiment: Neither the researcher nor the participants know which group is the control group. The results can be affected if the researcher or participants know this.
Bayesian Probability: Using Bayesian probability to "interact" with participants is a more advanced experimental design. It can be used in settings where there are many variables that are hard to isolate. The researcher starts with a set of initial beliefs and adjusts them according to how participants have responded.
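One simple way to picture the Bayesian idea of starting with initial beliefs and adjusting them as participants respond is a beta-binomial update on a success rate. The prior and the response counts below are invented, and a real adaptive design would be considerably more elaborate.

```python
from scipy import stats

# Initial belief about a participant success rate: Beta(1, 1), i.e. uniform.
alpha, beta = 1.0, 1.0

# Invented batches of participant responses (successes, failures).
batches = [(7, 3), (12, 8), (18, 12)]

for successes, failures in batches:
    # Conjugate update: add observed counts to the prior parameters.
    alpha += successes
    beta += failures
    mean = alpha / (alpha + beta)
    lo, hi = stats.beta.ppf([0.025, 0.975], alpha, beta)
    print(f"Posterior mean {mean:.2f}, 95% credible interval ({lo:.2f}, {hi:.2f})")
```

After each batch the researcher's belief narrows, which is what allows the design to "interact" with participants as data come in.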
Pilot Study
It is wise to conduct a pilot study or two before you run the real experiment. This ensures that the experiment measures what it should and that everything is set up correctly. If the experiment involves humans, a common strategy is first to run a pilot study with someone involved in the research, but not too closely, and then to arrange a pilot with a person who resembles the subject(s). These two different pilots are likely to give the researcher good information about any problems in the experiment.
Conducting the Experiment
Identifying and controlling non-experimental factors that the researcher does not want to influence the effects is crucial to drawing a valid conclusion. This is often done by controlling variables, if possible, or randomizing variables to minimize effects that can be traced back to third variables. Researchers only want to measure the effect of the independent variable(s) when conducting an experiment, allowing them to conclude that this was the reason for the effect. Experiments are more often quantitative than qualitative in nature, although qualitative experiments do occur.

Examples of Experiments
One important feature that distinguishes experimental research from correlational research is that instead of simply measuring two variables, the researcher manipulates one of them. This means that the experimenter actually changes the value of that variable in a systematic way. This variable, which is called the independent variable, is the one that the researcher believes is the cause. The other variable, which the researcher believes is the effect, is called the dependent variable. For example, you could do a correlational study on the relationship between noise level and concentration by going to a variety of places, measuring the noise levels there, and giving people a task that requires concentration. Or you could do an experiment by setting up a situation in which you could manipulate the noise level, perhaps by making it really loud for one group of people and really soft for another. And of course you could give them a task that requires concentration, and their performance on this task would be the dependent variable.

Control of Extraneous Variables
The second feature that distinguishes experimental research from correlational research is the control of extraneous variables. Extraneous variables are basically all variables other than those you are interested in for the purposes of your research. In an experiment on the effects of noise on concentration, there is an infinite number of extraneous variables: the age and sex of the research participants, whether or not they have eaten recently, the temperature of the room, the time of day, and so on. To control extraneous variables means to keep their values or levels as similar as possible across the different values or levels of your independent variable.

Confounding Variables
An extraneous variable that differs systematically across conditions is called a confounding variable. It is important to see the difference between extraneous variables and confounding variables. For example, in an experiment on the effectiveness of cognitive psychotherapy for treating depression, the independent variable is whether or not patients get the psychotherapy, and the dependent variable is how much they improve.

The Limitations of Experiments
The obvious advantage of experimental research is that it provides stronger evidence for causal claims. It does, however, have at least two limitations. The first is that sometimes you cannot do an experiment because you cannot manipulate the independent variable, either for practical or ethical reasons. For example, if you are interested in the effects of a person's culture on their tendency to help strangers, you cannot do an experiment. Why not? You cannot manipulate a person's culture. Or if you are interested in how damage to a certain part of the brain affects behavior, you cannot do an experiment. Why not? You cannot go around damaging people's brains to see what happens. In such cases, correlational research is the only alternative. The second limitation of experimental research is that sometimes controlling extraneous variables means creating situations that are somewhat artificial. A good example is provided by research on the effect of smiling on first impressions. To control extraneous variables, people are typically brought into a laboratory and asked standard questions about a small number of posed stimulus photographs. It is legitimate to ask, however, whether the effect of smiling is likely to be the same out in the "real world," where people are actually interacting with each other.

There are several common threats to internal validity in experimental research. Some include the following:
Loss of Subjects (Mortality) -- All of the high- or low-scoring subjects may have dropped out or been missing from one of the groups. If we collected posttest data on a day when the honor society was on a field trip at the treatment school, the mean for the treatment group would probably be much lower than it really should have been.
Location -- Perhaps one group was at a disadvantage because of its location. The city may have been demolishing a building next to one of the schools in our study, and the constant distractions interfere with our treatment.
Instrumentation/Instrument Decay -- The testing instruments may not be scored similarly. Perhaps the person grading the posttest is fatigued and pays less attention to the last set of papers reviewed. It may be that those papers are from one of our groups and will receive different scores than the earlier group's papers.
Data Collector Characteristics -- The subjects of one group may react differently to the data collector than the other group. A male interviewing males and females about their attitudes toward a type of math instruction may not receive the same responses from females as a female interviewing females would.
Data Collector Bias -- The person collecting data may favor one group, or some characteristic that some subjects possess, over another. A principal who favors strict classroom management may rate students' attention under different teaching conditions with a bias toward one of the teaching conditions.
Testing -- The act of taking a pretest or posttest may influence the results of the experiment. Suppose we were conducting a unit to increase student sensitivity to prejudice. The pretest may have actually increased both groups' sensitivity, and we find that our treatment group didn't score any higher on a posttest given later than the control group did. If we hadn't given the pretest, we might have seen differences between the groups at the end of the study.
History -- Something may happen at one site during our study that influences the results. Perhaps a classmate dies in a car accident at the control site for a study teaching children bike safety. The control group may actually demonstrate more concern about bike safety than the treatment group.
Maturation -- There may be natural changes in the subjects that can account for the changes found in a study. A critical thinking unit may appear more effective if it is taught during a time when children are developing abstract reasoning.
Hawthorne Effect -- The subjects may respond differently just because they are being studied. The name comes from a classic study in which researchers were studying the effect of lighting on worker productivity. As the intensity of the factory lights increased, so did worker productivity.
Resentful Demoralization of the Control Group -- The control group may become discouraged because it is not receiving the special attention given to the treatment group. They may perform lower than usual because of this.
Regression -- A class that scores particularly low can be expected to score slightly higher just by chance. Likewise, a class that scores particularly high will tend to score slightly lower by chance. The change in these scores may have nothing to do with the treatment (see the sketch after this list).
Implementation -- The treatment may not be implemented as intended. A study in which teachers are asked to use student modeling techniques may not show positive results, not because modeling techniques don't work, but because the teachers didn't implement them or didn't implement them as they were designed.
Compensatory Equalization of Treatment -- Someone may feel sorry for the control group because it is not receiving much attention and give it special treatment. For example, a researcher could be studying the effect of laptop computers on students' attitudes toward math. The teacher feels sorry for the class that doesn't have computers and sponsors a popcorn party during math class. The control group begins to develop a more positive attitude about mathematics.
Experimental Treatment Diffusion -- Sometimes the control group actually implements the treatment. If two different techniques are being tested in two different third-grade classes in the same building, the teachers may share what they are doing. Unconsciously, the control teacher may use some of the techniques she or he learned from the treatment teacher.
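The regression threat noted above can be demonstrated with a short simulation: scores are generated as a stable "true ability" plus random noise, the lowest scorers on the first test are selected, and their average rises on a second test even though nothing was done to them. All numbers are invented for illustration.

```python
import random

random.seed(1)

# Each student has a stable true ability; each test adds independent noise.
students = [random.gauss(70, 8) for _ in range(200)]
test1 = [ability + random.gauss(0, 10) for ability in students]
test2 = [ability + random.gauss(0, 10) for ability in students]

# Select the 30 lowest scorers on the first test (no treatment is applied).
lowest = sorted(range(len(test1)), key=lambda i: test1[i])[:30]

mean1 = sum(test1[i] for i in lowest) / len(lowest)
mean2 = sum(test2[i] for i in lowest) / len(lowest)
print(f"Selected group: test 1 mean = {mean1:.1f}, test 2 mean = {mean2:.1f}")
```

The apparent improvement comes purely from chance in the selection, which is why intact groups chosen for extreme scores can appear to change even without a treatment.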
Controlled experiments can be performed when it is difficult to exactly control all the conditions in an experiment. In this case, the experiment begins by creating two or more sample groups that are probabilistically equivalent, which means that measurements of traits should be similar among the groups and that the groups should respond in the same manner if given the same treatment. This equivalency is determined by statistical methods that take into account the amount of variation between individuals and the number of individuals in each group. In fields such as microbiology and chemistry, where there is very little variation between individuals and the group size is easily in the millions, these statistical methods are often bypassed, and simply splitting a solution into equal parts is assumed to produce identical sample groups.

Natural Experiments
The term "experiment" usually implies a controlled experiment, but sometimes controlled experiments are prohibitively difficult or impossible. In this case researchers resort to natural experiments, or quasi-experiments. Natural experiments rely solely on observations of the variables of the system under study, rather than manipulation of just one or a few variables as occurs in controlled experiments. To the degree possible, they attempt to collect data for the system in such a way that the contribution from all variables can be determined, and where the effects of variation in certain variables remain approximately constant so that the effects of other variables can be discerned. The degree to which this is possible depends on the observed correlation between explanatory variables in the observed data (a minimal diagnostic of this kind is sketched at the end of this section). When these variables are not well correlated, natural experiments can approach the power of controlled experiments. Usually, however, there is some correlation between these variables, which reduces the reliability of natural experiments relative to what could be concluded if a controlled experiment were performed. Also, because natural experiments usually take place in uncontrolled environments, variables from undetected sources are neither measured nor held constant, and these may produce illusory correlations in the variables under study.

Field Experiments
Field experiments are so named to draw a contrast with laboratory experiments, which enforce scientific control by testing a hypothesis in the artificial and highly controlled setting of a laboratory. Often used in the social sciences, and especially in economic analyses of education and health interventions, field experiments have the advantage that outcomes are observed in a natural setting rather than in a contrived laboratory environment. For this reason, field experiments are sometimes seen as having higher external validity than laboratory experiments. However, like natural experiments, field experiments suffer from the possibility of contamination: experimental conditions can be controlled with more precision and certainty in the lab. Yet some phenomena (e.g., voter turnout in an election) cannot be easily studied in a laboratory.
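As noted in the natural-experiments passage, how well the effects of different variables can be separated depends on the observed correlation between the explanatory variables. The sketch below, using NumPy, computes that correlation as a first diagnostic; the two variables and their values are invented for illustration.

```python
import numpy as np

# Invented observations of two explanatory variables from a natural experiment,
# e.g. hours of instruction and family income recorded for the same cases.
hours_of_instruction = np.array([10, 12, 9, 15, 14, 8, 11, 13, 16, 10])
family_income = np.array([32, 41, 30, 52, 47, 28, 36, 45, 55, 35])

# Pearson correlation between the explanatory variables: values near 0 suggest
# the natural experiment can approach a controlled comparison, while values
# near +/-1 mean their separate effects are hard to disentangle.
r = np.corrcoef(hours_of_instruction, family_income)[0, 1]
print(f"Correlation between explanatory variables: r = {r:.2f}")
```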
Conclusion
Although experiments are widely recognized as the method of choice for determining the effects of an instructional intervention, they are subject to limitations involving method and theory. First, concerning method, the requirements for random assignment, experimental control, and appropriate measures can impose artificiality on the situation. Perfectly controlled conditions are generally not possible in authentic environments; thus, there may be a tradeoff between experimental rigor and practical authenticity, in which highly controlled experiments may be too far removed from real contexts. Experimental researchers should be sensitive to this limitation by incorporating mitigating features in their experiments that maintain validity.