Experimental Research in Education

 

Dr. V.K. Maheshwari, Former Principal

K.L.D.A.V (P. G) College, Roorkee, India

Experimental research is a method in which the researcher manipulates one variable while controlling the rest. A process, treatment, or program is introduced, and its outcome is observed.

Commonly used in sciences such as sociology, psychology, physics, chemistry, biology and medicine, experimental research is a collection of research designs that use manipulation and controlled testing to understand causal processes. One or more variables are manipulated to determine their effect on a dependent variable.

Experimental research is thus a systematic and scientific approach in which the researcher manipulates one or more variables, and controls and measures any change in other variables.

The aim of experimental research is to predict phenomena. In most cases, an experiment is constructed so that some kind of causation can be explained. Experimental research is helpful for society as it helps improve everyday life.

In other words, the researcher controls certain variables and manipulates others in order to observe whether the manipulation directly caused the particular outcome.

Experimental researchers test an idea (or practice or procedure) to determine its effect on an outcome. Researchers decide on an idea with which to “experiment,” assign individuals to experience it (and have some individuals experience something different), and then determine whether those who experienced the idea or practice performed better on some outcome than those who did not experience it.

Experimental research is used where:

  • there is time priority in a causal relationship (the cause precedes the effect);
  • there is consistency in the causal relationship (the cause regularly leads to the effect);
  • the magnitude of the correlation is great.

Key Characteristics of Experimental Research

Today, several key characteristics help us recognize and understand experimental research.

  • Experimental researchers randomly assign participants to groups or other units.
  • They provide control over extraneous variables to isolate the effects of the independent variable on the outcomes.
  • They physically manipulate the treatment conditions for one or more groups.
  • They then measure the outcomes for the groups to determine if the experimental treatment had a different effect than the non-experimental treatment.
  • This is accomplished by statistically comparing the groups.
  • Overall, they design an experiment to reduce the threats to internal validity and external validity.
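The statistical comparison in the steps above is usually a two-sample significance test on the outcome measure. A minimal sketch in Python, using Welch's t statistic (the post-test scores and group sizes below are invented for illustration; a real analysis would also compute degrees of freedom and a p-value):

```python
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's t statistic for comparing two independent group means."""
    na, nb = len(group_a), len(group_b)
    se = (variance(group_a) / na + variance(group_b) / nb) ** 0.5
    return (mean(group_a) - mean(group_b)) / se

treatment = [78, 85, 90, 74, 88, 81]  # hypothetical post-test scores
control   = [70, 75, 80, 72, 77, 74]
print(round(welch_t(treatment, control), 2))  # prints 2.77
```

A large positive t value suggests the treatment group outperformed the control group by more than chance alone would explain.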

Unique Features of Experimental Method

“The best method — indeed the only fully compelling method — of establishing causation is to conduct a carefully designed experiment in which the effects of possible lurking variables are controlled. To experiment means to actively change x and to observe the response in y.”

“The experimental method is the only method of research that can truly test hypotheses concerning cause-and-effect relationships. It represents the most valid approach to the solution of educational problems, both practical and theoretical, and to the advancement of education as a science.”

The unique features of the experimental method form a logical sequence:

  1. Problem statement ⇒ theory ⇒ constructs ⇒ operational definitions ⇒ variables ⇒ hypotheses.
  2. The research question is stated as an alternative hypothesis to the null hypothesis, which is used to interpret differences in the empirical data.
  3. Random sampling of subjects from the population ensures the sample is representative of the population.
  4. Random assignment of subjects to treatment and control (comparison) groups ensures equivalency of the groups; i.e., unknown variables that may influence the outcome are equally distributed across groups.
  5. The investigator manipulates a variable directly (the independent variable).
  6. Extraneous variables are controlled by steps 3 and 4, and by other procedures if needed.
  7. After treatment, the performance of subjects on the dependent variable is compared across groups.
  8. Empirical observations based on experiments provide the strongest argument for cause-effect relationships.

Key Components of Experimental Research Design

The Manipulation of Predictor Variables

In an experiment, the researcher manipulates the factor that is hypothesized to affect the outcome of interest. The factor that is being manipulated is typically referred to as the treatment or intervention. The researcher may manipulate whether research subjects receive a treatment.

Random Assignment

  • Study participants are randomly assigned to different treatment groups
  • All participants have the same chance of being in a given condition

Random assignment neutralizes factors other than the independent and dependent variables, making it possible to directly infer cause and effect.
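A minimal sketch of random assignment in Python (the participant names and group labels are hypothetical):

```python
import random

def randomly_assign(participants, groups=("treatment", "control")):
    """Shuffle the pool, then deal participants round-robin into groups,
    giving every participant the same chance of each condition."""
    pool = list(participants)
    random.shuffle(pool)
    return {g: pool[i::len(groups)] for i, g in enumerate(groups)}

random.seed(42)  # fixed seed only so the illustration is reproducible
assignment = randomly_assign(["P1", "P2", "P3", "P4", "P5", "P6"])
print({g: len(members) for g, members in assignment.items()})
```

Because the pool is shuffled before being dealt out, no characteristic of a participant can influence which condition he or she lands in.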

Random Sampling

Traditionally, experimental researchers have used convenience sampling to select study participants. However, as research methods have become more rigorous, and the problems with generalizing from a convenience sample to the larger population have become more apparent, experimental researchers are increasingly turning to random sampling. In experimental policy research studies, participants are often randomly selected from program administrative databases and randomly assigned to the control or treatment groups.
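The policy-research workflow described above, randomly selecting from an administrative database and then randomly assigning, can be sketched as follows (the database contents and sample sizes are hypothetical):

```python
import random

# Hypothetical program administrative database of participant IDs
database = [f"ID{n:03d}" for n in range(500)]

random.seed(7)                        # fixed seed for a reproducible illustration
sample = random.sample(database, 40)  # random sampling: who enters the study
random.shuffle(sample)                # random assignment: which condition they get
treatment, control = sample[:20], sample[20:]
```

Note the two distinct random steps: sampling supports generalization to the population, while assignment supports causal inference between the groups.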

Validity of Results

The two types of validity of experiments are internal and external. It is often difficult to achieve both in social science research experiments.

Internal Validity

  • When an experiment is internally valid, we are certain that the independent variable (e.g., child care subsidies) caused the outcome of the study (e.g., maternal employment).
  • When subjects are randomly assigned to treatment or control groups, we can assume that the independent variable caused the observed outcomes, because the two groups should not have differed from one another at the outset of the study.

One potential threat to internal validity in experiments occurs when participants either drop out of the study or refuse to participate in the study. If particular types of individuals drop out or refuse to participate more often than individuals with other characteristics, this is called differential attrition.

External Validity

  • External validity is also of particular concern in social science experiments
  • It can be very difficult to generalize experimental results to groups that were not included in the study
  • Studies that randomly select participants from the most diverse and representative populations are more likely to have external validity
  • The use of random sampling techniques makes it easier to generalize the results of studies to other groups

Ethical Issues in Experimental Research

Ethical issues in conducting experiments relate to withholding the experimental treatment from some individuals who might benefit from receiving it, and to the disadvantages that might accrue from randomly assigning individuals to groups: such assignment overlooks the potential need of some individuals for the beneficial treatment. Ethical issues also arise as to when to conclude an experiment, whether the experiment will provide the best answers to a problem, and the stakes involved in conducting it.

It is particularly important in experimental research to follow ethical guidelines.

The basic ethical principles are:

  • Respect for persons — requires that research subjects are not coerced into participating in a study and requires the protection of research subjects who have diminished autonomy
  • Beneficence — requires that experiments do not harm research subjects, and that researchers minimize the risks for subjects while maximizing the benefits for them.

Validity Threats in Experimental Research

By validity “threat,” we mean only that a factor has the potential to bias results. In 1963, Campbell and Stanley identified different classes of such threats.

  • Instrumentation. Inconsistent use is made of testing instruments or testing conditions, or the pre-test and post-test are uneven in difficulty, suggesting a gain or decline in performance that is not real.
  • Testing. Exposure to a pre-test or intervening assessment influences performance on a post-test.
  • History. This validity threat is present when events, other than the treatments, occurring during the experimental period can influence results.
  • Maturation. During the experimental period, physical or psychological changes take place within the subjects.
  • Selection. There is a systematic difference in subjects’ abilities or characteristics between the treatment groups being compared.
  • Diffusion of Treatments. The implementation of a particular treatment influences subjects in the comparison treatment.
  • Experimental Mortality. The loss of subjects from one or more treatments during the period of the study may bias the results.

In many instances, validity threats cannot be avoided. The presence of a validity threat should not be taken to mean that experimental findings are inaccurate or misleading. Knowing about validity threats gives the experimenter a framework for evaluating the particular situation and making a judgment about its severity. Such knowledge may also permit actions to be taken to limit the influences of the validity threat in question.

Planning a Comparative Experiment in Educational Settings

Educational researchers in many disciplines are faced with the task of exploring how students learn and are correspondingly addressing the issue of how to best help students do so. Often, educational researchers are interested in determining the effectiveness of some technology or pedagogical technique for use in the classroom. Their ability to do so depends on the quality of the research methodologies used to investigate these treatments.

 

 
Types of Experimental Research Designs

There are three basic types of experimental research designs:

  1. True experimental designs
  2. Pre-experimental designs
  3. Quasi-experimental designs

The degree to which the researcher assigns subjects to conditions and groups distinguishes the type of experimental design.

True Experimental Designs

True experimental designs are characterized by the random selection of participants and the random assignment of the participants to groups in the study. The researcher also has complete control over the extraneous variables. Therefore, it can be confidently determined that the effect on the dependent variable is directly due to the manipulation of the independent variable. For these reasons, true experimental designs are often considered the best type of research design.

A true experiment is thought to be the most accurate type of experimental research, because it supports or refutes a hypothesis using statistical analysis. It is also thought to be the only experimental design that can establish cause-and-effect relationships.

Types of True Experimental Designs

There are several types of true experimental designs, as follows:

Post-test Only Design

This type of design has two randomly assigned groups: an experimental group and a control group. Neither group is pretested before the implementation of the treatment. The treatment is applied to the experimental group and the post-test is carried out on both groups to assess its effect. This type of design is common when it is not possible to pretest the subjects.

Pretest-Post-test Design

The subjects are again randomly assigned to either the experimental or the control group. Both groups are pretested on the dependent variable. The experimental group then receives the treatment, and both groups are post-tested to examine the effects of manipulating the independent variable on the dependent variable.


Solomon Four-Group Design

Subjects are randomly assigned to one of four groups: two experimental groups and two control groups. Only two of the groups are pretested. One pretested group and one unpretested group receive the treatment, and all four groups receive the post-test. The post-test results of the pretested and unpretested groups are then compared to separate the effect of the treatment from any effect of the pretest itself. This method is essentially a combination of the previous two designs and is used to eliminate potential sources of error.

Factorial Design

The researcher manipulates two or more independent variables (factors) simultaneously to observe their effects on the dependent variable. This design allows for the testing of two or more hypotheses in a single project.
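A factorial layout can be enumerated directly from its factors; the factors and levels below are hypothetical:

```python
from itertools import product

# Two independent variables (factors), each manipulated at two levels
factors = {
    "instruction": ["lecture", "tutorial"],
    "feedback": ["immediate", "delayed"],
}

# Each cell of the design is one combination of levels
conditions = list(product(*factors.values()))
print(len(conditions))  # prints 4: a 2 x 2 design has four cells
```

Crossing the factors in this way is what lets a single project test the effect of each factor, and their interaction, at once.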

Randomized Block Design

This design is used when there are inherent differences between subjects and possible differences in experimental conditions. If there are a large number of experimental groups, the randomized block design may be used to bring some homogeneity to each group.

Crossover Design (also known as Repeated-Measures Design)

Subjects in this design are exposed to more than one treatment, and they are randomly assigned to different orders of the treatments. The groups compared have an equal distribution of characteristics, and there is a high level of similarity among subjects exposed to different conditions. Crossover designs are excellent research tools; however, there is some concern that the response to the second treatment or condition will be influenced by the subjects’ experience with the first. In this type of design, the subjects serve as their own controls.
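Counterbalancing the treatment orders, as a crossover design requires, can be sketched like this (the subject names and treatment labels are hypothetical):

```python
import random
from itertools import permutations

treatments = ["A", "B"]                  # every subject receives both treatments
orders = list(permutations(treatments))  # possible orders: (A, B) and (B, A)

random.seed(3)
subjects = ["S1", "S2", "S3", "S4"]
shuffled = random.sample(subjects, len(subjects))  # randomize who gets which order

# Deal the orders round-robin so each order is used equally often
schedule = {s: orders[i % len(orders)] for i, s in enumerate(shuffled)}
```

Balancing the orders this way spreads any carryover effect from the first treatment evenly across conditions.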

Criteria of true experiment

True experimental designs employ both a control group and a means to measure the change that occurs in both groups. In this sense, we attempt to control for all confounding variables, or at least consider their impact, while attempting to determine whether the treatment is what truly caused the change. The true experiment is often thought of as the only research method that can adequately measure cause-and-effect relationships.

There are three criteria that must be met in a true experiment:

  1. Control group and experimental group
  2. Researcher-manipulated variable
  3. Random assignment

Control Group and Experimental Group

True experiments must have a control group: a group of research participants who resemble the experimental group but do not receive the experimental treatment. The control group provides reliable baseline data against which the experimental results can be compared.

The experimental group is the group of research participants who receive the experimental treatment. True experiments must have at least one control group and one experimental group, though it is possible to have more than one experimental group.

Researcher-Manipulated Variable

In true experiments, the researcher has to change or manipulate the variable that is hypothesized to affect the outcome variable that is being studied. The variable that the researcher has control over is called the independent variable. The independent variable is also called the predictor variable because it is the presumed cause of the differences in the outcome variable.

The outcome or effect that the research is studying is called the dependent variable, also known as the outcome variable. The researcher does not manipulate the dependent variable; changes in it are measured.

Random Assignment

Research participants must be randomly assigned to the sample groups; in other words, each participant must have an equal chance of being assigned to either the control group or the experimental group. Random assignment ensures that any initial differences between the groups are due only to chance.

Elements of true experimental research

Once the design has been determined, there are four elements of true experimental research that must be considered:

  • Manipulation: The researcher purposefully changes or manipulates the independent variable, which is the treatment or condition applied to the experimental groups. It is important to establish clear procedural guidelines for applying the treatment, to promote consistency and to ensure that it is the manipulation itself, and not some extraneous factor, that affects the dependent variable.

  • Control: Control is used to prevent outside factors (extraneous variables) from influencing the outcome of the study. This ensures that the outcome is caused by the manipulation of the independent variable. Therefore, a critical piece of experimental design is keeping all other potential variables constant.
  • Random Assignment: A key feature of true experimental design is the random assignment of subjects into groups. Participants should have an equal chance of being assigned to any group in the experiment. This further ensures that the outcome of the study is due to the manipulation of the independent variable and is not influenced by the composition of the test groups. Subjects can be randomly assigned in many ways, some of them relatively easy, including flipping a coin, drawing names, using a random number table, or using computer-assisted random sequencing.
  • Random Selection: In addition to randomly assigning the test subjects to groups, it is also important to randomly select the test subjects from a larger target population. This ensures that the sample provides an accurate cross-sectional representation of the larger population, including different socioeconomic backgrounds, races, intelligence levels, and so forth.

Pre-experimental Design

Pre-experimental design is a research format in which some basic experimental attributes are present while others are not; the missing attributes mean the study does not qualify as a true experiment. This type of design is commonly used as a cost-effective way to conduct exploratory research.

Pre-experimental designs are so named because they follow basic experimental steps but fail to include a control group. In other words, a single group is often studied, but no comparison with an equivalent non-treatment group is made.

Pre-experiments are the simplest form of research design. In a pre-experiment either a single group or multiple groups are observed subsequent to some agent or treatment presumed to cause change.

Types of Pre-Experimental Design

  • One-shot case study design
  • One-group pretest-posttest design
  • Static-group comparison

One-shot case study design

A single group is studied at a single point in time after some treatment that is presumed to have caused change. The carefully studied single instance is compared to general expectations of what the case would have looked like had the treatment not occurred and to other events casually observed. No control or comparison group is employed.

In a one-shot case study we expose a group to a treatment X and measure the outcome Y. The design lacks both a pretest measure of Y and a control group, so there is no basis for comparison, either between groups or between pre- and post-test scores.

Used to measure an outcome after an intervention is implemented; often to measure use of a new program or service

  • One group receives the intervention
  • Data gathered at one time point after the intervention
  • Design weakness: does not prove a cause-and-effect relationship between the intervention and outcomes

One-group pretest-posttest design

A single case is observed at two time points, one before the treatment and one after the treatment. Changes in the outcome of interest are presumed to be the result of the intervention or treatment. No control or comparison group is employed.

In a one-group pretest/post-test design we measure Y both before and after treatment X. The design has no control group, so no between-group comparisons are possible.

  • Used to measure change in an outcome before and after an intervention is implemented
  • One group receives the intervention
  • Data gathered at 2+ time points
  • Design weakness: shows that change occurred, but does not account for intervening events, maturation, or altered survey methods that could occur between the two measurement points

Static-group comparison

In a static-group comparison we have an experimental group and a control group, but no pretest. The design allows comparisons between groups, but no pre- and post-test comparisons.

A group that has experienced some treatment is compared with one that has not. Observed differences between the two groups are assumed to be a result of the treatment.

Used to measure an outcome after an intervention is implemented, with two non-randomly assigned groups: one that received the intervention and one that did not (the control)

  • Data gathered at one time point after the intervention
  • Design weakness: shows that change occurred, but participant selection could result in groups that differ on relevant variables

Validity of Results in Pre-experimental designs

An important drawback of pre-experimental designs is that they are subject to numerous threats to their validity. Consequently, it is often difficult or impossible to dismiss rival hypotheses or explanations.

One reason that it is often difficult to assess the validity of studies that employ a pre-experimental design is that they often do not include any control or comparison group. Without something to compare it to, it is difficult to assess the significance of an observed change in the case.

Even when pre-experimental designs identify a comparison group, it is still difficult to dismiss rival hypotheses for the observed change. This is because there is no formal way to determine whether the two groups would have been the same if it had not been for the treatment. If the treatment group and the comparison group differ after the treatment, this might be a reflection of differences in the initial recruitment to the groups or differential mortality in the experiment.

Advantages of Pre-experimental Designs

  • They apply in situations in which it is impossible to manipulate more than one condition.
  • They are useful in applied fields, having emerged as a response to the practical problems of experimentation in education.
  • As exploratory approaches, they can be a cost-effective way to discern whether a potential explanation is worthy of further investigation.
  • They meet the minimum condition of an experiment: a treatment is applied and an outcome is observed.

Disadvantages of Pre-experimental Designs

Pre-experiments provide little basis for causal inference, since it is often difficult or impossible to rule out alternative explanations. The nearly insurmountable threats to their validity are clearly the most important disadvantage of pre-experimental research designs.

  • They do not control threats to internal validity, so they contribute little to scientific knowledge; their results are always debatable.
  • A strong cause-and-effect relationship cannot be inferred, because the lack of random assignment leaves a greater chance that other variables affected the results.
  • The findings often cannot be replicated, as the same situation will not occur naturally again.
  • The situation studied may not relate to the real world; some kinds of behaviour can only be observed in a naturalistic setting.
  • It may be unethical or impossible to randomly assign people to groups.
  • Observer bias may influence the results.
  • The findings cannot readily be generalised to the general population.
  • Elimination of extraneous variables is not always possible.

Quasi-experimental designs

Quasi-experimental designs help researchers test for causal relationships in a variety of situations where the classical design is difficult or inappropriate. They are called quasi because they are variations of the classical experimental design. In general, the researcher has less control over the independent variable than in the classical design.

Main points of Quasi-experimental research designs

Quasi-experimental research designs, like experimental designs, test causal hypotheses.

  • A quasi-experimental design by definition lacks random assignment.
  • Quasi-experimental designs identify a comparison group that is as similar as possible to the treatment group in terms of baseline (pre-intervention) characteristics.
  • There are different techniques for creating a valid comparison group, such as regression discontinuity design (RDD) and propensity score matching (PSM).

Types of Quasi-Experimental Designs

1. Two-Group Posttest-Only Design

a. This is identical to the static-group comparison, with one exception: the groups are randomly assigned. It has all the parts of the classical design except a pretest. The random assignment reduces the chance that the groups differed before the treatment, but without a pretest a researcher cannot be as certain that the groups began the same on the dependent variable.

2. Interrupted Time Series

a. In an interrupted time series design, a researcher uses one group and makes multiple measures, both before and after the treatment.
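The logic of an interrupted time series can be sketched with a simple before/after comparison of levels; the monthly scores below are invented, with the treatment introduced after the sixth observation (a real analysis would also model trend and autocorrelation):

```python
# Hypothetical monthly outcome measures; treatment begins after month 6
series = [50, 52, 51, 53, 52, 54,   # pre-treatment observations
          61, 63, 62, 64, 63, 65]   # post-treatment observations
cut = 6

pre, post = series[:cut], series[cut:]
shift = sum(post) / len(post) - sum(pre) / len(pre)
print(shift)  # prints 11.0: the jump in level at the interruption
```

The repeated pre-treatment measures are what distinguish this from a simple pretest-posttest design: they show whether the series was already changing before the treatment arrived.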

3. Equivalent Time Series

a. An equivalent time series is another one-group design that extends over a time period. Instead of one treatment, it has a pretest, then a treatment and posttest, then treatment and posttest, then treatment and posttest, and so on.

Other Quasi-Experimental Designs

There are many different types of quasi-experimental designs that have a variety of applications in specific contexts

The Proxy Pretest Design

The proxy pretest design looks like a standard pre-post design, but there is an important difference: the pretest in this design is collected after the program is given, for example by asking participants to recall their status before the program. Such a recollection proxy pretest is a sensible way to assess participants’ perceived gain or change.

The Separate Pre-Post Samples Design

The basic idea in this design (and its variations) is that the people you use for the pretest are not the same as the people you use for the posttest.

The Double Pretest Design

The Double Pretest is a very strong quasi-experimental design with respect to internal validity. The design includes two measures prior to the program, so the rate of change between the two pretests can be compared with the change from the second pretest to the posttest. Therefore, this design explicitly controls for selection-maturation threats. It is also sometimes referred to as a “dry run” quasi-experimental design because the double pretests simulate what would happen in the null case.

The Switching Replications Design

The Switching Replications quasi-experimental design is also very strong with respect to internal validity. The design has two groups and three waves of measurement. In the first phase of the design, both groups are pretested, one is given the program, and both are posttested. In the second phase, the original comparison group is given the program while the original program group serves as the “control.”

The Nonequivalent Dependent Variables (NEDV) Design

The Nonequivalent Dependent Variables (NEDV) Design is a deceptive one. In its simple form, it is an extremely weak design with respect to internal validity. But in its pattern matching variations, it opens the door to an entirely different approach to causal assessment that is extremely powerful.

The idea in this design is that you have a program designed to change a specific outcome, and you also measure a second outcome that the program is not expected to affect. If the targeted outcome changes while the untargeted one does not, the case that the program caused the change is strengthened.

The Pattern Matching NEDV Design. Although the two-variable NEDV design is quite weak, we can make it considerably stronger by adding multiple outcome variables. In this variation, we need many outcome variables and a theory that tells how strongly each variable will be affected by the program (from most to least).

Depending on the circumstances, the Pattern Matching NEDV design can be quite strong with respect to internal validity. In general, the design is stronger if you have a larger set of variables and you find that your expectation pattern matches well with the observed results

The Regression Point Displacement (RPD) Design

The RPD design attempts to enhance the single program unit situation by comparing the performance on that single unit with the performance of a large set of comparison units. In community research, we would compare the pre-post results for the intervention community with a large set of other communities.

Advantages in Quasi-experimental designs

  • Since quasi-experimental designs are used when randomization is impractical and/or unethical, they are typically easier to set up than true experimental designs, which require random assignment of subjects.
  • Additionally, utilizing quasi-experimental designs minimizes threats to ecological validity, as natural environments do not suffer the same problems of artificiality as a well-controlled laboratory setting.
  • Since quasi-experiments are natural experiments, findings from one study may be applied to other subjects and settings, allowing for some generalizations to be made about the population.
  • This method is efficient in longitudinal research involving longer time periods, which can be followed up in different environments.
  • The experimenter does not have to devise the manipulation: in natural experiments, manipulations occur on their own, and the researcher has no control over them.
  • Using self-selected groups in quasi-experiments also avoids the ethical and practical concerns raised by assigning participants to conditions.

Disadvantages of quasi-experimental designs

  • Quasi-experimental estimates of impact are subject to contamination by confounding variables.
  • The lack of random assignment in the quasi-experimental design method may allow studies to be more feasible, but this also poses many challenges for the investigator in terms of internal validity. This deficiency in randomization makes it harder to rule out confounding variables and introduces new threats to internal validity.
  • Because randomization is absent, some knowledge about the data can be approximated, but conclusions of causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment.
  • Moreover, even if these threats to internal validity are assessed, causation still cannot be fully established because the experimenter does not have total control over extraneous variables.
  • The study groups may provide weaker evidence because of the lack of randomness. Randomness brings a lot of useful information to a study because it broadens results and therefore gives a better representation of the population as a whole.
  • Using unequal groups can also be a threat to internal validity.
  • If groups are not equal, which is sometimes the case in quasi-experiments, then the experimenter cannot be certain what caused the results.

Experimental Research in Educational Technology

Here is a sequence of logical steps for planning and conducting research

Step 1. Select a Topic. This step is self-explanatory and usually not a problem, except for those who are “required” to do research as opposed to initiating it on their own. The step simply involves identifying a general area that is of personal interest and then narrowing the focus to a researchable problem.

Step 2. Identify the Research Problem. Given the general topic area, what specific problems are of interest? In many cases, the researcher already knows the problems. In others, a trip to the library to read background literature and examine previous studies is probably needed. A key concern is the importance of the problem to the field. Conducting research requires too much time and effort to be examining trivial questions that do not expand existing knowledge.

Step 3. Conduct a Literature Search. With the research topic and problem identified, it is now time to conduct a more intensive literature search. Of importance is determining what relevant studies have been performed; the designs, instruments, and procedures employed in those studies; and, most critically, the findings. Based on the review, direction will be provided for (a) how to extend or complement the existing literature base, (b) possible research orientations to use, and (c) specific research questions to address.

Step 4. State the Research Questions (or Hypotheses). This step is probably the most critical part of the planning process. Once stated, the research questions or hypotheses provide the basis for planning all other parts of the study: design, materials, and data analysis. In particular, this step will guide the researcher’s decision as to whether an experimental design or some other orientation is the best choice.

Step 5. Determine the Research Design. The next consideration is whether an experimental design is feasible. If not, the researcher will need to consider alternative approaches, recognizing that the original research question may not be answerable as a result.

Step 6. Determine Methods. Methods of the study include (a) subjects, (b) materials and data collection instruments, and (c) procedures. In determining these components, the researcher must continually use the research questions and/or hypotheses as reference points. A good place to start is with subjects or participants. What kind and how many participants does the research design require?

Next consider materials and instrumentation. When the needed resources are not obvious, a good strategy is to construct a listing of data collection instruments needed to answer each question (e.g., attitude survey, achievement test, observation form).

An experiment does not require having access to instruments that are already developed. Particularly in research with new technologies, the creation of novel measures of affect or performance may be necessary. From an efficiency standpoint, however, the researcher’s first step should be to conduct a thorough search of existing instruments to determine if any can be used in their original form or adapted to present needs. If none is found, it would usually be far more advisable to construct a new instrument than to “force fit” an existing one. New instruments will need to be pilot tested and validated; standard test and measurement texts provide useful guidance for this requirement.

The experimental procedure, then, will be dictated by the research questions and the available resources. Piloting the methodology is essential to ensure that materials and methods work as planned.

Step 7. Determine Data Analysis Techniques.

Whereas statistical analysis procedures vary widely in complexity, the appropriate options for a particular experiment will be defined by two factors: the research questions and the type of data.
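As an illustration of how the data type shapes the analysis, comparing a treatment and a control group on a continuous outcome typically calls for a two-sample t test. The sketch below uses Welch’s version (which does not assume equal variances); the scores, group sizes, and function name are illustrative assumptions, not data from any actual study:

```python
import math
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's two-sample t statistic for possibly unequal variances.
    Returns the t value and the approximate degrees of freedom."""
    m1, m2 = mean(group_a), mean(group_b)
    v1, v2 = variance(group_a), variance(group_b)  # sample variances (n - 1)
    n1, n2 = len(group_a), len(group_b)
    se2 = v1 / n1 + v2 / n2  # squared standard error of the mean difference
    t = (m1 - m2) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical post-test scores for an experimental and a control group
experimental = [85, 88, 90, 79, 84, 91, 87, 83]
control = [78, 82, 80, 75, 81, 79, 77, 84]

t, df = welch_t(experimental, control)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The resulting t value would then be compared against a t distribution with the computed degrees of freedom to obtain a p value; with real data, an established statistical package should be preferred over a hand-rolled routine.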

Reporting and Publishing Experimental Studies

Obviously, for experimental studies to have impact on theory and practice in educational technology, their findings need to be disseminated to the field.

Introduction. The introduction to reports of experimental studies accomplishes several functions: (a) identifying the general area of the problem, (b) creating a rationale to learn more about the problem, (c) reviewing relevant literature, and (d) stating the specific purposes of the study. Hypotheses and/or research questions should directly follow from the preceding discussion and generally be stated explicitly, even though they may be obvious from the literature review. In basic research experiments, hypotheses are usually expected, as a theory or principle is typically being tested. In applied research experiments, hypotheses would be used where there is a logical or empirical basis for expecting a certain result.

Method. The Method section of an experiment describes the participants or subjects, materials, and procedures. The usual convention is to start with subjects (or participants) by clearly describing the population concerned (e.g., age or grade level, background) and the sampling procedure. In reading about an experiment, it is extremely important to know if subjects were randomly assigned to treatments or if intact groups were employed. It is also important to know if participation was voluntary or required and whether the level of performance on the experimental task was consequential to the subjects. Learner motivation and task investment are critical in educational technology research, because such variables are likely to impact directly on subjects’ usage of media attributes and instructional strategies.

Results. This major section describes the analyses and the findings. Typically, it should be organized such that the most important dependent measures are reported first. Tables and/or figures should be used judiciously to supplement (not repeat) the text.

Statistical significance vs. practical importance. Traditionally, researchers followed the convention of determining the “importance” of findings based on statistical significance. Simply put, if the experimental group’s mean of 85% on the posttest was found to be significantly higher (say, at p < .01) than the control group’s mean of 80%, then the “effect” was regarded as having theoretical or practical value. If the result was not significant (i.e., the null hypothesis could not be rejected), the effect was dismissed as not reliable or important.

In recent years, however, considerable attention has been given to the benefits of distinguishing between “statistical significance” and “practical importance”. Statistical significance indicates whether an effect can be considered attributable to factors other than chance. But a significant effect does not necessarily mean a “large” effect.

Discussion. To conclude the report, the discussion section explains and interprets the findings relative to the hypotheses or research questions, previous studies, and relevant theory and practice. Where appropriate, weaknesses in procedures that may have impacted results should be identified. Other conventional features of a discussion may include suggestions for further research and conclusions regarding the research hypotheses/ questions. For educational technology experiments, drawing implications for practice in the area concerned is highly desirable.

Advantages of Experimental Research

1. Variables Are Controlled
With experimental research, the people conducting the study have a very high level of control over their variables. By isolating the variables of interest and determining exactly what they are looking for, they have a great advantage in obtaining accurate results, which makes the findings more valid. Experimental designs aim to remove extraneous and unwanted variables, so control over irrelevant variables is higher than in other research types or methods.

2. Determine Cause and Effect
The experimental design of this type of research includes manipulating independent variables to easily determine the cause-and-effect relationship. This is highly valuable for any type of research being done.

3. Easily Replicated
In many cases, multiple studies must be performed to gain truly accurate results and draw valid conclusions. Experimental research designs can easily be repeated, and since the researcher has full control over the variables, each replication can be made nearly identical to the ones before it. There is also a wide variety of experimental designs, each offering different benefits depending on what is being explored, so the investigator can tailor the experiment to a unique situation while still remaining within the bounds of a valid experimental design.

4. Best Results
Having control over the entire experiment, and being able to provide in-depth analysis of the hypothesis and the data collected, makes experimental research one of the best options. The conclusions that are reached are deemed highly valid, and the experiment can be repeated again and again to confirm them. Due to the control set up by the experimenter and the strict conditions, better results can be achieved, and those results give the researcher greater confidence in the findings.

5. Can Span Across Nearly All Fields Of Research
Another great benefit of this type of research design is that it can be used in many different types of situations. Just like pharmaceutical companies can utilize it, so can teachers who want to test a new method of teaching. It is a basic, but efficient type of research.

6. Clear Cut Conclusions
Since there is such a high level of control, and only one specific variable is being tested at a time, the results are much more relevant than in some other forms of research. You can clearly see the success, failure, or effects when analyzing the data collected.
7. Greater Transferability
Experimental research also offers greater transferability: it yields insights into instructional methods, allows experiments to be combined with other methods for greater rigor, and helps determine what works best for the population being studied.

Limitations in Experimental Design

Failure to do Experiment
One of the disadvantages of experimental research is that at times you cannot do experiments, because you cannot manipulate the independent variable for either ethical or practical reasons. Consider, for instance, an interest in the effects of an individual’s culture on the tendency to help strangers: you cannot do the experiment, simply because you are not capable of manipulating the individual’s culture.

External Validity

A limitation of both experiments and well-identified quasi-experiments is whether the estimated impact would be similar if the program were replicated in another location, at a different time, or targeting a different group of students. Researchers often do little or nothing to address this point and should likely do more.

Another limitation of experiments is that they are generally best at uncovering partial equilibrium effects. The impacts can be quite different when parents, teachers, and students have a chance to optimize their behavior in light of the program.

Hawthorne Effects

Another limitation of experiments is that it is possible that the experience of being observed may change one’s behavior—so-called Hawthorne effects. For example, participants may exert extra effort because they know their outcomes will be measured. As a result, it may be this extra effort and not the underlying program being studied that affects student outcomes.

Cost

Experimental evaluations can be expensive to implement well. Researchers must collect a wide variety of mediating and outcome variables. It is sometimes expensive to follow the control group, which may become geographically dispersed over time or may be less likely to cooperate in the research process. The costs of experts’ time and incentives for participants also threaten to add up quickly. Given a tight budget constraint, sometimes the best approach may be to run a relatively small experimental study.

Violations of Experimental Assumptions

Another limitation of experiments is that it is perhaps too easy to mine the data. If one slices and dices the data in enough ways, there is a good chance that some spurious results will emerge. This is a great temptation to researchers, especially if they are facing pressure from funders who have a stake in the results. Here, too, there are ways to minimize the problem.

Subject to Human Error

Researchers are human, too, and can make mistakes. Whether an error was made by machine or by a person, one thing remains certain: it will affect the results of the study.

Other issues cited as disadvantages include personal biases, unreliable samples, results that can only be applied in one situation and the difficulty in measuring the human experience.

Experimental designs are frequently contrived scenarios that do not mimic what happens in the real world. The degree to which results can be generalized across situations and real-world applications is therefore limited.

Can Create Artificial Situations
Experimental research also means controlling irrelevant variables on certain occasions. As such, this creates a situation that is somewhat artificial. By having such deep control over the variables being tested, it is very possible for the data to be skewed or corrupted to fit whatever outcome the researcher needs. This is especially true if the research is being done for a business or market study.

Can take an Extensive Amount of Time
With experimental testing, individual experiments have to be done in order to fully research each variable. This can cause the testing to take a very long time and use a large amount of resources and finances. These costs could transfer onto the company, which could inflate prices for consumers.

Participants can be influenced by environment
Those who participate in trials may be influenced by the environment around them. As such, they might give answers not based on how they truly feel but on what they think the researcher wants to hear. Rather than thinking through what they feel and think about a subject, a participant may just go along with what they believe the researcher is trying to achieve.

Manipulation of variables isn’t seen as completely objective
Experimental research mainly involves the manipulation of variables, a practice that isn’t seen as being completely objective. As mentioned earlier, researchers are actively trying to influence variables so that they can observe the consequences.

Limited Behaviors
When people are part of an experiment, especially one where variables are controlled so precisely, the subjects of the experiment may not give the most accurate reactions. Their normal behaviors are limited because of the experiment environment.

It’s Impossible to Control It All
While the majority of the variables in an experimental research design are controlled by the researchers, it is absolutely impossible to control each and every one. Things from mood, events that happened in the subject’s earlier day, and many other things can affect the outcome and results of the experiment.

In short, when a researcher decides on a topic of interest, they try to define the research problem, which helps narrow the research area so that it can be studied more appropriately. Once the research problem is defined, the researcher formulates a research hypothesis, which is then tested against the null hypothesis.

Experimental research is guided by educated guesses (hypotheses) that predict the result of the experiment, and the experiment is conducted to provide evidence for or against this experimental hypothesis. Experimental research, although very demanding of time and resources, often produces the soundest evidence concerning hypothesized cause-effect relationships.
