
Quasi-Experimental Design | Definition, Types & Examples

Published on July 31, 2020 by Lauren Thomas . Revised on January 22, 2024.

Like a true experiment , a quasi-experimental design aims to establish a cause-and-effect relationship between an independent and dependent variable .

However, unlike a true experiment, a quasi-experiment does not rely on random assignment . Instead, subjects are assigned to groups based on non-random criteria.

Quasi-experimental design is a useful tool in situations where true experiments cannot be used for ethical or practical reasons.

Quasi-experimental design vs. experimental design

Table of contents

  • Differences between quasi-experiments and true experiments
  • Types of quasi-experimental designs
  • When to use quasi-experimental design
  • Advantages and disadvantages
  • Other interesting articles
  • Frequently asked questions about quasi-experimental designs

There are several common differences between true and quasi-experimental designs.

  • Assignment to treatment – In a true experiment, the researcher randomly assigns subjects to control and treatment groups. In a quasi-experiment, some other, non-random method is used to assign subjects to groups.
  • Control over treatment – In a true experiment, the researcher usually designs the treatment. In a quasi-experiment, the researcher often does not, and instead studies pre-existing groups that received different treatments after the fact.
  • Use of control groups – A true experiment requires the use of control and treatment groups. In a quasi-experiment, control groups are not required (although they are commonly used).

Example of a true experiment vs a quasi-experiment

Suppose you want to study whether a new type of therapy is more effective than the standard course of treatment for patients at a mental health clinic. For ethical reasons, the directors of the mental health clinic may not give you permission to randomly assign their patients to treatments. In this case, you cannot run a true experiment.

Instead, you can use a quasi-experimental design.

You can use these pre-existing groups to study the symptom progression of the patients treated with the new therapy versus those receiving the standard course of treatment.


Many types of quasi-experimental designs exist. Here we explain three of the most common types: nonequivalent groups design, regression discontinuity, and natural experiments.

Nonequivalent groups design

In nonequivalent group design, the researcher chooses existing groups that appear similar, but where only one of the groups experiences the treatment.

In a true experiment with random assignment , the control and treatment groups are considered equivalent in every way other than the treatment. But in a quasi-experiment where the groups are not random, they may differ in other ways—they are nonequivalent groups .

When using this kind of design, researchers try to account for any confounding variables by controlling for them in their analysis or by choosing groups that are as similar as possible.

This is the most common type of quasi-experimental design.
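As a rough illustration of what "controlling for them in the analysis" can look like, the sketch below simulates two pre-existing groups that differ on measured confounders and compares a naive group comparison with a covariate-adjusted regression. It is a minimal Python sketch with hypothetical variable names (treated, age, baseline), not a prescribed analysis.

```python
# Minimal sketch: adjusting for measured confounders in a nonequivalent groups design.
# All variable names and numbers are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
treated = np.repeat([0, 1], n // 2)                # pre-existing groups, not randomized
age = rng.normal(40, 10, n) + 5 * treated          # confounder that differs between groups
baseline = rng.normal(50, 8, n) + 3 * treated      # another measured confounder
# Simulated outcome with a true treatment effect of 3 plus confounder effects.
outcome = 3 * treated + 0.2 * age + 0.5 * baseline + rng.normal(0, 5, n)

df = pd.DataFrame({"treated": treated, "age": age,
                   "baseline": baseline, "outcome": outcome})

naive = smf.ols("outcome ~ treated", data=df).fit()
adjusted = smf.ols("outcome ~ treated + age + baseline", data=df).fit()
print(naive.params["treated"])      # biased by the pre-existing group differences
print(adjusted.params["treated"])   # closer to the simulated effect of 3
```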

Regression discontinuity

Many potential treatments that researchers wish to study are designed around an essentially arbitrary cutoff, where those above the threshold receive the treatment and those below it do not.

Near this threshold, the differences between the two groups are often so minimal as to be nearly nonexistent. Therefore, researchers can use individuals just below the threshold as a control group and those just above as a treatment group.

Since the exact cutoff score is arbitrary, students near the threshold—those who just barely pass an entrance exam and those who fail it by a very small margin—tend to be very similar, with the small differences in their scores mostly due to random chance. You can therefore conclude that any later differences in outcomes between these two groups must come from the school they attended.
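To make that logic concrete, here is a minimal sketch of the comparison a regression discontinuity sets up: simulated exam scores, an arbitrary cutoff, and a comparison of outcomes for students just above versus just below it. All names (exam_score, cutoff, bandwidth, later_outcome) and numbers are hypothetical.

```python
# Minimal regression-discontinuity-style comparison near an arbitrary cutoff.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cutoff, bandwidth = 60.0, 3.0
exam_score = rng.normal(60, 10, 5_000)
passed = exam_score >= cutoff                      # treatment assignment by cutoff
# Simulated later outcome: a small "school effect" of 2 for those above the cutoff.
later_outcome = 0.1 * exam_score + 2.0 * passed + rng.normal(0, 4, exam_score.size)

near = np.abs(exam_score - cutoff) <= bandwidth    # keep only students near the cutoff
treat = later_outcome[near & passed]
control = later_outcome[near & ~passed]
print(treat.mean() - control.mean())               # local estimate of the effect
print(stats.ttest_ind(treat, control, equal_var=False))
```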

Natural experiments

In both laboratory and field experiments, researchers normally control which group the subjects are assigned to. In a natural experiment, an external event or situation (“nature”) results in the random or random-like assignment of subjects to the treatment group.

Even though some natural experiments involve random or random-like assignment, they are not considered true experiments because they are observational in nature.

Although the researchers have no control over the independent variable , they can exploit this event after the fact to study the effect of the treatment.

For example, when the state of Oregon sought to expand Medicaid to more low-income residents, it could not afford to cover everyone deemed eligible for the program, so it instead allocated spots in the program based on a random lottery.

Although true experiments have higher internal validity , you might choose to use a quasi-experimental design for ethical or practical reasons.

Sometimes it would be unethical to provide or withhold a treatment on a random basis, so a true experiment is not feasible. In this case, a quasi-experiment can allow you to study the same causal relationship without the ethical issues.

The Oregon Health Study is a good example. It would be unethical to randomly provide some people with health insurance but purposely prevent others from receiving it solely for the purposes of research.

However, since the Oregon government faced financial constraints and decided to provide health insurance via lottery, studying this event after the fact is a much more ethical approach to studying the same problem.

True experimental design may be infeasible to implement or simply too expensive, particularly for researchers without access to large funding streams.

At other times, too much work is involved in recruiting and properly designing an experimental intervention for an adequate number of subjects to justify a true experiment.

In either case, quasi-experimental designs allow you to study the question by taking advantage of data that has previously been paid for or collected by others (often the government).

Quasi-experimental designs have various pros and cons compared to other types of studies.

  • Higher external validity than most true experiments, because they often involve real-world interventions instead of artificial laboratory settings.
  • Higher internal validity than other non-experimental types of research, because they allow you to better control for confounding variables than other types of studies do.
  • Lower internal validity than true experiments—without randomization, it can be difficult to verify that all confounding variables have been accounted for.
  • The use of retrospective data that has already been collected for other purposes can be inaccurate, incomplete or difficult to access.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

Cite this Scribbr article


Thomas, L. (2024, January 22). Quasi-Experimental Design | Definition, Types & Examples. Scribbr. Retrieved August 26, 2024, from https://www.scribbr.com/methodology/quasi-experimental-design/


The use and interpretation of quasi-experimental design

Last updated 6 February 2023. Reviewed by Miroslav Damyanov.


  • What is a quasi-experimental design?

Quasi-experimental design is commonly used in medical informatics (a field that uses digital information to ensure better patient care), where researchers generally use it to evaluate the effectiveness of a treatment – perhaps a type of antibiotic or psychotherapy, or an educational or policy intervention.

Even though quasi-experimental design has been used for some time, relatively little is known about it. Read on to learn the ins and outs of this research design.


  • When to use a quasi-experimental design

A quasi-experimental design is used when it's not logistically feasible or ethical to conduct randomized, controlled trials. As its name suggests, a quasi-experimental design is almost a true experiment. However, researchers don't randomly select elements or participants in this type of research.

Researchers prefer to apply quasi-experimental design when there are ethical or practical concerns. Let's look at these two reasons more closely.

Ethical reasons

In some situations, randomly assigning participants would be unethical. For instance, providing public healthcare to one group while withholding it from another for the sake of research is unethical. A quasi-experimental design lets researchers examine the relationship between the groups without putting anyone in harm's way.

Practical reasons

Randomized controlled trials may not be the best approach in research. For instance, it's impractical to trawl through large sample sizes of participants without using a particular attribute to guide your data collection .

Recruiting participants and properly designing a data-collection attribute to make the research a true experiment requires a lot of time and effort, and can be expensive if you don’t have a large funding stream.

A quasi-experimental design allows researchers to take advantage of previously collected data and use it in their study.

  • Examples of quasi-experimental designs

Quasi-experimental research design is common in medical research, but any researcher can use it for research that raises practical and ethical concerns. Here are a few examples of quasi-experimental designs used by different researchers:

Example 1: Determining the effectiveness of math apps in supplementing math classes

A school wanted to supplement its math classes with a math app. To select the best app, the school decided to run demo tests of two apps before choosing the one it would purchase.

Scope of the research

Since every grade had two math teachers, each teacher used one of the two apps for three months. They then gave the students the same math exams and compared the results to determine which app was most effective.

Reasons why this is a quasi-experimental study

This simple study is a quasi-experiment since the school didn't randomly assign its students to the applications. They used a pre-existing class structure to conduct the study since it was impractical to randomly assign the students to each app.

Example 2: Determining the effectiveness of teaching modern leadership techniques in start-up businesses

A hypothetical quasi-experimental study was conducted in an economically developing country in a mid-sized city.

Five start-ups in the textile industry and five in the tech industry participated in the study. The leaders attended a six-week workshop on leadership style, team management, and employee motivation.

After a year, the researchers assessed the performance of each start-up company to determine growth. The results indicated that the tech start-ups were further along in their growth than the textile companies.

The basis of quasi-experimental research is a non-randomized subject-selection process. This study did not use specific criteria to determine which start-up companies should participate. The results may therefore seem straightforward, but factors other than the variables the researchers measured may have shaped each company's growth.

Example 3: A study to determine the effects of policy reforms and of luring foreign investment on small businesses in two mid-size cities

In a study to determine the economic impact of government reforms in an economically developing country, the government decided to test whether creating reforms directed at small businesses or luring foreign investments would spur the most economic development.

The government selected two cities with similar population demographics and sizes. In one of the cities, they implemented specific policies that would directly impact small businesses, and in the other, they implemented policies to attract foreign investment.

After five years, they collected end-of-year economic growth data from both cities. They looked at elements like local GDP growth, unemployment rates, and housing sales.

The study used a non-randomized selection process to determine which cities would participate in the research. Certain variables that could play a crucial role in each city's growth were left uncontrolled. The researchers used pre-existing groups of people based on prior research conducted in each city, rather than randomly assigned groups.

  • Advantages of a quasi-experimental design

Some advantages of quasi-experimental designs are:

Researchers can manipulate variables to help them meet their study objectives.

It offers high external validity, making it suitable for real-world applications, specifically in social science experiments.

Integrating this methodology into other research designs is easier, especially in true experimental research. This cuts down on the time needed to determine your outcomes.

  • Disadvantages of a quasi-experimental design

Despite the pros that come with a quasi-experimental design, there are several disadvantages associated with it, including the following:

It has lower internal validity: because of differences in the people, places, or time periods involved, researchers do not have full control over the comparison and intervention groups. It may be challenging to determine whether all relevant variables have been accounted for, or whether the ones used in the research actually influenced the results.

There is the risk of inaccurate data since the research design borrows information from other studies.

There is the possibility of bias, since researchers themselves select the baseline criteria and eligibility requirements.

  • What are the different quasi-experimental study designs?

There are three distinct types of quasi-experimental designs:

  • Nonequivalent groups design
  • Regression discontinuity
  • Natural experiment

Nonequivalent groups design is a hybrid of experimental and quasi-experimental methods, used to leverage the best qualities of the two. Like a true experiment, it compares a treatment group with a comparison group; unlike a true experiment, it uses pre-existing groups believed to be comparable rather than randomization, the lack of which is the defining element of a quasi-experimental design.

Researchers usually ensure that no confounding variables impact them throughout the grouping process. This makes the groupings more comparable.

Example of a nonequivalent group design

A small study was conducted to determine whether after-school programs result in better grades. Researchers selected two existing groups of students: one that would take part in the new program and one that would not. They then compared the results of the two groups.

A regression discontinuity design calculates the impact of a specific treatment or intervention. It uses a criterion known as a "cutoff" that assigns treatment according to eligibility.

Researchers assign participants above the cutoff to the treatment group and those below it to the control group. Near the cutoff, the distinction between the two groups is negligible.

Example of regression discontinuity

Students must achieve a minimum score to be enrolled in specific US high schools. Since the cutoff score used to determine eligibility for enrollment is arbitrary, researchers can assume that students who just barely miss the cutoff and those who barely pass it are very similar, with the small difference in their scores due mostly to chance rather than to real differences between them.

Researchers can then examine the long-term effects of these two groups of kids to determine the effect of attending certain schools. This information can be applied to increase the chances of students being enrolled in these high schools.

In laboratory and field experiments, researchers normally control which group subjects are assigned to. In a natural experiment, by contrast, an external event or situation ("nature") assigns subjects to the treatment group in a random or random-like way.

However, even with this random-like assignment, a natural experiment cannot be called a true experiment because it is observational in nature. Researchers have no control over the independent variable, but they can still exploit the event after the fact to study its effects.

Example of the natural experiment approach

An example of a natural experiment is the 2008 Oregon Health Study.

Oregon intended to allow more low-income people to participate in Medicaid.

Since they couldn't afford to cover every person who qualified for the program, the state used a random lottery to allocate program slots.

Researchers treated the lottery winners as a randomly assigned treatment group and those who did not win as the control group, and assessed the program's effectiveness by comparing outcomes between the two groups.

  • Differences between quasi-experiments and true experiments

There are several differences between a quasi-experiment and a true experiment:

Participants in true experiments are randomly assigned to the treatment or control group, while participants in a quasi-experiment are not assigned randomly.

In a quasi-experimental design, the control and treatment groups differ in unknown or unknowable ways, apart from the experimental treatments that are carried out. Therefore, the researcher should try as much as possible to control these differences.

Quasi-experimental designs have several "competing hypotheses," which compete with experimental manipulation to explain the observed results.

Quasi-experiments tend to have lower internal validity (the degree of confidence in the research outcomes) than true experiments, but they may offer higher external validity (whether findings can be extended to other contexts) as they involve real-world interventions instead of controlled interventions in artificial laboratory settings.

Despite the distinct difference between true and quasi-experimental research designs, these two research methodologies share the following aspects:

Both study methods subject participants to some form of treatment or conditions.

Researchers have the freedom to measure some of the outcomes of interest.

Researchers can test whether the differences in the outcomes are associated with the treatment.

  • An example comparing a true experiment and quasi-experiment

Imagine you wanted to study the effects of junk food on obese people. Here's how you would do this as a true experiment and a quasi-experiment:

How to carry out a true experiment

In a true experiment, you would randomly assign some participants to eat junk food, while the rest would be in the control group, adhering to a regular diet. At the end of the study, you would record the health and discomfort of each group.

This kind of experiment would raise ethical concerns since the participants assigned to the treatment group are required to eat junk food against their will throughout the experiment. This calls for a quasi-experimental design.

How to carry out a quasi-experiment

In quasi-experimental research, you would start by finding out which participants want to try junk food and which prefer to stick to a regular diet. This allows you to form the two groups based on the participants' own choices.

In this case, you did not assign participants to a particular group yourself, so the ethical concern is avoided and you can still use the results of the study, keeping in mind that self-selected groups may differ in other ways.

When is a quasi-experimental design used?

Quasi-experimental designs are used when researchers can't use randomization to evaluate their intervention, usually for ethical or practical reasons.

What are the characteristics of quasi-experimental designs?

Some of the characteristics of a quasi-experimental design are:

Researchers don't randomly assign participants into groups, but study their existing characteristics and assign them accordingly.

Researchers study the participants in pre- and post-testing to determine the progress of the groups.

Quasi-experimental design is ethical since it doesn’t involve offering or withholding treatment at random.

Quasi-experimental design encompasses a broad range of non-randomized intervention studies. This design is employed when it is not ethical or logistically feasible to conduct randomized controlled trials. Researchers typically employ it when evaluating policy or educational interventions, or in medical or therapy scenarios.

How do you analyze data in a quasi-experimental design?

You can use two-group tests, time-series analysis, and regression analysis to analyze data in a quasi-experiment design. Each option has specific assumptions, strengths, limitations, and data requirements.
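For instance, the simplest of these options, a two-group test, might look like the following sketch; the outcome values are hypothetical placeholders.

```python
# Minimal sketch of a two-group test on quasi-experimental outcome data.
# The arrays below are hypothetical placeholder measurements.
from scipy import stats

treatment_group = [72, 68, 75, 80, 77, 74, 69, 81]    # outcomes for the intervention group
comparison_group = [65, 70, 66, 71, 68, 64, 72, 69]   # outcomes for the comparison group

# Welch's t-test does not assume equal variances, which suits nonequivalent groups.
result = stats.ttest_ind(treatment_group, comparison_group, equal_var=False)
print(result.statistic, result.pvalue)
```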




Statistics By Jim

Making statistics intuitive

Quasi Experimental Design Overview & Examples

By Jim Frost

What is a Quasi Experimental Design?

A quasi experimental design is a method for identifying causal relationships that does not randomly assign participants to the experimental groups. Instead, researchers use a non-random process. For example, they might use an eligibility cutoff score or preexisting groups to determine who receives the treatment.


Quasi-experimental research is a design that closely resembles experimental research but is different. The term “quasi” means “resembling,” so you can think of it as a cousin to actual experiments. In these studies, researchers can manipulate an independent variable — that is, they change one factor to see what effect it has. However, unlike true experimental research, participants are not randomly assigned to different groups.

Learn more about Experimental Designs: Definition & Types .

When to Use Quasi-Experimental Design

Researchers typically use a quasi-experimental design because they can’t randomize due to practical or ethical concerns. For example:

  • Practical Constraints : A school interested in testing a new teaching method can only implement it in preexisting classes and cannot randomly assign students.
  • Ethical Concerns : A medical study might not be able to randomly assign participants to a treatment group for an experimental medication when they are already taking a proven drug.

Quasi-experimental designs also come in handy when researchers want to study the effects of naturally occurring events, like policy changes or environmental shifts, where they can’t control who is exposed to the treatment.

Quasi-experimental designs occupy a unique position in the spectrum of research methodologies, sitting between observational studies and true experiments. This middle ground offers a blend of both worlds, addressing some limitations of purely observational studies while navigating the constraints often accompanying true experiments.

A significant advantage of quasi-experimental research over purely observational studies and correlational research is that it addresses the issue of directionality, determining which variable is the cause and which is the effect. In quasi-experiments, an intervention typically occurs during the investigation, and the researchers record outcomes before and after it, increasing the confidence that it causes the observed changes.

However, it’s crucial to recognize its limitations as well. Controlling confounding variables is a larger concern for a quasi-experimental design than a true experiment because it lacks random assignment.

In sum, quasi-experimental designs offer a valuable research approach when random assignment is not feasible, providing a more structured and controlled framework than observational studies while acknowledging and attempting to address potential confounders.

Types of Quasi-Experimental Designs and Examples

Quasi-experimental studies use various methods, depending on the scenario.

Natural Experiments

This design uses naturally occurring events or changes to create the treatment and control groups. Researchers compare outcomes between those whom the event affected and those it did not affect. Analysts use statistical controls to account for confounders that the researchers must also measure.

Natural experiments are related to observational studies, but they allow for a clearer causality inference because the external event or policy change provides both a form of quasi-random group assignment and a definite start date for the intervention.

For example, in a natural experiment utilizing a quasi-experimental design, researchers study the impact of a significant economic policy change on small business growth. The policy is implemented in one state but not in neighboring states. This scenario creates an unplanned experimental setup, where the state with the new policy serves as the treatment group, and the neighboring states act as the control group.

Researchers are primarily interested in small business growth rates but need to record various confounders that can impact growth rates. Hence, they record state economic indicators, investment levels, and employment figures. By recording these metrics across the states, they can include them in the model as covariates and control them statistically. This method allows researchers to estimate differences in small business growth due to the policy itself, separate from the various confounders.
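One common way to analyze a setup like this (though not necessarily the approach any particular study uses) is a difference-in-differences-style regression with the measured confounders included as covariates. The sketch below uses a tiny hypothetical state-year panel with made-up column names (growth, policy_state, post, investment, employment).

```python
# Hedged sketch: difference-in-differences-style regression for the state policy example.
# The panel below is hypothetical and only illustrates the model structure.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "growth":       [2.1, 2.3, 2.2, 3.4, 3.6, 1.9, 2.0, 2.1, 2.2, 2.3],
    "policy_state": [1,   1,   1,   1,   1,   0,   0,   0,   0,   0  ],
    "post":         [0,   0,   0,   1,   1,   0,   0,   0,   1,   1  ],
    "investment":   [5.0, 5.2, 5.1, 5.6, 5.8, 4.9, 5.0, 5.0, 5.1, 5.2],
    "employment":   [93,  94,  94,  95,  96,  92,  93,  93,  93,  94 ],
})

# The policy_state:post interaction estimates the policy effect, with the
# measured confounders included as statistical controls (covariates).
model = smf.ols("growth ~ policy_state * post + investment + employment", data=df).fit()
print(model.params["policy_state:post"])
```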

Nonequivalent Groups Design

This method involves matching existing groups that are similar but not identical. Researchers attempt to find groups that are as equivalent as possible, particularly for factors likely to affect the outcome.

For instance, researchers use a nonequivalent groups quasi-experimental design to evaluate the effectiveness of a new teaching method in improving students’ mathematics performance. A school district considering the teaching method is planning the study. Students are already divided into schools, preventing random assignment.

The researchers matched two schools with similar demographics, baseline academic performance, and resources. The school using the traditional methodology is the control, while the other uses the new approach. Researchers are evaluating differences in educational outcomes between the two methods.

They perform a pretest to identify differences between the schools that might affect the outcome and include them as covariates to control for confounding. They also record outcomes before and after the intervention to have a larger context for the changes they observe.
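A minimal sketch of that pretest-as-covariate (ANCOVA-style) adjustment might look like this; the scores and column names (new_method, pretest, posttest) are hypothetical.

```python
# Sketch of the pretest-as-covariate analysis described above; data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

scores = pd.DataFrame({
    "new_method": [1] * 6 + [0] * 6,   # 1 = school using the new teaching approach
    "pretest":    [61, 55, 70, 65, 58, 67, 60, 54, 69, 66, 57, 68],
    "posttest":   [74, 66, 83, 78, 70, 80, 68, 61, 77, 73, 64, 75],
})

# Adjusting for the pretest controls for baseline differences between the schools.
model = smf.ols("posttest ~ new_method + pretest", data=scores).fit()
print(model.params["new_method"])   # estimated effect of the new teaching method
```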

Regression Discontinuity

This process assigns subjects to a treatment or control group based on a predetermined cutoff point (e.g., a test score). The analysis primarily focuses on participants near the cutoff point, as they are likely similar except for the treatment received. By comparing participants just above and below the cutoff, the design controls for confounders that vary smoothly around the cutoff.

For example, in a regression discontinuity quasi-experimental design focusing on a new medical treatment for depression, researchers use depression scores as the cutoff point. Individuals with depression scores just above a certain threshold are assigned to receive the latest treatment, while those just below the threshold do not receive it. This method creates two closely matched groups: one that barely qualifies for treatment and one that barely misses out.

By comparing the mental health outcomes of these two groups over time, researchers can assess the effectiveness of the new treatment. The assumption is that the only significant difference between the groups is whether they received the treatment, thereby isolating its impact on depression outcomes.
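A simple parametric version of this comparison regresses the outcome on a treatment indicator and the depression score centered at the cutoff, letting the treatment coefficient capture the jump at the threshold. The sketch below is illustrative only, with simulated data and hypothetical names (dep_score, cutoff, outcome).

```python
# Sketch of a simple parametric regression-discontinuity analysis for the depression example.
# All names and numbers are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
cutoff = 20
dep_score = rng.integers(5, 36, size=600)        # running variable: baseline depression score
treated = (dep_score >= cutoff).astype(int)      # scores above the threshold receive treatment
# Simulated follow-up outcome: higher baseline scores predict worse outcomes;
# the treatment improves (lowers) outcomes by 4 points.
outcome = 30 + 0.8 * dep_score - 4 * treated + rng.normal(0, 3, dep_score.size)

df = pd.DataFrame({"outcome": outcome,
                   "treated": treated,
                   "centered": dep_score - cutoff})

# Allow different slopes on each side of the cutoff; the treated coefficient
# estimates the jump in outcomes at the threshold.
model = smf.ols("outcome ~ treated + centered + treated:centered", data=df).fit()
print(model.params["treated"])
```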

Controlling Confounders in a Quasi-Experimental Design

Accounting for confounding variables is a challenging but essential task for a quasi-experimental design.

In a true experiment, the random assignment process equalizes confounders across the groups to nullify their overall effect. It’s the gold standard because it works on all confounders, known and unknown.

Unfortunately, the lack of random assignment can allow differences between the groups to exist before the intervention. These confounding factors might ultimately explain the results rather than the intervention.

Consequently, researchers must use other methods to equalize the groups roughly using matching and cutoff values or statistically adjust for preexisting differences they measure to reduce the impact of confounders.

A key strength of quasi-experiments is their frequent use of “pre-post testing.” This approach involves conducting initial tests before collecting data to check for preexisting differences between groups that could impact the study’s outcome. By identifying these variables early on and including them as covariates, researchers can more effectively control potential confounders in their statistical analysis.

Additionally, researchers frequently track outcomes before and after the intervention to better understand the context for changes they observe.

Statisticians consider these methods to be less effective than randomization. Hence, quasi-experiments fall somewhere in the middle when it comes to internal validity , or how well the study can identify causal relationships versus mere correlation . They’re more conclusive than correlational studies but not as solid as true experiments.

In conclusion, quasi-experimental designs offer researchers a versatile and practical approach when random assignment is not feasible. This methodology bridges the gap between controlled experiments and observational studies, providing a valuable tool for investigating cause-and-effect relationships in real-world settings. Researchers can address ethical and logistical constraints by understanding and leveraging the different types of quasi-experimental designs while still obtaining insightful and meaningful results.

Cook, T. D., & Campbell, D. T. (1979).  Quasi-experimentation: Design & analysis issues in field settings . Boston, MA: Houghton Mifflin


8.2 Quasi-experimental and pre-experimental designs

Learning Objectives

  • Identify and describe the various types of quasi-experimental designs
  • Distinguish true experimental designs from quasi-experimental and pre-experimental designs
  • Identify and describe the various types of quasi-experimental and pre-experimental designs

As we discussed in the previous section, time, funding, and ethics may limit a researcher’s ability to conduct a true experiment. For researchers in the medical sciences and social work, conducting a true experiment could require denying needed treatment to clients, which is a clear ethical violation. Even those whose research may not involve the administration of needed medications or treatments may be limited in their ability to conduct a classic experiment. When true experiments are not possible, researchers often use quasi-experimental designs.

Quasi-experimental designs

Quasi-experimental designs are similar to true experiments, but they lack random assignment to experimental and control groups. Quasi-experimental designs have a comparison group that is similar to a control group except assignment to the comparison group is not determined by random assignment. The most basic of these quasi-experimental designs is the nonequivalent comparison groups design (Rubin & Babbie, 2017).  The nonequivalent comparison group design looks a lot like the classic experimental design, except it does not use random assignment. In many cases, these groups may already exist. For example, a researcher might conduct research at two different agency sites, one of which receives the intervention and the other does not. No one was assigned to treatment or comparison groups. Those groupings existed prior to the study. While this method is more convenient for real-world research, it is less likely that the groups are comparable than if they had been determined by random assignment. Perhaps the treatment group has a characteristic that is unique–for example, higher income or different diagnoses–that makes the treatment more effective.

Quasi-experiments are particularly useful in social welfare policy research. Social welfare policy researchers often look for what are termed natural experiments , or situations in which comparable groups are created by differences that already occur in the real world. Natural experiments are a feature of the social world that allows researchers to use the logic of experimental design to investigate the connection between variables. For example, Stratmann and Wille (2016) were interested in the effects of a state healthcare policy called Certificate of Need on the quality of hospitals. They clearly could not randomly assign states to adopt one set of policies or another. Instead, researchers used hospital referral regions, or the areas from which hospitals draw their patients, that spanned across state lines. Because the hospitals were in the same referral region, researchers could be pretty sure that the client characteristics were pretty similar. In this way, they could classify patients in experimental and comparison groups without dictating state policy or telling people where to live.

quasi experimental pre post study

Matching is another approach in quasi-experimental design for assigning people to experimental and comparison groups. It begins with researchers thinking about what variables are important in their study, particularly demographic variables or attributes that might impact their dependent variable. Individual matching involves pairing participants with similar attributes. Then, the matched pair is split—with one participant going to the experimental group and the other to the comparison group. An ex post facto control group , in contrast, is when a researcher matches individuals after the intervention is administered to some participants. Finally, researchers may engage in aggregate matching , in which the comparison group is determined to be similar on important variables.
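As a toy illustration of individual matching, the sketch below greedily pairs each treated participant with the closest unmatched comparison candidate on a single attribute (age). Real studies typically match on several variables at once; the data here are hypothetical.

```python
# Illustrative sketch of individual (nearest-neighbor) matching on one attribute.
import numpy as np

rng = np.random.default_rng(3)
treated_ages = rng.normal(35, 6, 20)      # participants who received the intervention
pool_ages = rng.normal(40, 8, 100)        # candidates for the comparison group

available = list(range(len(pool_ages)))
pairs = []
for i, age in enumerate(treated_ages):
    # Greedy match: pick the closest still-unmatched comparison candidate.
    j = min(available, key=lambda k: abs(pool_ages[k] - age))
    available.remove(j)
    pairs.append((i, j))

matched_comparison = [pool_ages[j] for _, j in pairs]
print(np.mean(treated_ages), np.mean(matched_comparison))  # groups now closer on age
```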

Time series design

There are many different quasi-experimental designs in addition to the nonequivalent comparison group design described earlier. Describing all of them is beyond the scope of this textbook, but one more design is worth mentioning. The time series design uses multiple observations before and after an intervention. In some cases, experimental and comparison groups are used. In other cases where that is not feasible, a single experimental group is used. By using multiple observations before and after the intervention, the researcher can better understand the true value of the dependent variable in each participant before the intervention starts. Additionally, multiple observations afterwards allow the researcher to see whether the intervention had lasting effects on participants. Time series designs are similar to single-subjects designs, which we will discuss in Chapter 15.
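One common way to analyze a time series design is a segmented (interrupted time series) regression, with terms for the underlying trend, the immediate level change at the intervention, and any change in slope afterward. The sketch below assumes monthly observations and hypothetical values; it is one option among several.

```python
# Sketch of a segmented (interrupted time series) regression for a time series design.
# Variable names and simulated values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
months = np.arange(24)
post = (months >= 12).astype(int)                  # intervention begins at month 12
time_since = np.where(post == 1, months - 12, 0)   # months elapsed since the intervention

# Simulated outcome: gentle upward trend, a level drop of 6 at the intervention.
outcome = 50 + 0.4 * months - 6 * post + 0.1 * time_since + rng.normal(0, 1.5, months.size)

df = pd.DataFrame({"outcome": outcome, "month": months,
                   "post": post, "time_since": time_since})
model = smf.ols("outcome ~ month + post + time_since", data=df).fit()
# "post" estimates the immediate level change; "time_since" any change in trend.
print(model.params[["post", "time_since"]])
```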

Pre-experimental design

When true experiments and quasi-experiments are not possible, researchers may turn to a pre-experimental design (Campbell & Stanley, 1963).  Pre-experimental designs are called such because they often happen as a pre-cursor to conducting a true experiment.  Researchers want to see if their interventions will have some effect on a small group of people before they seek funding and dedicate time to conduct a true experiment. Pre-experimental designs, thus, are usually conducted as a first step towards establishing the evidence for or against an intervention. However, this type of design comes with some unique disadvantages, which we’ll describe below.

A commonly used type of pre-experiment is the one-group pretest post-test design . In this design, pre- and posttests are both administered, but there is no comparison group to which to compare the experimental group. Researchers may be able to make the claim that participants receiving the treatment experienced a change in the dependent variable, but they cannot begin to claim that the change was the result of the treatment without a comparison group.   Imagine if the students in your research class completed a questionnaire about their level of stress at the beginning of the semester.  Then your professor taught you mindfulness techniques throughout the semester.  At the end of the semester, she administers the stress survey again.  What if levels of stress went up?  Could she conclude that the mindfulness techniques caused stress?  Not without a comparison group!  If there was a comparison group, she would be able to recognize that all students experienced higher stress at the end of the semester than the beginning of the semester, not just the students in her research class.
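Analytically, a one-group pretest post-test design usually comes down to a paired pre/post comparison, as in the hypothetical sketch below. Note that even a statistically significant change cannot be attributed to the intervention without a comparison group.

```python
# Sketch of the paired pre/post comparison used in a one-group pretest post-test design.
# The stress scores below are hypothetical.
from scipy import stats

pre_stress  = [22, 30, 27, 35, 28, 31, 25, 29]   # start of semester
post_stress = [28, 33, 30, 38, 27, 36, 29, 34]   # end of semester, after mindfulness training

result = stats.ttest_rel(post_stress, pre_stress)
print(result.statistic, result.pvalue)
# Even a significant change cannot be attributed to the training itself:
# without a comparison group, end-of-semester stress is a plausible alternative explanation.
```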

In cases where the administration of a pretest is cost prohibitive or otherwise not possible, a one-shot case study design might be used. In this instance, no pretest is administered, nor is a comparison group present. If we wished to measure the impact of a natural disaster, such as Hurricane Katrina for example, we might conduct a pre-experiment by identifying a community that was hit by the hurricane and then measuring the levels of stress in the community. Researchers using this design must be extremely cautious about making claims regarding the effect of the treatment or stimulus. They have no idea what the levels of stress in the community were before the hurricane hit, nor can they compare the stress levels to a community that was not affected by the hurricane. Nonetheless, this design can be useful for exploratory studies aimed at testing a measure or the feasibility of further study.

In our example of the study of the impact of Hurricane Katrina, a researcher might choose to examine the effects of the hurricane by identifying a group from a community that experienced the hurricane and a comparison group from a similar community that had not been hit by the hurricane. This study design, called a static group comparison, has the advantage of including a comparison group that did not experience the stimulus (in this case, the hurricane). Unfortunately, the design only uses posttests, so it is not possible to know if the groups were comparable before the stimulus or intervention. As you might have guessed from our example, static group comparisons are useful in cases where a researcher cannot control or predict whether, when, or how the stimulus is administered, as in the case of natural disasters.

As implied by the preceding examples where we considered studying the impact of Hurricane Katrina, experiments, quasi-experiments, and pre-experiments do not necessarily need to take place in the controlled setting of a lab. In fact, many applied researchers rely on experiments to assess the impact and effectiveness of various programs and policies. You might recall our discussion of arresting perpetrators of domestic violence in Chapter 2, which is an excellent example of an applied experiment. Researchers did not subject participants to conditions in a lab setting; instead, they applied their stimulus (in this case, arrest) to some subjects in the field and they also had a control group in the field that did not receive the stimulus (and therefore were not arrested).

Key Takeaways

  • Quasi-experimental designs do not use random assignment.
  • Comparison groups are used in quasi-experiments.
  • Matching is a way of improving the comparability of experimental and comparison groups.
  • Quasi-experimental designs and pre-experimental designs are often used when experimental designs are impractical.
  • Quasi-experimental and pre-experimental designs may be easier to carry out, but they lack the rigor of true experiments.
  • Aggregate matching – when the comparison group is determined to be similar to the experimental group along important variables
  • Comparison group – a group in quasi-experimental design that does not receive the experimental treatment; it is similar to a control group except assignment to the comparison group is not determined by random assignment
  • Ex post facto control group – a control group created when a researcher matches individuals after the intervention is administered
  • Individual matching – pairing participants with similar attributes for the purpose of assignment to groups
  • Natural experiments – situations in which comparable groups are created by differences that already occur in the real world
  • Nonequivalent comparison group design – a quasi-experimental design similar to a classic experimental design but without random assignment
  • One-group pretest post-test design – a pre-experimental design that applies an intervention to one group but also includes a pretest
  • One-shot case study – a pre-experimental design that applies an intervention to only one group without a pretest
  • Pre-experimental designs – a variation of experimental design that lacks the rigor of experiments and is often used before a true experiment is conducted
  • Quasi-experimental design – designs that lack random assignment to experimental and control groups
  • Static group design – uses an experimental group and a comparison group, without random assignment and pretesting
  • Time series design – a quasi-experimental design that uses multiple observations before and after an intervention


Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Separate-Sample Pretest-Posttest Design: An Introduction

The separate-sample pretest-posttest design is a type of quasi-experiment where the outcome of interest is measured 2 times: once before and once after an intervention, each time on a separate group of randomly chosen participants.

The difference between the pretest and posttest measures will estimate the intervention’s effect on the outcome.

The intervention can be:

  • A medical treatment
  • A training or an exposure to some factor
  • A policy change, etc.

Separate-sample pretest-posttest design

Characteristics of the separate-sample pretest-posttest design:

  • Data from the pretest and posttest come from different groups.
  • Participants are randomly assigned to each group (which makes the outcome of the pretest and posttest comparable).
  • All study participants receive the intervention (i.e. there is no control group).
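Because the pretest and posttest come from different, randomly drawn samples, the analysis is an independent-samples comparison rather than a paired one. The sketch below uses hypothetical scores.

```python
# Sketch of the analysis for a separate-sample pretest-posttest design:
# two independent, randomly assigned samples, each measured once. Scores are hypothetical.
import numpy as np
from scipy import stats

pretest_group  = [54, 61, 58, 50, 63, 57, 55, 60]   # sample measured before the intervention
posttest_group = [65, 70, 62, 68, 72, 66, 64, 69]   # a different sample, measured after

effect_estimate = np.mean(posttest_group) - np.mean(pretest_group)
test = stats.ttest_ind(posttest_group, pretest_group, equal_var=False)
print(effect_estimate, test.pvalue)
```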

Advantages of the separate-sample pretest-posttest design:

The benefit of using a separate-sample pretest-posttest design is that it avoids some of the most common biases that other quasi-experimental designs suffer from.

Here are some of these advantages:

1. Avoids testing bias

Definition: Testing effect is the influence of the pretest itself on the posttest (regardless of the intervention). Testing bias can happen when the mood, experience, or awareness of the participants taking the pretest is affected, which in turn will affect their posttest outcome.

Example: Asking people about a psychological or family problem in a pretest may affect their mood in a way that influences the posttest. Or when the response rate of the posttest declines after taking a long and time-consuming pretest.

How it is avoided: Perhaps the biggest advantage of using a separate-sample design is that the pretest cannot affect the posttest, since different groups of participants are measured each time.

2. Avoids regression to the mean

Definition: Regression to the mean happens when participants are included in the study based on their extreme pretest scores. The problem will be that their posttest measurement will naturally become less extreme, an effect that can be mistaken for that of the intervention.

Example: Some of the top 10 scorers on a first round in a sport competition will most likely lose their top 10 ranking in the second round, because their performance will naturally regress towards the mean. In simple terms, an extreme score is hard to sustain over time.

How it is avoided: Since the posttest participants are not the same as those measured on the pretest, this rules out their inclusion in the study based on their unusual pretest scores, therefore avoiding the regression problem.

3. Avoids selection bias

Definition: Selection bias happens when compared groups are not similar regarding some basic characteristics, which can offer an alternative explanation of the outcome, and therefore bias the relationship between the intervention and the outcome.

Example: If participants who were less interested in receiving the intervention were somehow more prevalent in the pretest compared to the posttest group, then the outcome of the posttest and the pretest cannot be compared anymore because of selection bias.

How it is avoided: In a separate-sample pretest-posttest design, selection bias can be ruled out as an explanation since the 2 groups were made comparable through randomization.

4. Avoids loss to follow-up

Definition: Loss to follow-up can cause serious problems if participants who were lost to follow-up differ in some important characteristics from those who stayed in the study.

Example: In studies where participants are followed over time, those who did not feel any improvement may not return to follow-up, and therefore the effect of the intervention will be biased high.

How it is avoided: Separate-sample pretest-posttest studies are protected against such effect as each group must be measured only once, and therefore there is no follow-up of participants over time.

Limitations of the separate-sample pretest-posttest design:

For each limitation below, we will discuss how it threatens the validity of the study, as well as how to control it by manipulating the design (adding observations or changing their timing). Statistical techniques can also be used to control these limitations, but they will not be discussed here.

1. History

Definition: History refers to any event other than the intervention that takes place between the pretest and the posttest and has the potential to affect the outcome of the posttest, therefore biasing the study.

Example: When studying the effect of a medical intervention on weight loss, an outside event such as the launch of a documentary that has the potential to change the diet of the study participants can co-occur with the intervention, and become a source of bias.

How to control it: If the resources are available, repeating the study 2 or more times at different time periods makes History less likely to bias the study, as it would be highly unlikely for a biasing event to occur every time.

2. Maturation

Definition: Maturation is any natural or biological trend that can offer an alternative explanation of the outcome other than the intervention.

Example: Participants growing older in the time period between the pretest and posttest may offer an alternative explanation to an intervention for smoking cessation.

How to control it: Adding another pretest measurement can expose natural trends and thus control for maturation.

3. Mortality

Definition: When the pretest and posttest are separated by a long time period, some participants may become unavailable by the time the posttest is administered. If those participants differ systematically from those who are still available, then the pretest and posttest groups are no longer comparable.

Example: Over time, patients who become severely sick from a certain medical condition are more likely to be hospitalized and therefore become unavailable for the posttest, creating a source of bias.

How to control it: Taking an additional posttest measurement of the group who received the pretest will eliminate Mortality effects as it provides a measurement of the same type of participants available for the posttest.

4. Instrumentation

Definition: Instrumentation effect refers to changes in the measuring instrument that may account for the observed difference between pretest and posttest results. Note that sometimes the measuring instrument is the researchers themselves who are recording the outcome.

Example: Using 2 interviewers, one for the pretest and another one for the posttest may introduce instrumentation bias as they may have different levels of interest, or different measuring skills that can affect the outcome of interest.

How to control it: Use a group of interviewers randomly assigned to participants.

Finally, adding a control group to the separate-sample pretest-posttest design is highly recommended when possible, as it controls for History, Maturation, Mortality, and Instrumentation at the same time.

Example of a study that used the separate-sample pretest-posttest design:

Lynch and Johnson used a separate-sample pretest-posttest design to evaluate the effect of an educational seminar on 24 medical residents regarding practice-management issues.

A questionnaire that assesses the residents’ knowledge on the subject was used as a pre- and posttest.

The advantages of using a separate-sample pretest-posttest design in this case were:

  • Ease of feasibility: Since the questionnaire was time-consuming, giving participants the opportunity to be tested just once was important to get a high response rate (in this case 80%).
  • The testing effect was controlled: The participants’ familiarity with the questions asked in the pretest did not affect the posttest.

The study concluded that there was a statistically significant improvement of the knowledge of residents after the seminar.

  • Campbell DT, Stanley JC. Experimental and Quasi-Experimental Designs for Research . Wadsworth; 1963.

Further reading

  • Experimental vs Quasi-Experimental Design
  • Understand Quasi-Experimental Design Through an Example
  • One-Group Posttest Only Design
  • One-Group Pretest-Posttest Design
  • Posttest-Only Control Group Design
  • Static Group Comparison Design
  • Matched Pairs Design
  • Randomized Block Design


Quasi-Experimental Design (Pre-Test and Post-Test Studies) in Prehospital and Disaster Research

  • PMID: 31767051
  • DOI: 10.1017/S1049023X19005053



The Use and Interpretation of Quasi-Experimental Studies in Medical Informatics


Anthony D. Harris, Jessina C. McGregor, Eli N. Perencevich, Jon P. Furuno, Jingkun Zhu, Dan E. Peterson, Joseph Finkelstein, The Use and Interpretation of Quasi-Experimental Studies in Medical Informatics, Journal of the American Medical Informatics Association , Volume 13, Issue 1, January 2006, Pages 16–23, https://doi.org/10.1197/jamia.M1749


Quasi-experimental study designs, often described as nonrandomized, pre-post intervention studies, are common in the medical informatics literature. Yet little has been written about the benefits and limitations of the quasi-experimental approach as applied to informatics studies. This paper outlines a relative hierarchy and nomenclature of quasi-experimental study designs that is applicable to medical informatics intervention studies. In addition, the authors performed a systematic review of two medical informatics journals, the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI), to determine the number of quasi-experimental studies published and how the studies are classified on the above-mentioned relative hierarchy. They hope that future medical informatics studies will implement higher level quasi-experimental study designs that yield more convincing evidence for causal links between medical informatics interventions and outcomes.

Quasi-experimental studies encompass a broad range of nonrandomized intervention studies. These designs are frequently used when it is not logistically feasible or ethical to conduct a randomized controlled trial. Examples of quasi-experimental studies follow. As one example of a quasi-experimental study, a hospital introduces a new order-entry system and wishes to study the impact of this intervention on the number of medication-related adverse events before and after the intervention. As another example, an informatics technology group is introducing a pharmacy order-entry system aimed at decreasing pharmacy costs. The intervention is implemented and pharmacy costs before and after the intervention are measured.

In medical informatics, the quasi-experimental, sometimes called the pre-post intervention, design often is used to evaluate the benefits of specific interventions. The increasing capacity of health care institutions to collect routine clinical data has led to the growing use of quasi-experimental study designs in the field of medical informatics as well as in other medical disciplines. However, little is written about these study designs in the medical literature or in traditional epidemiology textbooks. 1–3 In contrast, the social sciences literature is replete with examples of ways to implement and improve quasi-experimental studies. 4–6

In this paper, we review the different pretest-posttest quasi-experimental study designs, their nomenclature, and the relative hierarchy of these designs with respect to their ability to establish causal associations between an intervention and an outcome. The example of a pharmacy order-entry system aimed at decreasing pharmacy costs will be used throughout this article to illustrate the different quasi-experimental designs. We discuss limitations of quasi-experimental designs and offer methods to improve them. We also perform a systematic review of four years of publications from two informatics journals to determine the number of quasi-experimental studies, classify these studies into their application domains, determine whether the potential limitations of quasi-experimental studies were acknowledged by the authors, and place these studies into the above-mentioned relative hierarchy.

The authors reviewed articles and book chapters on the design of quasi-experimental studies. 4–10 Most of the reviewed articles referenced two textbooks that were then reviewed in depth. 4 , 6

Key advantages and disadvantages of quasi-experimental studies, as they pertain to the study of medical informatics, were identified. The potential methodological flaws of quasi-experimental medical informatics studies, which have the potential to introduce bias, were also identified. In addition, a summary table outlining a relative hierarchy and nomenclature of quasi-experimental study designs is described. In general, the higher the design is in the hierarchy, the greater the internal validity that the study traditionally possesses because the evidence of the potential causation between the intervention and the outcome is strengthened. 4

We then performed a systematic review of four years of publications from two informatics journals. First, we determined the number of quasi-experimental studies. We then classified these studies on the above-mentioned hierarchy. We also classified the quasi-experimental studies according to their application domain. The categories of application domains employed were based on categorization used by Yearbooks of Medical Informatics 1992–2005 and were similar to the categories of application domains employed by Annual Symposiums of the American Medical Informatics Association. 11 The categories were (1) health and clinical management; (2) patient records; (3) health information systems; (4) medical signal processing and biomedical imaging; (5) decision support, knowledge representation, and management; (6) education and consumer informatics; and (7) bioinformatics. Because the quasi-experimental study design has recognized limitations, we sought to determine whether authors acknowledged the potential limitations of this design. Examples of acknowledgment included mention of lack of randomization, the potential for regression to the mean, the presence of temporal confounders and the mention of another design that would have more internal validity.

All original scientific manuscripts published between January 2000 and December 2003 in the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI) were reviewed. One author (ADH) reviewed all the papers to identify the number of quasi-experimental studies. Other authors (ADH, JCM, JF) then independently reviewed all the studies identified as quasi-experimental. The three authors then convened as a group to resolve any disagreements in study classification, application domain, and acknowledgment of limitations.

What Is a Quasi-experiment?

Quasi-experiments are studies that aim to evaluate interventions but that do not use randomization. Similar to randomized trials, quasi-experiments aim to demonstrate causality between an intervention and an outcome. Quasi-experimental studies can use both preintervention and postintervention measurements as well as nonrandomly selected control groups.

Using this basic definition, it is evident that many published studies in medical informatics utilize the quasi-experimental design. Although the randomized controlled trial is generally considered to have the highest level of credibility with regard to assessing causality, in medical informatics, researchers often choose not to randomize the intervention for one or more reasons: (1) ethical considerations, (2) difficulty of randomizing subjects, (3) difficulty of randomizing by location (e.g., by ward), and (4) small available sample size. Each of these reasons is discussed below.

Ethical considerations typically will not allow random withholding of an intervention with known efficacy. Thus, if the efficacy of an intervention has not been established, a randomized controlled trial is the design of choice to determine efficacy. But if the intervention under study incorporates an accepted, well-established therapeutic intervention, or if the intervention has either questionable efficacy or safety based on previously conducted studies, then the ethical issues of randomizing patients are sometimes raised. In the area of medical informatics, it is often believed prior to an implementation that an informatics intervention will likely be beneficial and thus medical informaticians and hospital administrators are often reluctant to randomize medical informatics interventions. In addition, there is often pressure to implement the intervention quickly because of its believed efficacy, thus not allowing researchers sufficient time to plan a randomized trial.

For medical informatics interventions, it is often difficult to randomize the intervention to individual patients or to individual informatics users. So while this randomization is technically possible, it is underused and thus compromises the eventual strength of concluding that an informatics intervention resulted in an outcome. For example, randomly allowing only half of medical residents to use pharmacy order-entry software at a tertiary care hospital is a scenario that hospital administrators and informatics users may not agree to for numerous reasons.

Similarly, informatics interventions often cannot be randomized to individual locations. Using the pharmacy order-entry system example, it may be difficult to randomize use of the system to only certain locations in a hospital or portions of certain locations. For example, if the pharmacy order-entry system involves an educational component, then people may apply the knowledge learned to nonintervention wards, thereby potentially masking the true effect of the intervention. When a design using randomized locations is employed successfully, the locations may be different in other respects (confounding variables), and this further complicates the analysis and interpretation.

In situations where it is known that only a small sample size will be available to test the efficacy of an intervention, randomization may not be a viable option. Randomization is beneficial because on average it tends to evenly distribute both known and unknown confounding variables between the intervention and control group. However, when the sample size is small, randomization may not adequately accomplish this balance. Thus, alternative design and analytical methods are often used in place of randomization when only small sample sizes are available.

What Are the Threats to Establishing Causality When Using Quasi-experimental Designs in Medical Informatics?

The lack of random assignment is the major weakness of the quasi-experimental study design. Associations identified in quasi-experiments meet one important requirement of causality since the intervention precedes the measurement of the outcome. Another requirement is that the outcome can be demonstrated to vary statistically with the intervention. Unfortunately, statistical association does not imply causality, especially if the study is poorly designed. Thus, in many quasi-experiments, one is most often left with the question: “Are there alternative explanations for the apparent causal association?” If these alternative explanations are credible, then the evidence of causation is less convincing. These rival hypotheses, or alternative explanations, arise from principles of epidemiologic study design.

Shadish et al. 4 outline nine threats to internal validity, which are listed in Table 1. Internal validity is defined as the degree to which observed changes in outcomes can be correctly inferred to be caused by an exposure or an intervention. In quasi-experimental studies of medical informatics, we believe that the methodological principles that most often result in alternative explanations for the apparent causal effect include (a) difficulty in measuring or controlling for important confounding variables, particularly unmeasured confounding variables, which can be viewed as a subset of the selection threat in Table 1; and (b) results being explained by the statistical principle of regression to the mean. Each of these latter two principles is discussed in turn.

Threats to Internal Validity

1. Ambiguous temporal precedence: Lack of clarity about whether intervention occurred before outcome
2. Selection: Systematic differences over conditions in respondent characteristics that could also cause the observed effect
3. History: Events occurring concurrently with intervention could cause the observed effect
4. Maturation: Naturally occurring changes over time could be confused with a treatment effect
5. Regression: When units are selected for their extreme scores, they will often have less extreme subsequent scores, an occurrence that can be confused with an intervention effect
6. Attrition: Loss of respondents can produce artifactual effects if that loss is correlated with intervention
7. Testing: Exposure to a test can affect scores on subsequent exposures to that test
8. Instrumentation: The nature of a measurement may change over time or conditions
9. Interactive effects: The impact of an intervention may depend on the level of another intervention

Adapted from Shadish et al. 4

An inability to sufficiently control for important confounding variables arises from the lack of randomization. A variable is a confounding variable if it is associated with the exposure of interest and is also associated with the outcome of interest; the confounding variable leads to a situation where a causal association between a given exposure and an outcome is observed as a result of the influence of the confounding variable. For example, in a study aiming to demonstrate that the introduction of a pharmacy order-entry system led to lower pharmacy costs, there are a number of important potential confounding variables (e.g., severity of illness of the patients, knowledge and experience of the software users, other changes in hospital policy) that may have differed in the preintervention and postintervention time periods ( Fig. 1 ). In a multivariable regression, the first confounding variable could be addressed with severity of illness measures, but the second confounding variable would be difficult if not nearly impossible to measure and control. In addition, potential confounding variables that are unmeasured or immeasurable cannot be controlled for in nonrandomized quasi-experimental study designs and can only be properly controlled by the randomization process in randomized controlled trials.

Figure 1. Example of confounding. To get the true effect of the intervention of interest, we need to control for the confounding variable.
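To make the regression adjustment described above concrete, the sketch below fits a simple model on simulated data. The variable names, the numbers, and the use of the statsmodels library are illustrative assumptions, not part of the original study. It shows how a crude pre/post comparison can misstate the intervention effect when a measured confounder (here, severity of illness) shifts between periods, and how including that confounder changes the estimate; unmeasured confounders, of course, cannot be handled this way.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
post = np.repeat([0, 1], n)                                  # 0 = preintervention period, 1 = postintervention
severity = rng.normal(5 + 0.5 * post, 1.0)                   # severity of illness drifts upward in the post period
cost = 1000 + 150 * severity - 80 * post + rng.normal(0, 100, 2 * n)   # true intervention effect: -80 per admission

df = pd.DataFrame({"cost": cost, "post": post, "severity": severity})
crude = smf.ols("cost ~ post", data=df).fit()                # ignores the confounder
adjusted = smf.ols("cost ~ post + severity", data=df).fit()  # adjusts for measured severity

print("crude estimate:   ", round(crude.params["post"], 1))
print("adjusted estimate:", round(adjusted.params["post"], 1))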

Another important threat to establishing causality is regression to the mean. 12–14 This widespread statistical phenomenon can result in wrongly concluding that an effect is due to the intervention when in reality it is due to chance. The phenomenon was first described in 1886 by Francis Galton who measured the adult height of children and their parents. He noted that when the average height of the parents was greater than the mean of the population, the children tended to be shorter than their parents, and conversely, when the average height of the parents was shorter than the population mean, the children tended to be taller than their parents.

In medical informatics, what often triggers the development and implementation of an intervention is a rise in the rate above the mean or norm. For example, increasing pharmacy costs and adverse events may prompt hospital informatics personnel to design and implement pharmacy order-entry systems. If this rise in costs or adverse events is really just an extreme observation that is still within the normal range of the hospital's pharmaceutical costs (i.e., the mean pharmaceutical cost for the hospital has not shifted), then the statistical principle of regression to the mean predicts that these elevated rates will tend to decline even without intervention. However, often informatics personnel and hospital administrators cannot wait passively for this decline to occur. Therefore, hospital personnel often implement one or more interventions, and if a decline in the rate occurs, they may mistakenly conclude that the decline is causally related to the intervention. In fact, an alternative explanation for the finding could be regression to the mean.
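The following minimal simulation (all numbers are assumptions, not data from the article) illustrates regression to the mean: if an "intervention" is triggered whenever monthly pharmacy costs exceed a threshold, the following months look better even though the underlying cost process never changed.

import numpy as np

rng = np.random.default_rng(42)
costs = rng.normal(100_000, 10_000, size=10_000)   # a stable monthly cost process; no intervention ever occurs

flagged = costs[:-1] > 115_000                     # months extreme enough to trigger an "intervention"
print("mean of flagged months:  ", round(costs[:-1][flagged].mean()))
print("mean of following months:", round(costs[1:][flagged].mean()))   # falls back toward 100,000 by chance alone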

What Are the Different Quasi-experimental Study Designs?

In the social sciences literature, quasi-experimental studies are divided into four study design groups 4 , 6 :

Quasi-experimental designs without control groups

Quasi-experimental designs that use control groups but no pretest

Quasi-experimental designs that use control groups and pretests

Interrupted time-series designs

There is a relative hierarchy within these categories of study designs, with category D studies being sounder than those in categories C, B, or A in terms of establishing causality. Thus, if feasible from a design and implementation point of view, investigators should aim to design studies that fall into the higher-rated categories. Shadish et al. 4 discuss 17 possible designs, with seven designs falling into category A, three in category B, six in category C, and one major design in category D. In our review, we determined that most medical informatics quasi-experiments could be characterized by 11 of these 17 designs (six study designs in category A, one in category B, three in category C, and one in category D); the remaining designs were not used or were not feasible in the medical informatics literature. Thus, for simplicity, we have summarized the 11 study designs most relevant to medical informatics research in Table 2.

Relative Hierarchy of Quasi-experimental Designs

Quasi-experimental Study Design | Design Notation
A. Quasi-experimental designs without control groups
            1. The one-group posttest-only design | X O1
            2. The one-group pretest-posttest design | O1 X O2
            3. The one-group pretest-posttest design using a double pretest | O1 O2 X O3
            4. The one-group pretest-posttest design using a nonequivalent dependent variable | (O1a, O1b) X (O2a, O2b)
            5. The removed-treatment design | O1 X O2 O3 [remove X] O4
            6. The repeated-treatment design | O1 X O2 [remove X] O3 X O4
B. Quasi-experimental designs that use a control group but no pretest
            1. Posttest-only design with nonequivalent groups | Intervention group: X O1; Control group: O2
C. Quasi-experimental designs that use control groups and pretests
            1. Untreated control group with dependent pretest and posttest samples | Intervention group: O1a X O2a; Control group: O1b O2b
            2. Untreated control group design with dependent pretest and posttest samples using a double pretest | Intervention group: O1a O2a X O3a; Control group: O1b O2b O3b
            3. Untreated control group design with dependent pretest and posttest samples using switching replications | Intervention group: O1a X O2a O3a; Control group: O1b O2b X O3b
D. Interrupted time-series design
            1. Multiple pretest and posttest observations spaced at equal intervals of time | O1 O2 O3 O4 O5 X O6 O7 O8 O9 O10

O = Observational Measurement; X = Intervention Under Study. Time moves from left to right.

In general, studies in category D are of higher study design quality than studies in category C, which are higher than those in category B, which are higher than those in category A. Also, as one moves down within each category, the studies become of higher quality, e.g., study 5 in category A is of higher study design quality than study 4, etc.

The nomenclature and relative hierarchy were used in the systematic review of four years of JAMIA and the IJMI. Similar to the relative hierarchy that exists in the evidence-based literature that assigns a hierarchy to randomized controlled trials, cohort studies, case-control studies, and case series, the hierarchy in Table 2 is not absolute in that in some cases, it may be infeasible to perform a higher level study. For example, there may be instances where an A6 design established stronger causality than a B1 design. 15–17

Quasi-experimental Designs without Control Groups

The One-Group Posttest-Only Design

Design notation: X O1

Here, X is the intervention and O is the outcome variable (this notation is continued throughout the article). In this study design, an intervention (X) is implemented and a posttest observation (O1) is taken. For example, X could be the introduction of a pharmacy order-entry intervention and O1 could be the pharmacy costs following the intervention. This design is the weakest of the quasi-experimental designs that are discussed in this article. Without any pretest observations or a control group, there are multiple threats to internal validity. Unfortunately, this study design is often used in medical informatics when new software is introduced since it may be difficult to have pretest measurements due to time, technical, or cost constraints.

The One-Group Pretest-Posttest Design

Design notation: O1 X O2

This is a commonly used study design. A single pretest measurement is taken (O1), an intervention (X) is implemented, and a posttest measurement is taken (O2). In this instance, period O1 frequently serves as the “control” period. For example, O1 could be pharmacy costs prior to the intervention, X could be the introduction of a pharmacy order-entry system, and O2 could be the pharmacy costs following the intervention. Including a pretest provides some information about what the pharmacy costs would have been had the intervention not occurred.
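As a concrete, entirely hypothetical illustration of how data from an O1 X O2 design are often analyzed, the sketch below compares mean pharmacy cost per admission before and after an order-entry system using simulated numbers; the two-sample t test is a common choice for such data, not an analysis prescribed by the article.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre_costs = rng.normal(1200, 300, size=400)    # O1: cost per admission before the order-entry system
post_costs = rng.normal(1100, 300, size=400)   # O2: cost per admission after the order-entry system

t_stat, p_value = stats.ttest_ind(pre_costs, post_costs)
print(f"mean change: {post_costs.mean() - pre_costs.mean():.1f}, p = {p_value:.4f}")
# Even a "significant" drop only shows association; history, maturation, and
# regression to the mean remain plausible alternative explanations.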

The One-Group Pretest-Posttest Design Using a Double Pretest

Design notation: O1 O2 X O3

The advantage of this study design over A2 is that adding a second pretest prior to the intervention helps provide evidence that can be used to refute the phenomenon of regression to the mean and confounding as alternative explanations for any observed association between the intervention and the posttest outcome. For example, in a study where a pharmacy order-entry system led to lower pharmacy costs (O3 < O2 and O1), if one had two preintervention measurements of pharmacy costs (O1 and O2) and they were both elevated, this would suggest that there was a decreased likelihood that O3 is lower due to confounding and regression to the mean. Similarly, extending this study design by increasing the number of measurements postintervention could also help to provide evidence against confounding and regression to the mean as alternate explanations for observed associations.

The One-Group Pretest-Posttest Design Using a Nonequivalent Dependent Variable

Design notation: (O1a, O1b) X (O2a, O2b)

This design involves the inclusion of a nonequivalent dependent variable (b) in addition to the primary dependent variable (a). Variables a and b should assess similar constructs; that is, the two measures should be affected by similar factors and confounding variables except for the effect of the intervention. Variable a is expected to change because of the intervention X, whereas variable b is not. Taking our example, variable a could be pharmacy costs and variable b could be the length of stay of patients. If our informatics intervention is aimed at decreasing pharmacy costs, we would expect to observe a decrease in pharmacy costs but not in the average length of stay of patients. However, a number of important confounding variables, such as severity of illness and knowledge of software users, might affect both outcome measures. Thus, if the average length of stay did not change following the intervention but pharmacy costs did, then the data are more convincing than if just pharmacy costs were measured.

The Removed-Treatment Design

Design notation: O1 X O2 O3 [remove X] O4

This design adds a third posttest measurement (O3) to the one-group pretest-posttest design and then removes the intervention before a final measure (O4) is made. The advantage of this design is that it allows one to test hypotheses about the outcome in the presence of the intervention and in the absence of the intervention. Thus, if one predicts a decrease in the outcome between O1 and O2 (after implementation of the intervention), then one would predict an increase in the outcome between O3 and O4 (after removal of the intervention). One caveat is that if the intervention is thought to have persistent effects, then O4 needs to be measured after these effects are likely to have disappeared. For example, a study would be more convincing if it demonstrated that pharmacy costs decreased after pharmacy order-entry system introduction (O2 and O3 less than O1) and that when the order-entry system was removed or disabled, the costs increased (O4 greater than O2 and O3 and closer to O1). In addition, there are often ethical issues in this design in terms of removing an intervention that may be providing benefit.

The Repeated-Treatment Design

Design notation: O1 X O2 [remove X] O3 X O4

The advantage of this design is that it demonstrates reproducibility of the association between the intervention and the outcome. For example, the association is more likely to be causal if one demonstrates that a pharmacy order-entry system results in decreased pharmacy costs when it is first introduced and again when it is reintroduced following an interruption of the intervention. As with design A5, one must assume that the effect of the intervention is transient, an assumption that most often holds for medical informatics interventions. Because subjects may serve as their own controls in this design, it may also yield greater statistical efficiency with fewer subjects.

Quasi-experimental Designs That Use a Control Group but No Pretest

Posttest-Only Design with Nonequivalent Groups

Design notation: Intervention group: X O1; Control group: O2

An intervention X is implemented for one group and compared to a second group. The use of a comparison group helps prevent certain threats to validity including the ability to statistically adjust for confounding variables. Because in this study design, the two groups may not be equivalent (assignment to the groups is not by randomization), confounding may exist. For example, suppose that a pharmacy order-entry intervention was instituted in the medical intensive care unit (MICU) and not the surgical intensive care unit (SICU). O1 would be pharmacy costs in the MICU after the intervention and O2 would be pharmacy costs in the SICU after the intervention. The absence of a pretest makes it difficult to know whether a change has occurred in the MICU. Also, the absence of pretest measurements comparing the SICU to the MICU makes it difficult to know whether differences in O1 and O2 are due to the intervention or due to other differences in the two units (confounding variables).

Quasi-experimental Designs That Use Control Groups and Pretests

The reader should note that with all the studies in this category, the intervention is not randomized. The control groups chosen are comparison groups. Obtaining pretest measurements on both the intervention and control groups allows one to assess the initial comparability of the groups. The assumption is that the more similar the intervention and control groups are at pretest, the smaller the likelihood that important confounding variables differ between the two groups.

Untreated Control Group with Dependent Pretest and Posttest Samples

Design notation: Intervention group: O1a X O2a; Control group: O1b O2b

The use of both a pretest and a comparison group makes it easier to avoid certain threats to validity. However, because the two groups are nonequivalent (assignment to the groups is not by randomization), selection bias may exist. Selection bias exists when selection results in differences in unit characteristics between conditions that may be related to outcome differences. For example, suppose that a pharmacy order-entry intervention was instituted in the MICU and not the SICU. If preintervention pharmacy costs in the MICU (O1a) and SICU (O1b) are similar, it suggests that it is less likely that there are differences in the important confounding variables between the two units. If MICU postintervention costs (O2a) are less than preintervention MICU costs (O1a), but SICU costs (O1b) and (O2b) are similar, this suggests that the observed outcome may be causally related to the intervention.
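One common way to summarize data from this design, although the article does not prescribe a particular analysis, is to compare the pre-to-post change in the intervention unit with the change in the comparison unit (a difference-in-differences style calculation). The sketch below uses made-up MICU and SICU cost figures purely for illustration.

# Hypothetical mean pharmacy costs per admission (assumed numbers, not study data).
micu_pre, micu_post = 1250.0, 1050.0   # O1a, O2a: intervention unit
sicu_pre, sicu_post = 1230.0, 1220.0   # O1b, O2b: untreated comparison unit

micu_change = micu_post - micu_pre     # change in the intervention group
sicu_change = sicu_post - sicu_pre     # change attributable to shared temporal trends
effect_estimate = micu_change - sicu_change

print(f"MICU change {micu_change:+.0f}, SICU change {sicu_change:+.0f}, "
      f"estimated intervention effect {effect_estimate:+.0f}")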

Untreated Control Group Design with Dependent Pretest and Posttest Samples Using a Double Pretest

Design notation: Intervention group: O1a O2a X O3a; Control group: O1b O2b O3b

In this design, the pretests are administered at two different times. The main advantage of this design is that it controls for potentially different time-varying confounding effects in the intervention group and the comparison group. In our example, measuring points O1 and O2 would allow for the assessment of preintervention, time-dependent changes in pharmacy costs (e.g., due to differences in the experience of residents) in both the intervention and control groups, and of whether those changes were similar or different.

Untreated Control Group Design with Dependent Pretest and Posttest Samples Using Switching Replications

Design notation: Intervention group: O1a X O2a O3a; Control group: O1b O2b X O3b

With this study design, the researcher administers an intervention at a later time to a group that initially served as a nonintervention control. The advantage of this design over design C2 is that it demonstrates reproducibility in two different settings. This study design is not limited to two groups; in fact, the study results have greater validity if the intervention effect is replicated in different groups at multiple times. In the example of a pharmacy order-entry system, one could implement the system in the MICU first and then, at a later time, in the SICU. This design is often very applicable to medical informatics, where new technology and software are typically introduced or made available gradually.

Interrupted Time-Series Designs

Design notation: O1 O2 O3 O4 O5 X O6 O7 O8 O9 O10

An interrupted time-series design is one in which a string of consecutive observations equally spaced in time is interrupted by the imposition of a treatment or intervention. The advantage of this design is that, with multiple measurements both pre- and postintervention, it is easier to address and control for confounding and regression to the mean. In addition, the multiple data points support more robust statistical analysis, including the ability to detect a change in slope or intercept as a result of the intervention, in addition to a change in the mean values. 18 A change in intercept could represent an immediate effect, while a change in slope could represent a gradual effect of the intervention on the outcome. In the example of a pharmacy order-entry system, O1 through O5 could represent monthly pharmacy costs before the intervention and O6 through O10 monthly pharmacy costs after the introduction of the pharmacy order-entry system. Interrupted time-series designs can be further strengthened by incorporating many of the design features previously mentioned in the other categories (such as removal of the treatment, inclusion of a nonequivalent dependent variable, or the addition of a control group).
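As a brief illustration of the segmented regression approach described by Wagner et al. (reference 18), the sketch below fits a model with a baseline trend, a level change, and a slope change to simulated monthly costs; the data, variable names, and use of the statsmodels library are assumptions for illustration only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

months = np.arange(1, 25)                          # 24 monthly observations; intervention after month 12
post = (months > 12).astype(int)                   # 1 in the postintervention period
time_after = np.where(post == 1, months - 12, 0)   # months elapsed since the intervention

rng = np.random.default_rng(7)
cost = 100 + 0.5 * months - 8 * post - 1.0 * time_after + rng.normal(0, 2, months.size)

df = pd.DataFrame({"cost": cost, "month": months, "post": post, "time_after": time_after})
model = smf.ols("cost ~ month + post + time_after", data=df).fit()

# 'post' estimates the immediate change in level; 'time_after' estimates the change in slope.
print(model.params[["post", "time_after"]])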

Systematic Review Results

The results of the systematic review are in Table 3 . In the four-year period of JAMIA publications that the authors reviewed, 25 quasi-experimental studies among 22 articles were published. Of these 25, 15 studies were of category A, five studies were of category B, two studies were of category C, and no studies were of category D. Although there were no studies of category D (interrupted time-series analyses), three of the studies classified as category A had data collected that could have been analyzed as an interrupted time-series analysis. Nine of the 25 studies (36%) mentioned at least one of the potential limitations of the quasi-experimental study design. In the four-year period of IJMI publications reviewed by the authors, nine quasi-experimental studies among eight manuscripts were published. Of these nine, five studies were of category A, one of category B, one of category C, and two of category D. Two of the nine studies (22%) mentioned at least one of the potential limitations of the quasi-experimental study design.

Systematic Review of Four Years of Quasi-experimental Designs in JAMIA and the IJMI

Study | Journal | Informatics Topic Category | Quasi-experimental Design | Limitation of Quasi-design Mentioned in Article
Staggers and Kobus | JAMIA | 1 | Counterbalanced study design | Yes
Schriger et al. | JAMIA | 1 | A5 | Yes
Patel et al. | JAMIA | 2 | A5 (study 1, phase 1) | No
Patel et al. | JAMIA | 2 | A2 (study 1, phase 2) | No
Borowitz | JAMIA | 1 | A2 | No
Patterson and Harasym | JAMIA | 6 | C1 | Yes
Rocha et al. | JAMIA | 5 | A2 | Yes
Lovis et al. | JAMIA | 1 | Counterbalanced study design | No
Hersh et al. | JAMIA | 6 | B1 | No
Makoul et al. | JAMIA | 2 | B1 | Yes
Ruland | JAMIA | 3 | B1 | No
DeLusignan et al. | JAMIA | 1 | A1 | No
Mekhjian et al. | JAMIA | 1 | A2 (study design 1) | Yes
Mekhjian et al. | JAMIA | 1 | B1 (study design 2) | Yes
Ammenwerth et al. | JAMIA | 1 | A2 | No
Oniki et al. | JAMIA | 5 | C1 | Yes
Liederman and Morefield | JAMIA | 1 | A1 (study 1) | No
Liederman and Morefield | JAMIA | 1 | A2 (study 2) | No
Rotich et al. | JAMIA | 2 | A2* | No
Payne et al. | JAMIA | 1 | A1 | No
Hoch et al. | JAMIA | 3 | A2* | No
Laerum et al. | JAMIA | 1 | B1 | Yes
Devine et al. | JAMIA | 1 | Counterbalanced study design |
Dunbar et al. | JAMIA | 6 | A1 |
Lenert et al. | JAMIA | 6 | A2 |
Koide et al. | IJMI | 5 | D4 | No
Gonzalez-Heydrich et al. | IJMI | 2 | A1 | No
Anantharaman and Swee Han | IJMI | 3 | B1 | No
Chae et al. | IJMI | 6 | A2 | No
Lin et al. | IJMI | 3 | A1 | No
Mikulich et al. | IJMI | 1 | A2 | Yes
Hwang et al. | IJMI | 1 | A2 | Yes
Park et al. | IJMI | 1 | C2 | No
Park et al. | IJMI | 1 | D4 | No

JAMIA = Journal of the American Medical Informatics Association; IJMI = International Journal of Medical Informatics.

* Could have been analyzed as an interrupted time-series design.

In addition, three studies from JAMIA were based on a counterbalanced design. A counterbalanced design is a higher order study design than other studies in category A. The counterbalanced design is sometimes referred to as a Latin-square arrangement. In this design, all subjects receive all the different interventions but the order of intervention assignment is not random. 19 This design can only be used when the intervention is compared against some existing standard, for example, if a new PDA-based order entry system is to be compared to a computer terminal–based order entry system. In this design, all subjects receive the new PDA-based order entry system and the old computer terminal-based order entry system. The counterbalanced design is a within-participants design, where the order of the intervention is varied (e.g., one group is given software A followed by software B and another group is given software B followed by software A). The counterbalanced design is typically used when the available sample size is small, thus preventing the use of randomization. This design also allows investigators to study the potential effect of ordering of the informatics intervention.
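A minimal sketch of how such a counterbalanced assignment might be generated is shown below; the participant identifiers and system names are hypothetical, and the only point is that every participant receives both systems while the order is varied systematically rather than randomly.

participants = ["P01", "P02", "P03", "P04", "P05", "P06"]
orderings = ["PDA first, then terminal", "terminal first, then PDA"]

# Alternate the two orderings across the participant list (systematic, not random).
schedule = {pid: orderings[i % 2] for i, pid in enumerate(participants)}
for pid, order in schedule.items():
    print(pid, "->", order)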

Although quasi-experimental study designs are ubiquitous in the medical informatics literature, as evidenced by 34 studies in the past four years of the two informatics journals, little has been written about the benefits and limitations of the quasi-experimental approach. As we have outlined in this paper, a relative hierarchy and nomenclature of quasi-experimental study designs exist, with some designs being more likely than others to permit causal interpretations of observed associations. Strengths and limitations of a particular study design should be discussed when presenting data collected in the setting of a quasi-experimental study. Future medical informatics investigators should choose the strongest design that is feasible given the particular circumstances.

References

1. Rothman KJ, Greenland S. Modern epidemiology. Philadelphia: Lippincott–Raven Publishers, 1998.
2. Hennekens CH, Buring JE. Epidemiology in medicine. Boston: Little, Brown, 1987.
3. Szklo M, Nieto FJ. Epidemiology: beyond the basics. Gaithersburg, MD: Aspen Publishers, 2000.
4. Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin, 2002.
5. Trochim WMK. The research methods knowledge base. Cincinnati: Atomic Dog Publishing, 2001.
6. Cook TD, Campbell DT. Quasi-experimentation: design and analysis issues for field settings. Chicago: Rand McNally Publishing Company, 1979.
7. MacLehose RR, Reeves BC, Harvey IM, Sheldon TA, Russell IT, Black AM. A systematic review of comparisons of effect sizes derived from randomised and non-randomised studies. Health Technol Assess 2000;4:1–154.
8. Shadish WR, Heinsman DT. Experiments versus quasi-experiments: do they yield the same answer? NIDA Res Monogr 1997;170:147–64.
9. Grimshaw J, Campbell M, Eccles M, Steen N. Experimental and quasi-experimental designs for evaluating guideline implementation strategies. Fam Pract 2000;17(Suppl 1):S11–6.
10. Zwerling C, Daltroy LH, Fine LJ, Johnston JJ, Melius J, Silverstein BA. Design and conduct of occupational injury intervention studies: a review of evaluation strategies. Am J Ind Med 1997;32:164–79.
11. Haux RKC, editor. Yearbook of medical informatics 2005. Stuttgart: Schattauer Verlagsgesellschaft, 2005, p 563.
12. Morton V, Torgerson DJ. Effect of regression to the mean on decision making in health care. BMJ 2003;326:1083–4.
13. Bland JM, Altman DG. Regression towards the mean. BMJ 1994;308:1499.
14. Bland JM, Altman DG. Some examples of regression towards the mean. BMJ 1994;309:780.
15. Guyatt GH, Haynes RB, Jaeschke RZ, Cook DJ, Green L, Naylor CD, et al. Users' guides to the medical literature: XXV. Evidence-based medicine: principles for applying the users' guides to patient care. Evidence-Based Medicine Working Group. JAMA 2000;284:1290–6.
16. Harris RP, Helfand M, Woolf SH, Lohr KN, Mulrow CD, Teutsch SM, et al. Current methods of the US Preventive Services Task Force: a review of the process. Am J Prev Med 2001;20:21–35.
17. Harbour R, Miller J. A new system for grading recommendations in evidence based guidelines. BMJ 2001;323:334–6.
18. Wagner AK, Soumerai SB, Zhang F, Ross-Degnan D. Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther 2002;27:299–309.
19. Campbell DT. Counterbalanced design. In: Experimental and quasi-experimental designs for research. Chicago: Rand McNally College Publishing Company, 1963, 50–5.
20. Staggers N, Kobus D. Comparing response time, errors, and satisfaction between text-based and graphical user interfaces during nursing order tasks. J Am Med Inform Assoc 2000;7:164–76.
21. Schriger DL, Baraff LJ, Buller K, Shendrikar MA, Nagda S, Lin EJ, et al. Implementation of clinical guidelines via a computer charting system: effect on the care of febrile children less than three years of age. J Am Med Inform Assoc 2000;7:186–95.
22. Patel VL, Kushniruk AW, Yang S, Yale JF. Impact of a computer-based patient record system on data collection, knowledge organization, and reasoning. J Am Med Inform Assoc 2000;7:569–85.
23. Borowitz SM. Computer-based speech recognition as an alternative to medical transcription. J Am Med Inform Assoc 2001;8:101–2.
24. Patterson R, Harasym P. Educational instruction on a hospital information system for medical students during their surgical rotations. J Am Med Inform Assoc 2001;8:111–6.
25. Rocha BH, Christenson JC, Evans RS, Gardner RM. Clinicians' response to computerized detection of infections. J Am Med Inform Assoc 2001;8:117–25.
26. Lovis C, Chapko MK, Martin DP, Payne TH, Baud RH, Hoey PJ, et al. Evaluation of a command-line parser-based order entry pathway for the Department of Veterans Affairs electronic patient record. J Am Med Inform Assoc 2001;8:486–98.
27. Hersh WR, Junium K, Mailhot M, Tidmarsh P. Implementation and evaluation of a medical informatics distance education program. J Am Med Inform Assoc 2001;8:570–84.
28. Makoul G, Curry RH, Tang PC. The use of electronic medical records: communication patterns in outpatient encounters. J Am Med Inform Assoc 2001;8:610–5.
29. Ruland CM. Handheld technology to improve patient care: evaluating a support system for preference-based care planning at the bedside. J Am Med Inform Assoc 2002;9:192–201.
30. De Lusignan S, Stephens PN, Adal N, Majeed A. Does feedback improve the quality of computerized medical records in primary care? J Am Med Inform Assoc 2002;9:395–401.
31. Mekhjian HS, Kumar RR, Kuehn L, Bentley TD, Teater P, Thomas A, et al. Immediate benefits realized following implementation of physician order entry at an academic medical center. J Am Med Inform Assoc 2002;9:529–39.
32. Ammenwerth E, Mansmann U, Iller C, Eichstadter R. Factors affecting and affected by user acceptance of computer-based nursing documentation: results of a two-year study. J Am Med Inform Assoc 2003;10:69–84.
33. Oniki TA, Clemmer TP, Pryor TA. The effect of computer-generated reminders on charting deficiencies in the ICU. J Am Med Inform Assoc 2003;10:177–87.
34. Liederman EM, Morefield CS. Web messaging: a new tool for patient-physician communication. J Am Med Inform Assoc 2003;10:260–70.
35. Rotich JK, Hannan TJ, Smith FE, Bii J, Odero WW, Vu N, Mamlin BW, et al. Installing and implementing a computer-based patient record system in sub-Saharan Africa: the Mosoriot Medical Record System. J Am Med Inform Assoc 2003;10:295–303.
36. Payne TH, Hoey PJ, Nichol P, Lovis C. Preparation and use of preconstructed orders, order sets, and order menus in a computerized provider order entry system. J Am Med Inform Assoc 2003;10:322–9.
37. Hoch I, Heymann AD, Kurman I, Valinsky LJ, Chodick G, Shalev V. Countrywide computer alerts to community physicians improve potassium testing in patients receiving diuretics. J Am Med Inform Assoc 2003;10:541–6.
38. Laerum H, Karlsen TH, Faxvaag A. Effects of scanning and eliminating paper-based medical records on hospital physicians' clinical work practice. J Am Med Inform Assoc 2003;10:588–95.
39. Devine EG, Gaehde SA, Curtis AC. Comparative evaluation of three continuous speech recognition software packages in the generation of medical reports. J Am Med Inform Assoc 2000;7:462–8.
40. Dunbar PJ, Madigan D, Grohskopf LA, Revere D, Woodward J, Minstrell J, et al. A two-way messaging system to enhance antiretroviral adherence. J Am Med Inform Assoc 2003;10:11–5.
41. Lenert L, Munoz RF, Stoddard J, Delucchi K, Bansod A, Skoczen S, et al. Design and pilot evaluation of an Internet smoking cessation program. J Am Med Inform Assoc 2003;10:16–20.
42. Koide D, Ohe K, Ross-Degnan D, Kaihara S. Computerized reminders to monitor liver function to improve the use of etretinate. Int J Med Inf 2000;57:11–9.
43. Gonzalez-Heydrich J, DeMaso DR, Irwin C, Steingard RJ, Kohane IS, Beardslee WR. Implementation of an electronic medical record system in a pediatric psychopharmacology program. Int J Med Inf 2000;57:109–16.
44. Anantharaman V, Swee Han L. Hospital and emergency ambulance link: using IT to enhance emergency pre-hospital care. Int J Med Inf 2001;61:147–61.
45. Chae YM, Heon Lee J, Hee Ho S, Ja Kim H, Hong Jun K, Uk Won J. Patient satisfaction with telemedicine in home health services for the elderly. Int J Med Inf 2001;61:167–73.
46. Lin CC, Chen HS, Chen CY, Hou SM. Implementation and evaluation of a multifunctional telemedicine system in NTUH. Int J Med Inf 2001;61:175–87.
47. Mikulich VJ, Liu YC, Steinfeldt J, Schriger DL. Implementation of clinical guidelines through an electronic medical record: physician usage, satisfaction and assessment. Int J Med Inf 2001;63:169–78.
48. Hwang JI, Park HA, Bakken S. Impact of a physician's order entry (POE) system on physicians' ordering patterns and patient length of stay. Int J Med Inf 2002;65:213–23.
49. Park WS, Kim JS, Chae YM, Yu SH, Kim CY, Kim SA, et al. Does the physician order-entry system increase the revenue of a general hospital? Int J Med Inf 2003;71:25–32.

Dr. Harris was supported by NIH grants K23 AI01752-01A1 and R01 AI60859-01A1. Dr. Perencevich was supported by a VA Health Services Research and Development Service (HSR&D) Research Career Development Award (RCD-02026-1). Dr. Finkelstein was supported by NIH grant RO1 HL71690.


A Quasi Experimental Study to Assess the Effectiveness of Planned Teaching Program in Promoting Knowledge Regarding Sexual Health Among Adolescent Girls at Selected School of Jhajjar Haryana

Published in Journal of Nursing… 29 November 2021. DOI: 10.33140/jnh.06.04.03


8.2 Quasi-experimental and pre-experimental designs

Learning Objectives

  • Identify and describe the various types of quasi-experimental and pre-experimental designs
  • Distinguish true experimental designs from quasi-experimental and pre-experimental designs

As we discussed in the previous section, time, funding, and ethics may limit a researcher’s ability to conduct a true experiment. For researchers in the medical sciences and social work, conducting a true experiment could require denying needed treatment to clients, which is a clear ethical violation. Even those whose research may not involve the administration of needed medications or treatments may be limited in their ability to conduct a classic experiment. When true experiments are not possible, researchers often use quasi-experimental designs.

Quasi-experimental designs

Quasi-experimental designs are similar to true experiments, but they lack random assignment to experimental and control groups. Quasi-experimental designs have a comparison group that is similar to a control group, except that assignment to the comparison group is not determined by random assignment. The most basic of these quasi-experimental designs is the nonequivalent comparison groups design (Rubin & Babbie, 2017). The nonequivalent comparison group design looks a lot like the classic experimental design, except it does not use random assignment. In many cases, these groups may already exist. For example, a researcher might conduct research at two different agency sites, one of which receives the intervention and the other does not. No one was assigned to treatment or comparison groups; those groupings existed prior to the study. While this method is more convenient for real-world research, it is less likely that the groups are comparable than if they had been formed by random assignment. Perhaps the treatment group has a unique characteristic, for example higher income or a different mix of diagnoses, that makes the treatment appear more effective.

Quasi-experiments are particularly useful in social welfare policy research. Social welfare policy researchers often look for what are termed natural experiments, or situations in which comparable groups are created by differences that already occur in the real world. Natural experiments are a feature of the social world that allows researchers to use the logic of experimental design to investigate the connection between variables. For example, Stratmann and Wille (2016) were interested in the effects of a state healthcare policy called Certificate of Need on the quality of hospitals. They clearly could not randomly assign states to adopt one set of policies or another. Instead, the researchers used hospital referral regions, or the areas from which hospitals draw their patients, that spanned state lines. Because the hospitals were in the same referral region, researchers could be reasonably confident that patient characteristics were similar across groups. In this way, they could classify patients into experimental and comparison groups without dictating state policy or telling people where to live.


Matching is another approach in quasi-experimental design for assigning people to experimental and comparison groups. It begins with researchers thinking about which variables are important in their study, particularly demographic variables or attributes that might impact their dependent variable. Individual matching involves pairing participants with similar attributes; each matched pair is then split, with one participant going to the experimental group and the other to the comparison group. An ex post facto control group, in contrast, is created when a researcher matches individuals after the intervention has already been administered to some participants. Finally, researchers may engage in aggregate matching, in which the comparison group is chosen so that, in aggregate, it is similar to the experimental group on important variables (for example, average age or the proportion of women).
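The sketch below illustrates individual matching on exact attributes; the data, attribute names, and helper function are hypothetical, and real studies often use more sophisticated approaches (for example, matching on a propensity score).

# Hypothetical participants described by attributes thought to affect the outcome.
treated = [
    {"id": 1, "age_group": "18-30", "income": "low"},
    {"id": 2, "age_group": "31-50", "income": "high"},
]
untreated_pool = [
    {"id": 10, "age_group": "31-50", "income": "high"},
    {"id": 11, "age_group": "18-30", "income": "low"},
    {"id": 12, "age_group": "51+", "income": "low"},
]

def find_match(person, pool, keys=("age_group", "income")):
    """Return (and remove) the first pool member with identical attributes."""
    for candidate in pool:
        if all(candidate[k] == person[k] for k in keys):
            pool.remove(candidate)
            return candidate
    return None  # no exact match available; the pair is dropped or the criteria are relaxed

pairs = [(person, find_match(person, untreated_pool)) for person in treated]
print(pairs)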

Time series design

There are many different quasi-experimental designs in addition to the nonequivalent comparison group design described earlier. Describing all of them is beyond the scope of this textbook, but one more design is worth mentioning. The time series design uses multiple observations before and after an intervention. In some cases, experimental and comparison groups are used. In other cases where that is not feasible, a single experimental group is used. By using multiple observations before and after the intervention, the researcher can better understand the true value of the dependent variable in each participant before the intervention starts. Additionally, multiple observations afterwards allow the researcher to see whether the intervention had lasting effects on participants. Time series designs are similar to single-subjects designs, which we will discuss in Chapter 15.

Pre-experimental design

When true experiments and quasi-experiments are not possible, researchers may turn to a pre-experimental design (Campbell & Stanley, 1963). Pre-experimental designs are called such because they often happen as a precursor to conducting a true experiment. Researchers want to see whether their interventions have some effect on a small group of people before they seek funding and dedicate the time needed to conduct a true experiment. Pre-experimental designs, thus, are usually conducted as a first step towards establishing the evidence for or against an intervention. However, this type of design comes with some unique disadvantages, which we’ll describe below.

A commonly used type of pre-experiment is the one-group pretest post-test design. In this design, pre- and posttests are both administered, but there is no comparison group to which to compare the experimental group. Researchers may be able to claim that participants receiving the treatment experienced a change in the dependent variable, but they cannot claim that the change was the result of the treatment without a comparison group. Imagine that the students in your research class completed a questionnaire about their level of stress at the beginning of the semester, your professor taught you mindfulness techniques throughout the semester, and at the end of the semester she administered the stress survey again. What if levels of stress went up? Could she conclude that the mindfulness techniques caused the increase in stress? Not without a comparison group! If there were a comparison group, she would be able to recognize that all students experienced higher stress at the end of the semester than at the beginning, not just the students in her research class.
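To make the stress example concrete, the sketch below contrasts the two readings of the same pre-post change; the numbers are invented purely for illustration.

```python
# Hypothetical mean stress scores at the start and end of the semester.
research_class = {"pre": 18.0, "post": 24.0}   # received mindfulness training
other_class = {"pre": 18.5, "post": 27.0}      # comparison class, no training

change_treated = research_class["post"] - research_class["pre"]
change_comparison = other_class["post"] - other_class["pre"]

# With only the one-group pretest post-test, all we see is the +6.0 increase,
# which could wrongly be attributed to the mindfulness training.
print(f"Change in the research class: {change_treated:+.1f}")

# With a comparison group, the difference-in-differences suggests the trained
# class actually rose *less* than untrained students did.
print(f"Change in the comparison class: {change_comparison:+.1f}")
print(f"Difference-in-differences: {change_treated - change_comparison:+.1f}")
```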

In cases where the administration of a pretest is cost prohibitive or otherwise not possible, a one-shot case study design might be used. In this instance, no pretest is administered, nor is a comparison group present. If we wished to measure the impact of a natural disaster, such as Hurricane Katrina, we might conduct a pre-experiment by identifying a community that was hit by the hurricane and then measuring the levels of stress in the community. Researchers using this design must be extremely cautious about making claims regarding the effect of the treatment or stimulus. They have no idea what the levels of stress in the community were before the hurricane hit, nor can they compare the stress levels to those of a community that was not affected by the hurricane. Nonetheless, this design can be useful for exploratory studies aimed at testing a measure or the feasibility of further study.

In our example of the study of the impact of Hurricane Katrina, a researcher might choose to examine the effects of the hurricane by identifying a group from a community that experienced the hurricane and a comparison group from a similar community that had not been hit by the hurricane. This study design, called a static group comparison, has the advantage of including a comparison group that did not experience the stimulus (in this case, the hurricane). Unfortunately, the design uses only posttests, so it is not possible to know whether the groups were comparable before the stimulus or intervention. As you might have guessed from our example, static group comparisons are useful in cases where a researcher cannot control or predict whether, when, or how the stimulus is administered, as in the case of natural disasters.

As implied by the preceding examples where we considered studying the impact of Hurricane Katrina, experiments, quasi-experiments, and pre-experiments do not necessarily need to take place in the controlled setting of a lab. In fact, many applied researchers rely on experiments to assess the impact and effectiveness of various programs and policies. You might recall our discussion of arresting perpetrators of domestic violence in Chapter 2, which is an excellent example of an applied experiment. Researchers did not subject participants to conditions in a lab setting; instead, they applied their stimulus (in this case, arrest) to some subjects in the field and they also had a control group in the field that did not receive the stimulus (and therefore were not arrested).

Key Takeaways

  • Quasi-experimental designs do not use random assignment.
  • Comparison groups are used in quasi-experiments.
  • Matching is a way of improving the comparability of experimental and comparison groups.
  • Quasi-experimental designs and pre-experimental designs are often used when experimental designs are impractical.
  • Quasi-experimental and pre-experimental designs may be easier to carry out, but they lack the rigor of true experiments.

Glossary

  • Aggregate matching – when the comparison group is determined to be similar to the experimental group along important variables
  • Comparison group – a group in quasi-experimental design that does not receive the experimental treatment; it is similar to a control group except assignment to the comparison group is not determined by random assignment
  • Ex post facto control group – a control group created when a researcher matches individuals after the intervention is administered
  • Individual matching – pairing participants with similar attributes for the purpose of assignment to groups
  • Natural experiments – situations in which comparable groups are created by differences that already occur in the real world
  • Nonequivalent comparison group design – a quasi-experimental design similar to a classic experimental design but without random assignment
  • One-group pretest post-test design – a pre-experimental design that applies an intervention to one group but also includes a pretest
  • One-shot case study – a pre-experimental design that applies an intervention to only one group without a pretest
  • Pre-experimental designs – a variation of experimental design that lacks the rigor of experiments and is often used before a true experiment is conducted
  • Quasi-experimental design – a design that lacks random assignment to experimental and control groups
  • Static group comparison – a pre-experimental design that uses an experimental group and a comparison group, without random assignment or pretesting
  • Time series design – a quasi-experimental design that uses multiple observations before and after an intervention


Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

J Adv Med Educ Prof. 2024 Jul;12(3). PMCID: PMC11336188

Roles of Two Learning Methods in the Perceived Competence of Surgery and Quality of Teaching: A Quasi-experimental Study among Operating Room Nursing Students

Sina Ghasemi

1 Student Research Committee, Hamadan University of Medical Sciences, Hamadan, Iran

Behzad Imani

2 Department of Operating Room, School of Paramedicine, Hamadan University of Medical Sciences, Hamadan, Iran

Alireza Jafarkhani

Hossein Hosseinefard

3 Department of Biostatistics, School of Public Health, Hamadan University of Medical Sciences, Hamadan, Iran

Introduction:

Nowadays, clinical courses are meticulously structured to give students essential opportunities to elevate their professional qualifications, so that patients’ safety is protected and their conditions improve. Given the many challenges in the clinical environment of the operating room, this study was conducted to compare the impact of team-based and task-based learning methods in clinical settings on the perceived competence of surgery and the quality of training from the operating room nursing students’ point of view.

Methods:

This quasi-experimental study was conducted on fifty 5th-semester operating room technology students at Hamadan University of Medical Sciences in 2023. The students were selected using the convenience sampling method and placed in two educational groups (team-based and task-based) of 25 subjects each using the matching method. After implementing the training process in the operating room setting, data were collected using valid questionnaires of perceived competence in surgery (Cronbach's alpha=0.86) and quality of education (Cronbach's alpha=0.94). Data analysis was conducted at the descriptive and inferential level (including the independent t-test and analysis of covariance) using SPSS version 16.

Results:

Findings showed that the mean clinical training quality score was significantly higher in the team-based learning group than in the task-based group (P=0.014). Also, after the intervention, the perceived competence of surgery score was higher in the task-based learning group than in the team-based group, and the difference in the mean change of the competence score between the two groups was statistically significant (P<0.001).

Conclusion:

Based on the results, it is suggested that clinical instructors use a task-based learning method to increase the level of perceived competence of surgery among operating room nursing students.

Introduction

Nowadays, one of the desirable methods of learning in the medical sciences is learning skills in a clinical environment ( 1 ). Clinical education is crucial as it helps students apply theoretical knowledge to develop the skills necessary for patient care ( 2 ).

The operating room environment, considered the specialized clinical environment of operating room nursing students, always entails many challenges due to crowding, the variety of surgical operations, the presence of surgical and anesthesia groups, and so on. Even in this environment, opportunities must be provided for effective training while protecting patient safety ( 3 , 4 ). On the other hand, since the operating room is a high-risk environment for patients, effective training and the creation of clinical competence through clinical training are essential for operating room nursing students ( 5 ). Therefore, clinical courses should be meticulously structured to give students essential opportunities to elevate their professional qualifications. Clinical educators play a major role in achieving this goal by choosing the correct educational method ( 6 ).

Regarding different educational methods, educational psychologists believe that learning improves with more inclusive participation in the learning process and that its impact is more lasting. Therefore, experts emphasize the use of new student-centered methods ( 7 ). Accordingly, the World Health Organization (WHO), in its statement on clinical educators, recommends using active learning in the education process, choosing appropriate technologies and information, and encouraging students to learn experientially ( 8 ). In this regard, some previous studies have considered team-based and task-based learning as two student-oriented methods for increasing students' clinical skills and performance ( 6 , 9 ).

The team-based learning method is an active and student-centered educational strategy that allows students to apply conceptual knowledge in small groups ( 10 ). This educational method is intended to improve the quality of students' learning by increasing their problem-solving skills ( 11 ). In this method, students conduct various discussions around the educational goals the teacher sets, which ultimately increases their motivation, understanding, and mastery of the knowledge they have learned ( 12 ). The method generally consists of three stages: studying the basic material independently, assessing the basic understanding of the concepts by conducting individual and group readiness tests, and performing group activities as assignments ( 13 ). Using this method in clinical education can play a significant role in improving students' clinical reasoning ability ( 14 , 15 ). Also, the results of previous studies show that the use of team-based learning and teamwork is effective in creating interaction between students, resolving differing views about a patient, and facilitating the instructor's application of theoretical knowledge in the clinical environment ( 9 ). On the other hand, one of the problems of inexperienced surgical technologists is the lack of necessary skills due to fears that remain from their student years ( 16 ). Various studies show that participation in teamwork is an effective way for students to overcome their fear and acquire the qualifications relevant to their desired profession ( 17 ). Despite the advantages of this educational method, previous studies have noted that some students lack sufficient motivation and ability to solve clinical problems. However, the team-based learning method plays a significant role in reducing students' stress through participation in group activities and fairer assessment ( 15 ).

Task-based learning is one of the modern educational methods that is very popular today for achieving satisfactory clinical performance in students ( 8 ). In this educational method, the educational goals are set by the instructor based on the tasks of the health team, and learning is achieved when students perform those tasks in the clinical environment ( 18 ). The learning process consists of three main stages: first, the environment and requirements for learning are provided; then, various tasks are assigned to the student in line with the educational goals, and the instructor follows up and evaluates the student's performance; finally, if needed, training is repeated ( 19 ). In applying different training and learning rules, the student works as part of an organization and is asked to apply their knowledge and clinical skills in different situations and acquire the required professional competence ( 20 ). This active educational method increases students' motivation, encourages them to learn, and plays a major role in the quality of education by providing sufficient experience for students ( 21 , 22 ).

Operating room students, given the clinical nature of their field, should be trained in the clinical environment, and appropriate teaching techniques are needed to increase their competence and motivation to learn. Because no study has compared the effects of team-based and task-based learning methods on operating room nursing students, the question arises: which method is more effective for enhancing operating room nursing students' perceived competence of surgery and the quality of clinical training, team-based or task-based learning?

Study Design and Participants

This quasi-experimental study was conducted at Hamadan University of Medical Sciences in 2023. The statistical population included the operating room technology students of Hamadan University of Medical Sciences, among whom 50 students were selected using the convenience sampling method and based on the inclusion criteria. For the sample size, according to the results of Mirbagher Ajorpaz et al.'s study ( 23 ), the mean (standard deviation) of perceived competence before and after the intervention was 38.23 (2.59) and 46.84 (1.91) in the intervention group and 38.69 (2.54) and 42.59 (2.39) in the control group, respectively. Considering a type I error of 0.05 (α=0.05) and a power of 0.90 (1-β=0.90), the minimum sample size was 7 subjects in each group. To increase the reliability of the study, allow for possible sample loss, and increase the power of the statistical tests, 25 students were allocated to each group. The following equation was used to calculate the sample size.

\[ n = \frac{(Z_{1-\alpha/2} + Z_{1-\beta})^{2}\,(S_{1}^{2} + S_{2}^{2})}{d^{2}} \]
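Read back as code, the formula looks like the sketch below. Taking d as the post-test mean difference and using a two-sided alpha are assumptions made for illustration, so the result may differ slightly from the paper's reported minimum of 7 per group.

```python
import math

from scipy.stats import norm

def two_group_sample_size(s1, s2, d, alpha=0.05, power=0.90):
    """Per-group n = (Z_{1-alpha/2} + Z_{1-beta})^2 * (s1^2 + s2^2) / d^2."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) ** 2 * (s1 ** 2 + s2 ** 2) / d ** 2

# Post-intervention SDs from the cited study; d taken here as the post-test mean
# difference (46.84 - 42.59). Both choices are assumptions for illustration.
n = two_group_sample_size(s1=1.91, s2=2.39, d=46.84 - 42.59)
print(math.ceil(n))  # the exact value depends on the chosen d, alpha convention, and rounding
```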

The inclusion criteria for this research were studying in the 5th semester of operating room technology and willingness to participate in the study. The only exclusion criterion was absence from more than 2 training sessions.

Ethical Considerations

The study was approved by the Ethics Committee of Hamadan University of Medical Sciences with the code of IR.UMSHA.REC.1402.493. All participants were informed about the aim of the study and informed consent to conduct the research was obtained from them. Also, the research units were assured of the confidentiality of the information.

Data Collection

In this research, 3 questionnaires were used to collect the data:

1- Demographic information questionnaire included age, gender, marital status, student's grade point average (GPA), academic semester, and the number of credits passed by the student.

2- The Persian, revised version of the perceived surgical competency questionnaire measured the students' perceived competence in surgery. This questionnaire was first designed by Glipsey and Hamilton in Australia in 2009 with 8 subscales and 98 statements for final-year students and experts. The instrument was revised by Glipsey et al. in 2012 and named the Revised Scale of Perceived Competence in the Operating Room. It has 40 items and 6 subscales, including foundational skills and knowledge (9 items), leadership (8 items), collegiality (6 items), proficiency and expertise (6 items), empathy (5 items), and professional development (6 items). The reliability of this scale was determined using Cronbach's alpha, with a coefficient of 0.96 for the whole scale and between 0.81 and 0.89 for the subscales ( 24 ). In Iran, the scale was translated for the first time and its psychometric properties were assessed by Mirbagher Ajorpaz et al. among internship students of medical sciences universities; internal consistency was evaluated using Cronbach's alpha. The results of face validity, content validity (0.95 in the relevance dimension), construct validity (confirmatory and exploratory factor analysis), and scale reliability (0.86) showed that the localized Persian version of the six-factor PPCS-R (Perceived Perioperative Competence Scale-Revised) has 33 items and 5 subscales, including foundational skills and knowledge (7 items), leadership (9 items), collegiality (7 items), proficiency (4 items), and professional development (6 items), and that it has adequate validity and reliability. Like the original instrument, the Persian version uses a 5-point Likert scale with the response options never [1], rarely [2], sometimes [3], often [4], and always [5]. The score of this tool is reported quantitatively, the total score of the Persian version ranges from 33 to 163, and higher scores indicate greater competence ( 23 , 25 ).

3- The questionnaire designed by Bahadori et al. was used to evaluate the quality of education. This questionnaire has 28 items and four subscales: educational objectives and programs (11 items), the instructor's performance (9 items), interaction with students (4 items), and monitoring and assessment (4 items). The reliability of the instrument was assessed through the test–retest approach with a correlation coefficient of 0.92 and a Cronbach’s alpha of 0.94 ( 6 ). Each item is answered on a three-point scale of Yes, Somewhat, and No: "Yes" indicates a favorable situation and is scored 3, "Somewhat" indicates a relatively favorable situation and is scored 2, and "No" indicates an unfavorable situation and is scored 1.

It should be mentioned that the perceived surgical competency questionnaire was distributed twice: once in the first training session, to check the students' perceived surgical competence before the training started, and once in the last training session of both groups, when it was completed along with the education quality questionnaire. The clinical education quality questionnaire was therefore distributed only once, after the training course had been completed.

Intervention

To conduct the study and collect data, we placed the students in two learning groups (task-based and team-based) using the matching method; each group contained 25 students ( Figure 1 ).

Figure 1. Participant Flow

An instructor held the training sessions for both groups in 2 different periods, with 18 sessions of five hours each, in the operating room department of Besat Hospital, Hamadan. In this study, the students completed the Operating Room Technique 2 Internship course. According to the educational curriculum for this course, the students, under the direct supervision of the instructor, not only performed scrub and circular tasks accurately in general and specialized surgery (orthopedic, thoracic, neurology) but were also responsible for explaining the types of incisions and how to close the wound. Students also applied theoretical and practical knowledge in areas such as the anatomy of the surgical site (anatomy of different parts of the body, tissue layers, etc.), types of surgical incisions, and the tools and equipment related to exposure and suturing (dissection of layers, hemostasis, wound dressing), thereby acquiring the necessary skills. In the task-based learning group, at the beginning of each training session, the students' specific tasks in the areas of circular and scrub principles and techniques, surgical techniques, and surgical tools and equipment were taught theoretically and practically in the conference hall of the operating room department ( 18 ). The students were then asked to work in the main surgery rooms like operating room personnel and to contribute effectively to the surgical procedures performed by the surgical team. The instructor followed up on the performance of these tasks and, if necessary, retrained the student to perform a specific activity correctly.

In the team-based learning group, the students were required to review the relevant and specified materials in three areas of principles and techniques of the circular person and scrub, surgical techniques, and surgical tools and equipment before starting each internship session. At the beginning of each training session and in the conference hall of the operating room department, an individual test was conducted for 10 minutes on the specified materials. After collecting the individual test, a team test with similar concepts to the first test was held for 20 minutes.

After collecting the questions, the groups discussed and reviewed the test questions for 30 minutes with the instructor's guidance and feedback. Finally, after their allocation to the operating rooms, the students performed the designated activities related to the topic and objectives of the session under the supervision and with the cooperation of their fellow students, and they were encouraged to critique one another's work. Also, to motivate the students, they were evaluated based on team and individual activities at the end of each session, and the teams were ranked ( 26 ).

Statistical Analysis

Finally, data analysis was conducted using SPSS, version 16.0 (SPSS Inc., Chicago, Ill., USA) at a significance level of 0.05. First, the normality of the data was checked using the Kolmogorov-Smirnov and Shapiro-Wilk tests. Descriptive statistics included frequency, frequency percentage, mean, and standard deviation. Chi-square tests (or Fisher's exact test where needed) and independent t-tests were used to compare the distribution of qualitative and quantitative demographic characteristics between the two groups. In addition, scores between the two groups before and after the intervention were compared using the independent t-test, and scores within each group were compared using the paired t-test. When comparing the perceived competence scores between the two groups after the intervention, analysis of covariance was used to control for the scores before the intervention.
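A minimal sketch of the comparisons described here, using made-up pre/post scores rather than the study data; the column names and the use of SciPy and statsmodels (instead of SPSS) are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical perceived-competence scores for 25 students per group.
df = pd.DataFrame({
    "group": ["task"] * 25 + ["team"] * 25,
    "pre": np.concatenate([rng.normal(85, 9, 25), rng.normal(89, 13, 25)]),
})
df["post"] = df["pre"] + np.where(df["group"] == "task",
                                  rng.normal(40, 10, 50), rng.normal(22, 10, 50))

# Independent t-test: post scores between groups.
t_between, p_between = stats.ttest_ind(df.loc[df.group == "task", "post"],
                                       df.loc[df.group == "team", "post"])

# Paired t-test: pre vs. post within one group.
t_within, p_within = stats.ttest_rel(df.loc[df.group == "task", "pre"],
                                     df.loc[df.group == "task", "post"])

# ANCOVA: post score by group, controlling for the pre-intervention score.
ancova = smf.ols("post ~ pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))
print(f"between-group p = {p_between:.4f}, within-group paired p = {p_within:.4f}")
```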

Results

In this study, 50 undergraduate students of operating room technology were studied in two learning groups (task-based and team-based). The mean±standard deviation of the students' age was 22.56±1.28 years. 94% (n=47) of the students were single, and 6% (n=3) were married. 44% (n=22) of the students were male, and 56% (n=28) were female. There was no statistically significant difference between the two groups in gender distribution, marital status, mean age, or grade point average (GPA) ( Table 1 ).

Student’s demographic characteristics by study groups

Characteristic | Team-based learning | Task-based learning | P
Age | 22.72±1.33 | 22.40±1.22 | 0.382
Grade point average | 17.55±0.58 | 17.39±0.64 | 0.364
Marital status: Single | 92% (n=23) | 96% (n=24) | 0.99
Marital status: Married | 8% (n=2) | 4% (n=1) |
Gender: Male | 44% (n=11) | 44% (n=11) | 0.99
Gender: Female | 56% (n=14) | 56% (n=14) |

Based on the findings of our study, the mean±standard deviation of the total clinical training quality score in the task-based and team-based learning groups was equal to 71.32±6.50 and 76.72±8.36, respectively, and this difference was statistically significant based on independent t-test (P=0.014). Table 2 shows the mean and standard deviation of the clinical education quality score between the two study groups.

Comparison of the mean score of clinical education quality after the intervention by the study groups

Group | Mean±SD | P
Task-based learning | 71.32±6.50 | 0.014
Team-based learning | 76.72±8.36 |

Also, Table 3 shows the mean and standard deviation of the clinical education quality subscales.

Comparison of the mean scores of clinical education quality subscales after the intervention by the study groups

Subscale | Group | Mean±SD | P
Educational objectives and programs | Task-based learning | 27.08±3.30 | 0.022
Educational objectives and programs | Team-based learning | 29.04±2.47 |
Instructor's performance | Task-based learning | 24.28±2.23 | 0.156
Instructor's performance | Team-based learning | 26.24±6.43 |
Attitudes and behavior toward students | Task-based learning | 9.44±1.61 | 0.016
Attitudes and behavior toward students | Team-based learning | 10.44±1.19 |
Monitoring and assessment | Task-based learning | 10.52±1.50 | 0.173
Monitoring and assessment | Team-based learning | 11.00±0.87 |

Based on the results in Table 3, there was a statistically significant difference between the two groups in the scores of the educational objectives and programs subscale and the interaction with students subscale (P<0.05): the mean scores of these subscales were higher in the team-based learning group than in the task-based learning group. There was no statistically significant difference between the two groups in the mean scores of the instructor's performance and the monitoring and assessment subscales.

Comparison of the mean total score of perceived competence of surgery before intervention by the study groups

Group | Number | Mean±SD | P
Task-based learning | 25 | 85.64±9.21 | 0.34
Team-based learning | 25 | 88.76±13.43 |

Findings showed that before the intervention, the students' competence score in the team-based learning group was higher than in the task-based learning group, but based on the independent t-test this difference was not statistically significant (P=0.34). Table 4 shows the mean and standard deviation of the total score of perceived surgical competence in the two study groups before the intervention.

Table 5 shows the mean competency score in the two groups before and after the intervention, along with the results of the covariance analysis. Based on the analysis of covariance, the main effect of the pre-intervention perceived competence of surgery score was not significant (F(1,46)=1.749, P=0.193). Also, the interaction effect of the pre-intervention competence score and group was not statistically significant (F(1,46)=3.198, P=0.081).
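The non-significant pre-score-by-group interaction reported here corresponds to the homogeneity-of-regression-slopes check behind ANCOVA. A minimal sketch of that check on made-up data (not the study data) might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical pre/post competence scores for two groups of 25 students.
df = pd.DataFrame({
    "group": ["task"] * 25 + ["team"] * 25,
    "pre": np.concatenate([rng.normal(85, 9, 25), rng.normal(89, 13, 25)]),
})
df["post"] = df["pre"] + np.where(df["group"] == "task",
                                  rng.normal(40, 10, 50), rng.normal(22, 10, 50))

# Homogeneity of regression slopes: the pre x group interaction term should be
# non-significant before interpreting the ANCOVA group effect.
slopes_model = smf.ols("post ~ pre * C(group)", data=df).fit()
print(slopes_model.pvalues)  # look at the "pre:C(group)[T.team]" row
```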

Comparison of the mean scores of perceived competence of surgery, before and after the intervention by the study groups

Time | Group | Number | Mean±SD | P
Before intervention | Task-based learning | 25 | 85.64±9.21 | <0.001
Before intervention | Team-based learning | 25 | 88.76±13.43 |
After intervention | Task-based learning | 25 | 125.08±14.82 |
After intervention | Team-based learning | 25 | 111.36±12.14 |

According to Table 5, after the intervention the mean score of perceived competence of surgery was higher in the task-based group than in the team-based group, and this difference was statistically significant based on the analysis of covariance (P<0.001). Also, based on the independent t-test results, there were no statistically significant differences between the two groups in the mean competence subscale scores before the intervention.

Table 6 shows the mean and standard deviation of the scores of the competence subscales in the studied subjects before and after the intervention. After the intervention, the mean scores of foundational skills and knowledge, leadership, collegiality, proficiency, and professional development in the task-based group were higher than in the team-based learning group. Based on the results of the covariance analysis, with the pre-intervention scores as a covariate, the mean changes in the scores of foundational skills and knowledge, leadership, and collegiality differed significantly between the two groups (P<0.05).

Comparison of the mean scores of perceived competence of surgery subscales, before and after intervention by the study groups

Subscale | Time | Task-based learning | Team-based learning | P | P (ANCOVA)
Foundational skills and knowledge | Pre | 16.00±2.65 | 17.84±3.70 | 0.05 | 0.001
 | Post | 24.96±4.02 | 21.56±3.54 | 0.003 |
 | Change | 8.96±5.87 | 3.72±1.86 | |
 | P (within group) | <0.001 | <0.001 | |
Leadership | Pre | 22.20±4.24 | 23.76±4.39 | 0.208 | 0.001
 | Post | 34.16±4.62 | 29.92±4.17 | 0.001 |
 | Change | 11.96±6.74 | 6.16±4.05 | |
 | P (within group) | <0.001 | <0.001 | |
Collegiality | Pre | 17.76±2.82 | 19.20±3.00 | 0.087 | <0.001
 | Post | 28.00±3.21 | 24.40±2.55 | <0.001 |
 | Change | 10.24±4.30 | 5.20±2.36 | |
 | P (within group) | <0.001 | <0.001 | |
Proficiency | Pre | 11.68±2.58 | 11.68±2.34 | 0.999 | 0.218
 | Post | 14.84±2.61 | 13.96±2.34 | 0.215 |
 | Change | 3.16±4.43 | 2.28±2.68 | |
 | P (within group) | 0.002 | <0.001 | |
Professional development | Pre | 18.00±5.78 | 16.28±3.36 | 0.275 | 0.275
 | Post | 23.12±4.53 | 21.52±3.81 | 0.183 |
 | Change | 5.12±7.39 | 5.24±3.25 | |
 | P (within group) | 0.002 | <0.001 | |

Discussion

In this study, team-based and task-based learning methods were used in the operating room nursing students' internship, and their effect was measured by comparing perceived surgical competence scores and the quality of education. The results showed that, from the students' point of view, the team-based learning method had a higher quality than the task-based learning method. In Mohebi et al.'s study, students were more satisfied with the team-based learning method than with traditional educational methods ( 27 ). Also, in Gera et al.'s study, students stated that the team-based learning method was more interesting than the problem-based learning method and generally reported higher satisfaction with this new method ( 15 ). Considering that the team-based learning method provides an active learning environment in which students are more connected with one another than in the task-based learning method, and the tasks are performed in groups, the higher perceived quality of education in this method seems logical from the students' point of view.

Also, based on the results of this study, among the different subscales of education quality, the scores recorded in the team-based group were significantly higher than in the task-based group for the educational objectives and programs subscale and the attitudes and behavior toward students subscale; this can be explained by the team-based educational method facilitating the achievement of educational goals, increasing interaction between students and instructors, and providing immediate feedback to students ( 10 , 15 ).

The present study showed that both team-based and task-based learning methods could significantly increase the perceived competence of surgery. In line with this result, in research conducted by Bahadori et al., it was shown that the team-based learning method plays a significant role in increasing the perceived competence of surgery in the operating room students ( 6 ). Also, the results of previous studies have shown that the task-based learning method, by teaching the tasks that match their job duties, plays an important role in increasing their clinical competence ( 20 ); this is in line with the results of our study. However, the task-based learning method was more effective in increasing the perceived competence of surgery, especially in the subscales of foundational skills and knowledge, leadership, and collegiality, and this difference was statistically significant.

The greater impact of the task-based educational method on students' competence in the areas mentioned can be explained by the major role this type of education plays in connecting theory and practice, performing tasks and learning independently, and increasing students' communication skills and interactions ( 18 ). Given that students and operating room personnel work in an environment where technology and practice are constantly changing, acquiring the necessary qualifications and skills during the student period is very important ( 28 ). Therefore, considering that the lack of clinical skills and competencies among staff is one of the most important problems in the operating room department ( 29 ), which can result from students facing clinical stressors during their student days ( 30 ), using the training methods described here, especially the task-based learning method, can be very effective. Given the contradictory findings in previous studies, researchers are encouraged to investigate the effects of each of these educational methods in different dimensions and areas in future research; their usefulness for improving learning and increasing students' skills should be considered in educational guidelines.

Among the limitations of this study are the impossibility of blinding, the lack of a control group due to the limited number of qualified participants, and the absence of a pre-test for evaluating the quality of the training course, since the nature of the questionnaire items meant they could only be answered after the training course.

Conclusion

According to the results of the present study, both learning methods under investigation, especially the task-based learning method, have an effective role in increasing students' perceived competence in surgery. Therefore, given the essential need of medical environments, especially the operating room department, for competent personnel with sufficient clinical competence, using these methods to train operating room nursing students is recommended to prepare them to work in this sensitive and high-risk environment. Also, considering that the team-based learning method had a higher educational quality from the students' point of view, it is recommended that clinical instructors use this method to increase student satisfaction and, as a result, promote their effective activity in the clinical environment.

Acknowledgments

This study is the result of a research plan approved by Hamadan University of Medical Sciences. We would like to express our gratitude to the Research and Technology Vice-Chancellor of Hamadan University of Medical Sciences for the financial support of this study in the form of project number 140207256162.

Authors’ Contributions

All authors contributed to the discussion, read and approved the manuscript, and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Conflict of interest:

The authors declare no conflicts of interest.

  • Open access
  • Published: 26 August 2024

Using a flipped teaching strategy in undergraduate nursing education: students’ perceptions and performance

  • Shaherah Yousef Andargeery 1 ,
  • Hibah Abdulrahim Bahri 2 ,
  • Rania Ali Alhalwani 1 ,
  • Shorok Hamed Alahmedi 1 &
  • Waad Hasan Ali 1  

BMC Medical Education, volume 24, Article number: 926 (2024)


Flipped teaching is an interactive learning strategy that actively engages students in the learning process. Students have an active role in flipped teaching as they independently prepare for the class, and class time is dedicated to discussion and learning activities. Thus, flipped teaching is believed to promote students' critical thinking, communication, application of knowledge in real-life situations, and development as lifelong learners. The aim of this study was to describe students' perception of flipped teaching as an innovative learning strategy and to assess whether there was a difference in academic performance between students who participated in a traditional teaching strategy and those who participated in the flipped teaching intervention.

A quasi-experimental design with intervention and control groups was used, with a purposive sample of undergraduate nursing students.

A total of 355 students participated across both groups, and 70 out of 182 students in the intervention group completed the survey. The students perceived a moderate level of effectiveness of the flipped teaching classroom as a teaching strategy. The results revealed a statistically significant difference in mean student scores between the intervention group (M = 83.34, SD = 9.81) and the control group (M = 75.57, SD = 9.82).

Flipped teaching proved effective in improving students' learning experience and academic performance. Students also had a positive perception of flipped teaching, as it allowed them to develop essential nursing competencies. Future studies should consider measuring the influence of flipped teaching on students' ability to acquire nursing competencies, such as critical thinking and clinical reasoning.


Background

The successful outcome of individualized nursing care for each patient depends on effective communication between nurses and patients. Therapeutic communication consists of an exchange of verbal and non-verbal cues. It is a process in which the professional nurse uses specific techniques to help patients better understand their conditions and to promote patients' open communication of their thoughts and feelings in an environment of mutual respect and acceptance [ 1 ]. Effective educational preparation, continuing practice, and self-reflection about one's communication skills are all necessary for becoming proficient in therapeutic communication. Teaching therapeutic communication to nursing students explains the principles of verbal and non-verbal communication, which can be emphasized through classroom presentation, discussion, case studies, and role-play. It also helps them develop their ability to communicate effectively with patients, families, and other health care professionals. Nursing students should be able to think critically, conceptualize, apply, analyze, synthesize, and evaluate information generated by observation, experience, reflection, reasoning, and communication. Utilizing a traditional teaching strategy can make it challenging to meet these requirements [ 2 ]. Therefore, nurse educators should adopt unique teaching methods to help students learn and participate in their own education.

The “flipped classroom” is a pedagogical approach that has gained popularity worldwide to foster active learning. Active learning is defined as instructional strategies that actively engage students in their learning; it requires them to do meaningful learning activities and reflect on their actions [ 3 ]. Flipped teaching is a teaching strategy that promotes critical thinking and the application of information learned outside of the classroom to real-world situations and problem solving within the classroom. It allows educators to deliver lectures by using technologies such as video, audio files, PowerPoint, or other media, so that students can read or study those materials on their own at home before attending the class. As a result, discussions and debates about the materials take place throughout the lecture time. Some of the main principles of flipped teaching are increasing interaction and communication between students and educators, allocating more time for content mastery and understanding, granting opportunities for closing gaps and development, creating opportunities for active engagement, and providing immediate feedback [ 4 , 5 ]. This teaching/learning methodology is supported by constructivist learning theory. Constructivism is frequently described as a “problem-solving approach to learning” and requires a shift in the nurse educator’s epistemic assumptions about the teaching-learning process. Constructivism requires nursing educators to take on the role of a learning facilitator who encourages collaboration and teamwork and guides the students in building their knowledge. The underlying assumptions of constructivism include the idea that learning occurs as a result of social interaction in which the student actively creates their own knowledge, while prior experiences serve as the foundation for the learning process. The “flipped classroom” reflects that approach, which integrates student-centered learning [ 6 ].

The flipped teaching approach has students learn the material before lectures so that classroom time can be better used for cooperative learning. The literature discussed herein includes studies and case studies from primary through graduate schools. The literature indicated that students did see value in this pedagogical approach. Most of the studies found that flipped teaching was associated with better understanding of the material learned, higher academic achievement/performance, and potentially improved psychosocial factors (self-esteem, self-efficacy) that are associated with learning. Interestingly, one article pointed out that non-didactic material used in flipped teaching led to an increase in performance, while this did not happen with didactic material.

According to Jordan et al. [ 7 ], flipped teaching is a methodology that was developed as a response to advancements and changes in society, pedagogical approaches, and the rapid growth and advancement of technology; flipped teaching evolved from the peer instruction and just-in-time teaching approaches. Jordan and colleagues [ 7 ] state that independent learning happens outside the classroom prior to the lesson through instructional materials, while classroom time is maximized to foster an environment of collaborative learning. Qutob [ 8 ] states that flipped teaching enhances student learning and engagement and promotes greater independence for students.

Jordan et al. [ 7 ] studied the use of flipped teaching on the teaching of first- and fourth-year students’ discrete mathematics and graphs, models, and applications. Across all the classes studied (pilot, graph, model and application, practices, computer and business administration), students preferred flipped teaching compared to traditional teaching. According to Jordan et al. [ 7 ], the quality of the materials and exercises, and perceived difficulty of the course and material are important to student satisfaction with this method. Additionally, it was found that interactions with teachers and collaborative learning were positive. Likewise, Nguyen et al. [ 9 ] found students favorably perceive flipped teaching. This is especially true for those students who have an understanding that the method involves preparation and interaction and how these affect the outcomes. Vazquez and Chiang [ 10 ] discuss the lessons learned from observing two large Principles of Economics Classes at the University of Illinois; each class held 900 students. Vazquez and Chiang [ 10 ] found that the students preferred watching videos over reading the textbook. Secondly, students were better prepared after they watched pre-lecture videos compared to reading the textbook beforehand. The third finding involved the length of time pre-lecture work should take; the authors state pre-lecture work should be approximately 15 to 20 min of work ahead of each in-class session. The fourth finding is that the flipped teaching is a costly endeavor. Finally, it was found that having the students watch videos before the lectures reduced the time spent in class covering the material; the end result of this is students spend more time engaging in active learning than reviewing the material.

Qutob [ 8 ] studied the effects of flipped teaching using two hematology courses: one course was delivered using traditional teaching and the other using flipped teaching. Qutob [ 8 ] found that students in the flipped course not only performed better on academic tasks but also had more knowledge and understanding of the material covered compared to those in the traditional-format class. Additionally, Qutob [ 8 ] revealed that students in the flipped classroom found this style of learning more beneficial than traditional teaching. Moreover, Florence and Kolski [ 11 ] found an improvement in high school students' writing post-intervention. The authors further found that students were more engaged with the material and had a positive perception of the flipped model. Bahadur and Akhtar [ 12 ] conducted a meta-analysis of twelve research articles on flipped teaching; the studies demonstrated that students taught in the flipped teaching classroom performed better academically and were more interactive and engaged in the material than students taught through traditional methods. Galindo-Dominguez [ 13 ] conducted a systematic review of 61 studies and found evidence for the effectiveness of this approach compared to other pedagogical approaches with regard to academic achievement, improved self-efficacy, motivation, engagement, and cooperativeness. Webb et al. [ 14 ] studied 127 students taking microeconomics and found that the type of flipped material (didactic vs. non-didactic) influenced students' improvements. They found performance improvements for students who attended flipped classes using non-didactic pre-class material. At the same time, Webb et al. [ 14 ] found no improvement associated with flipped classes that used didactic pre-class materials, which are akin to traditional lectures.

In the context of nursing education, flipped teaching strategy has demonstrated promising and effective results in enhancing student motivation, performance, critical thinking skills, and learning quality. The flipped teaching classrooms were associated with high ratings in teaching evaluations, increased course satisfaction, improved critical thinking skills [ 15 ], improved exam results and learning quality [ 16 ] and high levels of personal, teaching, and pedagogical readiness [ 17 ]. Another study showed that student performance motivation scores especially in extrinsic goal orientation, control beliefs, and self-efficacy for learning and performance were significantly higher in the flipped teaching classroom when compared to the traditional classroom strategy [ 16 ].

Regardless of these important findings, there have been limited studies published about the flipped teaching strategy in Saudi Arabia, particularly among nursing students. Implementing the flipped teaching strategy in a therapeutic communication course was therefore expected to improve academic performance and retention of knowledge. The flipped teaching method fits well with the goals of a therapeutic communication course, as both focus on active learning and student engagement. This approach is well matched to a therapeutic communication course because it allows students to apply and practice the communication techniques and strategies they have learned outside of class from the flipped teaching materials, freeing up class time for interactive and experiential activities. The flipped teaching method can provide opportunities for students to apply effective interpersonal communication skills in class, allow more time to observe students practicing therapeutic communication techniques through role-play, group discussions, and case studies, and allow instructors to provide individualized feedback and real-time guidance to help students improve their interpersonal communication skills.

The current study aims to examine students' perception of a teaching innovation based on the use of the flipped teaching strategy in the therapeutic communication course, and to compare the academic performance of students who participated in a traditional teaching strategy with that of students who participated in the flipped teaching intervention.

Research hypotheses

Students who participated in the intervention group perceived a high level of effectiveness of the flipped teaching classroom as a teaching/learning strategy.

There is a significant difference in the mean scores of students’ academic performance between students who participate in a traditional teaching strategy (control group) when compared with those students who participate in flipped teaching classroom (intervention group).

Design of the study

A quantitative, quasi-experimental design was used in this study. The study involved implementing a flipped teaching strategy (the intervention) to examine the effectiveness of flipped teaching among participants in the intervention group and to test for a significant difference in the mean scores of student performance between the intervention and control groups.

The study was conducted at the College of Nursing at one of the educational universities located in Saudi Arabia.

A purposive sampling technique was used in this study. This sampling technique allows the researcher to target specific participants who have certain characteristics that are most relevant and informative for addressing the research questions. The advantages of purposive sampling lie in gathering in-depth, detailed, and contextual data from the most appropriate sources and ensuring that the study captures a more comprehensive understanding of the concept of interest by considering different viewpoints [ 18 ]. Participants were eligible to participate in this study if they were (1) enrolled in the undergraduate nursing programs (Nursing or Midwifery) in the College of Nursing; (2) enrolled in the Therapeutic Communication Course; and (3) at least 18 years old. A participant's data was excluded if 50% of the responses were incomplete. The sample size was calculated using G-Power: 152 participants were required to reach a confidence level of 95% and a margin of error of 5%.
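The kind of calculation G-Power performs can also be reproduced in code. The effect size, alpha, and power below are illustrative placeholders rather than the values the authors used (which are not fully reported), so the output will not necessarily match the required 152 participants.

```python
import math

from statsmodels.stats.power import TTestIndPower

# Per-group sample size for an independent-samples t-test.
# effect_size (Cohen's d), alpha, and power are assumed values for illustration.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(math.ceil(n_per_group), "participants per group")
```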

Measurement

Demographic data, including the participants' age and GPA, were collected from all the participants. Educational characteristics related to the flipped teaching were collected from the participants in the intervention group, including the level of English proficiency, program enrollment, attendance of previous course(s) that used a flipped teaching strategy, time spent each week preparing for the lectures, time spent preparing for the course exams, and recommendation for applying flipped teaching in other classes.

The students' perception of the effectiveness of the flipped teaching strategy was measured by a survey that focused on the effectiveness of flipped teaching. These data were collected only from the participants in the intervention group. The survey involves 14 items on a 5-point Likert-type scale (5 = strongly agree, 4 = agree, 3 = neutral, 2 = disagree, and 1 = strongly disagree). The sum of the item scores was calculated; a high score indicates a high perceived effectiveness of flipped teaching. The survey was developed by Neeli et al. [ 19 ], and the author was contacted to obtain permission to use the survey. The reliability of the scale was tested using Cronbach's alpha, which was 0.91, indicating excellent reliability.
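For reference, Cronbach's alpha can be computed directly from item-level responses. The sketch below applies the standard formula to simulated 5-point Likert data; it illustrates the statistic rather than the authors' procedure.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: a respondents x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated responses of 70 students to a 14-item, 5-point Likert survey.
rng = np.random.default_rng(7)
latent = rng.normal(0, 1, size=(70, 1))       # shared "attitude" factor
noise = rng.normal(0, 0.7, size=(70, 14))     # item-specific noise
responses = np.clip(np.round(3 + latent + noise), 1, 5)

print(round(cronbach_alpha(responses), 2))
```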

Also, student academic performance was measured for both the intervention and control groups through the average cumulative scores of the assessment methods for students who were enrolled in the Therapeutic Communication Course, out of a total of 100. The students' grades obtained in the course were calculated based on the grading structure of the Ministry of Education in Saudi Arabia (The Rules and Regulations of Undergraduate Study and Examination).

Ethical approval

Institutional Review Board (IRB) approval (No. 22-0860) was received before conducting the study. Participants were provided with information about the study and informed about the consent process. Informed consent to participate was obtained from all the participants in the study.

Intervention

The therapeutic communication course was taught face-to-face to students enrolled in the second year of the Bachelor of Science in Midwifery and Bachelor of Science in Nursing programs. There were eight sections of the therapeutic communication course: two under the midwifery program and the remaining six under the nursing program. Each section met once a week for two hours over 10 weeks during the second semester of 2022. Students in all sections received the same materials, contents, and assessment methods, which constituted the traditional teaching strategy. The contents of the course included the following topics: introduction to communication, verbal and written communication, listening skills, non-verbal communication, the nurse-patient relationship, professional boundaries, communication styles, effective communication skills for small groups, communication through the nursing process, communication with special needs patients, health education and principles for empowering individuals, communication through technology, and trends and issues in therapeutic communication. The course materials, course objectives and learning outcomes, learning resources, and other supporting materials were uploaded to the electronic platform “Blackboard” (a Learning Management System) for all sections to facilitate students' preparation for classes. The assessment methods included a written mid-term examination, case studies, a group presentation, and a final written examination. The grading scores for each assessment method were also the same for all sections.

The eight course sections were randomly assigned to the traditional teaching strategy (control group) or the flipped teaching strategy (intervention group). Figure 1 shows the random distribution of the course sections. The intervention group ( n  = 182) included one section of the Bachelor of Science in Midwifery program ( n  = 55 students) and three sections of the Bachelor of Science in Nursing program ( n  = 127 students). The control group ( n  = 173) included one section of the Bachelor of Science in Midwifery program ( n  = 50 students) and three sections of the Bachelor of Science in Nursing program ( n  = 123 students). Although randomization of individual participants was not possible, we were able to create comparison groups of participants who received the flipped teaching and traditional teaching strategies. To ensure the consistency of the information given to the students and to reduce variability, the instructors met periodically and reviewed the materials together. More importantly, all students received the same topics and assessment methods as stated in the course syllabus and as mentioned above. The instructors in all sections were required to answer students' questions, provide clarification of the points raised throughout the semester, and give constructive feedback after the evaluation of each assessment method. Students were encouraged to freely express their opinions on the issues discussed and to share their thoughts when opinions differed.

Figure 1. Random Distribution of the Course Sections

The intervention group was taught the course contents using the flipped teaching strategy. The participants in the intervention group were asked to read the lectures and watch short videos from online sources before coming to classes. The same materials and links were uploaded by the course instructors to the Blackboard system. During the classes, participants were divided into groups and given time to appraise research articles and case scenarios related to the topics of the course. During the discussion time, each group presented their answers, and the course instructors encouraged the students to share their thoughts and provided constructive feedback. Questions corresponding to the intended objectives and learning outcomes were posted during class time on the Kahoot and Nearpod platforms as a competition to enhance students' engagement. By the end of the semester, the flipped teaching survey was electronically distributed to students in the intervention group to collect the educational characteristics and assess the students' perceptions of flipped teaching.

Data collection procedure

After obtaining IRB approval, the PI sent invitation letters to potential participants through their official university email accounts. The invitation letter included a Microsoft Forms link with a description of the study, its aim, the research question, and the sample size required to conduct the study. All students agreed to participate, and informed consent was obtained from them (N = 355). The link also included questions about age and GPA, and a request for approval to use their assessment scores for research purposes. The first part of data collection took place immediately after the therapeutic communication course ended. The average cumulative score across all assessment methods (out of 100) was calculated to measure students’ academic performance in both the intervention and control groups.

The second part of data collection was conducted after the final exam of the therapeutic communication course (n = 182). A Microsoft Forms link was sent to participants in the intervention group only. It included questions on educational characteristics and students’ perceptions of the effectiveness of flipped teaching. Students needed a maximum of 10 minutes to complete the survey.

Data analysis

Data were analyzed using SPSS version 27. Descriptive statistics were used to analyze the demographic and educational characteristics and perceptions of the flipped teaching strategy. An independent-samples t-test was used to compare the mean scores of the intervention and control groups and to examine whether there was a statistically significant difference between them. A significance level of p < 0.05 was used.
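For readers who want to reproduce this kind of analysis outside SPSS, the sketch below shows an equivalent computation in Python. The score arrays are simulated placeholders (not the study data), and the scipy calls are simply one common way to run Levene's test and an independent-samples t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated placeholder scores with roughly the reported group sizes,
# means, and standard deviations; these are NOT the study data.
intervention_scores = rng.normal(loc=83.3, scale=9.8, size=182)
control_scores = rng.normal(loc=75.6, scale=9.8, size=173)

# Levene's test for equality of variances
levene_stat, levene_p = stats.levene(intervention_scores, control_scores)

# Independent-samples t-test; equal variances assumed if Levene's p > 0.05
t_stat, p_value = stats.ttest_ind(
    intervention_scores, control_scores, equal_var=levene_p > 0.05
)

print(f"Levene p = {levene_p:.3f}, t = {t_stat:.2f}, p = {p_value:.3g}")
```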

Results

A total of 355 students enrolled in the therapeutic communication course. The intervention group included 182 students and the control group included 173 students. The mean age of participants was 19.56 years (SD = 1.19), and the mean GPA was 3.53 (SD = 1.43). Of those in the intervention group, only 70 of 182 students completed the survey. Table 1 describes the educational characteristics of the participants in the intervention group (n = 70). Around 65% of the participants reported an intermediate level of English proficiency and were enrolled in the nursing program. Half of the students had taken previous courses that used a flipped teaching strategy. About one-third of the students indicated that they spent less than 15 minutes each week preparing for lectures, while around 65% stated that they spent more than 120 minutes preparing for the course exam. Half of the students recommended applying the flipped teaching strategy in other courses. The mean performance score in the therapeutic communication course was 83.34 (SD = 9.81) for the intervention group and 75.57 (SD = 9.82) for the control group.

The students perceived a moderate level of effectiveness of the flipped classroom as a teaching strategy (M = 3.49, SD = 0.69) (Table 2). The three highest-rated items were: the flipped classroom session develops logical thinking (M = 3.77, SD = 0.99), the flipped classroom session provides extra information (M = 3.68, SD = 1.02), and the flipped classroom session improves the application of knowledge (M = 3.64, SD = 1.04). The three lowest-rated items were: the flipped classroom session should have allotted more time for each topic (M = 3.11, SD = 1.07), the flipped classroom session requires a long time for preparation and conduct (M = 3.23, SD = 1.04), and the flipped classroom session reduces the amount of study time needed compared to lectures (M = 3.26, SD = 1.07).

An independent-samples t-test was used to compare the mean academic performance scores of the intervention group (n = 182) and the control group (n = 173) (Table 3). Levene’s test for equality of variances (p = 0.801) indicated that the assumption of equal variances was not violated, so equal variances were assumed. The two-tailed significance value was p < 0.001, indicating a statistically significant difference in mean academic performance between the intervention group (M = 83.34, SD = 9.81) and the control group (M = 75.57, SD = 9.82). The magnitude of the difference in means (mean difference = -7.77, CI: -10.02 to -5.52) was very small (eta squared = 0.00035).
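For context, eta squared for an independent-samples t-test is commonly derived as t² / (t² + df), with df = n1 + n2 − 2. The short sketch below illustrates that generic formula with a hypothetical t statistic rather than the study's reported values.

```python
# Generic effect-size helper: eta squared from an independent-samples
# t statistic, eta_sq = t^2 / (t^2 + df) with df = n1 + n2 - 2.
# The example call uses a hypothetical t value, not the study's result.
def eta_squared(t_stat: float, n1: int, n2: int) -> float:
    df = n1 + n2 - 2
    return t_stat ** 2 / (t_stat ** 2 + df)

print(round(eta_squared(t_stat=2.0, n1=182, n2=173), 4))
```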

Discussion

Flipped teaching is a learning strategy that engages students in the learning process, allowing them to improve their academic performance and develop cognitive skills [20]. This study investigated the effect of implementing flipped teaching as an interactive learning strategy on nursing students’ performance. The study also examined students’ perceptions of integrating flipped teaching into their learning process. Flipped teaching is an interactive teaching strategy that provides an engaging learning environment with immediate feedback, allowing students to master the learning content [4, 5]. Improvement in students’ academic performance and the development of learning competencies were expected outcomes. The flipped classroom approach aligns with the constructivist theory of education, which posits that students actively construct their own knowledge and understanding by engaging with the content and applying it in meaningful contexts. By providing pre-class materials (e.g., videos, readings) for students to engage with independently, the flipped classroom allows them to build a foundational understanding of the concepts before class, enabling them to participate actively in discussions, problem-solving, and collaborative activities during class. By shifting the passive acquisition of knowledge to the pre-class phase and dedicating in-class time to active, collaborative, and problem-based learning, the flipped classroom creates an environment that fosters deeper understanding, the development of critical thinking and clinical reasoning skills, and the ability to apply knowledge in clinical practice [21].

Effectiveness of the flipped teaching on students’ academic performance

The influence of flipped teaching on students’ academic performance was assessed by evaluating students’ examination scores. The results of this study indicated that flipped teaching had a significant influence on students’ academic performance (p < 0.001). This significant influence reflects the positive effect of flipped teaching on academic performance (M = 83.34, SD = 9.81) compared with the traditional classroom (M = 75.57, SD = 9.82). These results are in line with those of other researchers regarding improved academic performance [7, 8, 9, 10]. Qutob’s study [8] showed that flipped teaching positively influences students’ performance. Preparation for class also positively influenced students’ academic performance. The flipped classroom approach is underpinned by the principles of constructivism, which emphasize the active role of students in constructing their own understanding of concepts and ideas rather than passively receiving information [21].

In a traditional classroom, the teacher typically delivers content through lectures, and students are tasked with applying that knowledge through homework or in-class activities. However, this model often fails to engage students actively in the learning process. In contrast, the flipped classroom requires students to prepare for class, which exposes them to the learning material beforehand. During class time, students are given opportunities to interact with their classmates and instructors to discuss the learning topic, which can positively influence their academic performance later [7, 9]. Furthermore, the flipped classroom approach aligns with the core tenets of constructivism, and its adherence to the constructivist 5E Instructional Model further demonstrates its grounding in this learning theory. The 5E model, which includes the phases of engagement, exploration, explanation, elaboration, and evaluation, provides a framework for facilitating the active construction of knowledge [22].

The flipped classroom first sparks student interest and curiosity about the concepts (engagement), then enables students to investigate and experiment with the ideas through hands-on activities and investigations (exploration). This is followed by opportunities for students to make sense of their explorations and construct their own explanations (explanation). The flipped classroom then allows students to apply their knowledge in new contexts, deepening their understanding (elaboration). Finally, the evaluation phase assesses student learning and provides feedback, completing the cycle of constructivist learning [22]. This alignment with the 5E model, together with the flipped classroom’s emphasis on active learning, creates an environment that nurtures deeper understanding, the development of higher-order thinking skills, and the ability to transfer learning to real-world contexts.

In this study, one-third of the students indicated that their preparation time was less than fifteen minutes a week. According to Vazquez and Chiang [10], preparation time should be about 15 to 20 minutes for each topic. Preparation for class did not take much time but positively influenced students’ academic performance. Furthermore, preparing for class allows students to develop the skills to be independent learners [8]. Independence in learning fosters continuous learning skills, such as lifelong learning, which is a required competency in nursing. Garcia et al. [22] found that shifting teachers’ practices toward active learning approaches, such as the 5E Instructional Model, can have lasting, positive impacts on students’ conceptual understanding and learning.

Students’ perception of flipped teaching as a teaching strategy

Students’ perception of flipped teaching as a learning strategy was examined using a survey developed by Neeli et al. [19]. Students recognized flipped teaching as an effective teaching strategy (M = 3.49, SD = 0.69) that had a positive influence on their learning processes and outcomes. Several studies have identified the positive influence of flipped teaching on students’ learning processes and outcomes [8, 19]. Flipped teaching provides a problem-based learning environment that allows students to develop clinical reasoning, critical thinking, and a deeper understanding of the subject [5, 8, 19, 23]. The flipped teaching approach introduces students to the learning materials before class; class time is then used for discussion, hands-on activities, and problem-solving to foster a deeper understanding of the subject [5]. Consequently, flipped teaching provides a problem-based learning environment, as it encourages students to engage actively in the learning process, work collaboratively with their classmates, and apply previously learned knowledge and skills to solve problems. The results of this study are consistent with those of a systematic review conducted by Youhasan et al. [5], which found that implementing flipped teaching in undergraduate nursing education yields positive outcomes for students’ learning experiences and prepares them to deal with future challenges in their academic and professional activities.

Implications

The results of this study indicate that flipped teaching has a significant influence on students’ academic performance and that students have a positive perception of flipped teaching as an interactive learning strategy. Flipped teaching pedagogy could be integrated into the nursing curriculum to improve the quality of the educational process and its outcomes, thereby improving students’ performance. Flipped teaching provides an interactive learning environment that supports the development of essential nursing competencies, such as communication, teamwork, collaboration, lifelong learning, clinical reasoning, and critical thinking. For example, flipped teaching allows students to develop communication skills through classroom discussion and collaboration skills by working with their classmates and instructors. In this study, flipped teaching was implemented in a theoretical course (the therapeutic communication course); this interactive learning strategy could also be applied in clinical and practice settings for an effective and meaningful learning process and outcomes.

Strengths and limitations

This study demonstrates the effectiveness of flipped teaching on students’ academic performance, using a quasi-experimental design with control and intervention groups to investigate the influence of flipped teaching in nursing education. Nevertheless, the study has limitations. One limitation is the lack of randomization of individual participants, so causal associations between the variables cannot be established. In addition, the study used a self-administered survey, which may introduce respondent bias and thus affect the results. Also, although the study examined students’ perceptions of flipped teaching as a learning strategy, and students perceived that it allowed them to develop essential nursing competencies, the study did not identify or measure these competencies. Therefore, future studies should measure the influence of flipped teaching on students’ ability to acquire nursing competencies, such as critical thinking and clinical reasoning.

Conclusion

Flipped teaching is an interactive learning strategy that depends on students preparing the topic in advance so that they can be active participants in the learning environment. An interactive learning environment improves the learning process and outcomes. This study indicated that flipped teaching has a significant influence on students’ academic performance. Students perceived flipped teaching as a learning strategy that allowed them to acquire learning skills, such as logical thinking and the application of knowledge. These skills give students a meaningful learning experience and can be applied to other learning content and environments, for example, in clinical settings. Thus, we believe that flipped teaching is an effective learning approach to integrate into the nursing curriculum to enhance students’ learning experience.

Data availability

The datasets generated and/or analyzed during the current study are not publicly available due to data privacy but are available from the corresponding author on reasonable request.

Abbreviations

IRB: Institutional Review Board

SD: Standard deviation

p: The level of marginal significance within a statistical test

CI: Confidence Interval of the Difference

Figueiredo AR, Potra TS. Effective communication transitions in nursing care: a scoping review. Ann Med. 2019;51(sup1):201–201. https://doi.org/10.1080/07853890.2018.1560159 .

O’Rae A, Ferreira C, Hnatyshyn T, Krut B. Family nursing telesimulation: teaching therapeutic communication in an authentic way. Teach Learn Nurs. 2021;16(4):404–9. https://doi.org/10.1016/j.teln.2021.06.013 .

Thai NTT, De Wever B, Valcke M. The impact of a flipped classroom design on learning performance in higher education: looking for the best blend of lectures and guiding questions with feedback. Computers Educ. 2017;107:113–26. https://doi.org/10.1016/j.compedu.2017.01.003 .

Özbay Ö, Çınar S. Effectiveness of flipped classroom teaching models in nursing education: a systematic review. Nurse Educ Today. 2021;102:104922. https://doi.org/10.1016/j.nedt.2021.104922 .

Youhasan P, Chen Y, Lyndon M, Henning MA. Exploring the pedagogical design features of the flipped classroom in undergraduate nursing education: a systematic review. BMC Nurs. 2021;20(1):50–50. https://doi.org/10.1186/s12912-021-00555-w .

Barbour C, Schuessler JB. A preliminary framework to guide implementation of the flipped classroom method in nursing education. Nurse Educ Pract. 2019;34:36–42. https://doi.org/10.1016/j.nepr.2018.11.001 .

Jordan C, Magrenan A, Orcos L. Considerations about flip education in the teaching of advanced mathematics. Educational Sci. 2019;9(3):227.

Qutob H. Effect of flipped classroom approach in the teaching of a hematology course. PLoS ONE. 2022;17(4):1–8.

Nguyen B, Yu X, Japutra A, Chen C. Reverse teaching: exploring student perceptions of flip teaching. Act Learn High Educ. 2016;17(1):51–61.

Vazquez J, Chiang E. Flipping out! A case study on how to flip the principles of economics classroom. Int Adv Econ Res. 2015;21(4):379–90.

Florence E, Kolski T. Investigating the flipped classroom model in a high school writing course: action research to impact student writing achievement and engagement. TechTrends: Link Res Pract Improve Learn. 2021;65(6):1042–52.

Bahadur G, Akhtar Z. Effect of teaching with flipped classroom model: a meta-analysis. Adv Social Sci Educ Humanit Res. 2021;15(3):191–7.

Galindo-Dominguez H. Flipped classroom in the educational system: Trend or effective pedagogical model compared to other methodologies? J Educational Technol Soc. 2021;24(3):44–60.

Webb R, Watson D, Shepherd C, Cook S. Flipping the classroom: is it the type of flipping that adds value? Stud High Educ. 2021;46(8):1649–63.

Barranquero-Herbosa M, Abajas-Bustillo R, Ortego-Maté C. Effectiveness of flipped classroom in nursing education: a systematic review of systematic and integrative reviews. Int J Nurs Stud. 2022;135:104327. https://doi.org/10.1016/j.ijnurstu.2022.104327 .

Lelean H, Edwards F. The impact of flipped classrooms in nurse education. Waikato J Educ. 2020;25:145–57.

Youhasan P, Chen Y, Lyndon M, Henning MA. Assess the feasibility of flipped classroom pedagogy in undergraduate nursing education in Sri Lanka: a mixed-methods study. PLoS ONE. 2021;16(11):e0259003. https://doi.org/10.1371/journal.pone.0259003 .

Harris AD, McGregor JC, Perencevich EN, Furuno JP, Zhu J, Peterson DE, Finkelstein J. The use and interpretation of quasi-experimental studies in medical informatics. J Am Med Inform Assoc. 2006;13(1):16–23. https://doi.org/10.1197/jamia.M1749 .

Neeli D, Prasad U, Atla B, Kukkala SSS, Konuku VBS, Mohammad A. Integrated teaching in medical education: undergraduate student’s perception. 2019.

Baloch MH, Shahid S, Saeed S, Nasir A, Mansoor S. Does the implementation of flipped classroom model improve the learning outcomes of medical college students? A single centre analysis. J Coll Physicians Surg Pak. 2022;32(12):1544–7.

Robertson WH. The constructivist flipped classroom. J Coll Sci Teach. 2022;52(2):17–22.

Garcia I, Grau F, Valls C, Piqué N, Ruiz-Martín H. The long-term effects of introducing the 5E model of instruction on students’ conceptual learning. Int J Sci Educ. 2021;43(9):1441–58.

Chu TL, Wang J, Monrouxe L, Sung YC, Kuo CL, Ho LH, Lin YE. The effects of the flipped classroom in teaching evidence based nursing: a quasi-experimental study. PLoS ONE. 2019;14(1):e0210606.

Acknowledgements

The authors are grateful for the facilities and other support given by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R447), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Funding

This research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R447), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Author information

Authors and affiliations

Nursing Management and Education Department, College of Nursing, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia

Shaherah Yousef Andargeery, Rania Ali Alhalwani, Shorok Hamed Alahmedi & Waad Hasan Ali

Medical-Surgical Nursing Department, College of Nursing, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia

Hibah Abdulrahim Bahri

Contributions

Conceptualization, H.B, S.Y.A, W.A.; methodology, S.Y.A., S.H.A.; validation, S.Y.A.; formal analysis, S.Y.A.; resources, H.B, S.Y.A, W.A, R. A.; data curation, S.Y.A, S.H.A.; writing—original draft preparation, R.A, H.B, S.Y.A., S.H.A, W.A; writing—review and editing, R.A, H.B, S.Y.A, S.H.A, W.A; supervision, R.A, H.B, S.Y.A, S.H.A.; project administration, R.A, S.Y.A, S.H.A.; funding acquisition, S.Y.A. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Hibah Abdulrahim Bahri.

Ethics declarations

Institutional review board

This study was approved by the Institutional Review Board (IRB) at Princess Nourah bint Abdulrahman University (approval No. 22-0860).

Informed consent

Informed consent was obtained from all study participants.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article

Andargeery, S.Y., Bahri, H.A., Alhalwani, R.A. et al. Using a flipped teaching strategy in undergraduate nursing education: students’ perceptions and performance. BMC Med Educ 24 , 926 (2024). https://doi.org/10.1186/s12909-024-05749-9

Received : 26 February 2024

Accepted : 05 July 2024

Published : 26 August 2024

DOI : https://doi.org/10.1186/s12909-024-05749-9

Keywords: Flipped teaching, Active learning, Teaching strategy, Nursing education, Undergraduate nursing education
