Piaget’s Formal Operational Stage: Definition & Examples

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


The formal operational stage begins at approximately age twelve and lasts into adulthood. As adolescents enter this stage, they can think abstractly by manipulating ideas in their head, without any dependence on concrete manipulation (Inhelder & Piaget, 1958).

In the formal operational stage, children tend to reason more abstractly, systematically, and reflectively. They are more likely to use logic to reason out the possible consequences of each action before carrying it out.

Individuals at this stage can perform mathematical calculations, think creatively, use abstract reasoning, and imagine the outcomes of particular actions.

An example of the distinction between concrete and formal operational stages is the answer to the question, “If Kelly is taller than Ali and Ali is taller than Jo, who is tallest?”

This is an example of inferential reasoning: the ability to think about things the child has not actually experienced and to draw conclusions from that thinking.

The child who needs to draw a picture or use objects is still in the concrete operational stage, whereas children who can reason out the answer in their heads are using formal operational thinking.

Formal Operational Thought

Hypothetico-Deductive Reasoning

Hypothetico-deductive reasoning is the ability to think scientifically through generating predictions, or hypotheses, about the world to answer questions.

The individual will approach problems in a systematic and organized manner rather than through trial-and-error.

A teenager can consider “what if” scenarios, like imagining the future consequences of climate change based on current trends.

Abstract Thought

Concrete operations are carried out on things, whereas formal operations are carried out on ideas. Individuals can think about hypothetical and abstract concepts they have yet to experience. Abstract thought is important for planning the future.

A student understands and manipulates concepts like justice, love, and freedom without needing concrete examples or experiences. For instance, they can comprehend and discuss a statement such as “Justice is not always fair.”

Scientific Reasoning

An example of formal operational thought could be the cognitive ability to plan and test different solutions to a problem systematically, a process often referred to as “scientific thinking.”


The ability to form hypotheses, conduct experiments, analyze results, and use deductive reasoning is an example of formal operational thought.

A student forms a hypothesis about a science experiment, predicts potential outcomes, systematically tests the hypothesis, and then analyzes the results.

For example, they could hypothesize that increasing sunlight exposure will increase a plant’s rate of growth, design an experiment to test this, and then understand and explain the results.

Metacognition

Adolescents can think about their own thought processes, reflecting on how they learn best or understanding why they might have made a mistake in judgment.

For example, they might realize that they rush decisions when they’re feeling stressed and plan to use stress-reducing techniques before making important decisions in the future.

Testing Formal Operations

Piaget (1970) devised several tests of formal operational thought. One of the simplest was the “third eye problem”.  Children were asked where they would put an extra eye, if they could have a third one, and why.

Schaffer (1988) reported that when asked this question, 9-year-olds all suggested that the third eye should be on the forehead.  However, 11-year-olds were more inventive, suggesting that a third eye placed on the hand would be useful for seeing round corners.

Formal operational thinking has also been tested experimentally using the pendulum task (Inhelder & Piaget, 1958). The method involved a length of string and a set of weights. Participants had to consider three factors (variables): the length of the string, the heaviness of the weight, and the strength of the push.

The task was to work out which factor was most important in determining the speed of swing of the pendulum.

Participants can vary the length of the pendulum string, and vary the weight. They can measure the pendulum speed by counting the number of swings per minute.

To find the correct answer, the participant has to grasp the idea of the experimental method: varying one variable at a time (e.g., trying different lengths with the same weight). A participant who tries different lengths with different weights will likely end up with the wrong answer.

Children in the formal operational stage approached the task systematically, testing one variable (such as varying the string length) at a time to see its effect. However, younger children typically tried out these variations randomly or changed two things simultaneously.

Piaget concluded that the systematic approach indicated that children were thinking logically, in the abstract, and could see the relationships between things. These are the characteristics of the formal operational stage.
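The physics of the task makes the logic of the systematic approach concrete: for an ideal pendulum, only the string length affects the period, so a child who varies one factor at a time while holding the others fixed will isolate it. A minimal sketch (the function name, gravity constant, and chosen lengths are illustrative, not from the source):

```python
import math

def pendulum_period(length_m, g=9.81):
    """Period of an ideal pendulum; weight and push strength do not appear."""
    return 2 * math.pi * math.sqrt(length_m / g)

# The systematic approach: vary only the length, everything else held fixed.
for length_m in (0.25, 0.5, 1.0):
    print(f"length = {length_m} m -> period = {pendulum_period(length_m):.2f} s")
```

Because weight and push never enter the formula, only the child who isolates length will see a consistent effect.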

Critical Evaluation

Psychologists who have replicated this research, or used a similar problem, have generally found that children cannot complete the task successfully until they are older.

Robert Siegler (1979) gave children a balance beam task in which some discs were placed on either side of the center of balance. The researcher changed the number of discs or moved them along the beam, each time asking the child to predict which way the balance would go.

He studied the answers given by children from five years upwards, concluding that they apply rules which develop in the same sequence as, and thus reflect, Piaget’s findings.

Like Piaget, he found that eventually, the children were able to take into account the interaction between the weight of the discs and the distance from the center, and so successfully predict balance. However, this did not happen until participants were between 13 and 17 years of age.

He concluded that children’s cognitive development is based on acquiring and using rules in increasingly more complex situations, rather than in stages.

Learning Check

Check your understanding of the formal operational stage with the following questions. Answers are given at the end.

Mark often struggles with planning for the future. He can’t envision different possible outcomes based on his actions. Which of the following is true about Mark?

  a. He is in the Formal Operational stage.
  b. He is in the Preoperational stage.
  c. He is in the Concrete Operational stage.
  d. He is in the Sensorimotor stage.

Which of the following actions does NOT indicate that Lucy is in the Formal Operational stage?

  a. Lucy can think about abstract concepts like justice and fairness.
  b. Lucy enjoys debates and discussions where she can express her thoughts.
  c. Lucy can only solve problems that are concrete and immediately present.
  d. Lucy enjoys conducting experiments to test her hypotheses.

Sam can play with his friends and imagine what they think about him. However, he can’t conceptualize different outcomes of a hypothetical situation. What stage is Sam likely in?

  a. He is in the Formal Operational stage.
  b. He is in the Preoperational stage.
  c. He is in the Concrete Operational stage.
  d. He is in the Sensorimotor stage.

Answers:

  • (b) He is in the Preoperational stage.
  • (c) Lucy can only solve problems that are concrete and immediately present.
  • (c) He is in the Concrete Operational stage.

According to Jean Piaget, in what stage do children begin to use abstract thinking processes?

According to Jean Piaget, children begin to use abstract thinking processes in the Formal Operational stage, which typically begins around age 12 and continues into adulthood.

In this stage, children develop the capacity for abstract thinking and hypothetical reasoning. They no longer rely solely on concrete experiences or objects in their immediate environment for understanding. Instead, they can imagine realities outside their own and consider various possibilities and perspectives.

They can formulate hypotheses, consider potential outcomes, and plan systematic approaches for problem-solving. Additionally, they can understand and manipulate abstract ideas such as moral reasoning, logic, and theoretical concepts in mathematics or science.

Based on Piaget’s theory, what should a teacher provide in the formal operational stage?

Based on Piaget’s theory, a teacher should provide the following for students in the Formal Operational stage:

Abstract Problems and Hypothetical Tasks: Encourage students to think abstractly and solve complex problems. Provide tasks that require logical reasoning, hypothesizing, and the consideration of multiple variables.

Opportunities for Debate and Discussion: Encourage students to express their thoughts and challenge the views of others. This can help them learn to view problems from multiple perspectives.

Experiments: Design lessons to allow students to develop hypotheses and conduct experiments. The scientific method is a valuable tool at this stage.

Real-world Applications: Connect classroom learning to real-world scenarios. This can help students understand the relevance and application of abstract ideas.

Higher-order Questions: Use questions involving analysis, synthesis, and evaluation to improve students’ critical thinking skills.

Guidance in Self-reflection: Encourage students to reflect on their thoughts, emotions, and behavior, which can help them understand their own cognitive processes better.

Moral and Ethical Discussions: As students in this stage begin to think more about abstract concepts such as justice, fairness, and rights, engage them in discussions around moral and ethical issues.

Piaget’s formal operational stage begins around age 11 or 12 and continues throughout adulthood. Does this suggest that once one reaches this level of cognitive development, they plateau, or are there different levels of formal operations?

According to Piaget’s theory, once individuals reach the Formal Operational stage, they have attained the highest level of cognitive development, as defined by his model. However, this does not suggest a cognitive plateau.

Cognitive development is individual and influenced by a range of factors beyond mere biological maturation.

The nature of human cognition is such that there’s always room for refinement, growth, and development throughout adulthood.

Furthermore, individual competence can vary greatly within the Formal Operational stage. For instance, a person might employ formal operational thinking in one area of life (such as their professional specialization) but not others.

Similarly, skills like problem-solving, logical reasoning, and handling abstract concepts can continue to improve with practice and experience.

Inhelder, B., & Piaget, J. (1958). Adolescent thinking.

Piaget, J. (1970). Science of education and the psychology of the child. Trans. D. Coltman.

Schaffer, H. R. (1988). Child psychology: The future. In S. Chess & A. Thomas (Eds.), Annual Progress in Child Psychiatry and Child Development. New York: Brunner/Mazel.

Siegler, R. S., & Richards, D. (1979). Development of time, speed and distance concepts. Developmental Psychology, 15, 288-298.


Understanding the Mind by Measuring the Brain

Throughout the history of psychology, the path of transforming the physical (muscle movements, verbal behavior, or physiological changes) into the mental has been fraught with difficulty. Over the decades, psychologists have risen to the challenge and learned a few things about how to infer the mental from measuring the physical. The Vul, Harris, Winkielman, and Pashler (2009, this issue) article points out that some of these lessons could be helpful to those of us who measure blood flow in the brain in a quest to understand the mind. Three lessons from psychometrics are discussed.

In 1862, Wilhelm Wundt tried to measure the speed of thought by tracking the discrepancy between the actual and perceived position of a swinging pendulum. By 1879, he had invented the reaction time experiment to measure the speed of perception by presenting participants with a tone or light of a particular color and measuring their latency to press or release a button in response. With these first experiments in psychology, Wundt's goal was to identify and measure the atoms of the mind—the most elemental processes that are the basic ingredients of mental life. Wundt's method remains a standard in the science of psychology today: Researchers carefully observe something physical (be it a set of muscle movements such as reaction time, a verbal response such as a self-reported experience, or a bodily response such as changes in heart rate) and record variations in these measurements across time or context. Somehow, we figure out which part of the observed variation is signal (the variation that is meaningful to us and that we want to explain) and which is noise (the variation we don't care about). We then use the physical to make inferences about the mental. We interpret the “signal” in terms of its psychological meaning and assume that the “noise” does not contaminate this interpretive process.

Throughout the history of psychology, the path of transforming the physical (muscle movements, verbal behavior, or physiological changes) into the mental has been marked with unforeseen problems. Psychologists made many mistakes along the way. Yet over the years, we have also learned a good many things about how to avoid these pitfalls as we measure behavior in various forms and guises to understand the mind. The publication of Vul, Harris, Winkielman, and Pashler (2009, this issue) provides an opportunity to highlight some of these lessons. As it turns out, the perils of inferring the mind from measuring the brain are not all that different from those encountered when attempting to infer the mind from measuring behavior. In this commentary, I briefly discuss three lessons from classical measurement theory using test construction as an analogy. When psychologists build a test (a standardized procedure for sampling behavior), a smaller group of items is selected from a larger pool by some means, with the goal of measuring some deep psychological property or trait. As I hope you will see, there is a pretty direct parallel between “items in a test” and “voxels in the brain.” As a consequence, we modern-day neuroscientists can learn a thing or two from 20th century psychometrics.

LESSON 1: DISTINGUISHING RELIABILITY FROM VALIDITY

The Minnesota Multiphasic Personality Inventory (MMPI) is a popular personality test given to help diagnose mental disorders. Created in the 1930s, the MMPI was developed using an external, criterion-based approach to test construction (Hathaway & McKinley, 1940). Researchers assembled a vast, heterogeneous pool of items, many of which had no apparent face validity. These items were then administered to samples of patients, and their friends and relatives, at the University of Minnesota Hospital. During test construction, items were selected on a purely empirical basis—those that discriminated between patient samples (the criteria) were retained in the test (regardless of their content) because they were effective (i.e., the items could reliably distinguish between groups of people). Yet, researchers went further to also assume that these items were measuring something real and important about the respondents' mental health (i.e., that the items were valid). Items were grouped into subscales and named for the diagnostic category they best discriminated on the assumption that the items measured the mental essence of a mental disease and could be used to diagnose it. For example, Scale 8 used to be called “the schizophrenia scale” because it contained the set of items that best classified those who held a clinical diagnosis of schizophrenia and those who did not.

Researchers and clinicians had to learn the hard way that the MMPI subscales did not give a direct and unencumbered window on the mental illnesses that they were designed to measure. There is an important difference between a test's effectiveness and its meaning—a set of items can be consistently valuable in discriminating two groups without meaningfully measuring any psychological property of interest (Burish, 1984). What if the schizophrenic patients differed from controls in some other systematic (but spurious) way that allowed the items to discriminate between the groups and allowed the test to be effective even though these items did not measure schizophrenia per se? The observation that a group of schizophrenic patients score higher on a group of items does not in and of itself warrant the conclusion that the responses to the items measure schizophrenia. Vul et al. demonstrate a parallel observation about the relation between voxels and mental states (and even about weather stations and stock performance).

The distinction between effectiveness and meaningfulness can be framed in formal psychometric terms: An external, criterion-based approach to psychological measurement confounds estimates of reliability (the repeatability of a measurement) and validity (the psychological meaning of a measurement). This is because an external criterion is being used to determine both. As a result, it is easy to think we are measuring one (validity) when, in fact, we are measuring the other (reliability).

In their article, Vul et al. observe that some cognitive neuroscience investigations of social processes have confused reliability and validity. If we use an external criterion (such as looking at negative and neutral pictures) to identify those voxels showing a significant change in blood oxygenation level dependent (BOLD) response, and then we correlate the negativity of the slides to the magnitude of this change, it is ambiguous whether the correlation reflects a reliability coefficient or a validity coefficient. We all know, however, that this sort of mistake is not limited to functional imaging studies of social phenomena. Examples can be found here and there throughout the imaging literature. And as the MMPI example shows, this mistake in psychological measurement is not limited to functional imaging studies per se. No matter what the measurement domain, the practice of using some external criterion (be it performance on a task or self-reports) to identify signal from noise blurs the boundaries between estimating what is reliable (and potentially effective) and what is valid (and psychologically meaningful), leading to confusion when interpreting changes in BOLD signal.

LESSON 2: THE ELUSIVE NATURE OF “ERROR”

Notions of reliability and validity can be phrased in terms of this equation: X = T + E. Albeit somewhat simple, this little equation might be the single most important equation in any field that attempts to infer something mental from the measurement of something physical. It states that observed scores (the actual numbers generated during measurement, denoted as X) have two parts: that which is consistent and repeatable (the “true score” variance, denoted as T) and that which is random and not repeatable (“error”, denoted as E). Reliability is the proportion of variance in observed scores (X) that is accounted for by consistent variance (T)—it is the proportion of variance that is repeatable on other occasions. Validity refers to the psychological meaning of the reliable variance. Now that we have measured something consistently, what is it? What are the numbers a measurement of?

As every observed measurement includes some signal (T) and some noise (E), the trick is figuring out which part is which. There are various ways to accomplish this. A set of measurements taken at one point in time (Time 1) can be correlated with those exact measurements taken at another point in time (Time 2). This is called test-retest reliability. When a group of respondents take the MMPI twice, separated by some interval, the correlation coefficient that results is interpreted as a test-retest (or stability) coefficient. The assumption from classical measurement theory is that only consistent variance can correlate with something else because error is random and will fluctuate from Time 1 to Time 2. Half of the measurements at Time 1 can be correlated with the other half taken at Time 1 on the assumption that the two halves are equivalent. This is called split-half reliability. When responses from half the MMPI items are correlated with responses from the other half within a single group of respondents, the resulting correlation coefficient is interpreted as a split-half reliability (or consistency) coefficient. The split-half logic can be extended to examine the consistency of responses across all possible combinations of MMPI items to compute coefficient alpha, a very common estimate of internal consistency.

No single form of reliability is more “true” than any other. The various ways of computing reliability estimate different aspects of repeatability or consistency. Furthermore, no form of reliability tells us what is being measured—a set of observations (i.e., responses to MMPI items, BOLD responses in voxels) could be measuring one thing (homogeneous variance) or one hundred things simultaneously (heterogeneous variance). Reliability only tells us that we are measuring consistently, either within a measurement moment (split-half reliability or internal consistency) or across measurement moments (test-retest reliability).
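The two kinds of reliability estimate can be illustrated with a small simulation of the classical model. Each observed score is a true score plus freshly drawn random error, so the test-retest correlation should approach var(T) / (var(T) + var(E)). The sample sizes, noise levels, and helper names below are illustrative assumptions, not from the article:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)
n_people = 500
true_scores = [random.gauss(0, 1) for _ in range(n_people)]  # T, var = 1

# Test-retest: the same measure at two times; T repeats, E is redrawn.
time1 = [t + random.gauss(0, 0.5) for t in true_scores]
time2 = [t + random.gauss(0, 0.5) for t in true_scores]
test_retest = pearson(time1, time2)  # expected near 1 / (1 + 0.25) = 0.8

# Split-half: a 10-item test; correlate the two half-test means.
items = [[t + random.gauss(0, 0.5) for t in true_scores] for _ in range(10)]
half_a = [sum(items[i][p] for i in range(5)) / 5 for p in range(n_people)]
half_b = [sum(items[i][p] for i in range(5, 10)) / 5 for p in range(n_people)]
split_half = pearson(half_a, half_b)

print(f"test-retest r = {test_retest:.2f}, split-half r = {split_half:.2f}")
```

Note that the two coefficients differ even though T is identical in both cases: averaging five items shrinks the error term, so the split-half coefficient runs higher than the test-retest coefficient for the same underlying measure.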

Once we know how much consistent variance is contained in X, we can then ask what this variance refers to in psychological terms. This is the question of validity. Just as there are many different types of reliability, so too are there many different types of validity. For example, criterion validity refers to a measurement's ability to predict or estimate some other measurement. When we ask whether some items on the MMPI predict a certain kind of diagnosis under certain circumstances, we are asking a question about criterion validity. When we ask if BOLD responses in a particular set of voxels predict a behavior or self-report, we are asking a question about criterion validity. In these cases, the resulting correlation coefficient (or t test, which can be transformed into a correlation coefficient) is interpreted as a validity coefficient. Criterion validity does not really tell us much about the mind—it does not tell us why a set of items or voxels predicts what they do, only that they do predict it. The question “why?” is answered with an estimation of construct validity. Construct validity refers to a measurement's ability to assess an idealized psychological process or state and only that process or state (Cronbach & Meehl, 1955). When we ask questions about the function of any localized set of voxels, we are asking a question about the construct validity of those BOLD responses. In principle, construct validity can only be established by showing that a measurement is associated with an interlocking set of variables (a nomological net) that is dictated by theory; it can never be established with a single validity coefficient. Furthermore, construct validity must show that a measurement is consistently related to a set of criterion measures (i.e., it must show convergent validity) and that it is specific to that construct (i.e., it must show discriminant validity).

Just as there is no “true” measure of reliability, there is no “true” measure of validity. A measurement can have criterion validity without having construct validity. Scale 8 on the MMPI can differentiate groups of patients without measuring the essence of schizophrenia (albeit we now know that there is no essence to schizophrenia or to any other mental illness). Similarly, BOLD responses in the amygdala can be effective in predicting ratings of distress when viewing negative pictures, but this does not mean that the amygdala's function is to compute or represent distress.

Now, as it turns out, X = T + E is an overly simple equation. And this makes things confusing, particularly when it comes to figuring out what kind of error we are dealing with. In fact, the equation should really be rewritten as X = (T + E_c) + E_r, where E_c refers to systematic error variance (the variance of something that we do not care about but that is inadvertently measured with some consistency) and E_r refers to random error variance that is not repeatable. T + E_c refers to the variance in an observed measurement (X) that is repeatable and is estimated with some form of reliability analysis. Because there are two kinds of error that can be found in observed measurements (X), separating signal (T) from noise (E_c or E_r) can be even trickier than was first assumed.

There are instances when random error (E_r) inadvertently masquerades as true score variance (noise is being treated as signal). In such cases, we run the risk of overestimating an observed correlation between a BOLD response and its criterion relative to its true population value. This is what it means to “capitalize on chance.” This is the risk that is inherent when we estimate reliability and validity with the same data coming from the same sample using the same (or strongly related) statistical comparisons. And as Vul et al. illustrate, this risk is real and potent with functional imaging data. When the method used to select measurements (whether items or voxels) is not independent of the subsequent tests performed on those measurements, random error (variance that exists only in this measurement moment) creeps into the estimate of T and can correlate with the dependent variables of interest, inflating the magnitude of the statistical result. So, if we use an external criterion (such as looking at affectively negative and neutral pictures) to identify voxels showing a significant change in activation, and then we correlate the negativity of the slides to the change in BOLD response in those voxels, the resulting correlation coefficient will likely be inflated from its true population value. And if we then interpret this correlation as a validity coefficient, we have almost certainly capitalized on chance. This kind of mistake cannot be dealt with by making corrections for multiple comparisons. Such corrections cannot protect us from the fact that when the exact same measurements are taken at another point in time or on another sample, the magnitude of the correlation will shrink. This kind of shrinkage is a well-known problem in regression analysis (because regression coefficients are, not coincidentally, mathematically equivalent to correlation coefficients of one type or another).

Furthermore, the risk of capitalizing on chance exists whenever test-retest reliability is low. For example, we could observe a large coefficient alpha (strong internal consistency) or strong split-half reliability in a set of measurements that have lower stability across time (low test-retest reliability) because something unexpected or undesired is influencing all the responses within a single measurement moment. If this is true, then it is not clear that we can avoid capitalizing on chance by splitting a data set in half, so that half of the data from all participants is used to determine reliability (i.e., to select voxels for analysis) and the other half of the data can be used to estimate validity (i.e., to determine what the BOLD activity in those voxels refers to or means), as Vul et al. suggest. As discussed further in Lesson 3 (later in this article), the measurements from the first and second halves of a study (from the same participants) are not, strictly speaking, independent and therefore cannot really be used for cross-validation. To estimate the degree of shrinkage in a correlation coefficient that is inherent in inadvertent instances of capitalizing on chance, it is necessary to split a sample in half, using one set of participants for voxel selection and another set for validity estimation. This procedure is followed routinely for statistical procedures such as discriminant function analysis (in which a subset of items or variables is chosen from a larger available set and then weighted to optimally predict some psychological outcome). True replication can only be achieved with different sets of participants.
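The inflation from non-independent selection can be demonstrated with pure noise. In the sketch below, the “voxels” are random numbers with no relation to the criterion at all; selecting the voxels most correlated with the criterion and then correlating a composite of them with that same criterion yields a large coefficient, which collapses when the same selected indices are applied to a fresh sample of participants. All names, sample sizes, and thresholds are illustrative assumptions:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
N_SUBJ, N_VOX, N_TOP = 40, 200, 10

def noise_sample():
    """A criterion and voxel 'activations' that are all independent noise."""
    criterion = [random.gauss(0, 1) for _ in range(N_SUBJ)]
    voxels = [[random.gauss(0, 1) for _ in range(N_SUBJ)] for _ in range(N_VOX)]
    return criterion, voxels

# Sample 1: pick the voxels most correlated with the criterion, then
# correlate their sign-aligned average with that same criterion.
y1, vox1 = noise_sample()
rs = [pearson(v, y1) for v in vox1]
top = sorted(range(N_VOX), key=lambda i: abs(rs[i]), reverse=True)[:N_TOP]
signs = {i: math.copysign(1.0, rs[i]) for i in top}
comp1 = [sum(signs[i] * vox1[i][p] for i in top) / N_TOP for p in range(N_SUBJ)]
in_sample = pearson(comp1, y1)  # inflated: selection capitalized on chance

# Sample 2: the SAME voxel indices applied to new participants.
y2, vox2 = noise_sample()
comp2 = [sum(signs[i] * vox2[i][p] for i in top) / N_TOP for p in range(N_SUBJ)]
out_of_sample = pearson(comp2, y2)  # shrinks toward zero

print(f"in-sample r = {in_sample:.2f}, fresh-sample r = {out_of_sample:.2f}")
```

Because the data contain no signal whatsoever, the entire in-sample correlation is random error (E_r) promoted to apparent true score variance by the selection step, and only a genuinely independent set of participants reveals the shrinkage.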

Estimating reliability and validity separately can reduce the risk of capitalizing on chance, but it does not protect against spurious correlations (relationships that exist because of some third, irrelevant cause). Spurious correlations can occur when stable, non-random errors (E_c) are inadvertently estimated as part of T (when there is some consistent variance in our measurements that we do not care about or are not interested in). As long as the systematic error is shared between the observed measurements and their criterion, estimates of validity become spuriously inflated because the magnitude of the correlation coefficient reflects something other than what we believe it does (such as method variance; Campbell & Fiske, 1959). This causes us to make mistakes about what physical measurements mean in psychological terms. For example, the MMPI uses a true-false response format. If we were to give a group of respondents another test that required them to make true-false judgments, scores on the two tests would correlate more highly than their content warrants because they share a response format, and this shared variance would be mistakenly estimated as true score variance. Similarly, if we use ratings of negative pictures to identify those voxels showing a significant change in activation, and then we take any other measurement (such as a self-report measure of momentary distress) that uses a similar rating scale, both will have similarly high (or low) correlations with the change in BOLD response in part because they share a similar response format. To separate the systematic variance into that which we care about and that which we do not, we must use multiple measures of a construct and analyze the data with structural equation modeling (e.g., Barrett & Russell, 1998).
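The effect of shared systematic error is also easy to simulate: two measures of completely unrelated constructs become correlated once they share a “method” component (such as a response style), while the same measures without that component do not. The factor loadings and sample size below are arbitrary assumptions for illustration:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
n = 1000
construct_a = [random.gauss(0, 1) for _ in range(n)]
construct_b = [random.gauss(0, 1) for _ in range(n)]  # unrelated to A
style = [random.gauss(0, 1) for _ in range(n)]        # shared method factor

# Each observed score = true construct + shared method variance (E_c) + noise.
x_a = [t + 0.7 * m + random.gauss(0, 0.5) for t, m in zip(construct_a, style)]
x_b = [t + 0.7 * m + random.gauss(0, 0.5) for t, m in zip(construct_b, style)]
r_shared = pearson(x_a, x_b)  # spuriously positive: only E_c is shared

# The same constructs measured without the shared method factor.
y_a = [t + random.gauss(0, 0.5) for t in construct_a]
y_b = [t + random.gauss(0, 0.5) for t in construct_b]
r_clean = pearson(y_a, y_b)   # near zero

print(f"with shared method: r = {r_shared:.2f}; without: r = {r_clean:.2f}")
```

The correlation in the first pair is entirely attributable to the shared style factor, which is exactly the kind of consistent-but-irrelevant variance (E_c) that structural equation modeling is meant to partition out.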

LESSON 3: WITHIN-SUBJECT DEPENDENCIES

Now, it might seem as if we can avoid spurious correlations by making sure that our estimates of construct validity involve the use of a third measure that is relatively independent and free of method variance or other unwanted shared features. With the MMPI, perhaps the construct validity of the schizophrenia scale items could have been quickly confirmed by correlating patients' scores on the schizophrenia scale with another objective criterion, such as ratings of hallucination severity provided by the diagnosing clinicians. But this kind of a criterion would not completely resolve the spurious correlation problem. The validity coefficients would probably be high, for sure, but the risk remains that the correlations could be inflated by systematic error variance. This is due to the fact that, in reality, the third measure (the additional criterion) was not truly independent from the original criterion used to select the items in the first place. Even asking the patients themselves to report on their severity of hallucinations would be problematic as a criterion for Scale 8, because responses to two different scales from the same subject would not be statistically independent from one another. A similar problem is evident when estimating the reliability and validity of BOLD activity with data from different measures sampled from the same participants. If we use an external criterion (such as looking at negative or neutral pictures) to identify those voxels showing a significant change in activation, and then we take any other measurements (such as a self-report measure of momentary distress, reaction times to judge the slides, or even trait ratings of neuroticism) and correlate them with the change in BOLD response in those voxels, there is always a risk that the resulting correlation coefficient will be inflated from its true population value if all the data are taken from the same sample of participants.

Multiple measurements sampled from the same individuals are never truly independent of one another. Within-subject dependencies exist even when the observed measurements are supposed to tap very different psychological domains in different modalities. Since the mid-1990s, behavioral scientists have been statistically modeling within-subject dependencies (using hierarchical linear modeling or multilevel regression modeling; e.g., Laurenceau, Barrett, & Pietromonaco, 1998). 2

The within-subject dependencies are even more complicated in neuroimaging experiments. Neurons are nested within columns, which are nested within voxels, which are nested within brain areas, which are nested within the individual brains that also produce the behavioral estimates measured as criteria. Furthermore, the BOLD signals from voxels close to one another are made even more dependent by preprocessing procedures (such as spatial smoothing). Because multiple measurements taken from a single individual are dependent, the measures share some variance over and above what is caused by the psychological construct of interest, which in turn inflates the magnitude of correlation coefficients (be they reliability or validity estimates). For example, some factor that is irrelevant to the psychological domain of interest (such as hormonal changes related to circadian rhythms or to menstrual cycles in women, or blood volume changes related to hydration) could lead a host of measurements to have spuriously high correlations. In the behavioral realm, these data dependencies have been shown to matter quite a bit, and the final statistical results that are reported often depend on whether the dependencies are modeled or not. It seems critically important, then, to deal with these dependencies statistically when trying to determine the psychological meaning of a physical measurement, especially when those measurements are based on the brain (for a start, see Lindquist & Gelman, 2009).
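A toy simulation can illustrate the point (an illustrative sketch, not an analysis of any real data set): a single subject-level nuisance factor, shared by a BOLD-like measure and a self-report-like measure, produces a sizeable pooled correlation that vanishes once the nesting of trials within subjects is respected.

```python
import random

random.seed(0)

def corr(x, y):
    """Pearson correlation, stdlib only."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Illustrative values: a subject-level nuisance factor (say, hydration)
# shifts every measurement that subject provides, in both modalities.
n_subjects, n_trials = 100, 20
bold, distress = [], []
for _ in range(n_subjects):
    nuisance = random.gauss(0, 1)
    for _ in range(n_trials):
        bold.append(nuisance + random.gauss(0, 1))
        distress.append(nuisance + random.gauss(0, 1))

# Pooling all 2,000 observations ignores the nesting: sizeable correlation.
print(round(corr(bold, distress), 2))

# Within each subject the nuisance is constant, so within-subject
# correlations hover near zero; their average exposes the spurious link.
within = [corr(bold[s * n_trials:(s + 1) * n_trials],
               distress[s * n_trials:(s + 1) * n_trials])
          for s in range(n_subjects)]
print(round(sum(within) / len(within), 2))
```

Multilevel models formalize exactly this separation of between-subject and within-subject variance.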

Without taking a stand on all the articles discussed in Vul et al., in large part because I have not read them all myself, I have tried to show with a simple discussion of classical measurement theory that Vul et al. tell an important cautionary tale about the pitfalls of translating measurements of the brain into knowledge about the mind. More important, I have tried to show that these pitfalls are correctable: (a) do not estimate the reliability and validity of the BOLD response simultaneously with the same statistic on the same data; (b) use replication to ensure that error (whether random or systematic) is not mistakenly estimated as true score variance; and (c) model the dependencies in measurements that are collected on the same individuals, or at least consider those dependencies when interpreting your data.

If there is one true adage in psychology, it is that past behavior is a great predictor of future behavior. Over the last 70 years, the MMPI has been the subject of tremendous study and scientific effort to fix the glaring problems surrounding its inception. Items have been replaced to reflect changes in diagnostic practice. The test has been renormed so that it reflects a broader population than just those people living in Minneapolis and the surrounding area in the 1930s. There has been an attempt to take a more theoretically driven (deductive) approach to test construction. And, of course, the various forms of reliability and validity have been estimated separately for the various subscales of the test. Every year, the MMPI contributes to the millions of clinical assessments that are performed in hospitals, mental health clinics, and research labs in many countries around the world. There is no doubt in my mind that imaging research is following the same path. In 70 years, when someone writes the history of how measurements of the brain eventually translated into knowledge about the mind, psychologists and neuroscientists will marvel at how far we have come.

Acknowledgments

Thanks to both the members of my lab and Moshe Bar's lab for stimulating discussion of the Vul et al. (2009) article and some of the commentaries. I especially thank Maria Gendron, Jennifer Fugate, Yang-Ming Huang, Kristen Lindquist, Spencer Lynn, Elizabeth Kensinger, and Julie Norem for their comments on an earlier draft of this article. Preparation of this manuscript was supported by the National Institutes of Health Director's Pioneer Award (DP1OD003312), grants from the National Institute of Aging (AG030311) and the National Science Foundation (BCS 0721260; BCS 0527440), and a contract with the Army Research Institute (W91WAW).

1 The assumption that neural responses at the beginning and end of a study are equivalent (reflecting the same psychological process) is somewhat questionable (given issues like habituation and repetition suppression). But a discussion of this issue goes beyond the scope of the present article.

2 In the mid-1980s, behavioral scientists were already modeling data dependencies in other nested data structures, such as when children are nested within classrooms that are nested within schools (Raudenbush & Bryk, 1986), or when people are nested within families that are nested within neighborhoods (Duncan, Duncan, Okut, Strycker, & Hix-Small, 2003).

  • Barrett LF, Russell JA. Independence and bipolarity in the structure of current affect. Journal of Personality and Social Psychology. 1998;74:967–984.
  • Burisch M. Approaches to personality inventory construction: A comparison of merits. American Psychologist. 1984;39:214–227.
  • Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin. 1959;56:81–105.
  • Cronbach LJ, Meehl PE. Construct validity in psychological tests. Psychological Bulletin. 1955;52:281–302.
  • Duncan TE, Duncan SC, Okut H, Strycker LA, Hix-Small H. A multilevel contextual model of neighborhood collective efficacy. American Journal of Community Psychology. 2003;32:245–252.
  • Hathaway SR, McKinley JC. A multiphasic personality schedule (Minnesota): I. Construction of the schedule. Journal of Psychology. 1940;10:249–254.
  • Laurenceau JP, Barrett LF, Pietromonaco PR. Intimacy as a process: The importance of self-disclosure and responsiveness in interpersonal exchanges. Journal of Personality and Social Psychology. 1998;74:1238–1251.
  • Lindquist MA, Gelman A. Correlations and multiple comparisons in functional imaging: A statistical perspective. Perspectives on Psychological Science. 2009;4:xx–xx.
  • Raudenbush SW, Bryk AS. A hierarchical model for studying school effects. Sociology of Education. 1986;59:1–17.
  • Vul E, Harris C, Winkielman P, Pashler H. Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspectives on Psychological Science. 2009;4:xx–xx.

Wundt's first psychology experiment

Chapter 12 endnote 45, from How Emotions are Made: The Secret Life of the Brain by Lisa Feldman Barrett . Some context is:

That is what scientists do: we measure stuff, and then we transform the pattern of numbers into something meaningful by making an inference. [...] This began with the first psychology experiment, which was conducted by Wilhelm Wundt in the late 1800s.

Wilhelm Wundt founded the first psychology laboratory in 1879 at the University of Leipzig. [1] [2] He practiced what might be called empirical or experimental philosophy in his attempts to study the mind by measuring the body.

Wundt is credited with conducting the first formal experiment in psychology, where he tried to assess the speed of thought by measuring how long it took test subjects to make a judgment. He measured the discrepancy between the actual and perceived position of a pendulum swing and inferred that these numbers represented the speed of thought. This might not sound impressive now, but at the time it was a pretty creative idea.

In psychology and neuroscience, it is standard practice to carefully measure physical changes in the body and the brain in order to make inferences about the mental. This is an example of reverse inference.

Notes on the Notes

  • Robinson, David K. 2001. "Reaction-time experiments in Wundt's institute and beyond." In Wilhelm Wundt in History: The Making of a Scientific Psychology, edited by Robert W. Rieber and David K. Robinson, 161–204. Springer Science & Business Media.
  • Haupt, Edward. 2001. "Laboratories for Experimental Psychology: Göttingen's Ascendancy over Leipzig in the 1890s." In Wilhelm Wundt in History: The Making of a Scientific Psychology, edited by Robert W. Rieber and David K. Robinson, 205–250. Springer.


PsyBlog

Wilhelm Wundt: The First Experimentalist


“ The only possible conclusion the social sciences can draw is: some do, some don’t .” – Ernest Rutherford

Morton Hunt’s excellent ‘Story of Psychology’ helps explain why people doubt the scientific basis of psychology. Think about the famous figures in the history of the more physical sciences: Biology has Charles Darwin and Francis Crick; Physics has Isaac Newton and Albert Einstein; Chemistry has a whole load of people whose surnames are immediately recognisable: Anders Celsius, Robert Wilhelm Bunsen and Louis Pasteur. Now try to name famous psychologists.

Think for a moment…who have you got?

If you’re not a psychologist then you’ve probably thought of Sigmund Freud…and who else? B. F. Skinner? Maybe Ivan Pavlov and his soggy dog? Perhaps Jean Piaget’s developmental psychology, and maybe Alfred Kinsey because of the film with Liam Neeson? If you’re a psychologist then I’m sure you came up with quite a few more, but let’s just consider Siggy for a moment because he’s prototypical.

Freud was one of the greatest psychologists of all time. Let’s not split hairs here about his legacy: many think it is incomparable; a few think he was full of it. Either way, everyone can agree that he was the kind of man you could trust to be creative. While he trained as a neurologist, a man of science, his influence pervades the arts.

And what are the things that people know about Freud? That his theories have largely been discredited (not really true). That he thought it all came down to sex (well yes: sex plus aggression certainly). And that he invented/discovered the unconscious (his greatest idea).

The point is that he’s not really known as a scientist in the same way as Darwin, Newton or Einstein. He’s seen more as a literary figure, a man of writing and insight certainly, but not a scientific man. How could anyone interested in dreams in these times of cold hard facts be a man of science?

By contrast, not many people have heard of one of the founding fathers of modern psychology: Wilhelm Wundt. It was Wundt who, at the University of Leipzig, carried out what some credit as the first ever psychological experiment in 1879.

The experiment was fairly simple, though it is still employed today in more complicated guises. It simply measured perceptual processing – the time it takes from hearing a bell ring to pressing a button.

Now, if Wilhelm Wundt was the first name that came to mind when you were asked for a famous psychologist, that would make a big difference to the public perception of psychology.

So, to return to today’s straw man, Ernest Rutherford, while I’m not sure if Rutherford meant his statement to include psychology, he does sum up many people’s attitude to modern psychology. The reason Rutherford is wrong is simply that psychology can also answer the questions: “Which ones?” “Why?” and “How?”.

Unfortunately, here is a science regularly represented in the popular press by a man who has worked out a formula for the ‘happiest’ and ‘worst’ days of the year. A parody of scientific psychology if ever there was one. Instead, psychology needs to remember its more prosaic, and more prototypically scientific, alumni like Wundt, Weber, Fechner and Helmholtz.


Author: Dr Jeremy Dean

Jeremy Dean, PhD, is a psychologist and the founder and author of PsyBlog. He holds a doctorate in psychology from University College London and two other advanced degrees in psychology. He has been writing about scientific research on PsyBlog since 2004.


Piaget’s pendulum


You will need:

  • 50 cm of string
  • Something to act as a mass. Metal nuts are easy to work with, but you could also use dough (if you have access to mass carriers, even better).
  • A stick to wind the string around. We use a retort stand for our scientific method workshop, but anything that keeps the stick steady will suffice.
  • A stopwatch
  • A stack of books to act as a consistent measure of height
  • If you have access to a sensitive scale for measuring mass, you could use this too.

Piaget pendulum experiment - materials needed

  • Instructions

Weigh the metal nuts or whatever you’re using for mass in your experiment. If you can’t measure the weight, at least make sure that you can show a significant difference in the amount of material you use each time you repeat the experiment.

Make sure that the stick is secured to a fixed point and that you have another person ready with the stop watch.

Piaget pendulum experiment - adding plasticine to the pendulum bob to change the mass

Tie the string to the stick so that it can swing freely below the stick. Tie the metal nuts or whatever you’re using for a mass to the end of the string. Stack some books next to your pendulum – these will act as a constant measurement of height that you will use to release the pendulum from.

Piaget pendulum experiment - timing the pendulum bob swing

Pull back the mass, in the same way you’d pull back a swing, to the height of the stacked books. Release the mass and record how long it takes for the pendulum to swing 5 times (dividing this time by 5 gives the period, the time for one complete swing).


Repeat the experiment three more times so that you have results for the following four combinations:

  • short string/light mass
  • short string/heavy mass
  • long string/light mass
  • long string/heavy mass

Ideally you should repeat each combination several times and calculate an average time of swing for each combination.

Is there a difference between the four combinations above? Which variable affects the period of a pendulum?
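The averaging step can be sketched as follows (the stopwatch readings below are invented for illustration): average the time for 5 swings across your repeats, then divide by 5 to get the period for that combination.

```python
# Hypothetical stopwatch readings (seconds for 5 swings) for one combination.
timings = [7.1, 6.9, 7.0, 7.2, 6.8]

mean_time = sum(timings) / len(timings)  # average time for 5 swings
period = mean_time / 5                   # average time for one swing

print(round(period, 2))  # 1.4
```

Repeating this for each of the four combinations makes the comparison between them fair.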


Why does this work?

You’ve repeated an experiment that was first discussed by Galileo Galilei (1564–1642)! The story goes that he was watching a swinging bronze chandelier in a cathedral in Pisa and noticed that a pendulum swing always follows the same arc. By using his own pulse he could time the period of the pendulum swing. Galileo recognised that over time the swinging chandelier would slow and stop (its energy gradually lost to friction and air resistance), and that when repeating this experiment with a string and mass you could see that it is the length of the string, rather than the mass, that affects the period of a pendulum: the longer the string, the longer the period.

For those who want to calculate what is going on, the period of oscillation of a simple pendulum is:

T = 2π√(l / g)

where:

T = time period for one oscillation (s)
l = length of pendulum (m)
g = acceleration due to gravity (m/s²)

But why does the mass have no effect on the pendulum period? Because all materials accelerate towards Earth at the same rate!
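As a quick numerical check of the formula (a minimal sketch, assuming g = 9.81 m/s² and the small-angle approximation above):

```python
import math

def pendulum_period(length_m, g=9.81):
    """Period of one swing of an ideal, small-angle pendulum: T = 2*pi*sqrt(l/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Mass never enters the calculation. Quadrupling the length from
# 0.25 m to 1.0 m doubles the period:
print(round(pendulum_period(0.25), 2))  # 1.0
print(round(pendulum_period(1.0), 2))   # 2.01
```

Because T grows with the square root of l, you must quadruple the string length to double the period.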

Sources of error

  • Defining when the pendulum period ends
  • Did the mass get ‘released’ or ‘pushed’?
  • The initial height of the pendulum mass needs to be consistent

Application

This simple science experiment was used by Inhelder and Piaget (1958) to determine whether children could isolate and test variables one at a time in order to experimentally test an idea (a hallmark of formal operational thinking). Children who struggled with this experiment were found to change more than one variable at a time whilst running the pendulum experiment, and so produced the incorrect answer that it is the mass at the end of the string that influences the speed of the pendulum. It’s all about variable testing!
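The control-of-variables scheme that Inhelder and Piaget were testing for can be sketched in code (the factor names and values below are illustrative only, not from their study):

```python
# One fair test per factor: change exactly one variable from the
# baseline while holding everything else fixed.
baseline = {"string_length_m": 0.5, "mass_g": 50, "release_height_cm": 20}
changes = [("string_length_m", 1.0), ("mass_g", 100), ("release_height_cm", 40)]

trials = []
for factor, new_value in changes:
    trial = dict(baseline)
    trial[factor] = new_value  # exactly one variable differs from baseline
    trials.append(trial)
    # Comparing this trial against the baseline isolates `factor`.
    print(factor, trial)

# A confounded design, by contrast, would alter two factors at once,
# making it impossible to say which one changed the period.
```

Children still in the concrete operational stage tend to run the "confounded" version: long string with heavy mass, short string with light mass, and so on.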

You can find applications of pendulums in a variety of places:

  • Foucault’s pendulum, used to demonstrate that the Earth rotates.
  • On children’s swings, and even on the giant swings you see at amusement parks.
  • Inside a grandfather clock.
  • On a metronome.
  • Inside some skyscrapers, to dampen the effects of earthquakes.
  • A plumb line used by builders.
  • Newton’s cradle, used to demonstrate the transfer of momentum and energy.




* Please add a value!

Date required *

Time required *


Pendulum Experiment

The Pendulum Experiment is an experiment about gravity. Pendulums (or pendula if we are being exact!) are a fascinating scientific phenomenon.


For many years they have been used for keeping time. If you pull back a pendulum and then let it go, the time it takes to swing away and return to its starting position is one period.

They follow some simple mathematical rules and we are going to find out how they work.

We are going to do a series of three experiments to see what effect changing things has on a pendulum.

Please note that this experiment is probably easier with more than one person.


Facts About Pendulums

Pendulums have been around for thousands of years. The ancient Chinese used the pendulum principle to try and help predict earthquakes.

Galileo Galilei was the first European to really study pendulums, and he discovered that their regularity could be used for keeping time, leading to the first pendulum clocks.

In 1656, the Dutch inventor and mathematician Christiaan Huygens was the first to successfully build an accurate pendulum clock.


What You Will Need for the Pendulum Experiment

A long piece of string, at least 1 meter long.

One piece of metal wire to bend into a hook.

Some nuts from a toolbox - they must all be the same weight and must fit onto the hook.

A large piece of paper to put behind the pendulum or a wall that nobody minds you drawing on.

A pencil and some sticky tape.

A stopwatch.

Initial Setting Up the Pendulum Experiment

To do this experiment requires a little building work but nothing too complicated.

The pencil should be firmly taped to the top of the table, leaving about 4 cm hanging over the edge.

Next make a loop in your string to fit on the end of the pencil but do not make it too tight fitting.

At the other end of your string tie your hook and slide one of the nuts onto the hook.

Put your piece of card flat behind the pendulum and you are ready to go.

Before performing the pendulum experiment, make sure that everything swings freely without sticking.

Experiment One - Changing the Weight

In this experiment we are going to find out what effect changing the mass on the end of the string makes.

Take your string back about 40 - 50 cm. You must make a mark on the wall or your piece of paper to make sure that you let it go from the same place every time.

As you let it go, start the stopwatch, and count the number of oscillations in one minute.

Repeat the experiment 5 times and calculate an average.

Put another weight on the hook.

Release the weight from exactly the same place and count the oscillations as before.

Repeat 5 times and average the results.

Try the same procedure after adding another weight.

You may be surprised by your results!

Experiment Two - Changing the Angle

Go back to just one weight on the string.

You have the results from the first mark in your last experiment, so you can use these results again.

Now, take the string back only about 20 cm and make a mark as before.

Let go and count the number of oscillations for one minute.

Repeat 5 times and then work out an average.

Try exactly the same thing, but now let go from 10 cm.

What difference does the angle of swing make?

Experiment Three - Changing the Length of the String

You already have your results from the first experiment and can use these again.

Take the string of the pendulum and cut off about 20 cm. If you are really organized, you can use another length of string from the same roll to make a shorter one.

Pull it back to the same angle and let it go.

Take another 20 cm off the string, re-tie the weight, and try again.

What effect does changing the length of the string have on a pendulum?
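Your results should line up with the small-angle pendulum formula, T = 2π√(l/g). As a rough sketch (assuming, for illustration, that you started with a 1 m string and cut 20 cm off twice, with g = 9.81 m/s²):

```python
import math

def period(length_m, g=9.81):
    # Small-angle period of an ideal pendulum: T = 2*pi*sqrt(l/g)
    return 2 * math.pi * math.sqrt(length_m / g)

for length in (1.0, 0.8, 0.6):  # string length after each 20 cm cut
    print(f"{length} m -> {period(length):.2f} s")
```

With these lengths the printed periods are about 2.01 s, 1.79 s and 1.55 s: the shorter the string, the shorter the period.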

As you can see from your results, changing a few things on a pendulum can have some unexpected effects.

There are still more questions about pendulums. What makes them slow down and stop? How does a pendulum in a grandfather clock keep swinging for a long time?

Maybe your next experiment could answer some of these questions.


Martyn Shuttleworth (Feb 25, 2008). Pendulum Experiment. Retrieved Sep 05, 2024 from Explorable.com: https://explorable.com/pendulum-experiment

You Are Allowed To Copy The Text

The text in this article is licensed under the Creative Commons-License Attribution 4.0 International (CC BY 4.0) .

This means you're free to copy, share and adapt any parts (or all) of the text in the article, as long as you give appropriate credit and provide a link/reference to this page.

That is it. You don't need our permission to copy the article; just include a link/reference back to this page. You can use it freely (with some kind of link), and we're also okay with people reprinting in publications like books, blogs, newsletters, course-material, papers, wikipedia and presentations (with clear attribution).


What the Pendulum Can Tell Educators about Children’s Scientific Reasoning

  • Published: November 2004
  • Volume 13, pages 757–790 (2004)

Erin Stafford

Inhelder and Piaget (1958) studied schoolchildren’s understanding of a simple pendulum as a means of investigating the development of the control of variables scheme and the ceteris paribus principle central to scientific experimentation. The time-consuming nature of the individual interview technique used by Inhelder has led to the development of a whole range of group test techniques aimed at testing the empirical validity and increasing the practical utility of Piaget’s work. The Rasch measurement techniques utilized in this study reveal that the Piagetian Reasoning Task III (Pendulum) and the méthode clinique interview tap the same underlying ability. Of particular interest to classroom teachers is the evidence that some individuals produced rather disparate performances across the two testing situations. The implications of the commonalities and individual differences in performance for interpreting children’s scientific understanding are discussed.


Adams, R.J. & Khoo, S.T.: 1992, Quest: The Interactive Test Analysis System , ACER, Hawthorn.

Ahlawat, K. & Billeh, V.Y.: 1987, ‘Comparative Investigations of the Psychometric Properties of Three Tests of Logical Thinking in Middle and High School Students’, Journal of Research in Science Education 24 (2), 267–285.

Arlin, P.K.: 1982, ‘A Multitrait-Multimethod Validity Study of a Test of Formal Reasoning’, Educational and Psychological Measurement 42 , 1077–1088.

Bart, W.M.: 1971, ‘The Factor Structure of Formal Operations’, British Journal of Educational Psychology 41 , 70–77.

Benefield, K.E. & Capie, W.: 1976, ‘An Empirical Derivation of Hierarchies of Propositions Related to Ten of Piaget’s Sixteen Binary Operations’, Journal of Research in Science Teaching 13 (3), 193–204.

Bernard, C. [trans. Hoff, H.H., Guillemin, L. & Guillemin, R.]: 1967, 'Cahier Rouge', in Grande, F. & Visscher, M.B. (eds.), Claude Bernard and Experimental Medicine, Schenkman Publishing, Cambridge.

Blake, A.: 1980, ‘The Predictive Power of Two Written Tests of Piagetian Developmental Level’, Journal of Research in Science Teaching 17 , 435–441.

Bond, T.G.: 1976a, BLOT: Bond’s Logical Operations Test , T.C.A.E., Townsville.

Bond, T.G.: 1976b, The Development, Validation and Use of a Test to Assess Piaget’s Formal Stage of Logical Operations , Unpublished Thesis, James Cook University of North Queensland, Townsville.

Bond, T.G.: 1989, ‘The Investigation of the Scaling of Piagetian Formal Operations’, in P. Adey (ed.), Adolescent Development and School Science , Falmer Press, New York, pp. 334–341.

Bond, T.G.: 1991, Assessing Developmental Levels in Children’s Thinking: Matching Measurement Model to Cognitive Theory , paper presented at The Annual Conference for the Australian Association for Research in Education, Gold Coast.

Bond, T.G.: 1992, An Empirical Validation of Piaget’s Logico-Mathematical Model for Formal Operational Thinking , paper presented at the Annual Symposium of the Jean Piaget Society, Montreal.

Bond, T.G. & Bunting, E.: 1995, ‘Piaget and Measurement III: Reassessing the Méthode Clinique ’, Archives de Psychologie 63 , 231–255.

Bond, T.G. & Fox, C.M.: 2001, Applying the Rasch Model: Fundamental Measurement in the Human Sciences , Erlbaum, Mahwah, N.J.

Bond, T.G. & Jackson, I.: 1991, ‘The GOU Protocol Revisited: A Piagetian Contextualization of Critique’, Archives de Psychologie 59 , 31–53.

Carlson, G. & Streitberger, E.: 1983, ‘The Construction and Comparison of Three Related Tests of Formal Reasoning’, Science Education 67 (1), 133–140.

Dale, L.G.: 1970, ‘The Growth of Systematic Thinking: Replication and Analysis of Piaget’s First Chemical Experiment’, Australian Journal of Psychology 22 (3), 277–286.

Descartes, R. [Haldane, E. S. & Ross, G. R. T.]: 1637, Discourse on Method, Optics, Geometry and Meteorology , Jan Marie, Leyden.

Easley, J.R.: 1974, ‘The Structural Paradigm in Protocol Analysis’, Journal of Research in Science Teaching 11 (3), 281–290.

Elliot, C.: 1983, British Ability Scale. Manual 1: Introductory Handbook , NFER-Nelson, Windsor.

Ferguson, G.A.: 1941, ‘The Factorial Interpretation of Test Difficulty’, Psychometrika 6 (5), 323–330.

Flavell, J.H. & Wohlwill, J.F.: 1969, 'Formal and Functional Aspects of Cognitive Development', in D. Elkind & J.H. Flavell (eds.), Studies in Cognitive Development: Essays in Honour of Jean Piaget, pp. 67–120.

Ginsburg, H. & Opper, S.: 1988, Piaget’s Theory of Intellectual Development (2nd ed.), Prentice-Hall, New Jersey.

Gray, W.M.: 1976, 'The Factor Structure of Concrete and Formal Operations: A Confirmation of Piaget', in C. Modgill (ed.), Piagetian Research, Vol. 4, NFER, Windsor.

Gray, W.M.: 1990, ‘Formal Operational Thought’, in W.F. Overton (ed.), Reasoning, Necessity and Logic: Developmental Perspectives , Erlbaum, New Jersey.

Hacker, R.G., Pratt, C. & Matthews, B.A.: 1985, 'Selecting Science Reasoning Tasks for Classroom Use', Educational Research and Perspectives 12 (2), 19–32.

Hales, S.: 1986, ‘Rethinking the Business of Psychology’, Journal for the Theory of Social Behaviour 16 , 57–76.

Hautamäki, J.: 1989, ‘The Application of a Rasch Model of Thinking’, in P. Adey (ed.), Adolescent Development and School Science , Falmer, London, pp. 342–349.

Hume, D.: 1739–40, A Treatise of Human Nature (Vol. 1), Penguin, Harmondsworth.

Inhelder, B. & Piaget, J.: 1958, The Growth of Logical Thinking from Childhood to Adolescence: An Essay on the Construction of Formal Operational Structures, Routledge & Kegan Paul, London.

Karplus, R., Karplus, E., Formisano, M. & Paulsen, A.: 1977, ‘Proportional Reasoning and Control of Variables in Seven Countries’, Journal of Research in Science Teaching 14 (5), 411–417.

Küchemann, D.: 1979, Task III: The Pendulum , Chelsea College, London.

Lawson, A.E.: 1977, ‘Relationships Among Performances on Three Formal Operations Tasks’, The Journal of Psychology 96 , 235–241.

Lawson, A.E.: 1978, ‘The Development and Validation of a Classroom Test of Formal Reasoning’, Journal of Research in Science Teaching 15 (1), 11–24.

Lawson, A.E.: 1979, ‘Combining Variables, Controlling Variables and Proportions: Is There a Psychological Link?’, Science Education 63 (1), 67–72.

Lawson, A.E. & Renner, J.W.: 1974, ‘A Quantitative Analysis of Responses on Piagetian Tasks and its Implications for Curriculum’, Science Education 58 (4), 545–556.

Longeot, F.: 1962, ‘Un Essai d’Application de la Psychologie Genetique a la Psychologie Differentielle’, Bulletin de l’Institute National d’Etude 18 , 153–162.

Lovell, K.: 1961, 'A Follow-Up Study of Inhelder and Piaget's "The Growth of Logical Thinking"', British Journal of Psychology 52 (2), 143–153.

Nagy, P. & Griffiths, A.K.: 1982, ‘Limitations of Recent Research Relating Piaget’s Theory to Adolescent Thought’, Review of Educational Research 52 (4), 513–556.

Neimark, E.: 1975, ‘Intellectual Development During Adolescence’, Child Development 4 , 541–594.

Parsons, A.: 1958, ‘Translator’s Introduction: A Guide for Psychologists’, in B. Inhelder & J. Piaget, The Growth of Logical Thinking from Childhood to Adolescence: An Essay on the Construction of Formal Operational Structures , Routledge & Kegan Paul, London.

Pauli, L., Nathan, H., Droz, R. & Grize, J.B.: 1974, Inventaires Piagetians: les experiences de Jean Piaget , OECD: Paris.

Piaget, J.: 1972, The Child and Reality , Tonbridge & Esher, London.

Piaget, J.: 1974, The Grasp of Consciousness , Routledge and Kegan Paul, London.

Rasch, G.: 1960, Probabilistic Models for Some Intelligence and Attainment Tests, University of Chicago Press, Chicago.

Raven, R.J. 1973, ‘The Development of a Test of Logical Operations’, Science Education 57 , 377–385.

Shayer, M.: 1976, ‘The Pendulum Problem’, British Journal of Educational Psychology 46 , 85–87.

Shayer, M.: 1979, ‘Has Piaget’s Construct of Formal Operational Thinking any Utility?’, British Journal of Educational Psychology 49 , 265–276.

Shayer, M. & Adey, P.: 1981, Towards a Science of Science Teaching . Heinemann, London.

Shayer, M. & Wharry, D.: 1974, 'Piaget in the Classroom. Part 1: Testing a Whole Class at the Same Time', School Science Review 55 (192), 447–458.

Smith, L.: 1987, ‘A Constructivist Interpretation of Formal Operations’, Human Development 30 , 341–354.

Somerville, S.C.: 1974, ‘The Pendulum Problem: Patterns of Performance Defining Developmental Stages’, British Journal of Educational Psychology 44 , 266–281.

Tobin, K.G. & Capie, W.: 1981, ‘Development and Validation of a Group Test of Logical Thinking’, Educational and Psychological Measurement 41 (2), 413–424.

Walker, R.A., Hendrix, J.R. & Mertens, T.R.: 1979, ‘Written Piagetian Task Instruments: Its Development and Use’, Science Education 63 (2), 211–220.

Wallace, J.G.: 1965, Concept Growth and the Education of the Child . NFER, Slough.

Wright, B.D. & Masters, G.N.: 1981, The Measurement of Knowledge and Attitude , Research Memorandum No. 30. MESA Psychometric Laboratory, University of Chicago.

Wright, B.D. & Masters, G.N.: 1982, Rating Scale Analysis , MESA Press, Chicago.

Wylam, H. & Shayer, M.: 1978, CSMS Science Reasoning Tasks. General Guide , NFER, Windsor.

Author information

Authors and Affiliations

Catholic Education Office, 2, Gardenia Avenue, Kirwan Qld, 4817, Australia

Erin Stafford

About this article

Stafford, E. What the Pendulum Can Tell Educators about Children’s Scientific Reasoning. Science & Education 13, 757–790 (2004). https://doi.org/10.1007/s11191-004-3941-5


  • Operational Ability
  • Partial Credit Model
  • Formal Operation
  • Stage Placement
  • Timing Device

Sophia Moskalenko Ph.D.

The Swinging Pendulum of Psychological Wisdom

Bubbles, bubbles everywhere.

Posted May 26, 2011

Martin Seligman, the father of positive psychology, has taken a step back to look over the years of research in the field he created and presided over. In his most recent book, Flourish, he suggests that the field overreached in trying to make happiness the measure of well-being and life satisfaction. Seligman points to children, who in polling research are associated with stress and marital discord. Why do otherwise reasonable people keep having them? Perhaps there is more to a life well-lived than how happy you were living it.

This is not the first time a psychological theory that for years dominated journal publications, popular non-fiction best-seller lists, and graduate dissertation topics has backtracked. Psychoanalytic theory had to give up on the Oedipus and Electra complexes. It turned out boys don’t really want to kill their fathers and have sex with their mothers, any more than girls want to have a penis. This overextension caused lasting embarrassment; to this day, bringing up Freud in a research meeting is hazardous to one’s reputation.

Sigmund Freud

Behaviorism was next to flourish. But then it turned out that it was not so easy to take a random infant and “train him to become any type of specialist I might select--doctor, lawyer, artist, merchant-chief, and, yes, even beggarman and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.” (John B. Watson) Behaviorism fell out of favor because it overreached. Despite the wealth of new data and theories it produced (including Learned Helplessness , by Martin Seligman—a behavioral model of depression ), it has been pushed to the back of the bus. Few psychology departments today are hiring behaviorists.

Other theories rushed to take the leadership role. Frustration Aggression Theory claimed that frustration ALWAYS leads to aggression, and ALL aggression comes from frustration. Terror Management Theory claimed that “…most of the motives studied by social psychologists are symbolic means of managing existential terror” (Pyszczynski, et al.). These theories too have passed their prime.

In every case, the boom begins with an interesting idea that rapidly gains attention and popularity. Articles and books are published, their authors are invited to serve on journals’ editorial boards, which means more articles on the new theory are published while criticisms are rebuked. Resentments begin to build. And one day, the tables turn. The theory falls out of favor. Submitted papers on the topic are not just rejected, but smeared with personal-sounding criticisms. Papers, dissertations and books disputing the theory begin to appear. The boom turns to bust.

Remind you of anything? In light of the recent collapse of the housing bubble, the parallels between psychological theory and economics should be especially clear. Psychological theory, it appears, is not immune to bubbles. As in an economic bubble, “early adopters,” the people who start a trend, are innovative and daring. They make a big career profit from their pioneering work. But when everyone jumps on the bandwagon, it becomes more difficult to benefit from the innovation, and competition becomes fierce. People start selling mortgages with no income verification, or extending a theory beyond its reasonable scope. Sooner or later, the model becomes unsustainable, and when the music stops, many are left stranded with their unwise investments, financial or intellectual.

According to legend, Joe Kennedy Sr. realized it was time to get out of the stock market in 1929, when his shoeshine boy offered him a stock tip—clear evidence that the bubble in stocks had reached its limits. He sold all his stocks and invested in real estate. This move allowed him not just to avoid losses, but to increase his fortune during the Great Depression.

Joe Kennedy, Sr.

Few can be as wise as Joe Kennedy--or Martin Seligman. Their actions with regard to their investments were guided by shrewd intuition for timing the market. If you fear you may not possess this kind of intuition, there is another strategy to avoid losing everything when a bubble bursts.

You know it. You’ve heard it a hundred times with regard to financial investments: “Diversify your portfolio.” People who spread their wealth among many types of investments can stand to see one of them fail. But in psychology graduate school, they give different advice. Take an idea and make it your own, they say. Publish on it, lecture on it, make it synonymous with your name. In other words, invest all you have in just one type of intellectual asset. No wonder psychology is susceptible to bubbles and their collapses.

Nobel Prize-winner Francis Crick said, “The dangerous man is the one who has only one idea, because then he’ll fight and die for it.” Someone who has invested a whole career into one idea will do everything possible to nourish “His Precious.” On the contrary, Crick continued, “The way real science goes is that you come up with lots of ideas, and most of them will be wrong.”

It is still painful to see one of your investments fail, even when you have a diversified portfolio. It is still hard to see your idea fail, even when you have more than one research interest. But look at the upside: when the next bubble collapses, you’ll be more like Kennedy than like the Lehman Brothers; more like Seligman than like Freud.

Sophia Moskalenko Ph.D.

Sophia Moskalenko teaches psychology at the University of Pennsylvania. Her research focuses on radicalization, terrorism, self-sacrifice and martyrdom.

COMMENTS

  1. Piaget's Formal Operational Stage: Definition & Examples

    The formal operational stage begins at approximately age twelve and lasts into adulthood. As adolescents enter this stage, they can think abstractly by manipulating ideas in their head, without any dependence on concrete manipulation (Inhelder & Piaget, 1958). In the formal operational stage, children tend to reason more abstractly ...

  2. PDF Piaget and the Pendulum

    THE PENDULUM: An analogous, but much simpler, experiment can be arranged by means of a pendulum. It is used for children or adolescents to discover that the frequency of a pendulum is a function of its length, to the exclusion of all other invoked factors, such as, for example, the suspended weight, the momentum imparted or even the height of its drop.

  3. (PDF) Piaget and the Pendulum

    The pendulum is a universal topic in high-school science programmes and some elementary science courses; an enriched approach to its study can result in deepened science literacy across the whole ...

  4. Piaget and the Pendulum

    Abstract. Piaget's investigations into children's understanding of the laws governing the movement of a simple pendulum were first reported in 1955 as part of a report into how children's knowledge of the physical world changes during development. Chapter 4 of Inhelder & Piaget (1955/1958) entitled 'The Oscillation of a Pendulum and the ...

  5. Piaget and the Pendulum

    Piaget's investigations into children's understanding of the laws governing the movement of a simple pendulum were first reported in 1955 as part of a report into how children's knowledge of the physical world changes during development. Chapter 4 of Inhelder & Piaget (1955/1958) entitled `The Oscillation of a Pendulum and the Operations of Exclusion' demonstrated how adolescents could ...

  6. The pendulum problem: Patterns of performance defining developmental

    Performance on B. Inhelder and Piaget's (1958) pendulum problem was obtained as a validating measure in a study of 236 children, aged 10-14 yrs, investigating the transition from concrete to formal thinking. A detailed scoring procedure was devised, which distinguished method from content aspects of performance on the problem and which allowed categorization of each child into 1 of 9 ...

  7. The Pendulum Problem: Patterns of Performance Defining Developmental

    British Journal of Educational Psychology is an international journal publishing psychological research aiming to improve the understanding of all aspects of education. Summary. Performance on Inhelder and Piaget's pendulum problem was obtained as a validating measure in a study of the transition from concrete to formal thinking.

  8. PDF Piaget and the Pendulum

    An analogous, but much simpler, experiment can be arranged by means of a pendulum. It is used for children or adolescents to discover that the frequency of a pendulum is a function of its length ...

  9. Understanding the Mind by Measuring the Brain

    In 1862, Wilhelm Wundt tried to measure the speed of thought by tracking the discrepancy between the actual and perceived position of a swinging pendulum. By 1879, he had invented the reaction time experiment to measure the speed of perception by presenting participants with a tone or light of a particular color and measuring their latency to ...

  10. Wundt's first psychology experiment

    Wundt is credited with conducting the first formal experiment in psychology, where he tried to assess the speed of thought by measuring how long it took test subjects to make a judgment. He measured the discrepancy between the actual and perceived position of a pendulum swing and inferred that these numbers represented the speed of thought.

  11. Wilhelm Wundt: The First Experimentalist

    By contrast, not many people have heard of one of the founding fathers of modern psychology: Wilhelm Wundt. It was Wundt who, in the University of Leipzig, carried out what some credit as the first ever psychological experiment in 1879. The experiment was fairly simple, though it is still employed today in more complicated guises.

  12. APA Dictionary of Psychology

    A trusted reference in the field of psychology, offering more than 25,000 clear and authoritative entries. ... pendulum problem. Share button. Updated on 04/19/2018. a Piagetian task used to assess cognitive development. The participant is asked to work out what governs the speed of an object swinging on a piece of string.

  13. Piaget's pendulum science experiment : Fizzics Education

    Tie the metal nuts or whatever you're using for a mass to the end of the string. Stack some books next to your pendulum - these will act as a constant measurement of height that you will use to release the pendulum from. 3. Pull back the mass in the same fashion that you'd pull back a chair swing to the height of the stacked books.

  14. What the Pendulum Can Tell Educators about Children's Scientific

    Inhelder and Piaget (1958) studied schoolchildren's understanding of a simple pendulum as a means of investigating the development of the control of variables scheme and the ceteris paribus principle central to scientific experimentation. The time-consuming nature of the individual interview technique used by Inhelder has led to the development of a whole range of group test techniques aimed ...

  15. Pendulum Experiment

    The Pendulum Experiment is an experiment about gravity. Pendulums (or pendula if we are being exact!) are a fascinating scientific phenomenon. For many years they have been used for keeping time. If you pull back a pendulum and then let it go, the time it takes to swing over and then return back to its starting position is one period.

  16. Swinging Science

    For an experiment with a pendulum, some examples include length of the string, mass (how much stuff is at the end of the string), and amplitude (how high the pendulum starts from). Dependent Variable: This is a variable that cannot be changed by the person doing the experiment but changes based on the independent variable.

  17. PDF Moving by thoughts alone? Amount of finger movement and pendulum length

    mechanistic accounts of 'automatic' pendulum oscillations. A pendulum has a resonant frequency that is primarily dependent on its length. The maximum oscillation amplitude for a hand-held pendulum will be achieved if the driving frequency (i.e. the frequency of oscillations of the hand) is equal to the resonant frequency (Newburgh, 2004).

  18. Wundt´s pendulum apparatus used in " complication experiments " (from

    Examples of the period, from the founding of the "Ferenc József University" (1872) at Kolozsvár to its liquidating in 1919, are the first "Physiological psychology" (1887) school-book by ...

  19. (1.2) Experimental Psychology and Schools of Thought

    Study with Quizlet and memorize flashcards containing terms like Prior to 1879, people who studied behavior associated themselves with:, Psychophysics is the study of _____., What did Wundt's pendulum experiment demonstrate? and more.

  20. psychology Unit 1 lesson 3 Flashcards

    cognitive psychology. considers thoughts, desires, and motives. humanism. considers the empathy and self-worth of people. Gestalt. considers patterns. psychoanalysis. explores consciousness and unconsciousness. Study with Quizlet and memorize flashcards containing terms like Prior to 1879, people who studied behavior associated themselves with ...

  21. PDF What the Pendulum Can Tell Educators about Children's ...

    Amongst the tasks outlined in GLT, the pendulum experiment, in particular, has been widely used in science classrooms as a means of studying students' ability to use the principle of ceteris paribus in scientific reasoning. The pendulum problem utilised a simple apparatus consisting of a string, which could be shortened or

  22. Tired of Feeling Stuck? Give the Pendulum Lifestyle a Shot

    The pendulum lifestyle reflects the dynamic nature of life itself, much like the ever-changing seasons. Just as nature experiences periods of growth and dormancy, so too will our internal pendulums.

  23. The Swinging Pendulum of Psychological Wisdom

    Martin Seligman, the father of Positive psychology, has taken a step back to look over the years of research in the field that he created and presided over. In his most recent book, Flourish, he ...

  24. Experimenting with a Pendulum

    Pendulums have proven to be important components of certain kinds of clocks because they swing back and forth at a very predictable rate. In this video segment adapted from ZOOM, members of the cast experiment with three different factors—weight, length, and starting angle—to see which, if any, affect the time it takes a pendulum to swing back and forth once.
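Several of the excerpts above circle the same physical fact that Inhelder and Piaget's task asks children to discover: for small swings, a pendulum's period depends on its length alone, not on the suspended weight or the release height. A minimal sketch of that relationship (the function name is illustrative, and g = 9.81 m/s² is assumed):

```python
import math

def period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g).
    Mass and (for small swings) amplitude do not appear in the formula."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Quadrupling the length doubles the period; the suspended weight is irrelevant.
print(round(period(0.25), 2))  # ~1.0 s
print(round(period(1.00), 2))  # ~2.01 s
```

This is why a child who varies weight and length at the same time cannot isolate the cause: only a ceteris paribus test, changing one variable while holding the others fixed, reveals that length alone controls the timing.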