
Information Processing: The Language and Analytical Tools for Cognitive Psychology in the Information Age

The information age can be dated to the work of Norbert Wiener and Claude Shannon in the 1940s. Their work on cybernetics and information theory, and many subsequent developments, had a profound influence on reshaping the field of psychology from what it was prior to the 1950s. Contemporaneously, advances also occurred in experimental design and inferential statistical testing stemming from the work of Ronald Fisher, Jerzy Neyman, and Egon Pearson. These interdisciplinary advances from outside of psychology provided the conceptual and methodological tools for what is often called the cognitive revolution but is more accurately described as the information-processing revolution. Cybernetics set the stage with the idea that everything ranging from neurophysiological mechanisms to societal activities can be modeled as structured control systems with feedforward and feedback loops. Information theory offered a way to quantify entropy and information, and promoted theorizing in terms of information flow. Statistical theory provided means for making scientific inferences from the results of controlled experiments and for conceptualizing human decision making. With those three pillars, a cognitive psychology adapted to the information age evolved. The growth of technology in the information age has resulted in human lives being increasingly interwoven with the cyber environment, making cognitive psychology an essential part of interdisciplinary research on such interweaving. Continued engagement in interdisciplinary research at the forefront of technology development provides a chance for psychologists not only to refine their theories but also to play a major role in the advent of a new age of science.

"Information is information, not matter or energy." – Wiener (1952, p. 132)

Introduction

The period of human history in which we live is frequently called the information age, and it is often dated to the work of Wiener (1894–1964) and Shannon (1916–2001) on cybernetics and information theory. Each of these individuals has been dubbed the "father of the information age" (Conway and Siegelman, 2005; Nahin, 2013). Wiener's and Shannon's work quantitatively described the fundamental phenomena of communication, and subsequent developments linked to that work had a profound influence on reshaping many fields, including cognitive psychology, from what it was prior to the 1950s (Cherry, 1957; Edwards, 1997, p. 222). Another closely related influence during that same period is the statistical hypothesis testing of Fisher (1890–1962), the father of modern statistics and experimental design (Dawkins, 2010), Jerzy Neyman (1894–1981), and Egon Pearson (1895–1980). In the U.S., during the first half of the 20th century, the behaviorist approach dominated psychology (Mandler, 2007). In the 1950s, though, based mainly on the progress made in communication system engineering, as well as statistics, the human information-processing approach emerged in what is often called the cognitive revolution (Gardner, 1985; Miller, 2003).

The information age has had, and continues to have, a great impact on psychology and society at large. Since the 1950s, science and technology have progressed with each passing day. The promise of the information-processing approach was to bring knowledge of the human mind to a level at which cognitive mechanisms could be modeled to explain the processes between people's perception and action. This promise, though far from completely fulfilled, has been increasingly realized. However, as with any period in human history, the information age will come to an end at some future time and be replaced by another age. We are not claiming that information will become obsolete in the new age, just that it will become necessary but not sufficient for understanding people and society in the new era. Comprehending how and why the information-processing revolution in psychology occurred should prepare psychologists to deal with the changes that accompany the new era.

In the present paper, we consider the information age from a historical viewpoint and examine its impact on the emergence of contemporary cognitive psychology. Our analysis of the historical origins of cognitive psychology reveals that applied research incorporating multiple disciplines provided conceptual and methodological tools that advanced the field. An implication, which we explore briefly, is that interdisciplinary research oriented toward solving applied problems is likely to be the source of the next advance in conceptual and methodological tools that will enable a new age of psychology. In the following sections, we examine milestones of the information age and link them to the specific language and methodology for conducting psychological studies. We illustrate how the research methods and theory evolved over time and provide hints for developing research tools in the next age for cognitive psychology.

Cybernetics and Information Theory

Wiener and Cybernetics

Norbert Wiener is an individual whose impact on the field of psychology has not been acknowledged adequately. Wiener, a mathematician and philosopher, was a child prodigy who earned his Ph.D. from Harvard University at age 18. He is best known for establishing what he labeled Cybernetics (Wiener, 1948b), which is also known as control theory, although he made many other contributions of note. A key feature of Wiener's intellectual development and scientific work is its interdisciplinary nature (Montagnini, 2017b).

Prior to college, Wiener was influenced by Harvard physiologist Walter B. Cannon (Conway and Siegelman, 2005), who later, in 1926, devised the term homeostasis, "the tendency of an organism or a cell to regulate its internal conditions, usually by a system of feedback controls…" (Biology Online Dictionary, 2018). During his undergraduate and graduate education, Wiener was inspired by several Harvard philosophers (Montagnini, 2017a), including William James (pragmatism), George Santayana (positivistic idealism), and Josiah Royce (idealism and the scientific method). Motivated by Royce, Wiener made a commitment to study logic and completed his dissertation on mathematical logic. Following graduate school, Wiener traveled on a postdoctoral fellowship to pursue his study of mathematics and logic, working with philosopher/logician Bertrand Russell and mathematician/geneticist Godfrey H. Hardy in England, mathematicians David Hilbert and Edmund Landau in Europe, and philosopher/psychologist John Dewey in the U.S.

Wiener's career was characterized by a commitment to applying mathematics and logic to real-world problems, which was sparked by his work for the U.S. Army. According to Hulbert (2018, p. 50),

He returned to the United States in 1915 to figure out what he might do next, at 21 jumping among jobs… His stint in 1918 at the U.S. Army's Aberdeen Proving Ground was especially rewarding…. Busy doing invaluable work on antiaircraft targeting with fellow mathematicians, he found the camaraderie and the independence he yearned for. Soon, in a now-flourishing postwar academic market for the brainiacs needed in a science-guided era, Norbert found his niche. At MIT, social graces and pedigrees didn't count for much, and wartime technical experience like his did. He got hired. The latest mathematical tools were much in demand as electronic communication technology took off in the 1920s.

Wiener began his early research in applied mathematics on stochastic noise processes (i.e., Brownian motion; Wiener, 1921). The Wiener process, named in his honor, has been widely used in engineering, finance, the physical sciences, and, as described later, psychology. From the mid-1930s until 1953, Wiener also was actively involved in a series of interdisciplinary seminars and conferences with a group of researchers that included mathematicians (John von Neumann, Walter Pitts), engineers (Julian Bigelow, Claude Shannon), physiologists (Warren McCulloch, Arturo Rosenblueth), and psychologists (Wolfgang Köhler, Joseph C. R. Licklider, Duncan Luce). "Models of the human brain" was one topic discussed at those meetings, and concepts proposed during those conferences had significant influence on research in information technologies and the human sciences (Heims, 1991).

One of Wiener's major contributions was in World War II, when he applied mathematics to electronics problems and developed a statistical prediction method for fire-control theory. This method predicted the position in space where an enemy aircraft would be located in the future so that an artillery shell fired from a distance would hit the aircraft (Conway and Siegelman, 2005). As told by Conway and Siegelman, "Wiener's focus on a practical real-world problem had led him into that paradoxical realm of nature where there was no certainty, only probabilities, compromises, and statistical conclusions…" (p. 113). Advances in probability and statistics provided a tool for Wiener and others to investigate this paradoxical realm. Early in 1942, Wiener wrote a classified report for the National Defense Research Committee (NDRC), "The Extrapolation, Interpolation, and Smoothing of Stationary Time Series," which was published as a book in 1949. This report is credited as the founding work in communications engineering, in which Wiener concluded that communication in all fields is in terms of information. In his words,

The proper field of communication engineering is far wider than that generally assigned to it. Communication engineering concerns itself with the transmission of messages. For the existence of a message, it is indeed essential that variable information be transmitted. The transmission of a single fixed item of information is of no communicative value. We must have a repertory of possible messages, and over this repertory a measure determining the probability of these messages (Wiener, 1949, p. 2).

Wiener went on to say “such information will generally be of a statistical nature” (p. 10).

From 1942 onward, Wiener developed his ideas of control theory more broadly in Cybernetics , as described in a Scientific American article ( Wiener, 1948a ):

It combines under one heading the study of what in a human context is sometimes loosely described as thinking and in engineering is known as control and communication. In other words, cybernetics attempts to find the common elements in the functioning of automatic machines and of the human nervous system, and to develop a theory which will cover the entire field of control and communication in machines and in living organisms (p. 14).

Wiener (1948a) made apparent in that article that the term cybernetics was chosen to emphasize the concept of a feedback mechanism. The example he used was one of human action:

Suppose that I pick up a pencil. To do this I have to move certain muscles. Only an expert anatomist knows what all these muscles are, and even an anatomist could hardly perform the act by a conscious exertion of the will to contract each muscle concerned in succession. Actually, what we will is not to move individual muscles but to pick up the pencil. Once we have determined on this, the motion of the arm and hand proceeds in such a way that we may say that the amount by which the pencil is not yet picked up is decreased at each stage. This part of the action is not in full conscious (p. 14; see also p. 7 of Wiener, 1961).

Note that in this example, Wiener hits on the central idea behind contemporary theorizing in action selection – the choice of action is with reference to a distal goal ( Hommel et al., 2001 ; Dignath et al., 2014 ). Wiener went on to say,

To perform an action in such a manner, there must be a report to the nervous system, conscious or unconscious, of the amount by which we have failed to pick up the pencil at each instant. The report may be visual, at least in part, but it is more generally kinesthetic, or, to use a term now in vogue, proprioceptive (p. 14; see also p. 7 of Wiener, 1961).

That is, Wiener emphasizes the role of negative feedback in control of the motor system, as in theories of motor control ( Adams, 1971 ; Schmidt, 1975 ).

Wiener (1948b) developed his views more thoroughly and mathematically in his master work, Cybernetics or Control and Communication in the Animal and in the Machine , which was extended in a second edition published in 1961. In this book, Wiener devoted considerable coverage to psychological and sociological phenomena, emphasizing a systems view that takes into account feedback mechanisms. Although he was interested in sensory physiology and neural functioning, he later noted, “The need of including psychologists had indeed been obvious from the beginning. He who studies the nervous system cannot forget the mind, and he who studies the mind cannot forget the nervous system” ( Wiener, 1961 , p. 18).

Later in the Cybernetics book, Wiener indicated the value of viewing society as a control system, stating “Of all of these anti-homeostatic factors in society, the control of the means of communication is the most effective and most important” (p. 160). This statement is followed immediately by a focus on information processing of the individual: “One of the lessons of the present book is that any organism is held together in this action by the possession of means for the acquisition, use, retention, and transmission of information” ( Wiener, 1961 , p. 160).

Cybernetics, or the study of control and communication in machines and living things, is a general approach to understanding self-regulating systems. The basic unit of cybernetic control is the negative feedback loop, whose function is to reduce the sensed deviations from an expected outcome to maintain a steady state. Specifically, a present condition is perceived by the input function and then compared against a point of reference through a mechanism called a comparator. If there is a discrepancy between the present state and the reference value, an action is taken. This arrangement thus constitutes a closed loop of control, the overall purpose of which is to minimize deviations from the standard of comparison (reference point). Reference values are typically provided by superordinate systems, which output behaviors that constitute the setting of standards for the next lower level.
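To make the loop concrete, the following short Python sketch (our illustration; the gain, number of cycles, and reference value are arbitrary) implements the comparator logic just described, reducing the discrepancy between a sensed state and a reference value on each cycle.

```python
# Minimal sketch of a cybernetic negative feedback loop (hypothetical values).
# The comparator computes the discrepancy between a reference (goal) state and
# the sensed present state; the output function acts to reduce that discrepancy.

def run_feedback_loop(reference, initial_state, gain=0.5, cycles=10):
    state = initial_state
    for t in range(cycles):
        error = reference - state        # comparator: deviation from the standard
        correction = gain * error        # output function: action proportional to error
        state = state + correction       # the action changes the sensed condition
        print(f"cycle {t}: state = {state:.3f}, error = {error:.3f}")
    return state

# Example: regulating toward a reference value of 1.0 from a starting state of 0.0.
run_feedback_loop(reference=1.0, initial_state=0.0)
```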

Cybernetics thus illustrates one of the most valuable characteristics of mathematics: to identify a common feature (feedback) across many domains and then study it abstracted from those domains. This abstracted study draws the domains closer together and often enables results from one domain to be extended to the others. From its birth, cybernetics was conceived by Wiener as an interdisciplinary field, and control theory has had a major impact on diverse areas of work, such as biology, psychology, engineering, and computer science. Besides its mathematical nature, cybernetics has also been characterized as the science of complex probabilistic systems (Beer, 1959). In other words, cybernetics is a science of combined constant flows of communication and self-regulating systems.

Shannon and Information Theory

With backgrounds in electrical engineering and mathematics, Claude Shannon obtained his Ph.D. from MIT in 1940. Shannon is known within psychology primarily for information theory, but prior to his contribution on that topic, in his Master's thesis, he showed how to design switching circuits according to Boole's symbolic logic. Use of combinations of switches that represent binary values provides the foundation of modern computers and telecommunication systems (O'Regan, 2012). In the 1940s, Shannon's work on digital circuit theory opened doors for him and allowed him to make connections with great scientists of the day, including von Neumann, Albert Einstein, and Alan Turing. These connections, along with his work on cryptography, affected his thoughts about communication theory.

With regard to information theory, or what he called communication theory, Shannon (1948a) stated the essential problem of communication in the first page of his classic article:

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point… The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design. If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely. As was pointed out by Hartley the most natural choice is the logarithmic function (p. 379).

Shannon (1948a) characterized an information system as having five elements: (1) an information source; (2) a transmitter; (3) a channel; (4) a receiver; and (5) a destination. Note the similarity of Figure 1, taken from his article, to the human information-processing models of cognitive psychology. Shannon provided mathematical analyses of each element for three categories of communication systems: discrete, continuous, and mixed. A key measure in information theory is entropy, which Shannon defined as the amount of uncertainty involved in the value of a random variable or the outcome of a random process. Shannon also introduced the concepts of encoding and decoding for the transmitter and receiver, respectively. His main concern was to find explicit methods, also called codes, to increase the efficiency and reduce the error rate during data communication over noisy channels to near the channel capacity.

Figure 1. Shannon's schematic diagram of a general communication system (Shannon, 1948a).
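As an illustration of the entropy measure (our sketch, not Shannon's own computation; the message probabilities are invented), the following Python snippet computes H = -sum(p log2 p) over a repertory of possible messages; a uniform distribution yields maximum uncertainty.

```python
import math

def shannon_entropy(probabilities):
    """Entropy H = -sum(p * log2(p)) in bits, ignoring zero-probability messages."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Four equally likely messages: 2 bits of information per selection.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0

# A skewed repertory carries less uncertainty (lower entropy).
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))       # about 1.36
```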

Shannon explicitly acknowledged Wiener’s influence on the development of information theory:

Communication theory is heavily indebted to Wiener for much of its basic philosophy and theory. His classic NDRC report, The Interpolation, Extrapolation, and Smoothing of Stationary Time Series (Wiener, 1949), contains the first clear-cut formulation of communication theory as a statistical problem, the study of operations on time series. This work, although chiefly concerned with the linear prediction and filtering problem, is an important collateral reference in connection with the present paper. We may also refer here to Wiener's Cybernetics (Wiener, 1948b), dealing with the general problems of communication and control (Shannon and Weaver, 1949, p. 85).

Although Shannon and Weaver (1949) developed similar measures of information independently of Wiener, they approached the same problem from different angles. Wiener developed the statistical theory of communication, equated information with negative entropy, and applied it to the problems of prediction and filtering while he worked on designing anti-aircraft fire-control systems (Galison, 1994). Shannon, working primarily on cryptography at Bell Labs, drew an analogy between a secrecy system and a noisy communication system through coding messages into signals to transmit information in the presence of noise (Shannon, 1945). According to Shannon, the amount of information and channel capacity were expressed in terms of positive entropy. With regard to the difference in sign for entropy in his and Wiener's formulations, Shannon wrote to Wiener:

I do not believe this difference has any real significance but is due to our taking somewhat complementary views of information. I consider how much information is produced when a choice is made from a set – the larger the set the more the information. You consider the larger uncertainty in the case of a larger set to mean less knowledge of the situation and hence less information (Shannon, 1948b).

A key element of the mathematical theory of communication developed by Shannon is that it omits "the question of interpretation" (Shannon and Weaver, 1949). In other words, it separates information from the "psychological factors" involved in the ordinary use of information and establishes a neutral or non-specific human meaning of the information content (Luce, 2003). In this sense, consistent with cybernetics, information theory affirmed a neutral meaning of information common to systems of machines, human beings, or combinations of them. The view that information refers not to "what" you send but to what you "can" send, based on probability and statistics, opened a new science that used the same methods to study machines, humans, and their interactions.

Inference Revolution

Although it is often overlooked, a related impact on psychological research during roughly the same period was that of using statistical thinking and methodology for small sample experiments. The two approaches that have been most influential in psychology, the null hypothesis significance testing of Ronald Fisher and the more general hypothesis testing view of Jerzy Neyman and Egon Pearson, resulted in what Gigerenzer and Murray (1987) called the inference revolution .

Fisher, Information, Inferential Statistics, and Experiment Design

Ronald Fisher received his degree in mathematics from Cambridge University, where he spent an additional year studying statistical mechanics and quantum theory (Yates and Mather, 1963). He has been described as "a genius who almost single-handedly created the foundations for modern statistical science" (Hald, 2008, p. 147) and "the single most important figure of 20th century statistics" (Efron, 1998, p. 95). Fisher is also "rightly regarded as the founder of the modern methods of design and analysis of experiments" (Yates, 1964, p. 307). In addition to his work on statistics and experimental design, Fisher made significant scientific contributions to genetics and evolutionary biology. Indeed, Dawkins (2010), the famous biologist, called Fisher the greatest biologist since Darwin, saying:

Not only was he the most original and constructive of the architects of the neo-Darwinian synthesis. Fisher also was the father of modern statistics and experimental design. He therefore could be said to have provided researchers in biology and medicine with their most important research tools.

Our interest in this paper is, of course, with the research tools and logic that Fisher provided, along with their application to scientific content.

Fisher began his early research as a statistician at Rothamsted Experimental Station in Harpenden, England (1919–1933). There, he was hired to develop statistical methods that could be applied to interpret the cumulative results of agricultural experiments (Russell, 1966, p. 326). Besides dealing with past data, he became involved with ongoing experiments and developing methods to improve them (Lehmann, 2011). Fisher's hands-on work with experiments is essential background for understanding his positions regarding statistics and experimental design. Fisher (1962, p. 529) essentially said as much in an address published posthumously:

There is, frankly, no easy substitute for the educational discipline of whole time personal responsibility for the planning and conduct of experiments, designed for the ascertainment of fact, or the improvement of Natural Knowledge. I say "educational discipline" because such experience trains the mind and deepens the judgment for innumerable ancillary decisions, on which the value or cogency of an experimental program depends.

The analysis of variance (ANOVA, Fisher, 1925 ) and an emphasis on experimental design ( Fisher, 1937 ) were both outcomes of Fisher’s work in response to the experimental problems posed by the agricultural research performed at Rothamsted ( Parolini, 2015 ).

Fisher’s work synthesized mathematics with practicality and reshaped the scientific tools and practice for conducting and analyzing experiments. In the preface to the first edition of his textbook Statistical Methods for Research Workers , Fisher (1925 , p. vii) made clear that his main concern was application:

Daily contact with the statistical problems which present themselves to the laboratory worker has stimulated the purely mathematical researches upon which are based the methods here presented. Little experience is sufficient to show that the traditional machinery of statistical processes is wholly unsuited to the needs of practical research.

Although prior statisticians had developed probabilistic methods to estimate errors of experimental data [e.g., Student's (Gosset's) t-test; Student, 1908], Fisher carried the work a step further, developing the concept of null hypothesis testing using ANOVA (Fisher, 1925, 1935). Fisher demonstrated that by proposing a null hypothesis (usually no effect of an independent variable in a population), a researcher could evaluate whether a difference between conditions was sufficiently unlikely to have occurred by chance to allow rejection of the null hypothesis. Fisher proposed that tests of significance with a low p-value can be taken as evidence against the null hypothesis. The following quote from Fisher (1937, pp. 15–16) captures his position well:

It is usual and convenient for experimenters to take 5 per cent. as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results.

While Fisher recommended using the 0.05 probability level as a criterion to decide whether to reject the null hypothesis, his general position was that researchers should set the critical level of significance at a sufficiently low probability to limit the chance of concluding that an independent variable has an effect when the null hypothesis is true. Therefore, the criterion of significance does not necessarily have to be 0.05 (see also Lehmann, 1993), and Fisher's main point was that failing to reject the null hypothesis, regardless of what criterion is used, does not warrant accepting it (see Fisher, 1956, pp. 4 and 42).
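In modern terms, Fisher's logic can be illustrated with a minimal sketch (ours, with invented data, using the scipy library): a test yields a p value that is compared with a chosen criterion, and failing to reach that criterion only withholds rejection of the null hypothesis rather than establishing it.

```python
# Sketch of Fisher-style significance testing with the conventional 0.05 criterion.
# The two samples below are made up for illustration.
from scipy import stats

control   = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7]
treatment = [4.6, 4.9, 4.3, 5.0, 4.7, 4.5, 4.8, 4.4]

t_stat, p_value = stats.ttest_ind(treatment, control)
alpha = 0.05  # conventional, but any sufficiently low criterion can be chosen

if p_value < alpha:
    print(f"p = {p_value:.4f}: reject the null hypothesis of no difference")
else:
    print(f"p = {p_value:.4f}: fail to reject the null hypothesis (not 'accept' it)")
```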

Fisher (1925, 1935) also showed how different sources of variance can be partitioned to allow tests of separate and combined effects of two or more independent variables. Prior to this work, much experimental research in psychology, though not all, used designs in which a single independent variable was manipulated. Fisher made a case that designs with two or more independent variables were more informative than multiple experiments using different single independent variables because they allowed determination of whether the variables interacted. Fisher's application of the ANOVA to data from factorial experimental designs, coupled with his emphasis on always extracting the maximum amount of information (likelihood) conveyed by a statistic (see later), apparently influenced both Wiener and Shannon (Wiener, 1948a, p. 10).
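The partitioning of variance in a factorial design can be conveyed with a small sketch (ours; the scores and the balanced 2 x 2 layout are invented), which computes the sums of squares for two factors, their interaction, and error by hand.

```python
# Hand-computed partition of variance for a balanced 2 x 2 factorial design
# (illustrative data, not from any cited experiment).
import numpy as np

# data[a][b] holds the scores for factor A level a and factor B level b (n = 4 per cell)
data = np.array([[[5., 6., 5., 7.], [8., 9., 7., 9.]],
                 [[6., 5., 6., 6.], [12., 11., 13., 12.]]])

grand = data.mean()
n = data.shape[2]                      # observations per cell
a_means = data.mean(axis=(1, 2))       # marginal means for factor A
b_means = data.mean(axis=(0, 2))       # marginal means for factor B
cell_means = data.mean(axis=2)

ss_a = n * data.shape[1] * np.sum((a_means - grand) ** 2)
ss_b = n * data.shape[0] * np.sum((b_means - grand) ** 2)
ss_cells = n * np.sum((cell_means - grand) ** 2)
ss_ab = ss_cells - ss_a - ss_b                          # interaction of A and B
ss_error = np.sum((data - cell_means[:, :, None]) ** 2) # within-cell variability

print(f"SS_A = {ss_a:.2f}, SS_B = {ss_b:.2f}, SS_AxB = {ss_ab:.2f}, SS_error = {ss_error:.2f}")
```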

In "The Place of Design of Experiments in the Logic of Scientific Inference," Fisher (1962) linked experimental design, the application of correct statistical methods, and the subsequent extraction of a valid conclusion through the concept of information (known as Fisher information). Fisher information measures the amount of information that an obtained random sample of data carries about the parameter of interest (Ly et al., 2017). It is the expected value of the squared first derivative (the score) of the log-likelihood function, where the likelihood is the probability density of the obtained data conditional on that parameter. In other words, the Fisher information is the variance of the score, which measures how sensitive the likelihood function is to changes in the parameter, such as the effect of a manipulated variable, given the obtained results. Furthermore, Fisher argued that experimenters should be interested not only in minimizing the loss of information in the process of statistical reduction (e.g., using ANOVA to summarize evidence in a way that preserves the relevant information in the data; Fisher, 1925, pp. 1 and 7) but also in the deliberate study of experimental design, for example, by introducing randomization or control, to maximize the amount of information provided by estimates derived from the resulting experimental data (see Fisher, 1947). Therefore, Fisher unified experimental design and statistical analysis through information (Seidenfeld, 1992; Aldrich, 2007), an approach that resonates with the system view of cybernetics.
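As a concrete illustration of this definition (our sketch, assuming a simple Bernoulli likelihood rather than any design Fisher discussed), the Fisher information can be computed both as the expected squared score and from the closed-form expression 1/[theta(1 - theta)].

```python
# Sketch: Fisher information for a Bernoulli parameter theta, computed two ways.
# Numerically, it is the expected squared score (first derivative of the log-likelihood);
# analytically, I(theta) = 1 / (theta * (1 - theta)).

theta = 0.3

def score(x, theta):
    """Derivative of the Bernoulli log-likelihood with respect to theta."""
    return x / theta - (1 - x) / (1 - theta)

# Expectation over the two possible outcomes, weighted by their probabilities.
numeric_info = theta * score(1, theta) ** 2 + (1 - theta) * score(0, theta) ** 2
analytic_info = 1 / (theta * (1 - theta))

print(numeric_info, analytic_info)   # both equal about 4.762
```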

Neyman-Pearson Approach

Jerzy Neyman obtained his doctorate from the University of Warsaw with a thesis based on his statistical work at the Agricultural Institute in Bydgoszcz, Poland, in 1924. Egon Pearson received his undergraduate degree in mathematics and continued his graduate study in astronomy at Cambridge University until 1921. In 1926, Neyman and Pearson started their collaboration and raised a question with regard to Fisher's method: why test only the null hypothesis? They proposed a solution in which not only the null hypothesis but also a class of possible alternatives are considered, and the decision is one of accepting or rejecting the null hypothesis. This decision yields probabilities of two kinds of error: false rejection of the null hypothesis (Type I, or alpha) or false acceptance of the null hypothesis when the alternative is true (Type II, or beta; Neyman and Pearson, 1928, 1933). They suggested that the best test was the one that minimized the Type II error subject to a bound on the Type I error, i.e., the significance level of the test. Thus, instead of classifying the null hypothesis as rejected or not, the central consideration of the Neyman-Pearson approach was that one must specify not only the null hypothesis but also the alternative hypotheses against which it is tested. With this symmetric decision approach, statistical power (1 – Type II error) becomes an issue. Fisher (1947) also realized the importance and necessity of power but argued that it is a qualitative concern addressed during the experimental design to increase the sensitivity of the experiment and not part of the statistical decision process. In other words, Fisher thought that researchers should "conduct experimental and observational inquiries so as to maximize the information obtained for a given expenditure" (Fisher, 1951, p. 54), but did not see that as being part of statistics.
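The Neyman-Pearson quantities can be illustrated by simulation (our sketch; the effect size, sample size, and number of simulated experiments are arbitrary), estimating the Type I error rate when the null hypothesis is true and the power when it is false.

```python
# Simulation sketch of Neyman-Pearson error rates (hypothetical effect and sample size).
# Type I error: rejecting a true null; Type II error: failing to reject a false null.
# Power = 1 - Type II error rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, effect, sims = 0.05, 30, 0.5, 5000

def reject_null(delta):
    """Run one simulated two-group experiment and test at the alpha level."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(delta, 1.0, n)
    return stats.ttest_ind(a, b).pvalue < alpha

type_i = np.mean([reject_null(0.0) for _ in range(sims)])      # null hypothesis is true
power  = np.mean([reject_null(effect) for _ in range(sims)])   # null hypothesis is false

print(f"Type I error rate ~ {type_i:.3f}, power ~ {power:.3f}, Type II ~ {1 - power:.3f}")
```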

Rejecting or accepting the null hypothesis, rather than disregarding results that do not allow rejection of the null hypothesis, was the rule of behavior for Neyman-Pearson hypothesis testing. Thus, their approach assumed that a decision made from the statistical analysis was sufficient to draw a conclusion as to whether the null or alternative hypothesis was most likely and did not put emphasis on the need for non-statistical inductive inference to understand the problems. In other words, tests of significance were interpreted as means to make decisions in an acceptance procedure (also see Wald, 1950) but not specifically for research workers to gain a better understanding of the experimental material. Also, the Neyman-Pearson approach interpreted probability (or the value of a significance level) as a realization of long-run frequency in a sequence of repetitions under constant conditions. Their view was that if a sequence of independent events was obtained with probability p of success, then the long-run success frequency would be close to p (this is known as frequentist probability; Neyman, 1977). Fisher (1956) vehemently disagreed with this frequentist position.

Fisherian vs. Frequentist Approaches

In a nutshell, the differences between Fisherians and frequentists are mostly about research philosophy and how to interpret the results (Fisher, 1956). In particular, Fisher emphasized that in scientific research, failure to reject the null hypothesis should not be interpreted as acceptance of it, whereas Neyman and Pearson portrayed the process as a decision between accepting or rejecting the null hypothesis. Nevertheless, the usual practice for statistical testing in psychology is based on a hybrid of Fisher's and Neyman-Pearson's approaches (Gigerenzer and Murray, 1987). In practice, when behavioral researchers speak of the results of research, they are primarily referring to the statistically significant results and less often to null effects and the effect size estimates associated with those p-values.

The reliance on the results of significance testing has been explained from two perspectives: (1) neither experienced behavioral researchers nor experienced statisticians have a good intuitive feel for the practical meaning of effect size estimation (e.g., Rosenthal and Rubin, 1982); (2) researchers rely on a reject-or-accept dichotomous decision procedure, in which the differences between p levels are taken to be trivial relative to the difference between exceeding or failing to exceed a 0.05 or some other accepted level of significance (Nelson et al., 1986). The reject-accept procedure follows the Neyman-Pearson approach and is compatible with the view that information is binary (Shannon, 1948a). Nevertheless, even if an accurate statistical power analysis is conducted, a properly powered replication study can produce results that are consistent with the effect size of interest or consistent with absolutely no effect (Nelson et al., 1986; Maxwell et al., 2015). Therefore, instead of relying solely on hypothesis testing, that is, on whether an effect is judged true or false, researchers should consider reporting the actual p level obtained along with an estimate of the effect size.
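A minimal sketch of such a report (ours, with invented data; Cohen's d is used here simply as one common standardized effect size) is shown below.

```python
# Sketch: report the obtained p value together with an effect size estimate
# (Cohen's d), rather than only a reject/not-reject decision. Data are invented.
import numpy as np
from scipy import stats

group_a = np.array([12.1, 11.4, 13.0, 12.7, 11.9, 12.5, 13.2, 12.0])
group_b = np.array([13.4, 12.9, 14.1, 13.8, 13.0, 13.6, 14.0, 13.3])

t_stat, p_value = stats.ttest_ind(group_a, group_b)
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"p = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")
```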

Fisher emphasized in his writings that an essential ingredient in the research process is the judgment of the researcher, who must decide by how much the obtained results have advanced a particular theoretical proposition (that is, how meaningful the results are). This decision is based in large part on decisions made during experimental design. The statistical significance test is just a useful tool to inform such decisions during the process to allow the researcher to be confident that the results are likely not due to chance. Moreover, he wanted this statistical decision in scientific research to be independent of a priori probabilities or estimates because he did not think these could be made accurately. Consequently, Fisher considered that only a statistically significant effect in an exact test for which the null hypothesis can be rejected should be open to subsequent interpretation by the researcher.

Acree (1978 , pp. 397–398) conducted a thorough evaluation of statistical inference in psychological research that for the most part captures why Fisher’s views had greater impact on the practice of psychological researchers than those of Neyman and Pearson (emphasis ours):

On logical grounds, Neyman and Pearson had decidedly the better theory; but Fisher's claims were closer to the ostensible needs of psychological research. The upshot is that psychologists have mostly followed Fisher in their thinking and practice: in the use of the hypothetical infinite population to justify probabilistic statements about a single data set; in treating the significance level evidentially; in setting it after the experiment is performed; in never accepting the null hypothesis; in disregarding power…. Yet the rationale for all our statistical methods, insofar as it is presented, is that of Neyman and Pearson, rather than Fisher.

Although Neyman and Pearson may have had “decidedly the better theory” for statistical decisions in general, Fisher’s approach provides a better theory for scientific inferences from controlled experiments.

Interim Summary

The work we described in Sections "Cybernetics and Information Theory" and "Inference Revolution" identifies three crucial pillars of research that were developed mainly in the period from 1940 to 1955: cybernetics/control theory, information/communication theory, and inferential statistical theory. Moreover, our analysis revealed a correspondence among those pillars. Specifically, cybernetics/control theory corresponds to experimental design, both of which provide the framework for cognitive psychology; information theory corresponds to statistical testing, both of which provide quantitative evidence for qualitative assumptions.

These pillars were identified as early as 1952 in the preface to the proceedings of a conference called Information Theory in Biology , in which the editor, Quastler (1953 , p. 1), said:

The "new movement" [what we would call information-processing theory] is based on evaluative concepts (R. A. Fisher's experimental design, A. Wald's statistical decision function, J. von Neumann's theory of games), on the development of a measure of information (R. Hartley, D. Gabor, N. Wiener, C. Shannon), on studies of control mechanisms, and the analysis and design of large systems (W. S. McCulloch and W. Pitt's "neurons," J. von Neumann's theory of complicated automata, N. Wiener's cybernetics).

The pillars undergirded not only the new movement in biology but also the new movement in psychology. The concepts introduced in the dawning information age of 1940–1955 had tremendous impact on applied and basic research in experimental psychology that transformed psychological research into a form that has developed to the present.

Human Information Processing

As noted in earlier sections, psychologists and neurophysiologists were involved in the cybernetics, information theory, and inferential statistics movements from the earliest days. Each of these movements was crucial to the ascension of the information-processing approach in psychology and the emergence of cognitive science, which are often dated to 1956. In this section, we review developments in cognitive psychology linked to each of the three pillars, starting with the most fundamental one, cybernetics.

The Systems Viewpoint of Cybernetics

George A. Miller explicitly credited cybernetics as being seminal in 1979, stating, “I have picked September 11, 1956 [the date of the second MIT symposium on Information Theory] as the birthday of cognitive science – the day that cognitive science burst from the womb of cybernetics and became a recognizable interdisciplinary adventure in its own right” (quoted by Elias, 1994 , p. 24; emphasis ours). With regard to the development of human factors (ergonomics) in the United Kingdom, Waterson (2011 , pp. 1126–1127) remarks similarly:

During the 1960s, the 'systems approach' within ergonomics took on a precedence which has lasted until the present day, and a lot of research was informed from cybernetics and general systems theory. In many respects, a concern in applying a systemic approach to ergonomic issues could be said to be one of the factors which 'glues' together all of the elements and sub-disciplines within ergonomics.

This seminal role for cybernetics is due to its fundamental idea that various levels of processing in humans and non-humans can be viewed as control systems with interconnected stages and feedback loops. It should be apparent that the human information-processing approach, which emphasizes the human as a processing system with feedback loops, stems directly from cybernetics, as does the human-machine system view that underlies contemporary human factors and ergonomics.

We will provide a few more specific examples of the impact of cybernetics. McCulloch and Pitts (1943), members of the cybernetics movement, are given credit for developing "the first conceptual model of an artificial neural network" (Shiffman, 2012) and "the first modern computational theory of mind and brain" (Piccinini, 2004). The McCulloch-Pitts model treated neurons as logical decision elements with on and off states, which are the basis for building brain-like machines. Since then, Boolean functions, together with feedback through neurons, have been used extensively to quantify theorizing in relation to both neural and artificial intelligent systems (Piccinini, 2004). Thus, computational modeling of brain processes was part of the cybernetics movement from the outset.
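A McCulloch-Pitts style unit can be sketched in a few lines (our illustration; the weights and thresholds are chosen by hand): a binary threshold element that, depending on its threshold, realizes different Boolean functions.

```python
# Minimal McCulloch-Pitts style unit: a neuron as a binary (on/off) logical
# decision element that fires when the weighted sum of its inputs reaches a threshold.

def mp_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With suitable thresholds, the same unit realizes different Boolean functions.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    and_out = mp_neuron(x, weights=(1, 1), threshold=2)   # logical AND
    or_out  = mp_neuron(x, weights=(1, 1), threshold=1)   # logical OR
    print(x, "AND:", and_out, "OR:", or_out)
```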

Franklin Taylor, a noted engineering psychologist, reviewed Wiener's (1948b) Cybernetics book, calling it "a curious and provocative book" (Taylor, 1949, p. 236). Taylor noted, "The author's most important assertion for psychology is his suggestion, often repeated, that computers, servos, and other machines may profitably be used as models of human and animal behavior" (p. 236), and "It seems that Dr. Wiener is suggesting that psychologists should borrow the theory and mathematics worked out for machines and apply them to the behavior of men" (p. 237). Psychologists have clearly followed this suggestion, making ample use of the theory and mathematics of control systems. Craik (1947, 1948) in the UK had in fact already started to take a control theory approach to human tracking performance, stating that his analysis "puts the human operator in the class of 'intermittent definite correction servos'" (Craik, 1948, p. 148).

Wiener’s work seemingly had considerable impact on Taylor, as reflected in the opening paragraphs of a famous article by Birmingham and Taylor (1954) on human performance of tracking tasks and design of manual control systems:

The cardinal purpose of this report is to discuss a principle of control system design based upon considerations of engineering psychology. This principle will be found to advocate design practices for man-operated systems similar to those customarily employed by engineers with fully automatic systems…. In many control systems the human acts as the error detector… During the last decade it has become clear that, in order to develop control systems with maximum precision and stability, human response characteristics have to be taken into account. Accordingly, the new discipline of engineering psychology was created to undertake the study of man from an engineering point of view (p. 1748).

Control theory continues to provide a quantitative means for modeling basic and applied human performance ( Jagacinski and Flach, 2003 ; Flach et al., 2015 ).

Colin Cherry, who performed the formative study on auditory selective attention, studied with Wiener and Jerome Wiesner at MIT in 1952. It was during this time that he conducted his classic experiments on the cocktail party problem – the question of how we identify what one person is saying when others are speaking at the same time (Cherry, 1953). His detailed investigations of selective listening, including attention switching, provided the basis for much research on the topic in the next decade that laid the foundation for contemporary studies of attention. The initial models explored the features and locus of a "limited-capacity processing channel" (Broadbent, 1958; Deutsch and Deutsch, 1963). Subsequent landmark studies of attention include the attenuation theory of Treisman (1960; also see Moray, 1959); capacity models that conceive of attention as a resource to be flexibly allocated to various stages of human information processing (Kahneman, 1973; Posner, 1978); the distinction between controlled and automatic processing (Shiffrin and Schneider, 1977); and the feature-integration theory of visual search (Treisman and Gelade, 1980).

As noted, Miller (2003) and others identified the year 1956 as a critical one in the development of contemporary psychology (Newell and Simon, 1972; Mandler, 2007). Mandler lists two events that year that ignited the field, in both of which Allen Newell and Herbert Simon participated. The first is the meeting of the Special Group on Information Theory of the Institute of Electrical and Electronics Engineers, which included papers by linguist Noam Chomsky (who argued against an information-theory approach to language, favoring his transformational-generative grammar) and psychologist Miller (on avoiding the short-term memory bottleneck), in addition to Newell and Simon (on their Logic Theorist "thinking machine") and others (Miller, 2003). The other event is the Dartmouth Summer Seminar on Artificial Intelligence (AI), which was organized by John McCarthy, who had coined the term AI the previous year. It included Shannon, Oliver Selfridge (who discussed initial ideas that led to his Pandemonium model of human pattern recognition, described in the next paragraph), and Marvin Minsky (a pioneer of AI, who turned to symbolic AI after earlier work on neural nets; Moor, 2006), among others. A presentation by Newell and Simon at that seminar is regarded as essential in the birth of AI, and their work on human problem solving exploited concepts from work on AI.

Newell applied a combination of experimental and theoretical research during his work at the RAND Corporation beginning in 1950 (Simon, 1997). For example, in 1952, he and his colleagues designed and conducted laboratory experiments on a full-scale simulation of an Air Force Early Warning Station to study the decision-making and information-handling processes of the station crews. Central to the research was recording and analyzing the crews' interactions with their radar screens, with interception aircraft, and with each other. From these studies, Newell came to believe that information processing is the central activity in organizations (systems).

Selfridge (1959) laid the foundation for a cognitive theory of letter perception with his Pandemonium model, in which letter identification is achieved by way of hierarchically organized layers of feature and letter detectors. Inspired by Selfridge's work on Pandemonium, Newell started to converge on the idea that systems can be created that contain intelligence and have the ability to adapt. Based on his understanding of computers, heuristics, information processing in organizations (systems), and cybernetics, Newell (1955) delineated the design of a computer program to play chess in "The Chess Machine: An Example of Dealing with a Complex Task by Adaptation." After that, for Newell, the investigation of organizations (systems) became the examination of the mind, and he committed himself to understanding human learning and thinking through computer simulations.

In the study of problem solving, think-aloud protocols in laboratory settings revealed that means-end analysis is a key heuristic mechanism. Specifically, the current situation is compared to the desired goal state, and mental or physical actions are taken to reduce the gap. Newell, Simon, and Cliff Shaw developed the General Problem Solver, a computer program that could solve problems in various domains if given a problem space (domain representation), possible actions to move between space states, and information about which actions would reduce the gap between the current and goal states (see Ernst and Newell, 1969, for a detailed treatment, and Newell and Simon, 1972, for an overview). The control structure built into the program underlined its importance for solving problems, reflecting a combination of cybernetics and information theory.
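The means-end heuristic can be conveyed with a toy sketch (ours, written in the spirit of, but not taken from, the General Problem Solver; the operators and facts are invented): unmet preconditions of a chosen operator become subgoals that are achieved recursively.

```python
# Toy means-end analysis sketch. Each operator removes part of the difference between
# the current state (a set of facts) and the goal; unmet preconditions become subgoals
# that are achieved recursively before the operator is applied.

operators = {
    "pack_bag":    {"needs": set(),                        "adds": {"bag_packed"}},
    "buy_ticket":  {"needs": set(),                        "adds": {"has_ticket"}},
    "board_train": {"needs": {"has_ticket", "bag_packed"}, "adds": {"on_train"}},
}

def achieve(state, goal, plan):
    """Reduce the difference between the current state and the goal (means-end analysis)."""
    while not goal <= state:
        difference = goal - state
        # select an operator whose effects reduce the remaining difference
        name, op = next((n, o) for n, o in operators.items() if o["adds"] & difference)
        state = achieve(state, op["needs"], plan)   # subgoal: satisfy its preconditions first
        state = state | op["adds"]                  # apply the operator
        plan.append(name)
    return state

plan = []
achieve(set(), {"on_train"}, plan)
print(plan)   # ['pack_bag', 'buy_ticket', 'board_train']
```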

Besides using cybernetics, neuroscientists further developed it to explain anticipation in biological systems. Although closed-loop feedback can perform online corrections in a determinate machine, it does not give any direction (Ashby, 1956, pp. 224–225). Therefore, a feedforward loop was proposed in cybernetics that could improve control over systems through anticipation of future actions (Ashby, 1956; MacKay, 1956). Generally, the feedforward mechanism is constructed as another input pathway parallel to the actual input, which enables comparison between the actual and anticipated inputs before they are processed by the system (Ashby, 1960; Pribram, 1976, p. 309). In other words, a self-organized system is not only capable of adjusting its own behavior (feedback) but is also able to change its own internal organization in such a way as to select, from among the random responses that it attempts, the response that eliminates a disturbance from the outside (Ashby, 1960). Therefore, the feedforward loop "nudges" the inputs based on predefined parameters in an automatic manner to account for cognitive adaptation, indicating a higher level of action planning. Moreover, unlike error-based feedback control, knowledge-based feedforward control cannot be further adjusted once the feedforward input has been processed. Feedforward control from cybernetics has been used by psychologists to understand human action control at behavioral, motoric, and neural levels (for a review, see Basso and Olivetti Belardinelli, 2006).

Therefore, both feedback and feedforward are critical to a control system, in which feedforward control is valuable and can improve performance when feedback control alone is not sufficient. A control system with feedforward and feedback loops allows interaction between top-down and bottom-up information processing. Consequently, the main function of a control system is not to create "behavior" but to create and maintain the anticipation of a specific desired condition, which constitutes its reference value or standard of comparison.
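A simple tracking sketch (ours; the plant, gains, and disturbance are invented) illustrates this division of labor: the feedforward term anticipates the command needed to reach the reference, and the feedback term removes the residual error caused by an unmodeled disturbance.

```python
# Sketch of combined feedforward and feedback control (all values are invented).
# The feedforward term is an anticipatory, model-based command; the feedback term
# corrects the residual error produced by an unmodeled disturbance.

def track(reference, steps=6, fb_gain=0.8, disturbance=-0.3):
    output = 0.0
    u_fb = 0.0
    for t in range(steps):
        u_ff = reference                     # feedforward: inverse-model command (anticipation)
        output = u_ff + u_fb + disturbance   # plant output, perturbed by the disturbance
        error = reference - output           # comparator: deviation from the goal
        u_fb += fb_gain * error              # feedback accumulates corrections from the error
        print(f"step {t}: output = {output:.3f}, error = {error:.3f}")
    return output

track(reference=1.0)
```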

Cognitive psychology and neuroscience suggest that a combination of anticipatory and hierarchical structures is involved in human action learning and control (Herbort et al., 2005). Specifically, anticipatory mechanisms lead to direct action selection in inverse models and to effective filtering mechanisms in forward models, both of which are based on sensorimotor contingencies acquired through people's interaction with the environment (the ideomotor principle; Greenwald, 1970; James, 1890). Therefore, the feedback loop included in cybernetics, as well as the feedforward loop, is essential to these learning processes.

We conclude this section with mention of one of the milestone books in cognitive psychology, Plans and the Structure of Behavior, by Miller et al. (1960). In the prologue to the book, the authors indicate that they worked on it together for a year at the Center for Advanced Study in the Behavioral Sciences in California. As indicated by the title, the central idea motivating the book was that of a plan, or program, that guides behavior. But, the authors said:

Our fundamental concern, however, was to discover whether the cybernetic ideas have any relevance for psychology…. There must be some way to phrase the new ideas [of cybernetics] so that they can contribute to and profit from the science of behavior that psychologists have created. It was the search for that favorable intersection that directed the course of our year-long debate (p. 3, emphasis ours).

In developing the central concept of the test-operate-test-exit (TOTE) unit in the book, Miller et al. (1960) stated, "The interpretation to which the argument builds is one that has been called the 'cybernetic hypothesis,' namely that the fundamental building block of the nervous system is the feedback loop" (pp. 26–27). As noted by Edwards (1997), the TOTE concept "is the same principle upon which Weiner, Rosenblueth, and Bigelow had based 'Behavior, Purpose, and Teleology'" (p. 231). Thus, although Miller later gave 1956 as the date that cognitive science "burst from the womb of cybernetics," even after the birth of cognitive science, the genes inherited from cybernetics continued to influence its development.

Information and Uncertainty

Information theory, a useful way to quantify psychological and behavioral concepts, had possibly a more direct impact than cybernetics on psychological research. No articles were retrieved from the PsycINFO database prior to 1950 when we entered "information theory" as an unrestricted field search term on May 3, 2018. But, from 1950 to 1956 there were 37 entries with "information theory" in the title and 153 entries with the term in some field. Two articles applying information theory to speech communication appeared in 1950: a general theoretical article by Fano (1950) of the Research Laboratory of Electronics at MIT, and an empirical article by Licklider (1950) of the Acoustics Laboratory, also at MIT. Licklider (1950) presented two methods of reducing the frequencies of speech without destroying intelligibility by using the Shannon-Weaver information formula based on first-order probability.

Given Licklider’s background in cybernetics and information theory, it is not too surprising that he played a major role in establishing the ARPAnet, which was later replaced by the Internet:

His 1968 paper called "The Computer as a Communication Device" illustrated his vision of network applications and predicted the use of computer networks for communications. Until then, computers had generally been thought of as mathematical devices for speeding up computations (Internet Hall of Fame, 2016).

Licklider worked from 1943 to 1950 at the Psycho-Acoustic Laboratory (PAL) of Harvard University. Edwards (1997, p. 212) noted, "The PAL played a crucial role in the genesis of postwar information processing psychologies." He pointed out, "A large number of those who worked at the lab… helped to develop computer models and metaphors and to introduce information theory into human experimental psychology" (p. 212). Among those were George Miller and Wendell Garner, who did much to promulgate information theory in psychology (Garner and Hake, 1951; Miller, 1953), as well as Licklider, Galanter, and Pribram. Much of PAL's research was rooted in solving engineering problems for the military and industry.

One of the most influential applications of information theory to human information-processing limitations was that of Hick (1952) and Hyman (1953), who used it to explain increases in reaction time as a function of uncertainty regarding the potential stimulus-response alternatives. Their analyses showed that reaction time increased as a logarithmic function of the number of equally likely alternatives and as a function of the amount of information computed from differential probabilities of occurrence and sequential effects. This relation, called Hick's law or the Hick-Hyman law, has continued to be a source of research to the present and is considered to be a fundamental law of human-computer interaction (Proctor and Schneider, 2018). Fitts and Seeger (1953) showed that uncertainty was not the only factor influencing reaction time. They examined performance of an eight-choice task for all combinations of three spatial-location stimulus and response arrays. Responses were faster and more accurate when the response array corresponded to that of the stimulus array than when it did not, which Fitts and Seeger called a stimulus-response compatibility effect. The main point of their demonstration was that correspondence of the spatial codes for the stimulus and response alternatives was crucial, and this led to detailed investigations of compatibility effects that continue to the present (Proctor and Vu, 2006, 2016).
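For illustration, the Hick-Hyman relation can be written as RT = a + bH, where H is the stimulus information in bits; the sketch below (ours, with arbitrary intercept and slope values) computes predicted reaction times for sets of equally likely alternatives.

```python
# Hick-Hyman law sketch: RT grows linearly with stimulus information H (in bits).
# The intercept a and slope b below are illustrative values, not estimates from any study.
import math

def information_bits(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def predicted_rt(probabilities, a=0.2, b=0.15):
    """RT = a + b * H, with a in seconds and b in seconds per bit."""
    return a + b * information_bits(probabilities)

for n in (2, 4, 8):
    equally_likely = [1 / n] * n
    print(f"{n} alternatives: H = {information_bits(equally_likely):.1f} bits, "
          f"predicted RT = {predicted_rt(equally_likely):.3f} s")
```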

Even more influential has been Fitts's law, which describes movement time in tasks where people make discrete aimed movements to targets or series of repetitive movements between two targets. Fitts (1954) defined the index of difficulty as –log2(W/2A) bits/response, where W is the target width and A is the amplitude (or distance) of the movement. The resulting movement time is a linear function of the index of difficulty, with the slope differing for different movement types. Fitts's law continues to be the subject of basic and applied research to the present (Glazebrook et al., 2015; Velasco et al., 2017).
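The computation can be illustrated as follows (our sketch; the intercept and slope are arbitrary placeholders, not estimates from Fitts's data).

```python
# Fitts's law sketch: index of difficulty ID = -log2(W / 2A) = log2(2A / W) bits/response,
# and movement time MT = a + b * ID. The intercept a and slope b are illustrative only.
import math

def index_of_difficulty(amplitude, width):
    return math.log2(2 * amplitude / width)

def movement_time(amplitude, width, a=0.05, b=0.1):
    """MT in seconds for illustrative a (s) and b (s/bit)."""
    return a + b * index_of_difficulty(amplitude, width)

# Doubling the distance or halving the target width adds one bit of difficulty.
for amplitude, width in [(8, 2), (16, 2), (16, 1)]:
    print(f"A = {amplitude}, W = {width}: ID = {index_of_difficulty(amplitude, width):.1f} bits, "
          f"MT = {movement_time(amplitude, width):.3f} s")
```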

Information theory was applied to a range of other topics during the 1950s, including intelligence tests ( Hick, 1951 ), memory ( Aborn and Rubenstein, 1952 ; Miller, 1956 ), perception ( Attneave, 1959 ), skilled performance ( Kay, 1957 ), music ( Meyer, 1957 ), and psychiatry ( Brosin, 1953 ). However, the key concept of information theory, entropy, or uncertainty, was found not to provide an adequate basis for theories of human performance (e.g., Ambler et al., 1977 ; Proctor and Schneider, 2018 ).

Shannon’ (1948a) advocacy of information theory for electronic communication was mainly built on there being a mature understanding of the structured pattern of information transmission within electromagnetic systems at that time. In spite of cognitive research having greatly expanded our knowledge about how humans select, store, manipulate, recover, and output information, the fundamental mechanisms of those information processes remained under further investigation ( Neisser, 1967 , p. 8). Thus, although information theory provided a useful mathematic metric, it did not provide a comprehensive account of events between the stimulus and response, which is what most psychologists were interested in ( Broadbent, 1959 ). With the requirement that information be applicable to a vast array of psychological issues, “information” has been expanded from a measure of informativeness of stimuli and responses, to a framework for describing the mental or neural events between stimuli and responses in cognitive psychology ( Collins, 2007 ). Therefore, the more enduring impact of information theory was through getting cognitive psychologists to focus on the nature of human information processing, such that by Lachman et al. (1979) titled their introduction to the field, Cognitive Psychology and Information Processing .

Along with information theory, the arrival of the computer provided one of the most viable models to help researchers understand the human mind. Computers grew from a desire to make machines smart (Laird et al., 1987), on the assumption that stored knowledge inside a machine can be applied to the world in much the way that people apply their knowledge, constituting intelligence (e.g., intelligent machines, Turing, 1937; AI, Minsky, 1968; McCarthy et al., 2006). The core idea of the computer metaphor is that the mind functions like a digital computer, in which mental states are computational states and mental processes are computational processes. The use of the computer as a tool for thinking about how the mind handles information has been highly influential in cognitive psychology. For example, the PsycINFO database returned no articles prior to 1950 when "encoding" was entered as an unrestricted field search term on May 3, 2018. From 1950 to 1956 there was 1 entry with "encoding" in the title and 4 entries with the term in some field. But, from 1956 to 1973, there were 214 entries with "encoding" in the title and 578 entries with the term in some field, including the famous encoding specificity principle of Tulving and Thomson (1973). Some models in cognitive psychology were directly inspired by how the memory system of a computer works, for example, the multi-store memory (Atkinson and Shiffrin, 1968) and working memory (Baddeley and Hitch, 1974) models.

Although cybernetics is the origin of early AI (Kline, 2011), and the computer metaphor and cybernetics share similar concepts (e.g., representation), they are fundamentally different at the conceptual level. The computer metaphor represents a genuine simplification: Terms like "encoding" and "retrieving" can be used to describe human behavior analogously to machine operation but without specifying a precise mapping between the analogical "computer" domain and the target "human" domain (Gentner and Grudin, 1985). In contrast, cybernetics provides a powerful framework for understanding the human mind, which holds that, for human and machine alike, it is necessary and possible to achieve goals by correcting action using feedback and by adapting to the external environment using feedforward. Recent breakthroughs in AI (e.g., AlphaGo beating professional Go players) rely on training the machine, with a large number of examples and an artificial neural network (ANN), to perform tasks at a level not seen before with little human guidance. Such learning allows the machine to determine on its own whether a certain function should be executed. The development of ANNs has been greatly influenced by consideration of the dynamic properties of cybernetics (Cruse, 2009), to achieve the goal of self-organization or self-regulation.

Statistical Inference and Decisions

Statistical decision theory also had substantial impact. Engineering psychologists were among the leaders in promulgating use of the ANOVA, with Chapanis and Schachter (1945 ; Schachter and Chapanis, 1945 ) using it in research on depth perception through distorted glass, conducted in the latter part of World War II and presented in Technical Reports. As noted by Rucci and Tweney (1980) , “Following the war, these [engineering] psychologists entered the academic world and began to publish in regular journals, using ANOVA” (p. 180).

Factorial experiments and use of the ANOVA were slow to take hold in psychology. Rucci and Tweney (1980) counted the frequency with which the t -test and ANOVA were used in major psychology journals from 1935 to 1952. They described the relation as, “Use of both t and ANOVA increased gradually prior to World War II, declined during the war, and increased immediately thereafter” (p. 172). Rucci and Tweney concluded, “By 1952 it [ANOVA] was fully established as the most frequently used technique in experimental research” (p. 166). They emphasized that this increased use of ANOVA reflected a radical change in experimental design, and emphasized that although one could argue that the statistical technique caused the change in psychological research, “It is just as plausible that the discipline had developed in such a way that the time was ripe for adoption of the technique” (p. 167). Note that the rise in use of null hypothesis testing and ANOVA paralleled that of cybernetics and information theory, which suggests that the time was indeed ripe for the use of probability theory, multiple independent variables, and formal scientific decision making through hypothesis testing that is embodied in the factorial design and ANOVA.

The first half of the 1950s also saw the introduction of signal detection theory, a variant of statistical decision theory, for analyzing human perception and performance. Initial articles by Peterson et al. (1954) and Van Meter and Middleton (1954) were published in a journal of the Institute of Electrical and Electronics Engineers (IEEE), but psychologists were quick to realize the importance of the approach. This point is evident in the first sentence of Swets et al.’s (1961) article describing signal detection theory in detail:

About 5 years ago, the theory of statistical decision was translated into a theory of signal detection. Although the translation was motivated by problems in radar, the detection theory that resulted is a general theory… The generality of the theory suggested to us that it might also be relevant to the detection of signals by human observers… The detection theory seemed to provide a framework for a realistic description of the behavior of the human observer in a variety of perceptual tasks (p. 301).

Signal detection theory has proved to be an invaluable tool because it dissociates influences of the evidence on which decisions are based from the criteria applied to that evidence. This way of conceiving decisions is useful not only for perceptual tasks but for a variety of tasks in which choices on the basis of noisy information are required, including recognition memory ( Kellen et al., 2012 ). Indeed, Wixted (2014) states, “Signal-detection theory is one of psychology’s most notable achievements, but it is not a theory about typical psychological phenomena such as memory, attention, vision or psychopathology (even though it applies to all of those areas and more). Instead, it is a theory about how we use evidence to make decisions.”
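
To make this dissociation concrete, the following minimal sketch computes sensitivity (d′) and criterion (c) from a hit rate and a false-alarm rate using the inverse of the standard normal distribution; the rates are hypothetical, not data from any of the cited studies.

```python
# Minimal sketch of a signal detection analysis: separating sensitivity (d')
# from response criterion (c) given hit and false-alarm rates. The rates
# below are hypothetical and serve only to illustrate the computation.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def d_prime(hit_rate, fa_rate):
    """Distance between signal and noise distributions, in z units."""
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response criterion c; 0 means unbiased responding."""
    return -0.5 * (z(hit_rate) + z(fa_rate))

h, f = 0.84, 0.16  # hypothetical hit and false-alarm rates
print(round(d_prime(h, f), 2), round(criterion(h, f), 2))  # ~1.99 and ~0.0
```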

In the 1960s, Sternberg (1969) formalized the additive factors method of analyzing reaction-time data to identify different information-processing stages. Specifically, a factorial experiment is conducted, and if two independent variables affect different processing stages, the two variables do not interact. If, on the other hand, there is a significant interaction, then the variables can be assumed to affect at least one processing stage in common. Note that the subtitle of Sternberg's article is "Extension of Donders' Method," which is a reference to the research reported by F. C. Donders 100 years earlier, in which he estimated the time for various processing stages by subtracting the reaction time obtained for a task that did not have an additional processing stage inserted from one that did. A limitation of Donders's (1868/1969) subtraction method is that the stages had to be assumed and could not be identified. Sternberg's extension, which provided a means for identifying the stages, did not occur until both the language of information processing and the factorial ANOVA were available as tools for analyzing reaction-time data. The additive factors method formed a cornerstone for much research in cognitive psychology over the following couple of decades, and its logic is still often applied to interpret empirical results, often without explicit acknowledgment.
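
As a minimal illustration of this logic (with hypothetical stage durations, not data from Sternberg's experiments), the sketch below builds the cell means of a 2 × 2 design from a stage-additive model and shows that the interaction contrast is zero; if the two factors affected a common stage non-additively, the contrast would differ from zero.

```python
# Minimal sketch of additive-factors logic: if factor A affects only stage 1
# and factor B only stage 2, their effects on total RT are additive and the
# 2 x 2 interaction contrast is zero. Durations (ms) are hypothetical.
stage1 = {"A_easy": 100, "A_hard": 140}  # hypothetical stage-1 durations
stage2 = {"B_easy": 200, "B_hard": 260}  # hypothetical stage-2 durations
base = 150                               # residual time from other stages

rt = {(a, b): base + stage1[a] + stage2[b] for a in stage1 for b in stage2}

# Interaction contrast: (hard,hard) - (hard,easy) - (easy,hard) + (easy,easy)
interaction = (rt[("A_hard", "B_hard")] - rt[("A_hard", "B_easy")]
               - rt[("A_easy", "B_hard")] + rt[("A_easy", "B_easy")])
print(rt)
print("interaction contrast =", interaction)  # 0 -> the factor effects are additive
```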

In psychology, mechanisms of how people make decisions in perceptual and cognitive tasks have often been proposed on the basis of sequential sampling to explain the patterns of obtained reaction time (RT) and percentage error. The study of such mechanisms addresses one of the fundamental questions in psychology, namely, how the central nervous system translates perception into action and how this translation depends on the interactions and expectations of individuals. Like signal detection theory, the theory of sequential sampling starts from the premise that perceptual and cognitive decisions are statistical in nature. It also follows the widely accepted assumption that sensory and cognitive systems are inherently noisy and time-varying. In practice, the study of a given sequential sampling model reduces to the study of a stochastic process, which represents the cumulative information available for the decision at a given time. A Wiener process forms the basis of Ratcliff's (1978) influential diffusion model of reaction times, in which noisy information accumulates continuously over time from a starting point toward response thresholds (Ratcliff and Smith, 2004). Recently, Srivastava et al. (2017) extended this diffusion model to make the Wiener process time dependent. More generally, Shalizi (2007, p. 126) makes the point, "The fully general theory of stochastic calculus considers integration with respect to a very broad range of stochastic processes, but the original case, which is still the most important, is integration with respect to the Wiener process."
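
The following sketch simulates such a diffusion process with a simple Euler approximation: evidence starts at z, drifts at rate v, is perturbed by Gaussian noise, and terminates when it crosses the lower (0) or upper (a) boundary. The parameter values and the function name diffusion_trial are illustrative choices, not Ratcliff's fitted parameters.

```python
# Minimal sketch of a diffusion (Wiener-process) decision model: evidence
# accumulates with drift v and Gaussian noise until it reaches an upper or
# lower response threshold. All parameter values are illustrative only.
import random

def diffusion_trial(v=0.3, a=1.0, z=0.5, dt=0.001, s=1.0, max_t=5.0):
    """Return (choice, decision_time) for one simulated trial.
    v: drift rate, a: threshold separation, z: starting point (0 < z < a)."""
    x, t = z, 0.0
    while 0.0 < x < a and t < max_t:
        x += v * dt + s * (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt
    return ("upper" if x >= a else "lower"), t

random.seed(1)
trials = [diffusion_trial() for _ in range(2000)]
p_upper = sum(choice == "upper" for choice, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(round(p_upper, 3), round(mean_rt, 3))  # choice probability and mean decision time
```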

In parallel to the use of the computer metaphor to understand the human mind, use of the laws of probability as metaphors of the mind has also had a profound influence on physiology and psychology (Gigerenzer, 1991). Gregory (1968) regarded seeing an object from an image as an inference from a hypothesis (see also the "unconscious inference" of Helmholtz, 1866/1925). According to Gregory (1980), in spite of differences between perception and science, the cognitive procedures carried out by perceptual neural processes are essentially the same as the predictive hypothesis-generating processes of science. In particular, Gregory emphasized the importance of, and the distinction between, bottom–up and top–down procedures in perception. For both normal perception and perceptual illusions, bottom–up procedures filter and structure the input, and top–down procedures draw on stored knowledge or assumptions that work downward to parcel signals and data into objects.

A recent development of the statistics metaphor is the Bayesian brain hypothesis, which has been used to model perception and decision making since the 1990s (Friston, 2012). Rao and Ballard (1997) described a hierarchical neural network model of visual recognition in which both input-driven bottom–up signals and expectation-driven top–down signals were used to predict the current recognition state. They showed that feedback from a higher layer to the input layer carries predictions of expected inputs, and the feedforward connections convey the errors in prediction, which are used to correct the estimation. Rao (2004) illustrated how the Bayesian model could be implemented in neural networks with feedforward and recurrent connections, showing that for both perception and decision-making tasks the resulting network exhibits direction selectivity and computes posterior error corrections.

We would like to highlight that, unlike the analogy to computers, the cybernetic view is essential for the Bayesian brain hypothesis. This reliance on cybernetics arises because the Bayesian brain hypothesis models the interaction between prior knowledge (top–down) and sensory evidence (bottom–up) quantitatively. Therefore, the success of Bayesian brain modeling is due to both the framework from cybernetics and the computation of probability. Seth (2015) explicitly acknowledges this relation in his article, The Cybernetic Bayesian Brain.
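
As a minimal sketch of what this quantitative interaction amounts to under common Gaussian assumptions, the code below combines a prior expectation and a noisy sensory observation by weighting each with its precision (inverse variance); the numbers and the function name are illustrative, not a reproduction of any specific Bayesian brain model.

```python
# Minimal sketch of precision-weighted combination of a prior (top-down
# expectation) and sensory evidence (bottom-up input) under Gaussian
# assumptions, the core arithmetic in many Bayesian brain accounts.
# All numerical values are illustrative.
def gaussian_posterior(prior_mean, prior_var, obs, obs_var):
    """Return (posterior mean, posterior variance) for a Gaussian prior and likelihood."""
    prior_prec, obs_prec = 1.0 / prior_var, 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs)
    return post_mean, post_var

# The expectation says the stimulus is near 0; a noisy sensory sample says 2.
print(gaussian_posterior(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=1.0))
# -> (1.0, 0.5): the estimate lies between prior and evidence, with higher precision.
```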

Meanwhile, the external information format (or representation) on which the Bayesian inferences and statistical reasoning operate has been investigated. For example, Gigerenzer and Hoffrage (1995) varied mathematically equivalent representation of information in percentage or frequency for various problems (e.g., the mammography problem, the cab problem) and found that frequency formats enabled participants’ inferences to conform to Bayes’ theorem without any teaching or instruction.
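
The sketch below shows the two mathematically equivalent formats: a direct application of Bayes' theorem to probabilities, and the same computation over natural frequencies in a hypothetical sample of 1,000 cases. The numerical values are illustrative, in the spirit of the mammography problem, rather than taken verbatim from Gigerenzer and Hoffrage (1995).

```python
# Minimal sketch of two mathematically equivalent information formats for a
# Bayesian inference problem. All numbers are illustrative only.

# Probability format: apply Bayes' theorem directly.
def posterior(base_rate, hit_rate, false_alarm_rate):
    """P(condition | positive test) from probabilities."""
    p_positive = base_rate * hit_rate + (1 - base_rate) * false_alarm_rate
    return base_rate * hit_rate / p_positive

print(round(posterior(0.01, 0.80, 0.096), 3))  # ~0.078

# Frequency format: the same computation over counts in 1,000 hypothetical cases.
n = 1000
sick_and_positive = 0.01 * 0.80 * n        # 8 people
healthy_and_positive = 0.99 * 0.096 * n    # ~95 people
print(round(sick_and_positive / (sick_and_positive + healthy_and_positive), 3))  # same ~0.078
```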

The Information Age of humans follows on periods that are called the Stone Age, Bronze Age, Iron Age, and Industrial Age. These labels indicate that periods of human history, and their advancements, are often characterized by the predominant tool materials or technologies of the time. The formation of the Information Age is inseparable from the interdisciplinary work of cybernetics, information theory, and statistical inference, which together generated a cognitive psychology adapted to the age. Each of these three pillars has been acknowledged separately by other authors, and contemporary scientific approaches to motor control and cognitive processing have been continuously inspired by cybernetics, information theory, and statistical inference.

Kline (2015 , p. 1) aptly summarized the importance of cybernetics in the founding of the information age:

During contentious meetings filled with brilliant arguments, rambling digressions, and disciplinary posturing, the cybernetics group shaped a language of feedback, control, and information that transformed the idiom of the biological and social sciences, sparked the invention of information technologies, and set the intellectual foundation for what came to be called the information age. The premise of cybernetics was a powerful analogy: that the principles of information-feedback machines, which explained how a thermostat controlled a household furnace, for example, could also explain how all living things—from the level of the cell to that of society—behaved as they interacted with their environment.

Pezzulo and Cisek (2016) extended the cybernetic principles of feedback and forward control for understanding cognition. In particular, they proposed hierarchical feedback control, indicating that adaptive action selection is influenced not only by prediction of immediate outcomes but also by prediction of new opportunities afforded by those outcomes. Scott (2016) highlighted the use of sensory feedback, after a person becomes familiar with performing a perceptual-motor task, to drive goal-directed motor control, reducing the role of top–down control by utilizing bottom–up sensory feedback.

Although less expansive than Kline (2015) , Fan (2014 , p. 2) emphasized the role of information theory. He stated, “Information theory has a long and distinguished role in cognitive science and neuroscience. The ‘cognitive revolution’ of the 1950s, as spearheaded by Miller (1956) and Broadbent (1958) , was highly influenced by information theory.” Likewise, Gigerenzer (1991 , p. 255) said, “Inferential statistics… provided a large part of the new concepts for mental processes that have fueled the so called cognitive revolution since the 1960s.” The separate treatment of the three pillars by various authors indicates that the pillars have distinct emphases, which are sometimes treated as in opposition ( Verschure, 2016 ). However, we have highlighted the convergent aspects of the three that were critical to the founding of cognitive psychology and its continued development to the present. An example of a contemporary approach utilizing information theory and statistics in computational and cognitive neuroscience is the study of activity of neuronal populations to understand how the brain processes information. Quian Quiroga and Panzeri (2009) reviewed methods based on statistical decoding and information theory, and concluded, “Decoding and information theory describe complementary aspects of knowledge extraction… A more systematic joint application of both methodologies may offer additional insights” (p. 183).

Leahey (1992) claimed that it is incorrect to say that there was a “cognitive revolution” in the 1950s, but he acknowledged “that information-processing psychology has had world-wide influence…” (p. 315). Likewise, Mandler (2007) pointed out that the term “cognitive revolution” for the changes that occurred in the 1950s is a misnomer because, although behaviorism was dominant in the United States, much psychology outside of the United States prior to that time could be classified as “cognitive.” However, he also said, after reviewing the 1956 meetings and ones in 1958, that “the 1950s surely were ready for the emergence of the new information-processing psychology—the new cognitive psychology” (p. 187). Ironically, although both Leahey and Mandler identified the change as being one of information processing, neither author acknowledged the implication of their analyses, which is that there was a transformation that is more aptly labeled the information-processing revolution rather than the cognitive revolution. The concepts provided by the advances in cybernetics, information theory, and inferential statistical theory together provided the language and methodological tools that enabled a significant leap forward in theorizing.

Wootton (2015) says of his book The Invention of Science: A New History of the Scientific Revolution, "We can state one of its core premises quite simply: a revolution in ideas requires a revolution in language" (p. 48). That language is what the concepts of communications systems engineering and inferential statistical theory provided for psychological research. Assessing the early influence of cybernetics and information theory on cognitive psychology, Broadbent (1959) stated, "It is in fact, the cybernetic approach above all others which has provided a clear language for discussing those various internal complexities which make the nervous system differ from a simple channel" (p. 113). He also identified a central feature of information theory as being crucial: The information conveyed by a stimulus is dependent on the stimuli that might have occurred but did not.

Posner (1986) , in his introduction to the Information Processing section of the Handbook of Perception and Human Performance , highlights more generally that the language of information processing affords many benefits to cognitive psychologists. He states, “Information processing language provides an alternative way of discussing internal mental operations intermediate between subjective experience and activity of neurons” (p. V-3). Later in the chapter, he elaborates:

The view of the nervous system in terms of information flow provided a common language in which both conscious and unconscious events might be discussed. Computers could be programmed to simulate exciting tasks heretofore only performed by human beings without requiring any discussion of consciousness. By analogies with computing systems, one could deal with the format (code) in which information is presented to the senses and the computations required to change code (recodings) and for storage and overt responses. These concepts brought a new unity to areas of psychology and a way of translating between psychological and physiological processes. The presence of the new information processing metaphor reawakened interest in internal mental processes beyond that of simple sensory and motor events and brought cognition back to a position of centrality in psychology (p. V-7).

Posner also notes, “The information processing approach has a long and respected relationship with applications of experimental psychology to industrial and military settings” (V-7). The reason, as emphasized years earlier by Wiener, is that it allows descriptions of humans to be integrated with those of the nonhuman parts of the system. Again, from our perspective, there was a revolution, but it was specifically an information-processing revolution.

Our main points can be summarized as follows:

(1) The information age originated in interdisciplinary research of an applied nature.
(2) Cybernetics and information theory played pivotal roles, with the former being more fundamental than the latter through its emphasis on a systems approach.
(3) These roles of communication systems theory were closely linked to developments in inferential statistical theory and applications.
(4) The three pillars of cybernetics, information theory, and inferential statistical theory undergirded the so-called cognitive revolution in psychology, which is more appropriately called the information-processing revolution.
(5) Those pillars, rooted in solving real-world problems, provided the language and methodological tools that enabled growth of the basic and applied fields of psychology.
(6) The experimental design and inferential statistics adopted in cognitive psychology, with their emphasis on rejecting null hypotheses, originated in the applied statistical analyses of the scientist Ronald Fisher and were influential because of their compatibility with scientific research conducted using controlled experiments.

Simon (1969) , in an article entitled “Designing Organizations for an Information-Rich World,” pointed out the problems created by the wealth of information:

Now, when we speak of an information-rich world, we may expect, analogically, that the wealth of information means a dearth of something else – a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it (manuscript pp. 6-7).

What Simon described is valid and even more evident in the current smart device and Internet era, where the amount of information is overwhelming. Phenomena as disparate as accidents caused by talking on a cellphone while driving ( Banducci et al., 2016 ) and difficulty assessing the credibility of information reported on the Internet or other media ( Chen et al., 2015 ) can be attributed to the overload. Moreover, the rapid rate at which information is encountered may have a negative impact on maintaining a prolonged focus of attention ( Microsoft Canada, 2015 ). Therefore, knowing how people process information and allocate attention is increasingly essential in the current explosion of information.

As noted, the predominant method of cognitive psychology in the information age has been that of drawing theoretical inferences from the statistical results of small-scale sets of data collected in controlled experimental settings (e.g., laboratory). The progress in psychology is tied to progress in statistics as well as technological developments that improve our ability to measure and analyze human behavior. Outside of the lab, with the continuing development of the Internet of Things (IoT), especially the implementation of AI, human physical lives are becoming increasingly interweaved into the cyber world. Ubiquitous records of human behavior, or “big data,” offer the potential to examine cognitive mechanisms at an escalated scale and level of ecological validity that cannot be achieved in the lab. This opportunity seems to require another significant transformation of cognitive psychology to use those data effectively to push forward understanding of the human mind and ensure seamless integration with cyber physical systems.

In a posthumous article, Brunswik (1956) noted that psychology should have the goal of broadening perception and learning by including interactions with a probabilistic environment. He insisted that psychology “must link behavior and environment statistically in bivariate or multivariate correlation rather than with the predominant emphasis on strict law…” (p. 158). As part of this proposal, Brunswik indicated a need to relate psychology more closely to disciplines that “use autocorrelation and intercorrelation, as theoretically stressed especially by Wiener (1949) , for probability prediction” (p. 160). With the ubiquitous data being collected within cyber physical systems, more extensive use of sophisticated correlational methods to extract the information embedded within the data will likely be necessary.

Using Stokes's (1997) two dimensions of scientific research (considerations of use; quest for fundamental understanding), the work of pioneers of the Information Age, including Wiener, Shannon, and Fisher, falls within Pasteur's Quadrant of use-inspired basic research. They were motivated by the need to solve immediate applied problems, and through their research they advanced humanity's fundamental understanding of nature. Likewise, in seizing the opportunity to use big data to inform cognitive psychology, psychologists need to increase their involvement in interdisciplinary research targeted at real-world problems. In seeking to mine the information from big data, a new age is likely to emerge for cognitive psychology and related disciplines.

Although we reviewed the history of the information-processing revolution and subsequent developments in this paper, our ultimate concern is with the future of cognitive psychology. So, it is fitting to end as we began with a quote from Wiener (1951 , p. 68):

To respect the future, we must be aware of the past.

Author Contributions

AX and RP contributed jointly and equally to the paper.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

  • Aborn M., Rubenstein H. (1952). Information theory and immediate recall. J. Exp. Psychol. 44 260–266. 10.1037/h0061660 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Acree M. C. (1978). Theories of Statistical Inference in Psychological Research: A Historico-Critical Study. Doctoral dissertation, Clark University, Worcester, MS. [ Google Scholar ]
  • Adams J. A. (1971). A closed-loop theory of motor learning. J. Mot. Behav. 3 111–150. 10.1080/00222895.1971.10734898 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Aldrich J. (2007). Information and economics in Fisher’s design of experiments. Int. Stat. Rev. 75 131–149. 10.1111/j.1751-5823.2007.00020.x [ CrossRef ] [ Google Scholar ]
  • Ambler B. A., Fisicaro S. A., Proctor R. W. (1977). Information reduction, internal transformations, and task difficulty. Bull. Psychon. Soc. 10 463–466. 10.3758/BF03337698 [ CrossRef ] [ Google Scholar ]
  • Ashby W. R. (1956). An Introduction to Cybernetics. London: Chapman & Hall; 10.5962/bhl.title.5851 [ CrossRef ] [ Google Scholar ]
  • Ashby W. R. (1960). Design for a Brain: The Origin of Adaptive Behavior. New York, NY: Wiley & Sons; 10.1037/11592-000 [ CrossRef ] [ Google Scholar ]
  • Atkinson R. C., Shiffrin R. M. (1968). “Human memory: a proposed system and its control,” in The Psychology of Learning and Motivation Vol. 2 eds Spence K. W., Spence J. T. (New York, NY: Academic Press; ), 89–195. [ Google Scholar ]
  • Attneave F. (1959). Applications of Information theory to Psychology: A Summary of Basic Concepts, Methods, and Results. New York, NY: Henry Holt. [ Google Scholar ]
  • Baddeley A. D., Hitch G. (1974). “Working memory,” in The Psychology of Learning and Motivation Vol. 8 ed. Bower G. A. (New York, NY: Academic press; ), 47–89. [ Google Scholar ]
  • Banducci S. E., Ward N., Gaspar J. G., Schab K. R., Crowell J. A., Kaczmarski H., et al. (2016). The effects of cell phone and text message conversations on simulated street crossing. Hum. Factors 58 150–162. 10.1177/0018720815609501 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Basso D., Olivetti Belardinelli M. (2006). The role of the feedforward paradigm in cognitive psychology. Cogn. Process. 7 73–88. 10.1007/s10339-006-0034-1 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Beer S. (1959). Cybernetics and Management. New York, NY: John Wiley & Sons. [ Google Scholar ]
  • Biology Online Dictionary (2018). Homeostasis. Available at: https://www.biology-online.org/dictionary/Homeostasis [ Google Scholar ]
  • Birmingham H. P., Taylor F. V. (1954). A design philosophy for man-machine control systems. Proc. Inst. Radio Eng. 42 1748–1758. 10.1109/JRPROC.1954.274775 [ CrossRef ] [ Google Scholar ]
  • Broadbent D. E. (1958). Perception and Communication. London: Pergamon Press; 10.1037/10037-000 [ CrossRef ] [ Google Scholar ]
  • Broadbent D. E. (1959). Information theory and older approaches in psychology. Acta Psychol. 15 111–115. 10.1016/S0001-6918(59)80030-5 [ CrossRef ] [ Google Scholar ]
  • Brosin H. W. (1953). “Information theory and clinical medicine (psychiatry),” in Current Trends in Information theory , ed. Patton R. A. (Pittsburgh, PA: University of Pittsburgh Press; ), 140–188. [ Google Scholar ]
  • Brunswik E. (1956). Historical and thematic relations of psychology to other sciences. Sci. Mon. 83 151–161. [ Google Scholar ]
  • Chapanis A., Schachter S. (1945). Depth Perception through a P-80 Canopy and through Distorted Glass. Memorandum Rep. TSEAL-69S-48N. Dayton, OH: Aero Medical Laboratory. [ Google Scholar ]
  • Chen Y., Conroy N. J., Rubin V. L. (2015). News in an online world: the need for an “automatic crap detector”. Proc. Assoc. Inform. Sci. Technol. 52 1–4. 10.1002/pra2.2015.145052010081 [ CrossRef ] [ Google Scholar ]
  • Cherry E. C. (1953). Some experiments on the recognition of speech, with one and with two ears. J. Acoust. Soc. Am. 25 975–979. 10.1121/1.1907229 [ CrossRef ] [ Google Scholar ]
  • Cherry E. C. (1957). On Human Communication. New York, NY: John Wiley. [ Google Scholar ]
  • Collins A. (2007). From H = log s n to conceptual framework: a short history of information. Hist. Psychol. 10 44–72. 10.1037/1093-4510.10.1.44 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Conway F., Siegelman J. (2005). Dark Hero of the Information Age. New York, NY: Basic Books. [ Google Scholar ]
  • Craik K. J. (1947). Theory of the human operator in control systems. I. The operator as an engineering system. Br. J. Psychol. 38 56–61. [ PubMed ] [ Google Scholar ]
  • Craik K. J. (1948). Theory of the human operator in control systems. II. Man as an element in a control system. Br. J. Psychol. 38 142–148. [ PubMed ] [ Google Scholar ]
  • Cruse H. (2009). Neural Networks as Cybernetic Systems , 3rd Edn. Bielefeld: Brain, Minds, and Media. [ Google Scholar ]
  • Dawkins R. (2010). Who is the Greatest Biologist Since Darwin? Why? Available at: https://www.edge.org/3rd_culture/leroi11/leroi11_index.html#dawkins [ Google Scholar ]
  • Deutsch J. A., Deutsch D. (1963). Attention: some theoretical considerations. Psychol. Rev. 70 80–90. 10.1037/h0039515 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dignath D., Pfister R., Eder A. B., Kiesel A., Kunde W. (2014). Representing the hyphen in action–effect associations: automatic acquisition and bidirectional retrieval of action–effect intervals. J. Exp. Psychol. Learn. Mem. Cogn. 40 1701–1712. 10.1037/xlm0000022 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Donders F. C. (1868/1969). “On the speed of mental processes,” in Attention and Performance II , ed. Koster W. G. (Amsterdam: North Holland Publishing Company; ), 412–431. [ Google Scholar ]
  • Edwards P. N. (1997). The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, MA: MIT Press. [ Google Scholar ]
  • Efron B. (1998). R. A. Fisher in the 21st century: invited paper presented at the 1996 R. A. Fisher lecture. Stat. Sci. 13 95–122. [ Google Scholar ]
  • Elias P. (1994). “The rise and fall of cybernetics in the US and USSR,” in The Legacy of Norbert Wiener: A Centennial Symposium , eds D. Jerison I, Singer M., Stroock D. W. (Providence, RI: American Mathematical Society; ), 21–30. [ Google Scholar ]
  • Ernst G. W., Newell A. (1969). GPS: A Case Study in Generality and Problem Solving. New York, NY: Academic Press. [ Google Scholar ]
  • Fan J. (2014). An information theory account of cognitive control. Front. Hum. Neurosci. 8 : 680 . 10.3389/fnhum.2014.00680 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Fano R. M. (1950). The information theory point of view in speech communication. J. Acoust. Soc. Am. 22 691–696. 10.1121/1.1906671 [ CrossRef ] [ Google Scholar ]
  • Fisher R. A. (1925). Statistical Methods for Research Workers. London: Oliver & Boyd. [ Google Scholar ]
  • Fisher R. A. (1935). The Design of Experiments. London: Oliver & Boyd. [ Google Scholar ]
  • Fisher R. A. (1937). The Design of Experiments , 2nd Edn. London: Oliver & Boyd. [ Google Scholar ]
  • Fisher R. A. (1947). “Development of the theory of experimental design,” in Proceedings of the International Statistical Conferences Vol. 3 Poznań, 434–439. [ Google Scholar ]
  • Fisher R. A. (1951). “Statistics,” in Scientific thought in the Twentieth century , ed. Heath A. E. (London: Watts; ). [ Google Scholar ]
  • Fisher R. A. (1956). Statistical Methods and Scientific Inference. Edinburgh: Oliver & Boyd. [ Google Scholar ]
  • Fisher R. A. (ed.). (1962). “The place of the design of experiments in the logic of scientific inference,” in Fisher: Collected Papers Relating to Statistical and Mathematical theory and Applications Vol. 110 (Paris: Centre National de la Recherche Scientifique), 528–532. [ Google Scholar ]
  • Fitts P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. J. Exp. Psychol. 47 381–391. 10.1037/h0055392 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Fitts P. M., Seeger C. M. (1953). S-R compatibility: spatial characteristics of stimulus and response codes. J. Exp. Psychol. 46 199–210. 10.1037/h0062827 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Flach J. M., Bennett K. B., Woods D. D., Jagacinski R. J. (2015). “Interface design: a control theoretic context for a triadic meaning processing approach,” in The Cambridge Handbook of Applied Perception Research , Vol. II , eds Hoffman R. R., Hancock P. A., Scerbo M. W., Parasuraman R., Szalma J. L. (New York, NY: Cambridge University Press; ), 647–668. [ Google Scholar ]
  • Friston K. (2012). The history of the future of the Bayesian brain. Neuroimage 62 1230–1233. 10.1016/j.neuroimage.2011.10.004 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Galison P. (1994). The ontology of the enemy: norbert Wiener and the cybernetic vision. Crit. Inq. 21 228–266. 10.1086/448747 [ CrossRef ] [ Google Scholar ]
  • Gardner H. E. (1985). The Mind’s New Science: A History of the Cognitive Revolution. New York, NY: Basic Books. [ Google Scholar ]
  • Garner W. R., Hake H. W. (1951). The amount of information in absolute judgments. Psychol. Rev. 58 446–459. 10.1037/h0054482 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gentner D., Grudin J. (1985). The evolution of mental metaphors in psychology: a 90-year retrospective. Am. Psychol. 40 181–192. 10.1037/0003-066X.40.2.181 [ CrossRef ] [ Google Scholar ]
  • Gigerenzer G. (1991). From tools to theories: a heuristic of discovery in Cognitive Psychology. Psychol. Rev. 98 254–267. 10.1037/0033-295X.98.2.254 [ CrossRef ] [ Google Scholar ]
  • Gigerenzer G., Hoffrage U. (1995). How to improve Bayesian reasoning without instruction: frequency formats. Psychol. Rev. 102 684–704. 10.1037/0033-295X.102.4.684 [ CrossRef ] [ Google Scholar ]
  • Gigerenzer G., Murray D. J. (1987). Cognition as Intuitive Statistics. Mahwah, NJ: Lawrence Erlbaum. [ Google Scholar ]
  • Glazebrook C. M., Kiernan D., Welsh T. N., Tremblay L. (2015). How one breaks Fitts’s Law and gets away with it: moving further and faster involves more efficient online control. Hum. Mov. Sci. 39 163–176. 10.1016/j.humov.2014.11.005 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Greenwald A. G. (1970). Sensory feedback mechanisms in performance control: with special reference to the ideo-motor mechanism. Psychol. Rev. 77 73–99. 10.1037/h0028689 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gregory R. L. (1968). Perceptual illusions and brain models. Proc. R. Soc. Lond. B Biol. Sci. 171 279–296. 10.1098/rspb.1968.0071 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gregory R. L. (1980). Perceptions as hypotheses. Philos. Trans. R. Soc. Lond. B 290 181–197. 10.1098/rstb.1980.0090 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hald A. (2008). A History of Parametric Statistical Inference from Bernoulli to Fisher. Copenhagen: Springer Science & Business Media, 1713–1935. [ Google Scholar ]
  • Heims S. J. (1991). The Cybernetics Group. Cambridge, MA: Massachusetts Institute of Technology. [ Google Scholar ]
  • Helmholtz H. (1866/1925). Handbuch der Physiologischen Optik [Treatise on Physiological Optics] Vol. 3 ed. Southall J. (Rochester, NY: Optical Society of America; ). [ Google Scholar ]
  • Herbort O., Butz M. V., Hoffmann J. (2005). “Towards an adaptive hierarchical anticipatory behavioral control system,” in From Reactive to Anticipatory Cognitive Embodied Systems: Papers from the AAAI Fall Symposium , eds Castelfranchi C., Balkenius C. Butz M. V. Ortony A. (Menlo Park, CA: AAAI Press; ), 83–90. [ Google Scholar ]
  • Hick W. E. (1951). Information theory and intelligence tests. Br. J. Math. Stat. Psychol. 4 157–164. 10.1111/j.2044-8317.1951.tb00317.x [ CrossRef ] [ Google Scholar ]
  • Hick W. E. (1952). On the rate of gain of information. Q. J. Exp. Psychol. 4 11–26. 10.1080/17470215208416600 [ CrossRef ] [ Google Scholar ]
  • Hommel B., Müsseler J., Aschersleben G., Prinz W. (2001). The theory of event coding (TEC): a framework for perception and action planning. Behav. Brain Sci. 24 849–878. 10.1017/S0140525X01000103 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hulbert A. (2018). Prodigies’ Progress: Parents and Superkids, then and Now. Cambridge, MA: Harvard Magazine, 46–51. [ Google Scholar ]
  • Hyman R. (1953). Stimulus information as a determinant of reaction time. J. Exp. Psychol. 53 188–196. 10.1037/h0056940 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Internet Hall of Fame (2016). Internet Hall of Fame Pioneer J.C.R. Licklider: Posthumous Recipient. Available at: https://www.internethalloffame.org/inductees/jcr-licklider [ Google Scholar ]
  • Jagacinski R. J., Flach J. M. (2003). Control theory for Humans: Quantitative Approaches to Modeling Performance. Mahwah, NJ: Lawrence Erlbaum. [ Google Scholar ]
  • James W. (1890). The Principles of Psychology. New York, NY: Dover. [ Google Scholar ]
  • Kahneman D. (1973). Attention and Effort. Englewood Cliffs, NJ: Prentice Hall. [ Google Scholar ]
  • Kay H. (1957). Information theory in the understanding of skills. Occup. Psychol. 31 218–224. [ Google Scholar ]
  • Kellen D., Klauer K. C., Singmann H. (2012). On the measurement of criterion noise in signal detection theory: the case of recognition memory. Psychol. Rev. 119 457–479. 10.1037/a0027727 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kline R. R. (2011). Cybernetics, automata studies, and the Dartmouth conference on artificial intelligence. IEEE Ann. His. Comput. 33 5–16. 10.1109/MAHC.2010.44 [ CrossRef ] [ Google Scholar ]
  • Kline R. R. (2015). The Cybernetics Moment: Or why We Call our Age the Information Age. Baltimore, MD: John Hopkins University Press. [ Google Scholar ]
  • Lachman R., Lachman J. L., Butterfield E. C. (1979). Cognitive Psychology and Information Processing: An Introduction. Hillsdale, NJ: Lawrence Erlbaum. [ Google Scholar ]
  • Laird J., Newell A., Rosenbloom P. (1987). SOAR: an architecture for general intelligence. Artif. Intell. 33 1–64. 10.1016/0004-3702(87)90050-6 [ CrossRef ] [ Google Scholar ]
  • Leahey T. H. (1992). The mythical revolutions of American psychology. Am. Psychol. 47 308–318. 10.1037/0003-066X.47.2.308 [ CrossRef ] [ Google Scholar ]
  • Lehmann E. L. (1993). The Fisher, Neyman-Pearson theories of testing hypotheses: one theory or two? J. Am. Stat. Assoc. 88 1242–1249. 10.1080/01621459.1993.10476404 [ CrossRef ] [ Google Scholar ]
  • Lehmann E. L. (2011). Fisher, Neyman, and the Creation of Classical Statistics. New York, NY: Springer Science & Business Media; 10.1007/978-1-4419-9500-1 [ CrossRef ] [ Google Scholar ]
  • Licklider J. R. (1950). The intelligibility of amplitude-dichotomized, time-quantized speech waves. J. Acoust. Soc. Am. 22 820–823. 10.1121/1.1906695 [ CrossRef ] [ Google Scholar ]
  • Luce R. D. (2003). Whatever happened to information theory in psychology? Rev. Gen. Psychol. 7 183–188. 10.1037/1089-2680.7.2.183 [ CrossRef ] [ Google Scholar ]
  • Ly A., Marsman M., Verhagen J., Grasman R. P., Wagenmakers E. J. (2017). A tutorial on Fisher information. J. Math. Psychol. 80 40–55. 10.1016/j.jmp.2017.05.006 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • MacKay D. M. (1956). Towards an information-flow model of human behaviour. Br. J. Psychol. 47 30–43. 10.1111/j.2044-8295.1956.tb00559.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Mandler G. (2007). A History of Modern Experimental Psychology: From James and Wundt to Cognitive Science. Cambridge, MA: MIT Press. [ Google Scholar ]
  • Maxwell S. E., Lau M. Y., Howard G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? Am. Psychol. 70 487–498. 10.1037/a0039400 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • McCarthy J., Minsky M. L., Rochester N., Shannon C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31 1955. AI Mag. 27 12–14. [ Google Scholar ]
  • McCulloch W., Pitts W. (1943). A logical calculus of ideas immanent in nervous activity. Bull. Math. Biophys. 5 115–133. 10.1007/BF02478259 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Meyer L. B. (1957). Meaning in music and information theory. J. Aesthet. Art Crit. 15 412–424. 10.1016/j.plrev.2013.05.008 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Microsoft Canada (2015). Attention Spans Research Report. Available at: https://www.scribd.com/document/317442018/microsoft-attention-spans-research-report-pdf [ Google Scholar ]
  • Miller G. A. (1953). What is information measurement? Am. Psychol. 8 3–11. 10.1037/h0057808 [ CrossRef ] [ Google Scholar ]
  • Miller G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol. Rev. 63 81–97. 10.1037/h0043158 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Miller G. A. (2003). The cognitive revolution: a historical perspective. Trends Cogn. Sci. 7 141–144. 10.1016/S1364-6613(03)00029-9 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Miller G. A., Galanter E., Pribram K. H. (1960). Plans and the Structure of Behavior. New York, NY: Holt; 10.1037/10039-000 [ CrossRef ] [ Google Scholar ]
  • Minsky M. (ed.). (1968). Semantic Information Processing. Cambridge, MA: The MIT Press. [ Google Scholar ]
  • Montagnini L. (2017a). Harmonies of Disorder: Norbert Wiener: A Mathematician-Philosopher of our Time. Roma: Springer. [ PubMed ] [ Google Scholar ]
  • Montagnini L. (2017b). Interdisciplinarity in Norbert Wiener, a mathematician-philosopher of our time. Biophys. Chem. 229 173–180. 10.1016/j.bpc.2017.06.009 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Moor J. (2006). The Dartmouth College artificial intelligence conference: the next fifty years. AI Mag. 27 87–91. [ Google Scholar ]
  • Moray N. (1959). Attention in dichotic listening: affective cues and the influence of instructions. Q. J. Exp. Psychol. 11 56–60. 10.1080/17470215908416289 [ CrossRef ] [ Google Scholar ]
  • Nahin P. J. (2013). The Logician and the Engineer: How George Boole and Claude Shannon Created the Information Age. Princeton, NJ: Princeton University Press. [ Google Scholar ]
  • Neisser U. (1967). Cognitive Psychology. New York, NY: Apple Century Crofts. [ Google Scholar ]
  • Nelson N., Rosenthal R., Rosnow R. L. (1986). Interpretation of significance levels and effect sizes by psychological researchers. Am. Psychol. 41 1299–1301. 10.1037/0003-066X.41.11.1299 [ CrossRef ] [ Google Scholar ]
  • Newell A. (1955). “The chess machine: an example of dealing with a complex task by adaptation,” in Proceedings of the March 1-3 1955 Western Joint Computer Conference , (New York, NY: ACM; ), 101–108. 10.1145/1455292.1455312 [ CrossRef ] [ Google Scholar ]
  • Newell A., Simon H. A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall. [ Google Scholar ]
  • Neyman J. (1977). Frequentist probability and frequentist statistics. Synthese 36 97–131. 10.1007/BF00485695 [ CrossRef ] [ Google Scholar ]
  • Neyman J., Pearson E. S. (1928). On the use and interpretation of certain test criteria for purposes of statistical inference: Part I. Biometrika 20A , 175–240. [ Google Scholar ]
  • Neyman J., Pearson E. S. (1933). IX. On the problem of the most efficient tests of statistical hypotheses. Philos. Trans. R. Soc. Lond. A 231 289–337. 10.1098/rsta.1933.0009 [ CrossRef ] [ Google Scholar ]
  • O’Regan G. (2012). A Brief History of Computing , 2nd Edn. London: Springer; 10.1007/978-1-4471-2359-0 [ CrossRef ] [ Google Scholar ]
  • Parolini G. (2015). The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919–1933. J. His. Biol. 48 301–335. 10.1007/s10739-014-9394-z [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Peterson W. W. T. G., Birdsall T., Fox W. (1954). The theory of signal detectability. Trans. IRE Prof. Group Inform. Theory 4 171–212. 10.1109/TIT.1954.1057460 [ CrossRef ] [ Google Scholar ]
  • Pezzulo G., Cisek P. (2016). Navigating the affordance landscape: feedback control as a process model of behavior and cognition. Trends Cogn. Sci. 20 414–424. 10.1016/j.tics.2016.03.013 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Piccinini G. (2004). The First computational theory of mind and brain: a close look at McCulloch and Pitts’s “Logical calculus of ideas immanent in nervous activity”. Synthese 141 175–215. 10.1023/B:SYNT.0000043018.52445.3e [ CrossRef ] [ Google Scholar ]
  • Posner M. I. (1978). Chronometric Explorations of Mind. Hillsdale, NJ: Lawrence Erlbaum. [ Google Scholar ]
  • Posner M. I. (1986). "Overview," in Handbook of Perception and Human Performance: Cognitive Processes and Performance Vol. 2 eds Boff K. R., Kaufman L. I., Thomas J. P. (New York, NY: John Wiley; ), V.1-V.10. [ Google Scholar ]
  • Pribram K. H. (1976). “Problems concerning the structure of consciousness,” in Consciousness and the Brain: A Scientific and Philosophical Inquiry , eds Globus G. G., Maxwell G., Savodnik I. (New York, NY: Plenum; ), 297–313. [ Google Scholar ]
  • Proctor R. W., Schneider D. W. (2018). Hick’s law for choice reaction time: a review. Q. J. Exp. Psychol. 71 1281–1299. 10.1080/17470218.2017.1322622 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Proctor R. W., Vu K. P. L. (2006). Stimulus-Response Compatibility Principles: Data, theory, and Application. Boca Raton, FL: CRC Press. [ Google Scholar ]
  • Proctor R. W., Vu K. P. L. (2016). Principles for designing interfaces compatible with human information processing. Int. J. Hum. Comput. Interact. 32 2–22. 10.1080/10447318.2016.1105009 [ CrossRef ] [ Google Scholar ]
  • Quastler H. (ed.). (1953). Information theory in Biology. Urbana, IL: University of Illinois Press. [ Google Scholar ]
  • Quian Quiroga R., Panzeri E. (2009). Extracting information from neuronal populations: information theory and decoding approaches. Nat. Rev. Neurosci. 10 173–185. 10.1038/nrn2578 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Rao R. P. (2004). Bayesian computation in recurrent neural circuits. Neural Comput. 16 1–38. 10.1162/08997660460733976 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Rao R. P., Ballard D. H. (1997). Dynamic model of visual recognition predicts neural response properties in the visual cortex. Neural Comput. 9 721–763. 10.1162/neco.1997.9.4.721 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ratcliff R. (1978). A theory of memory retrieval. Psychol. Rev. 85 59–108. 10.1037/0033-295X.85.2.59 [ CrossRef ] [ Google Scholar ]
  • Ratcliff R., Smith P. L. (2004). A comparison of sequential sampling models for two-choice reaction time. Psychol. Rev. 111 333–367. 10.1037/0033-295X.111.2.333 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Rosenthal R., Rubin D. B. (1982). A simple, general purpose display of magnitude of experimental effect. J. Educ. Psychol. 74 166–169. 10.1037/0022-0663.74.2.166 [ CrossRef ] [ Google Scholar ]
  • Rucci A. J., Tweney R. D. (1980). Analysis of variance and the “second discipline” of scientific psychology: a historical account. Psychol. Bull. 87 166–184. 10.1037/0033-2909.87.1.166 [ CrossRef ] [ Google Scholar ]
  • Russell E. J. (1966). A History of Agricultural Science in Great Britain. London: George Allen and Unwin, 1620–1954. [ Google Scholar ]
  • Schachter S., Chapanis A. (1945). Distortion in Glass and Its Effect on Depth Perception. Memorandum Report No. TSEAL-695-48B. Dayton, OH: Aero Medical Laboratory. [ Google Scholar ]
  • Schmidt R. A. (1975). A schema theory of discrete motor skill learning. Psychol. Rev. 82 225–260. 10.1037/h0076770 [ CrossRef ] [ Google Scholar ]
  • Scott S. H. (2016). A functional taxonomy of bottom-up sensory feedback processing for motor actions. Trends Neurosci. 39 512–526. 10.1016/j.tins.2016.06.001 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Seidenfeld T. (1992). “R. A. Fisher on the design of experiments and statistical estimation,” in The Founders of Evolutionary Genetics , ed. Sarkar S. (Dordrecht: Springer; ), 23–36. [ Google Scholar ]
  • Selfridge O. G. (1959). “Pandemonium: a paradigm for learning,” in Proceedings of the Symposium on Mechanisation of thought Processes , (London: Her Majesty’s Stationery Office; ), 511–529. [ Google Scholar ]
  • Seth A. K. (2015). “The cybernetic Bayesian brain - From interoceptive inference to sensorimotor contingencies,” in Open MIND: 35(T) , eds Metzinger T., Windt J. M. (Frankfurt: MIND Group; ). [ Google Scholar ]
  • Shalizi C. (2007). Advanced Probability II or Almost None of the theory of Stochastic Processes. Available at: http://www.stat.cmu.edu/cshalizi/754/notes/all.pdf [ Google Scholar ]
  • Shannon C. E. (1945). A Mathematical theory of Cryptography. Technical Report Memoranda 45-110-02. Murray Hill, NJ: Bell Labs. [ Google Scholar ]
  • Shannon C. E. (1948a). A mathematical theory of communication. Bell Syst. Tech. J. 27 379–423. 10.1002/j.1538-7305.1948.tb01338.x [ CrossRef ] [ Google Scholar ]
  • Shannon C. E. (1948b). Letter to Norbert Wiener, October 13. In box 5-85 Norbert Wiener Papers. Cambridge, MA: MIT Archives. [ Google Scholar ]
  • Shannon C. E., Weaver W. (1949). The Mathematical theory of Communication. Urbana, IL: University of Illinois Press. [ Google Scholar ]
  • Shiffman D. (2012). The Nature of Code. Available at: http://natureofcode.com/book/ [ Google Scholar ]
  • Shiffrin R. M., Schneider W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychol. Rev. 84 127–190. 10.1037/0033-295X.84.2.127 [ CrossRef ] [ Google Scholar ]
  • Simon H. A. (1969). Designing organizations for an information-rich world. Int. Libr. Crit. Writ. Econ. 70 187–202. [ Google Scholar ]
  • Simon H. A. (1997). Allen Newell (1927-1992). Biographical Memoir. Washington DC: National Academics Press. [ Google Scholar ]
  • Srivastava V., Feng S. F., Cohen J. D., Leonard N. E., Shenhav A. (2017). A martingale analysis of first passage times of time-dependent Wiener diffusion models. J. Math. Psychol. 77 94–110. 10.1016/j.jmp.2016.10.001 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sternberg S. (1969). The discovery of processing stages: extensions of Donders’ method. Acta Psychol. 30 276–315. 10.1016/0001-6918(69)90055-9 [ CrossRef ] [ Google Scholar ]
  • Stokes D. E. (1997). Pasteur’s Quadrant – Basic Science and Technological Innovation. Washington DC: Brookings Institution Press. [ Google Scholar ]
  • Student (1908). The probable error of a mean. Biometrika 6 1–25. [ Google Scholar ]
  • Swets J. A., Tanner W. P., Jr., Birdsall T. G. (1961). Decision processes in perception. Psychol. Rev. 68 301–340. 10.1037/h0040547 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Taylor F. V. (1949). Review of Cybernetics (or control and communication in the animal and the machine). Psychol. Bull. 46 236–237. 10.1037/h0051026 [ CrossRef ] [ Google Scholar ]
  • Treisman A. M. (1960). Contextual cues in selective listening. Q. J. Exp. Psychol. 12 242–248. 10.1080/17470216008416732 [ CrossRef ] [ Google Scholar ]
  • Treisman A. M., Gelade G. (1980). A feature-integration theory of attention. Cogn. Psychol. 12 97–136. 10.1016/0010-0285(80)90005-5 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Tulving E., Thomson D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychol. Rev. 80 352–373. 10.1037/h0020071 [ CrossRef ] [ Google Scholar ]
  • Turing A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proc. Lond. Math. Soc. 2 230–265. 10.1112/plms/s2-42.1.230 [ CrossRef ] [ Google Scholar ]
  • Van Meter D., Middleton D. (1954). Modern statistical approaches to reception in communication theory. Trans. IRE Prof. Group Inform. Theory 4 119–145. 10.1109/TIT.1954.1057471 [ CrossRef ] [ Google Scholar ]
  • Velasco M. A., Clemotte A., Raya R., Ceres R., Rocon E. (2017). Human-computer interaction for users with cerebral palsy based on head orientation. Can cursor’s movement be modeled by Fitts’s law? Int. J. Hum. Comput. Stud. 106 1–9. 10.1016/j.ijhcs.2017.05.002 [ CrossRef ] [ Google Scholar ]
  • Verschure P. F. M. J. (2016). “Consciousness in action: the unconscious parallel present optimized by the conscious sequential projected future,” in The Pragmatic Turn: Toward Action-Oriented Views in Cognitive Science , eds Engel A. K., Friston K. J., Kragic D. (Cambridge, MA: MIT Press; ). [ Google Scholar ]
  • Wald A. (1950). Statistical Decision Functions. New York, NY: John Wiley. [ Google Scholar ]
  • Waterson P. (2011). World War II and other historical influences on the formation of the Ergonomics Research Society. Ergonomics 54 1111–1129. 10.1080/00140139.2011.622796 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wiener N. (1921). The average of an analytic functional and the Brownian movement. Proc. Natl. Acad. Sci. U.S.A. 7 294–298. 10.1073/pnas.7.10.294 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wiener N. (1948a). Cybernetics. Sci. Am. 179 14–19. 10.1038/scientificamerican1148-14 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wiener N. (1948b). Cybernetics or Control and Communication in the Animal and the Machine. New York, NY: John Wiley. [ Google Scholar ]
  • Wiener N. (1949). Extrapolation, Interpolation, and Smoothing of Stationary Time Series, with Engineering Applications. Cambridge, MA: Technology Press of the Massachusetts Institute of Technology. [ Google Scholar ]
  • Wiener N. (1951). Homeostasis in the individual and society. J. Franklin Inst. 251 65–68. 10.1016/0016-0032(51)90897-6 [ CrossRef ] [ Google Scholar ]
  • Wiener N. (1952). Cybernetics or Control and Communication in the Animal and the Machine. Cambridge, MA: The MIT Press. [ Google Scholar ]
  • Wiener N. (1961). Cybernetics or Control and Communication in the Animal and the Machine , 2nd Edn. Cambridge MA: MIT Press; 10.1037/13140-000 [ CrossRef ] [ Google Scholar ]
  • Wixted J. T. (2014). Signal Detection theory. Hoboken, NJ: Wiley; 10.1002/9781118445112.stat06743 [ CrossRef ] [ Google Scholar ]
  • Wootton D. (2015). The Invention of Science: A new History of the Scientific Revolution. New York, NY: Harper. [ Google Scholar ]
  • Yates F. (1964). Sir Ronald Fisher and the design of experiments. Biometrics 20 307–321. 10.1080/03639045.2017.1291672 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Yates F., Mather K. (1963). Ronald Aylmer fisher. Biogr. Mem. Fellows R. Soc. Lond. 9 91–120. 10.1098/rsbm.1963.0006 [ CrossRef ] [ Google Scholar ]


Artificial Intelligence and Information Processing: A Systematic Literature Review


1. Introduction

2. Data Collection and Methods

3. Proposed Design

  • Step 1: Keyword definition and data collection
  • Step 2: Metadata statistical analysis
  • Step 3: Author analysis
  • Step 4: Affiliation analysis
  • Step 5: Keyword analysis
  • Step 6: Research areas and applications analysis (a minimal code sketch of this workflow follows the list)
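
The six steps above describe a standard bibliometric workflow: collect records matching the search keywords, then tabulate publication counts by year, author, affiliation, and keyword. The sketch below is purely illustrative and is not the authors' actual pipeline; the file name and column names ("Year", "Authors", "Affiliations", "Author Keywords") are hypothetical stand-ins for whatever fields a bibliographic database export provides.

```python
# Illustrative sketch of Steps 2-5 only (not the review's actual pipeline).
# Assumes a hypothetical CSV export of bibliographic records with columns
# "Year", "Authors", "Affiliations", and "Author Keywords" (semicolon-separated).
import pandas as pd

records = pd.read_csv("ai_information_processing_records.csv")  # hypothetical file

# Step 2: metadata statistics - publications per year
pubs_per_year = records["Year"].value_counts().sort_index()

# Steps 3-5: split multi-valued fields and count the most frequent values
def top_counts(series, sep=";", n=10):
    """Split a semicolon-separated column and return the n most frequent values."""
    exploded = series.dropna().str.split(sep).explode().str.strip()
    return exploded.value_counts().head(n)

top_authors = top_counts(records["Authors"])
top_affiliations = top_counts(records["Affiliations"])
top_keywords = top_counts(records["Author Keywords"])

print(pubs_per_year)
print(top_authors)
```

Tables of the kind reported below (top authors, countries, and affiliations) can be produced by exactly this sort of tabulation, however the specific tooling differs.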

4. Analysis Results

4.1. Keyword Definition and Data Collection
4.2. Metadata Statistical Analysis
4.3. Author Analysis
4.4. Affiliation Analysis
4.5. Keyword Analysis
4.6. Research Areas and Applications Analysis

5. Discussion of Gaps and Opportunities

6. Conclusions

Author Contributions, Data Availability Statement, Conflicts of Interest



No. | Author | Country | Organization | Publication Quantity | Total Citations
1 | Xu, Z.S. | China | Sichuan University | 103 | 5177
2 | Pedrycz, W. | Canada | University of Alberta | 103 | 4621
3 | Herrera-Viedma, E. | Spain | University of Granada | 40 | 2344
4 | Martinez, L. | Spain | University of Jaén | 39 | 2208
5 | Fujita, H. | Japan | Iwate Prefectural University | 34 | 1759
6 | Garg, H. | India | Thapar Inst of Engn and Technol | 32 | 1543
7 | Liu, P.D. | China | Shandong University of Finance and Econ | 32 | 1324
8 | Herrera, F. | Spain | University of Granada | 31 | 1976
9 | Dong, Y.C. | China | Sichuan University | 29 | 2068
10 | Jiao, L.C. | China | Xidian University | 28 | 1495
No. | Country | Publication Number | No. | Country | Publication Number
1 | China | 5631 | 6 | Australia | 557
2 | USA | 1792 | 7 | France | 538
3 | Spain | 1032 | 8 | Canada | 522
4 | India | 1008 | 9 | Italy | 499
5 | England | 851 | 10 | Germany | 481
No. | China | USA | Spain | India | England
1 | Chinese Academy of Sciences | University of California System | University of Granada | Indian Institute of Technology System | University of London
2 | Sichuan University | University of Texas System | Universitat Politecnica de Valencia | National Institute of Technology | Imperial College London
3 | Xidian University | University System of Georgia | Universidad de Jaén | Vellore Institute of Technology | University of Manchester
4 | Zhejiang University | State University System of Florida | Universidad Politecnica de Madrid | Thapar Institute of Engineering and Technology | De Montfort University
5 | University of Electronic Science and Technology of China | Georgia Institute of Technology | University of Seville | Anna University | University of Oxford
6 | Xi’an Jiaotong University | State University of New York System | University of the Basque Country | Indian Statistical Institute | University College London
7 | Tsinghua University | University of Illinois System | Universidad de Malaga | VIT Vellore | University of Nottingham
8 | Harbin Institute of Technology | Carnegie Mellon University | Universidad Carlos III de Madrid | Anna University Chennai | University of Sheffield
9 | Northwestern Polytechnical University | Massachusetts Institute of Technology (MIT) | Universitat Politecnica de Catalunya | Indian Statistical Institute Kolkata | University of Granada
10 | Huazhong University of Science and Technology | Pennsylvania Commonwealth System of Higher Education | Universitat d’Alacant | Shanmugha Arts, Science, Technology and Research Academy | University of Southampton
No. | Australia | France | Canada | Italy | Germany
1 | University of Technology Sydney | Centre National de la Recherche Scientifique (CNRS) | University of Alberta | Consiglio Nazionale delle Ricerche (CNR) | Technical University of Munich
2 | University of Sydney | UDICE—French Research Universities | Concordia University—Canada | University of Salerno | Helmholtz Association
3 | University of New South Wales Sydney | Université Paris-Saclay | Université de Montreal | Sapienza University Rome | University of Erlangen Nuremberg
4 | Queensland University of Technology (QUT) | Université de Toulouse | University of Waterloo | University of Trento | Max Planck Society
5 | Monash University | Institut Mines-Télécom (IMT) | University of British Columbia | University of Naples Federico II | Ruprecht Karls University Heidelberg
6 | Commonwealth Scientific and Industrial Research Organisation (CSIRO) | Sorbonne Université | University of Toronto | University of Padua | Karlsruhe Institute of Technology
7 | University of Queensland | INRAE | University of Calgary | University of Bologna | Technical University of Darmstadt
8 | Deakin University | Université de Rennes | Toronto Metropolitan University | University of Pisa | Leipzig University
9 | Chinese Academy of Sciences | Université de Lorraine | University of Quebec | Polytechnic University of Milan | Ulm University
10 | University of Adelaide | Centre National de la Recherche Scientifique (CNRS) | Western University (University of Western Ontario) | Polytechnic University of Turin | University of Munich
Research Areas | Applications
Engineering | Information process; Systems; Decision support; Computer engineering; Knowledge management; Control systems; Manufacturing; Sensors
Operations Research and Management Science | Decision making; Fuzzy arithmetic; Project management; Classification; Quality control; Big data; Supply chain management
Automation Control Systems | Process control; Fuzzy approach; Robotics; Environmental monitoring; Manufacturing; Smart sensors; Energy management; Industry 4.0
Neurosciences and Neurology | Cognitive architecture; Neurorobots; Electroencephalography; Emotion
Imaging Science and Photographic Technology | Medical imaging; Image classification; Geophysical imaging
Mathematics | Systems modeling; Data science; Optimization; Soft sensors; Fuzzy sets
Telecommunications | Mobile computing; Internet of Things; Wireless sensor networks
Robotics | Autonomous; Automatization; Service robotics
Chemistry | Chemical process monitoring; Operational optimization
Instruments and Instrumentation | Fault detection; Autoencoders

Lin, K.-Y.; Chang, K.-H. Artificial Intelligence and Information Processing: A Systematic Literature Review. Mathematics 2023, 11, 2420. https://doi.org/10.3390/math11112420





An Introduction to Cognitive Information Processing Theory, Research, and Practice

Sampson, James P.; Osborn, Debra S.; Bullock-Yowell, Emily; Lenz, Janet G.; Peterson, Gary W.; Reardon, Robert C.; Dozier, V. Casey; Leierer, Stephen J.; Hayden, Seth C. W.; Saunders, Denise E. (authors)

Technical report

The primary purpose of this paper is to introduce essential elements of cognitive information processing (CIP) theory, research, and practice as they existed at the time of this writing. The introduction that follows describes the nature of career choices and career interventions, and the integration of theory, research, and practice. After the introduction, the paper continues with three main sections that include CIP theory related to vocational behavior, research related to vocational behavior and career intervention, and CIP theory related to career interventions. The first main section describes CIP theory, including the evolution of CIP theory, the nature of career problems, theoretical assumptions, the pyramid of information processing domains, the CASVE Cycle, and the use of the pyramid and CASVE cycle. The second main section describes CIP theory-based research in examining vocational behavior and establishing evidence-based practice for CIP theory-based career interventions. The third main section describes CIP theory related to career intervention practice, including theoretical assumptions, readiness for career decision making, readiness for career intervention, the differentiated service delivery model, and critical ingredients of career interventions. The paper concludes with regularly updated sources of information on CIP theory.

Cognitive Information Processing Theory, Cognitive Information Processing Research, Cognitive Information Processing Practice, Vocational Behavior, Career Decision Making, Pyramid of Information Processing Domains, CASVE Cycle, Readiness for Career Decision Making, Differentiated Career Service Delivery, Career Intervention, Career Counseling, Career Services, Client Use of Career Theory, Social Justice, Critical Ingredients of Career Intervention, Response to Intervention

https://doi.org/10.33009/fsu.1593091156


Creative Commons Attribution-NoDerivatives (CC BY-ND 4.0)

  • Open access
  • Published: 09 March 2020

Rubrics to assess critical thinking and information processing in undergraduate STEM courses

  • Gil Reynders 1 , 2 ,
  • Juliette Lantz 3 ,
  • Suzanne M. Ruder 2 ,
  • Courtney L. Stanford 4 &
  • Renée S. Cole   ORCID: orcid.org/0000-0002-2807-1500 1  

International Journal of STEM Education, volume 7, Article number: 9 (2020)


Process skills such as critical thinking and information processing are commonly stated outcomes for STEM undergraduate degree programs, but instructors often do not explicitly assess these skills in their courses. Students are more likely to develop these crucial skills if there is constructive alignment between an instructor’s intended learning outcomes, the tasks that the instructor and students perform, and the assessment tools that the instructor uses. Rubrics for each process skill can enhance this alignment by creating a shared understanding of process skills between instructors and students. Rubrics can also enable instructors to reflect on their teaching practices with regard to developing their students’ process skills and facilitating feedback to students to identify areas for improvement.

Here, we provide rubrics that can be used to assess critical thinking and information processing in STEM undergraduate classrooms and to provide students with formative feedback. As part of the Enhancing Learning by Improving Process Skills in STEM (ELIPSS) Project, rubrics were developed to assess these two skills in STEM undergraduate students’ written work. The rubrics were implemented in multiple STEM disciplines, class sizes, course levels, and institution types to ensure they were practical for everyday classroom use. Instructors reported via surveys that the rubrics supported assessment of students’ written work in multiple STEM learning environments. Graduate teaching assistants also indicated that they could effectively use the rubrics to assess student work and that the rubrics clarified the instructor’s expectations for how they should assess students. Students reported that they understood the content of the rubrics and could use the feedback provided by the rubric to change their future performance.

The ELIPSS rubrics allowed instructors to explicitly assess the critical thinking and information processing skills that they wanted their students to develop in their courses. The instructors were able to clarify their expectations for both their teaching assistants and students and provide consistent feedback to students about their performance. Supporting the adoption of active-learning pedagogies should also include changes to assessment strategies to measure the skills that are developed as students engage in more meaningful learning experiences. Tools such as the ELIPSS rubrics provide a resource for instructors to better align assessments with intended learning outcomes.

Introduction

Why assess process skills?

Process skills, also known as professional skills (ABET Engineering Accreditation Commission, 2012 ), transferable skills (Danczak et al., 2017 ), or cognitive competencies (National Research Council, 2012 ), are commonly cited as critical for students to develop during their undergraduate education (ABET Engineering Accreditation Commission, 2012 ; American Chemical Society Committee on Professional Training, 2015 ; National Research Council, 2012 ; Singer et al., 2012 ; The Royal Society, 2014 ). Process skills such as problem-solving, critical thinking, information processing, and communication are widely applicable to many academic disciplines and careers, and they are receiving increased attention in undergraduate curricula (ABET Engineering Accreditation Commission, 2012 ; American Chemical Society Committee on Professional Training, 2015 ) and workplace hiring decisions (Gray & Koncz, 2018 ; Pearl et al., 2019 ). Recent reports from multiple countries (Brewer & Smith, 2011 ; National Research Council, 2012 ; Singer et al., 2012 ; The Royal Society, 2014 ) indicate that these skills are emphasized in multiple undergraduate academic disciplines, and annual polls of about 200 hiring managers indicate that employers may place more importance on these skills than on applicants’ content knowledge when making hiring decisions (Deloitte Access Economics, 2014 ; Gray & Koncz, 2018 ). The assessment of process skills can provide a benchmark for achievement at the end of an undergraduate program and act as an indicator of student readiness to enter the workforce. Assessing these skills may also enable instructors and researchers to more fully understand the impact of active learning pedagogies on students.

A recent meta-analysis of 225 studies by Freeman et al. ( 2014 ) showed that students in active learning environments may achieve higher content learning gains than students in traditional lectures in multiple STEM fields when comparing scores on equivalent examinations. Active learning environments can have many different attributes, but they are commonly characterized by students “physically manipulating objects, producing new ideas, and discussing ideas with others” (Rau et al., 2017 ) in contrast to students sitting and listening to a lecture. Examples of active learning pedagogies include POGIL (Process Oriented Guided Inquiry Learning) (Moog & Spencer, 2008 ; Simonson, 2019 ) and PLTL (Peer-led Team Learning) (Gafney & Varma-Nelson, 2008 ; Gosser et al., 2001 ) in which students work in groups to complete activities with varying levels of guidance from an instructor. Despite the clear content learning gains that students can achieve from active learning environments (Freeman et al., 2014 ), the non-content-gains (including improvements in process skills) in these learning environments have not been explored to a significant degree. Active learning pedagogies such as POGIL and PLTL place an emphasis on students developing non-content skills in addition to content learning gains, but typically only the content learning is assessed on quizzes and exams, and process skills are not often explicitly assessed (National Research Council, 2012 ). In order to fully understand the effects of active learning pedagogies on all aspects of an undergraduate course, evidence-based tools must be used to assess students’ process skill development. The goal of this work was to develop resources that could enable instructors to explicitly assess process skills in STEM undergraduate classrooms in order to provide feedback to themselves and their students about the students’ process skills development.

Theoretical frameworks

The incorporation of these rubrics and other currently available tools for use in STEM undergraduate classrooms can be viewed through the lenses of constructive alignment (Biggs, 1996 ) and self-regulated learning (Zimmerman, 2002 ). The theory of constructivism posits that students learn by constructing their own understanding of knowledge rather than acquiring the meaning from their instructor (Bodner, 1986 ), and constructive alignment extends the constructivist model to consider how the alignment between a course’s intended learning outcomes, tasks, and assessments affects the knowledge and skills that students develop (Biggs, 2003 ). Students are more likely to develop the intended knowledge and skills if there is alignment between the instructor’s intended learning outcomes that are stated at the beginning of a course, the tasks that the instructor and students perform, and the assessment strategies that the instructor uses (Biggs, 1996 , 2003 , 2014 ). The nature of the tasks and assessments indicates what the instructor values and where students should focus their effort when studying. According to Biggs ( 2003 ) and Ramsden ( 1997 ), students see assessments as defining what they should learn, and a misalignment between the outcomes, tasks, and assessments may hinder students from achieving the intended learning outcomes. In the case of this work, the intended outcomes are improved process skills. In addition to aligning the components of a course, it is also critical that students receive feedback on their performance in order to improve their skills. Zimmerman’s theory of self-regulated learning (Zimmerman, 2002 ) provides a rationale for tailoring assessments to provide feedback to both students and instructors.

Zimmerman’s theory of self-regulated learning defines three phases of learning: forethought/planning, performance, and self-reflection. According to Zimmerman, individuals ideally should progress through these three phases in a cycle: they plan a task, perform the task, and reflect on their performance, then they restart the cycle on a new task. If a student is unable to adequately progress through the phases of self-regulated learning on their own, then feedback provided by an instructor may enable the students to do so (Butler & Winne, 1995 ). Thus, one of our criteria when creating rubrics to assess process skills was to make the rubrics suitable for faculty members to use to provide feedback to their students. Additionally, instructors can use the results from assessments to give themselves feedback regarding their students’ learning in order to regulate their teaching. This theory is called self-regulated learning because the goal is for learners to ultimately reflect on their actions to find ways to improve. We assert that, ideally, both students and instructors should be “learners” and use assessment data to reflect on their actions, although with different aims. Students need consistent feedback from an instructor and/or self-assessment throughout a course to provide a benchmark for their current performance and identify what they can do to improve their process skills (Black & Wiliam, 1998 ; Butler & Winne, 1995 ; Hattie & Gan, 2011 ; Nicol & Macfarlane-Dick, 2006 ). Instructors need feedback on the extent to which their efforts are achieving their intended goals in order to improve their instruction and better facilitate the development of process skills through course experiences.

In accordance with the aforementioned theoretical frameworks, tools used to assess undergraduate STEM student process skills should be tailored to fit the outcomes that are expected for undergraduate students and be able to provide formative assessment and feedback to both students and faculty about the students’ skills. These tools should also be designed for everyday classroom use to enable students to regularly self-assess and faculty to provide consistent feedback throughout a semester. Additionally, it is desirable for assessment tools to be broadly generalizable to measure process skills in multiple STEM disciplines and institutions in order to increase the rubrics’ impact on student learning. Current tools exist to assess these process skills, but they each lack at least one of the desired characteristics for providing regular feedback to STEM students.

Current tools to assess process skills

Current tests available to assess critical thinking include the Critical Thinking Assessment Test (CAT) (Stein & Haynes, 2011 ), California Critical Thinking Skills Test (Facione, 1990a , 1990b ), and Watson Glaser Critical Thinking Appraisal (Watson & Glaser, 1964 ). These commercially available, multiple-choice tests are not designed to provide regular, formative feedback throughout a course and have not been implemented for this purpose. Instead, they are designed to provide summative feedback with a focus on assessing this skill at a programmatic or university level rather than for use in the classroom to provide formative feedback to students. Rather than using tests to assess process skills, rubrics could be used instead. Rubrics are effective assessment tools because they can be quick and easy to use, they provide feedback to both students and instructors, and they can evaluate individual aspects of a skill to give more specific feedback (Brookhart & Chen, 2014 ; Smit & Birri, 2014 ). Rubrics for assessing critical thinking are available, but they have not been used to provide feedback to undergraduate STEM students nor were they designed to do so (Association of American Colleges and Universities, 2019 ; Saxton et al., 2012 ). The Critical Thinking Analytic Rubric is designed specifically to assess K-12 students to enhance college readiness and has not been broadly tested in collegiate STEM courses (Saxton et al., 2012 ). The critical thinking rubric developed by the Association of American Colleges and Universities (AAC&U) as part of its Valid Assessment of Learning in Undergraduate Education (VALUE) Institute and Liberal Education and America’s Promise (LEAP) initiative (Association of American Colleges and Universities, 2019 ) is intended for programmatic assessment rather than specifically giving feedback to students throughout a course. As with tests for assessing critical thinking, current rubrics to assess critical thinking are not designed to act as formative assessments and give feedback to STEM faculty and undergraduates at the course or task level. Another issue with the assessment of critical thinking is the degree to which the construct is measurable. A National Research Council report (National Research Council, 2011 ) has suggested that there is little evidence of a consistent, measurable definition for critical thinking and that it may not be different from one’s general cognitive ability. Despite this issue, we have found that critical thinking is consistently listed as a programmatic outcome in STEM disciplines (American Chemical Society Committee on Professional Training, 2015 ; The Royal Society, 2014 ), so we argue that it is necessary to support instructors as they attempt to assess this skill.

Current methods for evaluating students’ information processing include discipline-specific tools such as a rubric to assess physics students’ use of graphs and equations to solve work-energy problems (Nguyen et al., 2010 ) and assessments of organic chemistry students’ ability to “[manipulate] and [translate] between various representational forms” including 2D and 3D representations of chemical structures (Kumi et al., 2013 ). Although these assessment tools can be effectively used for their intended context, they were not designed for use in a wide range of STEM disciplines or for a variety of tasks.

Despite the many tools that exist to measure process skills, none has been designed and tested to facilitate frequent, formative feedback to STEM undergraduate students and faculty throughout a semester. The rubrics described here have been designed by the Enhancing Learning by Improving Process Skills in STEM (ELIPSS) Project (Cole et al., 2016 ) to assess undergraduate STEM students’ process skills and to facilitate feedback at the classroom level with the potential to track growth throughout a semester or degree program. The rubrics described here are designed to assess critical thinking and information processing in student written work. Rubrics were chosen as the format for our process skill assessment tools because the highest level of each category in rubrics can serve as an explicit learning outcome that the student is expected to achieve (Panadero & Jonsson, 2013 ). Rubrics that are generalizable to multiple disciplines and institutions can enable the assessment of student learning outcomes and active learning pedagogies throughout a program of study and provide useful tools for a greater number of potential users.
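
Because the highest level of each rubric category functions as an explicit learning outcome, an analytic rubric can be thought of as a small structured object: a set of categories, each with ordered level descriptors, plus a rule for turning a score into feedback. The sketch below is purely illustrative; the category names and descriptors are hypothetical paraphrases of the information-processing definition, not the ELIPSS rubric text, and the feedback rule (report the descriptor one level up) is only one plausible way to generate formative comments.

```python
# Illustrative only: a generic analytic rubric represented as a data structure.
# Category names and descriptors are hypothetical, not the ELIPSS rubric text.
from dataclasses import dataclass

@dataclass
class Category:
    name: str
    descriptors: list[str]  # descriptors[i] describes performance at level i (0 = lowest)

rubric = [
    Category("Evaluating information",
             ["Relevance of information not considered",
              "Some relevant information identified",
              "All relevant information identified and prioritized"]),
    Category("Interpreting information",
             ["Representation misread",
              "Representation mostly interpreted correctly",
              "Representation interpreted correctly and completely"]),
    Category("Transforming information",
             ["No transformation attempted",
              "Transformation attempted with errors",
              "Information accurately converted to the target form"]),
]

def formative_feedback(scores: dict[str, int]) -> list[str]:
    """Pair each category score with the descriptor for the next level up."""
    feedback = []
    for cat in rubric:
        level = scores[cat.name]
        if level < len(cat.descriptors) - 1:
            feedback.append(f"{cat.name}: currently '{cat.descriptors[level]}'; "
                            f"aim for '{cat.descriptors[level + 1]}'.")
        else:
            feedback.append(f"{cat.name}: highest level reached.")
    return feedback

print("\n".join(formative_feedback({
    "Evaluating information": 1,
    "Interpreting information": 2,
    "Transforming information": 0,
})))
```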

Research questions

This work sought to answer the following research questions for each rubric:

Does the rubric adequately measure relevant aspects of the skill?

How well can the rubrics provide feedback to instructors and students?

Can multiple raters use the rubrics to give consistent scores?

This work received Institutional Review Board approval prior to any data collection involving human subjects. The sources of data used to construct the process skill rubrics and answer these research questions were (1) peer-reviewed literature on how each skill is defined, (2) feedback from content experts in multiple STEM disciplines via surveys and in-person, group discussions regarding the appropriateness of the rubrics for each discipline, (3) interviews with students whose work was scored with the rubrics and teaching assistants who scored the student work, and (4) results of applying the rubrics to samples of student work.

Defining the scope of the rubrics

The rubrics described here and the other rubrics in development by the ELIPSS Project are intended to measure process skills, which are desired learning outcomes identified by the STEM community in recent reports (National Research Council, 2012 ; Singer et al., 2012 ). In order to measure these skills in multiple STEM disciplines, operationalized definitions of each skill were needed. These definitions specify which aspects of student work (operations) would be considered evidence for the student using that skill and establish a shared understanding of each skill by members of each STEM discipline. The starting point for this work was the process skill definitions developed as part of the POGIL project (Cole et al., 2019a ). The POGIL community includes instructors from a variety of disciplines and institutions and represented the intended audience for the rubrics: faculty who value process skills and want to more explicitly assess them. The process skills discussed in this work were defined as follows:

Critical thinking is analyzing, evaluating, or synthesizing relevant information to form an argument or reach a conclusion supported with evidence.

Information processing is evaluating, interpreting, and manipulating or transforming information.

Examples of critical thinking include the tasks that students are asked to perform in a laboratory course. When students are asked to analyze the data they collected, combine data from different sources, and generate arguments or conclusions about their data, we see this as critical thinking. However, when students simply follow the so-called “cookbook” laboratory instructions that require them to confirm pre-determined conclusions, we do not think students are engaging in critical thinking. One example of information processing is when organic chemistry students are required to re-draw molecules in different formats. The students must evaluate and interpret various pieces of one representation, and then they recreate the molecule in another representation. However, if students are asked to simply memorize facts or algorithms to solve problems, we do not see this as information processing.

Iterative rubric development

The development process was the same for the information processing rubric and the critical thinking rubric. After defining the scope of the rubric, an initial version was drafted based upon the definition of the target process skill and how each aspect of the skill is defined in the literature. A more detailed discussion of the literature that informed each rubric category is included in the “Results and Discussion” section. This initial version then underwent iterative testing in which the rubric was reviewed by researchers, practitioners, and students. The rubric was first evaluated by the authors and a group of eight faculty from multiple STEM disciplines who made up the ELIPSS Project’s primary collaborative team (PCT). The PCT was a group of faculty members with experience in discipline-based education research who employ active-learning pedagogies in their classrooms. This initial round of evaluation was intended to ensure that the rubric measured relevant aspects of the skill and was appropriate for each PCT member’s discipline. This evaluation determined how well the rubrics were aligned with each instructor’s understanding of the process skill including both in-person and email discussions that continued until the group came to consensus that each rubric category could be applied to student work in courses within their disciplines. There has been an ongoing debate regarding the role of disciplinary knowledge in critical thinking and the extent to which critical thinking is subject-specific (Davies, 2013 ; Ennis, 1990 ). This work focuses on the creation of rubrics to measure process skills in different domains, but we have not performed cross-discipline comparisons. This initial round of review was also intended to ensure that the rubrics were ready for classroom testing by instructors in each discipline. Next, each rubric was tested over three semesters in multiple classroom environments, illustrated in Table 1 . The rubrics were applied to student work chosen by each PCT member. The PCT members chose the student work based on their views of how the assignments required students to engage in process skills and show evidence of those skills. The information processing and critical thinking rubrics shown in this work were each tested in at least three disciplines, course levels, and institutions.

After each semester, the feedback was collected from the faculty testing the rubric, and further changes to the rubric were made. Feedback was collected in the form of survey responses along with in-person group discussions at annual project meetings. After the first iteration of completing the survey, the PCT members met with the authors to discuss how they were interpreting each survey question. This meeting helped ensure that the surveys were gathering valid data regarding how well the rubrics were measuring the desired process skill. Questions in the survey such as “What aspects of the student work provided evidence for the indicated process skill?” and “Are there edits to the rubric/descriptors that would improve your ability to assess the process skill?” allowed the authors to determine how well the rubric scores were matching the student work and identify necessary changes to the rubric. Further questions asked about the nature and timing of the feedback given to students in order to address the question of how well the rubrics provide feedback to instructors and students. The survey questions are included in the Supporting Information . The survey responses were analyzed qualitatively to determine themes related to each research question.

In addition to the surveys given to faculty rubric testers, twelve students were interviewed in fall 2016 and fall 2017. In the United States of America, the fall semester typically runs from August to December and is the first semester of the academic year. Each student participated in one interview, which lasted about 30 min. These interviews were intended to gather further evidence about how well the rubrics measured the process skills students used when completing their assignments and to ensure that the information provided by the rubrics made sense to students. The protocol for these interviews is included in the Supporting Information. In fall 2016, the students interviewed were enrolled in an organic chemistry laboratory course for non-majors at a large, research-intensive university in the United States. Thirty students agreed to have their work analyzed by the research team, and nine students were interviewed. The rubrics were not a component of the laboratory course grading; instead, the first author assessed the students’ reports for critical thinking and information processing, and the students were provided electronic copies of their laboratory reports and scored rubrics in advance of the interview. The first author had recently been a graduate teaching assistant for the course and was familiar with the instructor’s expectations for the laboratory reports. During the interview, the students were given time to review their reports and the completed rubrics, and then they were asked how well they understood the content of the rubrics and how accurately each category score represented their work.

In fall 2017, students enrolled in a physical chemistry thermodynamics course for majors were interviewed. The physical chemistry course took place at the same university as the organic laboratory course, but there was no overlap between participants. Three students and two graduate teaching assistants (GTAs) were interviewed. The course included daily group work, and process skill assessment was an explicit part of the instructor’s curriculum. At the end of each class period, students assessed their groups using portions of ELIPSS rubrics, including the two process skill rubrics presented in this paper. About every 2 weeks, the GTAs assessed the student groups with a complete ELIPSS rubric for a particular skill, then gave the groups their scored rubrics with written comments. The students’ individual homework problem sets were assessed once with rubrics for three skills: critical thinking, information processing, and problem-solving. The students received the scored rubric with written comments when the graded problem set was returned to them. In the last third of the semester, the students and GTAs were interviewed about how the rubrics were implemented in the course, how well the rubric scores reflected the students’ written work, and how the use of rubrics affected the teaching assistants’ ability to assess student skills. The protocols for these interviews are included in the Supporting Information.

Gathering evidence for utility, validity, and reliability

The utility, validity, and reliability of the rubrics were examined throughout the development process. Utility is the degree to which the rubrics are perceived as practical by experts and practitioners in the field. Through multiple meetings, the PCT faculty determined that early drafts of the rubric seemed appropriate for use in their classrooms, which represented multiple STEM disciplines. Rubric utility was reexamined multiple times throughout the development process to ensure that the rubrics would remain practical for classroom use. Validity can be defined in multiple ways. For example, the Standards for Educational and Psychological Testing (Joint Committee on Standards for Educational Psychological Testing, 2014) defines validity as “the degree to which all the accumulated evidence supports the intended interpretation of test scores for the proposed use.” For the purposes of this work, we drew on the ways in which two distinct types of validity were examined in the rubric literature: content validity and construct validity. Content validity is the degree to which the rubrics cover relevant aspects of each process skill (Moskal & Leydens, 2000). In this case, the process skill definition and a review of the literature determined which categories were included in each rubric. The literature review was considered complete once saturation was reached, that is, when no new aspects of each skill emerged. Construct validity is the degree to which the levels of each rubric category accurately reflect the process that students performed (Moskal & Leydens, 2000). Evidence of construct validity was gathered via the faculty surveys, teaching assistant interviews, and student interviews. In the student interviews, students were given one of their completed assignments and asked to explain how they completed the task. Students were then asked to explain how well each category applied to their work and whether any changes to the rubric were needed to more accurately reflect their process. Due to logistical challenges, we were not able to obtain evidence for convergent validity, and this is further discussed in the “Limitations” section.

Adjacent agreement, also known as “interrater agreement within one,” was chosen as the measure of interrater reliability due to its common use in rubric development projects (Jonsson & Svingby, 2007). Adjacent agreement is the percentage of cases in which two raters either give the same rating or differ by one level (i.e., they give adjacent ratings to the same work). Jonsson and Svingby (2007) found that most of the rubrics they reviewed had adjacent agreement scores of 90% or greater. However, they noted that the agreement threshold varied based on the number of possible levels of performance for each category in the rubric, with three and four being the most common numbers of levels. Since the rubrics discussed in this report have six levels (scores of zero through five) and are intended for low-stakes assessment and feedback, a goal of 80% adjacent agreement was selected. To calculate agreement for the critical thinking and information processing rubrics, two researchers discussed the scoring criteria for each rubric and then independently assessed the organic chemistry laboratory reports.
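To make the agreement calculation concrete, the following minimal sketch computes both exact and adjacent (“within one”) agreement for two raters; the 0–5 scale and the 80% goal come from the text above, while the function name and the example scores are hypothetical.

```python
def agreement_rates(rater_a, rater_b):
    """Return (exact %, adjacent %) agreement between two raters' rubric scores (0-5)."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same set of student work.")
    n = len(rater_a)
    exact = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    adjacent = sum(abs(a - b) <= 1 for a, b in zip(rater_a, rater_b)) / n
    return 100 * exact, 100 * adjacent


# Hypothetical scores for one rubric category on ten laboratory reports.
rater_1 = [5, 4, 3, 5, 2, 4, 5, 3, 4, 5]
rater_2 = [5, 3, 3, 4, 2, 4, 4, 3, 5, 5]

exact_pct, adjacent_pct = agreement_rates(rater_1, rater_2)
print(f"Exact agreement: {exact_pct:.0f}%")        # identical scores
print(f"Adjacent agreement: {adjacent_pct:.0f}%")  # scores within one level
print("Meets the 80% adjacent-agreement goal:", adjacent_pct >= 80)
```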

Results and discussion

The process skill rubrics to assess critical thinking and information processing in student written work were completed after multiple rounds of revision based on feedback from various sources. These sources include feedback from instructors who tested the rubrics in their classrooms, TAs who scored student work with the rubrics, and students who were assessed with the rubrics. The categories for each rubric will be discussed in terms of the evidence that the rubrics measure the relevant aspects of the skill and how they can be used to assess STEM undergraduate student work. Each category discussion will begin with a general explanation of the category followed by more specific examples from the organic chemistry laboratory course and physical chemistry lecture course to demonstrate how the rubrics can be used to assess student work.

Information processing rubric

The definition of information processing and the focus of the rubric presented here (Fig. 1 ) are distinct from cognitive information processing as defined by the educational psychology literature (Driscoll, 2005 ). The rubric shown here is more aligned with the STEM education construct of representational competency (Daniel et al., 2018 ).

Fig. 1 Rubric for assessing information processing

Evaluating

When solving a problem or completing a task, students must evaluate the provided information for relevance or importance to the task (Hanson, 2008; Swanson et al., 1990). Not all of the information provided in a prompt (e.g., homework or exam questions) is necessarily relevant for addressing every part of the prompt. Students should ideally show evidence of their evaluation process by identifying what information is present in the prompt/model, indicating what information is relevant or not relevant, and indicating why information is relevant. Responses with these characteristics would earn high rubric scores for this category. Although students may not explicitly state what information is necessary to address a task, the information they do use can act as indirect evidence of the degree to which they have evaluated all of the available information in the prompt. Evidence of inaccurate evaluation includes the inclusion of irrelevant information or the omission of relevant information in an analysis or in completing a task.

When evaluating the organic chemistry laboratory reports, the focus for the evaluating category was the information students presented when identifying the chemical structure of their products. For students who received a high score, this information included their measured value for the product’s melting point, the literature (expected) value for the melting point, and the peaks in a nuclear magnetic resonance (NMR) spectrum. NMR spectroscopy is a commonly used technique in chemistry to obtain structural information about a compound. Lower scores were given if students omitted any of the necessary information or if they included unnecessary information. For example, a student who discussed their reaction yield when identifying their product would receive a low evaluating score because the yield does not help determine the product’s identity; in this case, the yield is unnecessary information. In the physical chemistry course, students often did not show evidence that they determined which information was relevant to answer the homework questions and thus earned low evaluating scores. These omissions will be further addressed in the “Interpreting” section.

Interpreting

In addition to evaluating, students must often interpret information using their prior knowledge to explain the meaning of something, make inferences, match data to predictions, and extract patterns from data (Hanson, 2008; Nakhleh, 1992; Schmidt et al., 1989; Swanson et al., 1990). Students earn high scores for this category if they assign correct meaning to labeled information (e.g., text, tables, graphs, diagrams), extract specific details from information, explain information in their own words, and determine patterns in information. For the organic chemistry laboratory reports, students received high scores if they accurately interpreted their measured values and NMR peaks. Almost every student obtained melting point values that differed from the expected values due to measurement error or impurities in their products, so they needed to describe what types of impurities could cause such discrepancies. Also, each NMR spectrum contained one peak that corresponded to the solvent used to dissolve the students’ product, so the students needed to use their prior knowledge of NMR spectroscopy to recognize that this peak did not correspond to part of their product.

In physical chemistry, the graduate teaching assistant who assessed the student work often gave students low scores for inaccurately explaining changes to chemical systems, such as changes in pressure or entropy, and used the rubric to identify both the evaluating and interpreting categories as weaknesses in many of the students’ homework submissions. However, the students often earned high scores for the manipulating and transforming categories, so the GTA was able to give students specific feedback on their areas for improvement while also highlighting their strengths.

Manipulating and transforming (extent and accuracy)

In addition to evaluating and interpreting information, students may be asked to manipulate and transform information from one form to another. These transformations should be complete and accurate (Kumi et al., 2013; Nguyen et al., 2010). Students may be required to construct a figure based on written information, or conversely, they may transform information in a figure into words or mathematical expressions. Two categories for manipulating and transforming (i.e., extent and accuracy) were included to allow instructors to give more specific feedback. Students often either transformed little information but did so accurately, or transformed much information but did so inaccurately; the two categories allowed differentiated feedback to be provided in both cases. As stated above, the organic chemistry students were expected to transform their NMR spectral data into a table and provide a labeled structure of their final product. Students were given high scores if they converted all of the relevant peaks from their spectrum into the table format and correctly matched the peaks to the hydrogen atoms in their products. Students received lower scores if they converted the information for only a few peaks or if they incorrectly matched the peaks to the hydrogen atoms.
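To summarize how the information processing rubric is structured for scoring, the sketch below records category scores on the six-level (0–5) scale for a single piece of student work; the category names and scale come from the rubric described above, while the class, method names, and example scores and comments are illustrative assumptions rather than the published rubric format.

```python
from dataclasses import dataclass, field

# Categories of the information processing rubric described in the text.
CATEGORIES = [
    "Evaluating",
    "Interpreting",
    "Manipulating and transforming (extent)",
    "Manipulating and transforming (accuracy)",
]

MAX_SCORE = 5  # rubric levels run from 0 (lowest) to 5 (highest)


@dataclass
class RubricScore:
    """Scores and optional comments for one piece of student work."""
    scores: dict = field(default_factory=dict)
    comments: dict = field(default_factory=dict)

    def rate(self, category: str, score: int, comment: str = "") -> None:
        if category not in CATEGORIES:
            raise ValueError(f"Unknown category: {category}")
        if not 0 <= score <= MAX_SCORE:
            raise ValueError("Scores must be between 0 and 5.")
        self.scores[category] = score
        self.comments[category] = comment

    def areas_for_improvement(self, threshold: int = 3) -> list:
        """Return the categories scored below the given threshold."""
        return [c for c, s in self.scores.items() if s < threshold]


# Hypothetical scores for one organic chemistry laboratory report.
report = RubricScore()
report.rate("Evaluating", 5, "Used melting points and NMR peaks; no extraneous data.")
report.rate("Interpreting", 2, "Did not account for the solvent peak in the NMR spectrum.")
report.rate("Manipulating and transforming (extent)", 5)
report.rate("Manipulating and transforming (accuracy)", 2, "Peaks matched to the wrong hydrogens.")

print(report.areas_for_improvement())
# ['Interpreting', 'Manipulating and transforming (accuracy)']
```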

Critical thinking rubric

Critical thinking can be broadly defined in different contexts, but we found that the categories included in the rubric (Fig. 2 ) represented commonly accepted aspects of critical thinking (Danczak et al., 2017 ) and suited the needs of the faculty collaborators who tested the rubric in their classrooms.

Fig. 2 Rubric for assessing critical thinking

Evaluating

When completing a task, students must evaluate the relevance of information that they will ultimately use to support a claim or conclusion (Miri et al., 2007; Zohar et al., 1994). An evaluating category is included in both the critical thinking and information processing rubrics because evaluation is a key aspect of both skills. From our previous work developing a problem-solving rubric (manuscript in preparation) and our review of the literature for this work (Danczak et al., 2017; Lewis & Smith, 1993), we observed overlap among information processing, critical thinking, and problem-solving. Additionally, while the evaluating category in the information processing rubric assesses a student’s ability to determine the importance of information for completing a task, the evaluating category in the critical thinking rubric places a heavier emphasis on using the information to support a conclusion or argument.

When scoring student work with the evaluating category, students receive high scores if they indicate what information is likely to be most relevant to the argument they need to make, determine the reliability of the source of their information, and determine the quality and accuracy of the information itself. As with the evaluating category in the information processing rubric, the evidence used to assess this category can be indirect. In the organic chemistry laboratory reports, students needed to make an argument about whether they successfully produced the desired product, so they needed to discuss which information was relevant to their claims about the product’s identity and purity. Students received high scores for the evaluating category when they accurately determined that the melting point and the NMR peaks (excluding the solvent peak) indicated the identity of their product. Students received lower scores for evaluating when they left out relevant information, because this was seen as evidence that they had inaccurately evaluated the information’s relevance in supporting their conclusion. They also received lower scores when they incorrectly stated that a high yield indicated a pure product. Students were given the opportunity to demonstrate their ability to evaluate the quality of information when discussing their melting point. Students sometimes struggled to obtain reliable melting point data due to their inexperience in the laboratory, so the rubric provided a way to assess students’ ability to critique their own data.

Analyzing

In tandem with evaluating information, students also need to analyze that same information to extract meaningful evidence to support their conclusions (Bailin, 2002; Lai, 2011; Miri et al., 2007). The analyzing category provides an assessment of a student’s ability to discuss information and explore its possible meaning, extract patterns from data/information that could be used as evidence for their claims, and summarize information that could be used as evidence. For example, in the organic chemistry laboratory reports, students needed to compare the information they obtained to the expected values for a product. Students received high scores for the analyzing category if they could extract meaningful structural information from the NMR spectrum and from the two melting points (observed and expected) for each reaction step.

Synthesizing

Often, students are asked to synthesize or connect multiple pieces of information in order to draw a conclusion or make a claim (Huitt, 1998; Lai, 2011). Synthesizing involves identifying the relationships between different pieces of information or concepts, identifying ways in which they can be combined, and explaining how the newly synthesized information can be used to reach a conclusion and/or support an argument. While performing the organic chemistry laboratory experiments, students obtained multiple types of information, including the melting point and NMR spectrum as well as other spectroscopic data such as an infrared (IR) spectrum. Students received high scores for this category when they accurately synthesized these multiple data types by showing how the NMR and IR spectra could each reveal different parts of a molecule in order to determine the molecule’s entire structure.

Forming arguments (structure and validity)

The final key aspect of critical thinking is forming a well-structured and valid argument (Facione, 1984; Glassner & Schwarz, 2007; Lai, 2011; Lewis & Smith, 1993). We observed that students could earn high scores for evaluating, analyzing, and synthesizing but still struggle to form arguments; this was particularly common in the problem sets from the physical chemistry course.

As with the manipulating and transforming categories in the information processing rubric, two forming arguments categories were included to allow instructors to give more specific feedback. Some students may be able to include all of the expected structural elements of their arguments but use faulty information or reasoning. Conversely, some students may be able to make scientifically valid claims but not necessarily support them with evidence. The two forming arguments categories are intended to accurately assess both of these scenarios. For the forming arguments (structure) category, students earn high scores if they explicitly state their claim or conclusion, list the evidence used to support the argument, and provide reasoning to link the evidence to their claim/conclusion. Students who do not make a claim or who provide little evidence or reasoning receive lower scores.

For the forming arguments (validity) category, students earn high scores if their claim is accurate and their reasoning is logical and clearly supports the claim with the provided evidence. Organic chemistry students earned high scores for both forming arguments categories if they made explicit claims about the identity and purity of their product and provided complete and accurate evidence for their claim(s), such as the melting point values and the positions of NMR peaks that correspond to their product. Additionally, the students provided evidence for the purity of their products by pointing to the presence or absence of peaks in their NMR spectrum that would match other potential side products. They also needed to provide logical reasoning for why the peaks indicated the presence or absence of a compound.

As previously mentioned, the physical chemistry students received lower scores for the forming arguments categories than for the other aspects of critical thinking. These students were asked to make claims about the relationships between entropy and heat and then provide relevant evidence to justify those claims. Often, the students would make clearly articulated claims but would provide little evidence to support them. As with the information processing rubric, the critical thinking rubric allowed the GTAs to assess aspects of these skills independently and identify specific areas for student improvement.

Validity and reliability

The goal of this work was to create rubrics that can accurately assess student work (validity) and be consistently implemented by instructors or researchers within multiple STEM fields (reliability). The evidence for validity includes the alignment of the rubrics with literature-based descriptions of each skill, review of the rubrics by content experts from multiple STEM disciplines, interviews with undergraduate students whose work was scored using the rubrics, and interviews of the GTAs who scored the student work.

The definitions for each skill, along with multiple iterations of the rubrics, underwent review by STEM content experts. As noted earlier, the instructors who were testing the rubrics were given a survey at the end of each semester and were invited to offer suggested changes to the rubric to better help them assess their students. After multiple rubric revisions, survey responses from the instructors indicated that the rubrics accurately represented the breadth of each process skill as seen in each expert’s content area and that each category could be used to measure multiple levels of student work. By the end of the rubrics’ development, instructors were writing responses such as “N/A” or “no suggestions” to indicate that the rubrics did not need further changes.

Feedback from the faculty also indicated that the rubrics were measuring the intended constructs, as shown by their responses to the survey item “What aspects of the student work provided evidence for the indicated process skill?” For example, one instructor noted that for information processing, she saw evidence of the manipulating and transforming categories when “students had to transform their written/mathematical relationships into an energy diagram.” Another instructor elicited evidence of information processing during an in-class group quiz: “A question on the group quiz was written to illicit [sic] IP [information processing]. Students had to transform a structure into three new structures and then interpret/manipulate the structures to compare the pKa values [acidity] of the new structures.” For this instructor, the structures written by the students revealed evidence of their information processing by showing what information they omitted in the new structures or inaccurately transformed. For critical thinking, an instructor assessed short research reports with the critical thinking rubric and “looked for [the students’] ability to use evidence to support their conclusions, to evaluate the literature studies, and to develop their own judgements by synthesizing the information.” Another instructor used the critical thinking rubric to assess their students’ abilities to choose an instrument to perform a chemical analysis. According to the instructor, the students provided evidence of their critical thinking because “in their papers, they needed to justify their choice of instrument. This justification required them to evaluate information and synthesize a new understanding for this specific chemical analysis.”

Analysis of student work indicates multiple levels of achievement for each rubric category (illustrated in Fig. 3 ), although there may have been a ceiling effect for the evaluating and the manipulating and transforming (extent) categories in information processing for organic chemistry laboratory reports because many students earned the highest possible score (five) for those categories. However, other implementations of the ELIPSS rubrics (Reynders et al., 2019 ) have shown more variation in student scores for the two process skills.
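As a rough illustration of how a ceiling effect like this can be flagged in rubric data, the sketch below computes the fraction of students earning the maximum score in each category; the score lists are invented for illustration and do not reproduce the data shown in Fig. 3.

```python
# Hypothetical information processing scores (0-5) for a set of laboratory reports.
scores_by_category = {
    "Evaluating": [5, 5, 4, 5, 5, 5, 3, 5, 5, 4],
    "Interpreting": [4, 3, 5, 2, 4, 3, 5, 4, 2, 3],
    "Manipulating and transforming (extent)": [5, 5, 5, 4, 5, 5, 5, 3, 5, 5],
    "Manipulating and transforming (accuracy)": [3, 4, 2, 5, 3, 4, 3, 2, 4, 3],
}

MAX_SCORE = 5

for category, scores in scores_by_category.items():
    at_ceiling = sum(s == MAX_SCORE for s in scores) / len(scores)
    flag = "  <- possible ceiling effect" if at_ceiling >= 0.5 else ""
    print(f"{category}: {at_ceiling:.0%} at max score{flag}")
```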

Fig. 3 Student rubric scores from an organic chemistry laboratory course. The two rubrics were used to evaluate different laboratory reports. Thirty students were assessed for information processing and 28 were assessed for critical thinking

To provide further evidence that the rubrics were measuring the intended skills, students in the physical chemistry course were interviewed about their thought processes and how well the rubric scores reflected the work they performed. During these interviews, students described how they used various aspects of information processing and critical thinking skills. The students first described how they used information processing during a problem set in which they had to answer questions about a diagram of systolic and diastolic blood pressure. Students described how they evaluated and interpreted the graph to make statements such as “diastolic [pressure] is our y-intercept” and “volume is the independent variable.” The students then demonstrated their ability to transform information from one form to another, from a graph to a mathematical equation, by recognizing “it’s a linear relationship so I used Y equals M X plus B” and “integrated it cause it’s the change, the change in V [volume].”

For critical thinking, students described their process on a different problem set, in which they had to explain why the change in Helmholtz energy and the change in Gibbs free energy were equivalent under a certain given condition. Students first demonstrated how they evaluated the relevant information and analyzed what would and would not change in their system. One student said, “So to calculate the final pressure, I think I just immediately went to the ideal gas law because we know the final volume and the number of moles won’t change and neither will the temperature in this case. Well, I assume that it wouldn’t.” Another student showed evidence of their evaluation by writing out all the necessary information in one place and stating, “Whenever I do these types of problems, I always write what I start with which is why I always have this line of information I’m given.” After evaluating and analyzing, students had to form an argument by claiming that the two energy values were equal and then defending that claim. Students explained that they were not always as clear as they could be when justifying their claims. For instance, one student said, “Usually I just write out equations and then hope people understand what I’m doing mathematically” but acknowledged that they “probably could have explained it a little more.”

Student feedback throughout the organic chemistry course and near the end of the physical chemistry course indicated that the rubric scores were accurate representations of the students’ work with a few exceptions. For example, some students felt like they should have received either a lower or higher score for certain categories, but they did say that the categories themselves applied well to their work. Most notably, one student reported that the forms and supports arguments categories in the critical thinking rubric did not apply to her work because she “wasn’t making an argument” when she was demonstrating that the Helmholtz and Gibbs energy values were equal in her thermodynamics assignment. We see this as an instance where some students and instructors may define argument in different ways. The process skill definitions and the rubric categories are meant to articulate intended learning outcomes from faculty members to their students, so if a student defines the skills or categories differently than the faculty member, then the rubrics can serve to promote a shared understanding of the skill.

As previously mentioned, reliability was measured by two researchers independently assessing ten laboratory reports to ensure that multiple raters could use the rubrics consistently. The average adjacent agreement scores were 92% for critical thinking and 93% for information processing, and the exact agreement scores were 86% for critical thinking and 88% for information processing. Additionally, two different raters assessed a statistics assignment that was given to sixteen first-year undergraduates. The average pairwise adjacent agreement scores for this assignment were 89% for critical thinking and 92% for information processing. However, the exact agreement scores were much lower: 34% for critical thinking and 36% for information processing. In this case, neither rater was an expert in the content area. While the exact agreement scores for the statistics assignment are much lower than desirable, the adjacent agreement scores do meet the threshold for reliability seen in other rubrics (Jonsson & Svingby, 2007) despite the raters’ lack of expertise in the content area. Based on these results, it may be difficult for multiple raters to give exactly the same scores to the same work if they have varying levels of content knowledge, but it is important to note that the rubrics are primarily intended for formative assessment that can facilitate discussions between instructors and students about the ways for students to improve. The high adjacent agreement scores indicate that multiple raters can identify the same areas to improve in examples of student work.

Instructor and teaching assistant reflections

The survey responses from faculty members provided evidence for the utility of the rubrics. Faculty members reported that when they used the rubrics to define their expectations and be more specific about their assessment criteria, the students seemed better able to articulate the areas in which they needed improvement. As one instructor put it, “having the rubrics helped open conversations and discussions” that were not happening before the rubrics were implemented. We see this as evidence of the clear intended learning outcomes that are an integral aspect of achieving constructive alignment within a course. The instructors’ specific feedback to the students, and the students’ increased awareness of their areas for improvement, may enable the students to better regulate their learning throughout a course. Additionally, the survey responses indicated that the faculty members were changing their teaching practices and becoming more cognizant of how assignments did or did not elicit the process skill evidence that they desired. After using the rubrics, one instructor said, “I realize I need to revise many of my activities to more thoughtfully induce process skill development.” We see this as evidence that the faculty members were using the rubrics to regulate their teaching by reflecting on the outcomes of their practices and then planning for future teaching. These activities represent the reflection and forethought/planning aspects of self-regulated learning on the part of the instructors.

Graduate teaching assistants in the physical chemistry course indicated that the rubrics gave them a way to clarify the instructor’s expectations when they were interacting with the students. As one GTA said, “It’s giving [the students] feedback on direct work that they have instead of just right or wrong. It helps them to understand like ‘Okay how can I improve? What areas am I lacking in?’” A more detailed account of how the instructors and teaching assistants implemented the rubrics has been reported elsewhere (Cole et al., 2019a).

Student reflections

Students in both the organic and physical chemistry courses reported that they could use the rubrics to engage in the three phases of self-regulated learning: forethought/planning, performing, and reflecting. In an organic chemistry interview, one student was discussing how they could improve their low score for the synthesizing category of critical thinking by saying “I could use the data together instead of trying to use them separately,” thus demonstrating forethought/planning for their later work. Another student described how they could use the rubric while performing a task: “I could go through [the rubric] as I’m writing a report…and self-grade.” Finally, one student demonstrated how they could use the rubrics to reflect on their areas for improvement by saying that “When you have the five column [earn a score of five], I can understand that I’m doing something right” but “I really need to work on revising my reports.” We see this as evidence that students can use the rubrics to regulate their own learning, although classroom facilitation can have an effect on the ways in which students use the rubric feedback (Cole et al., 2019b ).

Limitations

The process skill definitions presented here represent a consensus understanding among members of the POGIL community and the instructors who participated in this study, but these skills are often defined in multiple ways by various STEM instructors, employers, and students (Danczak et al., 2017 ). One issue with critical thinking, in particular, is the broadness of how the skill is defined in the literature. Through this work, we have evidence via expert review to indicate that our definitions represent common understandings among a set of STEM faculty. Nonetheless, we cannot claim that all STEM instructors or researchers will share the skill definitions presented here.

There is currently a debate in the STEM literature (National Research Council, 2011) about whether the critical thinking construct is domain-general or domain-specific, that is, whether critical thinking ability developed in one discipline can be applied in another. We cannot make claims about the generality of the construct based on the data presented here because the same students were not tested across multiple disciplines or courses. Additionally, we did not gather evidence for convergent validity, which is “the degree to which an operationalized construct is similar to other operationalized constructs that it theoretically should be similar to” (National Research Council, 2011). In other words, evidence for convergent validity would be the comparison of multiple measures of information processing or critical thinking; however, none of the instructors who used the ELIPSS rubrics also used a secondary measure of these constructs. Although the rubrics were examined by a multidisciplinary group of collaborators, this group was primarily chemists and included eight faculty members from other disciplines, so the content validity of the rubrics may be somewhat limited.

Finally, the generalizability of the rubrics is limited by the relatively small number of students who were interviewed about their work. During their interviews, the students in the organic and physical chemistry courses each said that they could use the rubric scores as feedback to improve their skills. Additionally, as discussed in the “Validity and Reliability” section, the processes described by the students aligned with the content of the rubric and provided evidence of the rubric scores’ validity. However, the data gathered from the student interviews only represents the views of a subset of students in the courses, and further study is needed to determine the most appropriate contexts in which the rubrics can be implemented.

Conclusions and implications

Two rubrics were developed to assess and provide feedback on undergraduate STEM students’ critical thinking and information processing. Faculty survey responses indicated that the rubrics measured the relevant aspects of each process skill in the disciplines that were examined. Faculty survey responses, TA interviews, and student interviews over multiple semesters indicated that the rubric scores accurately reflected the evidence of process skills that the instructors wanted to see and the processes that the students performed when they were completing their assignments. The rubrics showed high inter-rater agreement scores, indicating that multiple raters could identify the same areas for improvement in student work.

In terms of constructive alignment, courses should ideally have alignment between their intended learning outcomes, student and instructor activities, and assessments. By using the ELIPSS rubrics, instructors were able to explicitly articulate the intended learning outcomes of their courses to their students. The instructors were then able to assess and provide feedback to students on different aspects of their process skills. Future efforts will be focused on modifying student assignments to enable instructors to better elicit evidence of these skills. In terms of self-regulated learning, students indicated in the interviews that the rubric scores were accurate representations of their work (performances), could help them reflect on their previous work (self-reflection), and the feedback they received could be used to inform their future work (forethought). Not only did the students indicate that the rubrics could help them regulate their learning, but the faculty members indicated that the rubrics had helped them regulate their teaching. With the individual categories on each rubric, the faculty members were better able to observe their students’ strengths and areas for improvement and then tailor their instruction to meet those needs. Our results indicated that the rubrics helped instructors in multiple STEM disciplines and at multiple institutions reflect on their teaching and then make changes to better align their teaching with their desired outcomes.

Overall, the rubrics can be used in a number of different ways to modify courses or for programmatic assessment. As previously stated, instructors can use the rubrics to define expectations for their students and provide them with feedback on desired skills throughout a course. The rubric categories can be used to give feedback on individual aspects of student process skills to provide specific feedback to each student. If an instructor or department wants to change from didactic lecture-based courses to active learning ones, the rubrics can be used to measure non-content learning gains that stem from the adoption of such pedagogies. Although the examples provided here for each rubric were situated in chemistry contexts, the rubrics were tested in multiple disciplines and institution types. The rubrics have the potential for wide applicability to assess not only laboratory reports but also homework assignments, quizzes, and exams. Assessing these tasks provides a way for instructors to achieve constructive alignment between their intended outcomes and their assessments, and the rubrics are intended to enhance this alignment to improve student process skills that are valued in the classroom and beyond.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

American Association of Colleges and Universities

Critical Thinking Assessment Test

Comprehensive University

Enhancing Learning by Improving Process Skills in STEM

Liberal Education and America’s Promise

Nuclear Magnetic Resonance

Primary Collaborative Team

Peer-led Team Learning

Process Oriented Guided Inquiry Learning

Primarily Undergraduate Institution

Research University

Science, Technology, Engineering, and Mathematics

Valid Assessment of Learning in Undergraduate Education

ABET Engineering Accreditation Commission. (2012). Criteria for Accrediting Engineering Programs . Retrieved from http://www.abet.org/accreditation/accreditation-criteria/criteria-for-accrediting-engineering-programs-2016-2017/ .

American Chemical Society Committee on Professional Training. (2015). Undergraduate Professional Education in Chemistry: ACS Guidelines and Evaluation Procedures for Bachelor's Degree Programs. Retrieved from https://www.acs.org/content/dam/acsorg/about/governance/committees/training/2015-acs-guidelines-for-bachelors-degree-programs.pdf

Association of American Colleges and Universities. (2019). VALUE Rubric Development Project. Retrieved from https://www.aacu.org/value/rubrics .

Bailin, S. (2002). Critical Thinking and Science Education. Science and Education, 11 , 361–375.


Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32 (3), 347–364.

Biggs, J. (2003). Aligning teaching and assessing to course objectives. Teaching and learning in higher education: New trends and innovations, 2 , 13–17.


Biggs, J. (2014). Constructive alignment in university teaching. HERDSA Review of higher education, 1 (1), 5–22.

Black, P., & Wiliam, D. (1998). Assessment and Classroom Learning. Assessment in Education: Principles, Policy & Practice, 5 (1), 7–74.

Bodner, G. M. (1986). Constructivism: A theory of knowledge. Journal of Chemical Education, 63 (10), 873–878.

Brewer, C. A., & Smith, D. (2011). Vision and change in undergraduate biology education: A call to action. Washington, DC: American Association for the Advancement of Science.

Brookhart, S. M., & Chen, F. (2014). The quality and effectiveness of descriptive rubrics. Educational Review , 1–26.

Butler, D. L., & Winne, P. H. (1995). Feedback and Self-Regulated Learning: A Theoretical Synthesis. Review of Educational Research, 65 (3), 245–281.

Cole, R., Lantz, J., & Ruder, S. (2016). Enhancing Learning by Improving Process Skills in STEM. Retrieved from http://www.elipss.com .

Cole, R., Lantz, J., & Ruder, S. (2019a). PO: The Process. In S. R. Simonson (Ed.), POGIL: An Introduction to Process Oriented Guided Inquiry Learning for Those Who Wish to Empower Learners (pp. 42–68). Sterling, VA: Stylus Publishing.

Cole, R., Reynders, G., Ruder, S., Stanford, C., & Lantz, J. (2019b). Constructive Alignment Beyond Content: Assessing Professional Skills in Student Group Interactions and Written Work. In M. Schultz, S. Schmid, & G. A. Lawrie (Eds.), Research and Practice in Chemistry Education: Advances from the 25 th IUPAC International Conference on Chemistry Education 2018 (pp. 203–222). Singapore: Springer.


Danczak, S., Thompson, C., & Overton, T. (2017). ‘What does the term Critical Thinking mean to you?’A qualitative analysis of chemistry undergraduate, teaching staff and employers' views of critical thinking. Chemistry Education Research and Practice, 18 , 420–434.

Daniel, K. L., Bucklin, C. J., Leone, E. A., & Idema, J. (2018). Towards a Definition of Representational Competence. In Towards a Framework for Representational Competence in Science Education (pp. 3–11). Switzerland: Springer.

Davies, M. (2013). Critical thinking and the disciplines reconsidered. Higher Education Research & Development, 32 (4), 529–544.

Deloitte Access Economics. (2014). Australia's STEM Workforce: a survey of employers. Retrieved from https://www2.deloitte.com/au/en/pages/economics/articles/australias-stem-workforce-survey.html .

Driscoll, M. P. (2005). Psychology of learning for instruction . Boston, MA: Pearson Education.

Ennis, R. H. (1990). The extent to which critical thinking is subject-specific: Further clarification. Educational researcher, 19 (4), 13–16.

Facione, P. A. (1984). Toward a theory of critical thinking. Liberal Education, 70 (3), 253–261.

Facione, P. A. (1990a). The California Critical Thinking Skills Test—College Level. Technical Report #1: Experimental validation and content validity.

Facione, P. A. (1990b). The California Critical Thinking Skills Test—College Level. Technical Report #2: Factors predictive of CT skills.

Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111 (23), 8410–8415.

Gafney, L., & Varma-Nelson, P. (2008). Peer-led team learning: Evaluation, dissemination, and institutionalization of a college level initiative (Vol. 16). Netherlands: Springer Science & Business Media.

Glassner, A., & Schwarz, B. B. (2007). What stands and develops between creative and critical thinking? Argumentation? Thinking Skills and Creativity, 2 (1), 10–18.

Gosser, D. K., Cracolice, M. S., Kampmeier, J. A., Roth, V., Strozak, V. S., & Varma-Nelson, P. (2001). Peer-led team learning: A guidebook. Upper Saddle River, NJ: Prentice Hall.

Gray, K., & Koncz, A. (2018). The key attributes employers seek on students' resumes. Retrieved from http://www.naceweb.org/about-us/press/2017/the-key-attributes-employers-seek-on-students-resumes/ .

Hanson, D. M. (2008). A cognitive model for learning chemistry and solving problems: implications for curriculum design and classroom instruction. In R. S. Moog & J. N. Spencer (Eds.), Process-Oriented Guided Inquiry Learning (pp. 15–19). Washington, DC: American Chemical Society.

Hattie, J., & Gan, M. (2011). Instruction based on feedback. Handbook of research on learning and instruction , 249-271.

Huitt, W. (1998). Critical thinking: an overview. In Educational psychology interactive Retrieved from http://www.edpsycinteractive.org/topics/cogsys/critthnk.html .

Joint Committee on Standards for Educational Psychological Testing. (2014). Standards for Educational and Psychological Testing : American Educational Research Association.

Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2 (2), 130–144.

Kumi, B. C., Olimpo, J. T., Bartlett, F., & Dixon, B. L. (2013). Evaluating the effectiveness of organic chemistry textbooks in promoting representational fluency and understanding of 2D-3D diagrammatic relationships. Chemistry Education Research and Practice, 14 , 177–187.

Lai, E. R. (2011). Critical thinking: a literature review. Pearson's Research Reports, 6 , 40–41.

Lewis, A., & Smith, D. (1993). Defining higher order thinking. Theory into Practice, 32 , 131–137.

Miri, B., David, B., & Uri, Z. (2007). Purposely teaching for the promotion of higher-order thinking skills: a case of critical thinking. Research in Science Education, 37 , 353–369.

Moog, R. S., & Spencer, J. N. (Eds.). (2008). Process oriented guided inquiry learning (POGIL) . Washington, DC: American Chemical Society.

Moskal, B. M., & Leydens, J. A. (2000). Scoring rubric development: validity and reliability. Practical Assessment, Research and Evaluation, 7 , 1–11.

Nakhleh, M. B. (1992). Why some students don't learn chemistry: Chemical misconceptions. Journal of Chemical Education, 69 (3), 191.

National Research Council. (2011). Assessing 21st Century Skills: Summary of a Workshop . Washington, DC: The National Academies Press.

National Research Council. (2012). Education for Life and Work: Developing Transferable Knowledge and Skills in the 21st Century . Washington, DC: The National Academies Press.

Nguyen, D. H., Gire, E., & Rebello, N. S. (2010). Facilitating Strategies for Solving Work-Energy Problems in Graphical and Equational Representations. 2010 Physics Education Research Conference, 1289 , 241–244.

Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31 (2), 199–218.

Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: a review. Educational Research Review, 9 , 129–144.

Pearl, A. O., Rayner, G., Larson, I., & Orlando, L. (2019). Thinking about critical thinking: An industry perspective. Industry & Higher Education, 33 (2), 116–126.

Ramsden, P. (1997). The context of learning in academic departments. The experience of learning, 2 , 198–216.

Rau, M. A., Kennedy, K., Oxtoby, L., Bollom, M., & Moore, J. W. (2017). Unpacking “Active Learning”: A Combination of Flipped Classroom and Collaboration Support Is More Effective but Collaboration Support Alone Is Not. Journal of Chemical Education, 94 (10), 1406–1414.

Reynders, G., Suh, E., Cole, R. S., & Sansom, R. L. (2019). Developing student process skills in a general chemistry laboratory. Journal of Chemical Education , 96 (10), 2109–2119.

Saxton, E., Belanger, S., & Becker, W. (2012). The Critical Thinking Analytic Rubric (CTAR): Investigating intra-rater and inter-rater reliability of a scoring mechanism for critical thinking performance assessments. Assessing Writing, 17 , 251–270.

Schmidt, H. G., De Volder, M. L., De Grave, W. S., Moust, J. H. C., & Patel, V. L. (1989). Explanatory Models in the Processing of Science Text: The Role of Prior Knowledge Activation Through Small-Group Discussion. J. Educ. Psychol., 81 , 610–619.

Simonson, S. R. (Ed.). (2019). POGIL: An Introduction to Process Oriented Guided Inquiry Learning for Those Who Wish to Empower Learners . Sterling, VA: Stylus Publishing, LLC.

Singer, S. R., Nielsen, N. R., & Schweingruber, H. A. (Eds.). (2012). Discipline-Based education research: understanding and improving learning in undergraduate science and engineering . Washington D.C.: The National Academies Press.

Smit, R., & Birri, T. (2014). Assuring the quality of standards-oriented classroom assessment with rubrics for complex competencies. Studies in Educational Evaluation, 43 , 5–13.

Stein, B., & Haynes, A. (2011). Engaging Faculty in the Assessment and Improvement of Students' Critical Thinking Using the Critical Thinking Assessment Test. Change: The Magazine of Higher Learning, 43 , 44–49.

Swanson, H. L., Oconnor, J. E., & Cooney, J. B. (1990). An Information-Processing Analysis of Expert and Novice Teachers Problem-Solving. American Educational Research Journal, 27 (3), 533–556.

The Royal Society. (2014). Vision for science and mathematics education. London, England: The Royal Society Science Policy Centre.

Watson, G., & Glaser, E. M. (1964). Watson-Glaser Critical Thinking Appraisal Manual . New York, NY: Harcourt, Brace, and World.

Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory into Practice, 41 (2), 64–70.

Zohar, A., Weinberger, Y., & Tamir, P. (1994). The Effect of the Biology Critical Thinking Project on the Development of Critical Thinking. Journal of Research in Science Teaching, 31 , 183–196.


Acknowledgements

We thank members of our Primary Collaboration Team and Implementation Cohorts for collecting and sharing data. We also thank all the students who have allowed us to examine their work and provided feedback.

Supporting information

• Product rubric survey

• Initial implementation survey

• Continuing implementation survey

This work was supported in part by the National Science Foundation under collaborative grants #1524399, #1524936, and #1524965. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Author information

Authors and affiliations.

Department of Chemistry, University of Iowa, W331 Chemistry Building, Iowa City, Iowa, 52242, USA

Gil Reynders & Renée S. Cole

Department of Chemistry, Virginia Commonwealth University, Richmond, Virginia, 23284, USA

Gil Reynders & Suzanne M. Ruder

Department of Chemistry, Drew University, Madison, New Jersey, 07940, USA

Juliette Lantz

Department of Chemistry, Ball State University, Muncie, Indiana, 47306, USA

Courtney L. Stanford


Contributions

RC, JL, and SR performed an initial literature review that was expanded by GR. All authors designed the survey instruments. GR collected and analyzed the survey and interview data with guidance from RC. GR revised the rubrics with extensive input from all other authors. All authors contributed to reliability measurements. GR drafted all manuscript sections. RC provided extensive comments during manuscript revisions; JL, SR, and CS also offered comments. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Renée S. Cole .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Supporting Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Reynders, G., Lantz, J., Ruder, S.M. et al. Rubrics to assess critical thinking and information processing in undergraduate STEM courses. IJ STEM Ed 7 , 9 (2020). https://doi.org/10.1186/s40594-020-00208-5


Received: 01 October 2019

Accepted: 20 February 2020

Published: 09 March 2020

DOI: https://doi.org/10.1186/s40594-020-00208-5


Keywords

  • Constructive alignment
  • Self-regulated learning
  • Process skills
  • Professional skills
  • Critical thinking
  • Information processing



Schema Theory: An Information Processing Model of Perception and Cognition*

American Political Science Review, Volume 67, Issue 4

Published online by Cambridge University Press:  01 August 2014

The world is complex, and yet people are able to make some sense out of it. This paper offers an information-processing model to describe this aspect of perception and cognition. The model assumes that a person receives information which is less than perfect in terms of its completeness, its accuracy, and its reliability. The model provides a dynamic description of how a person evaluates this kind of information about a case, how he selects one of his pre-existing patterns (called schemata) with which to interpret the case, and how he uses the interpretation to modify and extend his beliefs about the case. It also describes how this process allows the person to make the internal adjustments which will serve as feedback for the interpretation of future information. A wide variety of evidence from experimental and social psychology is cited to support the decisions which went into constructing the separate parts of the schema theory, and further evidence is cited supporting the theory's system-level predictions. Since the schema theory allows for (but does not assume) the optimization of its parameters, it is also used as a framework for a normative analysis of the selection of schemata. Finally, a few illustrations from international relations and especially foreign-policy formation show that this model of how people make sense out of a complex world can be directly relevant to the study of important political processes.
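Purely as a loose illustration of the flow described in this abstract (imperfect information about a case, selection of a best-fitting pre-existing schema, and use of that schema to extend beliefs that then feed back into later interpretation), the toy sketch below is not Axelrod's formal model; the example schemas, the fit measure, and the update rule are all invented for illustration.

```python
# Toy sketch of schema-based interpretation; not Axelrod's formal model.
# A "schema" here is a stereotyped pattern of attribute values. Incoming
# information about a case is incomplete, and the best-fitting schema is
# used to fill in the gaps in the interpreter's beliefs about the case.

SCHEMAS = {
    "ally": {"trade": "high", "hostility": "low", "alliances": "shared"},
    "rival": {"trade": "low", "hostility": "high", "alliances": "opposed"},
}


def fit(schema, observed):
    """Count how many observed (non-missing) attributes match the schema."""
    return sum(schema.get(k) == v for k, v in observed.items() if v is not None)


def interpret(observed, prior_beliefs):
    """Select the best-fitting schema and use it to extend incomplete beliefs."""
    best = max(SCHEMAS, key=lambda name: fit(SCHEMAS[name], observed))
    beliefs = dict(prior_beliefs)
    for attribute, value in SCHEMAS[best].items():
        beliefs.setdefault(attribute, value)  # fill gaps, keep existing beliefs
    return best, beliefs


# Incomplete, imperfect information about a new case.
observed = {"trade": "low", "hostility": "high", "alliances": None}
schema, beliefs = interpret(observed, prior_beliefs={"trade": "low"})
print(schema)   # "rival"
print(beliefs)  # missing attributes filled in from the selected schema
```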


For their help I wish to thank Helga Novotny of Cambridge University; A. K. Sen of the London School of Economics; Brian Barry of Oxford University; Barry R. Schlenker of the State University of New York at Albany; Ian Budge, Michael Bloxam, Norman Schofield and Michael Taylor of the University of Essex; Ole Holsti of the University of British Columbia; Ernst Haas and Jeffrey Hart of the University of California at Berkeley; Cedric Smith of the University of London; Zolton Domotor of the University of Pennsylvania; and my research assistants Jacob Bercovitch, Ditsa Kafry, and William Strawn. Parts of this paper have been presented at research seminars at Tel Aviv University, Hebrew University (Jerusalem), Technion (Haifa), the London School of Economics, and the University of Essex. I am grateful to the discussants at all these places. For their generous financial support I wish to thank the Institute of International Studies of the University of California at Berkeley, the Fellowship Program of the Council on Foreign Relations, and the NATO Postdoctoral Fellowship Program of the National Science Foundation. Of course the author is solely responsible for the views presented in this paper.

In reviewing the psychological literature I have relied heavily on Gardner Lindzey and Elliot Aronson, eds., The Handbook of Social Psychology, second edition (Reading, Mass.: Addison-Wesley, 1968, 1969), especially the chapters by Berger and Lambert, “Stimulus-Response Theory in Contemporary Social Psychology,” I, 81–178; Seymour Rosenberg, “Mathematical Models of Social Behavior,” I, 179–244; Robert B. Zajonc, “Cognitive Theories in Social Psychology,” I, 320–411; Leonard Berkowitz, “Social Motivation,” III, 50–135; Henri Tajfel, “Social and Cultural Factors in Perception,” III, 315–394; and George A. Miller and David McNeill, “Psycholinguistics,” III, 666–794.



Information Processing Research Paper


The primary goal of education is to help people learn. More specifically, the goal is to help people learn in ways that will allow them to use what they have learned in new situations—a process that can be called problem-solving transfer (Mayer & Wittrock, 2006). To accomplish this goal, it is useful for educators to have a clear understanding of how the human mind works.


This research paper explores the information processing view of learning, which currently offers the most comprehensive, best supported, and most widely accepted theory of how people learn (Bransford, Brown, & Cocking, 1999; Bruning, Schraw, Norby, & Ronning, 2004; Mayer, 2008). In this paper, I summarize the main tenets of information processing theory; compare it with other views of learning; consider its implications for learning, instructing, and assessing; review its contributions in education, including psychologies of subject matter, cognitive process instruction, and instructional design; and explore future directions for theories of learning in 21st-century education.

Humans are processors of information. This simple statement summarizes the essence of information processing theory. According to the information processing view of how the human mind works, people take in information from the outside world through their eyes and ears, construct an internal mental representation, apply cognitive processes that mentally manipulate that representation, and use their representations to plan and carry out actions. As you can see, two key elements in information processing theory are cognitive representations and cognitive processes. Information from the outside world is transformed into cognitive representations within the learner’s cognitive system; cognitive processes then perform mental computations, that is, the systematic manipulation of the learner’s knowledge.

In short, the “information” part of information processing refers to the learner’s cognitive representations and the “processing” part of information processing refers to the learner’s cognitive processes. Human information processing involves building, manipulating, and using cognitive representations. According to the information processing view, learning involves cognitive processing aimed at building cognitive representations.

Research in cognitive science offers three important principles that should be part of any educationally relevant theory of how people learn (Mayer, 2001, 2005a):

  • Dual channels. People have separate channels for processing verbal material and pictorial material (Paivio, 1986).
  • Limited capacity. Within each channel people are able to attend to only a few pieces of information at any one time (Baddeley, 1999; Sweller, 1999).
  • Active processing. Meaningful learning occurs when people engage in appropriate cognitive processing during learning, including attending to relevant incoming material, mentally organizing the material into a coherent cognitive structure, and integrating the material with relevant existing knowledge (Mayer, 2001; Wittrock, 1989).

Information Processing Model of Learning

Figure 1 presents a framework for describing the human information processing system based on the principles described above (Mayer, 2001). Information from the outside world—such as a textbook lesson or a teacher-led classroom demonstration—enters the learner’s cognitive system through the eyes and ears and is represented briefly in sensory memory. If the learner pays attention, some of the material is transferred to working memory for further processing (as indicated by the selecting words and selecting images arrows in Figure 1). Next, the learner may engage in deeper cognitive processing of the material in working memory, such as mentally organizing the material (as indicated by the organizing words and organizing images arrows) and integrating it with relevant prior knowledge from long-term memory (as indicated by the integrating arrow), and the resulting learning outcome can be stored in long-term memory.

Three Kinds of Memory Stores

As you can see, this information processing framework has three main memory stores: sensory memory, working memory, and long-term memory. Sensory memory is an unlimited but temporary store for holding incoming sensory information, in which visual images and sounds last for a fraction of a second. Sensory information that impinges on the eyes is temporarily held as a fleeting visual image in visual sensory memory, and sensory information that impinges on the ears is temporarily held as a fleeting sound in auditory sensory memory. Working memory is a limited-capacity store in which a few pieces of incoming and retrieved material can be held and processed. Aspects of the visual images that are attended to are held in working memory as pictorial images and, when mentally organized by the learner, can be converted into a coherent pictorial representation (i.e., a pictorial model). Aspects of the auditory sounds that are attended to are held in working memory as sounds and, when mentally organized by the learner, can be converted into a coherent verbal representation (i.e., a verbal model). Further processing occurs in working memory as connections are built between the verbal and visual models and relevant knowledge retrieved from long-term memory. Thus, working memory is the venue for knowledge construction, but the amount of knowledge that can be held and the amount of processing that can take place at any one time are subject to capacity limitations. Long-term memory has unlimited capacity and is the storehouse for knowledge that has been constructed in working memory.
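
To make these three stores concrete, here is a minimal Python sketch. It is not part of the model described in this paper; the class names, the 0.25-second decay value, and the seven-item capacity are illustrative assumptions that only echo the qualitative claims above (unlimited but fleeting sensory memory, capacity-limited working memory, unlimited long-term memory).

    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class SensoryMemory:
        # Unlimited but fleeting: items persist only for a fraction of a second.
        decay_seconds: float = 0.25                  # illustrative value, not from the paper
        buffer: list = field(default_factory=list)

        def register(self, item):
            self.buffer.append(item)                 # everything impinging on the eyes/ears enters

    @dataclass
    class WorkingMemory:
        # Limited capacity: only a few items can be held and processed at one time.
        capacity: int = 7                            # illustrative; not a claim about the exact limit
        items: deque = field(default_factory=deque)

        def hold(self, item):
            if len(self.items) >= self.capacity:
                self.items.popleft()                 # something must be displaced to make room
            self.items.append(item)

    @dataclass
    class LongTermMemory:
        # Unlimited, durable storehouse for knowledge constructed in working memory.
        knowledge: dict = field(default_factory=dict)

        def store(self, topic, representation):
            self.knowledge[topic] = representation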

Three Kinds of Cognitive Processes During Learning

As you also see in Figure 1, this framework has three main kinds of cognitive processes: selecting, organizing, and integrating. By attending to aspects of material in sensory memory, the learner can transfer it to working memory for further processing. In Figure 1, selecting is indicated by the arrows from sensory memory to working memory. By mentally organizing the material in working memory, the learner can construct a coherent cognitive structure. Organizing is indicated by the arrows within working memory. By retrieving relevant prior knowledge and connecting it logically with incoming material in working memory, the learner can construct a meaningful learning outcome. Integrating is indicated by the arrow from long-term memory to working memory. In addition, the learner can make connections between corresponding aspects of the verbal and pictorial models as indicated by the arrow between them.

Figure 1: Human Information Processing System


Active cognitive processing for meaningful learning requires that the learner engage in all three kinds of cognitive processing during learning—selecting relevant information, organizing it into coherent cognitive representations, and integrating it with other representations and knowledge from long-term memory. Finally, the arrow from working memory to long-term memory signifies a fourth kind of cognitive process—encoding the newly constructed knowledge in long-term memory.
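
The three processes (plus encoding) can also be read as a pipeline. The following sketch is my own illustration, assuming toy list and dictionary representations; the function names and the lightning-lesson content are not taken from the paper.

    def select(sensory_items, is_relevant, capacity=7):
        # Selecting: attend to a few relevant items and transfer them to working memory.
        return [item for item in sensory_items if is_relevant(item)][:capacity]

    def organize(selected_items):
        # Organizing: arrange the selected items into a coherent structure (here, an ordered chain).
        return {"structure": "causal chain", "steps": list(selected_items)}

    def integrate(structure, prior_knowledge):
        # Integrating: connect the new structure with relevant knowledge retrieved from long-term memory.
        structure["connections"] = [k for k in prior_knowledge if k in structure["steps"]]
        return structure

    def encode(long_term_memory, topic, outcome):
        # Encoding: store the newly constructed knowledge in long-term memory.
        long_term_memory[topic] = outcome

    # Toy run: a short lesson on lightning, with one irrelevant item to be filtered out.
    ltm = {"warm air rises": "...", "opposite charges attract": "..."}
    lesson = ["warm air rises", "advertisement", "charges separate in the cloud", "opposite charges attract"]
    selected = select(lesson, is_relevant=lambda s: s != "advertisement")
    outcome = integrate(organize(selected), ltm)
    encode(ltm, "how lightning forms", outcome)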

Three Kinds of Cognitive Load

In the information processing model in Figure 1, each of the two channels in working memory is limited in capacity. Sweller (1999, 2005) and Mayer (2001, 2005a; Mayer & Moreno, 2003) identified three different kinds of cognitive load in working memory: extraneous cognitive processing (or extraneous cognitive load), essential cognitive processing (or intrinsic cognitive load), and generative cognitive processing (or germane cognitive load). Extraneous cognitive processing does not support the instructional goal and is caused by an ineffective presentation format. For example, if a text lesson contains a lot of extraneous information and pictures, learners may focus mainly on that material instead of the key information. Similarly, in a group learning situation students may spend their time discussing topics that have nothing to do with the instructional task. An important instructional goal is to minimize extraneous processing, such as by eliminating extraneous material from the lesson.

Essential cognitive processing is needed to mentally represent the presented material (i.e., the process of selecting in Figure 1) and is caused by the complexity of the material to be learned. For example, a topic such as how lightning storms develop is complex because there are many interacting elements such as the effects of differences in air temperature and differences in electrical charge. An important instructional goal is to manage essential processing, such as by providing pretraining in the names and characteristics of the key elements or presenting the material in segments.

Generative cognitive processing is deeper cognitive processing (i.e., the processes of organizing and integrating in Figure 1) and is primed by the learner’s motivation to understand the material. An important instructional goal is to foster generative processing, such as by asking learners to explain a lesson to themselves.

Overall, learners have a limited capacity for processing information in working memory, so instruction should be designed to minimize extraneous cognitive processing, manage essential processing, and foster generative processing.
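
As a rough way to picture this budget, the sketch below treats the three kinds of processing as if they consumed a shared, fixed capacity. The numeric units and the capacity value are invented for illustration; cognitive load is not literally measured in additive units like this.

    WORKING_MEMORY_CAPACITY = 10   # arbitrary illustrative units

    def total_load(extraneous, essential, generative):
        # Total processing demanded of working memory at one moment.
        return extraneous + essential + generative

    def instructional_advice(extraneous, essential, generative, capacity=WORKING_MEMORY_CAPACITY):
        # Suggest the kind of instructional move the model implies.
        if total_load(extraneous, essential, generative) <= capacity:
            return "Within capacity: there is room to foster generative processing."
        if extraneous > 0:
            return "Overloaded: first minimize extraneous processing (e.g., cut irrelevant material)."
        return "Overloaded: manage essential processing (e.g., pretraining or segmenting)."

    print(instructional_advice(extraneous=4, essential=5, generative=3))   # overloaded by extraneous load
    print(instructional_advice(extraneous=0, essential=8, generative=4))   # overloaded by complexity alone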

Five Kinds of Knowledge

According to the information processing view, learning involves knowledge construction. An important contribution of the information processing view is the analysis of several qualitatively different kinds of knowledge:

  • Facts: descriptions of things or events, such as “Earth is the third planet from the sun.”
  • Concepts: principles or models (such as having the concept of place value for written numbers) and categories or schemas (such as knowing what a dog is)
  • Procedures: step-by-step processes, such as knowing how to carry out the long-division procedure for 425 divided by 17
  • Strategies: general methods, such as knowing how to summarize a paragraph
  • Beliefs: thoughts about one’s cognitive processing, such as believing “I am good at learning about how the mind works.”

Learners possess all five kinds of knowledge in long-term memory, and proficiency in most complex tasks requires being able to coordinate among them (Anderson et al., 2001; Mayer, 2008). Metastrategies are strategies for how to manage and coordinate all the types of knowledge, and they are an important aspect of the knowledge of successful learners (McCormick, 2003).
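
One compact way to see this taxonomy is as tagged entries in long-term memory that a metastrategy could coordinate across. The enumeration and the sample entries below are only an illustration of the five categories, reusing the examples from the list above.

    from enum import Enum

    class KnowledgeKind(Enum):
        FACT = "fact"
        CONCEPT = "concept"
        PROCEDURE = "procedure"
        STRATEGY = "strategy"
        BELIEF = "belief"

    # A toy slice of one learner's long-term memory, tagged by kind.
    long_term_memory = [
        (KnowledgeKind.FACT, "Earth is the third planet from the sun."),
        (KnowledgeKind.CONCEPT, "place value for written numbers"),
        (KnowledgeKind.PROCEDURE, "long division, e.g., 425 divided by 17"),
        (KnowledgeKind.STRATEGY, "how to summarize a paragraph"),
        (KnowledgeKind.BELIEF, "I am good at learning about how the mind works."),
    ]

    def knowledge_of(kind):
        # Retrieve every entry of one kind; a metastrategy would coordinate across kinds.
        return [entry for k, entry in long_term_memory if k is kind]

    print(knowledge_of(KnowledgeKind.PROCEDURE))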

Competing Views of Learning

Over the course of the past 100 years, researchers in psychology and education have posited four major metaphors of learning: response strengthening, information acquisition, knowledge construction, and social construction.

Learning As Response Strengthening

Learning as response strengthening—a popular view in the first half of the 20th century—conceptualizes learning as strengthening and weakening of associations. According to this view, the learner is a recipient of rewards and punishments and the teacher is a dispenser of rewards and punishments; a common instructional method is drill and practice. What is the relation of this view to the information processing view? In its traditional behaviorist form, the response strengthening view holds that rewards automatically strengthen associations and punishments automatically weaken them. In contrast, research in the information processing tradition has shown that it is not rewards and punishments per se that cause learning but rather it is the learner’s interpretation of the rewards and punishments (Lepper & Greene, 1978). In short, according to the information processing view, learners apply cognitive processing to the information in their environment including the rewards and punishments they receive.

Learning as Information Acquisition

Learning as information acquisition—a popular view in the mid-20th century—conceptualizes learning as adding information to long-term memory. According to this view, the learner is a recipient of information and the teacher is a dispenser of information; a common instructional method is lecturing or assigning readings. This is an early version of information processing theory in which information is seen as a commodity that can be transferred from one person’s memory to another person’s memory. In contrast, a more recent version—reflected in the knowledge construction metaphor—focuses on knowledge (which consists of cognitive representations in the learner’s memory system) rather than information (which consists of symbols that exist in objective reality for all to see) and focuses on the constructive processes (such as selecting, organizing, and integrating) rather than acquisition processes (such as simply adding information to memory). In short, many of the criticisms of the information processing view are attacks on the learning as information acquisition view, whereas the current version of the information processing view is reflected in the learning as knowledge construction metaphor.

Learning as Knowledge Construction

Learning as knowledge construction—a popular view since the last third of the 20th century—conceptualizes learning as building coherent cognitive structures. According to this view, the learner is an active sense maker and the teacher is a cognitive guide; an exemplary instructional method is asking learners to engage in self-explanation as they read or providing worked examples along with problems to solve. This view best epitomizes the information processing view of human cognition.

To better understand the distinction between learning as information acquisition and learning as knowledge construction, consider a lesson in which the teacher asks the class to view a 5-minute narrated video on how lightning storms develop. Which view of learning is this lesson most consistent with? It certainly is consistent with the information acquisition view because the instruction presents information for the learner to acquire. It would be consistent with the knowledge construction view only if the instructor also helps to guide the learner’s cognitive processing of the material—that is, if the instruction encourages the learner to select relevant information (such as focusing on key steps in the process), mentally organize it (such as building a causal chain of the key steps in the process), and relate it with prior knowledge (such as remembering knowledge related to each step, including why hot air rises). When the learners are inexperienced they may need some guidance in how to make sense of the material. This can be done by providing pretraining in the key terms (positively charged particle and negatively charged particle, freezing level, etc.), by highlighting key steps, by reminding students of their prior knowledge concerning temperature differences and differences in electrical charge, by breaking the lesson into segments that can be paced by the learner, by asking learners to explain each segment to themselves, and many other techniques that help learners process the material more deeply. Overall, the learning as information acquisition view is consistent with instruction that simply presents information to be learned, whereas the learning as knowledge construction view requires both presenting information and making sure the learner processes it appropriately.

Learning As Social Construction

Learning as social construction—an emerging view in the latter part of the 20th century—conceptualizes learning as a sociocultural event that occurs within groups as members work together to accomplish some authentic task. An exemplary instructional method is working together as a group on a significant academic project.

How does the learning as social construction metaphor compare with the information processing view? The answer depends on whether one takes a cognitive or radical approach. According to the cognitive version of social constructivism, people build cognitive representations when they work together on a task. This view is consistent with the basic tenets of the information processing view because individual learners are applying cognitive processes and building cognitive representations. According to a radical version of social constructivism, learning does not occur within learners and teaching does not emanate from teachers but rather is a sociocultural construction produced by a group and stored as a cultural product of the group (Phillips & Burbules, 2000). This view of social constructivism is not consistent with the information processing view, nor does it offer testable hypotheses needed to qualify as a scientific theory.

Implications

The information processing view has implications for learning, instructing, and assessing.

Implications for Learning

What is learning? Learning is a long-lasting change in the learner’s knowledge as a result of the learner’s experience. This definition has three components:

  • Learning is long lasting, so fleeting changes such as mood changes do not count as learning.
  • Learning is a change in knowledge; because knowledge is internal, it must be inferred from changes in behavior.
  • Learning is a result of experience, so changes due to fatigue, injury, or drugs do not count as learning.

One of the most contentious aspects of this definition concerns its second element—namely, what is learned. According to the behaviorist view of learning, which dominated psychology and education in the first half of the 20th century, what is learned is a change in behavior. According to the cognitive (or information processing) view of learning, which became dominant in the 1960s, what is learned is a change in knowledge. Knowledge is an internal, cognitive representation that is not directly observable, so it can only be inferred from changes in behavior. Thus, the information processing view focuses on changes in the learner’s behavior as a way of determining changes in the learner’s knowledge. A major implication of the information processing view is to modify the definition of learning so that what is learned is a change in knowledge rather than a change in behavior. The information processing view puts the construction of knowledge—or cognitive representations—at the center of learning.

Implications for Instructing

What is instruction? According to the classic view, instruction is concerned with presenting material to learners. In contrast, according to the information processing view, instruction is activity by the teacher intended to guide the learner’s cognitive processing during learning. The information processing model summarized in Figure 1 contains three kinds of cognitive processes during learning: selecting relevant information for further processing, organizing the selected information into coherent representations, and integrating the information with appropriate knowledge from long-term memory. Meaningful learning—the construction of a meaningful learning outcome—requires that the learner engage in all three kinds of cognitive processes. Rote learning—the construction of rote learning outcomes—requires that the learner engage in selecting relevant knowledge but does not require the deeper processing of organizing and integrating. Finally, no learning occurs when the learner does not engage in any of the three kinds of cognitive processes.

A major challenge of instructional design is to present material and encourage appropriate cognitive processing in a way that does not overload the learner’s information processing system. As you can see, a major implication of the information processing view is that the goal of instruction is more than simply presenting material; in addition, instructors must also guide the way that learners process the material. The information processing view changes the focus of instruction from presenting information to guiding the learner’s processing of the presented information. The information processing view puts cognitive processing—such as selecting, organizing, and integrating—at the center of learning.

Implications for Assessing

What is assessment? According to the classic view, the goal of assessment is to measure performance, such as how many arithmetic problems a learner can solve in a given period of time. The classic view of assessment is concerned mainly with determining how much is learned. In contrast, the goal of assessment in the information processing view is to measure knowledge, including the degree to which the learning outcome is meaningful or rote. The information processing view is concerned mainly with what is learned, that is, “knowing what students know” (Pellegrino, Chudowsky, & Glaser, 2001).

The information processing view has useful implications for how to assess what is learned. The two most common ways of assessing learning outcomes are retention tests—such as asking the learner to recall or recognize what was presented—and transfer tests—such as asking the learner to use the material to solve a new problem. No learning is indicated by poor performance on both retention and transfer. Rote learning is indicated by good performance on retention and poor performance on transfer. Meaningful learning is indicated by good performance on both retention and transfer. Thus, according to the information processing view, the quality of learning outcomes can be inferred by examining the pattern of performance on a series of dependent measures, including retention and transfer tests. Other techniques for probing knowledge use coding systems based on interviews and observations. Techniques for assessing one of the five kinds of knowledge (i.e., facts, concepts, procedures, strategies, or beliefs) may not be appropriate for assessing other kinds. Thus, a major implication of the information processing view is to assess what is learned rather than how much is learned.
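
The inference pattern described in the preceding paragraph amounts to a simple decision rule. The sketch below is a minimal illustration; the function name, the proportion-correct scale, and the 0.70 cutoff for "good" performance are assumptions introduced here, not values taken from the assessment literature.

    def classify_learning_outcome(retention: float, transfer: float,
                                  cutoff: float = 0.70) -> str:
        """Infer the quality of a learning outcome from two test scores.

        retention, transfer: proportion correct on each kind of test (0.0 to 1.0).
        cutoff: illustrative threshold for "good" performance.
        """
        good_retention = retention >= cutoff
        good_transfer = transfer >= cutoff
        if good_retention and good_transfer:
            return "meaningful learning"  # good retention, good transfer
        if good_retention:
            return "rote learning"        # good retention, poor transfer
        return "no learning"              # poor retention and poor transfer

    # Example: a learner who recalls the material well but cannot apply it
    print(classify_learning_outcome(retention=0.85, transfer=0.40))  # rote learning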

Major Contributions in Educational Psychology

One way to judge the value of the information processing view is to examine whether it has generated useful research. In this section, I describe three examples of how the information processing approach has provided useful contributions to research in educational psychology: psychologies of subject matter, cognitive process instruction, and multimedia instructional design.

Psychologies of Subject Matter

Psychologies of subject matter represent a shift from studying learning in general to studying how learning works within specific subject areas. For example, instead of asking, How do people learn? researchers studying psychologies of subject matter ask, How do people learn to read, to write, to comprehend text, to solve math problems, or to think scientifically? (Mayer, 2004).

In order to help people learn to carry out basic academic tasks—such as reading a sentence, comprehending a paragraph, writing an essay, solving a word problem, or conducting a scientific experiment—the first step is to specify the cognitive processes involved in the tasks. Based on an information processing view, progress can be made by asking, What are the cognitive processes required for an academic task? and What do you need to know in order to accomplish an academic task? For example, in order to solve a math story problem, students need to be able to engage in four cognitive processes (a brief worked example follows the list):

  • Problem translation, converting each sentence of the problem into an internal mental representation by using factual knowledge (e.g., knowing how many cents are in a dollar) and linguistic knowledge (e.g., knowing that adding “s” turns a word into a plural)
  • Problem integration, organizing the information into a coherent statement of the problem—called a situation model—by using conceptual knowledge (e.g., knowing place value) and schematic knowledge (e.g., knowing problem types)
  • Solution planning and monitoring, devising a plan for solving the problem by using strategic knowledge (e.g., knowing how to break a solution plan into parts)
  • Solution execution, carrying out the plan to arrive at an answer by using procedural knowledge (e.g., knowing how to add, subtract, multiply, and divide numbers)
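
To make the four processes concrete, here is a small worked example expressed in code. The story problem, the numbers, and the variable names are invented for illustration and are not drawn from the cited studies.

    # Hypothetical problem: "A pencil costs 25 cents. A notebook costs 3 times as
    # much as a pencil. How much do one pencil and one notebook cost together?"

    # 1. Problem translation: convert each sentence into an internal representation
    pencil_cost = 25        # "A pencil costs 25 cents"
    notebook_factor = 3     # "A notebook costs 3 times as much as a pencil"

    # 2. Problem integration: build a situation model relating the quantities
    notebook_cost = notebook_factor * pencil_cost

    # 3. Solution planning and monitoring: first find the notebook's cost,
    #    then add the two costs (and check that the result makes sense)

    # 4. Solution execution: carry out the arithmetic
    total_cost = pencil_cost + notebook_cost
    print(f"Total cost: {total_cost} cents")  # Total cost: 100 cents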

Research on expertise (Bransford et al., 1999; Mayer, 2008) shows that experts—or proficient performers—know something different from novices. For example, successful mathematical problem solvers are able to represent the problem using concrete objects, and when less successful students are given instruction in how to represent problems in this way their problem-solving performance improves (Lewis, 1989; Low & Over, 1989). An important contribution of the information processing approach is the pinpointing of specific knowledge needed for proficiency on academic tasks (Kilpatrick, Swafford, & Findell, 2001). Overall, psychologies of subject matter represent one of educational psychology’s success stories in the late 20th century.

Cognitive Process Instruction

Cognitive process instruction involves providing focused instruction on cognitive processes needed for success on academic tasks. For example, if the goal is to improve reading comprehension, then students may need explicit instruction in how to engage in a process of self-explanation in which they try to explain discrepancies they find in the text as they read. Students who are taught how to engage in self-explanations show large improvements in their reading comprehension (Roy & Chi, 2005). Pressley and Woloshyn (1995) and Pressley and Harris (2006) have shown how cognitive process instruction can be applied across the curriculum. Thus, an important contribution of the information processing approach is the focus on teaching specific cognitive processes required for success in school. Cognitive process instruction has been another one of educational psychology’s success stories in the late 20th century.

Multimedia Instructional Design

Advances in instructional design principles represent a third example of the contributions of the information processing view. In particular, the information processing view has contributed to the creation of a new generation of principles for how to design instructional messages (such as textbook and online lessons). In taking an information processing approach, the focus is on helping learners engage in appropriate cognitive processing of the presented material, while being sensitive to characteristics of the human information processing system.

For example, some basic principles for how to design multimedia instructional messages (i.e., lessons containing words and pictures) are coherence, spatial contiguity, modality, and redundancy (Mayer, 2001; Mayer & Moreno, 2003). People learn better when extraneous material is excluded from a lesson. The coherence principle is based on the idea that working memory capacity is limited, so when a learner is processing extraneous material, the learner may not be able to process relevant material.

People also learn better when corresponding printed words and pictures are placed near rather than far from each other on the page or screen. The spatial contiguity principle is based on the idea that working memory is limited, so when the learner uses processing capacity to scan between words at the bottom of the screen and the appropriate portion of the graphic, the learner may have inadequate remaining capacity for deep processing of the target material.

The modality principle holds that people learn better from graphics and concurrent narration than from graphics and concurrent on-screen text. When learners must attend to both graphics and printed words, they must split their visual attention (in visual sensory memory) because the eyes can only look at one location at a time. When words are presented in spoken form, they enter the information processing system through the ears (and auditory sensory memory), thus off-loading some of the processing from the visual channel.

Another principle of multimedia instructional design is the redundancy principle. People learn better from graphics and concurrent narration than from graphics, concurrent narration, and concurrent on-screen text. Adding on-screen text creates the same split-attention problem described for the modality principle and invites extraneous processing in which learners waste limited processing capacity on trying to reconcile the two verbal streams.

As you can see, these principles and many others (Mayer, 2005b) are based on an information processing view. The information processing view has contributed to yet another of educational psychology’s success stories—the development of theory-based instructional design principles that work when tested empirically (O’Neil, 2005).

The information processing view of learning holds that people construct cognitive representations by applying cognitive processes. The learner’s construction of knowledge corresponds to the “information” side of information processing, whereas the learner’s application of cognitive processes corresponds to the “processing” side of information processing. According to the information processing view, the main goal of education is to foster changes in the learner’s knowledge. This goal is accomplished by devising instruction that helps guide the learner’s cognitive processing during learning.

The information processing view has fundamental implications for 21st-century education. In the 21st century, the task of educators is not just to present information or to provide learning environments for students. In addition, a major challenge of educators is to guide how students process information during learning. To meet this challenge, educators would benefit from an understanding of how the human information processing system works.

Bibliography:

  • Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., et al. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman.
  • Baddeley, A. D. (1999). Human memory. Boston: Allyn & Bacon.
  • Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How people learn. Washington, DC: National Academies Press.
  • Bruning, R. H., Schraw, G. J., Norby, M. M., & Ronning, R. R. (2004). Cognitive psychology and instruction (4th ed.). Upper Saddle River, NJ: Prentice Hall.
  • Kilpatrick, J., Swafford, J., & Findell, S. (Eds.). (2001). Adding it up: Helping children learn mathematics. Washington, DC: National Academies Press.
  • Lepper, M. R., & Greene, D. (1978). The hidden costs of reward. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Lewis, A. B. (1989). Training students to represent arithmetic word problems. Journal of Educational Psychology, 79, 521-531.
  • Low, R., & Over, R. (1989). Detection of missing and irrelevant information within algebraic story problems. Journal of Educational Psychology, 79, 296-305.
  • Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.
  • Mayer, R. E. (2004). Teaching of subject matter. In S. T. Fiske, D. L. Schacter, & C. Zahn-Waxler (Eds.), Annual Review of Psychology (Vol. 55, pp. 715-744). Palo Alto, CA: Annual Reviews.
  • Mayer, R. E. (2005a). Cognitive theory of multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 31-48). New York: Cambridge University Press.
  • Mayer, R. E. (Ed.). (2005b). The Cambridge handbook of multimedia learning. New York: Cambridge University Press.
  • Mayer, R. E. (2008). Learning and instruction (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
  • Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38, 43-52.
  • Mayer, R. E., & Wittrock, M. C. (2006). Problem solving. In P. A. Alexander & P. H. Winne (Eds.), Handbook of educational psychology (2nd ed., pp. 287-303). Mahwah, NJ: Lawrence Erlbaum Associates.
  • McCormick, C. B. (2003). Metacognition and learning. In W. M. Reynolds & G. E. Miller (Eds.), Handbook of psychology: Volume 7, Educational psychology (pp. 79-102). New York: Wiley.
  • O’Neil, H. F. (Ed.). (2005). What works in distance learning: Guidelines. Greenwich, CT: Information Age Publishing.
  • Paivio, A. (1986). Mental representations: A dual coding approach. Oxford, UK: Oxford University Press.
  • Pellegrino, J. W., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what students know. Washington, DC: National Academies Press.
  • Phillips, D. C., & Burbules, N. C. (2000). Postpositivism and educational research. Lanham, MD: Rowman & Littlefield.
  • Pressley, M., & Harris, K. R. (2006). Cognitive strategies instruction: From basic research to classroom instruction. In P. A. Alexander & P. H. Winne (Eds.), Handbook of educational psychology (pp. 265-286). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Pressley, M., & Woloshyn, V. (1995). Cognitive process instruction that really improves children’s academic performance. Cambridge, MA: Brookline Books.
  • Roy, M., & Chi, M. T. H. (2005). The self-explanation effect in multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 271-286). New York: Cambridge University Press.
  • Sweller, J. (1999). Instructional design in technical areas. Camberwell, Australia: ACER Press.
  • Sweller, J. (2005). Implications of cognitive load theory for multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 19-30). New York: Cambridge University Press.
  • Wittrock, M. C. (1989). Cognitive processes of comprehension. Educational Psychologist, 24, 345-376.

How to Write a Research Proposal Paper

Table of Contents

  • What is a Research Proposal Paper?
  • Why Write a Research Proposal Paper?
  • How to Plan a Research Proposal Paper
  • Components of a Research Proposal Paper
  • Research Proposal Examples
  • Help & Additional Resources

This resource page will help you:

  • Learn what a research proposal paper is.  
  • Understand the importance of writing a research proposal paper. 
  • Understand the steps in the planning stages of a research proposal paper.  
  • Identify the components of a research proposal paper.  

What is a Research Proposal Paper?

A research proposal paper:

  • includes sufficient information about a research study that you propose to conduct for your thesis (e.g., in an MT, MA, or Ph.D. program) or that you imagine conducting (e.g., in an MEd program). It should help your readers understand the scope, validity, and significance of your proposed study.  
  • may be a stand-alone paper or one part of a larger research project, depending on the nature of your assignment. 
  • typically follows the citation format of your field, which at OISE is APA.

Your instructor will provide you with assignment details that can help you determine how much information to include in your research proposal, so you should carefully check your course outline and assignment instructions.  

Why Write a Research Proposal Paper?

Writing a research proposal allows you to:

  • develop skills in designing a comprehensive research study;
  • learn how to identify a research problem that can contribute to advancing knowledge in your field of interest;
  • further develop skills in finding foundational and relevant literature related to your topic;
  • critically review, examine, and consider the use of different methods for gathering and analyzing data related to the research problem;
  • see yourself as an active participant in conducting research in your field of study.

Writing a research proposal paper can help clarify questions you may have before designing your research study. It is helpful to get feedback on your research proposal and edit your work to be able to see what you may need to change in your proposal. The more diverse opinions you receive on your proposal, the better prepared you will be to design a comprehensive research study. 

How to Plan your Research Proposal

Before starting your research proposal, you should clarify your ideas and make a plan. Ask yourself these questions and take notes:  

  • What do I want to study?
  • Why is the topic important? Why is it important to me?
  • How is the topic significant within the subject areas covered in my class?
  • What problems will it help solve?
  • How does it build on research already conducted on the topic?
  • What exactly should I plan to do to conduct a study on the topic?

It may be helpful to write down your answers to these questions and use them to tell a story about your chosen topic to your classmates or instructor. As you tell your story, write down comments or questions from your listeners. This will help you refine your proposal and research questions. 

This is an example of how to start planning and thinking about your research proposal assignment. You will find a student’s notes and ideas about their research proposal topic, “Perspectives on Textual Production, Student Collaboration, and Social Networking Sites,” hyperlinked on the original resource page.

Components of a Research Proposal Paper

A research proposal paper typically includes:

  • an introduction  
  • a theoretical framework 
  • a literature review 
  • the methodology  
  • the implications of the proposed study and conclusion 
  • references 

Start your introduction by giving the reader an overview of your study. Include:  

  • the research context (in what educational settings do you plan to conduct this study?) 
  • the research problem, purpose (What do you want to achieve by conducting this study?) 
  • a brief overview of the literature on your topic and the gap your study hopes to fill 
  •  research questions and sub-questions 
  • a brief mention of your research method (How do you plan to collect and analyze your data?) 
  • your personal interest in the topic. 

 Conclude your introduction by giving your reader a roadmap of your proposal. 

To learn more about paper introductions, check How to Write Introductions.

A theoretical framework refers to the theories that you will use to interpret both your own data and the literature that has come before. Think about theories as lenses that help you look at your data from different perspectives, beyond just your own personal perspective. Think about the theories that you have come across in your courses or readings that could apply to your research topic. When writing the theoretical framework, include 

  • A description of where the theories come from (original thinkers), their key components, and how they have developed over time. 
  • How you plan to use the theories in your study / how they apply to your topic. 

The literature review section should help you identify topics or issues that contextualize what the research has and has not found on the topic so far, and it should convince your reader that your proposed study is important. This is where you can go into more detail on the gap that your study hopes to fill. Ultimately, a good literature review helps your reader learn more about the topic that you have chosen to study and what still needs to be researched.

To learn more about literature reviews, check What is a Literature Review.

The methods section should briefly explain how you plan to conduct your study and why you have chosen a particular method. You may also include  

  • your overall study design (quantitative, qualitative, mixed methods) and the proposed stages 
  • your proposed research instruments (e.g. surveys, interviews)  
  • your proposed participant recruitment channels / document selection criteria 
  • a description of your proposed study participants (age, gender, etc.). 
  • how you plan to analyze the data.  

You should cite relevant literature on research methods to support your choices. 

The conclusion section should include a short summary about the implications and significance of your proposed study by explaining how the possible findings may change the ways educators and/or stakeholders address the issues identified in your introduction. 

Depending on the assignment instructions, the conclusion can also highlight next steps and a timeline for the research process. 

To learn more about paper conclusions, check How to Write Conclusions.

List all references you used and format them according to APA style. Make sure that everything in your reference list is cited in the paper, and every citation in your paper is in your reference list.  

To learn more about writing citations and references, check Citations & APA.

Research Proposal Examples

These are detailed guidelines on how to prepare a quantitative research proposal, adapted from the course APD2293 “Interpretation of Educational Research” and hyperlinked on the original resource page.

Related Resource Pages on ASH

  • What is a Literature Review?
  • How to Prepare a Literature Review
  • How to Understand & Plan Assignments
  • Citations and APA Style
  • How to Integrate Others' Research into your Writing
  • How to Write Introductions
  • How to Write Conclusions

Additional Resources

  • Writing a research proposal– University of Southern California   
  • Owl Purdue-Graduate-Specific Genres-Purdue University  
  • 10 Tips for Writing a research proposal – McGill University  

On Campus Services

  • Book a writing consultation (OSSC)
  • Book a Research Consultation (OISE Library)

REVIEW article

Information Processing: The Language and Analytical Tools for Cognitive Psychology in the Information Age

Aiping Xiong

  • Department of Psychological Sciences, Purdue University, West Lafayette, IN, United States

The information age can be dated to the work of Norbert Wiener and Claude Shannon in the 1940s. Their work on cybernetics and information theory, and many subsequent developments, had a profound influence on reshaping the field of psychology from what it was prior to the 1950s. Contemporaneously, advances also occurred in experimental design and inferential statistical testing stemming from the work of Ronald Fisher, Jerzy Neyman, and Egon Pearson. These interdisciplinary advances from outside of psychology provided the conceptual and methodological tools for what is often called the cognitive revolution but is more accurately described as the information-processing revolution. Cybernetics set the stage with the idea that everything ranging from neurophysiological mechanisms to societal activities can be modeled as structured control systems with feedforward and feedback loops. Information theory offered a way to quantify entropy and information, and promoted theorizing in terms of information flow. Statistical theory provided means for making scientific inferences from the results of controlled experiments and for conceptualizing human decision making. With those three pillars, a cognitive psychology adapted to the information age evolved. The growth of technology in the information age has resulted in human lives being increasingly interweaved with the cyber environment, making cognitive psychology an essential part of interdisciplinary research on such interweaving. Continued engagement in interdisciplinary research at the forefront of technology development provides a chance for psychologists not only to refine their theories but also to play a major role in the advent of a new age of science.

Information is information, not matter or energy

Wiener (1952 , p. 132)

Introduction

The period of human history in which we live is frequently called the information age , and it is often dated to the work of Wiener (1894–1964) and Shannon (1916–2001) on cybernetics and information theory. Each of these individuals has been dubbed the “father of the information age” ( Conway and Siegelman, 2005 ; Nahin, 2013 ). Wiener’s and Shannon’s work quantitatively described the fundamental phenomena of communication, and subsequent developments linked to that work had a profound influence on re-shaping many fields, including cognitive psychology from what it was prior to the 1950s ( Cherry, 1957 ; Edwards, 1997 , p. 222). Another closely related influence during that same period is the statistical hypothesis testing of Fisher (1890–1962), the father of modern statistics and experimental design ( Dawkins, 2010 ), and Jerzy Neyman (1894–1981), and Egon Pearson (1895–1980). In the U.S., during the first half of the 20th century, the behaviorist approach dominated psychology ( Mandler, 2007 ). In the 1950s, though, based mainly on the progress made in communication system engineering, as well as statistics, the human information-processing approach emerged in what is often called the cognitive revolution ( Gardner, 1985 ; Miller, 2003 ).

The information age has had, and continues to have, a great impact on psychology and society at large. Since the 1950s, science and technology have progressed with each passing day. The promise of the information-processing approach was to bring knowledge of the human mind to a level at which cognitive mechanisms could be modeled to explain the processes between people’s perception and action. This promise, though far from completely fulfilled, has been increasingly realized. However, as with any period in human history, the information age will come to an end at some future time and be replaced by another age. We are not claiming that information will become obsolete in the new age, just that it will become necessary but not sufficient for understanding people and society in the new era. Comprehending how and why the information-processing revolution in psychology occurred should prepare psychologists to deal with the changes that accompany the new era.

In the present paper, we consider the information age from a historical viewpoint and examine its impact on the emergence of contemporary cognitive psychology. Our analysis of the historical origins of cognitive psychology reveals that applied research incorporating multiple disciplines provided conceptual and methodological tools that advanced the field. An implication, which we explore briefly, is that interdisciplinary research oriented toward solving applied problems is likely to be the source of the next advance in conceptual and methodological tools that will enable a new age of psychology. In the following sections, we examine milestones of the information age and link them to the specific language and methodology for conducting psychological studies. We illustrate how the research methods and theory evolved over time and provide hints for developing research tools in the next age for cognitive psychology.

Cybernetics and Information Theory

Wiener and Cybernetics

Norbert Wiener is an individual whose impact on the field of psychology has not been acknowledged adequately. Wiener, a mathematician and philosopher, was a child prodigy who earned his Ph.D. from Harvard University at age 18. He is best known for establishing what he labeled Cybernetics ( Wiener, 1948b ), which is also known as control theory, although he made many other contributions of note. A key feature of Wiener’s intellectual development and scientific work is its interdisciplinary nature ( Montagnini, 2017b ).

Prior to college, Wiener was influenced by Harvard physiologist Walter B. Cannon ( Conway and Siegelman, 2005 ), who later, in 1926, devised the term homeostasis , “the tendency of an organism or a cell to regulate its internal conditions, usually by a system of feedback controls…” ( Biology Online Dictionary, 2018 ). During his undergraduate and graduate education, Wiener was inspired by several Harvard philosophers ( Montagnini, 2017a ), including William James (pragmatism), George Santayana (positivistic idealism), and Josiah Royce (idealism and the scientific method). Motivated by Royce, Wiener committed himself to the study of logic and completed his dissertation on mathematical logic. Following graduate school, Wiener traveled on a postdoctoral fellowship to pursue his study of mathematics and logic, working with philosopher/logician Bertrand Russell and mathematician/geneticist Godfrey H. Hardy in England, mathematicians David Hilbert and Edmund Landau in Europe, and philosopher/psychologist John Dewey in the U.S.

Wiener’s career was characterized by a commitment to apply mathematics and logic to real-world problems, which was sparked by his working for the U.S. Army. According to Hulbert ( 2018 , p. 50),

He returned to the United States in 1915 to figure out what he might do next, at 21 jumping among jobs… His stint in 1918 at the U.S. Army’s Aberdeen Proving Ground was especially rewarding…. Busy doing invaluable work on antiaircraft targeting with fellow mathematicians, he found the camaraderie and the independence he yearned for. Soon, in a now-flourishing postwar academic market for the brainiacs needed in a science-guided era, Norbert found his niche. At MIT, social graces and pedigrees didn’t count for much, and wartime technical experience like his did. He got hired. The latest mathematical tools were much in demand as electronic communication technology took off in the 1920s.

Wiener began his early research in applied mathematics on stochastic noise processes (i.e., Brownian motion; Wiener, 1921 ). The Wiener process, named in his honor, has been widely used in engineering, finance, physical sciences, and, as described later, psychology. From the mid 1930s until 1953, Wiener also was actively involved in a series of interdisciplinary seminars and conferences with a group of researchers that included mathematicians (John von Neumann, Walter Pitts), engineers (Julian Bigelow, Claude Shannon), physiologists (Warren McCulloch, Arturo Rosenblueth), and psychologists (Wolfgang Köhler, Joseph C. R. Licklider, Duncan Luce). “Models of the human brain” was one topic discussed at those meetings, and concepts proposed during those conferences had a significant influence on the research in information technologies and the human sciences ( Heims, 1991 ).

One of Wiener’s major contributions was in World War II, when he applied mathematics to electronics problems and developed a statistical prediction method for fire control theory. This method predicted the position in space where an enemy aircraft would be located in the future so that an artillery shell fired from a distance would hit the aircraft ( Conway and Siegelman, 2005 ). As told by Conway and Siegelman, “Wiener’s focus on a practical real-world problem had led him into that paradoxical realm of nature where there was no certainty, only probabilities, compromises, and statistical conclusions…” (p. 113). Advances in probability and statistics provided a tool for Wiener and others to investigate this paradoxical realm. Early in 1942 Wiener wrote a classified report for the National Defense Research Committee (NDRC), “The Extrapolation, Interpolation, and Smoothing of Stationary Time Series,” which was published as a book in 1949. This report is credited as the founding work in communications engineering, in which Wiener concluded that communication in all fields is in terms of information. In his words,

The proper field of communication engineering is far wider than that generally assigned to it. Communication engineering concerns itself with the transmission of messages. For the existence of a message, it is indeed essential that variable information be transmitted. The transmission of a single fixed item of information is of no communicative value. We must have a repertory of possible messages, and over this repertory a measure determining the probability of these messages ( Wiener, 1949 , p. 2).

Wiener went on to say “such information will generally be of a statistical nature” (p. 10).

From 1942 onward, Wiener developed his ideas of control theory more broadly in Cybernetics , as described in a Scientific American article ( Wiener, 1948a ):

It combines under one heading the study of what in a human context is sometimes loosely described as thinking and in engineering is known as control and communication. In other words, cybernetics attempts to find the common elements in the functioning of automatic machines and of the human nervous system, and to develop a theory which will cover the entire field of control and communication in machines and in living organisms (p. 14).

Wiener (1948a) made apparent in that article that the term cybernetics was chosen to emphasize the concept of feedback mechanism. The example he used was one of human action:

Suppose that I pick up a pencil. To do this I have to move certain muscles. Only an expert anatomist knows what all these muscles are, and even an anatomist could hardly perform the act by a conscious exertion of the will to contract each muscle concerned in succession. Actually, what we will is not to move individual muscles but to pick up the pencil. Once we have determined on this, the motion of the arm and hand proceeds in such a way that we may say that the amount by which the pencil is not yet picked up is decreased at each stage. This part of the action is not in full consciousness (p. 14; see also p. 7 of Wiener, 1961 ).

Note that in this example, Wiener hits on the central idea behind contemporary theorizing in action selection – the choice of action is with reference to a distal goal ( Hommel et al., 2001 ; Dignath et al., 2014 ). Wiener went on to say,

To perform an action in such a manner, there must be a report to the nervous system, conscious or unconscious, of the amount by which we have failed to pick up the pencil at each instant. The report may be visual, at least in part, but it is more generally kinesthetic, or, to use a term now in vogue, proprioceptive (p. 14; see also p. 7 of Wiener, 1961 ).

That is, Wiener emphasizes the role of negative feedback in control of the motor system, as in theories of motor control ( Adams, 1971 ; Schmidt, 1975 ).

Wiener (1948b) developed his views more thoroughly and mathematically in his master work, Cybernetics or Control and Communication in the Animal and in the Machine , which was extended in a second edition published in 1961. In this book, Wiener devoted considerable coverage to psychological and sociological phenomena, emphasizing a systems view that takes into account feedback mechanisms. Although he was interested in sensory physiology and neural functioning, he later noted, “The need of including psychologists had indeed been obvious from the beginning. He who studies the nervous system cannot forget the mind, and he who studies the mind cannot forget the nervous system” ( Wiener, 1961 , p. 18).

Later in the Cybernetics book, Wiener indicated the value of viewing society as a control system, stating “Of all of these anti-homeostatic factors in society, the control of the means of communication is the most effective and most important” (p. 160). This statement is followed immediately by a focus on information processing of the individual: “One of the lessons of the present book is that any organism is held together in this action by the possession of means for the acquisition, use, retention, and transmission of information” ( Wiener, 1961 , p. 160).

Cybernetics, or the study of control and communication in machines and living things, is a general approach to understanding self-regulating systems. The basic unit of cybernetic control is the negative feedback loop, whose function is to reduce the sensed deviations from an expected outcome to maintain a steady state. Specifically, a present condition is perceived by the input function and then compared against a point of reference through a mechanism called a comparator. If there is a discrepancy between the present state and the reference value, an action is taken. This arrangement thus constitutes a closed loop of control, the overall purpose of which is to minimize deviations from the standard of comparison (reference point). Reference values are typically provided by superordinate systems, which output behaviors that constitute the setting of standards for the next lower level.
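
The logic of such a negative feedback loop can be sketched in a few lines of code. The function name, gain parameter, and update rule below are illustrative assumptions rather than part of Wiener's formal treatment; the point is only that a comparator computes the deviation of the sensed state from the reference value, and an action is taken to reduce that deviation.

    def feedback_loop(reference: float, state: float, gain: float = 0.5,
                      steps: int = 20, tolerance: float = 1e-3) -> float:
        """Negative feedback control: repeatedly sense the current state, compare
        it with the reference value, and act to reduce the discrepancy."""
        for _ in range(steps):
            error = reference - state   # comparator: deviation from the standard
            if abs(error) < tolerance:  # steady state (deviation negligible)
                break
            state += gain * error       # output function: corrective action
        return state

    # Example: a system regulating its state toward a reference value of 10.0
    print(round(feedback_loop(reference=10.0, state=0.0), 3))  # about 10.0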

Cybernetics thus illustrates one of the most valuable characteristics of mathematics: to identify a common feature (feedback) across many domains and then study it abstracted from those domains. This abstracted study draws the domains closer together and often enables results from one domain to be extended to the others. Wiener conceived of cybernetics as an interdisciplinary field from its birth, and control theory has had a major impact on diverse areas of work, such as biology, psychology, engineering, and computer science. Besides its mathematical nature, cybernetics has also been characterized as the science of complex probabilistic systems ( Beer, 1959 ). In other words, cybernetics is a science that combines constant flows of communication with self-regulating systems.

Shannon and Information Theory

With backgrounds in electrical engineering and mathematics, Claude Shannon obtained his Ph.D. at MIT in 1940. Shannon is known within psychology primarily for information theory, but prior to his contribution on that topic, in his Master’s thesis, he showed how to design switching circuits according to Boole’s symbolic logic. Use of combinations of switches that represent binary values provides the foundation of modern computers and telecommunication systems ( O’Regan, 2012 ). In the 1940s, Shannon’s work on digital circuit theory opened the doors for him and allowed him to make connections with great scientists of the day, including von Neumann, Albert Einstein, and Alan Turing. These connections, along with his work on cryptography, affected his thoughts about communication theory.

With regard to information theory, or what he called communication theory, Shannon (1948a) stated the essential problem of communication in the first page of his classic article:

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point… The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design. If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely. As was pointed out by Hartley the most natural choice is the logarithmic function (p. 379).

Shannon (1948a) characterized an information system as having five elements: (1) an information source; (2) a transmitter; (3) a channel; (4) a receiver; (5) a destination. Note the similarity of Figure 1 , taken from his article, to the human information-processing models of cognitive psychology. Shannon provided mathematical analyses of each element for three categories of communication systems: discrete, continuous, and mixed. A key measure in information theory is entropy , which Shannon defined as the amount of uncertainty involved in the value of a random variable or the outcome of a random process. Shannon also introduced the concepts of encoding and decoding for the transmitter and receiver, respectively. His main concern was to find explicit methods, also called codes , to increase the efficiency and reduce the error rate during data communication over noisy channels to near the channel capacity.

FIGURE 1. Shannon’s schematic diagram of a general communication system ( Shannon, 1948a ).
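
To make the entropy measure concrete, here is a minimal sketch assuming the standard formula H = -sum(p * log2(p)) and an invented message distribution; none of this code comes from Shannon's paper.

    import math

    def entropy(probabilities):
        """Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero probabilities."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # A source choosing among four equally likely messages carries 2 bits per choice...
    print(entropy([0.25, 0.25, 0.25, 0.25]))          # 2.0
    # ...whereas a more predictable (skewed) source conveys less information per choice.
    print(round(entropy([0.7, 0.1, 0.1, 0.1]), 3))    # about 1.357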

Shannon explicitly acknowledged Wiener’s influence on the development of information theory:

Communication theory is heavily indebted to Wiener for much of its basic philosophy and theory. His classic NDRC report, The Interpolation, Extrapolation, and Smoothing of Stationary Time Series ( Wiener, 1949 ), contains the first clear-cut formulation of communication theory as a statistical problem, the study of operations on time series. This work, although chiefly concerned with the linear prediction and filtering problem, is an important collateral reference in connection with the present paper. We may also refer here to Wiener’s Cybernetics ( Wiener, 1948b ), dealing with the general problems of communication and control ( Shannon and Weaver, 1949 , p. 85).

Although Shannon and Weaver (1949) developed similar measures of information independently from Wiener, they approached the same problem from different angles. Wiener developed the statistical theory of communication, equated information with negative entropy and related it to solve the problems of prediction and filtering while he worked on designing anti-aircraft fire-control systems ( Galison, 1994 ). Shannon, working primarily on cryptography at Bell Labs, drew an analogy between a secrecy system and a noisy communication system through coding messages into signals to transmit information in the presence of noise ( Shannon, 1945 ). According to Shannon, the amount of information and channel capacity were expressed in terms of positive entropy. With regard to the difference in sign for entropy in his and Wiener’s formulations, Shannon wrote to Wiener:

I do not believe this difference has any real significance but is due to our taking somewhat complementary views of information. I consider how much information is produced when a choice is made from a set – the larger the set the more the information. You consider the larger uncertainty in the case of a larger set to mean less knowledge of the situation and hence less information ( Shannon, 1948b ).

A key element of the mathematical theory of communication developed by Shannon is that it omits “the question of interpretation” ( Shannon and Weaver, 1949 ). In other words, it separates information from the “psychological factors” involved in the ordinary use of information and establishes a neutral or non-specific human meaning of the information content ( Luce, 2003 ). In that sense, consistent with cybernetics, information theory also confirmed this neutral meaning common to systems of machines, human beings, or combinations of them. The view that information refers not to “what” you send but to what you “can” send, based on probability and statistics, opened a new science that used the same methods to study machines, humans, and their interactions.

Inference Revolution

Although it is often overlooked, a related impact on psychological research during roughly the same period was that of using statistical thinking and methodology for small sample experiments. The two approaches that have been most influential in psychology, the null hypothesis significance testing of Ronald Fisher and the more general hypothesis testing view of Jerzy Neyman and Egon Pearson, resulted in what Gigerenzer and Murray (1987) called the inference revolution .

Fisher, Information, Inferential Statistics, and Experiment Design

Ronald Fisher got his degree in mathematics from Cambridge University, where he spent another year studying statistical mechanics and quantum theory ( Yates and Mather, 1963 ). He has been described as “a genius who almost single-handedly created the foundations for modern statistical science” ( Hald, 2008 , p. 147) and “the single most important figure of 20th century statistics” ( Efron, 1998 , p. 95). Fisher is also “rightly regarded as the founder of the modern methods of design and analysis of experiments” ( Yates, 1964 , p. 307). In addition to his work on statistics and experimental design, Fisher made significant scientific contributions to genetics and evolutionary biology. Indeed, Dawkins (2010) , the famous biologist, called Fisher the greatest biologist since Darwin, saying:

Not only was he the most original and constructive of the architects of the neo-Darwinian synthesis. Fisher also was the father of modern statistics and experimental design. He therefore could be said to have provided researchers in biology and medicine with their most important research tools.

Our interest in this paper is, of course, with the research tools and logic that Fisher provided, along with their application to scientific content.

Fisher began his early research as a statistician at Rothamsted Experimental Station in Harpenden, England (1919–1933). There, he was hired to develop statistical methods that could be applied to interpret the cumulative results of agricultural experiments ( Russell, 1966 , p. 326). Besides dealing with past data, he became involved with ongoing experiments and developing methods to improve them ( Lehmann, 2011 ). Fisher’s hands-on work with experiments is essential background for understanding his positions regarding statistics and experimental design. Fisher (1962 , p. 529) essentially said as much in an address published posthumously:

There is, frankly, no easy substitute for the educational discipline of whole time personal responsibility for the planning and conduct of experiments, designed for the ascertainment of fact, or the improvement of Natural Knowledge. I say “educational discipline” because such experience trains the mind and deepens the judgment for innumerable ancillary decisions, on which the value or cogency of an experimental program depends.

The analysis of variance (ANOVA, Fisher, 1925 ) and an emphasis on experimental design ( Fisher, 1937 ) were both outcomes of Fisher’s work in response to the experimental problems posed by the agricultural research performed at Rothamsted ( Parolini, 2015 ).

Fisher’s work synthesized mathematics with practicality and reshaped the scientific tools and practice for conducting and analyzing experiments. In the preface to the first edition of his textbook Statistical Methods for Research Workers , Fisher (1925 , p. vii) made clear that his main concern was application:

Daily contact with the statistical problems which present themselves to the laboratory worker has stimulated the purely mathematical researches upon which are based the methods here presented. Little experience is sufficient to show that the traditional machinery of statistical processes is wholly unsuited to the needs of practical research.

Although prior statisticians developed probabilistic methods to estimate errors of experimental data [e.g., Student’s, 1908 (Gosset’s) t -test], Fisher carried the work a step further, developing the concept of null hypothesis testing using ANOVA ( Fisher, 1925 , 1935 ). Fisher demonstrated that by proposing a null hypothesis (usually no effect of an independent variable over a population), a researcher could evaluate whether a difference between conditions was sufficiently unlikely to occur due to chance to allow rejection of the null hypothesis. Fisher proposed that tests of significance with a low p -value can be taken as evidence against the null hypothesis. The following quote from Fisher (1937 , pp. 15–16), captures his position well:

It is usual and convenient for experimenters to take 5 per cent. as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results.

While Fisher recommended using the 0.05 probability level as a criterion to decide whether to reject the null hypothesis, his general position was that researchers should set the critical level of significance at sufficiently low probability so as to limit the chance of concluding that an independent variable has an effect when the null hypothesis is true. Therefore, the criterion of significance does not necessarily have to be 0.05 (see also Lehmann, 1993 ), and Fisher’s main point was that failing to reject the null hypothesis regardless of what criterion is used does not warrant accepting it (see Fisher, 1956 , pp. 4 and 42).
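
Fisher's logic of asking whether an obtained difference is sufficiently unlikely to have arisen by chance can be sketched as a simple randomization test. The data, group labels, and function below are invented for illustration: under the null hypothesis of no effect, the group labels are exchangeable, so the p-value is the proportion of random relabelings that produce a difference at least as large as the one observed.

    import random

    def permutation_test(group_a, group_b, n_permutations=10_000, seed=1):
        """Two-sample randomization test on the absolute difference between means."""
        rng = random.Random(seed)
        observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
        pooled = list(group_a) + list(group_b)
        count = 0
        for _ in range(n_permutations):
            rng.shuffle(pooled)                  # relabel under the null hypothesis
            perm_a = pooled[:len(group_a)]
            perm_b = pooled[len(group_a):]
            diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
            if diff >= observed:
                count += 1
        return count / n_permutations

    # Hypothetical reaction times (ms) under two conditions
    control = [512, 498, 530, 505, 521, 515]
    treatment = [470, 488, 465, 492, 480, 475]
    p = permutation_test(control, treatment)
    print(f"p = {p:.4f}")  # a small p allows rejection of the null hypothesis at .05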

Fisher (1925 , 1935 ) also showed how different sources of variance can be partitioned to allow tests of separate and combined effects of two or more independent variables. Prior to this work, much experimental research in psychology, though not all, used designs in which a single independent variable was manipulated. Fisher made a case that designs with two or more independent variables were more informative than multiple experiments using different single independent variables because they allowed determination of whether the variables interacted. Fisher’s application of the ANOVA to data from factorial experimental designs, coupled with his emphasis on always extracting the maximum amount of information (likelihood) conveyed by a statistic (see later), apparently influenced both Wiener and Shannon ( Wiener, 1948a , p. 10).
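
A sketch of a factorial analysis in this spirit, assuming the pandas and statsmodels libraries and invented reaction-time data; the point is that a single model partitions the variance into the two main effects and their interaction.

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Hypothetical 2 x 2 factorial design: stimulus quality x response complexity
    data = pd.DataFrame({
        "quality":    ["intact"] * 4 + ["degraded"] * 4,
        "complexity": ["simple", "simple", "complex", "complex"] * 2,
        "rt":         [420, 435, 510, 525, 480, 470, 600, 615],
    })

    model = smf.ols("rt ~ C(quality) * C(complexity)", data=data).fit()
    print(anova_lm(model, typ=2))  # main effect of each factor plus their interaction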

In “The Place of Design of Experiments in the Logic of Scientific Inference,” Fisher (1962) linked experimental design, the application of correct statistical methods, and the subsequent extraction of a valid conclusion through the concept of information (known as Fisher information ). Fisher information measures the amount of information that an obtained random sample of data has about the variable of interest ( Ly et al., 2017 ). It is the expected value of the squared first derivative (the score) of the log-likelihood function, where the likelihood is the probability density of the obtained data conditional on the variable. In other words, the Fisher information is the variance of the score, and it measures the sensitivity of the likelihood function to changes in the manipulated variable. Furthermore, Fisher argued that experimenters should be interested in not only minimizing loss of information in the process of statistical reduction (e.g., use ANOVA to summarize evidence that preserves the relevant information from data, Fisher, 1925 , pp. 1 and 7) but also the deliberate study of experimental design, for example, by introducing randomization or control, to maximize the amount of information provided by estimates derived from the resulting experimental data (see Fisher, 1947 ). Therefore, Fisher unified experimental design and statistical analysis through information ( Seidenfeld, 1992 ; Aldrich, 2007 ), an approach that resonates with the system view of cybernetics.
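
In symbols, a standard statement of the definition (added here for clarity; the notation is not taken from Fisher's paper): writing the log-likelihood of a single observation as log f(X; theta), the Fisher information is

    I(\theta)
      = \mathbb{E}\!\left[ \left( \frac{\partial}{\partial \theta} \log f(X;\theta) \right)^{2} \right]
      = -\,\mathbb{E}\!\left[ \frac{\partial^{2}}{\partial \theta^{2}} \log f(X;\theta) \right],

where the second equality holds under the usual regularity conditions; larger values mean that small changes in the parameter produce more detectable changes in the likelihood of the data.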

Neyman-Pearson Approach

Jerzy Neyman obtained his doctorate from the University of Warsaw with a thesis based on his statistical work at the Agricultural Institute in Bydgoszcz, Poland, in 1924. Egon Pearson received his undergraduate degree in mathematics and continued his graduate study in astronomy at Cambridge University until 1921. In 1926, Neyman and Pearson started their collaboration and raised a question with regard to Fisher’s method: why test only the null hypothesis? They proposed a solution in which not only the null hypothesis but also a class of possible alternatives are considered, and the decision is one of accepting or rejecting the null hypothesis. This decision yields probabilities of two kinds of error: false rejection of the null hypothesis (Type I or Alpha) or false acceptance of the null hypothesis (Type II or Beta; Neyman and Pearson, 1928 , 1933 ). They suggested that the best test was the one that minimized the Type II error subject to a bound on the Type I error, i.e., the significance level of the test. Thus, instead of classifying the null hypothesis as rejected or not, the central consideration of the Neyman-Pearson approach was that one must specify not only the null hypothesis but also the alternative hypotheses against which it is tested. With this symmetric decision approach, statistical power (1 – Type II error) becomes an issue. Fisher (1947) also realized the importance and necessity of power but argued that it is a qualitative concern addressed during the experimental design to increase the sensitivity of the experiment and not part of the statistical decision process. In other words, Fisher thought that researchers should “conduct experimental and observational inquiries so as to maximize the information obtained for a given expenditure” ( Fisher, 1951 , p. 54), but did not see that as being part of statistics.
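
These error probabilities and power can be sketched by simulation. The effect size, sample size, and use of a t test from scipy are illustrative assumptions: the Type I error rate is estimated by testing repeatedly when the null hypothesis is true, and power (1 - Type II error) is estimated by testing when a real difference is present.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def rejection_rate(true_diff, n=30, alpha=0.05, n_sims=5_000):
        """Proportion of simulated experiments in which the null hypothesis is rejected."""
        rejections = 0
        for _ in range(n_sims):
            a = rng.normal(0.0, 1.0, n)          # control group
            b = rng.normal(true_diff, 1.0, n)    # experimental group
            _, p = stats.ttest_ind(a, b)
            rejections += p < alpha
        return rejections / n_sims

    print("Type I error rate (no true effect):", rejection_rate(true_diff=0.0))  # near 0.05
    print("Power for a true effect of 0.5 SD: ", rejection_rate(true_diff=0.5))  # roughly 0.48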

To reject or accept the null hypothesis, rather than disregarding results that do not allow rejection of the null hypothesis, was the rule of behavior for Neyman-Pearson hypothesis testing. Thus, their approach assumed that a decision made from the statistical analysis was sufficient to draw a conclusion as to whether the null or alternative hypothesis was most likely and did not put emphasis on the need for non-statistical inductive inference to understand the problems. In other words, tests of significance were interpreted as means to make decisions in an acceptance procedure (also see Wald, 1950 ) but not specifically for research workers to gain a better understanding of the experimental material. Also, the Neyman-Pearson approach interpreted probability (or the value of a significance level) as a realization of long-run frequency in a sequence of repetitions under constant conditions. Their view was that if a sequence of independent events was obtained with probability p of success, then the long-run success frequency will be close to p (this is known as a frequentist probability, Neyman, 1977 ). Fisher (1956) vehemently disagreed with this frequentist position.

Fisherian vs. Frequentist Approaches

In a nutshell, the differences between Fisherians and frequentists are mostly about research philosophy and how to interpret the results ( Fisher, 1956 ). In particular, Fisher emphasized that in scientific research, failure to reject the null hypothesis should not be interpreted as acceptance of it, whereas Neyman and Pearson portrayed the process as a decision between accepting or rejecting the null hypothesis. Nevertheless, the usual practice for statistical testing in psychology is based on a hybrid of Fisher’s and Neyman-Pearson’s approaches ( Gigerenzer and Murray, 1987 ). In practice, when behavioral researchers speak of the results of research, they are primarily referring to the statistically significant results and less often to null effects and the effect size estimates associated with those p -values.

The reliance on the results of significance testing has been explained from two perspectives: (1) Neither experienced behavioral researchers nor experienced statisticians have a good intuitive feel for the practical meaning of effect size estimation (e.g., Rosenthal and Rubin, 1982 ); (2) The reliance on a reject-or-accept dichotomous decision procedure, in which the differences between p levels are taken to be trivial relative to the difference between exceeding or failing to exceed a 0.05 or some other accepted level of significance ( Nelson et al., 1986 ). The reject-accept procedure follows the Neyman-Pearson approach and is compatible with the view that information is binary ( Shannon, 1948a ). Nevertheless, even if an accurate statistical power analysis is conducted, a properly powered replication study can produce results that are consistent with the effect size of interest or consistent with absolutely no effect ( Nelson et al., 1986 ; Maxwell et al., 2015 ). Therefore, instead of relying solely on hypothesis testing, or on whether an effect is true or false, researchers should report the actual p level obtained along with an estimate of the effect size.

Fisher emphasized in his writings that an essential ingredient in the research process is the judgment of the researcher, who must decide by how much the obtained results have advanced a particular theoretical proposition (that is, how meaningful the results are). This decision is based in large part on decisions made during experimental design. The statistical significance test is just a useful tool to inform such decisions during the process to allow the researcher to be confident that the results are likely not due to chance. Moreover, he wanted this statistical decision in scientific research to be independent of a priori probabilities or estimates because he did not think these could be made accurately. Consequently, Fisher considered that only a statistically significant effect in an exact test for which the null hypothesis can be rejected should be open to subsequent interpretation by the researcher.

Acree (1978 , pp. 397–398) conducted a thorough evaluation of statistical inference in psychological research that for the most part captures why Fisher’s views had greater impact on the practice of psychological researchers than those of Neyman and Pearson (emphasis ours):

On logical grounds, Neyman and Pearson had decidedly the better theory; but Fisher’s claims were closer to the ostensible needs of psychological research . The upshot is that psychologists have mostly followed Fisher in their thinking and practice: in the use of the hypothetical infinite population to justify probabilistic statements about a single data set; in treating the significance level evidentially; in setting it after the experiment is performed; in never accepting the null hypothesis; in disregarding power…. Yet the rationale for all our statistical methods, insofar as it is presented, is that of Neyman and Pearson, rather than Fisher.

Although Neyman and Pearson may have had “decidedly the better theory” for statistical decisions in general, Fisher’s approach provides a better theory for scientific inferences from controlled experiments.

Interim Summary

The work we described in Sections “Cybernetics and Information Theory” and “Inference Revolution” identifies three crucial pillars of research that were developed mainly in the period from 1940 to 1955: cybernetics/control theory, information/communication theory, and inferential statistical theory. Moreover, our analysis reveals a correspondence among those pillars. Specifically, cybernetics/control theory parallels experimental design: both provide the structural framework within which cognitive psychology operates. Information theory parallels statistical testing: both provide quantitative measures for evaluating qualitative assumptions.

These pillars were identified as early as 1952 in the preface to the proceedings of a conference called Information Theory in Biology , in which the editor, Quastler (1953 , p. 1), said:

The “new movement” [what we would call information-processing theory] is based on evaluative concepts (R. A. Fisher’s experimental design, A. Wald’s statistical decision function, J. von Neumann’s theory of games), on the development of a measure of information (R. Hartley, D. Gabor, N. Wiener, C. Shannon), on studies of control mechanisms, and the analysis and design of large systems (W. S. McCulloch and W. Pitt’s “neurons,” J. von Neumann’s theory of complicated automata, N. Wiener’s cybernetics).

The pillars undergirded not only the new movement in biology but also the new movement in psychology. The concepts introduced in the dawning information age of 1940–1955 had tremendous impact on applied and basic research in experimental psychology that transformed psychological research into a form that has developed to the present.

Human Information Processing

As noted in earlier sections, psychologists and neurophysiologists were involved in the cybernetics, information theory, and inferential statistics movements from the earliest days. Each of these movements was crucial to the ascension of the information-processing approach in psychology and the emergence of cognitive science, which are often dated to 1956. In this section, we review developments in cognitive psychology linked to each of the three pillars, starting with the most fundamental one, cybernetics.

The Systems Viewpoint of Cybernetics

George A. Miller explicitly credited cybernetics as being seminal in 1979, stating, “I have picked September 11, 1956 [the date of the second MIT symposium on Information Theory] as the birthday of cognitive science – the day that cognitive science burst from the womb of cybernetics and became a recognizable interdisciplinary adventure in its own right” (quoted by Elias, 1994 , p. 24; emphasis ours). With regard to the development of human factors (ergonomics) in the United Kingdom, Waterson (2011 , pp. 1126–1127) remarks similarly:

During the 1960s, the ‘systems approach’ within ergonomics took on a precedence which has lasted until the present day, and a lot of research was informed from cybernetics and general systems theory. In many respects, a concern in applying a systemic approach to ergonomic issues could be said to be one of the factors which ‘glues’ together all of the elements and sub-disciplines within ergonomics.

This seminal role for cybernetics is due to its fundamental idea that various levels of processing in humans and non-humans can be viewed as control systems with interconnected stages and feedback loops. The human information-processing approach, which treats the human as a processing system with feedback loops, and the human-machine system view that underlies contemporary human factors and ergonomics can both be traced directly to cybernetics.

We will provide a few more specific examples of the impact of cybernetics. McCulloch and Pitts (1943) , members of the cybernetics movement, are given credit for developing “the first conceptual model of an artificial neural network” ( Shiffman, 2012 ) and “the first modern computational theory of mind and brain” ( Piccinini, 2004 ). The McCulloch-Pitts model treated neurons as logical decision elements with on and off states, which are the basis for building brain-like machines. Since then, Boolean functions, together with feedback through networks of such neurons, have been used extensively in quantitative theorizing about both neural and artificial intelligent systems ( Piccinini, 2004 ). Thus, computational modeling of brain processes was part of the cybernetics movement from the outset.
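
The following sketch illustrates, under our own simplifying assumptions rather than the exact 1943 formalism, the core idea of a McCulloch-Pitts-style unit: binary inputs are summed and compared against a threshold, so that a single unit realizes a Boolean function such as AND or OR.

```python
# A minimal sketch of a McCulloch-Pitts-style unit: binary inputs are summed and compared
# against a threshold, so a single unit computes a Boolean function. The weights (all 1)
# and thresholds below are illustrative choices, not taken from the 1943 paper.
def mp_unit(inputs, threshold):
    """Fire (return 1) if the number of active inputs reaches the threshold."""
    return int(sum(inputs) >= threshold)

AND = lambda x, y: mp_unit([x, y], threshold=2)  # both inputs must be on
OR = lambda x, y: mp_unit([x, y], threshold=1)   # any input suffices

for x in (0, 1):
    for y in (0, 1):
        print(f"x={x} y={y}  AND={AND(x, y)}  OR={OR(x, y)}")
```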

Franklin Taylor, a noted engineering psychologist, reviewed Wiener’s (1948b) Cybernetics book, calling it “a curious and provocative book” ( Taylor, 1949 , p. 236). Taylor noted, “The author’s most important assertion for psychology is his suggestion, often repeated, that computers, servos, and other machines may profitably be used as models of human and animal behavior” (p. 236), and “It seems that Dr. Wiener is suggesting that psychologists should borrow the theory and mathematics worked out for machines and apply them to the behavior of men” (p. 237). Psychologists have clearly followed this suggestion, making ample use of the theory and mathematics of control systems. Craik (1947 , 1948 ) in the UK had in fact already started to take a control theory approach to human tracking performance, stating that his analysis “puts the human operator in the class of ‘intermittent definite correction servos’ ” ( Craik, 1948 , p. 148).

Wiener’s work seemingly had considerable impact on Taylor, as reflected in the opening paragraphs of a famous article by Birmingham and Taylor (1954) on human performance of tracking tasks and design of manual control systems:

The cardinal purpose of this report is to discuss a principle of control system design based upon considerations of engineering psychology. This principle will be found to advocate design practices for man-operated systems similar to those customarily employed by engineers with fully automatic systems…. In many control systems the human acts as the error detector… During the last decade it has become clear that, in order to develop control systems with maximum precision and stability, human response characteristics have to be taken into account. Accordingly, the new discipline of engineering psychology was created to undertake the study of man from an engineering point of view (p. 1748).

Control theory continues to provide a quantitative means for modeling basic and applied human performance ( Jagacinski and Flach, 2003 ; Flach et al., 2015 ).

Colin Cherry, who performed the formative study on auditory selective attention, studied with Wiener and Jerome Wiesner at MIT in 1952. It was during this time that he conducted his classic experiments on the cocktail party problem – the question of how we identify what one person is saying when others are speaking at the same time ( Cherry, 1953 ). His detailed investigations of selective listening, including attention switching, provided the basis for much research on the topic in the next decade that laid the foundation for contemporary studies of attention. The initial models explored the features and locus of a “limited-capacity processing channel” ( Broadbent, 1958 ; Deutsch and Deutsch, 1963 ). Subsequent landmark studies of attention include the attenuation theory of Treisman (1960 ; also see Moray, 1959 ); capacity models that conceive of attention as a resource to be flexibly allocated to various stages of human information processing ( Kahneman, 1973 ; Posner, 1978 ); the distinction between controlled and automatic processing ( Shiffrin and Schneider, 1977 ); and the feature-integration theory of visual search ( Treisman and Gelade, 1980 ).

As noted, Miller (2003) and others identified the year 1956 as a critical one in the development of contemporary psychology ( Newell and Simon, 1972 ; Mandler, 2007 ). Mandler lists two events that year that ignited the field, in both of which Allen Newell and Herbert Simon participated. The first is the meeting of the Special Group on Information Theory of the Institute of Electrical and Electronics Engineers, which included papers by linguist Noam Chomsky (who argued against an information theory approach to language in favor of his transformational-generative grammar) and psychologist Miller (on avoiding the short-term memory bottleneck), in addition to Newell and Simon (on their Logic Theorist “thinking machine”) and others ( Miller, 2003 ). The other event is the Dartmouth Summer Seminar on Artificial Intelligence (AI), which was organized by John McCarthy, who had coined the term AI the previous year. It included Shannon, Oliver Selfridge (who discussed initial ideas that led to his Pandemonium model of human pattern recognition, described below), and Marvin Minsky (a pioneer of AI, who turned to symbolic AI after earlier work on neural nets; Moor, 2006 ), among others. A presentation by Newell and Simon at that seminar is regarded as essential to the birth of AI, and their work on human problem solving exploited concepts from work on AI.

Newell applied a combination of experimental and theoretical research during his work at the RAND Corporation beginning in 1950 ( Simon, 1997 ). For example, in 1952, he and his colleagues designed and conducted laboratory experiments on a full-scale simulation of an Air Force Early Warning Station to study the decision-making and information-handling processes of the station crews. Central to the research was recording and analyzing the crews' interactions with their radar screens, with interceptor aircraft, and with each other. From these studies, Newell came to believe that information processing is the central activity in organizations (systems).

Selfridge (1959) laid the foundation for a cognitive theory of letter perception with his Pandemonium model, in which letter identification is achieved by way of hierarchically organized layers of feature and letter detectors. Inspired by Selfridge’s work on Pandemonium, Newell started to converge on the idea that systems can be created that contain intelligence and have the ability to adapt. Based on his understanding of computers, heuristics, information processing in organizations (systems), and cybernetics, Newell (1955) delineated the design of a computer program to play chess in “The Chess Machine: An Example of Dealing with a Complex Task by Adaptation.” After that, for Newell, the investigation of organizations (systems) became the examination of the mind, and he committed himself to understanding human learning and thinking through computer simulations.

In the study of problem solving, think-aloud protocols in laboratory settings revealed that means-ends analysis is a key heuristic mechanism. Specifically, the current situation is compared to the desired goal state, and mental or physical actions are taken to reduce the gap. Newell, Simon, and Cliff Shaw developed the General Problem Solver, a computer program that could solve problems in various domains if given a problem space (domain representation), possible actions to move between states of the space, and information about which actions would reduce the gap between the current and goal states (see Ernst and Newell, 1969 , for a detailed treatment, and Newell and Simon, 1972 , for an overview). The program's design underscored the importance of control structure for problem solving, reflecting a combination of ideas from cybernetics and information theory.
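
To make the means-ends heuristic concrete, here is a toy sketch in the spirit of the General Problem Solver (not Newell and Simon's actual program): compute the difference between the current state and the goal, select an operator whose effects reduce that difference, and pursue the operator's unmet preconditions recursively as subgoals. The states and operators ("walk to shop", "buy milk") are invented for illustration.

```python
# A toy sketch in the spirit of means-ends analysis (not Newell and Simon's actual GPS code):
# find an operator whose effects reduce the difference between state and goal, and pursue its
# unmet preconditions as subgoals. The states and operators below are invented for illustration.
def achieve(state, goal, operators, depth=0):
    """Return (new_state, plan) that satisfies the goal, or None if no plan is found."""
    if depth > 10:                      # guard against runaway recursion in this toy example
        return None
    if goal.issubset(state):
        return state, []
    for op in operators:
        if op["adds"] & (goal - state):                             # operator reduces the gap
            sub = achieve(state, op["needs"], operators, depth + 1)  # achieve preconditions
            if sub is None:
                continue
            mid_state, plan = sub
            new_state = (mid_state - op["deletes"]) | op["adds"]     # apply the operator
            rest = achieve(new_state, goal, operators, depth + 1)    # finish remaining goals
            if rest is None:
                continue
            final_state, more = rest
            return final_state, plan + [op["name"]] + more
    return None

operators = [
    {"name": "walk to shop", "needs": {"at home"}, "adds": {"at shop"}, "deletes": {"at home"}},
    {"name": "buy milk", "needs": {"at shop"}, "adds": {"have milk"}, "deletes": set()},
]
_, plan = achieve({"at home"}, {"have milk"}, operators)
print(plan)  # ['walk to shop', 'buy milk']
```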

Besides applying cybernetics, neuroscientists further developed it to explain anticipation in biological systems. Although closed-loop feedback can perform online corrections in a determinate machine, it does not give any direction ( Ashby, 1956 , pp. 224–225). Therefore, a feedforward loop was proposed in cybernetics that could improve control over systems through anticipation of future actions ( Ashby, 1956 ; MacKay, 1956 ). Generally, the feedforward mechanism is constructed as another input pathway parallel to the actual input, which enables comparison between the actual and anticipated inputs before they are processed by the system ( Ashby, 1960 ; Pribram, 1976 , p. 309). In other words, a self-organized system is not only capable of adjusting its own behavior (feedback) but is also able to change its own internal organization so as to select, from among the random responses that it attempts, the response that eliminates a disturbance from the outside ( Ashby, 1960 ). Therefore, the feedforward loop “nudges” the inputs based on predefined parameters in an automatic manner to account for cognitive adaptation, indicating higher-level action planning. Moreover, unlike error-based feedback control, knowledge-based feedforward control cannot be further adjusted once the feedforward input has been processed. The feedforward control concept from cybernetics has been used by psychologists to understand human action control at behavioral, motoric, and neural levels (for a review, see Basso and Olivetti Belardinelli, 2006 ).

Therefore, both feedback and feedforward loops are critical to a control system: feedforward control is valuable because it can improve performance when feedback control alone is not sufficient. A control system with feedforward and feedback loops allows interaction between top-down and bottom-up information processing. Consequently, the main function of a control system is not to create “behavior” but to create and maintain the anticipation of a specific desired condition, which constitutes its reference value or standard of comparison.
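
A minimal simulation, with invented parameter values, shows how the two loops cooperate: the feedforward term anticipates a known disturbance before it acts, while the feedback term corrects whatever error remains after the fact.

```python
# A minimal simulation, with invented parameter values, of feedforward plus feedback control
# of a simple first-order system tracking a constant reference in the presence of a known,
# constant disturbance.
import numpy as np

dt, steps = 0.1, 50
reference = np.ones(steps)           # desired output
disturbance = 0.2 * np.ones(steps)   # disturbance assumed known in advance
k_fb = 0.8                           # feedback gain

output = 0.0
for t in range(steps):
    feedforward = reference[t] - disturbance[t]   # anticipate the disturbance before it acts
    error = reference[t] - output                 # feedback: correct the error that remains
    control = feedforward + k_fb * error
    output += dt * (control + disturbance[t] - output)   # first-order plant dynamics

print(f"final output = {output:.3f} (reference = {reference[-1]:.1f})")
```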

Cognitive psychology and neuroscience suggest that a combination of anticipatory and hierarchical structures is involved in human action learning and control ( Herbort et al., 2005 ). Specifically, anticipatory mechanisms lead to direct action selection in inverse models and to effective filtering mechanisms in forward models, both of which are based on sensorimotor contingencies acquired through people’s interaction with the environment (the ideomotor principle; Greenwald, 1970 ; James, 1890 ). Therefore, the feedback loop included in cybernetics, as well as the feedforward loop, is essential to these learning processes.

We conclude this section with mention of one of the milestone books in cognitive psychology, Plans and the Structure of Behavior , by Miller et al. (1960) . In the prologue to the book, the authors indicate that they worked on it together for a year at the Center for Advanced Study in the Behavioral Sciences in California. As indicated by the title, the central idea motivating the book was that of a plan, or program, that guides behavior. But, the authors said:

Our fundamental concern, however, was to discover whether the cybernetic ideas have any relevance for psychology…. There must be some way to phrase the new ideas [of cybernetics] so that they can contribute to and profit from the science of behavior that psychologists have created. It was the search for that favorable intersection that directed the course of our year-long debate (p. 3, emphasis ours).

In developing the central concept of the test-operate-test-exit (TOTE) unit in the book, Miller et al. (1960) stated, “The interpretation to which the argument builds is one that has been called the ‘cybernetic hypothesis,’ namely that the fundamental building block of the nervous system is the feedback loop” (pp. 26–27). As noted by Edwards (1997) , the TOTE concept “is the same principle upon which Wiener, Rosenblueth, and Bigelow had based ‘Behavior, Purpose, and Teleology”’ (p. 231). Thus, although Miller later gave 1956 as the date that cognitive science “burst from the womb of cybernetics,” even after the birth of cognitive science, the genes inherited from cybernetics continued to influence its development.

Information and Uncertainty

Information theory, a useful way to quantify psychological and behavioral concepts, had possibly a more direct impact than cybernetics on psychological research. No articles were retrieved from the PsycINFO database prior to 1950 when we entered “information theory” as an unrestricted field search term on May 3, 2018. But, from 1950 to 1956 there were 37 entries with “information theory” in the title and 153 entries with the term in some field. Two articles applying information theory to speech communication appeared in 1950: a general theoretical article by Fano (1950) of the Research Laboratory of Electronics at MIT, and an empirical article by Licklider (1950) of the Acoustics Laboratory, also at MIT. Licklider (1950) presented two methods of reducing the frequencies of speech without destroying its intelligibility, using the Shannon-Weaver information formula based on first-order probabilities.
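
The quantity underlying these early applications is Shannon's entropy, computed here from first-order symbol probabilities as a minimal sketch; the probability values are invented for illustration.

```python
# A minimal sketch of the Shannon measure: the entropy (in bits) of a source described by
# first-order symbol probabilities. The probability values are invented for illustration.
import math

def entropy_bits(probs):
    """H = -sum p * log2(p), skipping zero-probability symbols."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

equiprobable = [0.25, 0.25, 0.25, 0.25]  # four equally likely symbols -> 2 bits
skewed = [0.70, 0.15, 0.10, 0.05]        # unequal probabilities -> less uncertainty

print(entropy_bits(equiprobable))  # 2.0
print(entropy_bits(skewed))        # about 1.32
```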

Given Licklider’s background in cybernetics and information theory, it is not too surprising that he played a major role in establishing the ARPAnet, which was later replaced by the Internet:

His 1968 paper called “The Computer as a Communication Device” illustrated his vision of network applications and predicted the use of computer networks for communications. Until then, computers had generally been thought of as mathematical devices for speeding up computations ( Internet Hall of Fame, 2016 ).

Licklider worked from 1943 to 1950 at the Psycho-Acoustics Laboratory (PAL) of Harvard University. Edwards ( 1997 , p. 212) noted, “The PAL played a crucial role in the genesis of postwar information processing psychologies.” He pointed out, “A large number of those who worked at the lab… helped to develop computer models and metaphors and to introduce information theory into human experimental psychology” (p. 212). Among those were George Miller and Wendell Garner, who did much to promulgate information theory in psychology ( Garner and Hake, 1951 ; Miller, 1953 ), as well as Licklider, Galanter, and Pribram. Much of PAL’s research was devoted to solving engineering problems for the military and industry.

One of the most influential applications of information theory to human information-processing limitations was that of Hick (1952) and Hyman (1953) , who explained increases in reaction time as a function of uncertainty regarding the potential stimulus-response alternatives. Their analyses showed that reaction time increased as a logarithmic function of the number of equally likely alternatives, and as a function of the amount of information computed from differential probabilities of occurrence and sequential effects. This relation, called Hick's law or the Hick-Hyman law, has continued to be a source of research to the present and is considered to be a fundamental law of human-computer interaction ( Proctor and Schneider, 2018 ). Fitts and Seeger (1953) showed that uncertainty was not the only factor influencing reaction time. They examined performance of an eight-choice task for all combinations of three spatial-location stimulus and response arrays. Responses were faster and more accurate when the response array corresponded to the stimulus array than when it did not, which Fitts and Seeger called a stimulus-response compatibility effect. The main point of their demonstration was that correspondence of the spatial codes for the stimulus and response alternatives was crucial, and this led to detailed investigations of compatibility effects that continue to the present ( Proctor and Vu, 2006 , 2016 ).
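
A minimal sketch of the Hick-Hyman relation for equally likely alternatives, RT = a + b log2(N), follows; the intercept and slope values are illustrative, not estimates from Hick's or Hyman's data.

```python
# A minimal sketch of the Hick-Hyman relation for N equally likely alternatives:
# RT = a + b * log2(N). The intercept and slope are illustrative, not fitted values.
import math

def hick_hyman_rt(n_alternatives, intercept=0.20, slope=0.15):
    """Predicted reaction time in seconds."""
    return intercept + slope * math.log2(n_alternatives)

for n in (2, 4, 8):
    print(f"{n} alternatives: predicted RT = {hick_hyman_rt(n):.3f} s")
```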

Even more influential has been Fitts's law, which describes movement time in tasks where people make discrete aimed movements to targets or series of repetitive movements between two targets. Fitts (1954) defined the index of difficulty as ID = log2(2A/W) bits/response (equivalently, –log2(W/2A)), where W is the target width and A is the amplitude (or distance) of the movement. The resulting movement time is a linear function of the index of difficulty, with the slope differing for different movement types. Fitts’s law continues to be the subject of basic and applied research to the present ( Glazebrook et al., 2015 ; Velasco et al., 2017 ).
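
The corresponding sketch for Fitts's law, with MT = a + b ID and ID = log2(2A/W), is below; the intercept and slope are again invented for illustration rather than fitted to movement data.

```python
# A minimal sketch of Fitts's law: MT = a + b * ID, with ID = log2(2A/W) bits/response.
# The intercept and slope are invented for illustration, not fitted to movement data.
import math

def fitts_movement_time(amplitude, width, intercept=0.10, slope=0.12):
    index_of_difficulty = math.log2(2 * amplitude / width)  # bits per response
    return intercept + slope * index_of_difficulty

print(f"{fitts_movement_time(amplitude=160, width=20):.3f} s")  # ID = 4 bits
print(f"{fitts_movement_time(amplitude=320, width=10):.3f} s")  # ID = 6 bits
```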

Information theory was applied to a range of other topics during the 1950s, including intelligence tests ( Hick, 1951 ), memory ( Aborn and Rubenstein, 1952 ; Miller, 1956 ), perception ( Attneave, 1959 ), skilled performance ( Kay, 1957 ), music ( Meyer, 1957 ), and psychiatry ( Brosin, 1953 ). However, the key concept of information theory, entropy, or uncertainty, was found not to provide an adequate basis for theories of human performance (e.g., Ambler et al., 1977 ; Proctor and Schneider, 2018 ).

Shannon’ (1948a) advocacy of information theory for electronic communication was mainly built on there being a mature understanding of the structured pattern of information transmission within electromagnetic systems at that time. In spite of cognitive research having greatly expanded our knowledge about how humans select, store, manipulate, recover, and output information, the fundamental mechanisms of those information processes remained under further investigation ( Neisser, 1967 , p. 8). Thus, although information theory provided a useful mathematic metric, it did not provide a comprehensive account of events between the stimulus and response, which is what most psychologists were interested in ( Broadbent, 1959 ). With the requirement that information be applicable to a vast array of psychological issues, “information” has been expanded from a measure of informativeness of stimuli and responses, to a framework for describing the mental or neural events between stimuli and responses in cognitive psychology ( Collins, 2007 ). Therefore, the more enduring impact of information theory was through getting cognitive psychologists to focus on the nature of human information processing, such that by Lachman et al. (1979) titled their introduction to the field, Cognitive Psychology and Information Processing .

Along with information theory, the arrival of the computer provided one of the most viable models to help researchers understand the human mind. Computers grew from a desire to make machines smart ( Laird et al., 1987 ), with the assumption that stored knowledge inside a machine can be applied to the world in much the way that people apply their knowledge, constituting intelligence (e.g., intelligent machines, Turing, 1937 ; AI, Minsky, 1968 ; McCarthy et al., 2006 ). The core idea of the computer metaphor is that the mind functions like a digital computer, in which mental states are computational states and mental processes are computational processes. The use of the computer as a tool for thinking about how the mind handles information has been highly influential in cognitive psychology. For example, the PsycINFO database returned no articles prior to 1950 when “encoding” was entered as an unrestricted field search term on May 3, 2018. From 1950 to 1956 there was 1 entry with “encoding” in the title and 4 entries with the term in some field. But, from 1956 to 1973, there were 214 entries with “encoding” in the title and 578 entries with the term in some field, including the famous encoding specificity principle of Tulving and Thomson (1973) . Some models in cognitive psychology were directly inspired by how the memory system of a computer works, for example, the multi-store memory ( Atkinson and Shiffrin, 1968 ) and working memory ( Baddeley and Hitch, 1974 ) models.

Although cybernetics is the origin of early AI ( Kline, 2011 ) and the computer metaphor and cybernetics share similar concepts (e.g., representation), they are fundamentally different at the conceptual level. The computer metaphor represents a genuine simplification: Terms like “encoding” and “retrieving” can be used to describe human behavior analogously to machine operation but without specifying a precise mapping between the analogical “computer” and the target “human” domain ( Gentner and Grudin, 1985 ). In contrast, cybernetics provides a powerful framework to help people understand the human mind, which holds that, regardless of whether the system is human or machine, it is necessary and possible to achieve goals by correcting action using feedback and adapting to the external environment using feedforward. Recent breakthroughs in AI (e.g., AlphaGo beating professional Go players) rely on training machines, using large numbers of examples and artificial neural networks (ANNs), to perform tasks at levels not seen before with minimal human guidance. Such learning allows the machine to determine on its own whether a certain function should be executed. The development of ANNs has been greatly influenced by consideration of the dynamic properties of cybernetics ( Cruse, 2009 ), to achieve the goal of self-organization or self-regulation.

Statistical Inference and Decisions

Statistical decision theory also had substantial impact. Engineering psychologists were among the leaders in promulgating use of the ANOVA, with Chapanis and Schachter (1945 ; Schachter and Chapanis, 1945 ) using it in research on depth perception through distorted glass, conducted in the latter part of World War II and presented in Technical Reports. As noted by Rucci and Tweney (1980) , “Following the war, these [engineering] psychologists entered the academic world and began to publish in regular journals, using ANOVA” (p. 180).

Factorial experiments and use of the ANOVA were slow to take hold in psychology. Rucci and Tweney (1980) counted the frequency with which the t -test and ANOVA were used in major psychology journals from 1935 to 1952. They described the relation as, “Use of both t and ANOVA increased gradually prior to World War II, declined during the war, and increased immediately thereafter” (p. 172). Rucci and Tweney concluded, “By 1952 it [ANOVA] was fully established as the most frequently used technique in experimental research” (p. 166). They emphasized that this increased use of ANOVA reflected a radical change in experimental design, and emphasized that although one could argue that the statistical technique caused the change in psychological research, “It is just as plausible that the discipline had developed in such a way that the time was ripe for adoption of the technique” (p. 167). Note that the rise in use of null hypothesis testing and ANOVA paralleled that of cybernetics and information theory, which suggests that the time was indeed ripe for the use of probability theory, multiple independent variables, and formal scientific decision making through hypothesis testing that is embodied in the factorial design and ANOVA.

The first half of the 1950s also saw the introduction of signal detection theory, a variant of statistical decision theory, for analyzing human perception and performance. Initial articles by Peterson et al. (1954) and Van Meter and Middleton (1954) were published in a journal of the Institute of Electrical and Electronics Engineers (IEEE), but psychologists were quick to realize the importance of the approach. This point is evident in the first sentence of Swets et al.’s (1961) article describing signal detection theory in detail:

About 5 years ago, the theory of statistical decision was translated into a theory of signal detection. Although the translation was motivated by problems in radar, the detection theory that resulted is a general theory… The generality of the theory suggested to us that it might also be relevant to the detection of signals by human observers… The detection theory seemed to provide a framework for a realistic description of the behavior of the human observer in a variety of perceptual tasks (p. 301).

Signal detection theory has proved to be an invaluable tool because it dissociates influences of the evidence on which decisions are based from the criteria applied to that evidence. This way of conceiving decisions is useful not only for perceptual tasks but for a variety of tasks in which choices on the basis of noisy information are required, including recognition memory ( Kellen et al., 2012 ). Indeed, Wixted (2014) states, “Signal-detection theory is one of psychology’s most notable achievements, but it is not a theory about typical psychological phenomena such as memory, attention, vision or psychopathology (even though it applies to all of those areas and more). Instead, it is a theory about how we use evidence to make decisions.”
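
A minimal sketch of the standard signal detection computation, which separates sensitivity (d') from the placement of the decision criterion (c), is shown below; the hit and false-alarm rates are hypothetical.

```python
# A minimal sketch of the signal detection computation that separates sensitivity from
# response bias: d' and the criterion c from hit and false-alarm rates. The rates are
# hypothetical.
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """d' = z(H) - z(FA); c = -(z(H) + z(FA)) / 2."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -(z_hit + z_fa) / 2

d_prime, criterion = sdt_measures(hit_rate=0.85, fa_rate=0.20)
print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
```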

In the 1960s, Sternberg (1969) formalized the additive factors method of analyzing reaction-time data to identify different information-processing stages. Specifically, a factorial experiment is conducted, and if two independent variables affect different processing stages, the two variables do not interact. If, on the other hand, there is a significant interaction, then the variables can be assumed to affect at least one processing stage in common. Note that the subtitle of Sternberg’s article is “Extension of Donders’ Method,” which is a reference to the research reported by F. C. Donders 100 years earlier, in which he estimated the time for various processing stages by subtracting the reaction time for a task that did not include a particular processing stage from the reaction time for a task that did. A limitation of Donders’s (1868/1969 ) subtraction method is that the stages had to be assumed and could not be identified. Sternberg’s extension, which provided a means for identifying the stages, did not occur until both the language of information processing and the factorial ANOVA were available as tools for analyzing reaction-time data. The additive factors method formed a cornerstone for much research in cognitive psychology over the following couple of decades, and its logic is still often applied to interpret empirical results, often without explicit acknowledgment.
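
The additive factors logic can be illustrated with hypothetical 2 × 2 cell means: if each factor lengthens its own processing stage independently, the effects add and the interaction contrast is zero. All values below are invented for the example.

```python
# Hypothetical 2 x 2 cell means illustrating the additive factors logic: when each factor
# prolongs a different processing stage, the effects on mean RT add and the interaction
# contrast is zero. All values (ms) are invented.
import itertools

base, effect_a, effect_b = 400, 50, 30
rt = {(a, b): base + a * effect_a + b * effect_b
      for a, b in itertools.product((0, 1), repeat=2)}

interaction = (rt[(1, 1)] - rt[(1, 0)]) - (rt[(0, 1)] - rt[(0, 0)])
print(rt)
print(f"interaction contrast = {interaction} ms")  # 0 -> consistent with separate stages
```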

In psychology, models of how people make decisions in perceptual and cognitive tasks have often been proposed on the basis of sequential sampling to explain the patterns of obtained reaction time (RT) and percentage error. The study of such mechanisms addresses one of the fundamental questions in psychology, namely, how the central nervous system translates perception into action and how this translation depends on the interactions and expectations of individuals. Like signal detection theory, the theory of sequential sampling starts from the premise that perceptual and cognitive decisions are statistical in nature. It also follows the widely accepted assumption that sensory and cognitive systems are inherently noisy and time-varying. In practice, the study of a given sequential-sampling model reduces to the study of a stochastic process, which represents the information accumulated and available to the decision at a given time. A Wiener process forms the basis of Ratcliff’s (1978) influential diffusion model of reaction times, in which noisy information accumulates continuously over time from a starting point to response thresholds ( Ratcliff and Smith, 2004 ). Recently, Srivastava et al. (2017) extended this diffusion model to make the Wiener process time dependent. More generally, Shalizi (2007 , p. 126) makes the point, “The fully general theory of stochastic calculus considers integration with respect to a very broad range of stochastic processes, but the original case, which is still the most important, is integration with respect to the Wiener process.”
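
A minimal simulation of the basic accumulation idea follows, with invented parameter values rather than values fitted to data: noisy evidence drifts from a starting point until it reaches one of two response thresholds, yielding a decision time and a choice on each trial.

```python
# A minimal simulation, with invented parameter values, of a Wiener-process (diffusion)
# decision: noisy evidence accumulates from a starting point until it reaches one of two
# response thresholds, producing a decision time and a choice on each trial.
import numpy as np

def diffusion_trial(drift=0.3, threshold=1.0, start=0.0, dt=0.001, noise=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    evidence, t = start, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, "upper" if evidence >= threshold else "lower"

rng = np.random.default_rng(0)
trials = [diffusion_trial(rng=rng) for _ in range(200)]
mean_rt = np.mean([t for t, _ in trials])
p_upper = np.mean([choice == "upper" for _, choice in trials])
print(f"mean decision time = {mean_rt:.3f} s, P(upper) = {p_upper:.2f}")
```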

In parallel with use of the computer metaphor to understand the human mind, use of the laws of probability as metaphors of the mind has also had a profound influence on physiology and psychology ( Gigerenzer, 1991 ). Gregory (1968) regarded seeing an object in an image as an inference from a hypothesis (also see the “unconscious inference” of Helmholtz, 1866/1925 ). According to Gregory (1980) , in spite of differences between perception and science, the cognitive procedures carried out by perceptual neural processes are essentially the same as the predictive hypothesis-testing processes of science. In particular, Gregory emphasized the importance of, and the distinction between, bottom–up and top–down procedures in perception. For normal perception and for perceptual illusions, the bottom–up procedures filter and structure the input, and the top–down procedures refer to stored knowledge or assumptions that can work downwards to parcel signals and data into objects.

A recent development of the statistics metaphor is the Bayesian brain hypothesis, which has been used to model perception and decision making since the 1990s ( Friston, 2012 ). Rao and Ballard (1997) described a hierarchical neural network model of visual recognition, in which both input-driven bottom–up signals and expectation-driven top–down signals were used to predict the current recognition state. They showed that feedback from a higher layer to the input layer carries predictions of expected inputs, and the feedforward connections convey the errors in prediction, which are used to correct the estimate. Rao (2004) illustrated how the Bayesian model could be implemented in neural networks with feedforward and recurrent connections, showing that for both perception and decision-making tasks the resulting network exhibits direction selectivity and computes posterior error corrections.

We would like to highlight that, unlike the computer analogy, the cybernetics view is essential to the Bayesian brain hypothesis. This reliance on cybernetics arises because the Bayesian brain hypothesis models the interaction between prior knowledge (top–down) and sensory evidence (bottom–up) quantitatively. Therefore, the success of Bayesian brain modeling is due to both the framework provided by cybernetics and the computation of probability. Seth (2015) explicitly acknowledges this relation in his article, The Cybernetic Bayesian Brain .

Meanwhile, the external information format (or representation) on which Bayesian inferences and statistical reasoning operate has also been investigated. For example, Gigerenzer and Hoffrage (1995) varied mathematically equivalent representations of information, in percentages or frequencies, for various problems (e.g., the mammography problem, the cab problem) and found that frequency formats enabled participants’ inferences to conform to Bayes’ theorem without any teaching or instruction.
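
The point can be illustrated by computing the same posterior two ways, once with Bayes' theorem applied to probabilities and once by counting natural frequencies in an imagined sample of 1,000 people; the numbers below are illustrative, not the exact values used by Gigerenzer and Hoffrage (1995).

```python
# The same posterior computed two ways: Bayes' theorem applied to probabilities, and a
# natural-frequency count over an imagined sample of 1,000 people. The numbers are
# illustrative, not the exact values used by Gigerenzer and Hoffrage (1995).
base_rate = 0.01     # P(disease)
hit_rate = 0.80      # P(positive | disease)
false_alarm = 0.10   # P(positive | no disease)

# Probability format
posterior = (hit_rate * base_rate) / (
    hit_rate * base_rate + false_alarm * (1 - base_rate))

# Frequency format: count cases in 1,000 people
n = 1000
with_disease = base_rate * n                   # 10 people have the disease
true_pos = hit_rate * with_disease             # 8 of them test positive
false_pos = false_alarm * (n - with_disease)   # 99 healthy people also test positive
posterior_freq = true_pos / (true_pos + false_pos)

print(f"{posterior:.3f}  {posterior_freq:.3f}")  # same answer from both formats
```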

The Information Age of humans follows periods that are called the Stone Age, Bronze Age, Iron Age, and Industrial Age. These labels indicate that periods of human history are often characterized by the dominant tools and materials that humans used. The formation of the Information Age is inseparable from the interdisciplinary work on cybernetics, information theory, and statistical inference, which together generated a cognitive psychology adapted to the age. Each of these three pillars has been acknowledged separately by other authors, and contemporary scientific approaches to motor control and cognitive processing have continued to be inspired by cybernetics, information theory, and statistical inference.

Kline (2015 , p. 1) aptly summarized the importance of cybernetics in the founding of the information age:

During contentious meetings filled with brilliant arguments, rambling digressions, and disciplinary posturing, the cybernetics group shaped a language of feedback, control, and information that transformed the idiom of the biological and social sciences, sparked the invention of information technologies, and set the intellectual foundation for what came to be called the information age. The premise of cybernetics was a powerful analogy: that the principles of information-feedback machines, which explained how a thermostat controlled a household furnace, for example, could also explain how all living things—from the level of the cell to that of society—behaved as they interacted with their environment.

Pezzulo and Cisek (2016) extended the cybernetic principles of feedback and feedforward control for understanding cognition. In particular, they proposed hierarchical feedback control, indicating that adaptive action selection is influenced not only by prediction of immediate outcomes but also by prediction of the new opportunities afforded by those outcomes. Scott (2016) highlighted the use of sensory feedback, after a person becomes familiar with performing a perceptual-motor task, to drive goal-directed motor control, reducing the role of top–down control by utilizing bottom–up sensory feedback.

Although less expansive than Kline (2015) , Fan (2014 , p. 2) emphasized the role of information theory. He stated, “Information theory has a long and distinguished role in cognitive science and neuroscience. The ‘cognitive revolution’ of the 1950s, as spearheaded by Miller (1956) and Broadbent (1958) , was highly influenced by information theory.” Likewise, Gigerenzer (1991 , p. 255) said, “Inferential statistics… provided a large part of the new concepts for mental processes that have fueled the so called cognitive revolution since the 1960s.” The separate treatment of the three pillars by various authors indicates that the pillars have distinct emphases, which are sometimes treated as in opposition ( Verschure, 2016 ). However, we have highlighted the convergent aspects of the three that were critical to the founding of cognitive psychology and its continued development to the present. An example of a contemporary approach utilizing information theory and statistics in computational and cognitive neuroscience is the study of activity of neuronal populations to understand how the brain processes information. Quian Quiroga and Panzeri (2009) reviewed methods based on statistical decoding and information theory, and concluded, “Decoding and information theory describe complementary aspects of knowledge extraction… A more systematic joint application of both methodologies may offer additional insights” (p. 183).

Leahey (1992) claimed that it is incorrect to say that there was a “cognitive revolution” in the 1950s, but he acknowledged “that information-processing psychology has had world-wide influence…” (p. 315). Likewise, Mandler (2007) pointed out that the term “cognitive revolution” for the changes that occurred in the 1950s is a misnomer because, although behaviorism was dominant in the United States, much psychology outside of the United States prior to that time could be classified as “cognitive.” However, he also said, after reviewing the 1956 meetings and ones in 1958, that “the 1950s surely were ready for the emergence of the new information-processing psychology—the new cognitive psychology” (p. 187). Ironically, although both Leahey and Mandler identified the change as being one of information processing, neither author acknowledged the implication of their analyses, which is that there was a transformation that is more aptly labeled the information-processing revolution rather than the cognitive revolution. The concepts provided by the advances in cybernetics, information theory, and inferential statistical theory together provided the language and methodological tools that enabled a significant leap forward in theorizing.

Wootton (2015) , says of his book The Invention of Science: A New History of the Scientific Revolution , “We can state one of its core premises quite simply: a revolution in ideas requires a revolution in language” (p. 48). That language is what the concepts of communications systems engineering and inferential statistical theory provided for psychological research. Assessing the early influence of cybernetics and information theory on cognitive psychology, Broadbent (1959) stated, “It is in fact, the cybernetic approach above all others which has provided a clear language for discussing those various internal complexities which make the nervous system differ from a simple channel” (p. 113). He also identified a central feature of information theory as being crucial: The information conveyed by a stimulus is dependent on the stimuli that might have occurred but did not.

Posner (1986) , in his introduction to the Information Processing section of the Handbook of Perception and Human Performance , highlights more generally that the language of information processing affords many benefits to cognitive psychologists. He states, “Information processing language provides an alternative way of discussing internal mental operations intermediate between subjective experience and activity of neurons” (p. V-3). Later in the chapter, he elaborates:

The view of the nervous system in terms of information flow provided a common language in which both conscious and unconscious events might be discussed. Computers could be programmed to simulate exciting tasks heretofore only performed by human beings without requiring any discussion of consciousness. By analogies with computing systems, one could deal with the format (code) in which information is presented to the senses and the computations required to change code (recodings) and for storage and overt responses. These concepts brought a new unity to areas of psychology and a way of translating between psychological and physiological processes. The presence of the new information processing metaphor reawakened interest in internal mental processes beyond that of simple sensory and motor events and brought cognition back to a position of centrality in psychology (p. V-7).

Posner also notes, “The information processing approach has a long and respected relationship with applications of experimental psychology to industrial and military settings” (V-7). The reason, as emphasized years earlier by Wiener, is that it allows descriptions of humans to be integrated with those of the nonhuman parts of the system. Again, from our perspective, there was a revolution, but it was specifically an information-processing revolution.

Our main points can be summarized as follows:

(1) The information age originated in interdisciplinary research of an applied nature.

(2) Cybernetics and information theory played pivotal roles, with the former being more fundamental than the latter through its emphasis on a systems approach.

(3) These roles of communication systems theory were closely linked to developments in inferential statistical theory and applications.

(4) The three pillars of cybernetics, information theory, and inferential statistical theory undergirded the so-called cognitive revolution in psychology, which is more appropriately called the information-processing revolution.

(5) Those pillars, rooted in solving real-world problems, provided the language and methodological tools that enabled growth of the basic and applied fields of psychology.

(6) The experimental design and inferential statistics adopted in cognitive psychology, with emphasis on rejecting null hypotheses, originated in the applied statistical analyses of the scientist Ronald Fisher and were influential because of their compatibility with scientific research conducted using controlled experiments.

Simon (1969) , in an article entitled “Designing Organizations for an Information-Rich World,” pointed out the problems created by the wealth of information:

Now, when we speak of an information-rich world, we may expect, analogically, that the wealth of information means a dearth of something else – a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it (manuscript pp. 6-7).

What Simon described is valid and even more evident in the current smart device and Internet era, where the amount of information is overwhelming. Phenomena as disparate as accidents caused by talking on a cellphone while driving ( Banducci et al., 2016 ) and difficulty assessing the credibility of information reported on the Internet or other media ( Chen et al., 2015 ) can be attributed to the overload. Moreover, the rapid rate at which information is encountered may have a negative impact on maintaining a prolonged focus of attention ( Microsoft Canada, 2015 ). Therefore, knowing how people process information and allocate attention is increasingly essential in the current explosion of information.

As noted, the predominant method of cognitive psychology in the information age has been that of drawing theoretical inferences from the statistical results of small-scale sets of data collected in controlled experimental settings (e.g., laboratory). The progress in psychology is tied to progress in statistics as well as technological developments that improve our ability to measure and analyze human behavior. Outside of the lab, with the continuing development of the Internet of Things (IoT), especially the implementation of AI, human physical lives are becoming increasingly interweaved into the cyber world. Ubiquitous records of human behavior, or “big data,” offer the potential to examine cognitive mechanisms at an escalated scale and level of ecological validity that cannot be achieved in the lab. This opportunity seems to require another significant transformation of cognitive psychology to use those data effectively to push forward understanding of the human mind and ensure seamless integration with cyber physical systems.

In a posthumous article, Brunswik (1956) noted that psychology should have the goal of broadening perception and learning by including interactions with a probabilistic environment. He insisted that psychology “must link behavior and environment statistically in bivariate or multivariate correlation rather than with the predominant emphasis on strict law…” (p. 158). As part of this proposal, Brunswik indicated a need to relate psychology more closely to disciplines that “use autocorrelation and intercorrelation, as theoretically stressed especially by Wiener (1949) , for probability prediction” (p. 160). With the ubiquitous data being collected within cyber physical systems, more extensive use of sophisticated correlational methods to extract the information embedded within the data will likely be necessary.

Using Stokes's (1997) two dimensions of scientific research (considerations of use; quest for fundamental understanding), the work of pioneers of the Information Age, including Wiener, Shannon, and Fisher, falls within Pasteur's Quadrant of use-inspired basic research. They were motivated by the need to solve immediate applied problems, and through their research they advanced our fundamental understanding of nature. Likewise, in seizing the opportunity to use big data to inform cognitive psychology, psychologists need to increase their involvement in interdisciplinary research targeted at real-world problems. In seeking to mine the information in big data, a new age is likely to emerge for cognitive psychology and related disciplines.

Although we reviewed the history of the information-processing revolution and subsequent developments in this paper, our ultimate concern is with the future of cognitive psychology. So, it is fitting to end as we began with a quote from Wiener (1951 , p. 68):

To respect the future, we must be aware of the past.

Author Contributions

AX and RP contributed jointly and equally to the paper.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Aborn, M., and Rubenstein, H. (1952). Information theory and immediate recall. J. Exp. Psychol. 44, 260–266. doi: 10.1037/h0061660

Acree, M. C. (1978). Theories of Statistical Inference in Psychological Research: A Historico-Critical Study. Doctoral dissertation, Clark University, Worcester, MA.

Adams, J. A. (1971). A closed-loop theory of motor learning. J. Mot. Behav. 3, 111–150. doi: 10.1080/00222895.1971.10734898

Aldrich, J. (2007). Information and economics in Fisher’s design of experiments. Int. Stat. Rev. 75, 131–149. doi: 10.1111/j.1751-5823.2007.00020.x

Ambler, B. A., Fisicaro, S. A., and Proctor, R. W. (1977). Information reduction, internal transformations, and task difficulty. Bull. Psychon. Soc. 10, 463–466. doi: 10.3758/BF03337698

Ashby, W. R. (1956). An Introduction to Cybernetics. London: Chapman & Hall. doi: 10.5962/bhl.title.5851

Ashby, W. R. (1960). Design for a Brain: The Origin of Adaptive Behavior. New York, NY: Wiley & Sons. doi: 10.1037/11592-000

Atkinson, R. C., and Shiffrin, R. M. (1968). “Human memory: a proposed system and its control processes,” in The Psychology of Learning and Motivation , Vol. 2, eds K. W. Spence and J. T. Spence (New York, NY: Academic Press), 89–195.

Attneave, F. (1959). Applications of Information theory to Psychology: A Summary of Basic Concepts, Methods, and Results. New York, NY: Henry Holt.

Baddeley, A. D., and Hitch, G. (1974). “Working memory,” in The Psychology of Learning and Motivation , Vol. 8, ed. G. A. Bower (New York, NY: Academic press), 47–89.

Banducci, S. E., Ward, N., Gaspar, J. G., Schab, K. R., Crowell, J. A., Kaczmarski, H., et al. (2016). The effects of cell phone and text message conversations on simulated street crossing. Hum. Factors 58, 150–162. doi: 10.1177/0018720815609501

Basso, D., and Olivetti Belardinelli, M. (2006). The role of the feedforward paradigm in cognitive psychology. Cogn. Process. 7, 73–88. doi: 10.1007/s10339-006-0034-1

Beer, S. (1959). Cybernetics and Management. New York, NY: John Wiley & Sons.

Biology Online Dictionary (2018). Homeostasis. Available at: https://www.biology-online.org/dictionary/Homeostasis

Birmingham, H. P., and Taylor, F. V. (1954). A design philosophy for man-machine control systems. Proc. Inst. Radio Eng. 42, 1748–1758. doi: 10.1109/JRPROC.1954.274775

Broadbent, D. E. (1958). Perception and Communication. London: Pergamon Press. doi: 10.1037/10037-000

Broadbent, D. E. (1959). Information theory and older approaches in psychology. Acta Psychol. 15, 111–115. doi: 10.1016/S0001-6918(59)80030-5

Brosin, H. W. (1953). “Information theory and clinical medicine (psychiatry),” in Current Trends in Information theory , ed. R. A. Patton (Pittsburgh, PA: University of Pittsburgh Press), 140–188.

Brunswik, E. (1956). Historical and thematic relations of psychology to other sciences. Sci. Mon. 83, 151–161.

Chapanis, A., and Schachter, S. (1945). Depth Perception through a P-80 Canopy and through Distorted Glass. Memorandum Rep. TSEAL-69S-48N. Dayton, OH: Aero Medical Laboratory.

Chen, Y., Conroy, N. J., and Rubin, V. L. (2015). News in an online world: the need for an “automatic crap detector”. Proc. Assoc. Inform. Sci. Technol. 52, 1–4. doi: 10.1002/pra2.2015.145052010081

Cherry, E. C. (1953). Some experiments on the recognition of speech, with one and with two ears. J. Acoust. Soc. Am. 25, 975–979. doi: 10.1121/1.1907229

Cherry, E. C. (1957). On Human Communication. New York, NY: John Wiley.

Collins, A. (2007). From H = log s n to conceptual framework: a short history of information. Hist. Psychol. 10, 44–72. doi: 10.1037/1093-4510.10.1.44

Conway, F., and Siegelman, J. (2005). Dark Hero of the Information Age. New York, NY: Basic Books.

Craik, K. J. (1947). Theory of the human operator in control systems. I. The operator as an engineering system. Br. J. Psychol. 38, 56–61.

Craik, K. J. (1948). Theory of the human operator in control systems. II. Man as an element in a control system. Br. J. Psychol. 38, 142–148.

Cruse, H. (2009). Neural Networks as Cybernetic Systems , 3rd Edn. Bielefeld: Brain, Minds, and Media.

Dawkins, R. (2010). Who is the Greatest Biologist Since Darwin? Why? Available at: https://www.edge.org/3rd_culture/leroi11/leroi11_index.html#dawkins

Deutsch, J. A., and Deutsch, D. (1963). Attention: some theoretical considerations. Psychol. Rev. 70, 80–90. doi: 10.1037/h0039515

Dignath, D., Pfister, R., Eder, A. B., Kiesel, A., and Kunde, W. (2014). Representing the hyphen in action–effect associations: automatic acquisition and bidirectional retrieval of action–effect intervals. J. Exp. Psychol. Learn. Mem. Cogn. 40, 1701–1712. doi: 10.1037/xlm0000022

Donders, F. C. (1868/1969). “On the speed of mental processes,” in Attention and Performance II , ed. W. G. Koster (Amsterdam: North Holland Publishing Company), 412–431.

Edwards, P. N. (1997). The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, MA: MIT Press.

Efron, B. (1998). R. A. Fisher in the 21st century: invited paper presented at the 1996 R. A. Fisher lecture. Stat. Sci. 13, 95–122.

Elias, P. (1994). “The rise and fall of cybernetics in the US and USSR,” in The Legacy of Norbert Wiener: A Centennial Symposium , eds D. Jerison I, M. Singer, and D. W. Stroock (Providence, RI: American Mathematical Society), 21–30.

Ernst, G. W., and Newell, A. (1969). GPS: A Case Study in Generality and Problem Solving. New York, NY: Academic Press.

Fan, J. (2014). An information theory account of cognitive control. Front. Hum. Neurosci. 8:680. doi: 10.3389/fnhum.2014.00680

Fano, R. M. (1950). The information theory point of view in speech communication. J. Acoust. Soc. Am. 22, 691–696. doi: 10.1121/1.1906671

Fisher, R. A. (1925). Statistical Methods for Research Workers. London: Oliver & Boyd.

Fisher, R. A. (1935). The Design of Experiments. London: Oliver & Boyd.

Fisher, R. A. (1937). The Design of Experiments , 2nd Edn. London: Oliver & Boyd.

Fisher, R. A. (1947). “Development of the theory of experimental design,” in Proceedings of the International Statistical Conferences , Vol. 3, Poznań, 434–439.

Fisher, R. A. (1951). “Statistics,” in Scientific thought in the Twentieth century , ed. A. E. Heath (London: Watts).

Fisher, R. A. (1956). Statistical Methods and Scientific Inference. Edinburgh: Oliver & Boyd.

Fisher, R. A. (ed.). (1962). “The place of the design of experiments in the logic of scientific inference,” in Fisher: Collected Papers Relating to Statistical and Mathematical theory and Applications , Vol. 110, (Paris: Centre National de la Recherche Scientifique), 528–532.

Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. J. Exp. Psychol. 47, 381–391. doi: 10.1037/h0055392

Fitts, P. M., and Seeger, C. M. (1953). S-R compatibility: spatial characteristics of stimulus and response codes. J. Exp. Psychol. 46, 199–210. doi: 10.1037/h0062827

Flach, J. M., Bennett, K. B., Woods, D. D., and Jagacinski, R. J. (2015). “Interface design: a control theoretic context for a triadic meaning processing approach,” in The Cambridge Handbook of Applied Perception Research , Vol. II, eds R. R. Hoffman, P. A. Hancock, M. W. Scerbo, R. Parasuraman, and J. L. Szalma (New York, NY: Cambridge University Press), 647–668.

Friston, K. (2012). The history of the future of the Bayesian brain. Neuroimage 62, 1230–1233. doi: 10.1016/j.neuroimage.2011.10.004

Galison, P. (1994). The ontology of the enemy: norbert Wiener and the cybernetic vision. Crit. Inq. 21, 228–266. doi: 10.1086/448747

Gardner, H. E. (1985). The Mind’s New Science: A History of the Cognitive Revolution. New York, NY: Basic Books.

Garner, W. R., and Hake, H. W. (1951). The amount of information in absolute judgments. Psychol. Rev. 58, 446–459. doi: 10.1037/h0054482

Gentner, D., and Grudin, J. (1985). The evolution of mental metaphors in psychology: a 90-year retrospective. Am. Psychol. 40, 181–192. doi: 10.1037/0003-066X.40.2.181

Gigerenzer, G. (1991). From tools to theories: a heuristic of discovery in Cognitive Psychology. Psychol. Rev. 98, 254–267. doi: 10.1037/0033-295X.98.2.254

Gigerenzer, G., and Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: frequency formats. Psychol. Rev. 102, 684–704. doi: 10.1037/0033-295X.102.4.684

Gigerenzer, G., and Murray, D. J. (1987). Cognition as Intuitive Statistics. Mahwah, NJ: Lawrence Erlbaum.

Glazebrook, C. M., Kiernan, D., Welsh, T. N., and Tremblay, L. (2015). How one breaks Fitts’s Law and gets away with it: moving further and faster involves more efficient online control. Hum. Mov. Sci. 39, 163–176. doi: 10.1016/j.humov.2014.11.005

Greenwald, A. G. (1970). Sensory feedback mechanisms in performance control: with special reference to the ideo-motor mechanism. Psychol. Rev. 77, 73–99. doi: 10.1037/h0028689

Gregory, R. L. (1968). Perceptual illusions and brain models. Proc. R. Soc. Lond. B Biol. Sci. 171, 279–296. doi: 10.1098/rspb.1968.0071

Gregory, R. L. (1980). Perceptions as hypotheses. Philos. Trans. R. Soc. Lond. B 290, 181–197. doi: 10.1098/rstb.1980.0090

Hald, A. (2008). A History of Parametric Statistical Inference from Bernoulli to Fisher, 1713–1935. Copenhagen: Springer Science & Business Media.

Heims, S. J. (1991). The Cybernetics Group. Cambridge, MA: MIT Press.

Helmholtz, H. (1866/1925). Handbuch der Physiologischen Optik [Treatise on Physiological Optics], Vol. 3, ed. J. Southall (Rochester, NY: Optical Society of America).

Herbort, O., Butz, M. V., and Hoffmann, J. (2005). “Towards an adaptive hierarchical anticipatory behavioral control system,” in From Reactive to Anticipatory Cognitive Embodied Systems: Papers from the AAAI Fall Symposium, eds C. Castelfranchi, C. Balkenius, M. V. Butz, and A. Ortony (Menlo Park, CA: AAAI Press), 83–90.

Hick, W. E. (1951). Information theory and intelligence tests. Br. J. Math. Stat. Psychol. 4, 157–164. doi: 10.1111/j.2044-8317.1951.tb00317.x

Hick, W. E. (1952). On the rate of gain of information. Q. J. Exp. Psychol. 4, 11–26. doi: 10.1080/17470215208416600

Hommel, B., Müsseler, J., Aschersleben, G., and Prinz, W. (2001). The theory of event coding (TEC): a framework for perception and action planning. Behav. Brain Sci. 24, 849–878. doi: 10.1017/S0140525X01000103

Hulbert, A. (2018). Prodigies’ Progress: Parents and Superkids, then and Now. Cambridge, MA: Harvard Magazine, 46–51.

Hyman, R. (1953). Stimulus information as a determinant of reaction time. J. Exp. Psychol. 45, 188–196. doi: 10.1037/h0056940

Internet Hall of Fame (2016). Internet Hall of Fame Pioneer J.C.R. Licklider: Posthumous Recipient. Available at: https://www.internethalloffame.org/inductees/jcr-licklider

Jagacinski, R. J., and Flach, J. M. (2003). Control Theory for Humans: Quantitative Approaches to Modeling Performance. Mahwah, NJ: Lawrence Erlbaum.

James, W. (1890). The Principles of Psychology. New York, NY: Dover.

Kahneman, D. (1973). Attention and Effort. Englewood Cliffs, NJ: Prentice Hall.

Kay, H. (1957). Information theory in the understanding of skills. Occup. Psychol. 31, 218–224.

Kellen, D., Klauer, K. C., and Singmann, H. (2012). On the measurement of criterion noise in signal detection theory: the case of recognition memory. Psychol. Rev. 119, 457–479. doi: 10.1037/a0027727

Kline, R. R. (2011). Cybernetics, automata studies, and the Dartmouth conference on artificial intelligence. IEEE Ann. Hist. Comput. 33, 5–16. doi: 10.1109/MAHC.2010.44

Kline, R. R. (2015). The Cybernetics Moment: Or Why We Call Our Age the Information Age. Baltimore, MD: Johns Hopkins University Press.

Lachman, R., Lachman, J. L., and Butterfield, E. C. (1979). Cognitive Psychology and Information Processing: An Introduction. Hillsdale, NJ: Lawrence Erlbaum.

Laird, J., Newell, A., and Rosenbloom, P. (1987). SOAR: an architecture for general intelligence. Artif. Intell. 33, 1–64. doi: 10.1016/0004-3702(87)90050-6

Leahey, T. H. (1992). The mythical revolutions of American psychology. Am. Psychol. 47, 308–318. doi: 10.1037/0003-066X.47.2.308

Lehmann, E. L. (1993). The Fisher, Neyman-Pearson theories of testing hypotheses: one theory or two? J. Am. Stat. Assoc. 88, 1242–1249. doi: 10.1080/01621459.1993.10476404

Lehmann, E. L. (2011). Fisher, Neyman, and the Creation of Classical Statistics. New York, NY: Springer Science & Business Media. doi: 10.1007/978-1-4419-9500-1

Licklider, J. R. (1950). The intelligibility of amplitude-dichotomized, time-quantized speech waves. J. Acoust. Soc. Am. 22, 820–823. doi: 10.1121/1.1906695

Luce, R. D. (2003). Whatever happened to information theory in psychology? Rev. Gen. Psychol. 7, 183–188. doi: 10.1037/1089-2680.7.2.183

Ly, A., Marsman, M., Verhagen, J., Grasman, R. P., and Wagenmakers, E. J. (2017). A tutorial on Fisher information. J. Math. Psychol. 80, 40–55. doi: 10.1016/j.jmp.2017.05.006

MacKay, D. M. (1956). Towards an information-flow model of human behaviour. Br. J. Psychol. 47, 30–43. doi: 10.1111/j.2044-8295.1956.tb00559.x

Mandler, G. (2007). A History of Modern Experimental Psychology: From James and Wundt to Cognitive Science. Cambridge, MA: MIT Press.

Maxwell, S. E., Lau, M. Y., and Howard, G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? Am. Psychol. 70, 487–498. doi: 10.1037/a0039400

McCarthy, J., Minsky, M. L., Rochester, N., and Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Mag. 27, 12–14.

McCulloch, W., and Pitts, W. (1943). A logical calculus of ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133. doi: 10.1007/BF02478259

Meyer, L. B. (1957). Meaning in music and information theory. J. Aesthet. Art Crit. 15, 412–424.

Microsoft Canada (2015). Attention Spans Research Report. Available at: https://www.scribd.com/document/317442018/microsoft-attention-spans-research-report-pdf

Miller, G. A. (1953). What is information measurement? Am. Psychol. 8, 3–11. doi: 10.1037/h0057808

Miller, G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol. Rev. 63, 81–97. doi: 10.1037/h0043158

Miller, G. A. (2003). The cognitive revolution: a historical perspective. Trends Cogn. Sci. 7, 141–144. doi: 10.1016/S1364-6613(03)00029-9

Miller, G. A., Galanter, E., and Pribram, K. H. (1960). Plans and the Structure of Behavior. New York, NY: Holt. doi: 10.1037/10039-000

Minsky, M. (ed.). (1968). Semantic Information Processing. Cambridge, MA: The MIT Press.

Montagnini, L. (2017a). Harmonies of Disorder: Norbert Wiener: A Mathematician-Philosopher of our Time. Roma: Springer.

Montagnini, L. (2017b). Interdisciplinarity in Norbert Wiener, a mathematician-philosopher of our time. Biophys. Chem. 229, 173–180. doi: 10.1016/j.bpc.2017.06.009

Moor, J. (2006). The Dartmouth College artificial intelligence conference: the next fifty years. AI Mag. 27, 87–91.

Moray, N. (1959). Attention in dichotic listening: affective cues and the influence of instructions. Q. J. Exp. Psychol. 11, 56–60. doi: 10.1080/17470215908416289

Nahin, P. J. (2013). The Logician and the Engineer: How George Boole and Claude Shannon Created the Information Age. Princeton, NJ: Princeton University Press.

Neisser, U. (1967). Cognitive Psychology. New York, NY: Appleton-Century-Crofts.

Nelson, N., Rosenthal, R., and Rosnow, R. L. (1986). Interpretation of significance levels and effect sizes by psychological researchers. Am. Psychol. 41, 1299–1301. doi: 10.1037/0003-066X.41.11.1299

Newell, A. (1955). “The chess machine: an example of dealing with a complex task by adaptation,” in Proceedings of the March 1-3, 1955, Western Joint Computer Conference (New York, NY: ACM), 101–108. doi: 10.1145/1455292.1455312

Newell, A., and Simon, H. A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.

Neyman, J. (1977). Frequentist probability and frequentist statistics. Synthese 36, 97–131. doi: 10.1007/BF00485695

Neyman, J., and Pearson, E. S. (1928). On the use and interpretation of certain test criteria for purposes of statistical inference: Part I. Biometrika 20A, 175–240.

Neyman, J., and Pearson, E. S. (1933). IX. On the problem of the most efficient tests of statistical hypotheses. Philos. Trans. R. Soc. Lond. A 231, 289–337. doi: 10.1098/rsta.1933.0009

O’Regan, G. (2012). A Brief History of Computing, 2nd Edn. London: Springer. doi: 10.1007/978-1-4471-2359-0

Parolini, G. (2015). The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919–1933. J. Hist. Biol. 48, 301–335. doi: 10.1007/s10739-014-9394-z

Peterson, W. W., Birdsall, T. G., and Fox, W. C. (1954). The theory of signal detectability. Trans. IRE Prof. Group Inform. Theory 4, 171–212. doi: 10.1109/TIT.1954.1057460

Pezzulo, G., and Cisek, P. (2016). Navigating the affordance landscape: feedback control as a process model of behavior and cognition. Trends Cogn. Sci. 20, 414–424. doi: 10.1016/j.tics.2016.03.013

Piccinini, G. (2004). The First computational theory of mind and brain: a close look at McCulloch and Pitts’s “Logical calculus of ideas immanent in nervous activity”. Synthese 141, 175–215. doi: 10.1023/B:SYNT.0000043018.52445.3e

Posner, M. I. (1978). Chronometric Explorations of Mind. Hillsdale, NJ: Lawrence Erlbaum.

Posner, M. I. (1986). “Overview,” in Handbook of Perception and Human Performance: Cognitive Processes and Performance, Vol. 2, eds K. R. Boff, L. I. Kaufman, and J. P. Thomas (New York, NY: John Wiley), V.1–V.10.

Pribram, K. H. (1976). “Problems concerning the structure of consciousness,” in Consciousness and the Brain: A Scientific and Philosophical Inquiry, eds G. G. Globus, G. Maxwell, and I. Savodnik (New York, NY: Plenum), 297–313.

Proctor, R. W., and Schneider, D. W. (2018). Hick’s law for choice reaction time: a review. Q. J. Exp. Psychol. 71, 1281–1299. doi: 10.1080/17470218.2017.1322622

Proctor, R. W., and Vu, K. P. L. (2006). Stimulus-Response Compatibility Principles: Data, Theory, and Application. Boca Raton, FL: CRC Press.

Proctor, R. W., and Vu, K. P. L. (2016). Principles for designing interfaces compatible with human information processing. Int. J. Hum. Comput. Interact. 32, 2–22. doi: 10.1080/10447318.2016.1105009

Quastler, H. (ed.). (1953). Information Theory in Biology. Urbana, IL: University of Illinois Press.

Quian Quiroga, R., and Panzeri, S. (2009). Extracting information from neuronal populations: information theory and decoding approaches. Nat. Rev. Neurosci. 10, 173–185. doi: 10.1038/nrn2578

Rao, R. P. (2004). Bayesian computation in recurrent neural circuits. Neural Comput. 16, 1–38. doi: 10.1162/08997660460733976

Rao, R. P., and Ballard, D. H. (1997). Dynamic model of visual recognition predicts neural response properties in the visual cortex. Neural Comput. 9, 721–763. doi: 10.1162/neco.1997.9.4.721

Ratcliff, R. (1978). A theory of memory retrieval. Psychol. Rev. 85, 59–108. doi: 10.1037/0033-295X.85.2.59

Ratcliff, R., and Smith, P. L. (2004). A comparison of sequential sampling models for two-choice reaction time. Psychol. Rev. 111, 333–367. doi: 10.1037/0033-295X.111.2.333

Rosenthal, R., and Rubin, D. B. (1982). A simple, general purpose display of magnitude of experimental effect. J. Educ. Psychol. 74, 166–169. doi: 10.1037/0022-0663.74.2.166

Rucci, A. J., and Tweney, R. D. (1980). Analysis of variance and the “second discipline” of scientific psychology: a historical account. Psychol. Bull. 87, 166–184. doi: 10.1037/0033-2909.87.1.166

Russell, E. J. (1966). A History of Agricultural Science in Great Britain, 1620–1954. London: George Allen and Unwin.

Schachter, S., and Chapanis, A. (1945). Distortion in Glass and Its Effect on Depth Perception. Memorandum Report No. TSEAL-695-48B. Dayton, OH: Aero Medical Laboratory.

Schmidt, R. A. (1975). A schema theory of discrete motor skill learning. Psychol. Rev. 82, 225–260. doi: 10.1037/h0076770

Scott, S. H. (2016). A functional taxonomy of bottom-up sensory feedback processing for motor actions. Trends Neurosci. 39, 512–526. doi: 10.1016/j.tins.2016.06.001

Seidenfeld, T. (1992). “R. A. Fisher on the design of experiments and statistical estimation,” in The Founders of Evolutionary Genetics , ed. S. Sarkar (Dordrecht: Springer), 23–36.

Selfridge, O. G. (1959). “Pandemonium: a paradigm for learning,” in Proceedings of the Symposium on Mechanisation of Thought Processes (London: Her Majesty’s Stationery Office), 511–529.

Seth, A. K. (2015). “The cybernetic Bayesian brain - From interoceptive inference to sensorimotor contingencies,” in Open MIND: 35(T), eds T. Metzinger and J. M. Windt (Frankfurt: MIND Group).

Shalizi, C. (2007). Advanced Probability II or Almost None of the Theory of Stochastic Processes. Available at: http://www.stat.cmu.edu/~cshalizi/754/notes/all.pdf

Shannon, C. E. (1945). A Mathematical Theory of Cryptography. Technical Report Memoranda 45-110-02. Murray Hill, NJ: Bell Labs.

Shannon, C. E. (1948a). A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423. doi: 10.1002/j.1538-7305.1948.tb01338.x

Shannon, C. E. (1948b). Letter to Norbert Wiener, October 13. In box 5-85, Norbert Wiener Papers. Cambridge, MA: MIT Archives.

Shannon, C. E., and Weaver, W. (1949). The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press.

Shiffman, D. (2012). The Nature of Code. Available at: http://natureofcode.com/book/

Shiffrin, R. M., and Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychol. Rev. 84, 127–190. doi: 10.1037/0033-295X.84.2.127

Simon, H. A. (1969). Designing organizations for an information-rich world. Int. Libr. Crit. Writ. Econ. 70, 187–202.

Simon, H. A. (1997). Allen Newell (1927-1992). Biographical Memoir. Washington DC: National Academies Press.

Srivastava, V., Feng, S. F., Cohen, J. D., Leonard, N. E., and Shenhav, A. (2017). A martingale analysis of first passage times of time-dependent Wiener diffusion models. J. Math. Psychol. 77, 94–110. doi: 10.1016/j.jmp.2016.10.001

Sternberg, S. (1969). The discovery of processing stages: extensions of Donders’ method. Acta Psychol. 30, 276–315. doi: 10.1016/0001-6918(69)90055-9

Stokes, D. E. (1997). Pasteur’s Quadrant – Basic Science and Technological Innovation. Washington DC: Brookings Institution Press.

Student (1908). The probable error of a mean. Biometrika 6, 1–25.

Swets, J. A., Tanner, W. P. Jr., and Birdsall, T. G. (1961). Decision processes in perception. Psychol. Rev. 68, 301–340. doi: 10.1037/h0040547

Taylor, F. V. (1949). Review of Cybernetics (or control and communication in the animal and the machine). Psychol. Bull. 46, 236–237. doi: 10.1037/h0051026

Treisman, A. M. (1960). Contextual cues in selective listening. Q. J. Exp. Psychol. 12, 242–248. doi: 10.1080/17470216008416732

Treisman, A. M., and Gelade, G. (1980). A feature-integration theory of attention. Cogn. Psychol. 12, 97–136. doi: 10.1016/0010-0285(80)90005-5

Tulving, E., and Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychol. Rev. 80, 352–373. doi: 10.1037/h0020071

Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proc. Lond. Math. Soc. s2-42, 230–265. doi: 10.1112/plms/s2-42.1.230

Van Meter, D., and Middleton, D. (1954). Modern statistical approaches to reception in communication theory. Trans. IRE Prof. Group Inform. Theory 4, 119–145. doi: 10.1109/TIT.1954.1057471

Velasco, M. A., Clemotte, A., Raya, R., Ceres, R., and Rocon, E. (2017). Human-computer interaction for users with cerebral palsy based on head orientation. Can cursor’s movement be modeled by Fitts’s law? Int. J. Hum. Comput. Stud. 106, 1–9. doi: 10.1016/j.ijhcs.2017.05.002

Verschure, P. F. M. J. (2016). “Consciousness in action: the unconscious parallel present optimized by the conscious sequential projected future,” in The Pragmatic Turn: Toward Action-Oriented Views in Cognitive Science, eds A. K. Engel, K. J. Friston, and D. Kragic (Cambridge, MA: MIT Press).

Wald, A. (1950). Statistical Decision Functions. New York, NY: John Wiley.

Waterson, P. (2011). World War II and other historical influences on the formation of the Ergonomics Research Society. Ergonomics 54, 1111–1129. doi: 10.1080/00140139.2011.622796

Wiener, N. (1921). The average of an analytic functional and the Brownian movement. Proc. Natl. Acad. Sci. U.S.A. 7, 294–298. doi: 10.1073/pnas.7.10.294

Wiener, N. (1948a). Cybernetics. Sci. Am. 179, 14–19. doi: 10.1038/scientificamerican1148-14

Wiener, N. (1948b). Cybernetics or Control and Communication in the Animal and the Machine. New York, NY: John Wiley.

Wiener, N. (1949). Extrapolation, Interpolation, and Smoothing of Stationary Time Series, with Engineering Applications. Cambridge, MA: Technology Press of the Massachusetts Institute of Technology.

Wiener, N. (1951). Homeostasis in the individual and society. J. Franklin Inst. 251, 65–68. doi: 10.1016/0016-0032(51)90897-6

Wiener, N. (1952). Cybernetics or Control and Communication in the Animal and the Machine. Cambridge, MA: The MIT Press.

Wiener, N. (1961). Cybernetics or Control and Communication in the Animal and the Machine, 2nd Edn. Cambridge, MA: MIT Press. doi: 10.1037/13140-000

Wixted, J. T. (2014). Signal Detection Theory. Hoboken, NJ: Wiley. doi: 10.1002/9781118445112.stat06743

Wootton, D. (2015). The Invention of Science: A New History of the Scientific Revolution. New York, NY: Harper.

Yates, F. (1964). Sir Ronald Fisher and the design of experiments. Biometrics 20, 307–321.

Yates, F., and Mather, K. (1963). Ronald Aylmer Fisher. Biogr. Mem. Fellows R. Soc. Lond. 9, 91–120. doi: 10.1098/rsbm.1963.0006

Keywords: cybernetics, small data, information age, information theory, scientific methods

Citation: Xiong A and Proctor RW (2018) Information Processing: The Language and Analytical Tools for Cognitive Psychology in the Information Age. Front. Psychol. 9:1270. doi: 10.3389/fpsyg.2018.01270

Received: 09 February 2018; Accepted: 03 July 2018; Published: 08 August 2018.

Copyright © 2018 Xiong and Proctor. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Robert W. Proctor, [email protected]; [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
