
Sequential Design of Experiments (SDOE)

Experimenters often begin an experiment with imperfect knowledge of the underlying relationship they seek to model, and may have a variety of goals that they would like to accomplish with the experiment. In this chapter, we describe how sequential design of experiments can help make the best use of resources and improve the quality of learning. We describe the different types of space-filling designs that can help accomplish this, define basic terminology, and outline a common sequence of steps that is applicable to many experiments. We show the basics of the design types supported in the SDoE module, and provide examples to illustrate the methods.

A sequential design of experiments strategy allows for adaptive learning based on incoming results as the experiment is being run. The SDoE module in FOQUS allows the experimenter to flexibly incorporate this strategy into their experimental planning so that maximally relevant information is collected. Statistical design of experiments is an important strategy for improving the amount of information that can be gleaned from the overall experiment. It leverages principles of placing experimental runs where they are of maximum value, using the interdependence of the runs to estimate model parameters, and building in robustness to the variability of results that occurs when the same experimental conditions are repeated. There are two major categories of designed experiments: those for which a physical experiment is being run, and designs for a computer experiment where the output from a computer model (based on underlying science or engineering theory) is explored. There are also experimental situations where the goal is to collect data from both a physical experiment and the computer model, in order to compare them and to calibrate some of the computer model parameters to best match what is observed. The methods available in the SDoE module can be beneficial in all three of these cases: they present opportunities for accelerated learning through strategic selection and updating of experimental runs that can adapt to multiple goals.

The current version of the SDoE module has functionality that can produce flexible space-filling designs. Currently, two types of space-filling designs are supported:

Uniform Space Filling (USF) designs spread design points evenly, or uniformly, throughout the user-specified input space. These designs are common in physical and computer experiments where the goal is to collect data throughout the region. They are well suited to exploration, and to predicting results at a new input combination, as there will be some data available close by. To use the Uniform Space Filling design capability in the SDoE module, the only requirement is that the candidate set contains a column for each of the inputs and a row for each possible run. It is also recommended (but not required) to include an index column so that the user can track which rows of the candidate set are selected in the constructed design.
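
As an illustration, a minimal USF candidate file might be built as below. The input names, ranges, and grid spacing are hypothetical; only the column-per-input, row-per-run layout and the optional Index column reflect what the module expects.

```python
import csv

# Hypothetical candidate set for two inputs, Temperature (degrees C) and
# FlueGasRate (kg/hr): one column per input, one row per possible run,
# plus a recommended (but optional) Index column for tracking which
# rows end up in the constructed design.
temperatures = [200, 250, 300, 350, 400]
flue_gas_rates = [500, 750, 1000]

with open("usf_candidates.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Index", "Temperature", "FlueGasRate"])
    idx = 1
    for t in temperatures:
        for r in flue_gas_rates:
            writer.writerow([idx, t, r])
            idx += 1
```

A denser grid gives the design-construction algorithm more locations to choose from, at the cost of a larger candidate file.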

Non-Uniform Space Filling (NUSF) designs maintain the goal of having design points spread throughout the desired input space, but add the ability to emphasize some regions more than others. This adds flexibility to the experimentation: the user can tune the design to be as close to uniform as desired, or as strongly concentrated in one or more regions as desired. This newly developed capability, only recently introduced into the design of experiments literature, has been added to the SDoE module, and it gives the experimenter the ability to tailor the design to what is needed. To use the Non-Uniform Space Filling design capability in the SDoE module, the requirements are that the candidate set contains (a) one column for each of the inputs to be used to construct the design, and (b) one column for the weight to be assigned to each candidate point, where larger values are weighted more heavily and will result in a higher density of points close to those locations. An index column is again recommended, but not required.
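
A NUSF candidate set adds the weight column. In this hypothetical sketch the weighting rule (emphasizing higher temperatures) is purely illustrative; any column of positive values reflecting the desired emphasis would do.

```python
import csv

# Hypothetical NUSF candidate set: same layout as the USF case, plus a
# Weight column. Larger weights produce a higher density of selected
# design points near those candidates.
candidates = [(t, r) for t in range(200, 401, 50) for r in (500, 750, 1000)]

with open("nusf_candidates.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Index", "Temperature", "FlueGasRate", "Weight"])
    for i, (t, r) in enumerate(candidates, start=1):
        # Illustrative rule: weight grows linearly from 1 (at 200 C)
        # to 2 (at 400 C), so the design concentrates at high temperature.
        weight = 1 + (t - 200) / 200
        writer.writerow([i, t, r, round(weight, 2)])
```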


Comparison of USF and NUSF designs

Key features of both approaches available in this module are: a) designs are constructed by selecting from a user-provided candidate set of input combinations, and b) historical data that have already been collected can be integrated into the design construction, ensuring that new data are collected with a view to where data are already available.

Why Space-Filling Designs?

Space-filling designs are a design of experiments strategy that is well suited both to physical experiments with an accompanying model to describe the process and to computer experiments. The idea behind a space-filling design is that the design points are spread throughout the input space of interest. If the goal is to predict values of the response for a new set of input combinations within the ranges of the inputs, then having data spread throughout the space means that there should be an observed data point relatively close to where the new prediction is sought, regardless of the new location.
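
This nearest-point property can be checked numerically. The following sketch uses a made-up 3 × 3 grid design on the unit square and measures, for random "new" prediction locations, the distance to the nearest design point; a space-filling design keeps this worst-case distance small.

```python
import math
import random

def nearest_distance(point, design):
    """Distance from a new point to its closest design point."""
    return min(math.dist(point, d) for d in design)

# Illustrative 3x3 grid design on [0, 1] x [0, 1].
design = [(x / 2, y / 2) for x in range(3) for y in range(3)]

random.seed(0)
new_points = [(random.random(), random.random()) for _ in range(1000)]
worst = max(nearest_distance(p, design) for p in new_points)
# For this grid, no point in the square is farther than sqrt(2)/4
# (about 0.354) from a design point.
print(f"worst-case nearest distance: {worst:.3f}")
```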

In addition, if there is a model for the process, then having data spread throughout the input space means that the consistency of the model to the observed data can be evaluated at multiple locations to look for possible discrepancies and to quantify the magnitude of those differences throughout the input space.

Hence, for a variety of criteria, a space-filling design can serve as a good choice for exploration and for understanding the relationship between the inputs and the response without making a large number of assumptions about the nature of the underlying relationship. As we will see in subsequent sections and examples, the sequential approach allows great flexibility to leverage what has been learned in early stages to influence the later choices of designs. In addition, the candidate-based approach supported in this module makes the space-filling approach easier to adapt to design space constraints and to specialized design objectives that may evolve through the stages of the sequential design.

We begin with some basic terminology that will help provide structure to the process and instructions below.

  • Input factors – these are the controllable experimental settings that are manipulated during the experiment. It is important to carefully define the ranges of interest for the inputs (e.g., Temperature in [200°C, 400°C]) as well as any logistical or operational constraints on these input factors (e.g., Flue Gas Rate < 1000 kg/hr when Temperature > 350°C).
  • Input combinations (or design runs) – these are the choices of settings for each of the input factors for a particular run of the experiment. It is assumed that the implementers of the experiment are able to set the input factors to the desired operating conditions to match the prescribed choice of settings. It is not uncommon for the experimenter to not have perfect control of the input settings, but in a designed experiment, it is important to have a target value for each input and also to record the observed value if in fact it is different than what was intended. This allows for more precise estimation of the model and improved prediction.
  • Input space (or design space) – the region of interest for the input factors in which the experiment will be run. This is typically constructed by combining the individual input factor ranges, and then adapting the region to take into account any constraints. Any suggested runs of the experiment will be located in this region. The candidate set of runs used by the SDoE module should provide coverage of all regions of this desired input space.
  • Responses (or outputs) – these are the measured results obtained from each experimental run. Ideally, these are quantitative summaries (measured by a numeric value or possibly a vector of numeric values) of a characteristic of interest resulting from running the process at the prescribed set of operating conditions (e.g., CO2 capture efficiency is a typical response of interest for CCSI).
  • Design criterion / Utility function – this is a mathematical expression of the goal (or goals) of the experiment that is used to guide the selection of new input combinations, based on the prior information before the start of the experiment and during the running of the experiment. The design criterion can be based on a single goal or multiple competing goals, and can be either static throughout the experiment or evolve as goals change in importance over the course of the experiment. Common choices of goals for the experiment are:
      • exploring the region of interest,
      • improving the precision (or reducing the uncertainty) in the estimation of model parameters,
      • improving the precision of prediction for new observations in the design region,
      • assessing and quantifying the discrepancy between the model and data, or
      • optimizing the value of responses of interest.
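
The input-space and constraint definitions above can be made concrete in code. This sketch builds a candidate set from individual factor ranges and then removes combinations that violate the example constraint from the Input factors bullet (Flue Gas Rate < 1000 kg/hr when Temperature > 350°C); the grid spacings are arbitrary illustrative choices.

```python
# Individual input factor ranges, discretized on a grid (illustrative).
temperatures = range(200, 401, 25)      # degrees C
flue_gas_rates = range(500, 1251, 250)  # kg/hr

def feasible(temp, rate):
    """True if the input combination satisfies the operating constraint:
    Flue Gas Rate must stay below 1000 kg/hr when Temperature > 350 C."""
    if temp > 350 and rate >= 1000:
        return False
    return True

# The candidate set covers the constrained input (design) space.
candidates = [(t, r) for t in temperatures for r in flue_gas_rates
              if feasible(t, r)]
print(f"{len(candidates)} feasible candidates out of "
      f"{len(temperatures) * len(flue_gas_rates)}")
```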

An ideal design of experiments strategy uses the design criterion to evaluate potential choices of input combinations, selecting those that maximize the improvement in the criterion over the available candidates. If the design strategy is sequential, then the goal is to use early results from the beginning of the experiment to guide the choice of new input combinations based on what has already been learned about the responses.
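
As an illustration of candidate-based selection with a simple criterion, here is a greedy maximin-distance sketch: each new run is the candidate farthest from everything already run, with historical points included so that new data account for where data already exist. The candidate grid and historical runs are made up, and this is an illustration of the idea, not the SDoE module's actual algorithm.

```python
import math

def greedy_maximin(candidates, existing, n_new):
    """Pick n_new candidates one at a time, each maximizing the distance
    to the nearest already-chosen (or historical) point."""
    chosen = list(existing)
    new_runs = []
    for _ in range(n_new):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: min(math.dist(c, p) for p in chosen))
        chosen.append(best)
        new_runs.append(best)
    return new_runs

# Candidate grid on [0, 1]^2 and two hypothetical historical runs.
candidates = [(x / 4, y / 4) for x in range(5) for y in range(5)]
historical = [(0.0, 0.0), (0.25, 0.25)]
print(greedy_maximin(candidates, historical, 3))
```

Because the historical runs sit in the lower-left corner, the first selected run lands in the opposite corner, exactly the "account for where data are already available" behavior described above.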

Matching the Design Type to Experiment Goals

At different stages of the sequential design of experiments, different objectives are common. We outline a common progression of objectives for experiments that we have worked with in the CCSI project. Typically, an initial pilot study is conducted to show that the right data can be collected and that measurements can be made with the required precision. Often no designed experiment is used for this small study as it is just to establish viability to proceed.


SDOE sequence of steps

Once the viability of the experimental set-up and measurement system has been established, it is common to proceed to the next step of exploration. This is appropriate if little is known about the response and its characteristics. Hence, a first experiment may have the goal of gaining some preliminary understanding of the characteristics of the response across the input region of interest. Depending on how easy it is to collect and process data, this exploration might be done in a single first experiment, or there may be opportunities to do several smaller stages (this is shown in the figure above with the recursive arrow). It is particularly beneficial to do the exploration step in smaller stages if there is uncertainty about what areas of the input space are feasible. This can help save resources by exploring slowly and eliminating regions where there are problems.

After initial exploration, a common next step in the sequence of experiments is model building or model refinement. For many CCSI experiments, the physical experiments are being collected in conjunction with an underlying science-based model. If a model does not already exist, then one might be developed based on the initial data collected in the previous stage. If a model already exists, then it can be refined by collecting new data where (a) there is maximum uncertainty in prediction, or (b) there are discrepancies between the data and the model. In this way, the data collection from a physical experiment is used to calibrate the model and provide feedback about where model performance needs improvement (both resolving inaccurate characterization of features and reducing high uncertainty). Often after the first set of data, some regions of the input space perform well, while others have issues. It is ideal to target new data in regions where it can be most beneficially used to improve the model.
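
The refinement criterion in (a), collecting new data where prediction uncertainty is largest, can be sketched with a toy one-dimensional Gaussian-process surrogate. The kernel, length scale, and data locations below are all illustrative assumptions, not the SDoE module's implementation.

```python
import math

def rbf(a, b, length=0.3):
    """Squared-exponential correlation between two input values."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting
    (fine for the tiny systems used here)."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def posterior_variance(x, data_x, noise=1e-6):
    """GP predictive variance at x given runs already made at data_x."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(data_x)]
         for i, a in enumerate(data_x)]
    k = [rbf(x, a) for a in data_x]
    v = solve(K, k)                          # v = K^{-1} k
    return rbf(x, x) - sum(ki * vi for ki, vi in zip(k, v))

data_x = [0.0, 0.4, 1.0]                     # inputs already run (made up)
candidates = [i / 20 for i in range(21)]     # candidate inputs on [0, 1]
next_x = max(candidates, key=lambda x: posterior_variance(x, data_x))
print(f"next run at x = {next_x:.2f}")       # lands in the widest data gap
```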

After the experimenter has confidence in the model, it can then be used for optimization. This involves using the model to predict regions with desirable values of the response(s) of interest. Often the experiments associated with this stage focus on a smaller region of the input space close to where the optimum lies. The final stage, confirmation, is often a very small experiment located right at the location where the model says the response is optimal. The goal of this stage is to verify that the results predicted by the model match what is observed from experimental data. As with the pilot study, this final stage often involves only a small number of runs and no formal designed experiment is run.
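
The optimization step, ranking candidate input combinations by the model's predicted response, can be sketched as follows. The quadratic surrogate and its optimum are invented for illustration; in practice the predictions would come from the calibrated science-based model.

```python
def predicted_capture_efficiency(temp, rate):
    """Hypothetical fitted model of CO2 capture efficiency (%), with an
    invented optimum near (325 C, 800 kg/hr)."""
    return 95 - 0.002 * (temp - 325) ** 2 - 0.0001 * (rate - 800) ** 2

# Rank a candidate grid by predicted response; the next (smaller)
# experiment would then focus near the best-scoring combinations.
candidates = [(t, r) for t in range(200, 401, 25)
              for r in range(500, 1251, 50)]
best = max(candidates, key=lambda c: predicted_capture_efficiency(*c))
print(f"predicted optimum at {best}")
```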

We now illustrate these stages with a simple example involving 2 inputs where the candidate set fills a rectangular region defined by the range of each input. In the first stage, the two pilot study runs (orange dots) are used to establish viability of the test method and measurement system. The second stage, an initial exploratory experiment (six blue dots), spreads the points throughout the defined region of interest. Here we start to see the benefit of using a sequential approach, as the blue dots take into account the locations where the orange pilot data were collected.


SDoE Pilot study (orange) and Exploration (blue) stage

Based on this exploration, it may be discovered that one portion of the region (top right) is not viable for data collection, or is not desirable for the observed response values. Hence, in future experiments no data should be collected here. At this point, an initial model is constructed to combine what is known from the experimental data with the underlying science.


New Constraint added (dashed black line)

In the next stage of experimentation, some additional runs are added (red dots) that are used for model refinement. These are placed in regions where there is larger uncertainty in the model predictions, and also seek to fill in empty space.


Model Refining stage of experimentation (red dots)

With the updated model based on the additional data, a region where good response values are possible is identified. This becomes the focus of another experiment for optimizing the response. The oval indicates the region of desirable responses, and the three green dots indicate the new input combinations collected to provide additional information.


The optimal region for the responses (oval) with additional runs (green dots)

The final data collection involves two confirmation runs (black dots) at the identified optimal location to verify that results are observed to match what the model predicts.


SDOE confirmation runs (black dots)

To conclude this example, we illustrate the power of the sequential approach to collecting data. In the figure below, we show the 18 runs collected with the sequential approach (on the left) and a typical 18-run space-filling design (on the right). Both experiments have the same total budget, but the sequential approach avoids placing much data in the undesirable top right corner and concentrates much more data close to where the overall optimal combination of inputs is located.


A comparison of two 18-run experiments: on the left, the sequential approach; on the right, the single-experiment approach.
