
Merge pull request #4 from AutoResearch/final-edits
Final edits
musslick authored Dec 5, 2024
2 parents 13da692 + ad3f88c commit cdd6b7b
Showing 3 changed files with 8 additions and 10 deletions.
4 changes: 1 addition & 3 deletions paper.bib
@@ -11,6 +11,7 @@ @inproceedings{musslick_evaluation_2023
volume = {45},
author = {Musslick, Sebastian and Hewson, Joshua TS and Andrew, Benjamin W and Strittmatter, Younes and Williams, Chad C and Dang, George T and Dubova, Marina and Holland, John Gerrard},
year = {2023},
pages = {1386--1392},
note = {Issue: 45},
}

@@ -90,9 +91,6 @@ @article{musslick2024perspective
title = {Automating the Practice of Science--Opportunities, Challenges, and Implications},
journal = {Proceedings of the National Academy of Sciences},
year = {in press},
eprint = {2409.05890},
archivePrefix = {arXiv},
primaryClass = {cs.AI},
}


14 changes: 7 additions & 7 deletions paper.md
@@ -55,21 +55,21 @@ bibliography: paper.bib

# Summary

Automated Research Assistant (`autora`) is a Python package for automating and integrating empirical research processes, such as experimental design, data collection, and model discovery. With this package, users can define an empirical research problem and specify the methods they want to employ for solving it. `autora` is designed as a declarative language in that it provides a vocabulary and set of abstractions to describe and execute scientific processes and to integrate them into a closed-loop system for scientific discovery. The package interfaces with computational approaches to scientific discovery, including `scikit-learn` estimators for scientific model discovery, `sweetpea` for automated experimental design, `firebase_admin` for automated behavioral data collection, and `autodoc` for automated documentation of the empirical research process. While initially developed for the behavioral sciences, `autora` is designed as a general framework for closed-loop scientific discovery, with applications in other empirical disciplines. Use cases of `autora` include the execution of closed-loop empirical studies [@musslick2024], the benchmarking of scientific discovery algorithms [@hewson_bayesian_2023; @weinhardt2024computational], and the implementation of metascientific studies [@musslick_evaluation_2023].
Automated Research Assistant (`autora`) is a Python package for automating and integrating empirical research processes, such as experimental design, data collection, and model discovery. With this package, users can define an empirical research problem and specify the methods they want to employ for solving it. `autora` is designed as a declarative language in that it provides a vocabulary and set of abstractions to describe and execute scientific processes and to integrate them into a closed-loop system for scientific discovery. The package interfaces with other tools for automating scientific practices, such as `scikit-learn` for model discovery, `sweetpea` and `sweetbean` for experimental design, `firebase_admin` for executing web-based experiments, and `autodoc` for documenting the empirical research process. While initially developed for the behavioral sciences, `autora` is designed as a general framework for closed-loop scientific discovery, with applications in other empirical disciplines. Use cases of `autora` include the execution of closed-loop empirical studies [@musslick2024], the benchmarking of scientific discovery algorithms [@hewson_bayesian_2023; @weinhardt2024computational], and the implementation of metascientific studies [@musslick_evaluation_2023].

# Statement of Need
The pace of empirical research is constrained by the rate at which scientists can alternate between the design and execution of experiments, on the one hand, and the derivation of scientific knowledge, on the other hand [@musslick2024perspective]. However, attempts to increase this rate can compromise scientific rigor, leading to lower quality of formal modeling, insufficient documentation, and non-replicable findings. `autora` aims to surmount these limitations by formalizing the empirical research process and automating the generation, estimation, and empirical testing of scientific models. By providing a declarative language for empirical research, `autora` offers greater transparency and rigor in empirical research while accelerating scientific discovery. While existing scientific computing packages solve individual aspects of empirical research, there is no workflow mechanism for integrating them into a single pipeline, e.g., to enable closed-loop experiments. `autora` offers such a workflow mechanism, integrating Python packages for automating specific aspects of the empirical research process.

![The `autora` framework. (A) `autora` workflow, as applied in a behavioral research study. `autora` implements components (colored boxes; see text) that can be integrated into a closed-loop discovery process. Workflows expressed in `autora` depend on modules for individual scientific tasks, such as designing behavioral experiments, executing those experiments, and analyzing collected data. (B) `autora`’s components acting on the state object. The state object maintains relevant scientific data, such as experimental conditions X, observations Y, and models, and can be modified by `autora` components. Here, the cycle begins with an experimentalist adding experimental conditions $x_1$ to the state. The experiment runner then executes the experiment and collects corresponding observations $y_1$. The cycle concludes with the theorist computing a model that relates $x_1$ to $y_1$.\label{fig:overview}](figure.png)
![The `autora` framework illustrated for closed-loop behavioral research. (A) Exemplary `autora` workflow. `autora` implements components (colored boxes; see text) that can be integrated into a closed-loop discovery process. Workflows expressed in `autora` depend on modules for individual scientific tasks, such as designing behavioral experiments, executing those experiments, and analyzing collected data. (B) `autora`’s components acting on the state object. The state object maintains relevant scientific data, such as experimental conditions X, observations Y, and models, and can be modified by `autora` components. Here, the cycle begins with an experimentalist adding experimental conditions $x_1$ to the state. The experiment runner then executes the experiment and collects corresponding observations $y_1$. The cycle concludes with the theorist computing a model that relates $x_1$ to $y_1$.\label{fig:overview}](figure.png)
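The state-based pattern in \autoref{fig:overview}B can be sketched in a few lines of plain Python. The sketch below is illustrative only: it uses a hand-rolled dataclass rather than `autora`'s own state classes, and the field and function names are hypothetical stand-ins for the real API.

```python
# Minimal sketch of the state-based pattern, in plain Python rather than with
# autora's own classes; names and structure here are illustrative assumptions.
from dataclasses import dataclass, field, replace
from typing import Any, List

import pandas as pd


@dataclass(frozen=True)
class State:
    conditions: pd.DataFrame = field(default_factory=pd.DataFrame)    # X
    observations: pd.DataFrame = field(default_factory=pd.DataFrame)  # X paired with Y
    models: List[Any] = field(default_factory=list)                   # fitted models


# Each component is a pure function: it reads from the state and returns an
# updated copy, which is what allows components to be chained into a loop.
def add_conditions(state: State, new_conditions: pd.DataFrame) -> State:
    return replace(state, conditions=new_conditions)


# Usage: seed an empty state and let a component add conditions to it.
state = add_conditions(State(), pd.DataFrame({"luminosity": [0.2, 0.8]}))
```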

# Overview and Components
The `autora` framework implements and interfaces with components automating different phases of the empirical research process (\autoref{fig:overview}A). These components include *experimentalists* for automating experimental design, *experiment runners* for automating data collection, and *theorists* for automating scientific model discovery. To illustrate each component, we consider an exemplary behavioral research study (cf. \autoref{fig:overview}) that examines the probability of human participants detecting a visual stimulus as a function of its intensity.
The `autora` framework implements and interfaces with components automating different phases of the empirical research process (\autoref{fig:overview}A). These components include *experimentalists* for automating experimental design, *experiment runners* for automating data collection, and *theorists* for automating scientific model discovery. To illustrate each component, we consider an exemplary behavioral research study (cf. \autoref{fig:overview}) that examines the probability of human participants detecting a visual stimulus as a function of its luminosity.

*Experimentalist* components take the role of a research design expert, determining the next iteration of experiments to be conducted. Experimentalists are functions that identify experimental conditions which can be subjected to measurement by experiment runners, such as different levels of stimulus intensity. To determine these conditions, experimentalists may use information about candidate models obtained from theorist components, experimental conditions that have already been probed, or respective observations. The `autora` framework offers various experimentalist packages, each for determining new conditions based on, for example, novelty, prediction uncertainty, or model disagreement [@musslick_evaluation_2023; @dubova_against_2022].
*Experimentalist* components take the role of a research design expert, determining the next iteration of experiments to be conducted. Experimentalists are functions that identify experimental conditions which can be subjected to measurement by experiment runners, such as different levels of stimulus luminosity. To determine these conditions, experimentalists may use information about candidate models obtained from theorist components, experimental conditions that have already been probed, or respective observations. The `autora` framework offers various experimentalist packages, each for determining new conditions based on, for example, novelty, prediction uncertainty, or model disagreement [@musslick_evaluation_2023; @dubova_against_2022].
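As an illustration of the experimentalist interface described above, the sketch below proposes new luminosity conditions by uniform random sampling. It mirrors only the input/output convention (a `pandas` dataframe of conditions); it is not one of the packaged `autora` experimentalists, and its name and parameters are invented for the example.

```python
# Illustrative sketch of an experimentalist: a function that proposes new
# experimental conditions as a dataframe. autora ships its own experimentalist
# packages (e.g., novelty- or uncertainty-based); this random pooler only
# mimics their role in the workflow.
from typing import Optional, Tuple

import numpy as np
import pandas as pd


def random_pool_experimentalist(
    num_samples: int = 10,
    luminosity_range: Tuple[float, float] = (0.0, 1.0),
    rng: Optional[np.random.Generator] = None,
) -> pd.DataFrame:
    """Sample candidate luminosity levels uniformly from the allowed range."""
    rng = rng or np.random.default_rng()
    luminosities = rng.uniform(*luminosity_range, size=num_samples)
    return pd.DataFrame({"luminosity": luminosities})
```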

*Experiment runner* components correspond to research technicians collecting data from an experiment. They are implemented as functions that accept experimental conditions as input (e.g., a `pandas` dataframe with columns representing different experimental variables) and produce collected observations as output (e.g., a `pandas` dataframe with columns representing different experimental variables along with corresponding measurements). `autora` (4.0.0) provides experiment runners for two types of automated data collection: real-world and synthetic. Real-world experiment runners include interfaces for collecting data in the real world. For example, the `autora` framework offers experiment runners for automating the data collection from web-based experiments for behavioral research studies [@musslick2024]. In the behavioral experiment described above, an experiment runner may set up a web-based experiment that measures the probability of human participants detecting visual stimuli of different intensities. These runners interface with external components including recruitment platforms (e.g, Prolific; @palan_prolific_2018) for coordinating the recruitment of participants, databases (e.g., Google Firestore) for storing collected observations, and web servers for hosting the experiments (e.g., Google Firebase). Synthetic experiment runners act as simulators for real-world experiments: they specify the data-generating process and collect observations from it. For example, ``autora-synthetic`` implements established models of human information processing (e.g, for perceptual discrimination) and conducts experiments on them. These synthetic experiments serve multiple purposes, such as testing and benchmarking `autora` components before applying them in the real-world [@musslick2024] or conducting computational metascience studies [@musslick_evaluation_2023].
*Experiment runner* components correspond to research technicians collecting data from an experiment. They are implemented as functions that accept experimental conditions as input (e.g., a `pandas` dataframe with columns representing different experimental variables) and produce collected observations as output (e.g., a `pandas` dataframe with columns representing different experimental variables along with corresponding measurements). `autora` (4.2.0) provides experiment runners for two types of automated data collection: real-world and synthetic. Real-world experiment runners include interfaces for collecting data in the real world. For example, the `autora` framework offers experiment runners for automating the data collection from web-based experiments for behavioral research studies [@musslick2024]. In the behavioral experiment described above, an experiment runner may set up a web-based experiment that measures the probability of human participants detecting visual stimuli with varying luminosities. These runners interface with external components including recruitment platforms (e.g., Prolific; @palan_prolific_2018) for coordinating the recruitment of participants, databases (e.g., Google Firestore) for storing collected observations, and web servers for hosting the experiments (e.g., Google Firebase). Synthetic experiment runners act as simulators for real-world experiments: they specify the data-generating process and collect observations from it. For example, ``autora-synthetic`` implements established models of human information processing (e.g., for perceptual discrimination) and conducts experiments on them. These synthetic experiments serve multiple purposes, such as testing and benchmarking `autora` components before applying them in the real world [@musslick2024] or conducting computational metascience studies [@musslick_evaluation_2023].
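The synthetic variant can be illustrated with a simple simulator of the example study: detection probability follows a logistic psychometric function of luminosity. The function, its parameters, and the column names below are invented for illustration; they are not the ground-truth models shipped with `autora-synthetic`.

```python
# Illustrative sketch of a synthetic experiment runner: it takes a dataframe of
# conditions and returns those conditions with simulated observations attached.
# The psychometric parameters (threshold, slope) are made-up example values.
import numpy as np
import pandas as pd


def synthetic_detection_experiment(
    conditions: pd.DataFrame,
    threshold: float = 0.5,
    slope: float = 10.0,
    n_trials: int = 50,
    rng: np.random.Generator = None,
) -> pd.DataFrame:
    """Simulate the detection rate for each luminosity condition."""
    rng = rng or np.random.default_rng()
    # Logistic psychometric function: probability of detection given luminosity.
    p_detect = 1.0 / (1.0 + np.exp(-slope * (conditions["luminosity"] - threshold)))
    # Binomial sampling over n_trials simulated presentations per condition.
    detections = rng.binomial(n=n_trials, p=p_detect) / n_trials
    observations = conditions.copy()
    observations["detection_rate"] = detections
    return observations
```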

*Theorist* components embody the role of a computational scientist, employing modeling techniques to find a model that best characterizes, predicts, and/or explains the study’s observations. Theorists may identify different types of scientific models (e.g., statistical, mathematical, or computational) implemented as `scikit-learn` estimators [@pedregosa2011scikit]. In case of the behavioral research study, a model may correspond to a psychophysical law relating stimulus intensity to the probability of detecting the stimulus. `autora` provides interfaces for various equation discovery methods that are implemented as `scikit-learn` estimators, such as deep symbolic regression [@petersen2021deep; @landajuela_unified_2022], `PySR` [@cranmer_discovering_2020], and the Bayesian Machine Scientist [@guimera_bayesian_2020; @hewson_bayesian_2023]. Alternatively, a model may correspond to a fine-tuned large language model [@binz2024centaur], enabling its automated alignment with human behavior from web-based experiments. A model is generated by fitting experimental data. Accordingly, theorists take as input a `pandas` dataframe specifying experimental conditions (instances of experimental variables) along with corresponding observations to fit a respective model. The model can then be used to generate predictions, e.g., to inform the design of a subsequent experiment.
*Theorist* components embody the role of a computational scientist, employing modeling techniques to find a model that best characterizes, predicts, and/or explains the study’s observations. Theorists may identify different types of scientific models (e.g., statistical, mathematical, or computational) implemented as `scikit-learn` estimators [@pedregosa2011scikit]. In the case of the behavioral research study, a model may correspond to a psychophysical law relating stimulus luminosity to the probability of detecting the stimulus. `autora` provides interfaces for various equation discovery methods that are implemented as `scikit-learn` estimators, such as deep symbolic regression [@petersen2021deep; @landajuela_unified_2022], `PySR` [@cranmer_discovering_2020], and the Bayesian Machine Scientist [@guimera_bayesian_2020; @hewson_bayesian_2023]. Alternatively, a model may correspond to a fine-tuned large language model [@binz2024centaur], enabling its automated alignment with human behavior from web-based experiments. A model is generated by fitting experimental data. Accordingly, theorists take as input a `pandas` dataframe specifying experimental conditions (instances of experimental variables) along with corresponding observations to fit a respective model. The model can then be used to generate predictions, e.g., to inform the design of a subsequent experiment.
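To make the theorist interface concrete, the sketch below wraps a fixed-form psychometric curve fit as a `scikit-learn`-style estimator and chains it with the experimentalist and synthetic runner sketched earlier into a minimal closed loop. Real `autora` theorists wrap equation-discovery methods instead of a fixed functional form; the estimator, loop, and column names here are illustrative assumptions.

```python
# Illustrative theorist: a scikit-learn-compatible estimator that fits a
# logistic psychometric curve, plus a minimal closed loop over the components
# sketched above (random_pool_experimentalist, synthetic_detection_experiment).
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
from sklearn.base import BaseEstimator, RegressorMixin


def _psychometric(x, threshold, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))


class PsychometricTheorist(BaseEstimator, RegressorMixin):
    """Fits detection probability as a logistic function of luminosity."""

    def fit(self, X, y):
        x = np.asarray(X).ravel()
        (self.threshold_, self.slope_), _ = curve_fit(
            _psychometric, x, np.asarray(y), p0=[0.5, 5.0]
        )
        return self

    def predict(self, X):
        return _psychometric(np.asarray(X).ravel(), self.threshold_, self.slope_)


# Minimal closed loop: propose conditions, run the synthetic experiment,
# accumulate observations, and refit the model on each cycle.
theorist = PsychometricTheorist()
collected = []
for cycle in range(3):
    conditions = random_pool_experimentalist(num_samples=10)
    collected.append(synthetic_detection_experiment(conditions))
    data = pd.concat(collected, ignore_index=True)
    theorist.fit(data[["luminosity"]], data["detection_rate"])
    print(cycle, theorist.threshold_, theorist.slope_)
```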

# Design Principles and Packaging
`autora` was designed as a general framework aimed at democratizing the automation of empirical research across the scientific community. Key design decisions were: 1) using a functional paradigm for the components and 2) splitting components across Python namespace packages.
@@ -85,7 +85,7 @@ Accordingly, the components share their interface – every component loads data
The `autora` framework presumes that each component is distributed as a separate package but in a shared namespace, and that `autora-core` – which provides the state – has very few dependencies of its own. For users, separate packages minimize the time and storage required for an install of an `autora` project. For contributors, they reduce the incidence of dependency conflicts (a common problem for projects with many dependencies) by reducing the likelihood that the library they need has an existing conflict in `autora`. This separation also allows contributors to develop and maintain modules independently, fostering ownership of and responsibility for their contributions. External contributors can request to have packages vetted and included as an optional dependency in the `autora` package.
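The namespace layout can be illustrated as follows; the distribution and module names are given only as examples of the pattern and may differ across `autora` versions.

```python
# Hedged illustration of the namespace-package layout: each component is
# installed from its own distribution yet imported from the shared `autora`
# namespace. Treat these paths as examples of the pattern, not as a canonical
# API reference.
from autora.state import StandardState        # provided by the autora-core distribution
from autora.theorist.bms import BMSRegressor  # provided by a separate theorist distribution
```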

# Acknowledgements
The AutoRA framework is developed and maintained by members of the Autonomous Empirical Research Group. S. M., B. A., C. C. W., J. T. S. H., and Y. S. were supported by the Carney BRAINSTORM program at Brown University. S. M. also received support from Schmidt Science Fellows, in partnership with the Rhodes Trust. The development of auxiliary packages for AutoRA, such as `autodoc`, is supported by Schmidt Sciences, LLC. as part of the Virtual Institute for Scientific Software (VISS). The AutoRA package was developed using computational resources and services at the Center for Computation and Visualization, Brown University.
The AutoRA framework is developed and maintained by members of the Autonomous Empirical Research Group. S. M., B. A., C. C. W., J. T. S. H., and Y. S. were supported by the Carney BRAINSTORM program at Brown University. S. M. also received support from Schmidt Science Fellows, in partnership with the Rhodes Trust. The development of auxiliary packages for AutoRA, such as `autodoc` or `autora-experiment-server`, is supported by the Virtual Institute for Scientific Software (VISS) as part of Schmidt Sciences, LLC. The AutoRA package was developed using computational resources and services at the Center for Computation and Visualization, Brown University.

# References

Binary file modified paper.pdf
Binary file not shown.
