What if you could stress-test your experiment with AI before you walk into the lab?

Project Silicon

AI-Powered Subject Simulation in Behavioral Studies

A Rigorous Test of AI's Ability to Simulate Human Behavior

Project Silicon is a large-scale collaborative research initiative that assembles a diverse portfolio of new, unpublished behavioral experiments, runs AI simulations on each of them, and compares the simulated results against actual human data.

The project is developed in coordination with Management Science and is fully funded by institutional sources with no technology provider involvement.

Four Goals Driving the Project

01

Assess AI Accuracy

Determine whether AI can reliably simulate human responses across diverse, unpublished experiments.

02

Benchmark Models

Compare leading AI models—closed and open-source—on simulation accuracy.

03

Open-Source Toolkit

Release an open-source toolkit for AI-based experimental pre-testing.

04

Actionable Guidance

Provide practical guidelines for integrating AI simulation into experimental design.

Researchers Running New Behavioral Experiments

We're seeking researchers who are planning to conduct, but have not yet completed, behavioral experiments in the following fields:

Operations Management
Economics

If you expect to complete your experiment by January 2027, let us know!

Process & Timeline

Now – July 2026

Register & Share Materials

Register your interest. We assess suitability early, before you commit any time. Once your materials are finalized, share them via our secure portal.

July – August 2026

Registered Report Submission

We compile the Project Silicon paper in registered-report format and submit to Management Science for review, with feedback from all participants before submission.

September 2026 – January 2027

Simulation & Experimentation

We run AI simulations across multiple models using your materials. You run your experiment as planned—nothing changes.

January – March 2027

Comparison & Publication

We evaluate how closely each AI model’s simulated responses match actual human data. Results are submitted back to Management Science.

Benefits of Participating

Co-authorship

All participants will be secondary co-authors on the Project Silicon paper, which reports the aggregate findings, model benchmarks, and recommendations across the full portfolio of experiments.

Your AI Results

Receive the full AI simulation output for your experiment—showing how multiple models predicted your subjects' behavior.

Additional Simulation Support

We'll provide additional simulations for your subsequent experiments. You'll also get access to our simulation toolkit.

Results Event

Join a dedicated results-sharing event at the 2027 Michigan Workshop on Large Language Models (May 2027, Los Angeles, CA).

Maintaining the Methodological Firewall

The validity of this project depends on a strict separation between human and AI results.

Do not publish results before the simulation is complete

This includes working papers, slides, posters, and any results shared online. Pre-registrations and high-level descriptions are fine.

Do not expose the study design to an LLM

Do not enter your design specifications into any LLM, for any purpose. The AI models must have zero prior knowledge of your experiment.

Your Work Is Safe With Us

Confidentiality

Only the core PIs have access to your experimental materials. No other participant will ever see your work.

Data Security

All data is stored in a HIPAA-compliant University of Michigan computing environment.

Result Reporting

For the Project Silicon paper, we'll work with you to describe your experiment in a way that you are comfortable with.

No Cost

Participation requires no financial commitment. All project costs are covered by institutional funding.

No IRB Burden

The AI simulation involves no human subjects. Your existing IRB protocol requires no amendment.

Frequently Asked Questions

Will details of my experiment appear in the Project Silicon paper?
No. By default, the paper reports only aggregate statistics across all experiments. Individual details are included only if you explicitly consent. Your experimental design remains yours.

Could the Project Silicon paper be published before my own paper?
It's possible, given the registered-report format. However, there is no publication embargo for you. Once the simulation is complete, you are free to publish your results as you see fit.

What if my experiment produces a null result?
Null results are fully acceptable. For Project Silicon, the question is whether the AI matches the observed result, whatever that result is. Your results are the ground truth, and we are only assessing whether the AI can reach it.

How much time will participation require?
Minimal beyond your normal research activities. You submit your finalized materials, run your experiment as you would anyway, and share results upon completion. We may consult with you during simulation setup to ensure accuracy.

Do I need to run the AI simulations myself?
No. The PI team handles all aspects of the AI simulation, from agent construction to execution to analysis. You do not need to write code, configure models, or interact with any AI systems.

What if my experiment is not a good fit?
We screen for suitability early in the process, before you contribute your materials. If your experiment is not a good fit, you'll know before committing any time.

What if I would like my experiment highlighted in the paper?
We'd be happy to feature your experiment. Prior to publication, we'll work with you to ensure it is accurately described in the Project Silicon paper.

Principal Investigators

Andrew Davis

Cornell University

Steve Leider

University of Michigan

Andrew Wu

University of Michigan

Jing Wu

Chinese University of Hong Kong

Ready to Find Out What AI Would Predict?