Research

FutureLabs exists to explore one civilizational question: when humans live alongside AI agents, will they be competitors or collaborators?

“Humans and AI are not inherently competitors or collaborators — the outcome depends on the choices we make now about design, governance, and culture.”

Year 1 Research Agenda

Five core questions that anchor our intellectual identity. Each produces a public artifact — a paper, prototype, or essay collection — that we share with collaborators and the world.

RQ1: Governance and accountability in human-AI organizations

Effort: High · Quarter: Q3

What governance and accountability structures best enable productive human-AI collaboration?

Why it matters: As AI agents take on autonomous roles, we lack established norms, legal frameworks, or institutional designs that define responsibility, trust, and value distribution. The vacuum is being filled ad hoc.
Output: White paper: Governance Primitives Framework — covering accountability chains, decision authority, audit mechanisms, and incentive alignment.

RQ2: Trust calibration between humans and AI

Effort: Medium-High · Quarter: Q1

How do humans build, calibrate, and repair trust with AI agents over time?

Why it matters: Trust is the primary bottleneck for meaningful AI adoption. We have almost no empirical understanding of how trust develops and breaks between humans and specific AI systems.
Output: Behavioral research report and essay series on trust dynamics, with design recommendations for AI systems that communicate reliability honestly.

RQ3: Human capabilities in an AI-augmented world

Effort: Medium · Quarter: Q1

What cognitive and social capabilities become more valuable for humans as AI handles more cognitive labor?

Why it matters: Displacement narratives dominate public discussion. The more interesting question is complementarity: which uniquely human capabilities are amplified through AI partnership, and which atrophy?
Output: Skills Landscape report — mapping emerging human-AI complementarities across knowledge work domains.

RQ4: AI communication of uncertainty and disagreement

Effort: Medium · Quarter: Q2

How should AI agents represent uncertainty, disagreement, and limitations to human collaborators?

Why it matters: Current AI systems either over-project confidence or hedge into uselessness. The communication interface between AI and human collaborators is underdeveloped.
Output: Prototype communication interface + Human-AI Communication Design Guide.

RQ5: Meaningful work in an AI-augmented economy

Effort: Medium · Quarter: Q4

What does 'meaningful work' look like for humans in an AI-augmented economy?

Why it matters: Our frameworks for professional purpose and identity were built for human-only labor. As AI assumes more cognitive work, people face genuine existential uncertainty about where their contribution lies.
Output: Essay collection and conceptual framework for human purpose and meaning-making in AI-augmented contexts.

Year 1 Sequencing

Q1: Trust (RQ2) + Complementarity (RQ3) — empirical foundation
Q2: AI Communication Design (RQ4) — prototype + guide
Q3: Governance Frameworks (RQ1) — major white paper
Q4: Meaningful Work (RQ5) — public essay collection

Interested in collaborating?

Join the waitlist and be part of the research shaping the human-AI future.