AI research that puts human oversight at the frontier

SiRelations is an AI safety research organization. We study how humans can maintain meaningful agency when working with systems that exceed human cognitive capabilities.

Making human oversight meaningful

As AI systems grow more capable, the question isn't whether humans remain "in the loop" but whether that presence translates to genuine agency. We conduct foundational research on discernment: the capacity to remain clear, responsible, and sovereign when collaborating with advanced AI.

The measurement gap

Regulatory frameworks now mandate human oversight of AI systems. The EU AI Act requires that the humans overseeing high-risk systems can "correctly interpret" their output and decide when to override it, and mandates impact assessments to identify risks before deployment.

But these frameworks assume human capacities for which no established measurement methods exist. Compliance asks whether oversight is in place. Our research asks whether the humans providing it can exercise it meaningfully.

Work with us

We partner with organizations preparing for meaningful AI governance.

Get in touch