Discernment: A Framework for Human Oversight Capacity
Introducing discernment as a measurable construct for evaluating whether humans can maintain agency when working with advanced AI systems.
SiRelations is an AI safety research organization. We study how humans can maintain meaningful agency when working with systems that exceed human cognitive capabilities.
As AI systems grow more capable, the question is not whether humans remain "in the loop" but whether that presence translates into genuine agency. We conduct foundational research on discernment: the capacity to remain clear, responsible, and sovereign when collaborating with advanced AI.
Our work spans measurement, human factors, policy, and intervention design.
Developing empirical methods to assess whether humans can effectively exercise oversight in practice, not just in theory.
Studying the cognitive and contextual factors that enable or undermine human judgment when working with AI systems.
Bridging research findings to regulatory frameworks like the EU AI Act, where human oversight requirements lack measurement methods.
Translating foundational research into practical tools for organizations navigating AI governance requirements.
Regulatory frameworks now mandate human oversight of AI systems. The EU AI Act requires that human overseers be able to "correctly interpret" AI output and decide when to override it. Impact assessments are meant to identify risks before deployment.
But these frameworks assume capacities for which no established measurement methods exist. Compliance asks whether oversight exists. Our research asks whether the humans providing that oversight can do so meaningfully.
A focused assessment tool for organizations preparing for AI governance requirements.
How Article 14's oversight requirements create demand for measurement methods we don't yet have.