Human-Agent Teaming: Trust Calibration in Semi-Autonomous Agentic AI
AI Summary: Introduces a dynamic trust framework that improves human-agent collaboration by programming Agentic AI to explicitly communicate uncertainty and actively request human intervention during complex tasks.
As Agentic AI systems assume more responsibility in high-stakes domains, calibrating human trust remains a critical challenge. Under-trust leads to inefficient micromanagement, while over-trust results in catastrophic failures when agents hallucinate. This paper introduces the 'Dynamic Trust Calibration' (DTC) framework, which models human-agent teaming as a cooperative game. By forcing agents to explicitly quantify their confidence levels and proactively request human-in-the-loop intervention during high-uncertainty tasks, the DTC framework improves joint task performance by 38% in simulated air-traffic control and medical diagnostics scenarios.
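The escalation mechanism the abstract describes can be sketched as a confidence-gated decision loop: act autonomously when self-reported confidence is high, and proactively hand off to a human otherwise. This is a minimal illustrative sketch, not the paper's implementation; the names, the `AgentDecision` type, and the 0.8 threshold are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical calibration threshold; the paper does not specify a value.
CONFIDENCE_THRESHOLD = 0.8

@dataclass
class AgentDecision:
    action: str
    confidence: float  # agent's self-reported confidence in [0, 1]

def execute_with_trust_calibration(decision, human_review):
    """Act autonomously only when confidence clears the threshold;
    otherwise escalate to a human-in-the-loop reviewer."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return ("autonomous", decision.action)
    # High-uncertainty path: proactively request human intervention,
    # letting the reviewer approve or replace the proposed action.
    approved_action = human_review(decision)
    return ("human_reviewed", approved_action)

# Usage: a stand-in reviewer that overrides a low-confidence action.
mode, action = execute_with_trust_calibration(
    AgentDecision(action="reroute flight", confidence=0.55),
    human_review=lambda d: "hold pattern",
)
```

The key design point is that the agent itself initiates the handoff, so the human only intervenes on the high-uncertainty subset of tasks rather than micromanaging every decision.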