

Human-Agent Teaming: Trust Calibration in Semi-Autonomous Agentic AI

Michael C. Brooks · Anita Desai · Francis Nguyen

ABSTRACT

As Agentic AI systems assume more responsibility in high-stakes domains, calibrating human trust remains a critical challenge. Under-trust leads to inefficient micromanagement, while over-trust results in catastrophic failures when agents hallucinate. This paper introduces the 'Dynamic Trust Calibration' (DTC) framework, which models human-agent teaming as a cooperative game. By forcing agents to explicitly quantify their confidence levels and proactively request human-in-the-loop intervention during high-uncertainty tasks, the DTC framework improves joint task performance by 38% in simulated air-traffic control and medical diagnostics scenarios.
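The core mechanism the abstract describes — an agent that quantifies its own confidence and proactively hands off to a human when uncertainty is high — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the `calibrated_step` function, the `AgentDecision` structure, and the 0.8 threshold are all hypothetical (the paper does not specify a cutoff).

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff; not specified in the paper

@dataclass
class AgentDecision:
    action: str
    confidence: float  # agent's self-reported confidence in [0, 1]

def calibrated_step(decision: AgentDecision, ask_human) -> str:
    """Act autonomously when confident; otherwise request human intervention."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return decision.action
    # High-uncertainty task: surface the uncertainty and escalate to the human
    return ask_human(decision)

# Usage: a mock human reviewer that overrides low-confidence actions
override = lambda d: f"human-reviewed:{d.action}"
print(calibrated_step(AgentDecision("reroute flight", 0.95), override))  # acts autonomously
print(calibrated_step(AgentDecision("diagnose lesion", 0.40), override))  # escalates
```

The point of the cooperative-game framing is that both failure modes in the abstract map onto this one branch: a threshold set too high reproduces under-trust (constant escalation), while one set too low reproduces over-trust (unchecked hallucinations).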

Review Snapshot

4.6 / 5 average from 5 ratings (5★: 60% · 4★: 40% · 3★: 0% · 2★: 0% · 1★: 0%). 100% of reviewers recommend this content.
