
Quick answer

This paper introduces a 'Minimal Explanation Packet' framework to audit the long-term trajectories of autonomous agents. By shifting the focus from input features to reasoning traces and tool calls, the authors provide a more reliable way to diagnose failure modes in multi-step workflows.


From Features to Actions: Explainability in Traditional and Agentic AI Systems

Authors
Moritz Miller · Florent Draye · Bernhard Schölkopf

ABSTRACT

This paper distinguishes between two paradigms in AI explanation: static prediction and agentic trajectories. In agentic systems, behavior emerges as a sequence of observations, reasoning steps, and actions, requiring a shift from feature-based attributions to trajectory-level explanations. We introduce the Minimal Explanation Packet (MEP) framework to provide verifiable reasoning traces for autonomous agents. This architecture separates execution from verification, allowing for deep auditing of tool calls and state updates in high-stakes environments.
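The abstract's core idea, separating an agent's execution from its later verification via a trajectory-level record, can be sketched as a data structure. The schema below is an illustrative assumption, not the paper's actual MEP definition: step fields and the hash-chain check are hypothetical, chosen only to show how a reasoning trace with tool calls could be made auditable after the fact.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Step:
    """One trajectory step: what the agent observed, reasoned, and did.
    Field names are illustrative, not the paper's MEP schema."""
    observation: str
    reasoning: str
    tool_call: str
    prev_digest: str  # digest of the preceding step, chaining the trace

    def digest(self) -> str:
        # Canonical JSON so the digest is stable across runs.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def record_step(trace: list[Step],
                observation: str, reasoning: str, tool_call: str) -> None:
    """Execution side: append a step linked to its predecessor's digest."""
    prev = trace[-1].digest() if trace else ""
    trace.append(Step(observation, reasoning, tool_call, prev))

def verify(trace: list[Step]) -> bool:
    """Verification side: each step must reference its predecessor's
    digest, so earlier steps cannot be edited without detection."""
    prev = ""
    for step in trace:
        if step.prev_digest != prev:
            return False
        prev = step.digest()
    return True
```

An auditor holding only the packet can replay `verify` without access to the agent itself, which mirrors the execution/verification split the abstract describes.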

Review Snapshot


4.3 ★★★★ (6 ratings)

5 star: 50% · 4 star: 33% · 3 star: 17% · 2 star: 0% · 1 star: 0%

Recommendation

83% recommend this content.

