
Quick answer

TL;DR: treat agent deployments as security exposures and implement controls accordingly; assume both active adversaries and misaligned agent behavior.

Claim

AI Agents Act a Lot Like Malware. Here’s How to Contain the Risks.

Andrew Burt

ABSTRACT

This HBR article takes a governance-driven perspective on agentic AI risk, likening certain agent behaviors and threat models to malware. The core problem it addresses is organizational: how to contain agent risk with policies, controls, and oversight. The approach is analogy-driven but policy-relevant, translating technical concerns into actionable risk-containment steps.
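The article does not prescribe a specific mechanism, but the containment idea can be illustrated with a minimal sketch: gate every agent tool call through an allowlist and keep an audit log, much as malware is contained by restricting what a process may touch. All names below (guarded_call, ALLOWED_TOOLS, read_report, delete_file) are hypothetical and chosen only for illustration.

import logging
from typing import Any, Callable, Dict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guard")

# Hypothetical tools an agent might request.
def read_report(path: str) -> str:
    return f"(contents of {path})"

def delete_file(path: str) -> str:
    return f"deleted {path}"

# Allowlist: only explicitly permitted, low-risk tools are callable.
ALLOWED_TOOLS: Dict[str, Callable[..., Any]] = {
    "read_report": read_report,
    # "delete_file" is deliberately absent: destructive actions are denied.
}

def guarded_call(tool_name: str, **kwargs: Any) -> Any:
    """Execute a tool only if it is on the allowlist; log every attempt."""
    if tool_name not in ALLOWED_TOOLS:
        log.warning("BLOCKED tool call: %s(%r)", tool_name, kwargs)
        raise PermissionError(f"tool '{tool_name}' is not permitted")
    log.info("ALLOWED tool call: %s(%r)", tool_name, kwargs)
    return ALLOWED_TOOLS[tool_name](**kwargs)

if __name__ == "__main__":
    print(guarded_call("read_report", path="q3.txt"))  # permitted, logged
    try:
        guarded_call("delete_file", path="q3.txt")      # blocked, logged
    except PermissionError as err:
        print(err)

The design choice here mirrors the malware analogy: default-deny with an explicit allowlist and an audit trail, rather than trusting the agent's own judgment about which actions are safe.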

Review Snapshot


4.8 out of 5 (★★★★★), based on 5 ratings
5 star: 80%
4 star: 20%
3 star: 0%
2 star: 0%
1 star: 0%

Recommendation

100% of reviewers recommend this content.

