Vuong Nguyen

Why AI Agents Can't Make Decisions Without Context



Everyone talks about AI agents as if actions are the hard part. They focus on tools, APIs, and execution. But in the real world, execution is the easy piece. The hard part is knowing which action to take, when, and why.

An agent can click any button, call any API, or move any asset. But that doesn’t mean it understands the consequences. Execution isn’t intelligence. Decision-making is. Markets don’t punish slow agents. They punish agents that make the wrong decisions.

Finance exposes this more brutally than any other domain.

The Lagos Problem

An AI agent needs to route a $75,000 corporate payment from Lagos to London. The payment is due in 6 hours. Three rails are available.

The stablecoin corridor costs 1.9% ($1,425), settles in 12 minutes, runs 24/7, and is compliant under $100,000. Tokenized deposits cost 0.7% ($525), settle in 3 hours, but only operate during Lagos banking hours and require a pre-approved counterparty. SWIFT states a 0.5% fee, but the actual all-in cost reaches 2.1% ($1,575) after FX spreads and correspondent markups, settles in 2-3 business days, and requires additional documentation for Nigeria-UK payments.

To a human treasury analyst, the trade-offs are obvious. Is it before 5pm Lagos time? Is the recipient pre-approved? Is the payment urgent? Is SWIFT fast enough for this deadline? Has this corridor been flagged for slowdowns today?

To an agent, all three rails are just API calls. Without context, every route looks equally valid.

The agent can execute the payment. It can’t decide the payment. And in finance, a wrong decision is always more expensive than a slow one.
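To make the trade-offs concrete, the three rails can be sketched as data using the figures above. The structure and names here are illustrative assumptions, not a real payments API:

```python
from dataclasses import dataclass

@dataclass
class Rail:
    name: str
    all_in_fee_pct: float  # effective cost incl. FX spreads and correspondent markups
    settle_minutes: int    # typical settlement time

AMOUNT = 75_000
DEADLINE_MINUTES = 6 * 60  # payment due in 6 hours

rails = [
    Rail("stablecoin", 1.9, 12),
    Rail("tokenized_deposit", 0.7, 3 * 60),
    Rail("swift", 2.1, 2 * 24 * 60),  # 0.5% headline fee, ~2.1% all-in; 2-3 business days
]

for r in rails:
    fee = AMOUNT * r.all_in_fee_pct / 100
    meets_deadline = r.settle_minutes <= DEADLINE_MINUTES
    print(f"{r.name}: ${fee:,.0f} all-in, settles in {r.settle_minutes} min, "
          f"meets deadline: {meets_deadline}")
```

On these numbers, tokenized deposits are cheapest ($525) and SWIFT misses the deadline outright; the fee alone says nothing about whether a rail is actually usable for this payment.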

This is why AI breaks down in live financial workflows. Tools are abundant. Actions are cheap. Execution is trivial. The missing piece is structured, machine-readable context to tell the agent what “good” looks like.

Tools Don’t Create Decisions

Agents today behave like interns with access to every button in the system. They can call an exchange API, run a settlement instruction, generate paperwork, or fetch market pricing.

But nothing in the tool stack tells them when a tool should be used, under which conditions it’s allowed, how to weigh cost versus speed, whether a workflow is complete, or what the organization actually prioritizes.

That requires context, not tools.

RAND Corporation found that over 80% of AI projects fail, twice the failure rate of non-AI technology projects. The gap isn’t capability. It’s decision structure.

Finance has the highest density of decisions per action. A typical capital movement process includes liquidity checks, currency controls, counterparty rules, timing requirements, documentation constraints, compliance workflows, real-time pricing, and banking-hour limitations. Even a “simple” $10M FX trade can require 20-40 decisions before execution.

Atomic settlement solves the timing problem. Programmable money solves the execution problem. But neither solves the decision problem.

What Context Actually Is

Retrieval-Augmented Generation (RAG) can surface information. But agents don’t fail because they lack information. They fail because they lack decision structure.

RAG can tell an agent that SWIFT fees are 0.5%, that stablecoin settlement is faster, and that tokenized deposits require whitelisting. But RAG can’t tell an agent the actual SWIFT cost after invisible FX spreads, whether the counterparty is already approved, that tokenized deposits shut down at 5pm, that stablecoin corridor liquidity is down 4% today, or that the CFO prefers cost savings over speed this quarter.

This isn’t retrieval. It’s decision logic.

Context isn’t a prompt. It’s structured operational reality, encoded so machines can evaluate decisions the way humans do.

A real context layer contains rule engines that evaluate actions against policies and constraints, workflow graphs that map allowed sequences, permissioning layers that encode who can do what with which tools, state machines that track the lifecycle of decisions, priority queues for resolving conflicts between competing rules, and business logic tied to cost, timing, compliance, and urgency.

For a Lagos payment, a context layer tells the agent: this vendor isn’t whitelisted for tokenized deposits, it’s 4:12pm local time and the banking window closes in 48 minutes, stablecoin corridor liquidity is down 4% today, company policy prioritizes cost over speed for transfers below $100,000, and SWIFT won’t settle by deadline.

Only then can the agent choose the correct rail.
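A minimal sketch of that decision, with the context facts above encoded as hard constraints and the company's cost-versus-speed policy as the tie-breaker. The rule structure and field names are illustrative assumptions, not a real context engine:

```python
from dataclasses import dataclass

@dataclass
class Rail:
    name: str
    all_in_fee_pct: float
    settle_minutes: int
    needs_whitelist: bool = False
    banking_hours_only: bool = False

@dataclass
class Context:
    amount: float
    deadline_minutes: int
    vendor_whitelisted: bool
    banking_window_minutes: int  # time left in the local banking window
    prefer_cost_below: float     # policy: prioritize cost for transfers under this amount

def eligible(rail: Rail, ctx: Context) -> bool:
    """Hard constraints: any violation removes the rail outright."""
    if rail.settle_minutes > ctx.deadline_minutes:
        return False  # misses the payment deadline
    if rail.needs_whitelist and not ctx.vendor_whitelisted:
        return False  # counterparty not pre-approved
    if rail.banking_hours_only and rail.settle_minutes > ctx.banking_window_minutes:
        return False  # can't settle before the banking window closes
    return True

def choose_rail(rails: list[Rail], ctx: Context) -> Rail:
    candidates = [r for r in rails if eligible(r, ctx)]
    if ctx.amount < ctx.prefer_cost_below:
        return min(candidates, key=lambda r: r.all_in_fee_pct)  # policy: cost first
    return min(candidates, key=lambda r: r.settle_minutes)      # otherwise: speed first

rails = [
    Rail("stablecoin", 1.9, 12),
    Rail("tokenized_deposit", 0.7, 3 * 60, needs_whitelist=True, banking_hours_only=True),
    Rail("swift", 2.1, 2 * 24 * 60),
]

ctx = Context(amount=75_000, deadline_minutes=6 * 60,
              vendor_whitelisted=False, banking_window_minutes=48,
              prefer_cost_below=100_000)

print(choose_rail(rails, ctx).name)  # stablecoin: the only rail that survives the constraints
```

Note that the cheapest rail on paper (tokenized deposits) never reaches the preference step: the whitelist constraint eliminates it first. That is the shape of the gap; the constraints, not the fee table, carry the decision.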

Beyond Finance

The same failure mode shows up everywhere. In customer support, agents can escalate tickets but don’t know issue severity, customer value, or churn risk. In supply chain, agents can reorder inventory but don’t know seasonal patterns, supplier reliability, or cash position. In healthcare, agents can suggest treatments but don’t know contraindications, insurance constraints, or facility capabilities.

Finance reveals the problem with higher stakes and clearer math. Every other industry carries the same structural gap.

Context as Infrastructure

We spent decades improving predictions. Then we improved tools. Now every agent can act. But action without context is blind.

Context is the layer that turns instructions into decisions. It defines boundaries, encodes preferences, enforces rules, resolves conflicts, evaluates conditions, and shapes outcomes. It transforms agents from task runners into decision systems.

This is the infrastructure layer missing from AI today. Atomic settlement shows the gap clearly. Once execution becomes perfectly synchronized, decision quality becomes the only bottleneck left.

AI won’t break because it lacks tools. It will break because it lacks context.

If agents can do everything, what tells them what they should do?