Why AI Can’t Predict the Crash

In a lecture that defied tech evangelism, investment technologist Joseph Plazo challenged a roomful of brilliant young minds with a message few on Wall Street are willing to confront: AI still can’t grasp the full arc of human judgment.

**MANILA —** On an oppressive Thursday morning in the wood-paneled halls of the Asian Institute of Management, Plazo opted for clarity over hype. His audience—a curated gathering from NUS, Kyoto, and HKUST—came expecting an ode to artificial intelligence in finance.

Instead, they received a necessary reckoning.

“AI is like your smartest intern,” he said, half-joking. “But you still don’t hand the intern the vault keys.”

Laughter rippled. And then a pause. Because he wasn’t joking.

### A Technologist Questions the Hype He Helped Build

Plazo isn’t an outsider to this world—he’s part of the architecture. His firm, Plazo Sullivan Roche Capital, designs some of the most widely used trading AIs globally. But that proximity to power makes his critique all the more potent.

“The problem isn’t the tech,” he said. “It’s our expectation that it will save us from the weight of responsibility.”

Plazo offered real-world case studies of AIs that, on paper, flagged perfect trades, only to be undone by something no algorithm could foresee: a shift in public sentiment.

Context, he argued, remains the province of people.

### The Challenge from the Young—Met by Experience

One Kyoto student asked whether LLMs could model global mood.

Plazo didn’t hesitate.

“AI can detect outrage in a tweetstorm,” he said. “But it can’t register moral weight in a leader’s voice.”

A shared understanding followed.

Another student asked if AI might simulate conviction.

“Conviction,” Plazo replied, “isn’t data. It’s the bruises of being wrong—and surviving. It’s knowing when *not* to act.”

You can’t upload that.

### Beyond Algorithms: A Call to Grow Up

Many students—confident in their tools—admitted to viewing AI as a workaround: a way to evade risk and bypass emotion. Plazo challenged that notion.

“You can streamline your trading logic. But never your ethics.”

It struck a chord.

Because whether they wore suits or sandals, most in that room shared one goal: success. But Plazo asked a deeper question—*at what cost?*

### This Wasn’t Techlash—It Was Tech Maturity

Plazo was not anti-AI. He enumerated its strengths:

- Filtering massive noise
- Identifying technical patterns at scale
- Stress-testing portfolios in seconds

But he also listed its limits—starkly.

It can’t detect sarcasm. It can’t weigh political nuance. And it doesn’t know that your retirement plan may hang in the balance.

“If the algorithm fails,” he asked, “will you take responsibility? Or just blame the machine?”

The room was quiet. That quiet held meaning.

### The Human Remains the Final Arbiter

What emerged wasn’t a rejection of AI, but a reminder of its place.

Plazo described tools he’s building that consider misinformation, psychological factors—even geopolitical instability. But his parting truth was unambiguous:

“No machine can tell you when *not* to act. That’s a human burden.”

### Maybe the Future Doesn’t Need More AI—But Better Humans

As the crowd dispersed—some thoughtful, some rattled—one phrase echoed in the corridors:

“AI doesn’t know your values. So don’t let it make your decisions.”

In an age obsessed with speed and prediction, Plazo offered something radical:

Judgment.

Because in the end, investing isn’t about beating the market.

It’s about remembering *why* you entered the arena in the first place.
