Paul Luckey, Product Architect

Why Everything Feels Like a Chatbot

March 15, 2026

I've been building AI experiences for my portfolio and I keep running into the same problem: everything gives chatbot vibes.

At first I thought it was the streaming text. We've been watching ChatGPT stream markdown at us for years. The visual pattern — text appearing word by word, scrolling down — has become shorthand for "I am talking to an AI." So I avoided it. I built structured dashboards with discrete values: entity panels, classification scores, relationship graphs. No streaming narrative at all.

I watched the data populate and felt disappointed. I was still dealing with a chatbot, except this one was pretending it wasn't, which felt even worse.

The uncanny valley of AI interfaces

There's a familiar spectrum in robotics: the uncanny valley. As a robot becomes more human-like, our comfort increases — until it gets close enough to be almost human, at which point comfort plummets. We're fine with obviously mechanical robots. We're fine with actual humans. It's the in-between that disturbs.

AI interfaces have their own version. An honest chatbot — a text box where you type and it replies — is fine. We know what it is. A genuine instrument — a dashboard that acquires and displays intelligence without pretending to converse — is also fine. But the in-between — an AI system that's sophisticated enough to feel conversational but presents itself as something else — creates a specific kind of cognitive dissonance.

The chatbot I had dressed up in a dashboard UI fell squarely into the uncanny valley. The structured panels didn't change what it was: language model output, repackaged. Users sensed the pretense even if they couldn't articulate it.

Language circles truth

Wittgenstein observed that language circles truth but never lands on it. Words point toward meaning but can't contain it. This is the condition LLMs are built on: they operate entirely in language, generating text that approximates understanding without possessing it.

The chatbot feeling isn't about the UI pattern. It's about this fundamental property of language-based systems. No matter how you present the output — streaming text, structured panels, populating dashboards — the underlying medium is language, and language approximates. Users feel this. They may not have the vocabulary for it, but they sense that the system is performing understanding rather than having it.

This matters beyond UX. How an AI system presents itself shapes how people relate to it — whether they trust it, how they use it, how much authority they grant its outputs. An interface that reveals its nature honestly (I am a language model processing your input) creates one kind of relationship. An interface that obscures its nature (I am an intelligent dashboard analyzing your data) creates another. The second isn't necessarily better. Sometimes it's worse.

What I changed

The fix, when I found it, was counterintuitive: don't hide the AI. Don't dress it up. Instead, design the interaction so the human's contribution is the protagonist and the AI's contribution is the atmosphere. I call this ambient intelligence — AI that surrounds the decision-maker with context rather than presenting itself as the decision-maker.

The conversational frame says: "I analyzed your input and here's what I think." The ambient frame says: "Here's what's around you. You decide."

The chatbot vibes disappeared. Not because the technology changed, but because the relationship between user and system changed. The magic was never in the tool. It was in the design of the relationship.