AI does not equal “more digital”#
Customer interaction has never been more digital. The AI hype of recent years frames it predominantly as a technological revolution, and the connection between customer interaction and AI seems obvious. But look at vendors’ marketing: what they mean by “high-quality AI” is usually not the technology itself (remember megapixels?), but how indistinguishable it is from us. Some examples of recent copywriting:
> The world’s most realistic & expressive voice AI
>
> — hume.ai

> Chat freely, interrupt, and ask follow-up questions, just like you would with a friend.
>
> — store.google.com

> A supportive and empathetic conversational AI.
>
> — pi.ai
In the world of apps, forms, and online services, this is a welcome change. AI is the most human technology of all. In certain contexts, conversational AI being the prime example, we want it exactly this way. The less visible the technology is, the better: there is nothing new to learn, and we forget it is an intermediary between our intentions and our goals.
We have known we wanted to get there since before the LLM revolution. Serendipity is a studied attribute of recommender systems1. Variable rewards are an important component of social engagement2.
Human mistakes#
But we do not praise AI for being “human-like” in every context. There is no room for spontaneity in regulatory reporting and limited tolerance for magnanimity when filing a claim. Compare how differently the same trait can be perceived, depending on who the actor is:
| Human | AI |
|---|---|
| wisdom | bias |
| spontaneity | instability |
| magnanimity | data loss |
| thoughtfulness | latency |
| imagination | hallucination |
| seniority | obsolescence |
Research in human-computer interaction and organizational psychology reveals different expectations towards technology and humans3:
- Technology is expected to reduce uncertainty. We position technology as a way to eliminate variance. We want a calculator or a GPS to be predictable. When AI mimics human unpredictability, we experience “Algorithm Aversion”4; we judge machines much more harshly for a single mistake than we do a human for the same error.
- Humans are expected to manage ambiguity. We position humans as superior in contexts requiring discovery of meaning and handling chaos. In creative or emotional scenarios, variability in judgment is framed as a feature, not a bug, because it allows for empathy and “reading the room” - skills that require deviating from a script.
So… what?#
Software engineering has brought us to a point of free choice between how human-flexible or algorithmically rigid our systems are. Agentic frameworks let us mix and match rigid if/else branches with human-like LLM decision points, and even defer to the AI “brain” the choice of which of the two to use and when. Agentic systems are yet to mature and LLMs are not 100% human-like, but the choice is there.
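The mix-and-match idea can be sketched in a few lines. This is a minimal illustration, not any particular framework’s API: `llm_judge` is a hypothetical stand-in for a real model call, and the ticket fields and route names are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class Ticket:
    amount: float  # claimed amount, in some currency
    text: str      # free-form customer message


def llm_judge(text: str) -> str:
    """Hypothetical stand-in for an LLM call.

    A real agent would prompt a model to classify the message's tone;
    here a keyword check fakes that judgment so the sketch runs offline.
    """
    return "empathetic" if "frustrated" in text.lower() else "standard"


def route(ticket: Ticket) -> str:
    # Rigid, algorithmic decision point: a hard regulatory threshold
    # where we want zero variance, like a calculator or a GPS.
    if ticket.amount > 10_000:
        return "compliance_review"
    # Human-like decision point: defer the ambiguous tone judgment
    # to the model, accepting some variability in exchange for empathy.
    if llm_judge(ticket.text) == "empathetic":
        return "empathetic_reply"
    return "auto_reply"
```

The design choice is the point: the threshold branch is deliberately kept out of the model’s hands, while the tone branch is deliberately handed to it. A third layer, not shown here, could let the model decide which of the two branches applies at all.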
And I think it is a very exciting choice, because it calls for a new measure to apply to virtually everything software touches: a “human factor”. The “uncanny valley index” and “sensibleness” are early attempts at it, but I think we will develop something more applicable and better aligned.
Whatever the measure turns out to be, agentic AI gives us everything we need to consciously design human attributes into our systems, in both client-facing and internal, even back-end, processes. When organizations shake off the AI shock, the next step will be to introduce this conscious design and measures to track it. It is going to have a tangible impact4.
