Three Things Your AI Chatbot Does Poorly (or Not at All)
We recently met with a VP of AI at a large Fortune 500 company all of you would know. He built a public-facing AI chatbot on AWS Bedrock that is now generally available (GA).
To summarize without using his exact words:
“It answered perfectly. But what it missed were the more human elements — the pauses, the laughter, the context, the curiosity, the memories, the unspoken understanding of another soul.”
The AI chatbot runs on their website and in a mobile app. It serves domain-specific answers to questions that would otherwise require making an appointment, waiting days if not weeks, and paying a professional with a six-figure salary.
If you don't create an account and complete the intake form, you get very generic answers. If you do, it then "personalizes" answers based on your intake form.
Sounds pretty good, right? But to him, this is just scratching the surface.
Three key shortfalls that aren't obvious at first glance:
His current solution doesn't learn and adapt. Beyond the specific information captured during the intake step, it doesn't really listen and take note of details the end user provides over the course of a conversation. Not everyone remembers to supply all the relevant information and context at the start of a conversation; that isn't very human at all. Ideas, history, and relevant information pop into our minds during the creative process of conversation.
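Here's a minimal sketch (ours, not the VP's stack and not Intelligent Memory's actual API) of what mid-conversation learning can look like: an extraction pass runs over each user turn and banks durable facts for future prompts. `llm_call` and the prompt wording are assumptions for illustration.

```python
import json

# Hypothetical in-memory store; a real system would persist this per user.
user_facts: dict[str, list[str]] = {}

EXTRACT_PROMPT = (
    "From the user's message below, list any durable facts worth "
    "remembering (preferences, history, constraints) as a JSON array "
    "of short strings. Return [] if there are none.\n\nMessage: {msg}"
)

def capture_facts(user_id: str, message: str, llm_call) -> list[str]:
    """Run an extraction pass over one user turn and bank the results.

    `llm_call` is an assumed callable (prompt string -> completion string);
    swap in a Bedrock, OpenAI, or any other client.
    """
    raw = llm_call(EXTRACT_PROMPT.format(msg=message))
    try:
        facts = json.loads(raw)
    except json.JSONDecodeError:
        facts = []  # tolerate a malformed completion
    if not isinstance(facts, list):
        facts = []
    user_facts.setdefault(user_id, []).extend(facts)
    return facts

def build_system_prompt(user_id: str) -> str:
    """Prepend everything learned so far to the next request."""
    known = "\n".join(f"- {f}" for f in user_facts.get(user_id, []))
    return "You are a helpful assistant. Known about this user:\n" + known
```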
His current solution has the memory of a goldfish. Actually worse, since at least a fish can remember the last three seconds. It doesn't remember details from past conversations or reason over them as context to improve its responses.
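A toy sketch of cross-conversation recall: past turns are stored with timestamps, and the most relevant ones are pulled into context for a new question. Word overlap stands in for the vector similarity search a production system would use, and every name here is illustrative.

```python
from datetime import datetime

history: list[dict] = []  # one entry per past user/assistant turn

def remember(user_id: str, role: str, text: str) -> None:
    history.append({"user": user_id, "role": role, "text": text,
                    "at": datetime.now()})

def recall(user_id: str, query: str, k: int = 3) -> list[str]:
    """Return the k past turns most similar to the new query."""
    q = set(query.lower().split())
    mine = [h for h in history if h["user"] == user_id]
    scored = sorted(mine,
                    key=lambda h: len(q & set(h["text"].lower().split())),
                    reverse=True)
    return [h["text"] for h in scored[:k]]

# Usage: fold recalled turns into the prompt before answering.
remember("bob", "user", "My daughter has a peanut allergy.")
print(recall("bob", "Any snack ideas for my daughter?"))
```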
His current solution ignores the Sequence and Timing of User-Provided Information. Bob could be asking about chemical compounds over the course of days and weeks to make a bomb or a drug and the LLM wouldn’t be able to stitch together the pieces to know any better. Mary could be exhibiting early signs of dementia or a heart attack and the LLM also wouldn’t be able to see the pattern. John could be providing information over the course of multiple hours and days that are conflicting and the LLM will happily answer without stopping to clarify and highlight inconsistencies. Trained professionals would be able to see and detect patterns like this.
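One hedged way to attack this, sketched below: periodically replay a user's timestamped history through an LLM "reviewer" pass that looks for patterns no single turn reveals. The reviewer prompt and `llm_call` signature are assumptions for illustration, not a real API.

```python
from datetime import datetime

# Illustrative reviewer prompt; a real deployment would tune this carefully.
REVIEW_PROMPT = """You are reviewing one user's questions in order, with
timestamps. Flag: (1) combinations of requests that suggest harmful intent,
(2) health patterns a trained professional would notice, (3) contradictions
between turns that deserve a clarifying question. Reply 'OK' or describe
each concern briefly.

{timeline}"""

def review_timeline(turns: list[tuple[datetime, str]], llm_call) -> str:
    """turns: (timestamp, user message) pairs spanning hours or days."""
    timeline = "\n".join(f"[{ts.isoformat()}] {msg}" for ts, msg in turns)
    return llm_call(REVIEW_PROMPT.format(timeline=timeline))
```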
We built a software tool that lets AI developers building chatbots easily customize, program, and deploy "features" that help LLMs overcome all three shortfalls.
It's called Intelligent Memory. Using it, we quickly built a demo for this VP of AI, and we'll meet with him soon to get his feedback. Fingers crossed 🤓.
Intelligent Memory also lets users take control of their own AI "profile" or "fingerprint," so any LLM they use can quickly adapt its responses to be more personal and customized to their preferences. This addresses the emerging AI agent data and metadata silos that are quickly becoming a limiting factor in UX and effectiveness.
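To make the "profile" idea concrete, here is a minimal sketch of a portable, user-owned profile serialized to JSON. The schema is invented for illustration and is not Intelligent Memory's actual format.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class UserProfile:
    preferences: dict = field(default_factory=dict)
    facts: list = field(default_factory=list)

    def export(self) -> str:
        return json.dumps(asdict(self), indent=2)

    @classmethod
    def load(cls, blob: str) -> "UserProfile":
        return cls(**json.loads(blob))

# The same blob could seed any LLM's system prompt, not just one vendor's:
profile = UserProfile(preferences={"tone": "concise"},
                      facts=["vegetarian", "based in Denver"])
blob = profile.export()        # the user keeps and carries this
print(UserProfile.load(blob))  # any agent re-imports it
```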
AI agent memory and personalization is a fast-moving target, and I don't claim to have all the answers, but if you'd like to learn more or collaborate, DM me and let's chat!