feels</em> about their experience, typically by asking whether they were a detractor, passive, or promoter after an interaction. But fewer than 1% of chatters complete the survey. And even if a customer rates a chat positively, that doesn’t necessarily mean SalesBot was providing a quality chat experience.</p>
So we built a custom quality rubric with our top-performing ISCs to define what “good” actually looks like. The rubric measures factors like discovery depth, next steps, tone, and accuracy.</p>
This year alone, a team of 13 evaluators manually reviewed more than 3,000 sales conversations. That human QA loop is critical. It keeps our AI grounded in real-world selling behavior and helps us continuously improve performance.</p>
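In practice, a rubric like this can be applied as a weighted checklist. Here’s a minimal sketch in Python — the dimension names come from the rubric above, but the weights and the 0–1 scoring scale are illustrative assumptions, not our actual scoring formula:</p>

```python
# Hypothetical weights: the rubric names these dimensions, but the
# weighting and the 0.0-1.0 rating scale are illustrative assumptions.
RUBRIC_WEIGHTS = {
    "discovery_depth": 0.3,
    "next_steps": 0.3,
    "tone": 0.2,
    "accuracy": 0.2,
}

def score_conversation(ratings: dict) -> float:
    """Return a weighted 0-1 quality score for one reviewed conversation."""
    return sum(w * ratings.get(dim, 0.0) for dim, w in RUBRIC_WEIGHTS.items())

# One evaluator's ratings for a single chat (illustrative values).
example = {"discovery_depth": 1.0, "next_steps": 0.5, "tone": 1.0, "accuracy": 0.75}
print(score_conversation(example))
```

Averaging scores like this across thousands of reviewed chats is what turns individual evaluator judgments into a trendable quality metric.</p>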
5. Scale globally to boost efficiencies.</h3>
Before AI, staffing live chat in seven languages was one of our biggest operational challenges. It was costly, inconsistent, and hard to scale.</p>
Now, we can handle multilingual conversations around the world, providing a consistent experience no matter where someone’s chatting from. That’s not just an efficiency win — it’s a customer experience upgrade.</p>
AI has given us true global coverage without overextending our team, unlocking growth in regions where headcount simply couldn’t keep up.</p>
6. Build the right team structure.</h3>
Success didn’t happen because of one person or team — it happened because a group of smart, customer-driven builders came together across Conversational Marketing and Marketing Technology AI Engineering.</p>
Conversational Marketing owned the strategy, user experience, and quality assurance, always grounding decisions in what would deliver the best experience for our customers. Our AI Engineering partners in Marketing Technology built the models, prompts, and infrastructure that made those ideas real — fast.</p>
Together, we formed a unified working group with shared goals, a common backlog, and a rhythm of weekly experimentation. That mix of deep customer empathy and technical excellence let us move like a product team — testing, learning, and improving SalesBot with every release.</p>
7. Approach automation with a product mindset.</h3>
The biggest unlock in our journey was embracing a product mindset. SalesBot wasn’t a one-off automation project. It’s a living product that evolves with every iteration.</p>
Over the past two years, we’ve moved from rule-based bots to a retrieval-augmented generation (RAG) system, upgraded our models to GPT-4.1, and added smarter qualification and product-pitching capabilities.</p>
Those upgrades doubled response speed, improved accuracy, and lifted our qualified lead conversion rate from 3% to 5%.</p>
We didn’t get there overnight. It took hundreds of iterations and a culture that treats AI experimentation as a core part of the go-to-market motion.</p>
8. Humans still matter.</h3>
Even with all this progress, some things still require a human touch. Today, SalesBot can’t build custom quotes, handle complex objections, or replicate empathy in nuanced conversations — and that’s okay. We’ll keep expanding its capabilities, but human oversight will always be essential to maintaining quality.</p>
Our agents and subject matter experts play a core role in our success. They evaluate outputs, provide feedback, and ensure the system continues to learn and improve. Their judgment defines what “good” looks like and keeps our standard of quality high as the technology evolves.</p>
AI’s role is to scale reach and speed — not to replace human connection. Our ISCs now focus on higher-value programs and edge cases where their expertise truly shines. The goal isn’t fewer humans — it’s smarter, more impactful use of their time.</p>
9. Give your model structure, not just more data.</h3>
When we first built SalesBot, it ran on a simple rules-based system — X action triggers Y response. It worked for basic logic, but it didn’t sound like a salesperson. We wanted something that felt closer to an ISC: conversational, confident, and helpful.</p>
To get there, we experimented with fine-tuning. We exported thousands of chat transcripts and had ISCs annotate them for tone, accuracy, and phrasing. Training the model on these examples made it sound more natural, but accuracy dropped. We learned the hard way that too much unstructured human data can actually degrade model performance. The model starts remembering the “edges” of what it sees and blurring everything in between.</p>
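That early rules-based approach is easy to picture in code: a keyword match triggers a canned reply. Here’s a minimal sketch — the trigger keywords and replies are hypothetical, not the actual bot’s rules:</p>

```python
# Minimal rules-based responder: "X action triggers Y response."
# Triggers and replies below are illustrative placeholders.
RULES = [
    ({"pricing", "cost", "price"}, "Happy to walk you through pricing. What size is your team?"),
    ({"demo", "trial"}, "I can set you up with a demo. What's the best email for you?"),
]
FALLBACK = "Thanks for reaching out! What are you hoping to accomplish?"

def respond(message: str) -> str:
    words = set(message.lower().split())
    for triggers, reply in RULES:
        if words & triggers:  # any trigger keyword present in the message
            return reply
    return FALLBACK
```

The limitation shows up immediately: anything outside the keyword list falls through to the generic fallback, which is part of why it never sounded like a salesperson.</p>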
So, we pivoted. Instead of giving the model <em>more</em> data, we gave it a <em>better</em> structure. We moved to a retrieval-augmented generation (RAG) setup, grounding the tool in real-time context and teaching it when to pull from knowledge sources, tools, and CRM data.</p>
The result is a bot that’s significantly more reliable in complex sales conversations and far better at identifying intent.</p>
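Conceptually, a RAG setup retrieves relevant context first and grounds the model’s reply in it. Here’s a toy sketch assuming a simple word-overlap retriever over an illustrative two-entry knowledge base; a production system would use embeddings, an LLM call, and the tool and CRM integrations described above:</p>

```python
import re

# Toy knowledge base; entries are illustrative, not real documentation.
KNOWLEDGE = [
    "SalesBot supports live chat in seven languages.",
    "Qualified leads are routed to an inbound sales consultant (ISC).",
]

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, docs: list[str]) -> str:
    # Pick the document with the largest word overlap with the query.
    return max(docs, key=lambda d: len(tokens(query) & tokens(d)))

def build_prompt(query: str) -> str:
    # Ground the model in retrieved context instead of fine-tuned memory.
    context = retrieve(query, KNOWLEDGE)
    return f"Context: {context}\nQuestion: {query}"

print(build_prompt("How many languages does live chat support?"))
```

The key design choice is that facts live in retrievable sources rather than in the model’s weights, so updating the knowledge base updates the bot without retraining.</p>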
How to Get Started Building an AI Chat Program</h2>
If you’re just getting started, the biggest misconception is that you can jump straight into AI. In reality, AI only succeeds when the foundation beneath it is strong. Looking back at our journey, these three principles mattered the most.</p>
1. Build the foundation before you automate.</strong></h3>
AI is only as good as the human program it learns from. Before we automated anything, we had years of real conversations handled by skilled chat agents. That live chat foundation gave us:</p>