Introduction

Thoughtly’s generative, conversational AI is a powerful tool for enhancing customer experience, moving beyond traditional intent-based limitations. By combining state-of-the-art LLMs with advanced voice synthesis powered by transformer-based voice AI models, Thoughtly creates interactions that are adaptive, realistic, and remarkably effective. With thoughtful training and continuous refinement, your Thoughtly Voice Agent will deliver outstanding, human-like conversations that transform your customer interactions.

Instead of having to pre-program every possible interaction, Thoughtly’s AI learns from vast datasets to understand and generate language dynamically. This allows it to handle a wide range of queries, adapt to conversational nuances, and provide responses that feel natural and engaging. While achieving 100% predictability is statistically improbable due to the probabilistic nature of generative systems, Thoughtly’s AI offers a highly effective and flexible solution for delivering exceptional customer service.

How Thoughtly Works

Thoughtly’s voice AI system harnesses cutting-edge technology to create an unmatched customer experience through dynamic, conversational interactions. Unlike traditional intent-based dialog systems that rely on Natural Language Understanding (NLU) models, Thoughtly leverages generative large language models (LLMs) to provide responses that feel natural, flexible, and human-like.

From Intent-Based Systems to Conversational AI: A New Era

Intent-based systems are designed to recognize specific inputs and match them to pre-programmed “intents.” Once an intent is identified, the system triggers a fixed response written manually by the designer of the given dialog system. While effective for handling predictable, repetitive interactions, intent-based systems have limited flexibility. They’re constrained by the defined intents and don’t adapt easily to unexpected or nuanced queries. This approach can make conversations feel robotic and can be frustrating when callers step outside the anticipated dialogue paths.

Thoughtly, on the other hand, is powered by generative LLMs that offer a far more flexible, conversational approach. By leveraging advanced models from OpenAI, Meta (LLaMA), Mistral, and Anthropic, Thoughtly’s AI adapts in real time to the unique phrasing and needs of each interaction. This approach is similar to hiring a human agent: while an agent is trained on company policies and customer service best practices, they aren’t restricted to scripted responses and can adjust dynamically to any conversation. Thoughtly’s AI provides a similar experience, drawing on its extensive training to respond naturally and intelligently to each caller’s needs.

This human-like approach to Thoughtly’s outputs is what makes it so well-equipped to handle both sales conversations and more advanced support calls.

This shift represents a technological breakthrough, enabling Thoughtly to deliver conversations that flow naturally, adapt to varied input, and create an engaging, frictionless CX. With its ability to understand complex language patterns, Thoughtly’s conversational AI can handle a far broader range of queries than traditional, intent-based systems, offering a more satisfying and intuitive interaction experience.

How Large Language Models (LLMs) Work

At the heart of Thoughtly’s conversational AI are large language models (LLMs), which function fundamentally differently from intent-based systems. LLMs use a sophisticated neural network architecture known as a transformer, which enables them to understand and generate language based on probabilities rather than pre-set rules.

  1. Self-Attention and Context Awareness: LLMs use a self-attention mechanism that helps the model dynamically “focus” on relevant parts of the input text, enabling it to understand context, relationships, and nuances across a conversation. This contextual awareness allows the AI to provide responses that are adaptive, relevant, and coherent, even in complex interactions.

  2. Probabilistic Response Generation: Unlike traditional rule-based systems, LLMs generate responses based on probabilities. They evaluate multiple possible next words (or tokens) and select one based on its likelihood in the given context. This makes each response unique, adaptive to the conversation, and more human-like. However, it also means responses aren’t fully deterministic, making absolute predictability impossible.

  3. Trained on Vast Data: Thoughtly’s LLMs have been trained on extensive, diverse datasets, which allow them to understand and generate language effectively across many contexts. This broad training makes Thoughtly’s AI highly flexible, allowing it to handle a wide variety of inputs without requiring explicit programming for each scenario.

While these attributes make Thoughtly’s AI impressively dynamic and capable, they also introduce an inherent variability. Because responses are generated based on probability, achieving perfect outputs 100% of the time is statistically improbable. Just as a human conversation partner may occasionally misunderstand a question or need clarification, Thoughtly’s AI may sometimes produce a response that could be refined.
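To make the probabilistic nature of response generation concrete, here is a toy sketch of temperature-scaled token sampling. The vocabulary, logits, and function are illustrative only, not Thoughtly’s actual implementation:

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=None):
    """Sample one token index from raw model scores (logits).

    Higher temperature flattens the distribution (more varied output);
    lower temperature sharpens it (more predictable output).
    """
    rng = rng or random.Random()
    # Temperature-scaled softmax: convert scores into probabilities.
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw a token index in proportion to its probability mass.
    idx = rng.choices(range(len(logits)), weights=probs, k=1)[0]
    return idx, probs

# Toy vocabulary: the model strongly prefers "help" but can pick others.
vocab = ["help", "assist", "support", "banana"]
logits = [4.0, 2.5, 2.0, -3.0]
idx, probs = sample_next_token(logits, temperature=0.8, rng=random.Random(0))
```

Because the draw is weighted rather than deterministic, the same prompt can yield different but plausible continuations on different calls, which is exactly why perfect repeatability is statistically out of reach.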

Voice AI: Generative Speech with Transformer-Based Models

Thoughtly’s AI system doesn’t stop at understanding and generating responses; it also translates these outputs into natural-sounding speech. Once the LLM generates a response, Thoughtly uses transformer-based Voice AI text-to-speech (TTS) models to convert that text into audio in real time. These models enable rich, human-like vocalization, providing customers with a seamless, fully generative experience.

However, like any generative system, there is a degree of variability in each response. Because these voice models work probabilistically, they don’t reproduce identical outputs every time. This variability, while making interactions feel more natural, can sometimes result in responses that don’t fully align with the intended outcome. Thoughtly minimizes these variations through monitoring, fine-tuning, and model updates, but complete perfection isn’t statistically achievable in generative systems.

Just like human phone calls, no two Thoughtly conversations will ever be exactly the same, from the words spoken to the vocal intonation. This is the future of conversational AI and dialog design systems.

Why 100% Coverage is Statistically Improbable

Given how LLMs work, achieving 100% coverage is statistically improbable. Here’s why:

  • Probabilistic Response Generation: Responses are generated based on statistical probabilities rather than deterministic paths. This allows for natural, varied conversation but also means occasional unexpected outputs.

  • Contextual Sensitivity: LLMs respond dynamically to context, which can change based on subtle variations in phrasing, tone, or past interactions. This variability introduces minor, sometimes unpredictable shifts in responses that may not always perfectly align with expected outcomes.

  • Broad Language Understanding: Thoughtly’s models are trained on a vast range of language patterns, enabling them to respond flexibly but also making it difficult to predict every possible conversational direction. Just as a human agent may encounter scenarios they weren’t trained for, Thoughtly’s AI will occasionally face unforeseen conversational contexts.

For applications where consistent, precise responses are critical, Thoughtly recommends providing rules to your Voice Agent to ensure that it is aware of the strict guidelines it must follow. For example, provide your Voice Agent with language that should be completely avoided, ensuring it doesn’t say anything that could be considered “off-brand.”

Additionally, enabling fallback to human agents ensures that while Thoughtly’s AI handles the majority of interactions smoothly, any truly unique or unpredictable scenario can be directed to a live representative, maintaining a high standard of customer experience.
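As a minimal sketch of the rules-plus-fallback idea, the check below screens a generated response against banned language before it is spoken and substitutes a safe handoff message otherwise. The phrases, message, and function are hypothetical, not part of Thoughtly’s product API:

```python
# Hypothetical list of off-brand language the agent must never use.
BANNED_PHRASES = [
    "guaranteed refund",
    "legal advice",
]

# Hypothetical handoff message used when a response fails the check.
FALLBACK = "Let me connect you with a team member who can help with that."

def apply_guardrails(response: str) -> str:
    """Return the response unchanged, or the fallback if it contains
    any banned phrase (case-insensitive substring match)."""
    lowered = response.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            return FALLBACK
    return response
```

A production system would typically combine prompt-level rules with a post-generation filter like this one, so the model is both steered away from off-brand language and checked before anything reaches the caller.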

By using a product that utilizes generative language models, you acknowledge a minimal risk of occasional unexpected outputs. With Thoughtly, however, this risk is minimized through product guardrails and is likely lower than what may occur with human agents.

By leveraging Thoughtly’s LLM-powered conversational AI and following the recommended training and monitoring steps, you can create a highly effective virtual agent that delivers outstanding CX with minimal variance—while recognizing that a small degree of unpredictability is a natural and even beneficial part of creating a human-like conversational experience.

Training Your Thoughtly Voice Agent: A Step-by-Step Guide

To maximize the effectiveness of your Thoughtly Voice Agent, we recommend a strategic training approach that builds from core conversations to more nuanced interactions. Follow these steps to create a high-performing virtual agent:

1. Define the Happy Path (Majority Coverage, >50%)

Begin by creating a conversation flow that covers the most common, straightforward scenarios—often referred to as the “happy path.” Focus on interactions that make up roughly 60% of expected conversations. This provides your agent with a solid foundation and ensures it performs well in common scenarios from day one.

2. Expand to Edge Cases (90% Coverage)

Once the happy path is performing smoothly, begin to identify and address edge cases. These might include less frequent inquiries, unusual phrasing, or specific customer needs that fall outside standard interactions. Expanding to these edge cases brings your agent’s handling capabilities closer to 90%, significantly improving its ability to manage a variety of scenarios.

Expose a test line to your team internally to gather feedback on how the agent performs in these edge cases. This feedback loop is crucial for identifying gaps and refining responses.

3. Go Live and Monitor Calls (30-Day Evaluation)

When your agent is handling a diverse set of scenarios effectively, you’re ready to go live with customers. For the first 30 days, monitor calls closely to identify any interactions where the agent’s response may have fallen short or could be improved. This period allows you to gather real-world data on how the agent performs under a variety of circumstances.

4. Refine and Update for 99%+ Coverage

As you spot gaps or errors in responses, you can make updates directly on Thoughtly. By training your agent or providing the agent with up-to-date information, you can address most observed issues. This iterative refinement process will bring your agent’s coverage to around 99%.

5. Edge Case Discovery

While Thoughtly’s conversational AI can cover an impressive range of queries, reaching 100% is statistically improbable. No system, human or AI, can anticipate every possible interaction. For cases where achieving complete coverage is critical, Thoughtly recommends providing rules to your Voice Agent to ensure that it is aware of the strict guidelines it must follow. For example, provide your Voice Agent with language that should be completely avoided, ensuring it doesn’t say anything that could be considered “off-brand.”

By following these steps, you’ll develop a high-performing Thoughtly Voice Agent capable of handling a broad range of customer inquiries with ease, flexibility, and exceptional quality.

Start building

If you haven’t done so already, you’ll need to create a free account to get started. Once you’re in, continue to the Agent Builder to start building your first Voice Agent.