Business Wire

Boost.ai Unveils Large Language Model Enhancements to Conversational AI Platform

11.5.2023 10:01:00 EEST | Business Wire | Press release


Boost.ai, a leading conversational AI solution provider, today announced Version 12 of its platform, the first of a series of planned updates by the company to incorporate Large Language Model (LLM)-enriched features. This iteration is focused on key customer experience (CX) improvements, including content suggestion, content rewriting and accelerated generation of training data. The new update will take advantage of Generative AI to suggest messaging content to AI Trainers within the boost.ai platform, generating suggested responses and resulting in drastically reduced implementation times for new intents. With this latest release, boost.ai reinforces its commitment to researching, developing, releasing, and maintaining responsible implementations of LLM-powered, enterprise-quality conversational AI features in order to further enhance the customer experience.
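To illustrate the idea of LLM-assisted training-data generation described above, here is a minimal, purely hypothetical sketch in Python. The `llm_complete` stub stands in for any real LLM call, and all function and intent names are illustrative assumptions, not boost.ai's actual API; the point is only the human-in-the-loop flow in which an LLM proposes candidate phrasings for an AI Trainer to review.

```python
# Hypothetical sketch: an LLM proposes training phrases for a new intent,
# which an AI Trainer then reviews. llm_complete() is a stand-in stub for
# any real LLM call; all names here are illustrative, not boost.ai's API.

def llm_complete(prompt: str) -> str:
    # Stand-in for a real LLM call; returns canned paraphrases here so the
    # sketch runs without network access or credentials.
    canned = {
        "cancel_card": (
            "I want to cancel my card\n"
            "Please block my credit card\n"
            "How do I deactivate my card?"
        ),
    }
    return canned.get(prompt.split(":")[-1].strip(), "")

def suggest_training_phrases(intent_name: str, n: int = 3) -> list[str]:
    """Ask the LLM for up to n candidate user utterances for a new intent."""
    prompt = f"Suggest {n} user phrasings for the intent: {intent_name}"
    raw = llm_complete(prompt)
    # Human-in-the-loop: these are suggestions for the AI Trainer to accept
    # or reject, not final training data.
    return [line.strip() for line in raw.splitlines() if line.strip()][:n]

candidates = suggest_training_phrases("cancel_card")
print(candidates)
```

In a real deployment the stub would be replaced by a call to an actual LLM, and accepted suggestions would feed the intent engine's training set, which is how drastically reduced implementation times for new intents become possible.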

ChatGPT has dominated headlines throughout 2023, yet questions about the accuracy of its underlying models, GPT-3.5 and GPT-4, persist. An accuracy rate of around 85% shows the potential of this technology in consumer-facing applications, but in their current raw state LLMs lack the dependability required for direct integration into a bank's or insurance firm's systems. Boost.ai helps resolve this issue by seamlessly integrating the predictive capabilities of LLMs with the enterprise-grade control of its conversational AI platform, creating a Hybrid Natural Language Understanding (NLU) system that offers unmatched accuracy, flexibility, and cost-effectiveness. By expertly weaving together the right AI components for each task, boost.ai's platform ensures precise and accurate answers, harnessing the benefits of LLMs while still adhering to stringent quality assurance requirements.

“LLM technology offers great promise, but most applications just aren’t properly designed to securely and scalably support real-world businesses. With worries about accuracy or even inappropriate behavior, established institutions like banks could not risk direct access to this iteration of generative AI - until now,” said Jerry Haywood, CEO of boost.ai. “By pairing LLMs with our conversational AI, we’re able to ensure accuracy and open the door for customers in sensitive industries like financial services. We’re proud to be pioneering a way forward for businesses to harness this tech right now. It’s available for customers to use and enhance their existing solution, and to help them achieve speed to value significantly sooner whilst minimizing the risks currently dominating headlines.”

With a Hybrid NLU approach, enterprises gain the ability to combine boost.ai’s market-leading intent management, context handling, and dialogue management solutions with powerful LLM-enriched tools. Boost.ai’s existing intent engine is highly trained with guardrails in place to help guide the LLM, increasing overall accuracy and reducing the number of false positives. The end result is a chatbot that can confidently provide answers to inquiries, and a more streamlined development path that radically enhances how boost.ai customers can build scalable customer experiences for chat and voice.
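The guardrailed routing described above can be sketched as a small, hypothetical Python example. Nothing here reflects boost.ai's actual implementation: the classifier, the LLM fallback, and the guardrail check are all stand-in stubs, and the confidence threshold is an assumed value. The sketch only illustrates the general pattern of a curated intent engine answering when confident, with an LLM fallback that must pass a guardrail check.

```python
# Hypothetical sketch of a hybrid NLU flow: a curated intent classifier
# answers when confident; otherwise an LLM drafts a reply that must pass a
# guardrail check. All names and thresholds are illustrative assumptions.

INTENT_RESPONSES = {
    "reset_password": "You can reset your password under Settings > Security.",
}

def classify_intent(utterance: str) -> tuple[str, float]:
    # Stand-in for a trained intent engine returning (intent, confidence).
    if "password" in utterance.lower():
        return ("reset_password", 0.94)
    return ("unknown", 0.30)

def llm_draft(utterance: str) -> str:
    # Stand-in for an LLM-generated fallback response.
    return "I'm not certain; let me connect you with a human agent."

def passes_guardrails(reply: str) -> bool:
    # Example guardrail: block replies making promises the bot cannot keep.
    return "guarantee" not in reply.lower()

def answer(utterance: str, threshold: float = 0.8) -> str:
    intent, confidence = classify_intent(utterance)
    if confidence >= threshold and intent in INTENT_RESPONSES:
        return INTENT_RESPONSES[intent]  # curated, high-precision path
    draft = llm_draft(utterance)
    return draft if passes_guardrails(draft) else "Let me transfer you."

print(answer("How do I change my password?"))
```

The design choice this pattern captures is the one the release describes: the trained intent engine keeps false positives down on known, high-stakes topics, while the LLM adds coverage only behind quality gates.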

“With our Hybrid NLU, we’ve been able to surpass the performance of either model on its own, providing our customers with the best of both worlds,” said Lars Ropeid Selsås, Founder of boost.ai. “Boost.ai will continue to operate at the cutting edge of AI possibility, refining our platform so that our customers are always receiving the best possible technology and service.”

On May 9, Boost.ai co-founder Henry Vaage Iversen and the technical team conducted a webinar highlighting the strengths of Hybrid NLU and mapping out a path for businesses to start using LLM technology to their advantage. For a recording of the full webinar, please visit: https://www.boost.ai/webinars/enhancing-cx-and-reducing-operational-costs-with-boost-ai-large-language-models

About Boost.ai

Boost.ai is a global leader in conversational AI optimized for scale. Boasting the industry's most robust intent portfolio, Boost.ai is pioneering an era of broad-scope virtual agents to deliver the most advanced and scalable technology on the market. With consistent resolution rates of 90%, Boost.ai’s market-leading virtual agent supports enterprise customers across key industries throughout the United States and Europe, including banking, insurance, telecom, retail and more. In 2021, Boost.ai was named a major player in the IDC MarketScape category, Worldwide Conversational AI Platforms for Customer Service. Key customers include Santander Bank, MSU Federal Credit Union, Aspire General Services, Tokio Marine and more. Learn more at boost.ai.


Contact information

Jon Burch
Boost.ai@resonancecrowd.com
+44 208 819 3170

About Business Wire

For more than 50 years, Business Wire has been the global leader in press release distribution and regulatory disclosure.
