Generative AI In Business: Large Language Models Applied In Organizations

Discover the transformative potential of generative AI in business. Learn how leveraging large language models can enhance efficiency and drive growth.
Enterprise AI
August 9, 2023

The digital era has ushered in an age of transformation, especially for businesses striving to leverage technology for growth and efficiency. From small businesses to large enterprises, adopting artificial intelligence (AI) technologies has become essential to optimize business processes and create competitive advantages.

Specifically, organizations are increasingly adopting Generative AI. Let’s explore how they’re leveraging it right now.

What Is Generative AI?

Artificial Intelligence is an umbrella term that refers to the capability of machines to mimic human intelligence or behavior. Generative AI in particular refers to AI that uses machine learning techniques to generate output, often in the form of text, images, or music. It's called "generative" because it's capable of generating new content, based on its training on large sets of data.

As such, it vastly differs from previously-developed AI solutions that focused on processing inputs (Processing AI) rather than generating outputs (Generative AI).

Processing vs. Generative AI

While Generative AI focuses on creativity and the generation of new, original content, Processing AI concentrates on understanding, analyzing, and interpreting existing data.

What Is Natural Language Processing (NLP)?

Generative AI is closely linked to natural language processing (NLP).

NLP is the ability of a computer program to understand, interpret, generate, and interact using human language (also referred to as “natural language”). It is the primary field concerned with language understanding and underpins most text-based Generative AI. To do so, it combines elements of computer science, data science, linguistics, and machine learning.

What Are Large Language Models (LLMs)?  

Large language models (LLMs) are a prime example of Generative AI. They are pre-trained models that use machine learning to recognize, summarize, translate, predict, and generate text and other content. They are called “large” because they are trained on a vast amount of data and have a large number of parameters. For example, OpenAI’s vastly popular GPT-3 has 175 billion parameters.
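To get a feel for what “large” means in practice, here is a purely illustrative sketch that loads a small open model (GPT-2) and counts its parameters for comparison with GPT-3. It assumes the Hugging Face transformers library is installed and is not taken from any vendor or case study in this article.

```python
# Illustrative only: count the parameters of a small open model (GPT-2, ~124 million)
# to put GPT-3's 175 billion parameters in perspective.
# Assumes the Hugging Face `transformers` library is installed.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
n_params = sum(p.numel() for p in model.parameters())

print(f"GPT-2 parameters: {n_params:,}")   # roughly 124 million
# GPT-3, at 175 billion parameters, is about 1,400x larger -- hence "large" language model.
```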

Business Applications of Generative AI 

Generative AI and large language models can help automate many different business processes and operations. This brings about additional benefits, such as freeing up employees for more creative or high-stakes tasks and increasing revenue while decreasing costs.

We can categorize their applications into the following four areas: 

  • Text Generation
  • Text Summarization
  • Text Classification
  • Document Processing

Let’s examine them one by one and explore how they help businesses streamline their processes.

Text Generation

Text generation refers to creating coherent and contextually relevant sentences, and it has the potential to transform the way businesses handle content creation.

Large language models generate text by predicting the next word or sentence based on context, which demonstrates one of the most notable strengths of Generative AI: context awareness.

Unlike earlier rule-based systems, Generative AI doesn’t require hand-written rules to produce coherent, sensible content. It bases its predictions on the context learned during pre-training as well as the context of user inputs, rather than on a strict and limited set of rules.
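As a rough sketch of that next-word prediction, the snippet below asks a small open model (GPT-2) which tokens it considers most likely to follow a prompt. It assumes the Hugging Face transformers and torch packages are installed, and it only illustrates the mechanism, not any production system mentioned here.

```python
# A rough sketch of next-token prediction with a small open model (GPT-2).
# Assumes the Hugging Face `transformers` and `torch` packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Our quarterly revenue grew because"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, sequence_length, vocab_size)

next_token_logits = logits[0, -1]          # scores for the token that would come next
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)

# Print the five most likely continuations and their probabilities.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {prob.item():.3f}")
```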

Today, we’re also seeing increasingly more advanced iterations of this traditional NLP task. 

Besides producing text-based outputs, many large language models can also generate code, and some can work with other modalities such as images, audio, and video. Such models are called multimodal models, with OpenAI’s GPT-4 being a prime example.

Business Use Cases

  • Automated Marketing Content: Generate blogs, social media posts, and other marketing content to free up your marketing team for more high-stakes tasks or give them an excellent starting point.
  • Automated Customer Relationship Management: Generate custom responses to customer inquiries in seconds, enhance customer relations, and drive brand loyalty.
  • Automated Report Generation: Automatically generate financial, technical, and other reports, minimize the risk of human error, and save hours of your team’s time. 
  • Synthetic Data Generation: Generate synthetic data that mimics real data and accelerate your research and decision-making processes without the need for time-consuming data collection and anonymization procedures. 

Case Study: Korea Telecom’s AICC

Korea Telecom (KT) is South Korea’s leading mobile operator, with over 22 million subscribers. A customer base that large creates a great need to understand customer requests and answer questions efficiently.

This motivated KT to pioneer an AI-driven customer services platform called AI Contact Centre or AICC. AICC is KT’s all-in-one, cloud-based platform that uses LLM-powered virtual agents to respond to various customer questions and inquiries through natural, human-like conversations. 

As Korean, written in the Hangul script, is regarded as one of the most complex languages in the world, KT trained LLMs on its own Hangul-based datasets to improve their accuracy and efficiency.

Today, AICC manages over 100,000 customer inquiries daily and reduces consultation times by 15 seconds without any human intervention. Given KT’s current market position, it is clear that using large language models has improved customer experience and given KT a competitive advantage that helps maintain customer relationships.

KT claims that its various AI-related services have already generated over $617.9 million in revenue, so the company plans to pour an additional $5.4 billion into AI by 2027. 

Summary

  • The goal: automated, AI-powered, and efficient customer service
  • The task: a contemporary twist on text generation (speech generation)
  • The obstacle: the complexity of the Korean language and a lack of off-the-shelf models trained in it 
  • The solution: custom large language models trained on specialized, Hangul-based datasets
  • The results: over $600 million in revenue, over 100,000 customer inquiries managed daily by AI 

Text Summarization

In a world of information and data overload, time has become a precious resource. This is where text summarization comes in handy. 

As the name suggests, text summarization distills lengthy texts into concise summaries. With NLP at its core, it has a deep enough understanding of language to identify and retain key points while discarding redundant information.

By leveraging their understanding of text structure, context, and meaning, large language models help users save time by letting them quickly absorb large amounts of information presented as actionable insights. This also helps organizations reduce human errors and oversights.
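As a minimal illustration, the sketch below condenses a short block of meeting notes with a pre-trained summarization model. The model name is an illustrative default rather than one used by any company in this article, and the Hugging Face transformers library is assumed to be installed.

```python
# A minimal sketch of summarizing business text with a pre-trained model.
# The model is an illustrative default; assumes `transformers` is installed.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

meeting_notes = (
    "The product team reviewed the Q3 roadmap. Checkout latency remains the top "
    "customer complaint, so two engineers will move to the payments service next "
    "sprint. Marketing asked for earlier access to release notes, and support "
    "requested a shared dashboard of open incidents."
)

summary = summarizer(meeting_notes, max_length=40, min_length=15, do_sample=False)
print(summary[0]["summary_text"])
```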

Business Use Cases

  • Document Review: Quickly summarize lengthy contracts, financial reports, and other business documents to ensure you never miss crucial details. 
  • News Briefs: Keep up-to-date with industry trends and news to make more informed strategic decisions.
  • Meetings: Summarize meeting notes to quickly identify action points.
  • Customer Feedback: Summarize feedback to rapidly identify customer trends and improve products based on customer behavior and responses.

Case Study: InShorts


InShorts is an application presenting bite-sized news summaries in 60 words or less to its audience in India. 

They trained their LLM-backed algorithm, Rapid60, on over 500,000 previously manually produced summaries. In the words of InShorts’ co-founder and CEO, using an LLM as a base allowed InShorts to “remove inefficiencies from the system thereby, helping create a fast-paced news summarization platform for the industry.”

Rapid60 can read, learn from, and summarize more than 100,000 articles per month, a 10x improvement on its previous capacity. While 20% of InShorts’ content is purely editorially curated for click-through rate and engagement purposes, 80% of it comes from Rapid60.

InShorts’ ability to quickly push out content has allowed it to become India’s highest-rated news application, with more than 15 million downloads across Android and iOS.

Summary

  • The goal: 60-word summaries of news articles generated by Generative Artificial Intelligence
  • The task: text summarization
  • The obstacle: parsing large volumes of data and summarizing content, both long-standing challenges for machine learning and AI
  • The solution: custom large language models trained on specialized datasets containing over half a million manually-produced summaries
  • The results: 10x increase in generated summaries on a monthly basis, InShorts becoming the highest-rated news app in India

Text Classification

Since large language models can understand textual nuances, they’re excellently suited to the task of text classification. Text classification categorizes both structured and unstructured text and other data into predefined classes.

While text classification isn’t generative in the same sense as text generation, the underlying algorithms and neural networks are closely related. The models still need to understand the underlying distribution of the data in order to classify text, and one generative modeling approach that can support this is Generative Adversarial Networks (GANs).

GANs have two neural networks: a generator that produces synthetic samples, and a discriminator that evaluates them. In the context of text classification, GANs are used to produce and classify synthetic textual data samples. This approach allows the model to better understand various nuances and outliers, become more accurate, and help organizations effectively classify data. 
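For a concrete feel of the classification task itself (leaving the GAN machinery aside), here is a hedged sketch that sorts a piece of customer feedback into pre-set categories using the zero-shot classification pipeline from Hugging Face transformers. The model name is an illustrative default and is not taken from any case study below.

```python
# A simple sketch of classifying customer feedback into pre-set categories with
# a pre-trained model via zero-shot classification. This illustrates the task,
# not the GAN-based approach described above. Assumes `transformers` is installed.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

feedback = "The shoes arrived two days late and the box was damaged."
labels = ["positive", "negative", "neutral"]

result = classifier(feedback, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label:>9}: {score:.2f}")   # labels come back sorted by confidence
```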

Business Use Cases

  • Feedback Categorization: Classify customer feedback in desired pre-set categories (e.g., positive, negative, and neutral) to help guide product development and improve customer satisfaction.
  • Email Sorting: Identify spam or prioritize important emails to enhance productivity and streamline communication.
  • Social Media Sentiment Analysis: Analyze customer response and feedback to inform your marketing strategy and brand positioning.
  • Document Sorting: Classify documents in digital repositories to improve accessibility and information management.
  • Customer Inquiry Sorting: Categorize inquiries by urgency or topic to prioritize responses and improve customer satisfaction.

Case Study: schuh

Founded in 1981, schuh is a UK and Ireland-based footwear retailer. Despite its long-standing brand presence, schuh has kept up to date with emerging technologies and trends that help it improve continuously.

schuh uses Amazon Comprehend by AWS, a natural language processing (NLP) service that uses machine learning to discover insights from text. 

Using the custom classification API embedded within Amazon Comprehend, schuh is able to automatically categorize 720,000 monthly customer emails. According to the director of e-commerce and customer experience, Sean Mckee, this helps the company “save hours of manual sorting of free customer text while simultaneously achieving an objectivity which previously eluded [schuh]”. 

Similarly, the targeted sentiment API within Amazon Comprehend lets schuh receive granular sentiment insights (positive, negative, neutral, or mixed) about customer reviews. According to the head of development systems, Blair Milligan, this helps schuh “put a customer problem in front of the right person [and] really give [the company] the best chance of retaining that customer going forward”. 
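As a hedged sketch of how those two Comprehend capabilities are typically called from code, the snippet below uses the AWS SDK for Python (boto3). The endpoint ARN and region are placeholders, a custom classifier must already be trained and deployed in your own AWS account before classify_document will work, and none of this is taken from schuh’s actual implementation.

```python
# A hedged sketch of calling Amazon Comprehend's custom classification and
# targeted sentiment APIs with boto3. The endpoint ARN and region are placeholders;
# a custom classifier must already be trained and deployed in your AWS account.
import boto3

comprehend = boto3.client("comprehend", region_name="eu-west-1")

email_text = "My order arrived in the wrong size and I need an exchange."

# Custom classification against a previously trained classifier endpoint.
classification = comprehend.classify_document(
    Text=email_text,
    EndpointArn="arn:aws:comprehend:eu-west-1:123456789012:document-classifier-endpoint/example",
)
print(classification["Classes"])   # e.g. [{"Name": "returns", "Score": 0.97}, ...]

# Targeted sentiment: a sentiment label for each entity mentioned in the text.
sentiment = comprehend.detect_targeted_sentiment(Text=email_text, LanguageCode="en")
for entity in sentiment["Entities"]:
    mention = entity["Mentions"][0]
    print(mention["Text"], mention["MentionSentiment"]["Sentiment"])
```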

Given these benefits and the fact that schuh continues to win various retailer of the year awards, Milligan aims to replace all of the company’s existing human-reliant systems with AI-friendly software offered by AWS.

Summary

  • The goal: sort and analyze inbound customer communication more efficiently
  • The task: text classification
  • The obstacle: traditional solutions lacked the semantic understanding, contextual awareness, and overall adaptability to correctly classify diverse textual data
  • The solution: an NLP service with multiple Generative AI capabilities  
  • The results: saving hours of manual sorting, more objective analysis, and better customer retention 

Document Processing

Document processing uses NLP to understand the content, structure, and semantics of documents. It involves extracting, classifying, and structuring both the data within documents and the documents themselves, and as such it encompasses several narrower NLP tasks.

As LLMs can be fine-tuned to recognize and process the different document structures used across organizations, document processing becomes a highly useful tool for organizations dealing with heavy paperwork.
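As one hedged example of how such extraction is often set up in practice, the sketch below prompts a general-purpose LLM to pull a few fields out of an invoice and return them as JSON. It assumes the OpenAI Python SDK (v1+) is installed and an API key is configured; the model name is purely illustrative, and any comparable LLM API could be substituted.

```python
# A hedged sketch of prompt-based invoice field extraction with a general-purpose
# LLM. Assumes the OpenAI Python SDK (v1+) is installed and OPENAI_API_KEY is set;
# the model name is illustrative, and any comparable LLM API could be swapped in.
import json
from openai import OpenAI

client = OpenAI()

invoice_text = """
Invoice No: INV-2024-0193
Supplier: Acme Office Supplies Ltd
Date: 2024-03-14
Total Due: GBP 482.50
"""

prompt = (
    "Extract the invoice number, supplier, date, and total amount from the text below. "
    "Respond with a single JSON object using the keys invoice_number, supplier, date, total.\n\n"
    + invoice_text
)

response = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)

fields = json.loads(response.choices[0].message.content)
print(fields["invoice_number"], fields["total"])
```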

Business Use Cases

  • Invoice Information Extraction: Automate payment processing based on invoice data to improve efficiency and reduce errors.
  • Resume Processing: Streamline hiring processes to save time and aid in hiring the best talent at minimal costs.
  • Form Data Extraction: Extract the right data in the right fields to improve record-keeping and aid data analysis.
  • Legal Document Analysis: Extract, verify, or classify key clauses or sections to reduce legal risks and facilitate better contract negotiation.
  • Financial Document Analysis: Extract insights from financial documents to improve financial management and forecasting. 

Case Study: Lemonade


Lemonade is a relatively new multi-purpose insurance provider, with over 1 million customers and only 500 employees. Previously established rivals like Allstate and State Farm employ around 50,000 people. Lemonade’s lean headcount is enabled by AI tools and LLM algorithms.

Lemonade employs AI systems to automate several complex business processes, including onboarding customers, deciding on insurance premiums, and processing claims.

It uses a chatbot called Maya to act as a virtual agent and walk customers through the signup process. Maya also sets the insurance premium based on customers' answers to its questions.

Because the model is pre-trained on data consisting of high-quality insurance policies, it allows Lemonade to eliminate human subjectivity in deciding premiums and to reduce errors in filling out and processing insurance documents. This allows Lemonade to move toward a 75% gross loss ratio (the proportion of claimed losses to earned premiums), an industry-leading performance benchmark.

Lemonade also employs the chatbot AI Jim, which acts as another virtual agent to receive, read, process, and pay out insurance claims. Around 40% of claims are handled by AI Jim and approved instantly, and the chatbot’s speed and efficiency have allowed Lemonade to break records by processing a claim in just 3 seconds.

Maya and AI Jim have consistently worked together to allow Lemonade to live up to its “90 seconds to get insured; 3 minutes to get paid” tagline. Their joint work may also explain Lemonade’s 4.9/5.0 rating.

Summary

  • The goal: underwrite new insurance policies and process claims
  • The task: document processing
  • The obstacle: a lack of off-the-shelf AI systems trained on high-quality training data
  • The solution: two chatbots that act as virtual agents 
  • The results: objectively underwriting policies, processing claims in record times, and boasting a 4.9/5.0 rating

Get Custom AI Agents For Your Business 

Harness the power of Generative AI and experience the benefits yourself. Whether you want to automate your organizational workflow or develop groundbreaking product features, we will build an AI solution that helps you achieve it.

Contact our experts today to learn more about our Custom AI Agents for businesses.

Want to learn more about Generative AI Agents for your business? Enter your email and we’ll contact you as soon as possible.