
What can ChatGPT and LLMs really do for your business?


ChatGPT’s usage continues to grow, with over 1.8 billion monthly visits and 10 million daily queries as of this writing. It runs on OpenAI’s GPT family of large language models (LLMs), most recently GPT-4, which has several competitors, including Google LaMDA, Hugging Face’s BLOOM, Nvidia’s NeMo LLM, and others.

There’s significant excitement, fear, hype, and investment around ChatGPT, LLMs, and other generative AI capabilities. People and businesses are experimenting, and even though it’s been less than a year since many of these capabilities became available, it’s worth asking at least two key questions: Where are ChatGPT and LLMs providing business value, and what activities are risky or beyond today’s capabilities?

The answers aren’t straightforward because generative AI capabilities are quickly evolving. For example, GPT-4 was first announced in March 2023 and rolled out to ChatGPT Plus subscribers soon after. Also, what works well for one person or company may not generalize well to others, especially now that we have the new skill of prompt engineering to master.

But it’s hard for businesses to sit on the sidelines and ignore the opportunities and risks. “ChatGPT and LLMs can change the fundamental equation of business,” says Patrick Dougherty, CTO and co-founder of Rasgo. “Instead of corporate output being bottlenecked by human time investment, your only limitation will become the quality of your strategic decision-making.”

What’s hype, what’s real today, and what’s likely to evolve over the next few years? Below are some guidelines to consider for what ChatGPT can and cannot do, and what you should and should not do with LLMs. 

1. Don’t share proprietary information on public LLMs

“AI is great if you can control it,” says Amy Kenigsberg, COO and co-founder of K2 Global Communications. “While most of us just click ‘I agree’ to a terms and conditions page, you need to read the terms of AI tools very closely.”

Many firms are drafting ChatGPT policies, and a primary concern is the risks of sharing business-sensitive information. In one recent instance, engineers asked for help debugging by pasting proprietary code into ChatGPT.

Kenigsberg continues, “The issue with ChatGPT and many other AI tools is that any information you paste in becomes part of its training data set. If someone enters proprietary data, that information may appear in a competitor’s materials. If personally identifiable information (PII) is entered to analyze a client, the company may violate GDPR, CCPA, or any of the many privacy regulations.”

So before experimenting and exploring use cases, review company AI and data governance policies and disclose your objectives if required for compliance. 
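
If your policies do allow experimentation, one simple safeguard is to scrub obviously sensitive data before a prompt ever leaves your environment. The following is a minimal sketch, not a compliance tool: the patterns and the redact helper are illustrative assumptions, and real PII detection typically requires far broader coverage or a dedicated service.

```python
import re

# Illustrative patterns only -- production PII detection needs far broader coverage
# (names, addresses, account numbers) and is usually handled by a dedicated service.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tokens before text is sent to a public LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize this support ticket from jane.doe@example.com, phone 555-123-4567."
print(redact(prompt))
# Summarize this support ticket from [EMAIL REDACTED], phone [PHONE REDACTED].
```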

2. Review LLM capabilities in primary workflow tools

Over the last several months, many technology vendors have announced new AI and LLM capabilities built into their platforms. If you’re looking for business value, review how these capabilities improve productivity, simplify access to information, or provide other new operational benefits.

3. Get quick answers, but know the limits of LLMs

A primary use case for ChatGPT and LLMs is to get quick answers without doing all the underlying research or learning required to become an expert. For example, marketers may seek help wording customer emails; technologists may want technical terms defined; or human resources may ask for help rewording a policy.

LLMs developed on enterprise content also offer many benefits, enabling employees to ask questions to accelerate onboarding, understand company benefits, find product information, or identify subject matter experts.

In other words, ChatGPT and other LLMs can be a productivity booster, enhance people’s skills, and assist in creating content. 
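
As a concrete illustration of the quick-answer pattern, here is a minimal sketch using the OpenAI Python package (the pre-1.0 interface); the model name and prompts are assumptions, and anything the model returns should be treated as a draft to verify, not a source of truth.

```python
import os
import openai  # assumes the openai package, pre-1.0 interface

openai.api_key = os.environ["OPENAI_API_KEY"]

def quick_answer(question: str) -> str:
    """Ask the model for a short, plainly worded answer."""
    response = openai.ChatCompletion.create(
        model="gpt-4",  # illustrative; use whatever model your plan and policies allow
        messages=[
            {"role": "system", "content": "Answer briefly and in plain language."},
            {"role": "user", "content": question},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

print(quick_answer("Explain what a service-level objective (SLO) is in two sentences."))
```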

“Generative AI is incredibly useful in helping businesses generate quick analyses and reports by scouring the web for open source intelligence like government, economic, and financial data,” says Raul Martynek, CEO of DataBank. “AI is already helping us quickly understand the environment of our data centers, the intent of our customers, and the sentiment of our staff, to ensure we’re making informed decisions quickly across all dimensions of the business.”

But it’s very important to understand the limitations of ChatGPT and other LLMs. Alex Vratskides, CEO of Persado, says, “Sam Altman, CEO of OpenAI, was spot on when he said ChatGPT creates a ‘misleading impression of greatness.’ If you’re looking for a productivity jumpstart, ChatGPT is an impressive tool. But ChatGPT alone is still unproven, insufficient, and can be misleading.”

Vratskides argues that greatness comes when AI helps people make better decisions. “When transformer models are trained on behavioral data from enterprise communications, language can be personalized to motivate people to engage and act, thus delivering business impact.”

People must also expect AI biases as models are trained on sources that contain conflicting information, falsehoods, and prejudiced opinions. Marko Anastasov, Semaphore CI/CD co-founder, says, “Though powerful, language models are ultimately bound by the biases ingrained in their training data and the complexity of human communication.”

Lastly, while ChatGPT is a great research tool, users must review what data it was last trained on. “ChatGPT is unaware of the latest events or news,” says Anjan Kundavaram, chief product officer of Precisely. “It’s also trained on text-based human conversations, using potentially inaccurate, untruthful, or misleading data. The integrity of the data fueling an AI model directly impacts its performance and reliability.”

Kundavaram recommends looking for business efficiencies. “It’s a great fit for customer-facing departments, helping to automate straightforward, conversational tasks so employees can focus on adding value.”

4. Simplify understanding of complex information

There are many places in a company’s technology and information stack where it’s hard to identify critical information from within complex content and data sources. I expect many companies to explore using AI search to improve customer and employee experiences because keyword search boxes are generations behind natural language querying and prompting.
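
To make the difference concrete, here is a minimal sketch of embedding-based retrieval over a handful of documents, using the OpenAI Python package (pre-1.0 interface); the embedding model, toy documents, and in-memory comparison are all illustrative assumptions, and a production system would typically use a vector database.

```python
import numpy as np
import openai  # assumes the openai package, pre-1.0 interface, with api_key already set

DOCS = [
    "To reset your VPN token, open the self-service portal and choose 'Reissue credential.'",
    "Expense reports over $500 require director approval before reimbursement.",
    "Production database backups run nightly at 02:00 UTC and are retained for 35 days.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    response = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([item["embedding"] for item in response["data"]])

doc_vectors = embed(DOCS)

def search(query: str) -> str:
    """Return the document most semantically similar to a natural-language query."""
    query_vector = embed([query])[0]
    scores = doc_vectors @ query_vector / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
    )
    return DOCS[int(np.argmax(scores))]

# A question with little keyword overlap with the document that answers it:
print(search("How long do we keep copies of the prod database?"))
```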

Finding information is one use case, and another is solving operational issues quickly. For example, performance issues in a multipurpose database can take a team of site reliability engineers, database administrators, and devops engineers significant time to find the root cause. “Generative AI will make it easier to manage and optimize database performance,” says Dave Page, VP and chief architect of database infrastructure at EDB. “AI-powered tools can automatically monitor databases, detect issues, and suggest optimizations, freeing up valuable time for database administrators to focus on more complex tasks.”

But, Page acknowledges, “Database issues can be complex, and there may be factors that the AI cannot take into account.”

Another use case is simplifying access to large and complex unstructured information sources such as product manuals and operational training guides. “Our customers generate a ton of documentation that may be hard to follow, not easy to search, or outside the scope of the average user,” says Kevin Miller, CTO of IFS North America. “We see LLMs as a great way to help provide context to our users in new ways, including unlocking the power of service manuals and showing how other users have solved similar problems.”

But Phil Tee, CEO and co-founder of Moogsoft, warns of a false equivalence between knowledge and understanding. “ChatGPT and other LLMs provide technical tips and explain complicated processes on a more human level, which is incredibly valuable—no jargon, just information, though we have certainly learned to fact-check the information,” he says. “But knowing that a set of steps will solve a problem is not the same as understanding whether these steps are correct to apply now, and that becomes detrimental if we rely too much on LLMs without questioning their output.”

If you’re considering plugging an LLM capability into one of your applications, Phillip Carter, principal product manager at Honeycomb, shares a recommendation. “Challenge yourself to think about where people struggle the most in your product today, ask what can be solved without AI first, and only reach for LLMs when reducing toil or teaching new users helps solve those problems.” He adds, “Don’t fool yourself into thinking you can slap a chat UI onto some sidebar of your product’s UI and expect people to get excited.”

5. Prepare to build LLMs on proprietary data products

People can use public LLMs like ChatGPT today, leverage LLM capabilities embedded in their software platforms, or experiment with generative AI tools from startups. Developing a proprietary LLM is currently expensive, so that’s not an option for most businesses. Using an existing LLM to create proprietary capabilities is an option some companies are beginning to explore.

John Ehrhard, CEO of Orson, says, “The biggest opportunities are for businesses with specific domain expertise who are building the context and knowledge layers on top of LLMs and using them as a translator to deliver a personalized interaction with every user.”
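
One common shape for that context layer is to retrieve relevant proprietary content and place it directly in the prompt, so the model answers from your knowledge rather than its general training data. Here is a minimal sketch of that grounding step, again assuming the pre-1.0 OpenAI Python interface; the retrieval itself (such as the embedding search sketched earlier) is left out, and all names are illustrative.

```python
import openai  # assumes the openai package, pre-1.0 interface, with api_key already set

def answer_from_company_docs(question: str, context: str) -> str:
    """Ground the model in retrieved company content instead of its general training data."""
    prompt = (
        "Answer the question using only the company documentation below. "
        "If the documentation does not cover it, say so.\n\n"
        f"Documentation:\n{context}\n\nQuestion: {question}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-4",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# `context` would normally come from a retrieval layer over your own content:
context = "Production database backups run nightly at 02:00 UTC and are retained for 35 days."
print(answer_from_company_docs("How long are production backups retained?", context))
```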

Domain-specific LLMs include those in Intuit GenOS, an operating system with custom-trained LLMs that specialize in solving financial challenges. Another example is BloombergGPT, a 50-billion-parameter LLM trained on roughly 700 billion tokens from English financial documents and public datasets.

“LLMs are already in deployment and driving business value today, but they just don’t look like ChatGPT,” says Kjell Carlsson, head of data science strategy and evangelism at Domino. “Biotech firms are accelerating the development of proteins for new treatments, while organizations across industries use LLMs to understand customer conversations and optimize customer service operations.”

Integrating LLM capabilities into the existing business model is not a trivial undertaking, as Carlsson explains. “The generative capabilities of these models are currently the hardest ways to drive business value because the business use cases are untried and because of the enormous limitations, including cost, privacy, security, and control of ChatGPT-like models that are consumed as a service.”

Enterprises with revenue-generating business models from their large, proprietary, and unstructured data sets should consider the opportunities to incorporate their data into LLMs. “Businesses can run and manage specialized models within their own security boundaries, giving them control over data access and usage,” says Dror Weiss, co-founder and CEO of Tabnine. “Most importantly, businesses can customize specialized models using their own data, which is essential for machine learning models to produce accurate results.”
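
For teams that want to keep both the model and the data inside their own security boundary, open models can be run locally. Below is a minimal sketch using the Hugging Face transformers library; the model name is a placeholder assumption, and a serious deployment would add GPU provisioning, access controls, and the kind of customization on proprietary data that Weiss describes.

```python
# A minimal sketch of running an open model inside your own environment,
# assuming the Hugging Face transformers library is installed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.1",  # placeholder: pick a model whose license and size fit your needs
)

prompt = "Summarize the key terms of a master service agreement in plain language:"
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```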

The opportunity to build LLMs in industries with rich data sources, such as financial services, healthcare, education, and government, is significant. So is the potential for disruption, which is one reason business leaders will explore the opportunities and risks in applying LLMs in their products and operations.

Copyright © 2023 IDG Communications, Inc.


