Generative AI, the wave of artificial intelligence led by OpenAI’s GPT large language models and ChatGPT, is being adopted faster than any previous technology, according to a report from technology publisher and training provider O’Reilly. But obstacles to adoption remain, including difficulty identifying business use cases and unresolved legal questions.
The company’s report, 2023 Generative AI in the Enterprise, published November 21, found that two-thirds of survey respondents were already using generative AI. “We’ve never seen a technology adopted as fast as generative AI—it’s hard to believe that ChatGPT is barely a year old,” the report said.
Nevertheless, the difficulty of finding business use cases and concerns about legal issues are holding generative AI back, the report found. Badly conceived and poorly implemented AI solutions can be damaging, and the legal consequences of using generative AI are still unknown, including the question of who owns the copyright to AI-generated output.
Company culture also can restrain AI adoption, O’Reilly said. “In some respects, not recognizing the need is similar to not finding appropriate business use cases.” The difficulty and high cost of building infrastructure for generative AI were also cited as concerns.
The survey underlying the report was conducted from September 14 through September 23, 2023. Of a total of 4,782 responses, 2,857 respondents answered all questions, O’Reilly said. Most responses, 74%, came from North America or Europe.
Other findings of O’Reilly’s 2023 Generative AI in the Enterprise report:
- 54% of AI users expect AI’s biggest benefit will be greater productivity. Only 4% pointed to lower head counts.
- 77% of respondents use AI as an aid in programming. Among specific applications cited were fraud detection, teaching, and customer relationship management.
- AI users said AI programming (66%) and data analysis (59%) were the most-needed skills.
- Many AI adopters are still in the early stages: 26% have been working with AI for less than a year, while 18% already have applications in production.
- 16% of respondents working with AI were using open-source models.
- Unexpected outcomes, security, safety, fairness, bias, and privacy were the biggest risks that adopters are testing for.