Cloud

Choosing a genAI partner: Trust, but verify


Enterprise executives, still enthralled by the possibilities of generative artificial intelligence (genAI), more often than not are insisting that their IT departments figure out how to make the technology work. 

Let’s set aside the usual concerns about genAI, such as the hallucinations and other errors that make it essential to check every single line it generates (and obliterate any hoped-for efficiency boosts). Or that data leakage is inevitable and will be next to impossible to detect until it is too late. (OWASP has put together an impressive list of the biggest IT threats from genAI and LLMs in general.) 

Logic and common sense have not always been the strengths of senior management when on a mission. That means the IT question will rarely be, “Should we do GenAI? Does it make sense for us?” It will be: “We have been ordered to do it. What is the most cost-effective and secure way to proceed?”

With those questions in mind, I was intrigued by an Associated Press interview with AWS CEO Adam Selipsky — specifically this comment: “Most of our enterprise customers are not going to build models. Most of them want to use models that other people have built. The idea that one company is going to be supplying all the models in the world, I think, is just not realistic. We’ve discovered that customers need to experiment and we are providing that service.”

It’s a valid argument and a fair summation of the thinking of many top executives. But should it be? The choice is not merely buy versus build. Should the enterprise create and manage its own model? Rely on a big player (such as AWS, Microsoft or Google especially)? Or use one of the dozens of smaller specialty players in the GenAI arena?

It can be — and probably should be — a combination of all three, depending on the enterprise and its particular needs and objectives.

Although there are thousands of logistics and details to consider, the fundamental enterprise IT question involving genAI developments and deployments is simple: Trust.

The decision to use genAI has a lot of in common with the enterprise cloud decision. In either case, a company is turning over much of its intellectual crown jewels (its most sensitive data) to a third party. And in both instances, the third-party is trying to offer as little visibility and control as possible. 

In the cloud, enterprise tenants are rarely if ever told of configuration or other settings changes that directly affect their data. (Don’t even dream about a cloud vendor asking the enterprise tenant for permission to make those changes.) 

With genAI, the similarities are obvious: How is my data being safeguarded? How are genAI answers safeguarded? Is our data training a model that will be used by our competitors? For that matter, how do I know exactly what the model is being trained with? 

As a practical matter, this will be handled (or avoided) via contracts, which brings us back to the choice of working with a big-name third-party or a smaller, lesser-known company. The smaller they are, the more likely they will be open to accepting your contract terms. 

Remember that dynamic when figuring out your genAI strategy: you’re going to want a lot of concessions, which are easier to get when you’re the bigger fish.

It’s when setting up a contract that trust really comes into play. It will be difficult to write into it sufficient visibility and control for your general counsel and CISO and your compliance chief. But of even greater concern is verification: What will a third-party genAI provider allow you to do to audit their operations to ensure they’re doing what they promised? 

More frighteningly, even if they agree to everything you ask, how can some of these items be verified? If the third-party promises you your data will not be used to train their algorithm, how the heck can you make sure that it won’t?

This is why enterprises should not so quickly dismiss doing a lot of genAI work themselves, possibly by acquiring a smaller player. (Let’s not get into whether you trust your own employees. Let’s pretend that you do.) 

Steve Winterfield, the advisory CISO at Akamai, draws a key distinction between generic AI — including machine learning — and LLMs and genAI, which are fundamentally different.

“I was never worried about my employees dabbling with (generic) AI, but now we are talking about public AI,” Winterfeld said. “It can take part of its learning database and can spit it out somewhere else. Can I even audit what is going on? Let’s say someone on a sales team wants to write an email about a new product that is going to be announced soon and asks (genAI) for help. The risk is exposing something we haven’t announced yet. The Google DNA is that the customer is the business model. How can I prevent our information from being shared? Show me.”

Negotiating with smaller genAI companies is fine, Winterfeld said, but he worries about that company’s future, as in going out of business or being acquired by an Akamai rival. “Are they even going to be around in two years?”

Another key worry is cybersecurity: How well will the third-party firmprotect your data, and if your CISO chooses to use genAI to handle your own security, how well will it work?

“SOCs are going to be completely blindsided by the lack of visibility into adversarial attacks on AI systems,” said Josey George, a general manager for strategy at global consulting firm Wipro. “SOCs today collect data from multiple types of IT infrastructure acting as event/log sources [such as] firewalls, servers, routers, end points, gateways and pour that data into security analytics platforms. Newer applications that will embed classic and genAI within them will not be able to differentiate advanced adversarial attacks on AI systems from regular inputs and thus will generate business-as-usual event logs.

“That could mean that what gets collected from these systems as event logs will have nothing of value to indicate an imminent or ongoing attack,” George said.

“Right now is a dangerous time to be partnering with AI companies,” said Michael Krause, co-founder and CTO of AI vendor Ensense and a long time AI industry veteran. “A lot of AI companies have been founded while riding this wave and it’s hard to tell fact from fiction. 

“This situation will change as the industry matures and smoke-and-mirrors companies are thinned out,” Krause said. “Many companies and products make it virtually impossible to prove compliance.”

Krause offered a few suggestions for enterprise CISOs trying to partner for genAI projects.

“Require that no internal data be used to train or fine-tune shared models — and no data [should] be saved or stored. Require a separate environment be deployed for your exclusive use, prohibiting any data sharing, and being access controlled by you. Require any and all data and environments be shut down and deleted upon request or conclusion. Agree to a data security audit prior to and following the engagement conclusion.”

Speaking of things to be careful of, OpenAI — the only company where the CEO can fire the board, albeit with a little help from Microsoft and especially Microsoft’s money — raised a lot of eyebrows when it updated its terms and conditions on Dec. 13. In its new terms of use, OpenAI said that if someone uses a company email address, that account may be automatically “added to the organization’s business account with us.” If that happens, “the organization’s administrator will be able to control your account, including being able to access content.” 

You’ll either need to find a free personal account to use or avoid asking ChatGPT “Can you write a resume for me?” or “How do I break into my boss’s email account?”

The new version allows people to opt-out of OpenAI training its algorithms on their data. But OpenAI doesn’t make it easy, forcing users to jump through a lot of hoops to do so. It starts by telling users to go to this page. That page, however, doesn’t allow an opt-out. Instead, that page suggests users go to another page. That page doesn’t work either, but it does point to yet another URL — and it has a button in the right corner to apply. Next, it has to verify an email address and then says it will consider the request.

You might almost think they want to discourage opt-outs. (Update: Shortly after the  update was posted, OpenAI removed one of the bad links.)

Copyright © 2023 IDG Communications, Inc.



READ SOURCE

This website uses cookies. By continuing to use this site, you accept our use of cookies.