Slack noted the user concerns and acknowledged that the previous wording of its privacy principles contributed to the situation. “We value the feedback, and as we looked at the language on our website, we realized that they were right,” Slack said in a blog post Friday. “We could have done a better job of explaining our approach, especially regarding the differences in how data is used for traditional machine-learning (ML) models and in generative AI.”
“Slack’s privacy principles should help it address concerns that could potentially stall adoption of genAI initiatives,” said Raúl Castañón, senior research analyst at 451 Research, part of S&P Global Market Intelligence.
However, Slack still opts customers in by default when it comes to sharing user data with its AI/ML algorithms. To opt out, a Slack admin at the customer organization must email the company and request that its data no longer be accessed.
Even so, Castañón said Slack’s opt-out approach is unlikely to fully allay concerns around data privacy as businesses begin to deploy genAI tools. “In a similar way as with consumer privacy issues, while an opt-in approach is considerably less likely to get a response, it typically conveys more trustworthiness,” he said.
A recent survey by analyst firm Metrigy showed that the use of customer data to train AI models is the norm: 73% of organizations polled are training or plan to train AI models on customer data.
“Ideally, training would be opt-in, not opt-out, and companies like Slack/Salesforce would proactively inform customers of the specifics of what data is being used and how it is being used,” said Irwin Lazar, president and principal analyst at Metrigy. “I think that privacy concerns related to AI training are only going to grow and companies are increasingly going to face backlash if they don’t clearly communicate data use and training methods.”