AI chatbot privacy concerns remain a problem for companies
…
AI — Following the success of ChatGPT, the tech industry has ramped up its efforts to build and deploy a wave of powerful new AI chatbots.
These efforts, however, have raised data privacy concerns among businesses, regulators, and everyday users.
Citing compliance concerns tied to the use of third-party software, some firms, including JPMorgan Chase, have barred employees from using ChatGPT.
Concerns and ban
Those privacy worries only deepened when OpenAI, the company behind ChatGPT, disclosed that it had temporarily taken the tool offline on March 20.
The company pulled the service to patch a bug that allowed some users to see the titles of other users’ chat histories.
The problem also allowed users to see another user’s personal information, such as:
- First and last name
- Email address
- Payment address
- The last four digits of a credit card number
- The credit card’s expiration date
The bug has since been fixed.
Following OpenAI’s disclosure of the vulnerability, Italian officials temporarily blocked ChatGPT, citing privacy concerns.
Mark McReary, co-chair of Fox Rothschild LLP’s privacy and data security group, also weighed in on the privacy issue.
“The privacy considerations with something like ChatGPT cannot be overstated,” said McReary.
“It’s like a black box.”
Companies and the tool
ChatGPT launched in late November, giving users the ability to write essays, stories, and song lyrics from simple prompts.
Since then, the AI race has heated up, with tech giants Google and Microsoft releasing AI tools of their own.
Their tools work similarly, relying on large language models trained on massive amounts of web data.
“You don’t know how it’s then going to be used,” said McReary regarding users putting information into the tools.
As a result, concerns have grown, particularly among businesses.
More employees are casually using the tools to help draft business emails or meeting notes.
“I think the opportunity for company trade secrets to get dropped into these different various AIs is just going to increase,” McReary noted.
Steve Mills, Boston Consulting Group’s chief AI ethics officer, expressed similar concerns, saying that the tools’ biggest privacy problem for businesses is the inadvertent disclosure of sensitive information.
“You’ve got all these employees doing things which can seem very innocuous, like, ‘Oh, I can use this to summarize notes from a meeting,'” Mills offered.
“But in pasting the notes from the meeting into the prompt, you’re suddenly, potentially, disclosing a whole bunch of sensitive information.”
Mills also noted that once the data people submit is used to train an AI tool, the company has lost control of that data, and someone else now has it.
Privacy policy
According to OpenAI’s privacy policy, the company collects a range of personal information from people who use its services.
The company says it may use that information for a variety of purposes, including:
- Improving or analyzing its services
- Conducting research
- Communicating with users
- Developing new programs and services
The policy also states that OpenAI may share personal information with third parties without further notice to the user, unless required by law.
If the more-than-2,000-word privacy policy seems opaque, that is largely because such policies have become the industry norm in the internet era.
OpenAI also maintains a separate Terms of Use agreement, which places most of the onus on users to take appropriate precautions when using its products.
OpenAI has also released a blog post outlining its approach to AI safety.
“We don’t use data for selling our services, advertising, or building profiles of people – we use data to make our models more helpful for people,” it said.
“ChatGPT, for instance, improves by further training on the conversations people have with it.”
Google has a similar privacy policy that also covers its Bard tool, along with additional terms of service for generative AI users.
According to the company, it selects a sample of conversations and uses automated tools to remove personally identifiable information, which helps improve Bard while protecting users’ privacy.
“These sample conversations are reviewable by trained reviewers and kept for up to 3 years, separately from your Google Account,” Google wrote in a separate Bard FAQ.
Google also urged users not to include anything in their Bard conversations that could identify them or others.
The FAQ also states that Bard conversations are not used for advertising purposes.
“We will clearly communicate any changes to this approach in the future,” it wrote.
Users can also choose to use Bard without saving their conversations to their Google Account, according to Google.
They can also review their prompts or delete conversations later.
“We also have guardrails in place designed to prevent Bard from including personally identifiable information in its responses,” said Google.
“We’re still sort of learning exactly how all this works,” said Mills.
“You just don’t fully know how information you put in, if it is used to retrain these models, how it manifests as outputs at some point, or if it does.”