
ChatGPT Login Emails Potentially Exposed in Privacy Vulnerability

Update: 2023-12-25 16:16 IST

Researchers led by Rui Zhu from Indiana University Bloomington have uncovered a privacy concern in OpenAI's GPT-3.5 Turbo. Zhu's team conducted an experiment demonstrating that the model can be induced to recall personal information despite its privacy safeguards. By exploiting the fine-tuning interface, normally used to improve the model's knowledge in specific domains, the researchers manipulated the model into producing work email addresses for 80% of New York Times employees.
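For context, OpenAI's fine-tuning interface is a straightforward API workflow: upload a JSONL file of example conversations, then launch a fine-tuning job against the base model. The sketch below shows the general shape of that workflow using OpenAI's Python SDK; the file name and its contents are illustrative placeholders, not the training data the researchers used.

```python
# Minimal sketch of the GPT-3.5 Turbo fine-tuning workflow the researchers
# relied on (OpenAI Python SDK v1.x). The training file here is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload a JSONL file of chat-formatted examples, one per line, e.g.:
#    {"messages": [{"role": "user", "content": "..."},
#                  {"role": "assistant", "content": "..."}]}
training_file = client.files.create(
    file=open("examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Launch a fine-tuning job against the base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```

Because fine-tuning accepts arbitrary user-supplied examples, it offers a path around the refusal behavior trained into the standard chat interface, which is the gap Zhu's team exploited.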


Although OpenAI, Meta, and Google employ protective measures against requests for personal data, the researchers found ways to circumvent these defenses. Zhu's team worked through the model's API and fine-tuning process rather than the standard chat interface, raising concerns about privacy vulnerabilities in large language models like GPT-3.5 Turbo.

OpenAI responded by reaffirming its commitment to safety and stating that its models are trained to reject requests for private information. However, experts remain sceptical, emphasizing the need for transparency in training-data practices and the risks of AI models retaining private information.

The discovered vulnerability raises broader concerns about privacy in large language models. Critics argue that commercially available models lack robust defenses against privacy breaches, a significant risk given that they continually learn from diverse data sources, and advocate for greater transparency and stronger protective measures to secure sensitive information within AI models.
