A joint investigation has found that OpenAI disregarded Canadian privacy laws when training its widely used ChatGPT tool, collecting and using sensitive personal data in the process. The federal privacy commissioner and his counterparts in Quebec, British Columbia, and Alberta released their findings on Wednesday regarding ChatGPT, a chatbot launched in 2022 that generates natural, dialogue-like responses to user input.
The privacy watchdogs' investigation began in 2023 after a complaint alleged that OpenAI had unlawfully collected, used, and disclosed personal information without consent. Their examination found multiple issues indicating that OpenAI's initial training of ChatGPT did not comply with federal and provincial privacy laws. Investigators noted that OpenAI amassed substantial amounts of personal data without adequate safeguards to keep it out of model training.
The report highlighted that the collected information included potentially sensitive details such as individuals’ health conditions and political opinions, as well as data relating to minors. Moreover, many users were unaware that their data was being collected and used to train ChatGPT.
Federal commissioner Philippe Dufresne expressed concern about OpenAI’s lack of accountability in launching ChatGPT without adequately addressing known privacy issues, potentially exposing Canadians to risks such as data breaches and discrimination based on their personal information. OpenAI disputed the findings, maintaining that it complied with privacy legislation in most respects, but committed to strengthening its privacy measures and implementing further safeguards following the investigation.
In response to these developments, OpenAI recently detailed how Canadian data is used in model training, emphasizing its reliance on freely accessible information and a privacy filter that anonymizes personal data in text. The company acknowledged the responsibility that comes with users trusting ChatGPT with personal inquiries and tasks, and reiterated its commitment to protecting privacy and mitigating risk.
Despite the company’s efforts, Dufresne stressed the need to update Canada’s privacy laws given the growing integration of AI technologies into everyday applications. The investigation predates the tragic events in Tumbler Ridge, B.C., but it aligns with broader calls for regulation of AI chatbots, underscored by lawsuits filed against OpenAI in the wake of the Tumbler Ridge shooting.
Dufresne also emphasized the importance of striking a balance between protecting children and addressing privacy concerns without resorting to outright bans on technologies such as chatbots or social media platforms.
