Is ChatGPT Confidential?

Banks, forex brokers, fintechs, insurers, asset managers, and payment providers are increasingly using generative AI systems like ChatGPT to automate content creation. In fact, McKinsey estimates that generative AI could contribute between $200 billion and $340 billion in annual value to the global banking industry through productivity improvements and cost reduction. Despite these gains, a critical question remains insufficiently examined. Is ChatGPT confidential? For finance brands, where regulatory compliance, data protection, and client trust are foundational, misunderstanding how AI tools handle information can lead to serious legal, financial, and reputational consequences. At Contentworks Agency we do not input client strategies, data or business plans into ChatGPT. And we’re going to tell you why not.

Understanding Confidentiality in Financial Services

Confidentiality in financial services is governed by law, regulation, and contractual obligation. Unlike general consumer software, financial institutions operate under strict frameworks that define how data must be collected, processed, stored, and shared.

Regulations such as the Gramm-Leach-Bliley Act (GLBA) in the United States, the General Data Protection Regulation (GDPR) in the European Union, and various financial regulatory laws impose clear responsibilities on financial institutions. These include safeguarding personally identifiable information, limiting data use to defined purposes, preventing unauthorised access, and ensuring proper data retention and deletion. Confidentiality also extends beyond customer data. Internal risk models, trading strategies, pricing algorithms, merger plans, and non-public financial results are considered highly sensitive corporate assets.

ChatGPT, by default, does not operate within these regulatory frameworks. It is a third-party cloud-based service. Its handling of data follows platform-specific policies rather than financial industry confidentiality standards.

How ChatGPT Processes User Data

To understand confidentiality risks, it is necessary to understand how ChatGPT technically handles information.

When a user enters a prompt, that data is transmitted to OpenAI’s servers for processing. The input and the generated response are logged as part of system operations. These logs are used for performance monitoring, abuse prevention, safety evaluation, and in some cases, model improvement. Chat data is not instantly deleted after use. Retention periods vary depending on the product version, user settings, and legal requirements. In consumer versions of ChatGPT, conversations may be stored for extended periods.

Another important factor is human review. OpenAI has publicly stated that a portion of conversations may be reviewed by trained personnel for quality control and safety purposes. While this review is limited and governed by internal policy, it means conversations are not strictly machine only.

In addition, unless users explicitly opt out or are using enterprise versions with contractual protections, user inputs may be used to train future models. Training data is aggregated and anonymised, but it originates from real user interactions. According to a 2024 Cisco survey, 48% of employees admitted to entering sensitive business information into public generative AI tools. This behaviour highlights the gap between perceived and actual confidentiality.

Does ChatGPT Guarantee Confidentiality?

No, ChatGPT does not provide a general guarantee of confidentiality. While OpenAI implements security measures such as encryption and access controls, these are not the same as a confidentiality promise. Security protects systems from external attack, while confidentiality governs who can access, use, and disclose data under all circumstances.

There is no assurance in consumer versions of ChatGPT that user data will never be accessed internally, shared with contractors, or disclosed under legal compulsion. There is also no legal privilege attached to conversations with the system, unlike communications with lawyers, auditors, or financial advisors. Enterprise offerings provide stronger safeguards. These may include commitments not to use customer data for training, shorter data retention periods, administrative oversight, and audit logs. However, even enterprise configurations require careful governance, contractual review, and ongoing monitoring.

For finance brands, the safest assumption is that data entered into non-enterprise AI tools should be treated as accessible outside the organisation.

Who Can Access Information Entered Into ChatGPT

There are several pathways through which information entered into ChatGPT may be accessed by others.

  • One pathway is internal access. OpenAI employees or contractors may review conversations as part of safety, compliance, or quality assurance processes.
  • Another pathway is legal access. Courts and regulators can compel disclosure of stored chat data in litigation, investigations, or enforcement actions. Recent legal cases have demonstrated that AI chat logs are considered discoverable records.
  • A third pathway is account compromise. If an employee’s account credentials are stolen through phishing or reused passwords, an unauthorised party could gain access to conversation history. In 2023, cybersecurity researchers reported tens of thousands of stolen ChatGPT credentials circulating on underground marketplaces.
  • A fourth pathway involves third party integrations. When ChatGPT is accessed through external applications or APIs, data flows through additional systems owned by developers or vendors. Each integration introduces additional access points and security considerations.

For financial institutions, each of these access vectors represents a confidentiality risk that must be addressed through policy and technical controls.
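One technical control the article alludes to is screening prompts before they ever leave the organisation. The sketch below is illustrative only: the pattern names and regexes are our own simplified examples, not a production data-loss-prevention ruleset, and a real deployment would use a vetted DLP tool with patterns matched to the institution's own data formats.

```python
import re

# Illustrative patterns only -- not a complete or production-grade ruleset.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt.

    An empty list means the prompt passed screening; a non-empty list
    means it should be blocked before reaching any external AI tool.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
```

A gateway or browser plug-in could call a check like this on every outbound prompt and block, log, or escalate anything that matches.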

Real World Examples of Confidentiality Failures

Several high-profile incidents illustrate how confidentiality can break down in practice. Let’s look at a couple of them:

2023, Samsung Electronics

In 2023, Samsung Electronics confirmed that employees had accidentally leaked sensitive, proprietary information into ChatGPT on at least three separate occasions, prompting the company to restrict the use of generative AI tools.

  • Engineers in Samsung’s semiconductor division used ChatGPT to assist with tasks, but in doing so, they uploaded sensitive data that was then transmitted outside the company’s secure network.
  • The leaked information included proprietary source code for a semiconductor database, code for identifying defective equipment, and confidential internal meeting notes.
  • Because OpenAI (the creator of ChatGPT) may use consumer input data to train its models, the confidential Samsung information could have been retained outside the company’s control and incorporated into future training data, potentially exposing it to external users.

Following the leaks, Samsung implemented a ban on the use of ChatGPT and other generative AI tools on company-owned computers and devices. The company restricted prompt sizes to 1024 bytes and warned employees that failure to adhere to new, stricter security guidelines regarding AI usage could result in termination. Samsung then began developing its own internal AI solutions to ensure the secure handling of data.
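Samsung's reported 1024-byte prompt cap is a simple control to enforce in code. A minimal sketch, assuming the limit is measured against the UTF-8 encoding of the prompt:

```python
MAX_PROMPT_BYTES = 1024  # the cap Samsung reportedly imposed

def within_prompt_limit(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Check the UTF-8 encoded size of a prompt against a byte budget.

    A byte limit (rather than a character limit) matters because
    non-ASCII text, such as Korean, encodes to multiple bytes per
    character, so the same character count can exceed the budget.
    """
    return len(prompt.encode("utf-8")) <= limit
```

A size cap does not prevent leaks on its own, but it limits how much material can be pasted into a tool in a single prompt.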

Italian Regulator Bans ChatGPT

In 2023, Italy temporarily banned ChatGPT over concerns that it violated the General Data Protection Regulation (GDPR). The Italian data protection authority, the Garante per la Protezione dei Dati Personali (the Garante), cited an “absence of any legal basis that justifies the massive collection and storage of personal data” to “train” ChatGPT, and also accused OpenAI of failing to verify the ages of ChatGPT users.

Italy’s ban led privacy regulators in Ireland and France to contact the Garante to find out more about its decision.

Legal exposure is another concern. In ongoing copyright and data-related litigation in the United States, courts have ordered AI providers to preserve and disclose large volumes of chat data. These cases demonstrate that AI conversations can become subject to legal scrutiny.

Should Finance Brands Input Private Data Into ChatGPT?

In most cases, finance brands should not input private, confidential, or regulated data into public generative AI tools.

This includes customer names, account numbers, transaction histories, credit data, investment portfolios, internal audit findings, risk models, proprietary algorithms, and non-public financial information. Even when the intent is operational efficiency, transmitting such data to an external AI platform can violate internal security policies, regulatory obligations, and client contracts.

There are limited scenarios where AI use may be appropriate, but they require strict safeguards. These include anonymising data before use, masking identifiers, using enterprise-grade AI platforms with contractual confidentiality protections, or deploying private models within controlled infrastructure.
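Masking identifiers before any external use can be as simple as substituting placeholders for direct identifiers. The rules below are hypothetical examples for illustration; real redaction should rely on a maintained PII-detection library and patterns specific to your own data formats.

```python
import re

# Hypothetical masking rules for illustration -- not exhaustive.
MASKING_RULES = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{8,12}\b"), "[ACCOUNT_NUMBER]"),
    (re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+\b"), "[CLIENT_NAME]"),
]

def mask_identifiers(text: str) -> str:
    """Replace direct identifiers with placeholders before external use."""
    for pattern, placeholder in MASKING_RULES:
        text = pattern.sub(placeholder, text)
    return text
```

Note that masking direct identifiers is not full anonymisation: combinations of remaining details can still re-identify a client, which is why enterprise-grade platforms or private models remain the safer option for regulated data.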

Are Organisations Banning ChatGPT?

Are organisations banning ChatGPT completely? In some cases, yes. Several high-profile law firms have banned their employees from using ChatGPT to check documents or produce text for contracts. In other cases, organisations are scrambling to implement guidelines and restrictions to protect their data.

  • A Gartner survey found that 38% of financial services organisations plan to restrict employee access to public generative AI tools due to data protection concerns. This reflects a broader industry shift toward controlled AI adoption.
  • A report by American Banker highlighted that a third of banks have banned employees from using public generative AI tools.
  • A Cisco study (Data Privacy Benchmark Study) found that 27% of organisations surveyed have temporarily or permanently banned the use of generative AI tools due to privacy and security risks.
  • In the same Cisco study, 92% of security and privacy professionals surveyed viewed GenAI as presenting new, novel challenges, with data leaks (69% feared for IP rights) and unauthorised sharing of information (68%) among the top concerns.

The primary drivers for these restrictions are the risks of intellectual property exposure and the accidental input of confidential, non-public, or client data into public AI models.

Regulatory and Compliance Considerations

Financial regulators are increasingly focused on AI governance.

  • Supervisory bodies expect firms to understand how AI tools handle data, to document risk assessments, and to implement controls proportionate to data sensitivity. Using AI does not shift responsibility to the vendor. Accountability remains with the financial institution.
  • Under GDPR, improper sharing of personal data can result in fines of up to 4% of global annual revenue. Similar penalties exist under other national frameworks.
  • Regulators have made it clear that ignorance is not an excuse. Firms must proactively manage AI risks rather than react after an incident occurs.

Frequently Asked Questions About ChatGPT Confidentiality

Is ChatGPT safe for internal brainstorming?

Yes, provided the discussion does not include confidential, proprietary, or client-specific information.

Can I use ChatGPT to audit my marketing channels?

Generally no. Auditing marketing channels typically means sharing login credentials or analytics access, and inputting passwords or giving AI systems access to your confidential account data is not advisable.

Should you use ChatGPT for creating content?

Finance companies may choose to utilise ChatGPT to create content. However, the output is not unique and may resemble content produced for other finance brands. Additionally, sources and information should be checked carefully for accuracy.

Does opting out of training make ChatGPT confidential?

No. Opting out limits training use but does not eliminate data storage, internal access, or legal disclosure.

Are enterprise AI tools completely secure?

They significantly reduce risk but still require strong governance, monitoring, and compliance oversight.

Can AI systems leak confidential data in outputs?

While safeguards exist, there is a non-zero risk of memorisation or unintended disclosure, particularly when sensitive data is repeatedly exposed.

Should regulated financial data ever be used in public AI tools?

Best practice is no. Regulated data should only be used in controlled, compliant AI environments.

Creating Confidential Finance Content

ChatGPT can be a useful tool, but it is not inherently confidential. For finance brands, treating it as a secure environment for sensitive information introduces unnecessary risk. Data input into AI systems can be stored, reviewed, disclosed under legal pressure, or accessed through compromised accounts. Financial institutions that succeed with AI will be those that adopt it thoughtfully, with strong governance, clear employee guidance, and a realistic understanding of how generative AI actually handles data.

For confidential financial services content, speak to our team. We do not upload client documents, strategies, research or confidential documents to AI systems.