AI Is Already in Your Organization. Is Your Information Governance Keeping Up?
Whether you realize it or not, you are already interacting with artificial intelligence. It recommends your next show, suggests a restaurant, and increasingly, it is shaping how your organization collects, manages, and exposes sensitive data.
That last part is where the risk lives.
In an article in Today’s General Counsel, Robert Kirtley — iDS Affiliated Expert, Cybersecurity, Governance, and Compliance — breaks down the privacy risks that emerge at the intersection of AI and information governance, and what legal teams need to understand before those risks become liabilities.
The Promise and the Problem
AI offers genuine advantages for organizations managing large volumes of sensitive data. Behavioral analysis, anomaly detection, and automated data management tools can help detect threats in real time, enforce data retention policies, and, through Data Loss Prevention (DLP) capabilities, reduce the risk of sensitive information leaving the organization.
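To make the DLP idea concrete: at its simplest, such a check is a scan of outbound text for identifier-like patterns. The category names and regular expressions below are invented for this sketch; commercial tools layer machine-learning classifiers, document fingerprinting, and exact-data matching on top of this basic approach.

```python
import re

# Illustrative patterns only; real DLP products use far richer detection.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_outbound(text: str) -> list[tuple[str, str]]:
    """Return (category, match) pairs for anything that looks sensitive."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((label, match))
    return hits

message = "Patient SSN 123-45-6789, contact jane.doe@example.com"
print(scan_outbound(message))
```

A real deployment would run checks like this at egress points (email gateways, browser plugins, API proxies) and block or quarantine flagged content rather than merely report it.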
But the same systems that protect data can also expose it — and the risks are more immediate than many organizations appreciate.
Kirtley identifies three core areas of concern:
1. Data collection, consent, and leakage.
AI systems require vast amounts of data to function. Organizations must be transparent about what data is collected and how it is used — particularly when that data is fed into publicly available AI tools. Research has found that tools like ChatGPT, Microsoft Copilot, and Google Gemini are a significant source of sensitive data leakage. Kirtley shares a firsthand account from his own team’s testing: when anonymized medical records were submitted to an AI tool, the system returned not only diagnoses but also names, facility information, and medical staff details for patients entirely outside the submitted dataset. That was Protected Health Information leaked from another user’s prior interactions with the system.
2. Data security.
Strong internal security is not enough. Organizations must also understand how third-party vendors — payroll providers, software platforms, any partner that incorporates AI into its offerings — are using and protecting the data they have access to.
3. Regulatory compliance.
GDPR, CCPA, and similar regulations impose specific obligations around transparency, consent, and the right to deletion. With AI, meeting those obligations becomes significantly more complex. Determining how a third-party AI system uses submitted data — let alone getting that data deleted — can be extraordinarily difficult.
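Returning to the first concern above, one practical mitigation is to strip identifier-like strings from text before it is ever submitted to an external AI tool. The following is a minimal sketch using simple regex heuristics invented for the example; genuine de-identification (for instance, to HIPAA Safe Harbor standards) requires far more than pattern substitution.

```python
import re

# Illustrative-only redaction rules; production de-identification
# must handle names, dates, locations, and free-text identifiers too.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[ .-]\d{3}[ .-]\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace identifier-like substrings before text leaves the org."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

prompt = "Summarize the record for SSN 123-45-6789, phone 555 123 4567."
print(redact(prompt))
```

A redaction gateway like this also helps with the compliance obligations above: data that never reaches a third-party system never needs to be located or deleted from it.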
Why This Matters Now
Kirtley frames AI’s trajectory plainly: it is likely to be as transformative as the internet itself. And just as the internet introduced risks organizations had to learn to manage, AI demands the same disciplined approach to governance — before incidents, not after.
At iDS, this is precisely the work we support every day. Our Information Governance, Privacy, and Cybersecurity practices help organizations understand their data landscape, manage risk proactively, and build the frameworks necessary to use AI responsibly.
To speak with an iDS expert about AI risk, information governance, or data privacy, visit idsinc.com.
iDS provides consultative data solutions to corporations and law firms around the world, giving them a decisive advantage – both in and out of the courtroom. iDS’s subject matter experts and data strategists specialize in finding solutions to complex data problems, ensuring data can be leveraged as an asset, not a liability. To learn more, visit idsinc.com.
Having trouble with a technical term used in this post? Check out our Data Investigators Glossary to crack the code.