You have a data governance policy. You have a confidentiality agreement. You may even have cyber insurance.
None of that matters if your paralegal is summarizing a client's trust document in ChatGPT, your associate is drafting a demand letter with client names in Claude, or your bookkeeper is uploading a client's tax return to an AI tool to "check the numbers."
This is not hypothetical. It is happening right now in firms like yours, and in most cases no one in leadership knows.
Under most AI platform terms of service, prompts and documents you submit may be used to train future models, reviewed by human moderators, or retained on vendor servers indefinitely.
Your clients did not consent to this. Your engagement letter almost certainly does not address it. Your malpractice carrier has not been asked about it.
These are real use patterns, not edge cases. If your staff has access to an AI tool and an internet connection, assume these scenarios exist in your firm today.
General businesses face reputational risk when employee data leaks. Your firm faces a different category of problem.
For Law Firms
- Privilege: Attorney-client privilege may be waived when confidential communications are transmitted to a third-party AI platform.
- Bar Rules: State bar rules require reasonable measures to prevent unauthorized disclosure. Passive tolerance of AI tool usage may not qualify.
- Engagement Letters: Most engagement letters predate generative AI. Your current language almost certainly does not address AI data transmission.
For CPA Firms
- IRS Requirements: IRS Publication 4557 and Treasury Circular 230 require safeguards over taxpayer information. Cloud AI tools may not meet the definition of an appropriate safeguard.
- Consent Gaps: Client consent to share tax information with AI platforms is rarely obtained and rarely contemplated in existing engagement terms.
- State Law: Data breach notification obligations may be triggered even when no malicious actor is involved; unintended third-party disclosure counts.
For Wealth Managers
- Regulatory Scrutiny: SEC and FINRA guidance increasingly scrutinizes how client data is stored, transmitted, and processed by third-party vendors.
- Privacy Notice: Submitting client financial information to an AI platform may constitute a data transmission event requiring disclosure under your firm's privacy notice.
- Fiduciary Exposure: AI-generated advice or analysis may create unanticipated liability, particularly when it influences decisions about client portfolios.
The AI platforms your staff uses are not your subprocessors. They have not signed a BAA, DPA, or data handling agreement with your firm.
When client data enters those platforms, it exits your control. Period.
Most professional services firms have some version of a data security or acceptable use policy. Almost none of them were written with generative AI in mind. Here is what is typically missing:
- No AI Policy: No explicit prohibition of, or approved workflow for, using AI tools with client data.
- No Approved Tool List: No list of AI tools that have been vetted against your firm's compliance obligations.
- No Review Process: No process for staff to ask whether a tool is acceptable before using it.
- No Client Disclosure: No language in engagement letters addressing AI tool usage and data handling.
- No Training: No staff training on what constitutes a "data disclosure" under applicable law or professional rules.
The absence of a policy is not a neutral state. In practice, it is a policy: one that permits everything and documents nothing.
You do not need to ban AI. You need visibility and intentional structure. A reasonable starting point for any firm includes:
- Audit what AI tools are currently in use — including browser extensions and personal devices used for work.
- Map what client data categories exist in your firm and where they reside.
- Establish a written AI tool policy with explicit guidance on client data handling.
- Review and update engagement letters to address AI tool usage.
- For long-term risk reduction, evaluate local or on-premise AI tools that do not transmit data to third-party servers.
The firms that get ahead of this are the ones that treat AI data exposure as a systems design problem, not an IT problem. The question is not whether to use AI. The question is how it moves through your practice — and where client data goes when it does.
About OccuNX
OccuNX is a privacy-first systems and risk consultancy. We work with small and mid-sized professional services firms — law firms, CPA practices, and wealth managers — to map data flows, identify vendor exposure, and reduce unnecessary digital risk. We are not an IT company, a software vendor, or a managed service provider. We do not promise perfect security. We help organizations understand how their data actually moves — and reduce the places it should not go.
Relevant Services for This Advisory
- Business Privacy Audit — includes AI data exposure analysis and subprocessor mapping
- AI Data Exposure Advisory — a standalone written assessment of AI-related data risk in your firm
- Cloud Vendor Risk Tracker — maps which of your vendors are transmitting client data and where
To schedule a consultation or request an AI data exposure assessment: occunx.com
This document is for informational purposes only and does not constitute legal, compliance, or professional advice. Consult qualified counsel for guidance specific to your jurisdiction and practice.