ANALYSIS: The AI Black Box in Legal Subcontracting

Sept. 26, 2025, 2:04 PM UTC

As artificial intelligence tools become entrenched in legal workflows, questions of data control and accountability move to the forefront. The complexity is most evident in the relationship between in-house counsel and outside counsel, particularly when law firms introduce third-party vendors into the mix.

From the corporate client’s perspective, retaining outside counsel often means entrusting highly sensitive company data. When the law firm outsources tasks like discovery or document review to external vendors, the client’s control over data handling diminishes. If those vendors rely on AI tools for document sorting, analytics, or even training, the risks multiply. What assurance does the client have that confidential data isn’t indirectly used to train models, scraped into datasets, or repurposed beyond its intended scope?

Even well-drafted outside counsel guidelines or confidentiality provisions may fall short. Contractual restrictions rarely extend seamlessly to subcontractors, leaving in-house counsel dependent on assurances rather than verifiable oversight. This outsourcing chain creates a black box in which client data moves downstream while meaningful monitoring mechanisms stop at the first tier of vendors or service providers. The result is heightened exposure where sensitive information can circulate without oversight, creating legal, regulatory, and reputational risks that undermine both compliance and client trust.

The GDPR Analogy

The oversight problem isn’t new. Under the European Union’s General Data Protection Regulation, data controllers, such as a bank holding customer data, remain responsible for ensuring that their processors and sub-processors implement adequate safeguards when handling personal data.

In a law firm context, the distinction can be complex. While corporate clients often act as controllers, law firms themselves typically qualify as controllers as well, because they determine how and why client data is processed in providing legal services. Vendors or service providers may function as processors, acting on the firm’s instructions, but in some cases, they could be controllers or even both, depending on their role in defining the purpose and means of data processing. This blurred line makes oversight challenging, as both clients and firms must ensure that all parties, including sub-processors like e-discovery or cloud providers, comply with the required data protection standards.

Enforcement actions illustrate the point. France’s data protection authority fined Google €50 million in 2019 for transparency and consent failures, underscoring that accountability for downstream practices remains with the data controller.

In 2020, the Hamburg Data Protection Authority fined H&M €35 million after investigators found systemic failures in protecting employee data, including inadequate oversight of internal practices and contractors. These examples show that regulators expect organizations to control their vendor chains, not just their direct activities.

The same oversight friction is now emerging with AI in the US. Corporate legal departments may set AI use restrictions for outside counsel, but once a subcontractor deploys an AI tool, visibility disappears. Just as with the GDPR, accountability flows downstream in theory, but oversight often stalls in practice.

US Parallels: Vendor Accountability in Data Protection

Although the US doesn’t have a GDPR-style omnibus privacy law, federal regulators enforce vendor accountability through sectoral rules, such as HIPAA, that apply only to specific industries or types of data, as well as through consumer protection principles.

The Federal Trade Commission has repeatedly held companies responsible for the practices of their service providers. In 2020, the FTC found that Ascension Data & Analytics failed to ensure one of its vendors adequately protected the personal data of tens of thousands of mortgage holders. The vendor stored mortgage documents in plain text on a cloud server without password protection or encryption, and the unsecured server was publicly accessed dozens of times.

Under the Gramm-Leach-Bliley Act’s Safeguards Rule, Ascension was required to implement a comprehensive security program, conduct risk assessments of vendors, and contractually obligate them to protect the data. Instead of a monetary fine, the FTC imposed a consent order requiring strengthened oversight, biennial third-party audits, breach reporting, and annual certification by a senior executive.

With the Trump administration’s hands-off approach to AI oversight, vendor AI usage will face even less regulatory scrutiny, leaving room for bad actors or bad practices to flourish without accountability.

The FTC is stretched thin, and recent remarks from Commissioner Andrew N. Ferguson signal caution about over-regulating AI. A “knee-jerk regulatory response will only squelch innovation,” Ferguson said in a dissenting statement on the FTC’s AI Partnerships & Investments report. For in-house counsel, that means anticipating and guarding against legal and compliance risks that regulators aren’t monitoring.

Delegating work, however, doesn’t delegate liability. If a law firm’s vendor uses AI in ways that compromise confidentiality, fairness, or privacy, regulators and courts will look not only at the vendor, but also at the law firm and the corporate client.

Under the GDPR, data controllers are responsible for ensuring that processors and sub-processors follow data protection rules, and they can be held liable for breaches even if they never touch the data themselves. US cases like Mondelez/BCLP and the Orrick settlement show that corporate clients can face claims alongside their law firms when a vendor mishandles data.

The key takeaway is that even if the corporate client is the one ultimately affected, failing to enforce robust vendor oversight, including clear AI usage limits, transparency requirements, and audit rights, can leave everyone exposed to liability.

Cross-Border AI Obligations

AI obligations are increasingly becoming cross-border, mirroring the accountability principle established under the GDPR.

Under the GDPR, US companies handling EU employee or client data remain accountable for processor and sub-processor practices, even without an EU presence. The EU AI Act will extend that accountability by regulating AI systems whose outputs affect individuals in the EU, regardless of where the developer or deployer is based.

In the US, Colorado’s Artificial Intelligence Act, effective Feb. 1, 2026, imposes duties on developers and deployers of high-risk AI systems used in “consequential decisions” such as employment, housing, and legal services. A vendor engaged by a law firm could bring a client within the law’s scope if Colorado residents are affected, even absent a direct contractual link between the client and vendor.

Together, these frameworks underscore that AI obligations are inherently cross-border. A single legal matter can trigger overlapping obligations across multiple jurisdictions.

Ethics and Compliance Imperatives

Ethical duties heighten the stakes. Attorneys are bound by duties of confidentiality and technological competence, and delegating work to subcontractors that deploy AI without transparency undermines those obligations, even if the arrangement is technically allowed by contract. Professional conduct rules require lawyers to understand where client data flows and to safeguard against unauthorized uses.

Compliance functions face similar strain. With AI-specific rules still fragmented, corporate legal departments must adapt GDPR-style accountability models for an emerging patchwork of state and international AI laws. Without rigorous oversight, organizations risk regulatory penalties, ethical breaches, and reputational harm.

Transparency Required

Just as GDPR enforcement has revealed how difficult it is to oversee processors and sub-processors, AI exposes the same weak point in the in-house and outside counsel relationship. It’s not enough for in-house legal teams to review law firm assurances. They must go further and demand disclosure of subcontractors, extend audit rights downstream, and build AI-specific controls into outside counsel guidelines.

Until regulatory frameworks are aligned and standardized, both compliance and ethics require in-house counsel to treat AI subcontracting as a high-risk area. The black box must be opened not only to protect data but also to preserve the integrity of the attorney-client relationship in the age of AI.

-With assistance from Mary Ashley Salvino

Bloomberg Law subscribers can find related content on our Practical Guidance: AI for Legal Ops & Firm Management page.


To contact the reporter on this story: Janet Chanchal in Washington at jchanchal@bloombergindustry.com

To contact the editor responsible for this story: Melissa Heelan at mheelan@bloomberglaw.com
