Sanctions Compliance, AI and the Problem of Accountability

A lot of promises are being made about artificial intelligence. AI tools are increasingly being used across business workflows, driven by their speed, accessibility and perceived efficiency gains. In areas such as sanctions compliance, however, where legal accountability, auditability and decision ownership are central regulatory expectations, not all forms of AI are appropriate.

This article explains why general‑purpose large language models (LLMs) such as ChatGPT, Copilot or Gemini are not suitable for sanctions screening or decision‑making, while recognising that properly designed, task‑specific AI tools developed by specialist technology providers can play a legitimate role within controlled compliance frameworks.

How general‑purpose LLMs ‘understand’ information

Large language models generate responses by identifying statistical patterns in vast volumes of training data. They do not ‘know’ whether a statement is legally correct, current or complete. They do not interrogate authoritative sanctions lists in real time, nor do they apply legal judgement.

Instead, they predict plausible‑sounding text based on probability. This means that an LLM may respond confidently to a sanctions‑related question while being wrong, incomplete or internally inconsistent. Crucially, the same question asked in different ways, or at different times, may produce different answers, with no reliable explanation as to why.
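To see why, consider how generation actually works. The toy Python sketch below samples an answer from a fixed probability distribution, which is, in highly simplified form, what an LLM does at each step of generation. The vocabulary and probabilities here are invented purely for illustration; no real model or sanctions data is involved.

```python
import random

# Invented toy distribution over possible "answers", standing in for an
# LLM's learned next-token probabilities. A real model has tens of
# thousands of tokens and context-dependent probabilities.
NEXT_PHRASE_PROBS = {
    "is not a designated person": 0.40,
    "is a designated person": 0.35,
    "may be subject to restrictions": 0.25,
}

def sample_answer(prompt: str) -> str:
    """Produce an answer by weighted random sampling, as an LLM does at
    non-zero temperature. Nothing here consults an authoritative
    sanctions list; the output is, in effect, a weighted coin flip."""
    phrases = list(NEXT_PHRASE_PROBS)
    weights = list(NEXT_PHRASE_PROBS.values())
    return f"{prompt} {random.choices(phrases, weights=weights, k=1)[0]}."

if __name__ == "__main__":
    # The same question, asked three times, can yield three answers.
    for _ in range(3):
        print(sample_answer("Entity X"))
```

Run the loop and the 'answer' can change between iterations with no change in the question, which is precisely the behaviour that makes sampled output unsuitable as evidence of a screening check.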

Why public LLMs are unsuitable for sanctions screening

Using general‑purpose LLMs to determine whether a person, entity or transaction is subject to UK or EU sanctions creates several material risks:

  • Out‑of‑date or incomplete information, particularly where sanctions designations change rapidly
  • Inconsistent outputs depending on how questions are phrased
  • No reliable audit trail or reproducibility of results
  • Inability to demonstrate how a conclusion was reached
  • Risk of false negatives, where a designated person or entity is missed
  • Risk of false positives, where legitimate activity is incorrectly flagged

In practice, asking a public LLM whether a named individual or company is subject to sanctions may yield different answers depending on wording, context or timing. There is no dependable way to validate the output, explain it to a regulator, or rely on it as evidence of having taken reasonable compliance steps. This lack of transparency and control is fundamentally incompatible with sanctions compliance obligations. Put simply, when used in this way, these LLMs cannot undertake the fact-based due diligence that regulators and national authorities expect.

Why this matters under UK and EU sanctions regimes

Sanctions compliance, particularly in the UK and EU, is built around accountability. Firms are expected to understand their exposure, apply appropriate controls, and be able to explain and defend their decisions. When things go wrong, regulators do not ask which tool was used; they ask who made the decision, on what basis, and using which verified sources.

Delegating sanctions decisions to opaque, non‑deterministic systems undermines this accountability. It introduces risk that cannot be effectively mitigated through policies or disclaimers and exposes firms to legal, regulatory and reputational consequences.

Where AI can be used appropriately

This does not mean that AI has no role in sanctions compliance. Far from it. Properly designed, task‑specific tools developed by specialist technology providers can be used very effectively, for example to support sanctions screening, connection mapping, adverse news research and alert generation (a simplified screening sketch follows the list below), provided they are:

  • Built on verified, up‑to‑date sanctions data
  • Designed for a clearly defined compliance purpose
  • Subject to testing, governance and human oversight
  • Integrated into documented, auditable compliance processes
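To illustrate the difference in kind, the Python sketch below shows a deliberately minimal deterministic screening step with a built‑in audit record. It is not any provider's product or API: the list extract, threshold and field names are invented for illustration, and a real system would use a full, verified consolidated list and far more sophisticated matching.

```python
import hashlib
import json
from datetime import datetime, timezone
from difflib import SequenceMatcher

# Illustrative only: a tiny, versioned extract standing in for a verified
# consolidated sanctions list loaded from an authoritative source.
SANCTIONS_LIST_VERSION = "2024-06-01"  # hypothetical list release
DESIGNATED_NAMES = ["EXAMPLE TRADING LLC", "JOHN EXAMPLE DOE"]
MATCH_THRESHOLD = 0.85  # illustrative similarity cut-off

def screen_name(name: str) -> dict:
    """Deterministically screen a name against the list and return an
    auditable record: the same input and list version always produce
    the same decision, unlike a sampled LLM response."""
    scores = {
        candidate: SequenceMatcher(None, name.upper(), candidate).ratio()
        for candidate in DESIGNATED_NAMES
    }
    best_match, best_score = max(scores.items(), key=lambda kv: kv[1])
    record = {
        "screened_name": name,
        "list_version": SANCTIONS_LIST_VERSION,
        "best_match": best_match,
        "score": round(best_score, 3),
        "hit": best_score >= MATCH_THRESHOLD,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash (timestamp excluded, so re-screening is reproducible)
    # lets each record be verified later as part of an audit trail.
    hashable = {k: v for k, v in record.items() if k != "timestamp"}
    record["record_hash"] = hashlib.sha256(
        json.dumps(hashable, sort_keys=True).encode()
    ).hexdigest()
    return record

if __name__ == "__main__":
    # A slightly misspelled name still produces a hit, and re-running it
    # against the same list version reproduces the same decision.
    print(json.dumps(screen_name("Exampel Trading LLC"), indent=2))
```

Reproducibility of this kind, tied to a known list version, is what allows a firm to explain and evidence a screening decision after the fact.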

By contrast, free or publicly accessible general‑purpose LLMs should not be used for sanctions screening, clearance decisions or risk determinations. Their lack of determinism, transparency and legal grounding makes them unsuitable for these functions.

Conclusion

AI can enhance efficiency, but it does not replace judgement. Under UK and EU sanctions regimes, compliance ultimately depends on human decision‑makers who can understand risk, apply the law, and stand behind the decisions taken.

Firms considering the use of AI in sanctions compliance should focus less on novelty and more on accountability, auditability and control. The question is not whether AI can generate an answer, but whether the firm can explain, evidence and defend the decision that follows.

Need help navigating sanctions risk?
I help companies build practical, risk-based compliance programmes that work across jurisdictions and value chains. If you’re unsure whether your controls are fit for purpose, get in touch or explore other posts on fairgreensanctions.com for insights and guidance.
