The Green Sheet Online Edition
April 27, 2026 • 26:04:02
Auditability is the real AI requirement in financial services
In the financial services sector, AI deployments often stall because they are difficult to audit. If there's uncertainty as to how the system processes data, how files are handled, and why particular outputs are produced, it will be hard to defend results during a bank exam or security review.
Before a bank or payments organization adds AI into real workflows, the first question is not, "Which model is best?" The more fundamental one is, "Can we prove what happened to data in any scenario?"
This requires digging into:
- Where and how does the system run?
- What data can it access, and under which role-based permissions?
- What gets logged and retained, including prompts, outputs, and the sources the system relied on?
- If there is a problem later, can your team reconstruct who asked the question, what information the system pulled, and what it returned?
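Answering those last two questions in practice means retaining a structured record of every interaction. As an illustration only, here is a minimal Python sketch of what such a retained record might look like; the AIAuditRecord class and its field names are hypothetical, not any particular vendor's format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List
import json

@dataclass
class AIAuditRecord:
    """One retained record per AI interaction: who asked, what the
    system pulled, and what it returned. Names are illustrative."""
    user_id: str        # who asked the question
    role: str           # the role under which access was granted
    prompt: str         # the exact question submitted
    sources: List[str]  # document IDs and versions the system relied on
    response: str       # what the system returned
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_log_line(self) -> str:
        # Serialize for an append-only log store that reviewers can replay.
        return json.dumps(self.__dict__, sort_keys=True)

# Example: a record an examiner could later reconstruct end to end.
record = AIAuditRecord(
    user_id="analyst-042",
    role="aml_investigator",
    prompt="Summarize SAR filing thresholds for wire transfers.",
    sources=["aml-procedures-v12#sec4", "sar-manual-v3#p7"],
    response="Per internal procedure AML-4.2, ...",
)
print(record.to_log_line())
```

The point of the sketch is the shape of the record, not the code: every element a reviewer would ask about is captured at the moment of the interaction, not reconstructed after the fact.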
If you cannot answer those questions with transparency, detailed records, logs and documentation, you do not have a well-governed system. You have a black box.
A need for higher standards
This is a common problem with mainstream, general-purpose AI platforms built for broad use across many industries. Many run outside an institution's environment and work across large external datasets and training systems.
In a regulated setting, however, this inevitably creates unacceptable uncertainty and risk as to where processing is actually occurring, whether sensitive data stays under institutional control, and whether outputs can be traced to vetted, internal material only. Serious consequences can arise, including unauthorized third-party data exposure, legal liabilities and erosion of trust.
Many early deployments of these general-purpose AI platforms in financial services have centered on chat interfaces and pilots. These tools are proving useful, but they can also distract from the harder problem: when AI operates in a highly regulated environment, it must meet higher standards for demonstrating accuracy, traceability and consistent access enforcement.
A new approach
One approach the heavily regulated financial industry can use to address these questions is "on-prem AI." This model runs entirely within an organization's own environment, on premises, governed by the same security architecture, regulatory controls and policy practices that already secure core systems.
Financial institutions can then be certain, and can prove, that sensitive information stays inside the institution, is tracked, and is never pulled out to train external models or to reference unknown, unreliable sources.
Permissions control is the first hurdle. When an AI layer can see more data than the person querying it, a governance gap opens. A safer pattern is an AI operating layer that inherits existing role-based permissions and preserves existing audit trails, so the system never becomes a shortcut to restricted information.
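One way to make that inheritance concrete is to filter every retrieved document against the caller's existing entitlements before the model ever sees it. A minimal sketch, assuming a hypothetical Document type that carries the same access metadata the institution's systems already enforce:

```python
from typing import List

class Document:
    # Hypothetical type: carries the ACL metadata existing systems enforce.
    def __init__(self, doc_id: str, required_role: str, text: str):
        self.doc_id = doc_id
        self.required_role = required_role
        self.text = text

def filter_by_entitlements(docs: List[Document], user_roles: set) -> List[Document]:
    """Drop any retrieved document the requesting user could not open
    directly. The AI layer inherits permissions; it never widens them."""
    return [d for d in docs if d.required_role in user_roles]

# Usage: retrieval runs as the user, not as a privileged service account.
retrieved = [
    Document("aml-procedures-v12", "aml_investigator", "..."),
    Document("board-minutes-2025", "executive", "..."),
]
visible = filter_by_entitlements(retrieved, user_roles={"aml_investigator"})
# Only documents the user is already entitled to reach the prompt context.
assert [d.doc_id for d in visible] == ["aml-procedures-v12"]
```

The design choice to note is that the check happens before retrieval results reach the model, so the AI layer cannot leak anything the user could not already read.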
If asked, internal reviewers can reconstruct who accessed what and when. Prompts and responses can be filed like records: logged, retained and linked to the sources that informed them.
Source control is another safeguard. Disciplined indexing and document management are what make consistently correct responses possible. When the system is limited to referencing only approved internal materials (procedures, manuals, product guides), users can be confident that outputs align with the institution's policies.
A well-run system also updates what it sources as materials are revised, routinely retiring superseded versions so that outdated language never shows up in outputs, as sketched below.
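A minimal illustration of that idea, with a hypothetical ApprovedSourceIndex that keeps exactly one approved version of each document retrievable:

```python
from typing import Dict, Tuple

class ApprovedSourceIndex:
    """Illustrative sketch: the index holds one approved version per
    document, so retrieval can never surface superseded language."""

    def __init__(self):
        self._latest: Dict[str, Tuple[int, str]] = {}  # doc_id -> (version, text)

    def publish(self, doc_id: str, version: int, text: str) -> None:
        # A newer approved version replaces, and thereby retires, the old one.
        current = self._latest.get(doc_id)
        if current is None or version > current[0]:
            self._latest[doc_id] = (version, text)

    def lookup(self, doc_id: str) -> str:
        version, text = self._latest[doc_id]
        return f"{doc_id} v{version}: {text}"

index = ApprovedSourceIndex()
index.publish("wire-transfer-procedures", 11, "Limit: $10,000 ...")
index.publish("wire-transfer-procedures", 12, "Limit: $25,000 ...")
# Only the current approved wording is retrievable.
print(index.lookup("wire-transfer-procedures"))
```

In a production deployment this would be backed by the institution's document management system rather than an in-memory dictionary, but the invariant is the same: one approved version, nothing stale.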
Audit-ready AI
When these critical pieces are in place, the benefits become evident in work that already strains teams and budgets, such as compliance documentation in anti-money laundering investigations and suspicious activity reports. These workflows are deadline-driven and demand careful attention to regulations. In a securely governed environment, AI assistance significantly cuts the time spent searching, cross-referencing and assembling documents for human review.
None of this, of course, removes human accountability. For banking and payments leaders to be confident in audit-ready AI, the standard is to show where the model runs, what it can access, and which controls were applied. For AI to scale in financial services, these are the first guardrails to establish: securely contained, easily auditable, and tightly governed from the start. 
David Moscatelli, CEO and founder of Go Abacus, is a payments and technology leader specializing in analytics, machine learning and enterprise AI. He was the innovator behind Deloitte Cortex and previously led software innovation at Deloitte and Synchrony Bank. As co-founder and CEO of Go Abacus, he drives product strategy and infrastructure development focused on secure, compliant AI deployment for regulated industries. Based in Chicago, the company provides on-prem AI infrastructure that enables financial institutions, healthcare organizations and insurers to deploy AI within existing systems while maintaining strict control, auditability and regulatory readiness. For more information or to connect with David, please visit https://goabacus.co or email info@goabacus.co.



