UK Credit Unions
UK credit unions using AI in lending, fraud detection, member services, or AML face growing regulatory expectations. With the FCA's Consumer Duty fully in force, SM&CR accountability applying to AI governance, and the ICO actively supervising automated decision making, demonstrating robust AI governance is no longer optional.
This interactive scorecard provides a confidential assessment of your organisation's position. Answer the 24 questions below to generate your preliminary score and a board-ready results report.
Why this matters now (April 2026)
The FCA and PRA confirmed in April 2024 that SM&CR already applies to AI governance. Senior Managers are personally accountable. Consumer Duty is fully in force for all products and services. The ICO is supervising automated decision making under UK GDPR. The FCA launched its AI review (Mills Review) in January 2026, signalling increasing regulatory scrutiny.
UK GDPR and ICO Compliance
This section focuses on understanding the AI systems in use and compliance with UK GDPR and ICO guidance on automated decision making and profiling. For most credit unions, the most significant AI systems will be in credit decisioning, AML and fraud monitoring, and member-facing chatbots.
Has the credit union conducted and documented a formal inventory of all AI and machine learning systems currently in use, including loan decisioning modules, credit scoring tools, fraud detection systems, member chatbots and AML monitoring?
Have you identified whether any of your AI systems perform solely automated decision making that produces legal or similarly significant effects on members, and ensured compliance with UK GDPR Article 22, including the member's rights to obtain human intervention, to express their point of view, and to contest the decision?
Have Data Protection Impact Assessments (DPIAs) been completed and kept current for all AI systems that process member personal data, specifically addressing the risks of profiling, automated processing, and potential discriminatory outcomes?
SM&CR Individual Accountability
This section focuses on the Senior Managers and Certification Regime (SM&CR) obligations that apply to UK credit unions, and how individual accountability maps to AI governance. The FCA and PRA confirmed in April 2024 that SM&CR already applies to AI oversight.
Has the board assigned clear, documented responsibility for oversight of AI governance to a specific Senior Manager, reflected in their Statement of Responsibilities, consistent with the FCA's expectation that SM&CR already applies to AI oversight?
Have Senior Managers taken documented reasonable steps to ensure that the AI systems operating within their area of responsibility are effectively controlled, tested, and aligned with the credit union's documented risk appetite?
Does your annual fit and proper certification process for Certification Staff now consider the competence and capability required to manage, develop, or oversee AI systems relevant to their role?
Does the board receive regular, structured reporting on the performance of key AI systems, including accuracy metrics, fairness audits, identified biases, and any significant incidents or member complaints related to AI-driven decisions?
Training, Literacy and Oversight Capability
This section addresses the need for upskilling at all levels of the organisation to ensure effective oversight, responsible operation, and informed governance of AI systems. For credit unions, the priority is practical literacy rather than technical expertise.
Have board members received tailored training on AI governance, data ethics, and their oversight responsibilities in the context of the credit union's strategic objectives and FCA regulatory obligations?
Have Senior Managers and key operational staff received practical training on AI literacy, including the specific risks of bias, explainability and data drift, and the opportunities AI presents for their areas of responsibility?
Have member-facing and operational staff who interact with or rely on AI systems been trained on the system's purpose, its known limitations, and the established escalation paths for unexpected results, member queries, or potential errors?
Equality Act 2010 and Consumer Duty
This section focuses on the practical controls needed to manage AI risk, prevent unlawful discrimination, and ensure consistently fair outcomes for all members, including those with characteristics of vulnerability. Credit unions have a particular responsibility here given their community focus and the demographics of their membership.
Have you conducted and documented formal risk assessments for each AI system, covering areas such as data quality, model accuracy, cybersecurity vulnerabilities, and the potential for member harm under the FCA's Consumer Duty framework?
Do you have a process to periodically test for and mitigate bias in AI systems, particularly concerning protected characteristics under the Equality Act 2010, including age, disability, race, sex, religion or belief, and pregnancy and maternity, to ensure fair and lawful outcomes for all members?
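As one illustration of what such testing might involve, the sketch below applies the "four-fifths rule" to approval rates across groups. This is a common indicative screen, not a legal test under the Equality Act, and the data, group labels, and 0.8 threshold are illustrative assumptions:

```python
def selection_rates(decisions, groups):
    """Approval rate per group. decisions is a list of 0/1 outcomes
    (1 = approved); groups is a parallel list of group labels."""
    totals, approvals = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        approvals[g] = approvals.get(g, 0) + d
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest to the highest group approval rate.
    A value below 0.8 (the 'four-fifths rule') is a common flag
    for further investigation -- indicative only, not a legal test."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Illustrative data only: 1 = loan approved, 0 = declined
decisions = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact_ratio(decisions, groups)
print(round(ratio, 2))  # 0.33 -> well below 0.8, worth reviewing
```

A check like this can be run periodically on decision logs without any access to the vendor's model internals, which is why it is often a practical starting point for smaller firms.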
Are there specific controls in place to ensure AI systems do not systematically disadvantage vulnerable customers, and does the credit union have a documented approach to identifying and adjusting AI outputs where a member's circumstances of vulnerability are known?
Have you established procedures to monitor the performance, accuracy, and data drift of your AI models on an ongoing basis, with defined thresholds that trigger review or suspension of the model?
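To make the idea of a drift threshold concrete, here is a minimal sketch using the Population Stability Index (PSI), a widely used drift measure. The score bands, sample data, and rule-of-thumb thresholds are illustrative assumptions, not regulatory values:

```python
import math

def psi(expected, actual, breakpoints):
    """Population Stability Index between a baseline ('expected') and
    a recent ('actual') sample of a model input or score, bucketed at
    the given breakpoints. Rule-of-thumb thresholds often cited:
    < 0.1 stable, 0.1-0.25 review, > 0.25 investigate."""
    def fractions(values):
        counts = [0] * (len(breakpoints) + 1)
        for v in values:
            i = sum(v > b for b in breakpoints)  # which bucket v falls in
            counts[i] += 1
        n = len(values)
        return [max(c / n, 1e-6) for c in counts]  # floor avoids log(0)
    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative credit-score samples only
baseline = [520, 580, 610, 640, 700, 710, 730]   # scores at go-live
recent   = [450, 470, 500, 530, 560, 600, 620]   # scores this quarter
drift = psi(baseline, recent, breakpoints=[550, 650])
if drift > 0.25:
    print(f"PSI {drift:.2f}: escalate for model review")
```

The point of the sketch is the governance pattern: a defined metric, a defined threshold, and a defined action when the threshold is breached, all of which can be recorded in board reporting.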
PRA SS2/21, FCA Outsourcing Rules and Practical Vendor Oversight
Most credit unions do not build AI systems internally. They rely on FinTech vendors and platform providers for lending modules, fraud tools, and member services. This section addresses both the formal regulatory requirements and the practical question of how a CEO or board can gain reasonable assurance that a vendor's AI tool is performing fairly and lawfully, without requiring a data scientist on staff.
Have you assessed your AI vendor arrangements against the PRA's SS2/21 and FCA outsourcing rules to determine whether they constitute material outsourcing or critical third party arrangements, and have you notified the FCA where required?
Have you reviewed all contracts for AI systems to ensure they include adequate clauses covering data processing responsibilities, audit rights, exit provisions, service continuity, liability allocation, and the vendor's obligation to maintain compliance?
Is there documented evidence of your due diligence on AI vendors, assessing not just their technical solution but also their own governance, data security, operational resilience, and regulatory compliance position?
Do you have a formal change management process to assess and approve significant changes or updates to vendor AI systems before they are deployed, including model updates, retraining, or changes to input data, to ensure continuity of service and ongoing compliance?
Can you obtain, and have you reviewed, sufficient information from your AI vendors to satisfy yourself, without specialist technical staff, that their systems are not producing biased or unfair outcomes for your members?
Member Rights, Complaints and Incident Response
This section links AI governance to the core principles of member trust, the FCA Consumer Duty's four outcome areas, and the credit union's obligations around transparency, redress, and operational resilience.
Have you ensured that you have access to, or have created, adequate documentation for each AI system demonstrating how it supports the delivery of good outcomes for retail customers, in a form the FCA could readily review and understand?
Have you updated your member-facing materials, including loan application forms, privacy notices and website, to be transparent about the use of AI in decision making processes, using clear and simple language that a member can readily understand?
Have you established a clear, documented process for a member to request and receive a meaningful explanation of a significant decision made about them by an AI system, and to request human intervention or contest the decision?
Is your complaints handling process equipped to manage and investigate complaints related to AI-driven decisions, including the ability to reconstruct the decision, identify potential errors or bias, and provide a fair and timely resolution?
Have you updated your operational resilience and incident response procedures to specifically include AI-related scenarios, such as an AI system producing systematically incorrect decisions, the discovery of significant bias, a data breach involving AI-processed data, or a critical AI vendor outage?