Artificial Intelligence (AI) and Machine Learning (ML) are rapidly transforming Kenya’s financial sector. The March 2025 Central Bank of Kenya (CBK) survey on AI in the Banking Sector shows that 65% of financial institutions have adopted AI for credit risk assessment, making it the most common application. A further 83% plan to adopt AI for credit risk assessment in the future.

These statistics show that AI systems are increasingly influencing lending outcomes, offering substantial opportunities to improve financial inclusion and reduce operational costs. At the same time, they introduce complex risks that demand fairness, inclusivity, and strong risk and governance frameworks.

AI in Credit Risk Assessment and Scoring: How it Works

The OECD Framework for the Classification of AI Systems (February 2022) describes a credit-scoring AI system as follows:

“Credit-scoring system is representative of a machine-based system that influences its environment (whether people are granted a loan). It makes recommendations (a credit score) for a given set of objectives (that, together, determine credit-worthiness). It does so by using both machine-based inputs (historical data on people’s profiles and on whether they repaid loans) and human-based inputs (a set of rules). With these two sets of inputs, the system perceives real environments (whether people have repaid loans in the past or whether they are repaying loans on an ongoing basis). It transforms such perceptions into models automatically. A credit-scoring algorithm could use a statistical model, for example. Finally, it uses model inferencing (the credit-scoring algorithm) to formulate a recommendation (a credit score) of options for outcomes (providing or denying a loan).”

Reference: OECD (2022), OECD Framework for the Classification of AI systems, OECD Publishing, https://oecd.ai/en/ai-publications/framework-classification
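
To make the OECD description concrete, here is a minimal, purely illustrative sketch in Python of those components: machine-based inputs (historical repayment data), a statistical model, a human-based rule, and a recommendation. The data, features, and thresholds are synthetic and hypothetical, not any institution’s actual scoring model.

```python
# Purely illustrative sketch of the components the OECD describes:
# machine-based inputs (historical repayment data), a statistical model,
# a human-based rule, and a recommendation (approve / deny).
# All features, values, and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Machine-based inputs: historical profiles and whether past loans were repaid.
n = 1_000
X_hist = rng.normal(size=(n, 3))  # e.g. income proxy, loan size, tenure
y_repaid = (X_hist[:, 0] - 0.5 * X_hist[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# The system "transforms perceptions into models automatically":
# here, a simple statistical (logistic regression) model.
model = LogisticRegression().fit(X_hist, y_repaid)

def recommend(applicant_features, months_in_arrears):
    """Model inference plus a human-based rule, producing a recommendation."""
    score = model.predict_proba([applicant_features])[0, 1]  # credit score in [0, 1]
    # Human-based input: a rule set defined by credit officers (hypothetical).
    if months_in_arrears >= 3:
        return score, "deny (policy rule: existing arrears)"
    return score, "approve" if score >= 0.6 else "deny"

print(recommend([0.8, -0.2, 1.1], months_in_arrears=0))
```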

Potential practical applications include:

  1. Loan Approval Decisions: The AI system analyzes multiple local data sources, including mobile money transaction histories, utility payments, and behavioral indicators, to generate more accurate credit scores, even for customers with limited credit history (see the feature sketch after this list).
  2. Repayment Trend Analysis: Machine learning can detect subtle patterns in customer behavior that signal reliability or early signs of financial distress, enabling institutions to proactively manage risk and adjust repayment plans.
  3. Default Risk Reduction: By analyzing more variables than traditional scoring methods, AI enhances risk categorization, helping institutions reduce Non-Performing Loans (NPLs) and improve portfolio performance.
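
As a hypothetical illustration of the first application above, the sketch below shows how raw alternative data (mobile money transactions and utility payments) might be summarized into behavioral features for customers with little formal credit history. The schemas, column names, and feature names are invented for illustration and do not reflect any provider’s actual data.

```python
# Hypothetical sketch: turning alternative data (mobile money and utility
# payments) into features for customers with little formal credit history.
# Schemas and feature names are illustrative, not any provider's actual data.
import pandas as pd

# Mobile money transactions: one row per transaction (hypothetical schema).
momo = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2],
    "amount":      [1500, -400, 2000, 300, -250],
    "date": pd.to_datetime(["2025-01-03", "2025-01-10", "2025-02-01",
                            "2025-01-15", "2025-02-20"]),
})

# Utility payments: one row per bill (hypothetical schema).
utility = pd.DataFrame({
    "customer_id": [1, 1, 2, 2],
    "paid_on_time": [1, 1, 1, 0],
})

# Behavioral features a scoring model might use.
momo_feats = momo.groupby("customer_id")["amount"].agg(
    avg_txn="mean",
    txn_count="count",
    inflow=lambda s: s[s > 0].sum(),
)
utility_feats = utility.groupby("customer_id")["paid_on_time"].mean().rename("on_time_rate")

features = momo_feats.join(utility_feats)
print(features)
```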

Demonstrated Success of AI Credit Scoring in Africa

According to the OECD Africa Capital Markets Report 2025, AI-based credit scoring in several African countries has contributed to financial inclusion and capital market participation. In Ethiopia, for instance, over 380,000 MSMEs accessed approximately USD 150 million in capital through uncollateralized credit facilities driven by AI credit scoring. In Zambia, the fintech sector uses AI-powered algorithms to offer uncollateralized credit to underbanked customers, analyzing data sources to assess credit-worthiness. In Kenya, some institutions leverage AI to provide credit to digital financial services (DFS) consumers and to support debt management solutions.

Reference: OECD (2025), Africa Capital Markets Report 2025, OECD Capital Market Series, OECD Publishing, Paris. https://doi.org/10.1787/7d26e1d3-en

Potential Risks of Using AI in Credit Scoring

While the use of AI in credit scoring has demonstrated success in promoting financial inclusion and access to capital across Africa, its adoption introduces risks that may directly impact business strategy, including but not limited to:

  1. Bias and Discrimination: It has been argued that many financial institutions focus on “willingness” to repay rather than “true repayment capacity”. Customers with irregular or seasonal income, which is common in Kenya’s informal sector, or underserved populations, including Persons with Disabilities, may be unfairly scored. Algorithmic design and data assumptions can unintentionally lead to bias.
  2. Limited Transparency and Explainability: Complex models often act as “black boxes”. Institutions may struggle to explain loan denials or risk classifications, affecting customer trust.
  3. Data Drift: AI models may underperform when customer behavior or economic conditions change. A model trained in a stable economy can misclassify risk, mispricing loans and affecting portfolio performance during inflationary spikes or periods of income variability (see the monitoring sketch after this list).
  4. Data Protection and Security Risks: AI systems rely on sensitive financial and personal data. Weak governance increases exposure to breaches.
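
Two of these risks, bias and data drift, lend themselves to simple quantitative monitoring. The sketch below is illustrative only: it compares approval rates across two hypothetical customer groups as a rough bias signal, and computes a Population Stability Index (PSI), a commonly used drift indicator, between training-time and live score distributions. The group labels, bins, and the 0.25 PSI rule of thumb are assumptions, not regulatory thresholds.

```python
# Illustrative monitoring checks (not a compliance tool): an approval-rate
# comparison across customer groups as a rough bias signal, and a
# Population Stability Index (PSI) as a common data-drift indicator.
# Group labels, bins, and thresholds are hypothetical.
import numpy as np

def approval_rate_gap(approved, group):
    """Difference in approval rates between two hypothetical groups A and B."""
    approved, group = np.asarray(approved), np.asarray(group)
    rate_a = approved[group == "A"].mean()
    rate_b = approved[group == "B"].mean()
    return rate_a - rate_b

def psi(expected, actual, bins=10):
    """PSI between training-time and live score distributions."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical usage: scores at training time vs scores observed this month.
train_scores = np.random.default_rng(0).beta(2, 5, size=5000)
live_scores = np.random.default_rng(1).beta(2, 3, size=5000)  # shifted distribution
print("PSI:", round(psi(train_scores, live_scores), 3))  # > 0.25 is often read as a major shift
print("Approval gap:", approval_rate_gap([1, 0, 1, 1], ["A", "A", "B", "B"]))
```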

Are Financial Institutions Ready for Responsible AI in Lending?

Given the risks of bias, data drift, and weaknesses in data privacy and security, AI does not just automate decisions: it directly impacts a person’s financial position by influencing the access, cost, and terms of credit, with potentially long-term consequences if decisions are inaccurate or unfair.

This is all the more reason for financial institutions to begin designing and implementing AI governance frameworks, including policies and risk management, across the entire AI lifecycle, ensuring responsible adoption and oversight from day one.

The following questions help executives assess readiness and ensure AI is used responsibly:

  1. Understanding Your System
     - Do we fully understand how our AI models make decisions, including statistical or inference processes?
     - Do we monitor for data drift and maintain data quality monitoring, retraining models as necessary to ensure accuracy?
     - Are we mitigating interpretation bias by using explainable AI techniques (e.g., SHAP, LIME) and training teams on bias, fairness, and interpretability? (See the sketch after this list.)
  2. Data Governance & Compliance
     - Are we applying the principle of data minimization, collecting only the personal data necessary for AI to function effectively?
     - Have we validated the relevance and representativeness of training data, and are we using anonymization, pseudonymization, or sampling where possible?
     - Do we have robust data security measures, continuously monitoring and auditing AI systems and the data they rely on?
  3. Fairness and Customer Treatment
     - Are we testing for bias or unfair outcomes across customer groups, including vulnerable groups such as Persons with Disabilities?
     - Do we have mechanisms to address customer complaints or feedback related to AI-driven decisions?
  4. Human Oversight and Governance
     - Are accountability roles and decision-making responsibilities clearly defined?
     - Are we using AI strategically to create competitive advantage and support board-level decision making?
  5. Regulatory Preparedness
     - Are we ready for the upcoming CBK Guidance Note on AI and other regulatory expectations?
     - Do we have proactive strategies to align AI practices with emerging regulations and governance standards?
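
On explainability, the following sketch assumes the open-source shap package is available (pip install shap) and shows how per-feature contributions could be surfaced for a single credit decision. The model, features, and data are synthetic and purely illustrative.

```python
# Hypothetical sketch, assuming the open-source `shap` package is installed.
# Shows how per-feature contributions could help explain one credit decision.
# Features, data, and the model itself are synthetic and illustrative only.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
feature_names = ["txn_count", "on_time_rate", "avg_inflow"]  # hypothetical features
X = rng.normal(size=(500, 3))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one applicant's score: which features pushed it up or down?
explainer = shap.Explainer(model, X)
explanation = explainer(X[:1])
for name, contribution in zip(feature_names, explanation.values[0]):
    print(f"{name}: {contribution:+.3f}")
```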

In conclusion

The CBK 2025 survey on Artificial Intelligence in the Banking Sector also shows that 50% of institutions have adopted AI, yet only 30% have formal data strategies and only 41% have AI policies. This highlights a risk and governance gap at a time when regulatory expectations are increasing.

Even as CBK finalizes its Guidance Note on AI, institutions have an opportunity to strengthen AI governance and position themselves for innovation and competitive advantage. Engaging advisory experts early in the AI lifecycle (development, deployment, or procurement) ensures institutions stay ahead while deploying AI responsibly. At Riskhouse, we provide AI risk advisory services and help institutions establish AI governance and risk management that align with their business objectives.

At RISKHOUSE INTERNATIONAL, our multi-disciplinary team combines legal, risk, and compliance expertise to provide an objective review of policies and controls; specialized guidance in ESG, Crypto, and AI governance; access to benchmarking and best practices from multiple industries; and efficient implementation support. This allows your internal teams to focus on execution while ensuring that your governance framework is comprehensive, practical, and strategically aligned, turning policies into organizational growth.
