Our M. Allen Blog

Our latest thought leadership and insights with key strategies to win in a challenging market.

Eight Mistakes to Avoid: AI & Digital in Lending

Jun 29, 2025

The Top Eight Mistakes Holding Back Digital and AI Progress in Mortgage and Consumer Lending
By Matt Slonaker

As the mortgage and consumer lending industries race to embrace digital transformation and artificial intelligence (AI), the promise of streamlined processes, enhanced customer experiences, and smarter risk management is tantalizing. Yet, I’ve seen firsthand how missteps can erode the value these technologies are supposed to deliver. Drawing from my experience and industry insights, here are the top eight mistakes that lenders make when adopting digital and AI solutions—and the actions to avoid them. Backed by real-world examples and data, this perspective underscores what’s at stake and how to get it right.

1. Neglecting Regulatory Compliance and Fair Lending

Mistake: Implementing AI without ensuring compliance with fair lending laws, such as the Equal Credit Opportunity Act (ECOA), risks regulatory penalties and biased outcomes.
Why It Hurts: AI models can inadvertently perpetuate bias if trained on historical data reflecting past discriminatory practices. The Consumer Financial Protection Bureau (CFPB) has emphasized that lenders must provide specific, accurate reasons for credit denials, even when using complex algorithms. Failure to comply can lead to legal action and reputational damage.
Example: In 2023, the CFPB issued guidance clarifying that creditors cannot rely on generic checklists for adverse action notices when using AI, as this fails to inform consumers accurately about denial reasons.
Action: Invest in transparent AI models and regular fair lending testing to identify and mitigate disparate impacts. The CFPB recommends automated debiasing methodologies to develop less discriminatory models.
Stat: A 2023 Fannie Mae survey noted that 56% of mortgage lenders cite integrating AI with existing systems as a top challenge, often due to compliance concerns.
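Fair lending testing often begins with simple group-level metrics. As an illustrative sketch only (not a compliance tool, and no substitute for legal and statistical review), the widely used four-fifths rule compares approval rates across groups; the group names, counts, and 0.8 threshold below are hypothetical:

```python
# Sketch of a four-fifths (80%) rule check on model approval rates across
# demographic groups. Group names and counts are hypothetical; real fair
# lending testing requires rigorous legal and statistical review.

def adverse_impact_ratio(approvals):
    """approvals: dict mapping group name -> (approved, total).
    Returns dict of group -> approval-rate ratio vs. the highest-rate group."""
    rates = {g: a / t for g, (a, t) in approvals.items()}
    benchmark = max(rates.values())
    return {g: r / benchmark for g, r in rates.items()}

approvals = {
    "group_a": (450, 600),  # 75% approval rate
    "group_b": (300, 550),  # roughly 54.5% approval rate
}
for group, ratio in adverse_impact_ratio(approvals).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio:.2f} ({flag})")
```

A ratio below 0.8 for any group is a conventional trigger for deeper disparate-impact analysis, not a verdict on its own.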

2. Over-Reliance on Black-Box Models

Mistake: Using opaque AI systems without understanding how they make decisions undermines trust and accountability.
Why It Hurts: Black-box models make it difficult to explain decisions to consumers or regulators, increasing the risk of non-compliance and customer distrust. A 2024 J.D. Power survey found that only 27% of consumers trust AI for financial advice.
Example: The CFPB has warned that lenders using black-box models must still comply with ECOA by providing precise reasons for credit decisions, highlighting cases where vague explanations led to regulatory scrutiny.
Action: Prioritize explainable AI (XAI) solutions that provide clear rationales for decisions. Human oversight is critical to validate AI outputs and ensure transparency.
Stat: 73% of lenders in a 2023 Fannie Mae survey cited operational efficiency as the primary motivation for AI adoption, but transparency remains a key barrier.
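One common transparency technique for linear scoring models, sketched here with entirely hypothetical feature names, weights, and reference values, is deriving ranked reason codes by comparing each feature's score contribution against a reference profile; this is the kind of specific rationale adverse action notices require:

```python
# Sketch: derive ranked reason codes from a linear credit-scoring model by
# measuring how much each feature dragged the applicant's score below a
# reference profile. All features, weights, and values are hypothetical.

def reason_codes(weights, applicant, reference, top_n=2):
    """Return the top_n features that most reduced the applicant's score
    relative to the reference profile."""
    shortfalls = {
        f: weights[f] * (reference[f] - applicant[f])
        for f in weights
    }
    # Largest positive shortfall = biggest drag on the score.
    ranked = sorted(shortfalls, key=shortfalls.get, reverse=True)
    return [f for f in ranked[:top_n] if shortfalls[f] > 0]

weights = {"credit_history_years": 2.0, "debt_to_income": -30.0, "on_time_payments": 5.0}
reference = {"credit_history_years": 10, "debt_to_income": 0.30, "on_time_payments": 0.98}
applicant = {"credit_history_years": 3, "debt_to_income": 0.55, "on_time_payments": 0.90}
print(reason_codes(weights, applicant, reference))
```

For non-linear models the same idea is typically implemented with attribution methods such as SHAP values rather than raw coefficients.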

3. Underestimating Data Quality Issues

Mistake: Feeding AI systems incomplete or biased data leads to inaccurate predictions and flawed decisions.
Why It Hurts: Poor data quality can exacerbate biases, misjudge creditworthiness, or fail to detect fraud. For instance, a Federal Reserve study found that credit scores were not reliable predictors of subprime mortgage default risk, highlighting data limitations.
Example: Ocrolus, a document processing platform, combines machine learning with human verification to achieve 99% accurate data extraction, addressing the issue of error-prone manual processes.
Action: Invest in robust data governance frameworks and synthetic data to fill gaps while removing risky variables that could introduce bias.
Stat: 56% of mortgage lenders report that integrating AI with legacy systems creates data silos, impacting accuracy.

4. Ignoring Customer Trust and the Human Element

Mistake: Over-automating processes without preserving human interaction alienates customers who value personalized guidance.
Why It Hurts: A 2024 Bankrate survey revealed that many consumers still prefer speaking with a human loan officer over a chatbot for mortgage queries. Over-automation can erode trust, especially when AI outputs are not explained clearly.
Example: Rocket Mortgage’s Rocket Logic AI platform reduced turn times by 25% from 2022 to 2024, but the company emphasizes that human loan officers remain essential for coaching borrowers.
Action: Blend AI with human expertise, using tools like chatbots for routine tasks while reserving complex interactions for loan officers.
Stat: 54% of consumers in a 2024 J.D. Power survey had used generative AI tools, but only 27% trusted them for financial decisions.

5. Failing to Modernize Legacy Systems

Mistake: Attempting to integrate AI with outdated legacy systems creates inefficiencies and data silos.
Why It Hurts: Legacy systems struggle to handle the scale and complexity of modern data, hindering AI’s potential. A 2023 EY analysis found that large banks’ technology spend for closed mortgages is four times that of nonbanks due to outdated platforms.
Example: FinTechs like SoFi leverage modern platforms to offer seamless digital experiences, outpacing traditional banks stuck with fragmented systems.
Action: Prioritize lending platform modernization to create flexible, data-driven infrastructures that support AI integration.
Stat: 63% of consumers prefer an online mortgage process, and 58% say digital application availability influences lender choice.

6. Overlooking Ethical and Privacy Concerns

Mistake: Using sensitive consumer data, like social media or behavioral patterns, without robust privacy safeguards risks breaches and ethical violations.
Why It Hurts: AI systems harvesting data from non-traditional sources (e.g., social media) can violate privacy laws and erode trust. The CFPB has flagged such practices as potential sources of digital discrimination.
Example: The use of proxies for protected characteristics (e.g., gender inferred from shopping habits) has drawn scrutiny from regulators, as such proxies are illegal under fair lending laws.
Action: Implement strict data privacy protocols and avoid using proxies for protected attributes. Synthetic data can help model scenarios without compromising privacy.
Stat: In 2023, 73% of lenders cited data privacy concerns as a barrier to AI adoption.

7. Underinvesting in Talent and Training

Mistake: Failing to train staff on AI tools or hire skilled data scientists limits effective implementation.
Why It Hurts: Without proper training, employees may misuse AI tools or fail to leverage their full potential. A 2025 Forbes article noted that AI’s sophistication risks stunting entry-level analyst development, as tasks traditionally used for training are automated.
Example: Arch Mortgage emphasizes company training and webinars to help loan officers integrate AI tools into workflows, boosting adoption.
Action: Invest in continuous training and hire AI specialists to bridge skill gaps. Encourage experimentation with guardrails to foster innovation.
Stat: The U.S. Bureau of Labor Statistics projects a 4% increase in loan officer employment from 2021 to 2031, underscoring the need for human-AI collaboration.

8. Chasing Hype Over Strategy

Mistake: Adopting AI for its buzz without aligning it with business goals leads to wasted resources.
Why It Hurts: A 2024 EY report highlighted that banks focusing on AI without strategic alignment face amplified risks, including model drift and ethical concerns. Chasing trends can divert focus from core priorities like customer experience or compliance.
Example: Some banks rushed to adopt generative AI after ChatGPT’s 2022 launch, only to face integration challenges due to misaligned strategies.
Action: Develop a clear AI strategy tied to specific outcomes, such as improving underwriting speed or reducing origination costs.
Stat: 58% of finance leaders surveyed by Gartner in 2024 had deployed or planned AI initiatives, up from 37% in 2023, but many lacked clear strategies.

My Point of View

As someone deeply invested in the mortgage and lending space, I believe AI and digital tools are game-changers—but only if wielded thoughtfully. The mistakes above stem from a rush to innovate without addressing foundational issues like compliance, transparency, and customer trust. Lenders who succeed will balance cutting-edge tech with human expertise, ensuring that AI enhances rather than replaces the personal touch that borrowers crave. By prioritizing data quality, ethical practices, and strategic alignment, we can unlock AI’s potential to deliver faster, fairer, and more efficient lending experiences.

The path forward requires vigilance. Regulators like the CFPB are watching closely, and consumers are demanding transparency. My advice? Start small, test rigorously, and always keep the borrower at the center. The future of lending isn’t just digital—it’s human, too.

Matt Slonaker is a thought leader in mortgage and consumer lending, passionate about leveraging technology to drive efficiency while maintaining trust and compliance.


Sources:

  • Consumer Financial Protection Bureau (CFPB) Guidance on AI and Credit Denials 
  • Fannie Mae Mortgage Lender Sentiment Survey 2023 
  • J.D. Power 2024 Consumer Survey 
  • Federal Reserve Bank of St. Louis Study on Credit Scores 
  • EY Analysis on Lending Platform Modernization 
  • Bankrate on AI in Mortgage Lending 
  • Forbes on AI Challenges in Finance 
  • EY on AI Risks in Banking