
This article was paid for by a contributing third party.

AI in capital markets: bridging predictive precision with generative possibility


Artificial intelligence is transforming financial services, driving advances in credit risk, synthetic data generation, automation and model governance. But with dynamic innovation comes complexity. Evolving regulatory requirements, rising compliance demands, and growing concerns over data quality and consistency are prompting firms to approach AI with renewed caution and clarity.

Drawing on exclusive insights from experts at SAS and Microsoft, this feature explores the strategic value of predictive AI and generative AI (GenAI) across risk functions, distinguishing their roles in areas such as credit modelling and model risk management.

It also examines how AI and large language models (LLMs) are being used to improve data processing, document intelligence and predictive insights, while also reshaping how firms approach model development and oversight. At the foundation of this progress is the need for robust data governance – an area in which AI can play a vital role in simplifying and strengthening operations.

Credit risk landscape

While predictive and GenAI are undoubtedly transforming financial services, many organisations are taking a strategic pause – reassessing how best to deploy these technologies and fully realise their potential. The opportunity is enormous, but so too are the challenges.

For most, the journey begins and hinges on one core factor – data. Poor-quality data inputs can generate misleading outputs, often referred to as AI hallucinations, which can undermine confidence, damage customer relationships and compromise regulatory standing. In a highly regulated financial services industry, flawed data doesn’t just introduce operational inefficiencies – it becomes a risk factor itself.

Leaders at SAS and Microsoft emphasise that success lies in leveraging the right type of AI for the right problem. Determining where predictive AI, with its strength in structured modelling, and GenAI, which excels at pattern creation and synthetic data, each fit within specific risk domains is essential. Whether dealing with credit, operational, market or liquidity risk, each area poses unique constraints that demand tailored approaches.

Geography is also a critical factor in AI maturity. Markets such as the European Union and Asia are setting the pace, with regulatory frameworks that support innovation while managing downside risk. In contrast, other regions remain less structured, offering both flexibility and uncertainty. In all cases, AI initiatives must align with enterprise values and be grounded in practical, validated use cases before they scale.

There is growing traction for predictive AI, particularly in fraud detection and operational risk. Techniques such as gradient-boosting algorithms are already reshaping credit decisioning, making outcomes more precise and data-driven. However, organisations must go beyond implementation – they must think deeply about model architecture, control frameworks and AI governance to ensure sustainability and compliance.
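To make the gradient-boosting idea concrete, the sketch below fits a toy default-scoring model by repeatedly fitting decision stumps to residuals. The single debt-to-income feature, labels and hyperparameters are invented for illustration; production credit models use mature libraries and far richer feature sets.

```python
# Minimal gradient boosting for credit default scoring (illustrative only).

def stump_fit(x, residuals):
    """Find the threshold split on x that minimises squared error of residuals."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue  # degenerate split, skip
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda xi: lmean if xi <= t else rmean

def boost(x, y, n_rounds=20, lr=0.3):
    """Fit an additive model: mean(y) plus lr-scaled stumps on residuals."""
    base = sum(y) / len(y)
    stumps, preds = [], [base] * len(y)
    for _ in range(n_rounds):
        residuals = [yi - p for yi, p in zip(y, preds)]
        s = stump_fit(x, residuals)
        stumps.append(s)
        preds = [p + lr * s(xi) for p, xi in zip(preds, x)]
    return lambda xi: base + lr * sum(s(xi) for s in stumps)

# Toy data: default flag rises with debt-to-income ratio
x = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [0,   0,   0,   0,   1,   1,   1,   1]
score = boost(x, y)
```

Each boosting round fits a stump to what the current model still gets wrong, so the score for low-ratio borrowers converges towards 0 and for high-ratio borrowers towards 1.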

Risk teams across financial services organisations must lead the way in AI adoption for other business divisions to follow, experts at SAS and Microsoft note. Their structured, regulated environment makes them well suited to piloting responsible innovation. In doing so, they also help build organisational confidence – creating a path for broader adoption across business units.

A promising development is AI’s role in documenting processes, testing controls and enhancing model traceability. While this brings architectural and oversight demands, the upside is considerable: faster compliance, clearer audit trails and reduced operational friction.

Looking ahead, the rise of agentic AI – where one AI system evaluates another – offers exciting new models for oversight. Meanwhile, educating and enabling regulators will remain critical. As seen in joint efforts led by Microsoft, empowering regulators to understand and apply AI themselves will shape how innovation is governed across the financial ecosystem.

Automation use cases

Organisations are increasingly adopting predictive AI in credit, liquidity and market risk, where accuracy is paramount and trust in model outputs is non-negotiable. SAS and Microsoft have partnered to deliver robust platforms for complex asset-liability management scenarios, such as Monte Carlo simulations, helping institutions assess risk exposure, optimise investment strategies and improve long-term decision-making.
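As a sketch of the Monte Carlo approach mentioned above, the snippet below simulates terminal portfolio values under a simple per-period Gaussian return model and reads off a 95% value-at-risk. All parameters and the return model are illustrative assumptions, not figures from the SAS and Microsoft platforms.

```python
import random

def simulate_paths(v0, mu, sigma, horizon, n_paths, seed=42):
    """Simulate terminal portfolio values under i.i.d. Gaussian period returns."""
    rng = random.Random(seed)  # seeded for reproducibility
    finals = []
    for _ in range(n_paths):
        v = v0
        for _ in range(horizon):
            v *= 1 + rng.gauss(mu, sigma)
        finals.append(v)
    return finals

def value_at_risk(finals, v0, level=0.95):
    """Loss threshold not exceeded with the given confidence level."""
    losses = sorted(v0 - v for v in finals)
    idx = int(level * len(losses))
    return losses[min(idx, len(losses) - 1)]

# Illustrative parameters: 12 periods, 0.5% drift, 4% volatility per period
finals = simulate_paths(v0=100.0, mu=0.005, sigma=0.04, horizon=12, n_paths=10_000)
var95 = value_at_risk(finals, 100.0, 0.95)
```

Real asset-liability management runs use correlated risk factors and far larger scenario sets, which is what makes the compute platforms discussed here necessary.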

These are ideal use cases for predictive AI: intricate, resource-intensive models that must operate reliably and leave no room for error. Such examples reaffirm the enduring importance of classic, risk-aligned AI – models that are tested, trusted and precise.

In parallel, GenAI is proving a powerful complement, particularly through synthetic data generation. This data allows organisations to safely test and validate models, anonymise sensitive information and improve fraud detection and credit modelling – all without breaching privacy or compliance boundaries. At SAS, synthetic data has already improved validation workflows and data-quality checks, unlocking safer experimentation.
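A minimal illustration of the synthetic-data idea, assuming a toy two-column dataset: fit per-column Gaussian marginals from the real records, then sample fresh ones. This preserves marginal statistics only – production generators also capture cross-column correlations – but it shows how synthetic records can stand in for sensitive originals.

```python
import random
import statistics

def fit_marginals(rows):
    """Record mean/stdev per numeric column of the source data."""
    cols = list(zip(*rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in cols]

def sample_synthetic(marginals, n, seed=7):
    """Draw synthetic records from independent Gaussians per column.
    Marginals only: cross-column structure is deliberately not modelled here."""
    rng = random.Random(seed)
    return [[rng.gauss(m, s) for m, s in marginals] for _ in range(n)]

# Hypothetical source records: (age, salary)
real = [[35, 52000], [42, 61000], [29, 43000], [51, 78000], [38, 55000]]
synthetic = sample_synthetic(fit_marginals(real), n=1000)
```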

Process optimisation is another fast-evolving domain. Financial institutions are capturing metadata about internal workflows and applying predictive architectures – often using neural networks – to uncover inefficiencies and improve execution. These models excel at pattern detection and enable a clearer understanding of how business processes are structured, whether in code or through directed graphs. For many organisations, this visibility has been transformative.

Adding to this momentum, Microsoft’s work on agentic AI is shaping a future in which autonomous agents complete workflows independently. SAS sees this capability gaining traction in retail banking, where automation of portfolio maintenance and routine tasks can significantly reduce cost and improve efficiency.

SAS and Microsoft recently piloted embedded OpenAI capabilities across SAS tools, presenting the solution to more than 150 customers. While the chat interface sparked interest, it was document intelligence that delivered the real impact, automatically converting unstructured financial documents into structured data, populating templates and feeding insights into predictive systems. The implications for credit risk analysis and enterprise-wide automation are significant.
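The document-intelligence pattern described above can be sketched crudely with pattern matching: pull named fields out of free-form loan text into a structured record. The field names, patterns and sample document below are hypothetical; real document-intelligence services use trained layout and language models rather than regular expressions.

```python
import re

def extract_fields(text):
    """Pull key financial fields from free-form text into a structured record.
    Patterns here are illustrative assumptions, not a real product's schema."""
    patterns = {
        "borrower": r"Borrower:\s*(.+)",
        "amount": r"Loan amount:\s*\$?([\d,]+)",
        "rate": r"Interest rate:\s*([\d.]+)%",
    }
    record = {}
    for field, pat in patterns.items():
        m = re.search(pat, text)
        if m:
            record[field] = m.group(1).strip()
    # Coerce numeric fields so downstream predictive systems can consume them
    record["amount"] = int(record["amount"].replace(",", ""))
    record["rate"] = float(record["rate"])
    return record

doc = """Borrower: Acme Industrial Ltd
Loan amount: $2,500,000
Interest rate: 4.75% fixed, 10-year term"""
record = extract_fields(doc)
```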

For SAS, demonstrating tangible business value – through reduced full-time equivalent costs, improved data completeness and the elimination of manual risk exposure – remains essential. Above all, the aim is to help clients achieve the critical trifecta of revenue growth, cost control and risk management.

As adoption grows, so does scrutiny. Financial institutions are increasingly demanding clarity on model return on investment to balance upside with operational risk. Governance is central. Trusted partnerships and a disciplined approach to AI use will define success.

In an AI-enabled era, institutions that pair strategic foresight with practical execution and collaborate with trusted partners will lead with clarity, control and confidence.

Model development and data

Financial services firms are at a pivotal moment in the adoption of AI. As the focus shifts from experimentation to integration, firms are now making informed decisions about how and where to use predictive and GenAI. It is not just about powerful models; it is about delivering end-to-end solutions that align with regulatory, operational and strategic goals.

AI aids in bringing together copies of data from multiple sources to run unified analytics. Once centralised, this data can be used for knowledge graphing, data entity resolution and, ultimately, fuelling predictive models and generative applications.
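A minimal sketch of the entity-resolution step, assuming records arrive as (source system, company name) pairs: normalise names to a shared key so copies from different systems resolve to one entity. The normalisation rules and sample records are illustrative only; production pipelines use richer matching models.

```python
def normalise(name):
    """Crude entity-resolution key: lowercase, strip punctuation and
    common legal suffixes. Illustrative rules only."""
    key = name.lower().replace(".", "").replace(",", "")
    for suffix in (" ltd", " plc", " inc", " limited"):
        key = key.removesuffix(suffix)
    return " ".join(key.split())

def resolve(records):
    """Group records from multiple sources under one resolved entity key."""
    entities = {}
    for source, name in records:
        entities.setdefault(normalise(name), []).append(source)
    return entities

# Hypothetical copies of the same counterparty across systems
records = [
    ("crm", "Acme Ltd."),
    ("payments", "ACME LTD"),
    ("kyc", "Acme Limited"),
    ("crm", "Beta PLC"),
]
entities = resolve(records)
```

Once records resolve to shared keys, the unified view can feed knowledge graphs and downstream predictive models, as described above.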

This approach is complemented by a strong focus on data use patterns. The demand for real-time and near real-time decision-making in high-frequency scenarios, such as payments, transactions and scoring, is rising. In these contexts, data must be lean and stored in highly responsive environments. Conversely, larger-scale use cases – such as anti-money laundering scoring – are better served through batch processing within different architectural frameworks.

Data consistency is also key: a shared definition of ‘default’, for instance, must hold whether it is used in a risk model or across audit functions. Ensuring data quality, lineage and governance stay aligned all the way to the last mile of model execution is pivotal.

Today, many financial firms are building their own predictive models using neural networks, decision trees and gradient boosting. But there is also growing interest in GenAI for creating synthetic data, which helps test systems and models in a privacy-preserving way. While few are building their own LLMs due to cost, risk and legal concerns, many are subscribing to services like OpenAI and fine-tuning them through regulatory-compliant approaches, such as low-rank adaptation, or LoRA as it is known.
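To show why low-rank adaptation is attractive, the sketch below applies a LoRA-style update, W + (alpha/r)·B·A, to a small weight matrix in plain Python. The dimensions and values are illustrative; the point is that only the two small matrices B and A are trained while the base weights stay frozen.

```python
# Low-rank adaptation (LoRA) in miniature: instead of updating a full
# d x d weight matrix W, train two small matrices B (d x r) and A (r x d)
# with rank r << d, and apply W + (alpha / r) * B @ A at inference.

def matmul(X, Y):
    """Plain-Python matrix multiply for the sketch."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

d, r, alpha = 4, 1, 2.0
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base
B = [[0.1] for _ in range(d)]   # d x r, learned during fine-tuning
A = [[0.5, 0.0, 0.0, 0.5]]      # r x d, learned during fine-tuning

delta = matmul(B, A)
W_adapted = [[w + (alpha / r) * dw for w, dw in zip(w_row, d_row)]
             for w_row, d_row in zip(W, delta)]

# Parameter count: a full update needs d*d = 16 values; LoRA trains d*r + r*d = 8.
# At realistic model sizes the saving is what makes fine-tuning affordable.
```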

GenAI algorithms are often likened to new hires – they arrive green but, with training and direction, they become increasingly capable. The challenge, however, lies in understanding where these models should be deployed. This often boils down to regulation, governance and reputational risk. Many are taking a measured approach, guided by internal risk management functions.

This makes governance not just a box to tick, but a central priority. Firms must have operational controls, including a central model repository and formal processes for model development, review and deployment. These are not insurmountable hurdles, but foundational elements that must be addressed quickly as model usage scales.

Predictive AI has matured over the past few years. Once a topic of academic interest, it is now deeply business-driven, used to improve efficiency, customer experience and risk management. GenAI is following a similar path – arguably at a faster pace and with greater potential for disruption.

In summary

As financial services firms advance their AI journeys, success hinges on more than technology – it requires clear governance, trust in data, cross-functional leadership and alignment with core values. Threats are becoming more sophisticated, with fraudsters leveraging natural language tools, making robust defences and regulatory collaboration critical.

True transformation is not plug-and-play; it’s a co-ordinated effort across the entire ecosystem. End-to-end challenges demand end-to-end solutions – secure, efficient and strategic. Above all, organisations must begin with a clear objective. Without a defined North Star, the risk of misalignment – and missed opportunity – grows exponentially.

Navigate the future of banking with intelligent, adaptive automation

Register for the SAS webinar, Beyond the buzzword: agentic AI for financial services
