Large language models (LLMs)
Do banks still need to validate GenAI models?
Regulators carved out GenAI models from new risk guidance. Banks shouldn’t see this as a reason to stop validating them.
The MIT professor giving LLMs a ‘brain scan’
Hui Chen’s research is yielding new ways to interpret – and steer – AI models
Basel III endgame – a timeline
A review of Risk.net’s coverage of the US implementation saga
New LLMs are proving to be surprisingly good quants
Strides in AI’s ability to do maths mean models can plausibly help with research
FHLB Cincinnati explores AI to spot failing banks
Agentic model detects anomalies, monitors sentiment and drafts credit reports for analyst review
Rethinking model validation for GenAI governance
A US model risk leader outlines how banks can recalibrate existing supervisory standards
More interdealer e-trading needed to support FX swap streaming
Dealers say primary venues must gain more traction to allow further electronification on the client side
SNB researchers test LLM-based FX trading strategy
Meta’s Llama 3.1 comes out top predicting G10 currency sentiment based on news articles
Why know-it-all LLMs make second-rate forecasters
A bevy of experiments suggests LLMs are ill-suited for time-series forecasting
Generative AI brings testing times for modellers
Flagstar’s lead model validator offers some tips for safely integrating LLMs into risk models
Quants use AI to shush noisy order-book data
Signals from clusters of seemingly informed trading perform better, researchers say
Academic warns of systemic risk from AI-powered trading
Strategies generated by LLMs exhibit “very strange, correlated trading behaviour”, says Lopez Lira
Former regulator urges new approach to AI explainability
Ex-OCC chief Michael Hsu suggests shift from academic analysis to decision-based techniques
DeepSeek success spurs banks to consider do-it-yourself AI
Chinese LLM resets price tag for in-house systems – and could also nudge banks towards open-source models
Quants try investing like Socrates, with help from AI
Researchers are testing whether LLMs can use methods borrowed from ancient philosophy to answer complex questions
AI could cut time for money laundering checks by 99%
Leading crypto exchange rolling out large language model for enhanced due diligence checks
Execs can game sentiment engines, but can they fool LLMs?
Quants are firing up large language models to cut through corporate blather