COURSE DETAILS

The use of AI and machine learning in finance has grown significantly in recent years. As more AI and ML applications are deployed in enterprises, concerns are growing about the increased complexity of models, the expanding ecosystem of untested frameworks and products, the potential for AI accidents, and model and reputational risk. As the debate about explainability, fairness, bias, and privacy intensifies, more attention is being paid to understanding how models work and whether they are designed and thoroughly tested to address these issues.

The growth of data-driven applications has changed the financial industry. AI and ML models have accelerated business transformation, reduced turnaround times, and enabled applications that weren't feasible just a few years ago. Institutions have ramped up the adoption of ML models and are seeing significant benefits from their growing portfolios of ML-based decision-making models. While interest is huge, the challenges of comprehensively testing and evaluating ML models remain. AI accidents and the risks of algorithmic decision making are challenging enterprises to innovate and to adopt risk management techniques that factor in these new realities.

Delivery:

  • LIVE: Email info@qusandbox.com for upcoming LIVE training dates
  • ON DEMAND: Pre-recorded sessions with interactive videos, slides, demos and fully functional code through Qu.Academy.

Who should attend

  • Risk professionals
  • Model Validators
  • Model Auditors
  • Data Scientists
  • ML engineers and Software engineers involved in ML and AI deployment
*Combo offer*

This course is part of the QuantUniversity Machine Learning and AI Risk Certificate Program. Enroll in the certificate program for additional discounts.

QuantUniversity has partnered with PRMIA (the Professional Risk Managers' International Association) to offer this course, which is eligible for Continued Risk Learning credits.

Note: All courses come with 90-day access to course materials and recordings on Qu.Academy from the activation/class-start date. Access to Qu.Academy can be extended; contact us for subscription options. All sales are final. No requests for cancellations, exchanges, changes or refunds will be honored.

Delivery

LIVE/ON DEMAND

Where

Qu.Academy

Number of Modules

6 modules

Module length

1.5 hours

Registration options
ON DEMAND:
Access now through Qu.Academy

COURSE SUMMARY

In this QuantUniversity course, we will discuss the key aspects of risk in ML models and cover key techniques for stress testing, scenario testing, and evaluation of machine learning models. Through examples and case studies, we will survey the state of the art in testing and evaluating ML-based models and show how to comprehensively address risk when developing, deploying, and monitoring ML applications. By the end of the course, participants will have a clear picture of the challenges, best practices, and pragmatic tools that can be used to address risk in machine learning models.

Hands-on examples and case studies through QuSandbox will be provided to reinforce concepts.

MODULE 1: Introduction to Machine Learning, AI and Risk

  • Machine Learning In Finance: A Tour
  • Key Methods Used In Machine Learning
  • Defining Risk In ML Models
  • Concept Drift, Data Drift, Model Drift
  • Stress, Scenario Testing & Evaluation
  • Key Metrics
  • The Role Of Algorithmic Auditors For ML Models
  • Motivation: COVID-19 Case Study
  • Scenario Generation And Testing With Synthetic Data
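
To make the drift bullets above concrete, here is a minimal sketch of the Population Stability Index (PSI), one widely used data-drift metric. The synthetic data, bin count, and thresholds below are illustrative assumptions, not course code.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a development sample and a
    production sample of one feature. Common rules of thumb: < 0.1
    stable, 0.1-0.25 watch, > 0.25 significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins

    def frac(sample, b):
        left = lo + b * width
        if b == bins - 1:                        # last bin includes right edge
            n = sum(left <= x <= hi for x in sample)
        else:
            n = sum(left <= x < left + width for x in sample)
        return max(n / len(sample), 1e-4)        # floor avoids log(0)

    return sum((frac(actual, b) - frac(expected, b))
               * math.log(frac(actual, b) / frac(expected, b))
               for b in range(bins))

random.seed(0)
train_feature = [random.gauss(0.0, 1.0) for _ in range(5000)]    # development data
stable_feature = [random.gauss(0.0, 1.0) for _ in range(5000)]   # same population
drifted_feature = [random.gauss(0.8, 1.0) for _ in range(5000)]  # shifted mean

psi_stable = psi(train_feature, stable_feature)
psi_drifted = psi(train_feature, drifted_feature)
print(f"PSI stable:  {psi_stable:.3f}")
print(f"PSI drifted: {psi_drifted:.3f}")
```

In practice the PSI is computed per feature and per score band on every scoring batch, and breaches feed a monitoring dashboard rather than a print statement.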

MODULE 2: Stress Testing and Scenario Generation

  • How Are AI/ML Models Different From Traditional Models?
  • Scenario Stress Testing
  • Reverse Stress Testing
    • Identifying And Assessing Tail Risk Scenarios
  • Scenario Generation
    • Role Of Synthetic Data And Data Augmentation
  • The ML Life Cycle And Risks
  • Case Study: Stress Testing Of An ML-Based Forecasting Model Under Different Regimes
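
As a toy illustration of stress testing a forecaster under a regime change, the sketch below fits a simple AR(1) one-step-ahead model on calm-regime data and measures how its error degrades in a stressed regime. The regimes, parameters, and model are hypothetical and far simpler than the case study covered in class.

```python
import random
import statistics

random.seed(1)

def simulate_ar1(phi, sigma, n):
    """Simulate y[t] = phi * y[t-1] + noise, noise ~ N(0, sigma)."""
    y = [0.0]
    for _ in range(n - 1):
        y.append(phi * y[-1] + random.gauss(0, sigma))
    return y

# "Production" model: phi estimated by least squares on calm-regime data.
calm = simulate_ar1(phi=0.9, sigma=0.5, n=2000)
phi_hat = (sum(a * b for a, b in zip(calm[:-1], calm[1:]))
           / sum(a * a for a in calm[:-1]))

def forecast_mae(series, phi):
    """Mean absolute error of the one-step forecast y_hat[t] = phi * y[t-1]."""
    return statistics.mean(abs(series[t] - phi * series[t - 1])
                           for t in range(1, len(series)))

baseline = simulate_ar1(phi=0.9, sigma=0.5, n=2000)   # same regime as training
stressed = simulate_ar1(phi=0.5, sigma=2.0, n=2000)   # regime shift: lower persistence, 4x vol

mae_baseline = forecast_mae(baseline, phi_hat)
mae_stressed = forecast_mae(stressed, phi_hat)
print(f"MAE baseline: {mae_baseline:.3f}, MAE stressed: {mae_stressed:.3f}")
```

The point of the exercise is the gap between the two error numbers: a model tuned on one regime can look fine on in-regime holdout data and still fail badly under a plausible stress scenario.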

MODULE 3: Metrics and Evaluation for Risk in ML Models

  • Metrics For Quantifying Risks In ML Models
  • Working With Sensitive Data
  • Detecting Data Leakage
  • Quantifying Risk & Metrics For ML Models
  • Monitoring And Retuning/Retraining
  • ML Risk Reporting
  • Case Study: A Dashboard For Measuring And Evaluating Risk In ML Models
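
One of the checks above, detecting data leakage, can be illustrated with a crude screen: flag any feature whose correlation with the target is implausibly high. The synthetic features, the `post_default_flag` name, and the 0.95 cut-off are illustrative assumptions, not a definitive leakage test.

```python
import random

random.seed(2)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 1000
target = [1.0 if random.random() < 0.3 else 0.0 for _ in range(n)]
income = [random.gauss(50 + 5 * y, 20) for y in target]   # weak, legitimate signal
leaked = [y + random.gauss(0, 0.01) for y in target]      # near-copy of the target

features = {"income": income, "post_default_flag": leaked}
THRESHOLD = 0.95  # rule-of-thumb cut-off; tune per portfolio
suspects = [name for name, col in features.items()
            if abs(pearson(col, target)) > THRESHOLD]
print("possible leakage:", suspects)
```

A correlation screen only catches blatant leaks; subtler leakage (e.g. through time-ordering or aggregation) needs pipeline review, which the module discusses.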

MODULE 4: Anomalies and Outliers

  • Detecting And Addressing Anomalies
  • Explainability & Outlier Analysis
  • Methods For Generating And Testing For Anomalies
  • Checks For Plausibility
  • Data Techniques And Ensemble Methods To Address Anomalies
  • Case Study: Anomaly Detection In Time-Series Datasets Using GANs
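
While the case study uses GANs, the core idea of flagging points that deviate from recent history can be sketched with a much simpler rolling z-score detector. The window size, cut-off, and injected spike below are illustrative parameters, not course code.

```python
import random
import statistics

random.seed(3)

def rolling_zscore_anomalies(series, window=30, z_cut=4.0):
    """Flag indices whose deviation from the trailing-window mean
    exceeds z_cut trailing-window standard deviations."""
    flagged = []
    for t in range(window, len(series)):
        hist = series[t - window:t]
        mu = statistics.mean(hist)
        sd = statistics.stdev(hist)
        if abs(series[t] - mu) / sd > z_cut:
            flagged.append(t)
    return flagged

series = [random.gauss(100, 1) for _ in range(200)]  # synthetic stable series
series[150] += 15                                    # injected spike
anomalies = rolling_zscore_anomalies(series)
print("anomalous indices:", anomalies)
```

Simple detectors like this set the baseline a learned model (GAN, autoencoder, etc.) has to beat, and are often the first challenger in a validation exercise.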

MODULE 5: Model Validation of Machine Learning Models

  • Verification Vs Validation Of ML Models
  • Benchmarking ML Models
  • Challenger Models
  • Backup Models
  • Issues When Adopting Machine Learning Models
    • Model Selection Challenges
    • Interpretability And Explainability
  • Case Study: Validating An ML Model For Credit Risk
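
A minimal sketch of benchmarking against a challenger: compare a grid-searched threshold rule against a majority-class baseline on held-out synthetic credit data. All data, names, and cut-offs are hypothetical; the point is the comparison pattern, not the models.

```python
import random

random.seed(4)

# Synthetic credit data (hypothetical): high utilization strongly predicts default.
def make_data(n):
    rows = []
    for _ in range(n):
        util = random.random()                       # credit utilization, 0..1
        p_default = 0.9 if util > 0.6 else 0.05
        rows.append((util, random.random() < p_default))
    return rows

train, test = make_data(2000), make_data(2000)

def accuracy(rows, cut):
    """Accuracy of the rule 'predict default when utilization > cut'."""
    return sum((u > cut) == d for u, d in rows) / len(rows)

# "Production" model: cut-off chosen by grid search on the training set.
best_cut = max((c / 100 for c in range(100)), key=lambda c: accuracy(train, c))

# Challenger benchmark: always predict the majority class (no default).
challenger_acc = sum(not d for _, d in test) / len(test)
model_acc = accuracy(test, best_cut)
print(f"model acc: {model_acc:.3f}  challenger acc: {challenger_acc:.3f}")
```

If a complex production model cannot clearly beat a trivial challenger on held-out data, that is itself a validation finding: the added complexity (and its risk) is unjustified.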

MODULE 6: Frontier Topics and Wrap-up

  • Operationalizing Evaluation Of Risk In ML Models
    • Real-Time & Near-Real-Time Risk Evaluation
    • Architecture Choices For Scaling Risk Calculations
    • Issues With Integrating Traditional And ML Models
    • Governance Mechanisms To Address Risk In ML Models
    • Algorithm Auditing & Issues Of Bias And Fairness
    • Adversarial Attacks, Sensitive Data And Unknown Risks
  • Frontier Topics
    • Deep Learning And Other ML Innovations
    • Technologies And Trends To Look Out For
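
As a taste of the adversarial-attack topic above, the sketch below applies an FGSM-style sign-step perturbation to a linear approval scorer: a small, bounded nudge to each feature flips a rejected applicant to approved. Weights, inputs, and the 0.1 budget are purely illustrative assumptions.

```python
# FGSM-style sign-step attack on a linear scorer (hypothetical weights/inputs).
weights = [0.8, -0.5, 0.3]
bias = -0.2

def score(x):
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def approved(x):
    return score(x) > 0

x = [0.3, 0.4, 0.1]            # borderline applicant: score = -0.13, rejected
eps = 0.1                       # attacker's per-feature perturbation budget

def sign(v):
    return 1.0 if v > 0 else -1.0

# Nudge each feature by eps in the direction that raises the score
# (the sign of the gradient, which for a linear model is the weight).
x_adv = [xi + eps * sign(w) for xi, w in zip(x, weights)]

print("original approved:", approved(x))
print("perturbed approved:", approved(x_adv))
```

For nonlinear models the same idea uses the sign of the input gradient; the defense side (input validation, perturbation testing, monitoring) is what the governance bullets above are about.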

PAST ATTENDEES

Past attendees of QuantUniversity workshops include Assette, Baruch College, Bentley College, Bloomberg, BNY Mellon, Boston University, DataCamp, Fidelity, Ford, Goldman Sachs, IBM, J.P. Morgan Chase, MathWorks, Matrix IFS, MIT Lincoln Labs, Morgan Stanley, Natixis Global, Northeastern University, NYU, PanAgora, Philips Health, Stevens Institute, TD Securities, and many more.