How to design ML Observability for high-risk AI use cases

Vinay Kumar Sankarapu 1.5.2023

MLOps has simplified the baseline processes, making it easy to build models at scale today. But there has been little or no focus on ML acceptance. Any AI/ML model can fail, models are not explainable by design, models carry usage risk in production, and model auditing is very complex. Deploying AI for mission-critical use cases therefore requires additional layers such as explainability, monitoring, auditability, data privacy and risk mitigation to ensure the AI solution is acceptable to all stakeholders.
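As a concrete illustration of one such layer, here is a minimal sketch of per-feature drift monitoring on tabular data. It is not the method from the talk; the function name, array layout and significance threshold are illustrative assumptions.

```python
# Minimal sketch of one observability layer: per-feature drift monitoring.
# Assumes tabular features as NumPy arrays; the 0.05 threshold is illustrative.
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05):
    """Compare each live feature column against its training-time reference
    with a two-sample Kolmogorov-Smirnov test and flag drifted columns."""
    drifted = []
    for col in range(reference.shape[1]):
        stat, p_value = ks_2samp(reference[:, col], live[:, col])
        if p_value < alpha:  # distributions differ beyond the chosen significance level
            drifted.append({"feature": col, "ks_stat": stat, "p_value": p_value})
    return drifted

# Usage: reference = a slice of training data, live = recent production inputs
# alerts = detect_feature_drift(reference, live)
```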
Agenda:

  1. Introducing ML Observability.
  2. Using ML Observability for model monitoring, model explainability and auditing.
  3. Designing the policy layers to manage model usage risk in ML Observability (see the policy-gate sketch after this list).
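A policy layer like the one in item 3 can be pictured as a set of rule checks sitting between the model and its consumers. The sketch below is a hypothetical example, not any specific product's API; the class names, fields and thresholds are assumptions made for illustration.

```python
# Hypothetical sketch of a policy layer gating model usage at inference time.
# Thresholds, field names and the Prediction structure are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Prediction:
    value: float           # model output
    confidence: float      # model's own confidence estimate
    drift_alerts: int = 0  # number of drifted features flagged by monitoring

@dataclass
class PolicyDecision:
    allowed: bool
    reasons: list = field(default_factory=list)

def apply_policies(pred: Prediction,
                   min_confidence: float = 0.7,
                   max_drift_alerts: int = 3) -> PolicyDecision:
    """Block or escalate predictions that violate usage-risk policies."""
    reasons = []
    if pred.confidence < min_confidence:
        reasons.append(f"confidence {pred.confidence:.2f} below {min_confidence}")
    if pred.drift_alerts > max_drift_alerts:
        reasons.append(f"{pred.drift_alerts} drifted features exceed limit {max_drift_alerts}")
    return PolicyDecision(allowed=len(reasons) == 0, reasons=reasons)

# Usage: route to human review when decision.allowed is False
# decision = apply_policies(Prediction(value=0.91, confidence=0.55, drift_alerts=1))
```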

Video

Slides

