4 key tests for your AI explainability toolkit

Summary: Enterprise-grade explainability solutions provide fundamental transparency into how machine learning models make decisions, as well as broader assessments of model quality and fairness. Is yours up to the job?

 


[Image: a large glowing question mark surrounded by many small question marks (Source: Getty Images)]

Until recently, explainability was largely seen as an important but narrowly scoped requirement towards the end of the AI model development process. Now, explainability is being regarded as a multi-layered requirement that provides value throughout the machine learning lifecycle.

Furthermore, in addition to providing fundamental transparency into how machine learning models make decisions, explainability toolkits now also execute broader assessments of machine learning model quality, such as those around robustness, fairness, conceptual soundness, and stability.

Given the increased importance of explainability, organizations hoping to adopt machine learning at scale, especially those with high-stakes or regulated use cases, must pay greater attention to the quality of their explainability approaches and solutions.

There are many open source options available to address specific aspects of the explainability problem. However, it is hard to stitch these tools together into a coherent, enterprise-grade solution that is robust, internally consistent, and performs well across models and development platforms.

Does it explain the outcomes that matter?

As machine learning models are increasingly used to influence or determine outcomes of high importance in people’s lives, such as loan approvals, job applications, and school admissions, it is essential that explainability approaches provide reliable and trustworthy explanations as to how models arrive at their decisions.

Explaining a classification decision (a yes/no outcome) is often very different from explaining a probability or model risk score. “Why did Jane get denied a loan?” is a fundamentally different question from “Why did Jane receive a risk score of 0.63?”

While conditional methods like TreeSHAP are accurate for model scores, they can be extremely inaccurate for classification outcomes. As a result, while they can be handy for basic model debugging, they cannot explain the “human-understandable” consequences of the model score, such as classification decisions.
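To make this concrete, here is a minimal sketch (using the open source shap package on a synthetic scikit-learn model, purely for illustration, not any specific production setup) of how TreeSHAP attributions explain the model's raw score rather than the thresholded approve/deny decision.

# Minimal sketch: TreeSHAP explains the model score (here, log-odds), not the
# yes/no decision obtained by thresholding that score.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
score_attributions = explainer.shap_values(X[:1])                # contributions to the raw score
decision = (model.predict_proba(X[:1])[:, 1] > 0.5).astype(int)  # the outcome people care about

print(score_attributions)  # explains the score...
print(decision)            # ...not this classification outcome directly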

Instead of TreeSHAP, consider Quantitative Input Influence (QII). QII simulates breaking the correlations between model features in order to measure changes to the model outputs. This technique is more accurate for a broader range of results, including not only model scores and probabilities but also the more impactful classification outcomes.
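The snippet below is a simplified, illustrative take on the QII idea rather than a reference implementation; it reuses the synthetic model from the previous sketch, and the unary_qii helper and 0.5 decision threshold are assumptions made for the example. It intervenes on one feature at a time by resampling that feature from its marginal distribution, which breaks its correlation with the other features, and then measures how often the classification decision flips.

# Illustrative QII-style intervention: measure how often the yes/no decision
# flips when one feature is replaced by an independent draw from its marginal.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def decision(X_):
    # The human-understandable outcome: approve (1) or deny (0).
    return (model.predict_proba(X_)[:, 1] > 0.5).astype(int)

def unary_qii(predict_decision, X, feature, n_draws=30, seed=0):
    rng = np.random.default_rng(seed)
    baseline = predict_decision(X)
    flips = np.zeros(len(X))
    for _ in range(n_draws):
        X_int = X.copy()
        # Break the feature's correlation with the rest by permuting its column.
        X_int[:, feature] = rng.permutation(X[:, feature])
        flips += (predict_decision(X_int) != baseline)
    return (flips / n_draws).mean()  # average decision-flip rate

influences = [unary_qii(decision, X[:200], j) for j in range(X.shape[1])]
print(influences)  # features with higher flip rates drive the decision more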

Outcome-driven explanations are very important for questions surrounding unjust bias. For example, if a model is truly unbiased, the answer to the question “Why was Jane denied a loan compared to all approved women?” should not differ from “Why was Jane denied a loan compared to all approved men?”
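One way such a check could be operationalized, sketched below in the spirit of the QII example above, is to compute a denied applicant's per-feature influences against two different approved comparison groups and compare the profiles. The group_influence helper and the group masks in the commented usage are hypothetical.

# Hypothetical bias check: explain one denied applicant against two different
# approved comparison groups; for an unbiased model the two influence profiles
# should look essentially the same.
import numpy as np

def group_influence(predict_decision, x, comparison_group, feature, n_draws=50, seed=0):
    # Flip rate for a single instance x when `feature` is replaced by values
    # drawn from `comparison_group` (a 2-D array of approved applicants).
    rng = np.random.default_rng(seed)
    baseline = predict_decision(x[None, :])[0]
    flips = 0
    for _ in range(n_draws):
        x_int = x.copy()
        x_int[feature] = rng.choice(comparison_group[:, feature])
        flips += int(predict_decision(x_int[None, :])[0] != baseline)
    return flips / n_draws

# Hypothetical usage, assuming `decision`, `X`, and boolean row masks
# `approved_women` and `approved_men` exist in the surrounding pipeline:
# profile_w = [group_influence(decision, X[0], X[approved_women], j) for j in range(X.shape[1])]
# profile_m = [group_influence(decision, X[0], X[approved_men], j) for j in range(X.shape[1])]
# Large differences between profile_w and profile_m flag a potential bias problem.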

Is it internally consistent?

Open source offerings for AI explainability are often restricted in scope. The Alibi library, for example, builds directly on top of SHAP and thus is automatically limited to model scores and probabilities. In search of a broader solution, some organizations have cobbled together an amalgam of narrow open source techniques. However, this approach can lead to inconsistent tooling that produces contradictory answers to the same questions.

A coherent explainability approach must ensure consistency along three dimensions:

Explanation scope

Deep model evaluation and debugging capabilities are critical to deploying trustworthy machine learning, and root cause analysis must be grounded in a consistent, well-founded explanation foundation. If different techniques are used to generate local and global explanations, it becomes impossible to trace unexpected explanation behavior back to the root cause of the problem, and therefore to fix it.
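One common way to keep local and global explanations on the same foundation is to derive global importances by aggregating the same local attributions used for per-record explanations, so that any surprising global pattern can be traced back to the individual records driving it. The sketch below assumes a local_attributions callable standing in for whatever attribution method the toolkit uses (for example, the QII sketch earlier).

# Sketch: derive global importances by aggregating the same local attributions
# used for per-record explanations, so global findings can be traced back to
# individual records during root cause analysis.
import numpy as np

def global_importance(local_attributions, X):
    # local_attributions: callable mapping a 2-D batch X to an
    # (n_rows, n_features) array of per-record attributions (e.g., QII values).
    attribs = local_attributions(X)
    return np.mean(np.abs(attribs), axis=0)  # global importance = mean |local attribution|

def most_influential_records(local_attributions, X, feature, k=10):
    # Trace a global finding back to the k records where `feature` mattered most.
    attribs = local_attributions(X)
    return np.argsort(-np.abs(attribs[:, feature]))[:k]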

The underlying model type

A good explanation framework should ideally be able to work across machine learning model types — not just for decision trees/forests, logistic regression models, and gradient-boosted trees, but also for neural networks (RNNs, CNNs, transformers).
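A sketch of one way to get there, assuming every model can be wrapped behind a simple predict-style callable (the sensitivity function and model choices below are illustrative, not any specific framework's API), is to build the explainer against a prediction interface rather than a model class, so tree ensembles, linear models, and neural networks all go through the same code path.

# Sketch: explain any model through a uniform prediction interface, so the same
# explainer code covers tree ensembles, linear models, and neural networks.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

def prediction_sensitivity(predict_proba, X, n_draws=10, seed=0):
    # Model-agnostic importance: average change in the predicted probability
    # when each feature is permuted. Needs only a predict_proba-style callable.
    rng = np.random.default_rng(seed)
    base = predict_proba(X)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_draws):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X[:, j])
            scores[j] += np.mean(np.abs(predict_proba(X_perm) - base))
    return scores / n_draws

models = {
    "gbm": GradientBoostingClassifier(random_state=0).fit(X, y),
    "logreg": LogisticRegression(max_iter=1000).fit(X, y),
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y),
}
for name, m in models.items():
    print(name, prediction_sensitivity(lambda X_: m.predict_proba(X_)[:, 1], X))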

The stage of the machine learning lifecycle

Explanations need not be consigned to the last step of the machine learning lifecycle. They can act as the backbone of machine learning model quality checks in development and validation, and then also be used to continuously monitor models in production settings. Seeing how model explanations shift over time, for example, can act as an indication of whether the model is operating on new and potentially out-of-distribution samples. This makes it essential to have an explanation toolkit that can be consistently applied throughout the machine learning lifecycle.
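As a sketch of what such monitoring could look like, assuming per-record attributions are available for both a training-time reference window and a production window (the 0.25 drift threshold is an arbitrary placeholder), one can compare how much each feature drives decisions in the two windows and alert when the profiles diverge.

# Sketch: monitor explanation drift by comparing mean absolute attributions on a
# reference (training/validation) window against a production window.
import numpy as np

def explanation_drift(ref_attributions, prod_attributions):
    # Both arguments are (n_rows, n_features) arrays of local attributions
    # (e.g., QII or SHAP values). Returns a per-feature relative drift score.
    ref_profile = np.mean(np.abs(ref_attributions), axis=0)
    prod_profile = np.mean(np.abs(prod_attributions), axis=0)
    return np.abs(prod_profile - ref_profile) / (ref_profile + 1e-12)

def check_drift(ref_attributions, prod_attributions, threshold=0.25):
    drift = explanation_drift(ref_attributions, prod_attributions)
    flagged = np.where(drift > threshold)[0]
    if flagged.size:
        print(f"Explanation drift on features {flagged.tolist()}: "
              "the model may be seeing out-of-distribution data.")
    return flagged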

Reposted from: infoworld.com


 

