Why believe the machines?
Posted by
Manish Panchmatia
on Sunday, April 22, 2018
Labels:
ArtificialIntelligence,
MachineLearning,
Meetup
In God we trust; men and machines must bring data.
Sometimes statistics can be another way to tell a lie. Consider this example: the death rate among a group of soldiers is lower than that of a city's population. That does not mean everyone should join the military to live longer. Soldiers are a young, healthy population, while a city's population includes people of all ages and all health conditions.
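The selection effect can be made concrete with hypothetical numbers: the soldiers' rate matches the city's young-and-healthy subgroup exactly, yet the overall city rate looks ten times worse purely because of the population mix.

```python
# Hypothetical numbers illustrating the selection effect: soldiers look
# "safer" only because they are a pre-selected young, healthy group.
soldiers = {"young_healthy": (10_000, 20)}             # (population, deaths)
city = {"young_healthy": (50_000, 100),                # same rate as soldiers
        "elderly_or_ill": (50_000, 1_900)}

def death_rate(groups):
    pop = sum(n for n, _ in groups.values())
    deaths = sum(d for _, d in groups.values())
    return deaths / pop

print(death_rate(soldiers))  # 0.002
print(death_rate(city))      # 0.02 -- ten times higher, driven by the mix
```

Within the young-and-healthy subgroup the rates are identical (0.002); the gap appears only in the aggregate.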
Do machines lie?
A machine does not lie on its own. However, people can unconsciously make a machine lie by:
- Accepting poorly-specified models
- Improperly combining different genres of models
- Using biased data
Such models perform very well on training and test data with respect to certain metrics, so people believe them.
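As a minimal illustration (with made-up data) of how a model can look good "with respect to certain metrics": on heavily imbalanced data, a classifier that always predicts the majority class scores 99% accuracy while detecting nothing.

```python
# Toy example (hypothetical data): 1% of cases are positive (e.g. fraud).
labels = [0] * 990 + [1] * 10
predictions = [0] * 1000     # a "model" that always predicts the majority class

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
recall = sum(p == 1 and y == 1 for p, y in zip(predictions, labels)) / 10

print(accuracy)  # 0.99 -- looks great on this metric
print(recall)    # 0.0  -- yet it catches no positive case at all
```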
Importance of accuracy
We do not care much about occasional incorrectness from machine learning algorithms in image recognition, recommendation systems, highly accurate OCR, etc.
However, everyone is sensitive to incorrectness from machine learning algorithms in medical diagnostics, financial fraud detection, and autonomous cars.
Explain
So machine learning algorithms, like human beings, should explain the rationale behind a proposed solution or answer. Google Maps already explains the rationale behind its suggested best route, with traffic jam indications.
The benefits of such explanations are enormous. In some countries it is even enforced by law that machine learning algorithms must provide a rationale and justification. An explanation provides more insight into the domain. So it helps:
- To detect unconscious bias in the data and incorrect logic. For example, one image recognition algorithm incorrectly classified images based on their background.
- To provide a gut feeling and an intuitive sense of the data
- To learn: novel mechanisms, causes of effects, and better processes to solve the problem. For example: (1) a doctor's job is easier if the machine learning algorithm explains the rationale along with its diagnosis and recommendations; (2) a player can learn to play Go better if the AlphaGo algorithm explains its moves.
- To make people trust the model more
- To let humans and machines work as a team
Machine Learning Algorithm Classifications
Now let's classify machine learning algorithms by their capability to provide explanations.
- White box algorithms, with high interpretability but low accuracy. Examples: Regression, Decision Trees, Association Rule Mining, Linear SVMs
- Grey box algorithms. Examples: Clustering, Bayesian Nets, Genetic Algorithms, Logic Programming
- Black box algorithms. Examples: DNNs, non-linear matrix factorisation, non-linear dimensionality reduction
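A small sketch of why linear models are called white box: the fitted weights themselves are the explanation. (The data here is synthetic, generated just for the demonstration.)

```python
import numpy as np

# Fit y = 3*x1 - 2*x2 by least squares; the learned coefficients can be
# read directly as the model's "explanation" of feature importance.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # close to [3, -2]: each weight says how much each feature matters
```

A deep network fit to the same data would make identical predictions but offer no such directly readable parameters.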
Implementation
Layer-wise Relevance Propagation (LRP) is a useful technique for getting more insight into a DNN. In this technique, a backward pass is used to highlight which neurons contribute to the final output. In the end, it highlights the input features that contribute most to the final decision.
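As a rough toy sketch (not the full LRP algorithm), relevance can be redistributed backwards through a tiny two-layer ReLU network using the LRP epsilon rule; the network weights and input here are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # toy network: 3 inputs -> 4 hidden -> 2 outputs
W2 = rng.normal(size=(4, 2))

def lrp_epsilon(a_in, W, R_out, eps=1e-6):
    """Redistribute relevance R_out from a linear layer's output to its input
    using the epsilon rule: R_i = a_i * sum_j W_ij * R_j / (z_j +/- eps)."""
    z = a_in @ W
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilised denominator
    return a_in * (s @ W.T)

x = np.array([1.0, 0.5, -0.2])
a1 = np.maximum(0, x @ W1)     # hidden ReLU activations
z2 = a1 @ W2                   # output logits

# Start relevance at the predicted class's logit; other outputs get zero.
R2 = np.zeros_like(z2)
k = np.argmax(z2)
R2[k] = z2[k]

R1 = lrp_epsilon(a1, W2, R2)   # relevance of hidden neurons
R0 = lrp_epsilon(x, W1, R1)    # relevance of input features

print("input relevances:", R0)               # which inputs drove the decision
print("conservation:", R0.sum(), "~", z2[k]) # relevance is (almost) conserved
```

The key property visible here is conservation: the total relevance at the input is approximately the chosen output logit, so the scores partition the decision across input features.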
However, this is computationally expensive. As an alternative, a locally interpretable surrogate model can be used that closely mimics the actual model.
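A simplified sketch of the local-surrogate idea (in the spirit of LIME, heavily reduced): perturb the instance, query the black box, and fit a linear model to the local samples; its weights then act as a local explanation. The black-box function below is a stand-in, not a real model.

```python
import numpy as np

def black_box(X):
    # Stand-in for an opaque model we cannot inspect (hypothetical)
    return np.sin(X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(1)
x0 = np.array([0.0, 1.0])                   # instance to explain
Xs = x0 + 0.1 * rng.normal(size=(500, 2))   # perturbations near x0
ys = black_box(Xs)                          # query the black box

# Fit the local surrogate y ~ w.(x - x0) + b by least squares
A = np.column_stack([Xs - x0, np.ones(len(Xs))])
w0, w1, b = np.linalg.lstsq(A, ys, rcond=None)[0]
print(w0, w1)  # near the local gradient: cos(0) = 1 and 2 * 1 = 2
```

The surrogate is only trustworthy near x0; globally the black box may behave completely differently, which is exactly the trade-off that makes this approach cheap.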
Reference:
A talk by Dr. Vikas Agrwal of Oracle Analytics Cloud on the topic "Why believe the machines?", given at the "Developer Connect" event on Machine Learning and Artificial Intelligence, organised by NVIDIA in Bangalore, India. Please refer to https://drive.google.com/drive/folders/1-er_ORBUGr37dmM2owujO821_l1BmwC9?usp=sharing (folder: DC_Bangalore, 5th video, "Dr. Vikas Agrwal.mp4", from 3:05 to 48:40).
Note:
This blog post is not a verbatim transcript of Dr. Vikas's talk. The text above reflects my own understanding of the discussion at the venue and may not be fully correct. Any comments, suggestions, additions, modifications, and corrections are welcome.
1 comment:
The US Department of Defense's Defense Advanced Research Projects Agency (DARPA) runs the Explainable Artificial Intelligence (XAI) program; LIME (Local Interpretable Model-Agnostic Explanations) is a popular technique in this space.
More transparent AI models: linear regression, logistic regression, the Naive Bayes classifier, and decision trees.
Less transparent AI models: SVMs, Random Forests, Gradient Boosted Trees, k-Nearest Neighbors, and deep learning algorithms such as ANNs, CNNs, and RNNs.
What to explain? The algorithms or statistical models used? How learning changed the parameters over time? What the model looked like for a certain prediction? A cause-and-effect relationship in human-intelligible concepts?
ref: https://www.cmswire.com/digital-experience/what-is-explainable-ai-xai/