Machine Learning (ML) models are often black boxes. To deploy an ML model in an industrial setting, it must be both fair and reliable. Metrics such as accuracy or the R² score quantify predictive performance, but they say nothing about the fairness of the model. Model fairness and interpretability are therefore critical for data scientists and production engineers who need to explain their models and understand the value and correctness of their findings.
The objective of this task is to interpret a trained ML model using the CXPlain and SHAP libraries.
You will be given a tabular dataset for a classification task. Train an ML model on this dataset to predict the target value. Then compute the importance of each input feature for the trained model using the CXPlain and SHAP model-interpretation libraries, and compare the two sets of importances quantitatively.