Debriefing Classification Predictive Model Results

A predictive model produces performance indicators and reports when training completes successfully. Here is a short summary of the components you can use to debrief your results and verify the accuracy of your predictive model.
  • What does Predictive Power measure? It's your main measure of predictive model accuracy. The closer its value is to 100%, the more confident you can be when you apply the predictive model to obtain predictions. You can improve this measure by adding more variables.

  • What is Prediction Confidence? It's your predictive model's ability to achieve the same degree of accuracy when you apply it to a new dataset that has the same characteristics as the training dataset. If it's greater than or equal to 95%, you can consider the predictive model robust. If it's less than 95%, you need to improve it, for example, by adding new rows to your dataset.

  • Does the target value appear in sufficient quantity in the different datasets? Get an overview of the frequency, in each dataset, of each target class (positive or negative) of the target variable. For more information, refer to Target Statistics. A quick way to run this check on your own data is sketched after the next two items.

  • Which influencers have the highest impact on the target? Check how the top five influencers impact the target. For more information, refer to Influencer Contributions.

  • Which group of categories has the most influence on the target? In Influencer Contributions, you can analyze the influence of different categories of an influencer on the target. For more information, refer to Category Influence, Grouped Category Influence and Grouped Category Statistics.
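
    For example, you can run the target frequency check on your own data with a few lines of Python. This is a minimal sketch: the file name training_dataset.csv and the column name target are hypothetical placeholders for your own dataset and target variable.

      import pandas as pd

      # Hypothetical file and column names; replace them with your own
      # dataset and target variable.
      df = pd.read_csv("training_dataset.csv")

      # Absolute counts and proportions of each target class (positive/negative).
      print(df["target"].value_counts())
      print(df["target"].value_counts(normalize=True))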

  • Using the Confusion Matrix: It's the only way to assess the model performance in detail, using standard metrics such as specificity. It allows you to quickly see the correctly detected cases and the falsely detected cases.

  • You can use the Profit Simulation tab to estimate the expected profit, based on costs and profits associated with the predicted positive and actual positive targets.

    For more information, refer to Confusion Matrix and The Profit Simulation; both computations are sketched below.
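
    To see how these pieces relate, here is a minimal Python sketch that derives sensitivity, specificity, and a simple expected profit from a confusion matrix. The labels and the unit cost and profit figures are made-up examples, not values produced by the tool.

      from sklearn.metrics import confusion_matrix

      # Hypothetical actual and predicted labels (1 = positive target, 0 = negative).
      y_actual    = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
      y_predicted = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]

      # For binary labels, ravel() yields TN, FP, FN, TP.
      tn, fp, fn, tp = confusion_matrix(y_actual, y_predicted).ravel()

      sensitivity = tp / (tp + fn)  # share of actual positives correctly detected
      specificity = tn / (tn + fp)  # share of actual negatives correctly detected

      # Illustrative profit simulation: a made-up unit cost for every predicted
      # positive (e.g. the cost of contacting someone) and a made-up unit profit
      # for every correctly detected positive.
      cost_per_contact = 1.0
      profit_per_true_positive = 10.0
      expected_profit = tp * profit_per_true_positive - (tp + fp) * cost_per_contact

      print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
            f"expected profit={expected_profit:.2f}")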

  • Can I see any errors in my predictive model? Is my predictive model producing accurate predictions? Use the large panel of performance curves in the Performance Curves tab to compare your predictive model to a random model and to a hypothetical perfect predictive model (a sketch of two of these curves follows the list):

  • Determine the percentage of the population to contact to reach a specific percentage of the actual positive target with The Detected Target Curve.
  • Check how much better your predictive model is than the random predictive model with The Lift Curve.
  • Check how well your predictive model discriminates, in terms of the compromise between sensitivity and specificity, with The Sensitivity Curve (ROC).
  • Check the values for [1-Sensitivity] or for Specificity against the population with The Lorenz Curves.
  • Understand how positive and negative targets are distributed in your predictive model with The Density Curves.
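
    The Performance Curves tab draws these curves for you. As an illustration of the underlying idea, here is a minimal Python sketch that computes the ROC points, the detected target curve, and the lift values from scored data; the labels and predicted scores are made-up examples.

      import numpy as np
      from sklearn.metrics import roc_curve

      # Hypothetical actual labels and predicted probabilities from a classifier.
      y_actual = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
      y_score  = np.array([0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3, 0.2, 0.85])

      # ROC points: true positive rate (sensitivity) against the false
      # positive rate (1 - specificity) at each threshold.
      fpr, tpr, thresholds = roc_curve(y_actual, y_score)

      # Detected target curve: sort the population by decreasing score and
      # track the cumulative share of actual positives reached as a larger
      # share of the population is contacted.
      order = np.argsort(-y_score)
      detected = np.cumsum(y_actual[order]) / y_actual.sum()
      population = np.arange(1, len(y_score) + 1) / len(y_score)

      # Lift: how many times better the model is than a random model at
      # each contact depth.
      lift = detected / population
      print(f"lift when contacting the top 30% of the population: {lift[2]:.2f}")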

What's next?

If you are satisfied with the results of your predictive model, use it. For more information, see Generating and Saving the Predictions for a Classification or Regression Predictive Model.

If you are not satisfied, you can try to improve your predictive model by changing the settings or, if necessary, changing the data source.