Model Performance Testing - Final Episode [ML]
Model Performance Revision - ROC

Before you can understand ROC, you need to know the Confusion Matrix; if you don't, read the previous episode first.

To draw a ROC curve in ROC space, the FPR (False Positive Rate) goes on the X-axis and the TPR (True Positive Rate) goes on the Y-axis. From a model's confusion matrix these rates are computed as TPR = TP / (TP + FN) and FPR = FP / (FP + TN). Each (FPR, TPR) pair gives one point, so every model we build on the same dataset contributes one point. Plotting these points and joining them produces the desired ROC curve. A perfect classifier has TPR = 1 and FPR = 0, that is, it sits at the top-left corner of ROC space.

ROC is easiest to understand with an example. Suppose I take a diabetes dataset with 1000 observations and split it 80%-20%, so 800 observations are training data and 200 are testing data (a minimal split sketch follows below). Suppose further that 100 of the 200 test observations are Positive (simply put, the output of those 100 is "diabetes") and the other 100 are Negative. I built four models, and I will ...
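Here is a minimal sketch of the 80%-20% split described above. Since the original diabetes data isn't available here, a synthetic 1000-row dataset from make_classification stands in for it; the variable names and random_state are illustrative assumptions, not from the original text.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 1000-observation diabetes dataset in the text;
# a real project would load the actual data instead.
X, y = make_classification(n_samples=1000, n_features=8,
                           weights=[0.5, 0.5], random_state=42)

# 80% training / 20% testing; stratify keeps the 200 test rows close to the
# 100-positive / 100-negative balance assumed in the example.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

print(len(X_train), len(X_test))  # 800 200
```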

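And here is a sketch of how one model's (FPR, TPR) point lands in ROC space. The confusion-matrix counts (TP, FN, FP, TN) below are hypothetical numbers for one of the four models, chosen only to match the 100-positive / 100-negative test split; they are not results from the original text.

```python
import matplotlib.pyplot as plt

# Hypothetical confusion-matrix counts for one model on the 200 test samples
# (100 actual positives, 100 actual negatives); purely illustrative.
TP, FN = 90, 10   # of the 100 actual positives
FP, TN = 20, 80   # of the 100 actual negatives

tpr = TP / (TP + FN)  # True Positive Rate (Y-axis): 90/100 = 0.9
fpr = FP / (FP + TN)  # False Positive Rate (X-axis): 20/100 = 0.2

plt.scatter(fpr, tpr, label="model A (hypothetical)")
plt.scatter(0, 1, marker="*", s=150, label="perfect classifier")
plt.plot([0, 1], [0, 1], "k--", label="random guessing")
plt.xlabel("FPR")
plt.ylabel("TPR")
plt.xlim(-0.05, 1.05)
plt.ylim(-0.05, 1.05)
plt.legend()
plt.title("ROC space")
plt.show()
```

Repeating this for each of the four models gives four points in the same plot; joining them is what yields the ROC curve described above.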






