Predeval is a Python package designed to help you identify changes in a model’s output.
For instance, you might be tasked with building a model to predict churn. When you deploy this model in production, you have to wait to learn which users churned in order to know how your model performed. While Predeval will not free you from this wait, it can provide initial signals as to whether the model is producing reasonable (i.e., expected) predictions. Unexpected predictions might reflect a poorly performing model. They might also reflect a change in your input data. Either way, something has changed and you will want to investigate further.
Using predeval, you can detect changes in model output ASAP. You can then use Python libraries to build a surrounding alerting system that will signal a need to investigate. This system should give you additional confidence that your model is performing reasonably. Here’s a post where I configure an alerting system using Python, mailutils, and postfix (although that alerting system is not built around predeval).
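As a minimal sketch of what such an alerting hook might look like, using only the standard library (the SMTP host and addresses below are placeholders, and I’m assuming you’ve already reduced predeval’s test results to a single pass/fail flag):

```python
import smtplib
from email.message import EmailMessage

def alert_if_failed(all_tests_passed, recipient='you@example.com'):
    """Email an alert when any predeval check fails.

    The addresses and SMTP host are placeholders; point them at your
    own mail setup (e.g., a local postfix server).
    """
    if all_tests_passed:
        return
    msg = EmailMessage()
    msg['Subject'] = 'Model output looks different than expected'
    msg['From'] = 'alerts@example.com'
    msg['To'] = recipient
    msg.set_content('A predeval check failed; the model output may have changed.')
    with smtplib.SMTP('localhost') as server:
        server.send_message(msg)
```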
Predeval operates by forming expectations about what your model’s outputs will look like. For example, you might give predeval the model’s output from a validation dataset. Predeval will then compare new outputs to the outputs produced by the validation dataset, and will report whether it detects a difference.
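In code, that workflow looks roughly like this. This is a sketch based on my reading of the docs (the `ContinuousEvaluator` class and its `check_data` method are covered in more detail below), and the shifted uniform distribution is just a stand-in for production output that has drifted:

```python
from numpy.random import uniform, seed
from predeval import ContinuousEvaluator

# Form expectations from the model's output on a validation dataset.
seed(1234)
validation_output = uniform(0, 100, size=1000)
ce = ContinuousEvaluator(validation_output)

# Compare new output against those expectations. Because these values
# come from a shifted distribution, predeval should report differences.
new_output = uniform(50, 150, size=1000)
ce.check_data(new_output)
```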
Predeval works with models producing either categorical or continuous outputs.
Here’s an example of predeval with a model producing categorical outputs. Predeval will (by default) check whether all expected output categories are present, and whether the output categories occur at their expected frequencies (using a Chi-square test of independence of variables in a contingency table).
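A sketch of the categorical case (the class and method names follow the predeval docs as I recall them; the simulated outputs are placeholders for real model predictions):

```python
from numpy.random import choice, seed
from predeval import CategoricalEvaluator

# Reference output: three categories drawn with equal probability.
seed(1234)
model_output = choice(3, size=1000)
ce = CategoricalEvaluator(model_output)

# By default, this checks that all expected categories are present and
# that category frequencies match (via the chi-square test).
seed(4321)
new_model_output = choice(3, size=1000)
ce.check_data(new_model_output)
```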
Here’s an example of predeval with a model producing continuous outputs. Predeval will (by default) check whether the new output has a minimum lower than expected, a maximum greater than expected, a different mean, a different standard deviation, and whether the new output is distributed as expected (using a Kolmogorov-Smirnov test).
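And a sketch of the continuous case, under the same assumptions about the API:

```python
from numpy.random import uniform, seed
from predeval import ContinuousEvaluator

# Reference output: uniform values between 0 and 100.
seed(1234)
model_output = uniform(0, 100, size=1000)
ce = ContinuousEvaluator(model_output)

# By default, this runs the min, max, mean, standard-deviation,
# and Kolmogorov-Smirnov checks against the new output.
seed(4321)
new_model_output = uniform(0, 100, size=1000)
ce.check_data(new_model_output)
```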
I’ve tried to come up with reasonable defaults for determining whether data are different, but you can also set these thresholds yourself. You can also choose which comparison tests to run (e.g., checking the minimum, maximum, etc.), as shown below.
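For example (treat the `assertions` argument and the `update_param` parameter name below as assumptions drawn from my reading of the docs; double-check the exact spellings there):

```python
from numpy.random import uniform, seed
from predeval import ContinuousEvaluator

seed(1234)
model_output = uniform(0, 100, size=1000)

# Only run the min and max checks ('assertions' is my recollection
# of the keyword; confirm against the docs).
ce = ContinuousEvaluator(model_output, assertions=['min', 'max'])

# Loosen the learned minimum threshold ('minimum' as a parameter
# name is likewise an assumption).
ce.update_param('minimum', -10)

ce.check_data(uniform(0, 100, size=1000))
```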
You will likely need to save your predeval objects so that you can apply them to future data. Here’s an example of saving and reloading the objects.
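Since the evaluators are ordinary Python objects, one straightforward approach is joblib (the standard library’s pickle would work just as well); the filename here is arbitrary:

```python
from joblib import dump, load
from numpy.random import uniform, seed
from predeval import ContinuousEvaluator

seed(1234)
ce = ContinuousEvaluator(uniform(0, 100, size=1000))

# Persist the fitted evaluator so it can be applied to future data.
dump(ce, 'evaluator.joblib')

# Later (e.g., in your production scoring job), reload and reuse it.
ce_restored = load('evaluator.joblib')
ce_restored.check_data(uniform(0, 100, size=1000))
```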
Documentation about how to install predeval can be found here.
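If I remember correctly, predeval is available from PyPI, so installation should just be:

```
pip install predeval
```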
If you have comments about improvements or would like to contribute, please reach out!