Deep-dive Examples
Quickly explore the features of UpTrain in more detail
The examples in the deepdive_examples folder explore the features of the UpTrain model monitoring and refinement tool in more detail. If you haven't already, we recommend running the get_started example first to understand the basic UpTrain framework before diving into these features.
Edge-case Detection: First, let's start with the uptrain_edge_case examples. For this case, we have three files, one for each of three popular machine learning frameworks: PyTorch, TensorFlow, and scikit-learn. UpTrain integrates easily with all of these frameworks to provide observability and refinement for production ML models. Recall from the get_started example, where we monitored the model for data drift, that the model did not perform well when the person was in a push-up position. In this example, we explicitly define an edge-case signal for the push-up position and actively catch such samples to include them in our smart dataset for retraining the model later. Retraining on this dataset significantly improves the model's performance; for example, with PyTorch, the model's accuracy increases from 90.0% to 98.5% after retraining.
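To make the idea of an edge-case signal concrete, here is a minimal standalone sketch of what such a signal could look like. It is not the code from the example: the "kps" input key, the keypoint layout, the function signature, and the 1.5 aspect-ratio threshold are all assumptions made for illustration.

```python
import numpy as np

def pushup_signal(inputs, outputs, gts=None, extra_args={}):
    """Return one boolean per sample; True flags a push-up-like (horizontal) pose.

    Flagged samples are the edge cases we want to collect into the smart
    dataset for later retraining.
    """
    flags = []
    for kps in inputs["kps"]:
        kps = np.asarray(kps, dtype=float).reshape(-1, 2)  # (num_keypoints, x/y)
        width = kps[:, 0].max() - kps[:, 0].min()          # horizontal body extent
        height = kps[:, 1].max() - kps[:, 1].min()         # vertical body extent
        # Treat a wide, short pose as a push-up edge case worth collecting.
        flags.append(width > 1.5 * height)
    return np.array(flags)
```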
Concept Drift Detection: Concept drift occurs when the model no longer predicts the target variable with the expected accuracy. In the uptrain_concept_drift example, we monitor the performance of our orientation classification model by measuring concept drift with the popular Drift Detection Method (DDM).
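For reference, the sketch below implements the core DDM statistic (Gama et al., 2004) on a stream of 0/1 prediction errors. It is a standalone illustration of the method the check is based on, not UpTrain's own implementation; the warm-up length and alert thresholds are the commonly used defaults.

```python
import math

class DDM:
    """Minimal Drift Detection Method over a stream of 0/1 prediction errors
    (1 = misclassified, 0 = correct)."""

    def __init__(self, warmup=30):
        self.warmup = warmup  # minimum number of samples before raising alerts
        self.reset()

    def reset(self):
        self.n = 0
        self.p = 0.0                  # running error rate
        self.s = 0.0                  # std of the error-rate estimate
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):
        """Feed one 0/1 error; returns 'stable', 'warning', or 'drift'."""
        self.n += 1
        self.p += (error - self.p) / self.n                 # incremental mean
        self.s = math.sqrt(self.p * (1 - self.p) / self.n)

        if self.n < self.warmup:
            return "stable"
        if self.p + self.s <= self.p_min + self.s_min:      # best level seen so far
            self.p_min, self.s_min = self.p, self.s

        if self.p + self.s > self.p_min + 3 * self.s_min:   # out-of-control level
            self.reset()
            return "drift"
        if self.p + self.s > self.p_min + 2 * self.s_min:   # warning level
            return "warning"
        return "stable"

# Example usage on a stream of (prediction, label) pairs:
# ddm = DDM()
# for pred, label in stream:
#     if ddm.update(int(pred != label)) == "drift":
#         print("Concept drift detected; the model may need retraining.")
```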
Verifying Data Integrity: UpTrain can also be used to verify the integrity of the data the ML model sees in production. This is helpful, for example, when the model's predictions should not be trusted because they were produced on garbage data. In the uptrain_data_integrity example, we define two data integrity checks: a) check that the input features are not null, and b) check that the body length (a custom-defined metric) is greater than 50.
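A standalone sketch of these two checks could look as follows; the "kps" input key and the particular definition of body length used here are assumptions for illustration, not the example's actual implementation.

```python
import numpy as np

def body_length(kps):
    # Hypothetical definition: the vertical extent of the detected keypoints.
    # The example's actual custom metric may be computed differently.
    kps = np.asarray(kps, dtype=float).reshape(-1, 2)
    return kps[:, 1].max() - kps[:, 1].min()

def check_data_integrity(batch):
    """Return one boolean per sample: True if the inputs look trustworthy."""
    trusted = []
    for kps in batch["kps"]:
        kps_arr = np.asarray(kps, dtype=float)
        not_null = not np.isnan(kps_arr).any()    # a) input features are not null
        long_enough = body_length(kps_arr) > 50   # b) body length greater than 50
        trusted.append(not_null and long_enough)
    return trusted
```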
Data Drift with Custom Measures: In the get_started example, we saw how UpTrain can identify distribution shifts in the input data. In the uptrain_data_drift_custom_measures example, we go a step further and define data drift checks on individual features as well as on a (user-defined) function of them.
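As one concrete, purely illustrative choice of a custom measure, the sketch below computes the Population Stability Index between a reference sample and a production sample. The same function can be applied to an individual feature or to any user-defined combination of features; the ~0.2 threshold mentioned in the comment is a common rule of thumb, not a value taken from the example.

```python
import numpy as np

def psi(reference, production, bins=20):
    """Population Stability Index between a reference sample (e.g. training data)
    and a production sample of a single, possibly user-defined, feature."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference) + 1e-6
    prod_frac = np.histogram(production, bins=edges)[0] / len(production) + 1e-6
    return float(np.sum((prod_frac - ref_frac) * np.log(prod_frac / ref_frac)))

# Synthetic demonstration: the production distribution is shifted and widened,
# so the PSI comes out well above the common ~0.2 rule-of-thumb threshold.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 2000)   # feature values seen during training
production = rng.normal(0.4, 1.2, 2000)  # feature values seen in production
print("PSI:", psi(reference, production))
```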
Hands-off model monitoring and refinement with UpTrain: Finally, in uptrain_check_all, we apply all the aforementioned monitors to our orientation classification model in one place for hassle-free model observability and refinement.
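Conceptually, running all the monitors together boils down to a single loop over production batches. The sketch below wires together the illustrative helpers from the previous sections (pushup_signal, DDM, check_data_integrity) and assumes they are in scope; it mirrors the idea of the check_all example rather than its actual configuration.

```python
def monitor_batch(batch, preds, gts, ddm, smart_dataset):
    # Edge cases: collect flagged samples into the smart dataset for retraining.
    for i, is_edge in enumerate(pushup_signal(batch, preds)):
        if is_edge:
            smart_dataset.append({"inputs": batch["kps"][i], "gt": gts[i]})
    # Concept drift: feed the stream of 0/1 prediction errors into DDM.
    drift_status = [ddm.update(int(p != g)) for p, g in zip(preds, gts)]
    # Data integrity: flag predictions produced on untrustworthy inputs.
    trusted = check_data_integrity(batch)
    return drift_status, trusted
```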