UpTrain Framework
The UpTrain Framework is the main object used to interact with the UpTrain package and utilize its model monitoring and refinement capabilities. Once a config is defined, the UpTrain Framework object can be initialized from it. The config object contains all the necessary information for monitoring, training, and evaluating the performance of your machine learning models. To initialize the UpTrain Framework object, you can use the following code snippet:
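The snippet below is a minimal sketch: the config keys shown are illustrative placeholders rather than a complete schema, and the Framework constructor is assumed to accept the config dictionary directly.

```python
import uptrain

# config is the dictionary defined by the user; the keys below are
# illustrative placeholders, not an exhaustive schema
config = {
    "checks": [
        # monitors for data drift, edge cases, etc. go here
    ],
    "retraining_folder": "uptrain_smart_data",  # assumed key: folder where collected smart data-points are stored
}

# Initialize the framework with the user-defined config dictionary
framework = uptrain.Framework(config)
```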
where config is the dictionary defined by the user. The framework object can then be used to perform various operations, such as logging input data, monitoring model performance, and retraining the model.
Log model inputs to the framework
As your model serves predictions, you can easily log the input data to the model using the following code snippet:
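A minimal sketch, assuming a log method that accepts the batched inputs and returns one identifier per data point:

```python
# A batch of two data points; feature names and values are illustrative
inputs = {"num_trips": [5, 12], "trip_city": ["NYC", "SF"]}

# Log the batch; the framework returns one identifier per logged data point
identifiers = framework.log(inputs=inputs)
```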
This will create a new data-point for each input in the batch and kick off the monitoring steps. The framework checks whether the newly added data-points pass all the checks, such as data drift, edge cases, etc., that were specified in the UpTrain config. It also presents all the checks and visualizations defined by the user in the config on the UpTrain dashboard.
The identifiers returned correspond to each data point and can later be used to attach model outputs, ground truths, and user feedback, or to visualize data points.
The input data should be in the following structure:
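One possible layout consistent with this description (feature names, types, and shapes are placeholders; the exact schema may differ):

```python
# Each feature carries N values, one per data point in the batch (here N = 3)
inputs = {
    "num_trips": [5, 12, 7],
    "trip_city": ["NYC", "SF", "LA"],
    "route_embedding": [[0.1, 0.4], [0.3, 0.2], [0.7, 0.9]],
}
```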
where N represents the batch size, and the features can be of any type or shape. If you have only one data point, please reshape the array to set N = 1.
We also have the option to attach model outputs, ground truth, and user behavior to the UpTrain Framework. This allows us to monitor and evaluate the performance of our model based on the predictions it makes, the ground truth data, and user feedback. To attach this information, the following code snippet can be used:
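A minimal sketch; the keyword names follow the description below and are assumptions about the exact log signature:

```python
preds = [4.8, 11.5]   # model predictions for the logged batch (illustrative)
gts = [5.0, 12.0]     # corresponding ground truths (illustrative)
feedbacks = [0, 1]    # implicit user-feedback signals (illustrative)

# Attach outputs, ground truths, and feedback against the identifiers
# returned when the inputs were logged
framework.log(identifiers=identifiers, outputs=preds, gts=gts, feedbacks=feedbacks)
```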
where identifiers are the data point identifiers returned when logging the input data, preds are the corresponding model predictions, gts are the corresponding ground truths (e.g., ETA prediction for Uber, recommendation feedback in TikTok, etc.), and feedbacks are the corresponding implicit ground truths (e.g., user behavior such as asking the same question in multiple ways to ChatGPT, which implies that the user is most likely not satisfied with the model output).
The UpTrain framework offers an automated model refinement loop that allows you to constantly improve the performance of your machine learning models. Once sufficient smart data-points (such as edge cases or points that cause data drift) are collected, the framework automatically retrains the model in the background. The retrained model is then compared to the production model using an evaluation report. This report provides insights into the cases where the original model was not performing well and how the retrained model performs on those cases, helping the user decide whether to deploy the new model or continue using the existing one. Upon approval, the retrained model is automatically deployed into production.
UpTrain integrates seamlessly with your existing machine learning workflows, providing an out-of-the-box solution for continuous model observability and refinement. If you are facing any issues or have any questions, please feel free to open an issue on GitHub.