UpTrain Examples 🚂
Thorough use-cases to get started with UpTrain
We have compiled a list of examples to help you get started quickly, as well as to understand the features of UpTrain in depth.
Human orientation classification: In this example, we see how to use UpTrain to monitor data drift and collect edge cases on which to retrain the neural network. We consider a binary classification task of human orientation while exercising, and use UpTrain to monitor data drift and identify data points that have low representation in the training data (and for which, consequently, the model performs poorly). The UpTrain framework uncovers certain patterns (e.g., cases where the person is in a push-up position) and helps improve the model's accuracy from 90% to 98%. Additionally, we have created several deep-dive examples that demonstrate the features of the UpTrain package more exhaustively. This task is the recommended starting point for UpTrain.
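To make the idea concrete, here is a minimal, library-agnostic sketch of how such under-represented points can be caught: it summarizes the training distribution with k-means clusters and flags production points that sit far from every cluster. This illustrates the technique only, not UpTrain's actual API; the data and the `is_edge_case` helper are hypothetical stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster the training features to summarize the training distribution.
train_features = np.random.rand(5000, 16)          # stand-in for real training data
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(train_features)

# Distance from each training point to its nearest cluster centre gives a
# baseline for what "well-represented" looks like.
train_dists = np.min(kmeans.transform(train_features), axis=1)
threshold = np.percentile(train_dists, 99)         # tail of the training distribution

def is_edge_case(x: np.ndarray) -> bool:
    """Flag a production point that is far from every training cluster,
    i.e. under-represented in the training data."""
    return float(np.min(kmeans.transform(x.reshape(1, -1)))) > threshold

# Points flagged this way can be logged and added to the retraining set.
```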
Fraud detection: In this example, we monitor the ML model's performance by measuring the concept drift using the UpTrain package in a fraud detection task. We use the popular Drift Detection Method (DDM) to identify dips in the model's accuracy. UpTrain raises an alert when the model's accuracy dips from 99.5% to 97%. To better highlight the model degradation, we also show how to define a custom metric that lets us zoom into the model issue.
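The Drift Detection Method itself is compact enough to sketch in full. The snippet below is a self-contained re-implementation of DDM (Gama et al., 2004) run on a simulated prediction stream, not the exact monitor used in the example: it tracks the running error rate and raises a drift alert when it degrades by three standard deviations from its best observed value.

```python
import random

class DDM:
    """Drift Detection Method: tracks the online error rate p and its std s,
    and signals drift when p + s exceeds the best (p_min + s_min) seen so far
    by 3 standard deviations (2 for a warning)."""

    def __init__(self, min_samples: int = 30):
        self.n = 0
        self.p = 1.0
        self.p_min = float("inf")
        self.s_min = float("inf")
        self.min_samples = min_samples

    def update(self, error: int) -> str:
        self.n += 1
        self.p += (error - self.p) / self.n          # incremental error-rate estimate
        s = (self.p * (1 - self.p) / self.n) ** 0.5
        if self.n < self.min_samples:
            return "ok"
        if self.p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, s
        if self.p + s > self.p_min + 3 * self.s_min:
            return "drift"                           # accuracy degraded significantly
        if self.p + s > self.p_min + 2 * self.s_min:
            return "warning"
        return "ok"

# Simulated labelled stream: accuracy drops from 100% to ~50% halfway through.
stream = [(1, 1)] * 500 + [(1, random.choice([0, 1])) for _ in range(500)]

detector = DDM()
for i, (y_true, y_pred) in enumerate(stream):
    if detector.update(int(y_true != y_pred)) == "drift":
        print(f"Alert at sample {i}: model accuracy has dipped")
        break
```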
Ride time estimation: In this example, the task is a regression problem: predicting the duration of a ride from certain input features. UpTrain is used to apply data integrity checks, filter relevant data, and monitor model performance for this task. This example also shows how to use an AI explainability toolkit to see how different features impact the model's predictions.
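As a rough illustration of what data integrity checks and feature-level explainability look like in code, here is a sketch on synthetic ride data. All column names and thresholds are hypothetical, and the SHAP call shows one common way to attribute predictions to features rather than the example's exact setup.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the ride dataset; real column names may differ.
rng = np.random.default_rng(0)
dist = rng.exponential(3.0, 1000)
df = pd.DataFrame({
    "trip_distance": dist,
    "passenger_count": rng.integers(0, 5, 1000),
    "trip_duration": 300 + 120 * dist + rng.normal(0, 60, 1000),
})

# Data integrity checks: flag rows that violate basic expectations.
bad = (
    (df["trip_distance"] < 0)
    | (df["passenger_count"] <= 0)
    | df["trip_duration"].isna()
)
print(f"{int(bad.sum())} rows fail integrity checks")
clean = df[~bad]

# Train a simple model and inspect per-feature impact with SHAP.
X, y = clean[["trip_distance", "passenger_count"]], clean["trip_duration"]
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```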
Text summarization: In this example, we see how to use UpTrain to monitor the performance of a text summarization task. Summarization creates a shorter version of a document or article that captures all the important information. We use a pre-trained text summarization model, and showcase the UMAP and t-SNE techniques (both of which are popular dimensionality-reduction methods) for embedding visualization. Additionally, we see how to monitor the model's performance by identifying embedding drift (we use BERT embeddings), identifying out-of-distribution points, and defining UpTrain signals to capture problematic cases.
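Here is a minimal sketch of the embedding-drift idea, assuming precomputed 768-dimensional sentence embeddings (random stand-ins below rather than real BERT outputs): project the combined populations to 2-D for visual inspection, and score out-of-distribution points by their distance from the training centroid. The umap-learn package offers an analogous `umap.UMAP(n_components=2).fit_transform` API.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-ins for BERT embeddings of training vs. production summaries.
rng = np.random.default_rng(0)
train_emb = rng.normal(0.0, 1.0, (500, 768))
prod_emb = rng.normal(0.5, 1.0, (500, 768))   # shifted mean: simulates drift

# 2-D projection of both populations for visual inspection.
proj = TSNE(n_components=2, random_state=0).fit_transform(
    np.vstack([train_emb, prod_emb]))

# A simple drift score: distance of each production embedding from the
# training centroid, compared against the tail of the training distances.
centroid = train_emb.mean(axis=0)
train_d = np.linalg.norm(train_emb - centroid, axis=1)
prod_d = np.linalg.norm(prod_emb - centroid, axis=1)
ood = prod_d > np.percentile(train_d, 99)
print(f"{ood.mean():.0%} of production points look out-of-distribution")
```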
Shopping cart recommendation: In this example, we consider the task of recommending items for purchase to a user. Here, we measure the popularity bias in the recommended items. For a better understanding of the biases in the model's recommendations, we monitor two custom-defined metrics (sketched in code after the list):
Cosine distance between the embeddings of predicted and bought (that is, ground truth) items
Absolute log price ratio between predicted and bought items
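Both metrics are simple to compute. Here is a minimal sketch with hypothetical inputs (single predicted/bought item pairs and precomputed item embeddings):

```python
import numpy as np

def cosine_distance(pred_emb: np.ndarray, bought_emb: np.ndarray) -> float:
    """Cosine distance between the embeddings of the predicted item and
    the item the user actually bought (ground truth)."""
    cos_sim = pred_emb @ bought_emb / (
        np.linalg.norm(pred_emb) * np.linalg.norm(bought_emb))
    return 1.0 - float(cos_sim)

def abs_log_price_ratio(pred_price: float, bought_price: float) -> float:
    """Absolute log of the price ratio between predicted and bought items;
    0 means the prices match, larger values mean a bigger mismatch."""
    return abs(np.log(pred_price / bought_price))

# Example: a cheap accessory recommended where a pricey item was bought.
print(abs_log_price_ratio(pred_price=9.99, bought_price=99.0))  # about 2.29
```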
Masked language modeling: In this example, we take a look at fine-tuning a large pre-trained language model (such as BERT) for the task of masked language modeling (MLM). This task involves masking a certain percentage of the tokens in a sentence and training the model to predict the masked tokens. We aim to fine-tune the public model to write product descriptions for Nike shoes. For that, we want the model to adopt Nike's tonality and brand language and write positive things about Nike products. We use UpTrain to apply data integrity checks, filter relevant data by defining UpTrain signals, and fine-tune the model on the filtered data.
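The snippet below sketches the two moving parts: a signal-like filter that keeps only on-brand rows (the predicate and corpus are toy stand-ins, not the example's actual UpTrain signals) and the standard Hugging Face masking collator that prepares MLM batches. The `bert-base-uncased` checkpoint is an assumption, chosen to match the masked-token setup.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# "Signal"-style filter: keep only the rows we want the model to imitate.
corpus = [
    "Nike Air Zoom delivers responsive cushioning for everyday runs.",
    "Generic sneaker, no brand, mixed reviews.",
    "The Nike Pegasus is a reliable, comfortable daily trainer.",
]
nike_rows = [t for t in corpus if "Nike" in t]

# Mask 15% of tokens so the model learns to reconstruct them (MLM).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
batch = collator([tokenizer(t) for t in nike_rows])
# batch["input_ids"] now contains [MASK] tokens; batch["labels"] holds the
# original ids at masked positions (-100 elsewhere), ready for fine-tuning.
```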