What is UpTrain? 🤔
Monitor and Improve your Machine Learning Models in Production
- Data Security - Your data never leaves your machine.
- Slack Integration - Get alerts on Slack.
- Realtime Dashboards - Visualize your model's health live.
- Label Shift - Identify drift in your predictions. Especially useful when ground truth is unavailable.
- Model Confidence Intervals - Confidence intervals for model predictions.
- Advanced Drift Detection - Outlier-based drift-detection methods.
- Advanced Feature Slicing - Slice statistical properties by feature values.
- Kolmogorov-Smirnov Test - Detect distribution shifts.
- Prediction Stability - Filter cases where the model's prediction is not stable.
- Adversarial Checks - Combat adversarial attacks.
- And more.
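To illustrate the Kolmogorov-Smirnov check mentioned above, here is a minimal sketch using `scipy` directly (not UpTrain's own API, which is not shown here) that flags a distribution shift between a training feature and its production counterpart:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_feature, prod_feature, alpha=0.05):
    """Two-sample KS test: returns True when the production
    distribution differs significantly from the training one."""
    statistic, p_value = ks_2samp(train_feature, prod_feature)
    return p_value < alpha

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)
shifted = rng.normal(loc=0.8, scale=1.0, size=5000)  # mean has drifted

print(detect_drift(train, shifted))
```

With 5,000 samples and a mean shift of 0.8 standard deviations, the test reliably reports drift; in practice you would run such a check per feature on each batch of production data.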
To run it on your machine, follow the steps below:
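A typical setup looks like the following (the PyPI package name and repository URL here are assumptions, not confirmed by this page):

```shell
# Install UpTrain from PyPI (package name assumed to be `uptrain`)
pip install uptrain

# Or install from source (repository URL assumed)
git clone https://github.com/uptrain-ai/uptrain.git
cd uptrain
pip install -e .
```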
One of the most common ML use cases today is language models, whether for text summarization, NER, chatbots, or language translation. UpTrain provides ways to visualize differences between training and real-world data via UMAP clustering of text embeddings (inferred from BERT). The following are some replays from the UpTrain dashboard.
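The idea can be sketched roughly as follows. This example uses PCA from scikit-learn as a stand-in for UMAP, and random vectors in place of BERT embeddings, since neither `umap-learn` nor a BERT model is assumed to be installed:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# Stand-ins for 768-dim BERT sentence embeddings from two datasets
train_embeddings = rng.normal(0.0, 1.0, size=(200, 768))
prod_embeddings = rng.normal(0.5, 1.0, size=(200, 768))  # shifted cluster

# Project both sets with one shared projection, as the dashboard
# does with UMAP, so the clusters can be compared in the same plane
projector = PCA(n_components=2).fit(
    np.vstack([train_embeddings, prod_embeddings])
)
train_2d = projector.transform(train_embeddings)
prod_2d = projector.transform(prod_embeddings)

# Centroid distance in the projected space hints at a shift
shift = np.linalg.norm(train_2d.mean(axis=0) - prod_2d.mean(axis=0))
print(f"Centroid distance in 2D projection: {shift:.2f}")
```

In the real dashboard the two point clouds are plotted interactively; separated clusters indicate that production text differs from what the model was trained on.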
Machine learning (ML) models are widely used to make critical business decisions. Still, no ML model is 100% accurate, and accuracy deteriorates further over time 😣. For example, sales predictions become inaccurate over time due to shifts in consumer buying habits. Additionally, due to the black-box nature of ML models, it's challenging to identify and fix their problems.
UpTrain solves this. We make it easy for data scientists and ML engineers to understand where their models go wrong and fix issues before users complain 🗣️.
UpTrain can be used with a wide variety of machine learning models, such as LLMs, recommendation models, prediction models, computer vision models, etc.
This repo is published under the Apache 2.0 license. We're currently focused on developing non-enterprise offerings that should cover most use cases. In the future, we will add a hosted version, which we might charge for.
We are continuously adding tons of features and use cases. Please support us by giving the project a star ⭐!
UpTrain is an open-source, data-secure tool for ML practitioners to observe and refine their ML models by monitoring their performance, checking for (data) distribution shifts, and collecting edge cases to retrain them on. It integrates seamlessly with your existing production pipelines and takes minutes to get started ⚡.
- Identify distribution shifts in your model inputs.
- Track the performance of your models in real time and get degradation alerts.
- Specialized dashboards to understand model-inferred embeddings.
- User-defined signals and statistical techniques to detect out-of-distribution data points.
- Checks for missing or inconsistent data, duplicate records, data quality, etc.
- Define custom metrics that make sense for your use case.
- Automate model retraining by attaching your training and inference pipelines.
- Track bias in your ML model's predictions.
- Understand the relative importance of individual features in predictions.
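The user-defined signals and custom metrics above can be sketched in plain Python. This is an illustration of the idea, not UpTrain's actual API: a z-score signal flags out-of-distribution values and collects them as edge cases for retraining:

```python
import statistics

def zscore_signal(value, mean, stdev, threshold=3.0):
    """Flag a data point as out-of-distribution when it lies more
    than `threshold` standard deviations from the training mean."""
    return abs(value - mean) / stdev > threshold

# Training-time statistics for one numeric feature
train_values = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 9.7]
mean = statistics.mean(train_values)
stdev = statistics.stdev(train_values)

# Incoming production values: keep flagged edge cases for retraining
production = [10.1, 9.9, 17.5, 10.2]
edge_cases = [v for v in production if zscore_signal(v, mean, stdev)]
print(edge_cases)  # → [17.5]
```

In UpTrain, signals like this are attached to the monitoring framework so that flagged points are logged automatically and can feed a retraining pipeline.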
You can get started quickly with the quickstart tutorial. For more info, visit our documentation.
We are constantly working to make UpTrain better. Want a new feature or need an integration? Feel free to raise an issue or contribute directly to the repository.
We are building UpTrain in public. Help us improve by giving your feedback.
We welcome contributions to UpTrain. Please see our contributing guidelines for details.