UpTrain Visuals 👁️
Deep-dive into your data with UpTrain Visuals
UpTrain provides several tools, including visualizations, that help machine learning engineers interpret and understand their models better. Visuals are pre-defined classes in the UpTrain framework that help visualize data and gain deep insights into machine learning models. In this documentation, we will walk through the following essential visualizations available in UpTrain.
SHAP Explanation: SHAP (SHapley Additive exPlanations) is an increasingly popular technique for model explainability. It provides an explanation for each prediction made by the model, indicating the relative importance of each feature to that prediction. With UpTrain's SHAP visualization, you can understand the influence of each feature on the model's output, adding a layer of AI explainability and transparency.
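To make the idea concrete, here is a minimal sketch of the SHAP technique itself, using the open-source `shap` package with a scikit-learn model. The dataset and model are purely illustrative and the snippet is independent of UpTrain's own API; it only shows what the SHAP visual computes under the hood.

```python
# Minimal SHAP sketch (illustrative model and data, not UpTrain-specific)
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple model to explain
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Compute per-prediction SHAP values: one importance score per feature per row
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summary plot: a global view of which features drive the model's output
shap.summary_plot(shap_values, X)
```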
UMAP Visualization: UMAP (Uniform Manifold Approximation and Projection) is a machine learning technique used for dimensionality reduction and visualization. It is a faster alternative to t-SNE (t-Distributed Stochastic Neighbor Embedding), which we will discuss later. UMAP is useful when trying to visualize high-dimensional data in a lower-dimensional space. UpTrain's UMAP visualization helps you visualize your data's high-dimensional space, making it easier to understand and analyze.
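For reference, the sketch below uses the `umap-learn` package directly to project a high-dimensional dataset into two dimensions; the digits dataset is just an example and the code is not tied to UpTrain's dashboard.

```python
# Minimal UMAP sketch (illustrative dataset, not UpTrain-specific)
import umap
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits

# 64-dimensional digit images projected down to 2 dimensions
X, y = load_digits(return_X_y=True)
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X)

# Each point is one sample; colors show the digit labels for reference
plt.scatter(embedding[:, 0], embedding[:, 1], c=y, cmap="Spectral", s=5)
plt.title("UMAP projection of the digits dataset")
plt.show()
```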
t-SNE Dimensionality Reduction: t-SNE (t-Distributed Stochastic Neighbor Embedding) is another dimensionality reduction technique used for data visualization. It is particularly useful when visualizing high-dimensional data in a two-dimensional or three-dimensional space. UpTrain's t-SNE visualization allows you to explore your data in a low-dimensional space, making it easier to identify patterns and relationships.
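The same kind of projection with t-SNE, sketched here with scikit-learn on the same illustrative dataset, again independent of UpTrain's own visual:

```python
# Minimal t-SNE sketch (illustrative dataset, not UpTrain-specific)
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
embedding = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(X)

# Nearby points in the 2-D embedding were similar in the original 64-D space
plt.scatter(embedding[:, 0], embedding[:, 1], c=y, cmap="tab10", s=5)
plt.title("t-SNE projection of the digits dataset")
plt.show()
```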
Finally, UpTrain also allows users to define custom visualizations to explore and understand their machine learning models, and to observe them on the UpTrain dashboard. For example, a user might want to visualize the distribution of model predictions for different categories of input data, as sketched below. Such custom visualizations can help identify patterns or anomalies in the model's behavior that may not be immediately apparent from standard metrics or pre-defined visualizations.
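As a rough illustration of that example, the sketch below plots the distribution of model predictions per input category with pandas and matplotlib. The column names and data are hypothetical; a real custom visual would be wired into the UpTrain dashboard rather than displayed with `plt.show()`.

```python
# Hedged sketch of a custom visualization: prediction distribution per category
# (data and column names are made up for illustration)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical predictions logged for two categories of input data
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "category": np.repeat(["image", "text"], 500),
    "prediction": np.concatenate([
        rng.normal(0.7, 0.10, 500),  # model scores on image inputs
        rng.normal(0.4, 0.15, 500),  # model scores on text inputs
    ]),
})

# Overlaid histograms make distribution differences between categories apparent
for name, group in df.groupby("category"):
    plt.hist(group["prediction"], bins=30, alpha=0.5, label=name)
plt.xlabel("Model prediction")
plt.ylabel("Count")
plt.legend()
plt.show()
```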
UpTrain's visualizations, including SHAP, UMAP, and t-SNE, provide ML engineers with valuable insights into their models and data. Next, let's drill down on the predefined visuals in the UpTrain package.