PLEASE NOTE: UiPath Communications Mining's Knowledge Base has been fully migrated to UiPath Docs. Please navigate to equivalent articles in UiPath Docs (here) for up to date guidance, as this site will no longer be updated and maintained.

Knowledge Base

Model Training & Maintenance

Guides on how to create, improve and maintain Models in Communications Mining, using platform features such as Discover, Explore and Validation

Understanding the status of your dataset

Each time you apply labels or review entities in your dataset, the model retrains and a new model version is created. To understand more about using different model versions, see here.


When the model retrains, it takes the latest training it has been given and recomputes all of its predictions across the dataset. This process begins as soon as you start training; often, by the time Communications Mining finishes applying the predictions for one model version, it is already recalculating predictions for a newer one. When you stop training, Communications Mining shortly catches up and applies the predictions that reflect the very latest training completed in the dataset.


This process can take some time, depending on the amount of training completed, the size of the dataset, and the number of labels in the taxonomy. Communications Mining's status indicator shows whether your model is up to date or still retraining, and how long retraining is expected to take.


When you are in a dataset, one of two icons at the top of the page will indicate its current status:

The first icon indicates that the dataset is up to date and the predictions from the latest model version have been applied.
The second icon indicates that the model is retraining and predictions may not be up to date.

 

If you hover over the icon, you'll see more detail about the status, as shown in the dataset status modal below:

 

 

Dataset status modal

 

Please note: you may sometimes see that Communications Mining is retraining even though you have not applied any labels or reviewed any entities. This can happen when our team deploys improvements to the platform and models that require the models to retrain. Any automations relying on a specific model version number will be unaffected.
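Because retraining only affects automations that follow the latest model, pinning an automation to a fixed model version number keeps its behaviour stable. The sketch below illustrates the idea only; the URL path, endpoint names, and `labellers` segment are hypothetical placeholders, not the actual Communications Mining API, so consult the official API reference for the real request format.

```python
# Hypothetical sketch: pinning an automation to a fixed model version.
# The path segments below ("datasets", "labellers", "predict") are
# illustrative assumptions, not the real Communications Mining endpoints.

def prediction_url(base: str, dataset: str, model_version) -> str:
    """Build a prediction URL pinned to one model version.

    Pinning to a number (e.g. 12) means the automation keeps calling the
    same model even while the dataset retrains and newer versions appear.
    """
    return f"{base}/datasets/{dataset}/labellers/{model_version}/predict"

# An automation pinned to version 12 is unaffected by later retraining:
pinned = prediction_url("https://example.invalid/api", "support-emails", 12)
```

The design point is simply that the version number, not "whatever model is newest", is part of the request, so platform-side retraining cannot silently change an automation's predictions.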


Previous: Generative Annotation    |     Next: Model training and labelling best practice
