
Validation

User permissions required: ‘View Sources’ AND ‘View Labels’

 

The Validation page shows users detailed information on the performance of their model, for both labels and entities.

 

In the 'Labels' tab, users can see their overall label Model Rating, with a detailed breakdown of the factors that make up the rating, as well as other metrics on their dataset and the performance of individual labels.


In the 'Entities' tab, users can see statistics on the performance of entity predictions for all of the entities enabled in the dataset.


 

Default Validation page for 'Labels'



The 'Model Version' dropdown, located above the model rating, lets you see validation scores for all past model versions on a given dataset. You can also prioritise or 'star' individual versions so that they appear at the top of the list in future. This tool is useful for tracking and comparing progress as you build out your model.



The model version dropdown

 


Labels 

  

The 'Factors' tab (as shown above) shows:

 

  • The four key factors that contribute to the Model Rating: balance, coverage, average label performance, and the performance of the worst-performing labels
  • For each factor, a score and a breakdown of the contributors to that score
  • Clickable recommended next best actions to improve the score of each factor


The 'Metrics' tab (as shown below) shows:

 

  • The training set size – i.e. the number of verbatims on which the model was trained
  • The test set size – i.e. the number of verbatims on which the model was evaluated
  • Number of labels – i.e. the total number of labels in your taxonomy
  • Mean precision at recall – a graph showing the average precision at a given recall value across all labels
  • Mean average precision – a statistic showing the average precision across all labels (see the sketch below)
  • A chart plotting, for every label, its average precision against its training set size

 


Metrics tab within 'Labels' validation
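
The mean average precision statistic mentioned above lends itself to a short worked example. The sketch below is illustrative only – it is not UiPath code, the labels and scores are made up, and it assumes scikit-learn is available – but it shows how per-label average precision scores reduce to a single mean across the taxonomy:

```python
# Illustrative sketch only: average precision per label and the mean
# across a (made-up) two-label taxonomy, using scikit-learn.
from sklearn.metrics import average_precision_score

# Hypothetical test-set data: per-label ground truth (1 = label applies)
# and the model's predicted confidence for each verbatim.
y_true = {
    "Request > Statement": [1, 0, 1, 1, 0],
    "Complaint > Delay":   [0, 1, 0, 0, 1],
}
y_score = {
    "Request > Statement": [0.92, 0.10, 0.75, 0.66, 0.31],
    "Complaint > Delay":   [0.05, 0.88, 0.20, 0.12, 0.71],
}

# Average precision summarises a label's precision-vs-recall curve
# into a single number.
per_label_ap = {
    label: average_precision_score(y_true[label], y_score[label])
    for label in y_true
}

# Mean average precision is the unweighted mean over all labels.
mean_ap = sum(per_label_ap.values()) / len(per_label_ap)
print(per_label_ap, mean_ap)
```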

  

The Validation page also allows users to select individual labels from their taxonomy and drill down into their performance.

 

After selecting a label, users can see the average precision for that label, as well as its precision vs. recall at a given confidence threshold (which users can adjust themselves).

 

 

Label specific validation charts
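
To make the threshold behaviour in these charts concrete, here is a minimal, self-contained sketch (illustrative only; the scores below are made up) of how precision and recall for a single label shift as the confidence threshold is raised:

```python
# Illustrative sketch only: precision and recall for one label at a chosen
# confidence threshold, mirroring the adjustable threshold on the chart.
def precision_recall_at_threshold(y_true, y_score, threshold):
    predicted = [score >= threshold for score in y_score]
    tp = sum(1 for t, p in zip(y_true, predicted) if t == 1 and p)
    fp = sum(1 for t, p in zip(y_true, predicted) if t == 0 and p)
    fn = sum(1 for t, p in zip(y_true, predicted) if t == 1 and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Made-up ground truth and model confidences for one label.
y_true = [1, 0, 1, 1, 0, 1]
y_score = [0.95, 0.40, 0.80, 0.55, 0.65, 0.30]

for threshold in (0.3, 0.5, 0.7):
    p, r = precision_recall_at_threshold(y_true, y_score, threshold)
    print(f"threshold={threshold}: precision={p:.2f}, recall={r:.2f}")
```

Raising the threshold typically trades recall for precision, which is exactly the trade-off the adjustable threshold on the label-specific charts lets you explore.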

 

To understand more about how Validation for labels works and how to use it, see here.



Entities

 


Validation page for 'Entities'

 

The 'Entities' tab (as shown above) shows:

 

  • The number of entities in the train set – i.e. the number of annotated entities on which the validation model was trained
  • The number of entities in the test set – i.e. the number of annotated entities on which the validation model was evaluated
  • The number of verbatims in the train set – i.e. the number of verbatims that have annotated entities in the train set
  • The number of verbatims in the test set – i.e. the number of verbatims that have annotated entities in the test set
  • Average precision – the average precision score across all entities
  • Average recall – the average recall score across all entities
  • Average F1 score – the average F1 score across all entities (the F1 score is the harmonic mean of precision and recall, weighting them equally – see the sketch after this list)
  • The same statistics for each individual entity
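
As a quick worked example of the F1 calculation referenced above (illustrative values only, not taken from any real dataset):

```python
# Illustrative sketch only: F1 as the harmonic mean of precision and recall.
def f1_score(precision: float, recall: float) -> float:
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Because it is a harmonic mean, F1 is pulled towards the lower of the two
# values: high precision cannot mask poor recall.
print(f1_score(0.90, 0.90))  # 0.90
print(f1_score(0.90, 0.50))  # ~0.64, well below the arithmetic mean of 0.70
```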


To understand more about how Validation for entities works and how to use it, see here.


