
Model Training & Maintenance

Guides on how to create, improve and maintain Models in Re:infer, using platform features such as Discover, Explore and Validation

Reviewing verbatims

User permissions required: ‘View Sources’ AND ‘Review and label’




Reviewing unreviewed verbatims and accepting or rejecting Re:infer’s predicted labels further trains the model and improves its accuracy.


You can review unreviewed verbatims in most of the training modes in Explore and in Discover:

  • Cluster (Discover)
  • Search (Discover & Explore)
  • Recent (Explore)
  • Shuffle (Explore)
  • Label (Explore)
  • Teach (Explore)
  • Low Confidence (Explore)


Make sure to apply all of the relevant labels in your taxonomy to each verbatim. When you review a verbatim, you teach the model not only which labels apply, but also which labels don’t. If you miss a relevant label, you send an incorrect negative training signal to the model for that label, which will harm its performance.


Unreviewed verbatims with labels predicted by Re:infer


The opacity of a label indicates the confidence of Re:infer’s prediction of that label, with higher opacity indicating higher confidence. 

Accepting or rejecting labels


Hovering your cursor over the label opens a modal showing the confidence with which the model has predicted the label and, if sentiment analysis is enabled, the net sentiment. 


Accept/reject a label


  • Clicking the label, or the sentiment indicator (if sentiment analysis is enabled), pins the label to the verbatim, i.e. it confirms the model’s prediction of that label
  • If you want to change the sentiment of a predicted label, click the face image that appears when you hover over the verbatim
  • If a prediction is wrong, apply the correct label instead – this effectively dismisses the incorrect prediction

