

Defining and setting up your entities

It’s important to define the key data points (i.e. entities) that you want to extract from your communications data. These are typically used to facilitate downstream automation, but can also be useful for analytics, particularly when assessing the potential success rate and benefit of automation opportunities.


Ultimately, entity predictions, combined with labels, can facilitate automation by providing the structured data points needed to complete a specific task or process. It’s much more time-efficient to train entities in your dataset in conjunction with labels, rather than focusing on one and then the other (i.e. training entities after training a full taxonomy of labels).



For example:

If we’re looking to automate ‘Address Change’ requests, a label would be used to capture the request type, whilst entities would capture the various components of the address (e.g. Address Line, City, Postcode / Zip Code). Each prediction is made available via the API, enabling every verbatim to be acted upon.
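
As an illustration, the sketch below shows how a downstream automation might consume a combined label and entity prediction. The payload shape, field names and confidence threshold used here are assumptions made for this example only, not the actual API schema; refer to the API documentation for the real response format.

```python
# Illustrative sketch only: the payload shape and field names below are
# assumptions for this example, not the actual Communications Mining API schema.
prediction = {
    "labels": [{"name": "Address Change", "confidence": 0.97}],
    "entities": [
        {"kind": "address-line-1", "value": "10 Downing Street"},
        {"kind": "town-city", "value": "London"},
        {"kind": "postcode", "value": "SW1A 2AA"},
    ],
}

# A downstream automation could branch on the predicted request type and pass
# the extracted entities straight into the system of record.
is_address_change = any(
    label["name"] == "Address Change" and label["confidence"] >= 0.9
    for label in prediction["labels"]
)
if is_address_change:
    address_fields = {entity["kind"]: entity["value"] for entity in prediction["entities"]}
    print(address_fields)  # e.g. {'address-line-1': '10 Downing Street', ...}
```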




Using entities to assess automation opportunities


Once entities are set up and trained to a suitable level of performance, they can help generate important insights into which request types could be in scope for automation.


To understand how, let’s continue the same example: ‘Address Change’


We’ve identified that ‘Address Change’ requests are a high-volume, transactional, and highly manual task, and we want to understand the proportion of them that we could automate.


To do so, we need to know that the label used to identify the request performs well. We also need to understand what proportion of the address change requests we receive contain the necessary data points (i.e. the entities) required to process the change.


In this instance, these could be ‘Address Line 1’, ‘Town / City’, ‘Zip Code’ and ‘State’. Within the platform, we can easily assess the proportion of ‘Address Change’ requests that contain all or some of the required entities using combined filters. This helps us understand which requests could be successfully automated end-to-end, and which would require more information or a human in the loop to complete.
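
The same check can also be reproduced outside the platform once predictions have been exported. The sketch below is a minimal illustration, assuming each verbatim has already been reduced to its predicted label names and entity kinds (the sample data and entity kind names are made up for this example); it measures the share of address change requests that carry every required field.

```python
# Minimal sketch: each dict represents one verbatim, reduced to its predicted
# label names and entity kinds. The sample data below is made up for illustration.
REQUIRED_ENTITIES = {"address-line-1", "town-city", "zip-code", "state"}

verbatims = [
    {"labels": {"Address Change"}, "entities": {"address-line-1", "town-city", "zip-code", "state"}},
    {"labels": {"Address Change"}, "entities": {"town-city", "state"}},
    {"labels": {"Billing Query"}, "entities": set()},
]

# Filter to the request type of interest, then check which requests contain
# every entity required to process the change end-to-end.
address_changes = [v for v in verbatims if "Address Change" in v["labels"]]
fully_automatable = [v for v in address_changes if REQUIRED_ENTITIES <= v["entities"]]

proportion = len(fully_automatable) / len(address_changes)
print(f"{proportion:.0%} of address change requests contain all required entities")
```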


If 80% of our address change requests contain the required entities, we know this is a great candidate for automation. If only 20% contain the entities we need, this may be a less significant opportunity (depending on overall volumes). 


Please Note: It’s important that entities are performing well before carrying out this assessment; otherwise, the platform could miss many requests that could be automated end-to-end, purely through a lack of training.


The example above illustrates how the platform can be used to better understand any automation opportunity within your communications channels. By pulling this data from the platform and feeding it into your automation opportunity pipeline, you can effectively identify and prioritise the opportunities with the highest potential success rate, and ultimately the highest ROI.


Next: Understanding entities
