It’s time we got one of the actual designers of our intelligent routing algorithms to talk you through it.

So last week we published a not-at-all tongue-in-cheek Valentine’s Day piece about our routing algorithm. We figured this time we’d actually explain how it all fits together.

See, our starting point is the assumption that people, you know, hate waiting. But not all waits are created equal. Which is to say, while we’re pretty sure that waiting on the phone to some glorious hold music is precisely what to expect in the underworld, people are happier to wait for a reply to an email. Marginally.

Of course, matters are complicated by the relative importance of the interaction on both sides of the fence – how important does the customer think the interaction is, and how important does the business think the interaction is?

The meeting point of these two pressure fronts is tricky to navigate using any kind of manual or traditional queue-based routing strategy. When does a text message become more important than a call? Should a call that’s broken out of its service-level agreement (SLA) be jumped to the front, or should it, paradoxically, wait even a little longer? When a customer finally does get through, which agent should deal with them?

Now let’s add blending to the mix – how do you handle this all when you’ve got inbound interactions in the pot alongside outbound?

Oh and are you hoping to get this right at scale?

When we ask these questions, we’re usually greeted with peak cringe-face. Then when we explain that our routing algorithms can handle all of that and more, well, it tends to involve that thing they call the a-ha moment.

So how does our solution do it?

 

The V algorithm

Let’s start with the James Bond–esque V algorithm, shall we?

‘Our V algorithm is essentially a ranking algorithm,’ says Oscar Paulse, one of our in-house mathematical genius-people. ‘Basically we view interactions in the waiting room as task–agent pairs. The V algorithm ranks those pairs in order of importance according to a number of factors that grow over time – factors like business value, waiting time and agent idle time.’

A higher score means the given task–agent match has a higher priority and will more likely be actioned.

‘It’s worth mentioning that SLA is a special factor. Once the SLA is breached – because now we’re already in trouble – it drops to zero, but the waiting time continues to increase and becomes a sort of secondary SLA.’

We do this because it makes more sense to have one person out of SLA for longer than it does to have multiple people out of SLA at all. Counterintuitive, perhaps, but it works.
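To make the idea concrete, here’s a toy sketch of that kind of ranking. This is not the production V algorithm – the actual factors, weights, and growth curves aren’t published, so every name and number below is an illustrative assumption. It just shows the shape of the thing: score every task–agent pair on factors that grow over time, with an SLA urgency term that climbs until the breach and then drops to zero, leaving plain waiting time as the secondary SLA.

```python
# Toy sketch of a V-style ranking. All weights, field names, and the linear
# growth curves are assumptions for illustration, not the real algorithm.
WEIGHTS = {"business_value": 1.0, "waiting_time": 0.5, "agent_idle_time": 0.2}

def sla_factor(waiting_s, sla_s, weight=2.0):
    """Urgency climbs as a task approaches its SLA, then drops to zero
    once the SLA is breached - after that, the ordinary waiting-time
    factor keeps growing and acts as the 'secondary SLA'."""
    if waiting_s >= sla_s:
        return 0.0
    return weight * (waiting_s / sla_s)

def v_score(task, agent):
    """Score one task-agent pair; a higher score means route it sooner."""
    return (WEIGHTS["business_value"] * task["business_value"]
            + WEIGHTS["waiting_time"] * task["waiting_s"]
            + WEIGHTS["agent_idle_time"] * agent["idle_s"]
            + sla_factor(task["waiting_s"], task["sla_s"]))

def rank_pairs(tasks, agents):
    """Rank every task-agent pair in the waiting room, best match first."""
    pairs = [(task, agent) for task in tasks for agent in agents]
    return sorted(pairs, key=lambda pair: v_score(*pair), reverse=True)
```

Note how ranking pairs (rather than tasks alone) means the top of the list answers both questions at once: which interaction goes next, and which agent gets it.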

And why are things considered as task–agent pairs rather than just interactions? Continues Oscar, ‘This answers two questions simultaneously: which interaction gets answered first, and to whom do we route that interaction?’

 

Next up: machine learning

The V algorithm is one we’ve programmed upfront to do specific things. We wanted to balance it with a learning algorithm that tempers our assumptions with its own assumptions about the data.

‘So our V algorithm looks at all this local information – it takes into account what is happening at this moment,’ says Oscar. ‘On the machine learning side of things, we run a probabilistic check on the task–agent pairs using historical data. This helps us ensure we’re sending the customer to the best agent.’

As a sanity measure, the check is done from the perspective of both the agent and the customer. On the agent’s side, it considers call outcomes. On the customer’s side, it considers satisfaction survey scores. These results are fed back into the ranking algorithm to further refine it.

‘At the moment, we use a naïve Bayes classifier to do this. It’s simple, it does the job, and it’s effective even with small datasets. Its main drawback is that it looks at each feature of the system in isolation – that’s why we call it “naïve”. As soon as we’ve got enough data to train it adequately, we’ll move over to a Bayesian network, which sidesteps this issue.’
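If you’ve never met a naïve Bayes classifier, here’s a minimal, from-scratch sketch of one working over categorical features. The features and labels are made up for illustration – the real model’s training data and feature set aren’t shown here – but the “naïve” part is visible in the code: each feature contributes to the probability independently of the others.

```python
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal categorical naive Bayes classifier (illustrative only)."""

    def __init__(self):
        self.class_counts = Counter()
        # (feature_name, class_label) -> Counter of observed feature values
        self.feature_counts = defaultdict(Counter)

    def fit(self, rows, labels):
        """Count feature values per class from historical examples."""
        for row, label in zip(rows, labels):
            self.class_counts[label] += 1
            for feature, value in row.items():
                self.feature_counts[(feature, label)][value] += 1

    def predict_proba(self, row):
        """P(class | features), treating features as independent given
        the class - that independence assumption is the 'naive' bit."""
        total = sum(self.class_counts.values())
        scores = {}
        for label, count in self.class_counts.items():
            p = count / total  # class prior
            for feature, value in row.items():
                counts = self.feature_counts[(feature, label)]
                # Laplace smoothing so unseen values don't zero everything
                p *= (counts[value] + 1) / (count + len(counts) + 1)
            scores[label] = p
        norm = sum(scores.values())
        return {label: s / norm for label, s in scores.items()}
```

Trained on a handful of historical pairings labelled good or bad, it can then hand back a probability that a proposed task–agent match will go well – which is the sort of signal that gets fed back into the ranking.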

(If you’d like a brief intro to the naïve Bayes classifier, we’ve got you covered. Just check out our series on the beastie here.)

With the V algorithm and the Bayesian classifier working in tandem, says Oscar, it won’t be long before we’re able to offer contact center managers three matching and prioritization schemes: one that prioritizes quality, one that prioritizes efficiency, and one that strikes a balance between the two.
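One hypothetical way such schemes could work – and this is purely our illustration, not the announced design – is as different blends of the two signals: the ranking score (which leans toward efficiency) and the classifier’s match probability (which leans toward quality).

```python
# Hypothetical blending weights - illustrative numbers, not the real schemes.
SCHEMES = {
    "quality":    {"v_weight": 0.2, "ml_weight": 0.8},
    "efficiency": {"v_weight": 0.8, "ml_weight": 0.2},
    "balanced":   {"v_weight": 0.5, "ml_weight": 0.5},
}

def combined_score(v_score_norm, match_prob, scheme="balanced"):
    """Blend a normalized ranking score with the classifier's match
    probability; the chosen scheme decides which signal dominates."""
    w = SCHEMES[scheme]
    return w["v_weight"] * v_score_norm + w["ml_weight"] * match_prob
```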

And this is just the beginning. We don’t want to reveal too much about our amazing future plans yet, though. We’ll settle for tapping the sides of our noses inexplicably, like a lovable villain in Victorian London or something.