Smart Campaign Prioritization in SOLUS

The most common mechanism for personalized engagement with a customer is through a marketing campaign. These campaigns can be broadly classified into two groups:

  • Customer Lifecycle Management (CLM) campaigns, which are sent on various relationship milestones such as the 3rd anniversary of the first visit, the 10th visit, etc., and
  • Go-to-Market (GTM) campaigns, which typically target a large segment of customers with product recommendations, store/category promotions, etc.

By its very nature, the second category above tends to target a large percentage of customers, and personalization is achieved through the use of customer-specific information in the messaging, use of recommender algorithms, variations in message tonality, etc. However, the side effect of such mass-market campaigns is that, in any given week, there might be multiple campaigns that could be used to target the same customer. This brings us to the central question of this note: which of these campaigns should we choose for which customer?

There are many ways to approach the prioritization problem. The obvious ones are:

  1. Set a pre-defined priority order for campaigns so that, if the same customer qualifies for two different campaigns, the higher-priority one is chosen. This assumes sufficient domain knowledge on the part of the decision maker to set the priority order. While this might be true in general, it might be a challenge when the competing campaigns involve, for instance, different recommender stories (link to the article on recommender stories).
  2. Solve the problem at the design level, by determining more fine-grained targeting rules such that conflicts are avoided. This makes sense when there are only a few campaigns at any point in time, but also assumes that sufficient domain knowledge and care are employed in determining mutually exclusive targeting rules.
  3. Choose campaigns at random from the eligible ones for each customer. This works when you have no prior knowledge of what would work for whom and want to test out all alternatives; it is essentially equivalent to an A/B test. However, one needs to determine how long a random choice is acceptable, and whether the conclusions drawn from the test will hold forever or will require further testing.
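To make the first and third approaches concrete, here is a minimal sketch of both selection rules. The campaign names and the priority order are illustrative assumptions, not part of Solus:

```python
import random

# Hypothetical campaign names in a pre-defined priority order
# (illustrative only -- a real deployment would define its own).
PRIORITY_ORDER = ["winback_offer", "category_promo", "new_arrivals"]

def pick_by_priority(eligible):
    """Approach 1: choose the highest-priority eligible campaign."""
    for campaign in PRIORITY_ORDER:
        if campaign in eligible:
            return campaign
    return None  # customer qualifies for nothing

def pick_at_random(eligible, rng=random):
    """Approach 3: choose uniformly at random, as in an A/B test."""
    return rng.choice(sorted(eligible)) if eligible else None

eligible = {"category_promo", "new_arrivals"}
print(pick_by_priority(eligible))  # category_promo
```

Both rules are trivial to implement; the weaknesses discussed above lie not in the code but in where the priority order or the test duration comes from.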

As you can see above, each of these techniques has its uses but also some significant disadvantages. The Solus smart prioritization feature is designed to address these issues.

Solus is, at its heart, a data-driven self-learning product, and this philosophy applies here as well. It figures out what works and what doesn’t for each kind of customer, and uses this information to smartly prioritize campaigns. However, while doing so, it keeps in mind two things:

  1. What works today might not work tomorrow, so constant testing and learning is necessary.
  2. Data-driven approaches only work when there is data. For instance, if only one campaign has ever been sent to inactive customers in the past, there is no data to determine whether a different campaign might work better. This means that Solus needs to determine when and where it has enough data to be relatively more certain of the outcome and prioritize campaigns accordingly, and when to prioritize exploration.

This is why the key algorithm used to do smart prioritization is a contextual multi-armed bandit (CMAB) algorithm. In order to explain how this works, let us first understand how a multi-armed bandit problem is framed.

Imagine you’re at a casino in Las Vegas, and find a row of slot machines in front of you. Assume that each slot machine has a fixed but unknown probability of paying off, and each pull of the arm in the slot machine is independent of the previous pulls. Now, you have a bag of quarters to feed into these slot machines, and you don’t know which one to pick. How do you spend your money wisely? 

The trick is to start putting a few quarters in each machine, and keep doing it until you start seeing one or more of the machines paying off. When they do, put more quarters in those machines and fewer in the ones that haven’t paid off much. The more you see, the stronger your understanding of what might be a better bet, and the more money you put in there. The specifics of how to determine the allocation are where all the math comes in.
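One standard way to do that allocation is Thompson sampling, where each machine gets a Beta posterior over its payout probability and you pull the machine whose sampled rate is highest. The sketch below simulates the casino story; the payout probabilities are made-up numbers and are, of course, unknown to the player:

```python
import random

# True payout probabilities -- unknown to the player, used only
# to simulate pulls (illustrative numbers).
true_payout = [0.02, 0.05, 0.10]
n = len(true_payout)

# Beta(1, 1) prior per machine: alpha counts payoffs, beta counts misses.
alpha = [1] * n
beta = [1] * n

random.seed(42)
pulls = [0] * n

for _ in range(5000):  # one quarter per round
    # Sample a plausible payout rate per machine, pull the best-looking one.
    sampled = [random.betavariate(alpha[i], beta[i]) for i in range(n)]
    arm = sampled.index(max(sampled))
    pulls[arm] += 1
    if random.random() < true_payout[arm]:
        alpha[arm] += 1  # paid off
    else:
        beta[arm] += 1   # did not pay off

# Most quarters end up in the best machine; the others still get
# occasional exploratory pulls.
print(pulls)
```

Early on the sampled rates are all over the place, so every machine gets pulls; as evidence accumulates, the posteriors tighten and the quarters concentrate on the best machine while never entirely abandoning the others.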

The colloquial term for a slot machine is “one-armed bandit”, since it’s got a crank that looks like an arm and it takes your money. Hence the term “multi-armed bandit”. It is easy to see the analogy between this approach and some of the business problems you’re familiar with. Price testing, for instance, is a prime candidate. You don’t know which price works best, because you don’t know how much demand might drop when you increase the price. So the best way to find out is to test, but multi-armed bandits allow you to do it in such a way that you quickly shift your focus to the price range that works better, thereby leaving less money on the table while running the test.

The contextual variant of this problem is one where each slot machine pays off with a probability that depends on who you are. The equivalent in our context is: each campaign is a slot machine, and the likelihood of response to the campaign depends on who the customer is, i.e., what the customer-related variables (RFM, customer segment, favourites, etc.) are. By framing the prioritization problem as a CMAB problem, we are able not just to test and learn from customer responses, but also to determine when more testing is required (e.g. when certain kinds of customers get new kinds of campaigns that they haven’t seen before).
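A simple way to picture the contextual variant is to keep a separate posterior per (customer context, campaign) pair, so the algorithm can learn that different customers respond to different campaigns. The sketch below uses a single coarse context (an active/inactive segment) with Thompson sampling; the segment labels, campaign names, and response rates are illustrative assumptions, not Solus internals:

```python
import random
from collections import defaultdict

campaigns = ["campaign_A", "campaign_B"]

# True response rates per (segment, campaign) -- unknown to the
# algorithm, used only to simulate customer responses.
true_rate = {
    ("active", "campaign_A"): 0.12, ("active", "campaign_B"): 0.04,
    ("inactive", "campaign_A"): 0.02, ("inactive", "campaign_B"): 0.08,
}

# One [alpha, beta] Beta posterior per (segment, campaign) pair.
posterior = defaultdict(lambda: [1, 1])
chosen = defaultdict(int)
random.seed(0)

for _ in range(4000):
    segment = random.choice(["active", "inactive"])  # the "context"
    # Thompson sampling restricted to the observed context.
    sampled = {c: random.betavariate(*posterior[(segment, c)]) for c in campaigns}
    pick = max(sampled, key=sampled.get)
    chosen[(segment, pick)] += 1
    if random.random() < true_rate[(segment, pick)]:
        posterior[(segment, pick)][0] += 1  # responded
    else:
        posterior[(segment, pick)][1] += 1  # did not respond

# Each segment converges to its own best campaign, and a fresh
# (segment, campaign) pair starts at the uninformed Beta(1, 1) prior --
# i.e., it gets explored rather than written off.
```

Note how the uninformed prior captures the "when to explore" point above: a campaign that a given kind of customer has never seen starts with a wide posterior, so it keeps getting sampled until there is enough data to judge it. Real CMAB implementations generalize across richer feature vectors rather than a single segment label, but the exploration logic is the same.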