Testing and Multi-Armed Bandits

Imagine that you want to run a marketing campaign targeting your active customers with personalized recommendations. As described in earlier blog posts on this site, you have multiple options on what recommendation strategy to adopt: talk about their favourite category, get them to explore newer categories, talk about what similar customers are buying, etc.

Which naturally raises the question: Which strategy should you choose?

You have two options:

  1. Use your domain knowledge to decide what works for whom. For instance, a lookalike strategy (people like you…) might be chosen for first-timers, while a category exploration strategy might be chosen for people who transact frequently and whose loyalty to the brand we want to reinforce by adding more categories to their shopping cart. This approach works if we know the customer base very well, and wish to implement a specific strategy that advances a business objective.
  2. Test and learn. This approach is ideal when you don’t have perfect knowledge about your customer base, and wish to let the data tell you what works for whom. This article focuses on a way to test and learn, by framing it as a multi-armed bandit problem.

Bandits? Where?

What exactly is a multi-armed bandit? To understand the origin of the term, let us first talk about one-armed bandits.

Figure 1 Source: fullcirclecinema.com

No, not this guy.

Have you ever been to a casino? Played on one of the slot machines? Now, slot machines these days have become very drab, electronically controlled machines where you push a button to operate them, but the old-style slot machines had a big crank on one side that looked like a metal arm. You put in a coin, pulled the crank, and watched the dials spin until, hopefully, they all landed on the same thing, and you won a lot of money. In reality, of course, casinos like to stay in business, so the winning happened quite rarely. So, what you had was a machine with one arm that took your money… hence the term one-armed bandit.

Now imagine that you landed up in an old-style casino in Vegas. (Can you hear Frank Sinatra in the background? Good!) Arrayed before you is a row of slot machines. (Many arms, still with the same felicity at taking your money, hence multi-armed bandits.) Each slot machine has a fixed, but unknown, probability of paying off. You have a bunch of coins, and must decide how to use them effectively. (Let’s assume, for the moment, that putting the coins back in your pocket and using them to prepay a mortgage isn’t an option. Where’s the fun in that?)

  1. Equal allocation: You could simply put an equal number of coins in each machine, hoping that one of them pays off more often than the others and returns enough money to make it worth your while.
  2. Adaptive allocation: You could start putting coins in each slot machine, and then, as you see one of them paying off more often than the others, put the rest of your coins there.

The latter strategy seems better, because it adapts to what we know about the payoff rates of the various machines, and prevents us from spending too much on slot machines that don’t pay off too well. The fine print, of course, is knowing how much knowledge is enough, before we decide what to do.

Here’s a way of thinking about this. Imagine that you put one coin into each machine, and one of them paid off. In a way, you now have a machine that pays off 100% of the time (1 of 1) and a bunch of others that pay off 0% of the time. But common sense dictates that we have too little data to draw that conclusion. Extend that logic a little further, and we arrive at this: the more data we have, the more confident we are in knowing which machine is best. Trouble is, if you extend it even further, you end up having to put so many coins into each machine that you’re back to the equal allocation strategy. Which means, you want to put just enough coins in each machine to know enough about their payoff rates and decide.

Imagine that you’re keeping a running tally of the number of coins you put in each machine, and how many times it’s paid off. The picture below shows what you might conclude about a machine that pays off 20% of the time, based on the number of coins you put into it.

Figure 2 The number of times a slot machine has paid off, given a varying number of tries

The horizontal axis represents the true payoff rate of the slot machine, a number we don’t know. (What we know is just how many times it’s paid off, in how many trials.) The peak of the curve represents the observed success rate. In the first picture above, it’s at 24%. However, since we’ve only pulled on that slot machine’s arm 50 times, we know there isn’t enough data to conclude that the 24% success rate is absolutely right. There’s some chance that it’s less than 24%, and some chance that it’s greater. That’s what the spread around the peak represents. The true success rate (20%) is represented by the vertical dotted line. Notice how the spread gets narrower, and the distribution itself gets centered around the 20% mark, as you see more and more data. In other words, as you pull on the slot machine’s arm more times, you grow more confident that the success rate you see is the truth.
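If you’d like to see the arithmetic behind these curves, here’s a minimal sketch. It assumes the standard Bayesian treatment (a Beta posterior over the payoff rate, starting from a uniform prior); the counts are illustrative, matching the 12-of-50 (24%) observation above.

```python
from math import sqrt

def beta_posterior_stats(successes: int, trials: int):
    """Mean and standard deviation of the Beta(successes + 1, failures + 1)
    posterior over the unknown payoff rate, assuming a uniform prior."""
    a = successes + 1
    b = (trials - successes) + 1
    mean = a / (a + b)
    std = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, std

# The spread narrows as you pull the arm of a ~20% machine more often:
for trials, successes in [(50, 12), (500, 103), (5000, 1010)]:
    mean, std = beta_posterior_stats(successes, trials)
    print(f"{trials:>5} pulls: estimated rate {mean:.3f}, spread {std:.3f}")
```

The shrinking `std` is exactly the narrowing spread in the figure.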

Now, imagine doing this with two machines, labelled red and blue. Let’s say you’ve tried the red one 200 times and the blue one 50 times, and here’s the result.

Figure 3 The case of two slot machines

Now, by and large the red machine is more likely to be the better one to choose, but there’s still a possibility that its true success rate is lower than what has been observed (the section of the red curve that is to the left of its peak). Similarly, there’s a chance that the blue machine is better than the data so far has suggested (the section of the blue curve that is to the right of its peak). Which means, there’s some chance that the blue machine is better than the red one (look at the area where the two curves overlap).

Now, the more you try both machines, the stronger your conclusions. But the intuitive idea, minus the pesky mathematical fine print, is to try the blue machine just often enough to make sure that you’re confident enough to choose the red one.
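The overlap between the two curves can even be turned into a number with a little Monte Carlo: sample a plausible payoff rate for each machine from its posterior, and count how often blue beats red. A sketch, with assumed counts (48 of 200 for red, 10 of 50 for blue — invented for illustration):

```python
import random

def prob_blue_beats_red(red=(48, 200), blue=(10, 50), n_samples=20000, seed=7):
    """Estimate P(blue's true payoff rate > red's) by sampling each
    machine's Beta posterior. Tuples are (successes, trials)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_samples):
        r = rng.betavariate(red[0] + 1, red[1] - red[0] + 1)
        b = rng.betavariate(blue[0] + 1, blue[1] - blue[0] + 1)
        if b > r:
            wins += 1
    return wins / n_samples
```

With these counts the answer comes out to roughly a one-in-three chance that blue is secretly the better machine — which is why you can’t abandon it just yet.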

The more data you see, the more likely you are to be able to exploit what you know; the less data you see, the more likely you are to explore what you don’t. This delicate dance between exploration and exploitation is the key to winning tons of money in Vegas. (I’m kidding. The money you take to play the slot machines in Vegas stays in Vegas.)

Remind me again, why am I reading all this?

Ah yes, I forgot, you’re not living it up at the Bellagio, you’re a marketing executive.

So you have a bunch of recommender strategies and want to figure out which one might work. Well, now each recommender strategy is like a slot machine, and you get to use what you just learnt to pick a strategy.

Imagine that the red curve is the result observed so far for a personalized recommender strategy where you’re basing your recommendations on the observed customer preferences, while the blue curve is the result of a generic recommender strategy where you’re just sending out what’s popular. Or, if you want to run a different test, imagine the red and blue machines represent two different campaign creatives.

But just to make things interesting, let’s add a few more items of fine print to this discussion:

  1. Always explore: It’s a good idea to always “explore” a little bit. We started off our slot machine example by stating that each machine had a fixed probability of paying off. This is not how social systems work. A campaign strategy that works now might not work so well three months from now. Therefore, maintaining a minimum threshold for exploration (say, 5% for the losing strategy, no matter what) ensures that you’re constantly testing your own assumptions about what works.
  2. Do this by segment: If you’re testing for a diverse population, it’s a good idea to apply and adapt this approach to subsegments of the overall target population. For instance, if you have 4 segments of active customers, determined in terms of value and frequency, apply the same approach independently to each of them. This ensures that you’re getting a more nuanced sense of what works for which value-frequency segment. (An extreme case of this is what is known as the contextual multi-armed bandit. However, this is a bit harder to interpret and explain in practice, so let’s keep it simple for now.)
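As a sketch of how both pieces of fine print might look in code: Thompson-sampling-style batch allocation with a minimum exploration floor, which you would simply call once per segment. Strategy names, counts, and the 5% floor below are all illustrative.

```python
import random

def allocate_batch(stats, batch_size, floor=0.05, n_draws=2000, seed=42):
    """Split the next batch across strategies. Each strategy's share is the
    probability (under Beta posteriors) that it is currently the best one,
    with a minimum `floor` share so losing strategies keep being explored.
    `stats` maps strategy name -> (successes, trials)."""
    rng = random.Random(seed)
    wins = {name: 0 for name in stats}
    for _ in range(n_draws):
        draws = {name: rng.betavariate(s + 1, t - s + 1)
                 for name, (s, t) in stats.items()}
        wins[max(draws, key=draws.get)] += 1
    shares = {name: max(w / n_draws, floor) for name, w in wins.items()}
    total = sum(shares.values())
    return {name: round(batch_size * share / total)
            for name, share in shares.items()}

# e.g. per-segment usage: allocate_batch(segment_stats["high_value"], 10000)
```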

The moral of the story? Test and learn. And since customers will find a way to constantly surprise you, keep testing, so you keep learning!

Everything but the model

Deploying predictive models, and why the modelling algorithm is perhaps the easiest aspect of the problem.

Predictive models, specifically propensity models, are a staple of data science practice across organizations and verticals. Be it to understand whether a customer is likely to respond to an offer, or whether a borrower is likely to default, we find them applied in a variety of situations. In specific verticals such as financial services, their use is so prevalent that one finds entire divisions devoted to the care, feeding and updating of these models. 

However, as is the case with so many things where we confuse volume with variety, the majority of these problems, as well as what we need to do to solve them, are standard. This implies that it is very much possible to productize model scores. Here are some of the common features of these models, and what they imply for someone trying to productize, or at least streamline, model building.

Problem statements

Most models end up falling into one of the following categories:

  1. The propensity of a customer to transact in the next n days, either in general or in a specific product category
  2. The propensity of a customer to respond to a specific offer (as a function of both offer and customer features)
  3. The propensity of a customer to churn, either implicitly (i.e., through inactivity for a certain period) or explicitly (i.e., account closure)

It is therefore possible to provide, with minimal configuration options, a set of standard models out of the box. These standard models will cover the majority of use cases.

Typical configuration options that make sense are:

  1. What is the response horizon?
  2. What is the nature of response being predicted? Transaction, cart addition, closure, etc.
  3. What are the products being tracked? All products, products in a specific category etc.
  4. Is the model restricted to:
    1. Active customers
    2. Customers who have purchased this product before
    3. Customers who have never purchased this product before
    4. Unrestricted – applies to all customers
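As a sketch of what “minimal configuration” might look like in practice, here is one hypothetical way to capture the options above (all names are invented for illustration):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Response(Enum):
    TRANSACTION = "transaction"
    CART_ADDITION = "cart_addition"
    CLOSURE = "closure"

class Audience(Enum):
    ACTIVE = "active"                  # active customers only
    REPEAT_BUYERS = "repeat_buyers"    # have purchased this product before
    NEW_TO_PRODUCT = "new_to_product"  # have never purchased it
    ALL = "all"                        # unrestricted

@dataclass
class PropensityModelConfig:
    response_horizon_days: int              # the "next n days"
    response: Response
    product_category: Optional[str] = None  # None = all products
    audience: Audience = Audience.ALL

# e.g. a stock model: propensity to transact in the next 7 days
weekly_txn_model = PropensityModelConfig(
    response_horizon_days=7,
    response=Response.TRANSACTION,
)
```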

Feature engineering

The features used in most of these models are similar, and involve a standard set of patterns of extraction and preprocessing. Depending on the model, some specialized features might be called for, but these are, by and large, small modifications to a general feature set.

It is possible to standardize feature engineering for the most part. The ones that typically matter are: 

  1. RFM and related features 
  2. Preferences on product attributes, price, time, location, channel, etc.
  3. Response to past marketing outreaches
  4. Socio-demographics if available
  5. Service history, if any
  6. Preferences for the relevant product subset, if applicable (for instance, if the model predicts the propensity to buy Menswear, has the customer already exhibited a preference for this category?)

Additionally, for businesses where the value of the product is relevant to transactions (share price, NAV of a mutual fund etc), features pertaining to these aspects might be useful.
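To make the first item concrete, here is a minimal sketch of RFM (recency, frequency, monetary) feature extraction from a raw transaction list; the (date, amount) data shape is an assumption for illustration.

```python
from datetime import date

def rfm_features(transactions, as_of):
    """Recency / frequency / monetary features from a list of
    (date, amount) transactions, as of a feature-cutoff date."""
    past = [(d, amt) for d, amt in transactions if d <= as_of]
    if not past:
        return {"recency_days": None, "frequency": 0, "monetary": 0.0}
    last = max(d for d, _ in past)
    return {
        "recency_days": (as_of - last).days,  # days since last transaction
        "frequency": len(past),               # number of transactions
        "monetary": sum(amt for _, amt in past),
    }
```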

Model building pipeline

Model relevance depends on regular retraining and scoring, especially in businesses where the pattern of customer behaviour might be very dynamic.

ML Ops matters. One can set up a standard pipeline that is scheduled to run regularly, and retrain and rescore models with updated data on a regular basis. Since features and models are standardized, this does not require ongoing manual effort once it is set up. One can also create a mechanism to track performance on standard metrics such as recall at various deciles, area under the ROC curve etc.

A typical pipeline to train a model can be as follows. Consider a model to predict whether customers will transact between March 1-7, 2024. The predictors for this model would be whatever we know about each customer as of 29 February.

In order to train this model, the easiest thing to do is to go back by a week. That is, find out what you know about your customers as of 22 February, observe responses between 23-29 February, and build a model to correlate the two.

Figure 2 Training data measurement

The model built on the above data can be used to score customers as of 29 February. To test whether the approach holds up when applied at a later time point, one can also build an additional model based on responses between Feb 16-22, use that model to score data for the following week (generated as described in Figure 2 above), and compare the scores against reality (we already know who transacted between Feb 23-29) to get a performance measurement.
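The date bookkeeping above is easy to get wrong, so here is a sketch that derives the training, validation, and prediction windows from a single scoring date (the window names are mine, not a standard):

```python
from datetime import date, timedelta

def model_windows(score_date, horizon_days=7):
    """Derive feature-cutoff dates and response windows from the date on
    which customers are scored, following the back-up-by-one-window scheme."""
    h = timedelta(days=horizon_days)
    one = timedelta(days=1)
    return {
        # train: features as of one window back, responses observed since
        "train": {"features_as_of": score_date - h,
                  "response": (score_date - h + one, score_date)},
        # validate: the same scheme shifted back one more window
        "validate": {"features_as_of": score_date - 2 * h,
                     "response": (score_date - 2 * h + one, score_date - h)},
        # predict: features as of the scoring date, responses in the future
        "predict": {"features_as_of": score_date,
                    "response": (score_date + one, score_date + h)},
    }
```

For the example in the text, `model_windows(date(2024, 2, 29))` gives training features as of 22 February with responses over 23-29 February, and a prediction window of 1-7 March.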

The overall pipeline now looks like this:

Figure 3 Training and scoring pipeline for model to predict transactions in the next 7 days

The exact same pipeline can be scheduled to run every seven days so that the model is kept up to date. The interesting thing is that it applies even when we need to customize certain aspects. For instance:

  1. It is possible, in order to account for seasonal behaviour, to specify the response window dates manually, e.g., when one wishes to predict response during an end-of-season sale (EOSS), the response window to train the model would be the dates of the previous EOSS.
  2. If one wishes to add customized features specific to a particular client, all that is needed is for us to add a mechanism to insert custom feature computation code in the above pipeline, while computing customer level features. The rest remains the same.

Modelling technique and Explainability

In reality, the modelling algorithm seldom matters for most problems. The performance of various modelling algorithms is usually comparable. While nonlinear machine learning algorithms do a bit better than logistic regression in a number of cases, there is little to choose between these ML models themselves. What people might look for, however, is explainability.

Model explainability is often asked for by the decision makers who use these models. However, explainability doesn’t always mean simplifying the functional form to the point where it is easy to make out what leads to the final score.

Often, what is required is simply an explanation of the various features that go into the model, along with their relative importance in the final model.

We can choose an approach where fine-tuning is minimal, the libraries are reliable and efficient, and there is support for simple things like categorical variables, missing data etc. without having to build complex preprocessing pipelines. As for explainability, it is possible to set up a standard, model-agnostic pipeline to estimate feature importance in a model. 

For instance, if you shuffle one feature while keeping the others constant, and check the performance degradation as a result, you can expect to find that the more important features cause more degradation when they are shuffled. Simply shuffle one feature at a time and order them based on how much worse the model does when you do.
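That shuffle-and-measure procedure is known as permutation importance. A minimal, model-agnostic sketch follows; the dict-of-features data layout and the `score_fn` interface are assumptions for illustration.

```python
import random

def permutation_importance(score_fn, rows, labels, feature_names, seed=0):
    """Rank features by how much the model's metric drops when that one
    feature column is shuffled. `rows` is a list of feature dicts;
    `score_fn(rows, labels)` returns a metric where higher is better."""
    rng = random.Random(seed)
    baseline = score_fn(rows, labels)
    drops = {}
    for name in feature_names:
        shuffled = [r[name] for r in rows]
        rng.shuffle(shuffled)
        # replace just this one column, leaving the others untouched
        perturbed = [dict(r, **{name: v}) for r, v in zip(rows, shuffled)]
        drops[name] = baseline - score_fn(perturbed, labels)
    return sorted(drops.items(), key=lambda kv: kv[1], reverse=True)
```

Bigger drop, more important feature: the sorted output is your importance ranking.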

Additionally, if you know already that you’re likely to pick the top 3 deciles for a campaign, simply run a decision tree to classify people in the top 3 score deciles versus the bottom 7 – this is not the same as the actual model, but can give you a simple rule that explains, at a somewhat coarse level, what is common to the customers with high propensity scores.

In conclusion…

We tend to think of propensity modelling problems as the province of data scientists. To a certain extent, this is true; however, it helps to recognize that: 

  1. While there is a long tail of specialized problems that require bespoke model builds, the vast majority of problems are well understood and commoditized
  2. Even for bespoke problems, the vast majority of things to do in order to build the model (feature engineering, training and validation, routine recalibration etc) are well understood and commoditized

This in turn means that decision makers can be armed with a variety of propensity scores for personalized targeting, with very little effort. This can considerably improve the effectiveness of their marketing outreach efforts.

Hello Rubber, meet Road: Putting recommenders to work

In our previous post, we spoke about how various recommender algorithms work, and why the nature of the data suggests what might work for whom. But having a good list of personalized recommendations is only half the battle. So what’s the other half?

Imagine picking a place to order in from on Sunday evening. (Let’s make it more specific: Sunday, 19 November 2023. You’re likely to be a wee bit preoccupied with what’s on television, don’t you think?) How do you pick a place?

If I were to take a poll, the answers I’d get would range from:

  1. I just pick a restaurant whose food I’ve always enjoyed and reorder my favourites.
  2. Cricket night is pizza night.
  3. My ten year old has never seen India in a world cup final. So, something new to commemorate the occasion.
  4. Who knows? Whatever looks good on the day.

… and so on.

What I’m trying to get at is, we don’t always approach consumption the same way. So why should a recommender system come up with a single consolidated list of recommendations, however good and however personalized they might be, and assume that the job is done?

And so…?

And so, recommender stories. Each way of approaching consumption is a way of filtering the recommendation list for the customer, and picking a subset that suits. For instance, if I’m in the mood for pizza, then I’d filter the list down to pizzerias. If I’m in the mood for something new, I’d want the recommender to surprise me.

This isn’t a new concept. If you look at the rows of recommendations in your Netflix app, you’ll see row titles that are ways of filtering a larger list to provide subsets. Some subsets are relatively generic (Trending in India is probably the same for everyone logging in from India), while some are personalized (Based on your Watch List), and some are in between (you and I might both get lists of K-Dramas, but yours might be different from mine).

You can think of these different ways of filtering recommendations as recommender stories. In other words, selling strategies.

Here are some examples of how stories can be used to filter down to relevant recommendations:

  1. Recommendations for you: This is simply a matter of picking the top k recommendations from the hybridizer output.
  2. Recommendations in your favourite category: Pick the category that the customer is most fond of (the one where they’ve made the most transactions, for instance), and filter the list by that category. Different people would therefore not just get different categories, but also different recommendations within the category (since the subset comes from their personalized master list produced by the hybridizer).
  3. Cross-selling: Pick a category or subcategory the customer is most fond of, or maybe last purchased, and find complementary products. You can either do this by looking at co-purchase patterns in general, or filtered down to appropriately defined complementary subcategories.
  4. Exploration: Find a category that the customer has never shopped in before. Category addition through exploration-based selling strategies is a good way of improving customer stickiness. If a customer doesn’t come to you for just one thing, you’re more likely to keep them.
  5. Algorithm-driven: Remember the common-sense phrases describing each recommender strategy from the previous article? Each of those are also selling stories, when you think about it. “People like you bought this” is a way of saying, filter the recommendations down to those produced by the algorithm using that strategy.
  6. Source-driven: You can run recommenders on various categories of customer engagement. Actual purchase, adding to a shopping cart (but not purchasing), adding to a wish list, browsing a product page etc. Each of these represents customer intent vis-à-vis a product, but not necessarily to the same degree. One can therefore filter the hybridizer output down to recommendations that were generated based on a particular source.
  7. Category-driven: Suppose you want to promote a particular category. Filter the hybridizer output down to recommendations from that category. Two customers might get recommendations in the same category, but the actual products might differ since they’ve been filtered down from different, personalized supersets.
  8. Specialized products: In verticals such as pharmaceuticals (even when we stick to nutritional supplements and the like), it might be a good idea to limit some recommendations to product categories or subcategories that the customer has already bought. Imagine an old person buying geriatrics-focused nutritional supplements – filter their recommendations down to only that category, and don’t send them diabetes or hypertension-related stuff.

There are many more strategies that one can think of, obviously. There’s also a way to combine these – stories of stories, if you will – that allows multiple, specifically picked strategies to coexist within a single list. For instance, you might want a story of stories that combines a list of specialized product recommendations (#8 above) with a personalized list filtered down to general purpose products (#7, with a filter for general purpose product categories).
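To make the idea of stories-as-filters concrete, here is a sketch of two of the stories above as filters over the hybridizer output. The data shapes (ranked lists of dicts with `category` keys, a customer profile dict) are assumptions for illustration.

```python
def story_favourite_category(recs, customer, k=10):
    """'Recommendations in your favourite category': filter the ranked
    hybridizer output down to the customer's favourite category."""
    fav = customer.get("favourite_category")
    if fav is None:  # no transactions yet, so the story doesn't apply
        return []
    return [r for r in recs if r["category"] == fav][:k]

def story_exploration(recs, customer, k=10):
    """'Exploration': only categories the customer has never shopped in."""
    seen = set(customer.get("shopped_categories", []))
    return [r for r in recs if r["category"] not in seen][:k]
```

Note that because the filters run over each customer’s personalized master list, two customers with the same favourite category still get different products.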

What happens if you run out of recommendations?

Yeah, this can happen. Let’s say you want 20 recommendations in the customer’s favourite category and what you actually have is only 12. You can set up a backstop so that the remaining 8 come from trending products in the category. Obviously, if you’re doing this for a customer who has not transacted at all and therefore doesn’t have a favourite category, the resulting list should still be empty.
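A sketch of that backstop logic, assuming recommendations are dicts with a `sku` key (an illustrative shape); note the guard for customers where the story itself produced nothing:

```python
def with_backstop(story_recs, trending, target=20):
    """Top up a story's recommendations with trending products,
    skipping duplicates. If the story itself returned nothing (e.g. no
    favourite category exists), don't backfill at all."""
    if not story_recs:
        return []
    seen = {r["sku"] for r in story_recs}
    filler = [t for t in trending if t["sku"] not in seen]
    return (story_recs + filler)[:target]
```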

How do these stories actually make their way to the customer?

Broadly speaking, customer engagement can be both proactive and reactive. Proactive is when the brand sends out a message or an app notification and initiates the engagement, and therefore takes a call on what recommender story to use for whom. Reactive is when the customer initiates the engagement by logging into the app, and the brand now has an opportunity to personalize their experience.

Here are some examples of how reactive stories work.

  1. A carousel with personalized recommendations: This is not unlike the Netflix example we spoke of. Pick one or more recommender stories, and let the customer scroll through the recommendations.
  2. Product page personalization: Commonly co-purchased products can be displayed in a section below the product details. We’ve all seen this.
  3. At checkout: Look at what’s in the shopping cart, identify co-purchase recommendations and display them for the customer to add. The phrase “Would you like fries with that?” might ring a bell.
  4. Preference-driven listing: Imagine a customer with a preference for a particular colour, say blue. When this customer searches for formal shirts or clicks on the relevant category page, reorder the products so that blue shirts come up on top. You could do this with any product attribute: price band is an obvious example. Typical price-driven listings are either low to high or the other way around, but if what you prefer is something in the middle, you shouldn’t have to scroll through a lot to find what you want.

Proactive stories are embedded within messages. The key here is to make sure that the message tonality and wording is in sync with the story being used. For instance, if the message says, “We know you love our range of accessories, so we thought you’d be excited by our new additions”, it indicates that you’re talking to (most likely) active customers whose favourite category is accessories, and filtering the recommendations accordingly.

Sometimes, you just want to talk to everyone and run a category promotion, and you might therefore think that there’s no place for a recommender story here. But you can get inventive even with a mass campaign like that. Let’s say you want to offer 20% off on ethnic wear during the festive season. Break it down into three different campaigns. Here’s a ChatGPT-designed set of three solutions:

  • For customers whose favourite category is ethnic wear:

🌟 Diwali Special Offer! 🌟

Hello [Customer Name]! Your love for ethnic wear shines bright! Enjoy an exclusive 20% off on your favorite styles. Visit <xyz.com> or use XYZ app. Explore now! 🎉 [Short Link to Personalized Recommendations]

  • For customers who have bought ethnic wear before, but for whom it isn’t their favourite category:

🎉 Diwali Delight! 🎉

Hi [Customer Name]! Celebrate Diwali in style! Enjoy a 20% discount on ethnic wear. It’s not your usual, but we’ve got something special for you. Visit <xyz.com> or use XYZ app. Check it out now! 🌟 [Short Link to Personalized Recommendations]

  • For customers who haven’t bought ethnic wear before:

🎆 Diwali Debut! 🎆

Greetings [Customer Name]! This Diwali, step into elegance with our ethnic wear. Avail an exclusive 20% off on your first purchase! Visit <xyz.com> or use XYZ app. Start your ethnic journey now! 🌺 [Short Link to Personalized Recommendations]

And for those who have a strong preference for non-ethnic wear, you might want to try a different tactic:

Diwali calls for celebration! 🎊 While our ethnic wear is fabulous, check out our diverse range at <xyz.com>. Personalized picks: <shortlink4>

In this message, link to a recommender story that filters out the ethnic wear and either focuses on their favourite category, or on some category they haven’t tried before, or a mix of everything except ethnic.

I know what you’re thinking. We started off talking about what recommender strategy works for whom, then went on to talk about hybridizers and about recommender stories, but what all that has accomplished is to kick the can down the road. So now you’re asking…

How do I know which story works for whom?

The short answer is: you don’t, so you test and learn. Imagine that there are a million customers, and you want to find out which of the 10 recommender stories you’ve configured works best across the entire base. The simplest thing to do is to try each story for a randomly chosen set of 100k customers, and see what works best.

A slightly more sophisticated strategy would be to take a batch of 100k customers, try each story for 10k of them, and use the results to adjust the sampling in the next batch in favour of the strategies that seem to work better. Keep doing this as you get better at understanding what works, so that you increasingly spend your marketing efforts on the more effective stories, but leave a little room to keep trying the ones that seem not to work but might become effective in the future; in other words, exploit what you know but continue to explore what you don’t. This approach falls under the category of multi-armed bandit problems – while it has been around for decades, it has recently become more popular, especially with digital-focused companies trying to test and learn without leaving too much money on the table.

A more sophisticated variant of the multi-armed bandit approach is the contextual bandit, wherein we want to understand what works, but not in a generic sense but specifically for different kinds of customers. This will again trade-off between exploration and exploitation, except that the extent of exploration and exploitation is also a function of what we know about the customer.

So, back to the short answer: try everything, then do more of what works. If you know this at a customer or segment level rather than in a general sense, even better.

Horses for courses

The problem of personalizing customer engagement can be broken down into three broad problems:

  1. Recording how customers engage with the brand: the most obvious aspect of this is a transaction, but non-transactional signals such as web/app behaviour, service-related communications etc. can be considered as well.
  2. Parsing customer engagement records to determine how, when and what to talk to the customer about. This is where machine learning and other aspects of intelligence come into play.
  3. Actually reaching out or reacting to the customer in a personalized manner. This is where marketing campaigns, web/app personalization etc matter.

The systems that focus on these three problems are broadly referred to as systems of record, systems of intelligence and systems of engagement respectively. Note that these need not always be separate products; rather, this is more of a conceptual division of responsibilities.

When one talks about systems of intelligence, perhaps the first problem that comes to everyone’s mind is: how can we recommend products to customers in a personalized manner? This is where recommender systems come in.

There are many algorithms that figure out how to recommend products to customers. Not all of them work spectacularly in every circumstance, so it is important to understand intuitively what their strengths are and when they work best. Horses for courses, if you will.

But why isn’t there one ring recommender to rule them all?

Well, the most obvious reason is the nature of the data itself. The fundamental, unsaid premise of personalized recommendation is that the brand knows the customer inside out, and can use this to recommend exactly the right product. When we think of this scenario, our instinct is to imagine vast reams of data about each customer. This premise is stress-tested in many ways in real-life datasets.

What if the breakdown of your brand’s customer activity is as follows: 50% of your customers have visited only once, and 25% of those made that one visit over 2 years ago. 5%, on the other hand, have visited 15 times in the past year alone. Your app penetration as a % of total sales is in single digits and climbing, but even there the same unevenness exists. 30% of your digital users have registered on your app, but haven’t bought anything, or even browsed much. But a bunch of them have browsed a lot, bought a lot etc.

The problem looks much worse when you have a lot of customers and/or products in the aggregate. Imagine a matrix (not the one where Keanu Reeves stops bullets, we mean the one with rows and columns) where the rows are customers and the columns are products. If you have a lot of customers and/or products, this matrix becomes very large.

A large matrix is problematic in two ways. First, there’s the computational cost of dealing with something of that size. Second, this matrix is likely to be very sparse, both because the matrix itself is large, but also because the customer activity is uneven, as in the above example. All of this makes the analytical problem tougher, because what any recommender is trying to do is figure out what to do with the empty cells in the matrix, i.e., which products to recommend because the customer is likely to buy them.

(Now, this is not an intractable problem. The computational angle is solved using packages that deal with large sparse matrices, and algorithms that can parallelize nicely, while the analytical angle is solved using methods that reduce the size of the matrix — product attribute-based approaches, neural embedding, matrix factorization etc).  
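A quick back-of-envelope calculation shows why sparsity dominates; the customer, product, and transaction counts below are invented for illustration.

```python
def matrix_stats(n_customers, n_products, n_interactions):
    """Raw size and density of the customer x product interaction matrix."""
    cells = n_customers * n_products
    return cells, n_interactions / cells

# e.g. a mid-sized retailer: 1M customers, 10k SKUs, 20M purchase records
cells, density = matrix_stats(1_000_000, 10_000, 20_000_000)
# 10 billion cells, of which only 0.2% are filled -> store it sparse
```

With 99.8% of the cells empty, a dense representation wastes both memory and compute, which is exactly why the sparse-matrix packages mentioned above earn their keep.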

And then there’s the cold start problem. What if you have a new product, for which you have no transaction data? Or a new customer who has registered on your app but not browsed or bought anything?

Imagine trying to build a single algorithm that knows exactly what to do in all of these scenarios. Tough, isn’t it? That’s why it makes sense to think of different algorithms as being effective for different customer groups.

So what kinds of algorithms are there?

We’ll talk about those in this article. Not every single one of them, but we’ll cover the more popular approaches, and where they are best suited. Rather than talk about the algorithms by name, we will talk about the broad approaches, and refer to the various algorithms that fall into these categories.

People like you also bought…

This approach attempts to find similar customers to a given customer, and use their behaviour as a cue to determine what to recommend.

The most popular of these approaches is user-based collaborative filtering, which looks at user ratings for products, matches users with similar behaviour and uses this as a basis for recommendations. This approach works best when one is dealing with a lot of user behaviour, such as viewership data on a media streaming platform.
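A toy sketch of user-based collaborative filtering, under the simplifying assumption that ratings fit in plain dictionaries (real systems use sparse matrices and approximate nearest-neighbour search):

```python
# Hedged sketch of user-based collaborative filtering: score items for
# a target user by how similar other users' rating vectors are to the
# target's. The ratings below are illustrative.
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(target, others, k=2):
    """Rank items the target hasn't rated by similarity-weighted votes."""
    scores = {}
    for ratings in others:
        sim = cosine(target, ratings)
        for item, r in ratings.items():
            if item not in target:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

alice = {"shirt": 5, "jeans": 4}
others = [{"shirt": 5, "jeans": 5, "scarf": 4},   # very similar to alice
          {"hat": 5}]                              # no overlap at all
print(recommend(alice, others))  # the similar user's "scarf" ranks first
```

The similar user's behaviour dominates the recommendation, which is exactly the "people like you" intuition.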

However, if one has a lot of other data characterizing a customer (e.g. in BFSI where it is more often collected as part of an application process, location intelligence in a CPG business where the customer is actually a retailer), or a lot of contextual data about the transaction that is valuable to understand behaviour (e.g. shopping time preferences or price/discount preferences in retail), a more generalized way of characterizing the customer becomes valuable. This means stepping back from collaborative filtering to a more generalized class of methods, namely lookalike models.

A variant of this approach is a purely geographical one that might work in the CPG sector, namely: stores near you are buying these products. This works when latitude and longitude data is available for all (or most) stores. It implicitly assumes that the purchase behaviour of end-consumers in the locality is reflected in the ordering behaviour of nearby stores, which therefore captures what one needs to know.

Lookalike models in general work best when you know a lot about the customer; however, they have also proven to be surprisingly effective when one is dealing with data sparsity. For instance, the best recommendation for a single transactor sometimes comes from customers who have made that same single transaction and maybe one more besides.

People who bought this item also bought…

This class of approaches involves mining co-purchase patterns in some shape or form. Item-based collaborative filtering, an algorithm pioneered by Amazon, is one of the best-known examples, but simple co-purchase-based approaches like association rule mining are also quite effective.
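The core of any co-purchase approach is just counting how often item pairs appear in the same basket; a minimal sketch with toy baskets (association-rule miners add support and confidence thresholds on top of exactly these counts):

```python
# Sketch of "people who bought X also bought Y": count item pairs that
# co-occur in the same basket. The baskets below are illustrative.
from collections import Counter
from itertools import combinations

def copurchase_counts(baskets):
    """Count co-occurrences of each (sorted) item pair across baskets."""
    pairs = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            pairs[(a, b)] += 1
    return pairs

def also_bought(item, baskets):
    """Items most often bought together with `item`, most frequent first."""
    related = Counter()
    for (a, b), n in copurchase_counts(baskets).items():
        if a == item:
            related[b] += n
        elif b == item:
            related[a] += n
    return [i for i, _ in related.most_common()]

baskets = [["shirt", "slacks"],
           ["shirt", "slacks", "belt"],
           ["shirt", "tie"]]
print(also_bought("shirt", baskets))  # "slacks" co-occurs most often
```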

The strength of this approach depends on how often people buy more than one thing. If the vast majority of customers are single transactors, there may be so little data to work with that this strategy becomes brittle.

The other challenge with this strategy is the sparsity of data. For instance, the number of people who bought a formal blue shirt and also bought dark grey slacks might be quite small, but the number of people who bought a formal shirt and also formal pants is likely to be much higher. This means that analyzing co-purchase patterns at various levels of abstraction (i.e., figuring out patterns between groups of products rather than individual products) is critical to its effectiveness. While this can be done implicitly, based on statistical patterns that lead to product groupings, it is probably better done when a well-curated set of product attributes is present.

If you liked these, you’ll also like…

Rather than simply understand what customers bought, it is helpful to see if there are patterns across their purchases. For instance, someone might have a preference for the colour blue, whereas someone else might prefer ethnic wear. These patterns suggest that blue outfits might be a good choice to recommend to the former, while the festive season might be a good time to nudge the latter to transact again with ethnic wear. It is also possible to use these patterns to do the exact opposite: “we know you love our range of blues, but have you looked at what we have in checked patterns?”

All of these fall under the broad category of content-based filtering. These algorithms have proven to be extremely effective when there is a well-curated product master with many attributes.

However, our old adversary – data sparsity – is poised to strike yet again. What if the customer has purchased only one blue shirt? Do we assume that he loves blue? This is where approaches that exploit what we learn about a customer, while still leaving the door open to explore what we don’t, work best. For instance, an approach that assumes that all colours are equally preferred, and then bumps this customer’s preference for blue up by a little bit after this transaction, might work better. The more we see a customer, the more confident we are in our understanding of their preferences.
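One simple way to implement the "bump the preference up a little bit" idea is Laplace smoothing over a uniform prior; a sketch, where the prior strength `alpha` is an assumed tuning knob rather than anything prescribed by the article:

```python
# Sketch of explore-vs-exploit preference learning: start from a
# uniform prior over colours and nudge it with each observed purchase
# (Laplace smoothing). `alpha` is an assumed prior-strength parameter.

def colour_preferences(purchases, colours, alpha=1.0):
    """Smoothed probability that the customer prefers each colour."""
    counts = {c: alpha for c in colours}   # uniform prior pseudo-counts
    for c in purchases:
        counts[c] += 1                     # observed evidence
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

colours = ["blue", "red", "green"]
# One blue shirt: blue edges ahead, but far from certainty.
print(colour_preferences(["blue"], colours))
# Five blue purchases: now we are much more confident.
print(colour_preferences(["blue"] * 5, colours))
```

The more purchases we see, the further the estimate moves from the uniform prior, which matches the "the more we see a customer, the more confident we are" point above.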

The good thing about this approach is that it solves the cold start problem for products. A new product, by definition, is one that we have no transaction history for. This approach allows us to recommend it to customers whose preferences suggest that they might be interested in a product with these attributes.

Lots of customers are buying…

This is perhaps the simplest and most used recommender approach: what’s trending right now? We can slice this by location, time frame (including, for instance, what happened around the same time last year), customer attributes, product categories etc.

Trending depends solely on the aggregates, so not knowing much about some customers isn’t a deterrent. That being said, if we calculate what’s trending for narrow slices of the population (e.g. What are women in Kanpur shopping for in the Accessories section?), we might sometimes find that there isn’t a lot of sales data to back up the recommendations, and one might need to reinforce the approach with results from broader slices.
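The back-off idea can be sketched in a few lines: if a narrow slice has too few sales to be reliable, fall back to a broader one. The threshold and the product lists are illustrative assumptions:

```python
# Sketch of "what's trending" with a fallback to broader slices when a
# narrow slice is too sparse. `min_sales` is an assumed threshold.
from collections import Counter

def trending(sales, min_sales=3, top_n=2):
    """Top products in a slice, or None if the slice is too sparse."""
    counts = Counter(sales)
    if sum(counts.values()) < min_sales:
        return None                    # not enough data in this slice
    return [p for p, _ in counts.most_common(top_n)]

narrow = ["earrings"]                  # e.g. one city + one category: sparse
broad = ["earrings", "scarf", "earrings", "belt", "scarf", "earrings"]

picks = trending(narrow) or trending(broad)   # back off to the broad slice
print(picks)
```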

Here’s what you’re likely to buy…

What any recommender algorithm is trying to do is, in some shape or form, predict what a customer is likely to buy, and nudge them in that direction by putting those products front and center, rather than leaving the customer to stumble upon them. So there’s a way of modelling the problem in precisely those terms: characterize each customer, characterize each product, characterize how customers interact with products, and use these to predict the probability that a customer will be interested in a particular product. This is now a standard two-class classification problem, and there are a million and three machine learning algorithms at your disposal to solve it.
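The framing is the important part: one row per (customer, product) pair, features from both sides, labelled 1 if a purchase occurred. A sketch with purely hypothetical features (`visits`, `price`, `fav_match`); any binary classifier can then be trained on the rows:

```python
# Sketch of framing recommendation as two-class classification: build
# one labelled row per (customer, product) pair. The feature names and
# toy data are illustrative assumptions, not a prescribed schema.

def build_training_rows(customers, products, purchases):
    """Return (features, label) pairs for every customer x product."""
    rows = []
    for cid, cust in customers.items():
        for pid, prod in products.items():
            features = {
                "visits": cust["visits"],               # customer side
                "price": prod["price"],                 # product side
                "fav_match": int(cust["fav_cat"] == prod["cat"]),  # interaction
            }
            label = int((cid, pid) in purchases)        # 1 = bought
            rows.append((features, label))
    return rows

customers = {"c1": {"visits": 12, "fav_cat": "shoes"}}
products = {"p1": {"price": 40, "cat": "shoes"},
            "p2": {"price": 15, "cat": "hats"}}
purchases = {("c1", "p1")}

rows = build_training_rows(customers, products, purchases)
print(rows)  # one positive row (p1) and one negative row (p2)
```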

This approach is especially useful when you’re solving a replenishment problem, i.e., what to recommend that a customer replenish, out of the items they have bought already. Use cases include CPG, grocery, food etc.

That’s a whole bunch of algorithms. How do we put these together?

As the previous descriptions might have indicated, each of the approaches we have spoken of involves a different common-sense principle, and each has its strengths and weaknesses depending on the situation. How, then, to decide which algorithm to use for your business? Or, if you want to get more fine-grained, which algorithm to use for which customer?

Here’s the good news: you don’t have to! There is a class of methods whose job is to combine what these algorithms produce and decide what to recommend. These are called hybrid recommenders or hybridizers. These methods depend broadly on the following factors:

  1. How strongly and how often a product is recommended by the various algorithms
  2. How much one can trust these algorithms, i.e., how well they have performed in the past
  3. What we know about the customer that indicates how to combine these outputs

The end result is a consolidated recommendation list comprising the output of any or all of the component algorithms.
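A minimal sketch of the first two factors: weight each algorithm's scores by a trust value derived from past performance and sum them. The algorithm names, scores, and weights are illustrative; a real hybridizer would also condition on the customer:

```python
# Sketch of a hybridizer: combine per-algorithm recommendation scores
# with trust weights reflecting past performance. All numbers below
# are illustrative assumptions.

def hybridize(algo_scores, trust, top_n=2):
    """Weighted sum of each algorithm's scores for candidate products."""
    combined = {}
    for algo, scores in algo_scores.items():
        w = trust.get(algo, 0.0)             # how much we trust this algo
        for product, s in scores.items():
            combined[product] = combined.get(product, 0.0) + w * s
    return sorted(combined, key=combined.get, reverse=True)[:top_n]

algo_scores = {
    "lookalike":  {"scarf": 0.9, "belt": 0.4},
    "copurchase": {"belt": 0.8, "tie": 0.7},
}
trust = {"lookalike": 0.6, "copurchase": 0.4}   # from past accuracy
print(hybridize(algo_scores, trust))
```

Notice that "belt" wins despite never being any single algorithm's top pick: it is recommended by two algorithms, which is factor 1 at work.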

But how do we know if the algorithms work?

There are two broad ways of assessing the performance of recommenders.

  1. Active: This involves measuring the accuracy of recommendations shown to a customer, either proactively through SMS/Email/Push notifications, or reactively by way of personalizing their app/web experience when they browse. You’re measuring the accuracy of recommendations the customer sees and expresses an interest in, either through browsing or adding to their shopping cart or buying. A variant of this approach is one where the performance is compared against that of a control group who get no recommendations, or generic non-personalized recommendations.
  2. Passive: This involves measuring the accuracy of recommendations generated for a customer, by comparing them against the observed behaviour post generation. This can also be done in back-testing mode, wherein the recommendations are generated as of an earlier date based on what is known at that point, and transactions since then are used for comparison.

The difference between the two approaches is simple: An active approach cares about whether the customer has seen the recommendations, and the power of the actual nudge is considered. The flip side is, you can only measure it when you know the customer has seen the recommendations, so it’s limited by those numbers.

The passive approach, on the other hand, doesn’t care if the customer has seen the recommendations. The underlying assumption is, customers will buy what they want anyway, with or without the nudge, so it makes sense to measure performance of the algorithms as though they are simply predicting what will happen. You get to measure performance of a lot of algorithms against a lot of customers (pretty much everyone who transacted), but you don’t know if the recommendations themselves would’ve nudged the customer in a particular direction.

It is obvious that both approaches have their merits, so it makes sense to do both.

In either case, we can use a variety of metrics to track the actual level of commonality between what is recommended and what is purchased. For instance, you could track how much of what is purchased is in the recommended list (i.e., recall), or how much of what is recommended is purchased (i.e., precision), or some variant or combination of these two. Additionally, you could account for:

  • The rank of the recommendation
  • The extent of the match (maybe the customer didn’t buy the exact same product, but something in the same subcategory)
  • Whether what was recommended and/or purchased was very popular to begin with (if you recommended something rarely bought and it was mostly right, that’s probably more valuable than if you recommended something everyone buys and it turned out to be exactly right)
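The basic precision/recall comparison described above, as code (toy lists; production metrics would add the rank weighting and fuzzy category matching from the bullets):

```python
# Precision and recall of a recommendation list against actual
# purchases. The recommended/purchased lists are illustrative.

def precision_recall(recommended, purchased):
    """(precision, recall) of the recommended list vs. purchases."""
    hits = len(set(recommended) & set(purchased))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(purchased) if purchased else 0.0
    return precision, recall

recommended = ["shirt", "belt", "scarf", "tie"]
purchased = ["shirt", "scarf", "jeans"]

p, r = precision_recall(recommended, purchased)
print(p, r)  # 2 of 4 recommendations bought; 2 of 3 purchases recommended
```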

Okay, got it. Now how do we use these recommendations?

That, dear reader, is a story for another day. In our next article in this series, we’ll talk specifically about how recommendations can be used to personalize customer engagement.

SOLUS AI Achieves SOC 2 Compliance: Your Data, Our Priority

At SOLUS AI, we are deeply committed to safeguarding our customers’ privacy, viewing it as both an ethical imperative and a compliance requirement. We utilise advanced technologies and stringent security measures to protect sensitive customer data, fostering transparency and control.

We are pleased to announce that SOLUS AI has successfully achieved SOC 2 compliance. This significant milestone reflects our unwavering commitment to the security, availability, and confidentiality of the data entrusted to us by our customers.

SOC 2 compliance underscores our dedication to maintaining robust controls and stringent security measures in our operations, ensuring that our clients can have full confidence in the protection of their data. We believe this achievement further solidifies our position as a trusted partner in the realm of artificial intelligence and data analytics.

What is SOC 2 compliance?

SOC 2, which stands for Service Organization Control 2, is a framework for assessing and ensuring the security, availability, processing integrity, confidentiality, and privacy of customer data in service organisations.

It is a set of standards and guidelines developed by the American Institute of Certified Public Accountants (AICPA) to evaluate how well a company manages and protects customer data. SOC 2 compliance involves a thorough audit and assessment of an organisation’s internal controls and processes related to data security and privacy.

Achieving SOC 2 compliance demonstrates a company’s commitment to data security and privacy, which can be crucial for businesses that handle sensitive customer information, such as cloud service providers, data centres, and other service organisations. It helps build trust with customers, partners, and stakeholders by showing that the organisation has implemented strong controls to protect data from unauthorised access, disclosure, or breaches.

SOLUS is committed to protecting data with SOC 2 certification.

We are aligning ourselves with the five trust service principles. These principles are designed to assess an organisation’s ability to protect customer data and ensure the reliability and security of its systems and services.

1.     Security

The Security principle assesses the effectiveness of an organisation’s controls and measures to protect against unauthorised access, both physical and logical. This includes safeguarding data, equipment, and facilities from threats and vulnerabilities.

2.     Availability

The Availability principle evaluates the organisation’s ability to ensure that its systems and services are available and operational when needed to meet its commitments to customers. This principle focuses on minimising downtime and disruptions.

3.     Processing integrity

The Processing Integrity principle assesses whether the organisation’s systems and processes are accurate, complete, and reliable in delivering the intended results. It ensures that data is processed correctly and that errors are appropriately addressed.

4.     Confidentiality

The Confidentiality principle evaluates the controls and measures in place to protect sensitive information from unauthorised access and disclosure. It includes assessing how the organisation classifies and restricts access to confidential data.

5.     Privacy

The Privacy principle assesses the organization’s controls and practices related to the collection, use, retention, and disposal of personal information in accordance with its privacy policies and compliance obligations. It focuses on protecting individuals’ privacy rights.

What does this mean for our customers?

We at SOLUS AI are taking this step to signify our commitment to ensuring the privacy and security of customer information. We recognize the importance of instilling confidence in our customers regarding their data, which is why we are delighted to achieve this certification.

  • We utilise an automated GRC platform, guided by our third-party compliance vendor, to effectively manage compliance with all three standards.
  • Our security posture undergoes regular assessments and adjustments to align with these standards.
  • We’ve centralised all compliance-related documents and tasks for SOC 2 on the platform.
  • Our organisation has established and enforces Information Security policies to adhere to these protocols.
  • We’ve integrated information security training as a mandatory component of the onboarding process for new team members.
  • We’ve implemented a proven framework to identify and address potential issues in real time, ensuring proactive mitigation efforts.

This commitment also signifies that we are actively safeguarding any data under our care. As our valued customers, you can have peace of mind, knowing that your data is securely handled and protected.

Customer Life Cycle Management: The Impact It Can Have On Your Business

The business world has seen the rise of numerous paradigms to increase efficiency and profitability. Among these concepts, Customer Life Cycle Management (CLM) has proven to be paramount in recent years. By leveraging the power of new technologies and data analysis, Customer Life Cycle Management can significantly impact your business.

Evolution of Customer Life Cycle Management

Customer Life Cycle Management is not a newcomer to the business world. Yet the automation tools that facilitate its processes, and the intelligence they add to it, are a recent development. As it has always been, CLM is data- and discipline-intensive. It thrives on interpreting historical data, understanding customers’ reactions to various nudges, and harnessing these insights into smart campaigns. Its intricate nature makes it an ideal candidate for the application of reinforcement learning.

Navigating the Customer Journey with CLM

A key aspect of Customer Life Cycle Management is its role in guiding a customer throughout her journey with the brand. This journey can be broadly divided into four key stages:

  • New Customer Handholding and Onboarding: This initial stage helps familiarise the customer with the brand, its products or services, and its unique value proposition.
  • Active Customer Frequency Driving and Category Addition: CLM focuses on increasing the frequency of purchases or interactions while introducing new categories to the customer.
  • High-value Customer Nurturing and Churn Prevention: At this stage, CLM ensures the retention of high-value customers and reduces the chances of their migration to competitors.
  • Lost Customer Win-back: Finally, CLM aims to regain the business of customers who have stopped interacting or transacting with the brand.

Dissecting CLM Programs: Recurring Triggers and Personalisation

Each of these stages is subdivided into actionable campaigns like First to Repeat (FTR), Cross Sell, Frequency Driving, On the Brink (OTB) churn prevention, and Winback. These programs are built around recurring triggers, automatically fired for eligible customers daily. For instance, a customer who hasn’t transacted for 90 days might receive a message that the brand misses them and has an offer waiting.

What sets CLM campaigns apart is their focus on smaller customer sets and high levels of personalisation. A classic CLM message is specific, referencing the recency of purchase, the last category bought, and the outlet, and often includes a personalised recommendation.

Evaluating the Effectiveness of CLM Campaigns

From a conversion and lift perspective, CLM campaigns have proven highly effective. Despite contributing less in absolute dollar terms than mass or segmented marketing campaigns, they provide significant value due to their focused customer sets. Typically, CLM campaigns contribute 20-40% of the incremental sales generated, but with conversion rates that are usually 60-80% higher and yields that are 3-5X those of mass campaigns.

To increase efficiency & personalization, you might want to consider reading about the benefits of AI in marketing.

Crafting an Excellent CLM Program

The ingredients for a successful CLM program are:

  • Single View of the Customer (SVOC): A consolidated view of customer data, spanning dozens of variables, helps in target criteria setting or personalisation.
  • Machine Learning Models: These tools predict propensity scores, offer recommendations, and aid segmentation.
  • Campaign Blueprints: Established strategies and plans that define reasons to communicate with all CLM segments.
  • Target vs. Control Measurement: A method to assess the effectiveness of the program.

Solus is a leading machine learning-based recommendation system that was built to accommodate these crucial aspects of CLM, providing a streamlined solution for businesses to enhance their customer relations and ultimately, their bottom line.

In conclusion, Customer Life Cycle Management, driven by technology and data analytics, offers tremendous potential to businesses. With its high conversion rates, personalised campaigns, and strategic approach to customer engagement, CLM can significantly boost your business’s revenue and customer retention rates.

Onto The ChatGPT Bandwagon With Our Prompt Generator

We’ve jumped onto the ChatGPT bandwagon with our Prompt Generator. (I’m using ChatGPT as a placeholder for all LLMs, so please don’t flame me for not mentioning all the other options!)

What did we solve for?

This was initiated with a simple problem statement: brands struggle to generate creative content for targeted or personalized campaigns. For instance, at the MarTech Summit in Singapore this week, a panel featuring Zalora, Citibank and the like stated that one of the main challenges to operationalizing a personalization engine is content and creativity. Quite unimaginable, right? To be in a world where the ML algorithm has become the “easy part”.

Knowing the problem statement well led to a fairly focused solution definition: Help brands use ChatGPT to generate creative personalized messages.

So we built a Prompt Generator with a very focused use case – content for SMS/ WhatsApp/ Email/ Notifications.

The SOLUS Prompt Generator

It’s at https://prompt.solus.ai | Free and instant Sign Up | in Beta | Desktop only!

Let’s see how it works:

If the prompt you use in ChatGPT is something as basic as:

“Generate an SMS marketing message for SOLUS, an Apparel brand”

You get something like:

If you use our prompt generator, the prompt evolves to something much more refined and the results can be something like:

 And it gets better. The email copy comes out (using a good prompt, again) looking like this:

Does this work across categories? It does – we’ve tried it for Apparel, QSR, Hospitality and Travel, Mutual Funds, and even Securities!

What about all the fears and reservations against ChatGPT?

I feel these are largely mitigated here. Writing copy for direct marketing has been a fading craft for a while. Creativity has become less attuned to the needs of relevance and personalization, so using an LLM to generate a baseline creative that ticks all the requisite boxes from a craft perspective is very valuable. Brands often need dozens of creative templates with variants for targeted messages. This means a factory output, which in turn means there’s a solid case to use AI. Once ChatGPT generates a message, one can iterate and add one’s own tonality to taste, but 80% of the job has been done by the AI.

There’s no sharing of data. There are no biases in ChatGPT’s training set polluting decisions, because the use case is not decisions; it’s plying an otherwise cumbersome craft.

What’s the catch?

You still need to know what you want. In our prompt generator, you’ll need to give inputs of the Hook, the Tonality, what Personalization to use, whether you want a Follow-up message etc. I’ve been told this is intimidating, and many folks in marketing will not have answers to these Qs. Well, there’s always the option to leave these blank – but that’s also a bit of a shame. Whoever is generating your copy – Human or AI – needs a good brief!

Let us know, please.

Use the prompt generator, or have folks in your marketing team give it a spin. And let us know if it works for you, and how we can improve it.

Click here to try our prompt generator | Free and instant Sign Up | in Beta | Desktop only!

Incremental Revenue can transform your Business – if you get it right

In the ever-evolving world of marketing measurement, multiple studies have shown that measuring incremental revenue, or lift, is the gold standard. However, achieving accurate and reliable measurement can be a daunting task. In this article, we will explore the concept of incremental revenue, its robustness as a measurement framework, the nuances involved in measuring it, strategies to increase it, and how to unlock its full potential to craft smarter campaigns for your business.

Understanding Incremental Revenue

At its core, measuring incremental revenue means establishing a control group (CG) and measuring the impact of interventions, such as email sends, on the target group (TG). By comparing the response of the TG to that of the CG, we can calculate the revenue the interventions actually generated. This approach is generally considered more robust than measuring conversion percentages or using pre/post approaches, which are harder to defend due to various biases and confounding factors; incremental revenue measurement provides a solid foundation.
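The TG-vs-CG comparison reduces to a small calculation: incremental revenue is what the target group produced beyond what the (scaled) control group implies it would have produced anyway. A sketch with illustrative numbers:

```python
# Sketch of TG-vs-CG lift: scale the control group's revenue per
# customer up to the target group's size to get the "would have
# happened anyway" baseline. All figures below are illustrative.

def incremental_revenue(tg_revenue, tg_size, cg_revenue, cg_size):
    """TG revenue minus the CG's revenue-per-customer applied to the TG."""
    baseline = (cg_revenue / cg_size) * tg_size
    return tg_revenue - baseline

# 90k customers got the campaign; 10k were held out as the control.
inc = incremental_revenue(tg_revenue=1_080_000, tg_size=90_000,
                          cg_revenue=100_000, cg_size=10_000)
print(inc)  # revenue attributable to the campaign itself
```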

The Nuances of Measurement

Measuring incremental revenue involves addressing several nuances. Firstly, determining the duration for which the CG is held out is crucial. It should be long enough to capture the full impact of interventions without introducing excessive time-based biases. Secondly, it is essential to differentiate between short-term and long-term impacts. Some interventions may lead to immediate revenue gains, while others might have a delayed effect. Separating these impacts enables a better understanding of the true revenue generated. Finally, organisations must decide whether to measure it by messaging channel or overall impact. Both approaches have their merits and should align with specific business growth strategies.

Increasing Incremental Revenue

To maximise incremental revenue growth, it is crucial to explore what truly works. Often, strategies that generate higher absolute profits may not exhibit the highest percentage lift. Additionally, high conversion rates do not necessarily equate to good incremental revenue: high TG conversions can simply mirror high CG conversions, limiting the true incremental gains. By analysing segments, campaign mechanics, timing, channels, and other factors, businesses can develop a comprehensive strategy to maximise incremental revenue.

You might also want to read about The Benefits of AI in Marketing: Increased Efficiency and Personalization.

Unlocking the Potential

Realising the full potential of incremental revenue starts with a well-defined measurement framework. This framework should outline the processes for establishing control groups, implementing interventions, and accurately tracking and reporting results. Investing in suitable tools and practices that facilitate CG holdouts and enable robust reporting is crucial. Furthermore, organisations must foster a culture where the team obsesses over it and continually seeks opportunities to improve it. By incorporating a data-driven approach and actively experimenting with different interventions, businesses can unlock the untapped potential of this gold standard.

Reading about the Benefits Of Customer Life Cycle Management: How It Can Improve Your Business can also prove helpful in certain avenues.

Measuring incremental revenue is undeniably challenging, but it provides the most robust framework for assessing the effectiveness of customer engagement efforts. By employing a carefully constructed measurement plan, organisations can leverage it to gain valuable insights into the impact of their marketing initiatives. By understanding the nuances involved, developing effective strategies, and investing in measurement frameworks and practices, businesses can propel their growth and success. Incremental revenue is not just a metric; it is a powerful tool that can transform the way organisations approach CRM and customer engagement.

In conclusion, harnessing the power of Solus AI’s machine learning-based recommendation systems is a game-changer for businesses looking to maximize their incremental revenue. With cutting-edge algorithms and advanced machine learning techniques, Solus AI empowers companies to deliver personalized and targeted recommendations to their customers.

Smart Campaign Prioritization in SOLUS

The most common mechanism for personalized engagement with a customer is through a marketing campaign. These campaigns can be broadly classified into two groups:

  • Customer Lifecycle Management (CLM) campaigns, which are sent on various relationship milestones such as the 3rd anniversary of the first visit, upon the 10th visit etc, and
  • Go-to-Market (GTM) campaigns, which typically target a large segment of customers with product recommendations, store/category promotions etc.

By its very nature, the second category above tends to target a large percentage of customers, and personalization is achieved through the use of customer-specific information in the messaging, use of recommender algorithms, variations in message tonality etc. However, the side effect of such mass-market campaigns is that, in any given week, there might be multiple campaigns that could be used to target the same customer. Which brings us to the central question in this note: which of these campaigns should we choose for which customer?

There are many ways to approach the prioritization problem. The obvious ones are:

  1. Set a pre-defined priority order for campaigns so that, if the same customer qualifies for two different campaigns, the higher priority one is chosen. This assumes sufficient domain knowledge on the part of the decision maker to set the priority order. While this might be true in general, it might be a challenge when the competing campaigns involve, for instance, different recommender stories.
  2. Solve the problem at the design level, by determining more fine-grained targeting rules such that conflicts are avoided. This makes sense when there are only a few campaigns at any point in time, but also assumes that sufficient domain knowledge and care is employed in determining mutually exclusive targeting rules.
  3. Choose campaigns at random from the eligible ones for each customer. This works when you have no prior knowledge of what would work for whom, and want to test out all alternatives. This is essentially equivalent to an A/B test. However, one needs to determine how long a random choice is okay to do, and whether the conclusions drawn from the test will hold forever or will require further testing.

As you can see above, each of these techniques has its uses but also some significant disadvantages. The Solus smart prioritization feature is designed to address these issues.

Solus is, at its heart, a data-driven self-learning product, and this philosophy applies here as well. It figures out what works and what doesn’t for each kind of customer, and uses this information to smartly prioritize campaigns. However, while doing so, it keeps the following in mind:

  1. What works today might not work tomorrow, so constant testing and learning is necessary.
  2. Data-driven approaches only work as long as there is data. For instance, if only one campaign has ever been sent to inactive customers, there is no data to determine whether a different campaign might work better. This means that Solus needs to determine when and where it has enough data to be relatively certain of the outcome and prioritize campaigns accordingly, and when to prioritize exploration.

This is why the key algorithm used to do smart prioritization is a contextual multi-armed bandit (CMAB) algorithm. In order to explain how this works, let us first understand how a multi-armed bandit problem is framed.

Imagine you’re at a casino in Las Vegas, and find a row of slot machines in front of you. Assume that each slot machine has a fixed but unknown probability of paying off, and each pull of the arm in the slot machine is independent of the previous pulls. Now, you have a bag of quarters to feed into these slot machines, and you don’t know which one to pick. How do you spend your money wisely? 

The trick is to start putting a few quarters in each machine, and keep doing it until you start seeing one or more of the machines paying off. When they do, put more quarters in those machines and less in the ones that haven’t paid off much. The more you see, the stronger your understanding of what might be a better bet, and the more money you put in there. The specifics of how to determine the allocation is where all the math comes in.
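One common way to do that math is Thompson sampling: keep a running tally of wins and losses per machine, and on each pull sample a plausible payout rate for each machine from its Beta posterior, playing whichever machine's sample comes out highest. The following is a minimal illustrative sketch under the article's assumptions (fixed, unknown, independent Bernoulli payouts); the probabilities and function name are made up for the example.

```python
import random

def thompson_sampling(true_probs, n_pulls, seed=42):
    """Simulate Thompson sampling over slot machines with Bernoulli payouts."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    wins = [0] * n_arms    # observed payouts per machine
    losses = [0] * n_arms  # observed misses per machine
    total_reward = 0
    for _ in range(n_pulls):
        # Sample a plausible payout rate for each machine from its
        # Beta(wins+1, losses+1) posterior, then pull the highest sample.
        samples = [rng.betavariate(wins[i] + 1, losses[i] + 1)
                   for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        wins[arm] += reward
        losses[arm] += 1 - reward
        total_reward += reward
    pulls = [wins[i] + losses[i] for i in range(n_arms)]
    return pulls, total_reward

# Three machines; the last one pays off most often.
pulls, reward = thompson_sampling([0.05, 0.10, 0.20], n_pulls=5000)
print(pulls)  # the bulk of the pulls concentrates on the best machine
```

Notice the behaviour the article describes: early on the quarters are spread around, but as evidence accumulates the spend shifts towards the machines that pay off.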

A slot machine is colloquially called a “one-armed bandit”, since it has a crank that looks like an arm and it takes your money; hence the term “multi-armed bandit”. It is easy to see the analogy between this approach and some of the business problems you’re familiar with. Price testing, for instance, is a prime candidate. You don’t know which price works best, because you don’t know how much demand might drop when you raise the price. So the best way to find out is to test, and multi-armed bandits let you test in a way that quickly shifts your focus to the price range that works better, thereby leaving less money on the table while the test runs.

The contextual variant of this problem is one where each slot machine pays off with a probability that depends on who you are. The equivalent in our context is: each campaign is a slot machine, and the likelihood of response to the campaign depends on who the customer is, i.e., what the customer-related variables (RFM, customer segment, favourites, etc.) are. By framing the prioritization problem as a CMAB problem, we are able not just to test and learn from customer responses, but also to determine when more testing is required (e.g. when certain kinds of customers get new kinds of campaigns that they haven’t seen before).
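To make the framing concrete, here is a toy contextual bandit that keeps one Beta posterior per (customer segment, campaign) pair and uses Thompson sampling within each segment. The segment names, campaign names, and response rates are all invented for illustration; a real CMAB (and certainly Solus’s actual implementation) would model much richer context than a single segment label.

```python
import random
from collections import defaultdict

class ContextualBandit:
    """Toy contextual bandit: one Beta posterior per (segment, campaign) pair."""

    def __init__(self, campaigns, seed=0):
        self.campaigns = campaigns
        self.rng = random.Random(seed)
        # (segment, campaign) -> [responses, non-responses]
        self.stats = defaultdict(lambda: [0, 0])

    def choose(self, segment):
        # Thompson sampling within the customer's context: sample a
        # plausible response rate per campaign, send the highest.
        def sample(campaign):
            s, f = self.stats[(segment, campaign)]
            return self.rng.betavariate(s + 1, f + 1)
        return max(self.campaigns, key=sample)

    def update(self, segment, campaign, responded):
        stats = self.stats[(segment, campaign)]
        stats[0 if responded else 1] += 1

# Simulated ground truth: response rates differ by segment.
true_rates = {
    ("new", "lookalike"): 0.15, ("new", "cross-sell"): 0.05,
    ("loyal", "lookalike"): 0.05, ("loyal", "cross-sell"): 0.20,
}
bandit = ContextualBandit(["lookalike", "cross-sell"], seed=1)
rng = random.Random(2)
for _ in range(3000):
    segment = rng.choice(["new", "loyal"])
    campaign = bandit.choose(segment)
    bandit.update(segment, campaign, rng.random() < true_rates[(segment, campaign)])
```

After enough simulated sends, the bandit learns that different segments respond to different campaigns: it mostly picks “lookalike” for new customers and “cross-sell” for loyal ones, while still occasionally exploring the alternative in case preferences have shifted.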


The Power Of Personalisation For D2C Marketing

In today’s digital landscape, direct-to-consumer (D2C) marketing has emerged as a powerful strategy for brands to establish a direct connection with their customers. The key to success in this competitive environment lies in delivering personalised experiences that resonate with individual consumers. In this article, we will explore the significance of personalisation for D2C marketing and how it can be leveraged to drive engagement, loyalty, and ultimately, business growth.

What is Personalisation in D2C Marketing?

Personalisation in D2C marketing refers to tailoring marketing efforts, product recommendations, and offers to meet the unique needs and preferences of individual consumers. It goes beyond simply addressing customers by their first names or segmenting them based on general demographics. True personalisation involves leveraging data and insights to create meaningful, one-to-one interactions with consumers.

Leveraging Data for Personalisation in D2C Marketing

Data lies at the heart of personalisation for D2C marketing. Through advanced analytics and tracking tools, brands can gather valuable information about customer behaviour, preferences, and purchase history. This data can then be used to create intelligent customer insights, allowing marketers to understand their audience better and anticipate their needs.

To effectively leverage data for personalisation, brands need to invest in robust customer relationship management (CRM) systems. These systems can collect, organise, and analyse data from various touchpoints, such as websites, social media platforms, and email marketing campaigns. By gaining a comprehensive view of each customer’s journey, brands can deliver smart campaigns with highly personalised experiences at every interaction.

Tailoring Product Recommendations and Offers

One of the most effective ways to implement personalisation for D2C marketing is by tailoring product recommendations and offers. By analysing customer data, brands can understand the preferences, purchase history, and browsing behaviour of individual customers. Armed with this knowledge, they can deliver relevant product recommendations that align with the customer’s interests and needs.

For example, a skincare brand can use a personalisation engine to suggest specific products based on a customer’s skin type, previous purchases, or even the climate of their location. By delivering personalised recommendations, brands not only provide customers with a smarter shopping experience but also increase the likelihood of conversion and repeat purchases.

Enhancing Customer Engagement and Loyalty

Personalisation in D2C marketing goes beyond transactional interactions. It creates opportunities for brands to foster meaningful connections with their customers, ultimately leading to increased engagement and loyalty.

Through personalised email marketing campaigns, brands can deliver tailored content and offers directly to their customers’ inboxes. By addressing customers by name and delivering relevant information based on their preferences, brands can build trust and strengthen the customer-brand relationship. Moreover, personalisation can be extended to social media interactions, where brands can target customers with selective content and personalised messaging.

Future of Personalisation For D2C Marketing

As technology continues to evolve, the future of personalisation for D2C marketing looks promising. Advancements in artificial intelligence and machine learning enable brands to gather and analyse vast amounts of data in real time, allowing for even more precise and timely personalisation.

Chatbots and virtual assistants are becoming increasingly sophisticated, providing personalised recommendations and customer support. Augmented reality (AR) and virtual reality (VR) technologies offer immersive experiences, allowing customers to virtually try products before making a purchase decision.

Moreover, the rise of Internet of Things (IoT) devices enables brands to gather data from various touchpoints, including wearables and smart home devices. This interconnected ecosystem opens up new possibilities for personalisation for retail, allowing brands to deliver seamless, personalised experiences across different platforms and devices.


Personalisation has become a powerful tool for D2C marketing. By utilising data and advanced analytics, brands can tailor their marketing efforts to the unique needs and preferences of individual customers, creating meaningful connections, driving business growth, and providing exceptional customer experiences. The future holds great potential, as brands can leverage emerging technologies to stay ahead of consumer expectations. By incorporating personalisation into their strategies, brands can boost visibility, engagement, and loyalty, ultimately leading to long-term success in the competitive D2C marketplace.