If there’s one thing you take from this, please make it this: nobody is measuring their marketing perfectly, and few are doing it even particularly well. Marketing attribution is challenging. There is no silver-bullet solution, and there is no single right answer. All we can do is use all the data at our disposal to strike a balance between art and science and make the most informed decisions possible.
This means taking in conversion data from many sources, but not blindly making decisions on it. Conversion data must be paired with an understanding of how customers interact with a given marketing touchpoint and of each user’s unique path to conversion.
So ultimately, perfect conversion attribution is impossible, but let’s see how close we can get by running through some concepts.
The most important and least understood concept in digital marketing today is the notion of incrementality – which is the degree to which an advertising touchpoint actually influenced a conversion event.
An example: you receive an email with a great offer and you’re about to buy the product it contains, but you pause to check the score on ESPN before checking out. While checking the score you see a display ad for your product. You then check out – that display ad from ESPN takes credit, but what did it actually do to influence the conversion? You not only paid for a worthless impression, you’re also being told that impression drove the conversion (which incentivizes you to spend further on that campaign).
A second example: you see an ad on Instagram for a new watch – you head to the website and are ready to purchase, but realize that perhaps you can find a coupon online. You head to RetailMeNot, find a 10% off code, and go to check out. In GA’s standard last-click setup, Affiliates (RetailMeNot) gets credit for this conversion. Not only did that affiliate not drive the conversion, it actually reduced your product margin by 10% (or more, depending on the payment model)!
The purpose of these two examples is to illustrate that while we want our platforms to track how impactful an ad was in influencing someone to buy something, what we’re actually measuring is an ad’s ability to get in front of someone before they convert.
Criteo has an entire business built on this concept. Ever notice how the second you put something in your shopping cart and browse the internet elsewhere, you’re bombarded by ads? Every platform is incentivized to get in front of you before you convert to take credit for that conversion.
Thankfully, there are a couple of ways to mitigate this rampant conversion attribution problem:
Test and Control Methodology (holdout groups)
The most straightforward method to understand conversion lift is through the use of test and control groups. Say you have a direct mailer with an audience of 100,000 of your best customers. You only send the mailer to 50,000 of those customers, and the remaining 50,000 are left alone. In order to measure the incremental lift, you simply take the total revenue driven by the mailed group in the period following the mailer (let’s say $15,000) and compare that against the revenue that was organically driven by the non-mailed group (let’s say $10,000). The incremental revenue driven by the mailer is thus $5,000 ($15,000-$10,000) – this figure can then be compared against the cost of the program to determine the true incremental return on ad spend (incremental revenue / cost).
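To make the arithmetic concrete, here’s a minimal sketch of that calculation in Python – the revenue figures mirror the example above, while the program cost is a hypothetical number added purely for illustration.

```python
# Hypothetical figures mirroring the direct mail example above.
mailed_revenue = 15_000    # revenue from the 50,000 customers who received the mailer
holdout_revenue = 10_000   # organic revenue from the 50,000 held-out customers
program_cost = 2_500       # assumed cost of printing and postage (illustrative only)

# Incremental revenue = revenue from the mailed group minus the organic
# baseline established by the holdout group. (Groups are the same size here;
# otherwise normalize to revenue per customer first.)
incremental_revenue = mailed_revenue - holdout_revenue

# True incremental return on ad spend = incremental revenue / cost.
incremental_roas = incremental_revenue / program_cost

print(f"Incremental revenue: ${incremental_revenue:,.0f}")
print(f"Incremental ROAS: {incremental_roas:.2f}x")
```

The same calculation applies to any 1:1 addressable channel, provided the test and control groups are comparable in size and composition.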
Test and control groups sound great on paper, but how actionable are they? Typically they’re best used in tactics that have clear 1:1 addressable targeting, which makes direct mail and email the best venues. The one caveat is that you need a sufficient sample size to get a statistically significant read; if a handful of orders is the difference between a given test reading positive or negative, your sample size isn’t large enough. This incremental measurement tactic can also be used in the cookied realms (Paid Social, Programmatic Display); however, executing on it is much easier said than done. Cookies are much less static than many give them credit for, and cookie pollution (which happens when users end up in both the test and control groups) occurs more than most realize. Further, buying ads for control groups is expensive – in some targeting scenarios you’ll typically have to pay for PSA ads to get a read, which adds roughly 3% to 15% of wasted spend to your program.
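To put the sample-size caveat in more concrete terms, here’s a rough sketch of a standard two-proportion z-test applied to a holdout test – the order counts below are hypothetical, and this is a generic statistical check rather than anything tied to a specific platform.

```python
import math

# Hypothetical holdout test results.
mailed_customers, mailed_orders = 50_000, 1_100     # test group
holdout_customers, holdout_orders = 50_000, 1_000   # control group

p_test = mailed_orders / mailed_customers
p_control = holdout_orders / holdout_customers

# Two-proportion z-test: is the observed lift bigger than noise would explain?
p_pooled = (mailed_orders + holdout_orders) / (mailed_customers + holdout_customers)
std_err = math.sqrt(p_pooled * (1 - p_pooled) * (1 / mailed_customers + 1 / holdout_customers))
z = (p_test - p_control) / std_err
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value under the normal approximation

print(f"Conversion rate lift: {p_test - p_control:.4%}")
print(f"z = {z:.2f}, p-value = {p_value:.3f}")
print("Significant at 95%" if p_value < 0.05 else "Not significant – sample too small or lift too weak")
```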
Facebook’s Lift Measurement app would theoretically mitigate these issues to a degree, but it has its own set of problems. For one, it requires an enormous investment over an extended period of time to generate any real results. Secondly, it’s only available as a snapshot in time, and unless you’re willing to invest significant resources into it, it won’t be seasonally accurate. An incremental lift percentage measured at one point in time and at a certain level of spend should not be applied to different periods, where so many factors can change.
In all, most ecommerce channels (save for direct mail and email) really aren’t ready for test and control to be a viable source of attribution at scale. There are instances where it can work (and it’s in every digital marketer’s best interest to keep testing for better approaches than the status quo), but we’re still a long way from the industry really figuring this out.
Multi-Touch Attribution (MTA) Methodologies
A significant number of ad tech companies claim to have the latest and greatest MTA offering; these claims should still be taken with an enormous grain of salt.
MTA can be somewhat complicated, but it more or less boils down to this: every user has their own unique set of interactions with a brand, each of which can be tracked individually. For many brands this equates to millions of touchpoints daily.
Let’s say 100 users click through a paid search ad > click through a paid social ad > click through an affiliate ad, then either purchase or drop off. Another group of 100 users click through a paid search ad > click through a paid social ad, then either purchase or drop off. The only difference between these two groups is the presence of that last affiliate ad. If the group with the affiliate touch purchased 50% of the time, and the group without it purchased 25% of the time, it could then be assumed that the incremental 25 percentage points of purchase likelihood were driven by that affiliate link. This is, in essence, how multi-touch attribution works; in practice, the process is scaled up to millions of touchpoints across many channels and tactics.
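As a toy illustration of that logic, the sketch below compares conversion rates between otherwise-identical paths with and without the affiliate touch – the journeys are fabricated to mirror the 100-user example above, and real MTA models use far more sophisticated statistical machinery than this.

```python
from collections import defaultdict

# Hypothetical tracked journeys: (ordered touchpoint path, converted?),
# constructed to mirror the two 100-user groups described above.
journeys = (
    [(("paid_search", "paid_social", "affiliate"), True)] * 50
    + [(("paid_search", "paid_social", "affiliate"), False)] * 50
    + [(("paid_search", "paid_social"), True)] * 25
    + [(("paid_search", "paid_social"), False)] * 75
)

# Conversion rate per unique path.
totals, conversions = defaultdict(int), defaultdict(int)
for path, converted in journeys:
    totals[path] += 1
    conversions[path] += converted

rate = {path: conversions[path] / totals[path] for path in totals}

with_affiliate = ("paid_search", "paid_social", "affiliate")
without_affiliate = ("paid_search", "paid_social")

# Incremental credit for the affiliate touch: the difference in conversion
# rate between otherwise-identical paths with and without it.
lift = rate[with_affiliate] - rate[without_affiliate]
print(f"Path with affiliate:    {rate[with_affiliate]:.0%}")
print(f"Path without affiliate: {rate[without_affiliate]:.0%}")
print(f"Lift credited to affiliate touch: {lift:.0%}")
```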
Here’s the problem with MTA solutions: although they sound great on paper, it turns out that actually tracking all these unique ad exposures is really, really difficult. The walled gardens (Facebook and Google specifically) don’t allow third-party impression trackers, so you’re stuck using click data in situations where the view is more impactful. Affiliates and influencers are incredibly challenging to track, particularly when their messaging is presented organically. Cookied environments (display, for example) are volatile, and people-based third parties trying to build independent identity graphs (like LiveRamp) are a long way from a 100% match rate. Word of mouth and the presence of physical stores remain a major black hole for data. Coupon blogs and retention marketing take credit for an inordinate number of conversions that shouldn’t count – and segmenting below the channel level creates sample size issues.
There are solutions (largely based on cross-device identity resolution and RFID in the physical world) that are making strides toward making MTA a more reliable medium, but we’re still a long way from these being truly actionable.
This may sound pessimistic, as our two best shots at understanding incrementality (test/control methodologies and MTA) are riddled with holes, but these challenges are what make marketing attribution an interesting journey. We still have to combine the data (at least as much of it as we have) with the art of understanding the customer journey to get an idea of where our dollars should go next. Although we’re far from perfect, at least we still have jobs – the robots haven’t taken over yet!