Automated bidding is nothing new. For years now, various third-party platforms and the engines themselves have claimed that sitting back and letting the engines do all the work of spending your budget is the best way to maximize conversions. The problem is that this has very rarely been beneficial for the advertiser.
At the end of the day, Google and Facebook are always going to have their own interests at heart. They’ll always say that driving conversions will influence customers to increase spend, creating a virtuous cycle where everybody wins. Making that happen, however, still requires a human touch – even when overseeing automated campaigns. Automated optimization makes it very easy to spend dollars; what we want it to do is spend dollars efficiently. To do that, we need to set the algorithm up for success.
There are two core factors to weigh when evaluating auto-optimization, and they hold universally across marketing channels: 1) how is the algorithm handling conversion attribution? 2) does the engine have sufficient data to accurately optimize into high-performing segments and out of low-performing segments?
1) As I covered in my post about conversion attribution and the concept of incrementality, the way a platform measures conversions is of enormous importance, and that holds true for automated campaign optimization. For example (and this applies to both Google and Facebook), if the engine only considers last-touch conversions when deciding which segments/keywords/audiences to adjust bids on, it is incentivized to target users who are as low down the funnel as possible. The core issue: under this scenario, we’ve incentivized the engine not to drive more incremental conversions, but simply to get in front of conversions that would likely have happened anyway. This is why frequency goes way up for add-to-cart retargeting and recent site abandoners, and way down for upper-funnel prospects who may ultimately be providing much greater incremental value. It’s the marketer’s job to ensure a machine-learning-driven algorithm is learning on the appropriate data.
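To make that concrete, here’s a tiny Python sketch – the segment names, conversion rates, and audience sizes are all hypothetical – showing how last-touch credit and a holdout-based lift estimate can point a bidder in opposite directions.

```python
# Illustrative only: hypothetical segments, conversion rates, and audience sizes.
segments = {
    "cart_abandoner_retargeting": {"last_touch": 400, "exposed_cr": 0.080, "holdout_cr": 0.072, "audience": 10_000},
    "upper_funnel_prospecting":   {"last_touch": 120, "exposed_cr": 0.015, "holdout_cr": 0.005, "audience": 50_000},
}

for name, s in segments.items():
    # Incremental conversions: what the ads actually caused, estimated as
    # (exposed conversion rate - holdout conversion rate) * audience size.
    incremental = (s["exposed_cr"] - s["holdout_cr"]) * s["audience"]
    print(f"{name}: last-touch credit = {s['last_touch']}, estimated incremental = {incremental:.0f}")

# A last-touch-trained bidder sees retargeting as the clear winner (400 vs. 120 credited
# conversions), while the lift math says prospecting drives far more truly incremental
# volume (roughly 500 vs. 80).
```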
2) Sample size and data sufficiency are of equal importance. Once you’ve decided to go the automated spend optimization route, it’s tempting to think it’s as easy as flipping the switch and waiting for the dollars to roll in. In reality, all you’ve done is change what you need to be working on. Instead of pulling reports, creating segments, and directing spend manually, your new job is to figure out how to provide the engine with the best possible data to do its job.
Let’s start with Google as an example. The thought exercise I consider most important is to think through which data Google has access to and how it can use that data to make bidding decisions. For example, Google has bid modifiers for gender, age, location, device, household income, and parental status. Your job is to ask: can I trust Google to take it from here on these variables, or do I know better than the machine? Gender and age are great examples – at first blush it seems safe to say that Google knows more than I do on that front, and I should let it do its job and keep my hands off manual bid adjustments. But what if elderly purchasers, while just as likely as young purchasers to make a first conversion, have a far higher lifetime value and repeat rate? Google doesn’t have access to that lifetime value information (or at the very least doesn’t optimize on it, because it’s only going after one-time conversions). In that scenario it may make sense to segment the different audience types and adjust the ROAS goal according to each segment’s expected LTV, as in the sketch below. If, on the other hand, your customers are largely homogeneous in their post-purchase activity, it may make sense to simply let Google handle any audience adjustments.
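As a rough illustration of that idea, here’s a short Python sketch – the segment names, LTV multiples, and base ROAS target are all hypothetical – of scaling a target-ROAS goal by each segment’s expected lifetime value relative to first-purchase value.

```python
# A minimal sketch: hypothetical segment names, LTV multiples, and base target.
BASE_TARGET_ROAS = 4.0  # the ROAS we'd require if every customer bought exactly once

# Expected lifetime value expressed as a multiple of first-purchase value.
ltv_multiples = {
    "purchasers_65_plus": 2.5,   # high repeat rate: each first purchase is worth 2.5x its face value
    "purchasers_18_to_34": 1.2,
}

for segment, ltv_multiple in ltv_multiples.items():
    # If the first purchase is really worth ltv_multiple times its face value,
    # we can accept a proportionally lower ROAS on that first purchase.
    adjusted_troas = BASE_TARGET_ROAS / ltv_multiple
    print(f"{segment}: target ROAS {adjusted_troas:.1f} (vs. {BASE_TARGET_ROAS} unadjusted)")
```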
Facebook is similarly challenging, but sample size restrictions are a far bigger concern there. To level set, Facebook recommends a minimum of 15 conversions per week per ad set for the algorithm to have any chance at auto-optimizing. I’d treat that as the absolute floor and be careful even at that level. So what should you do if your audiences aren’t meeting that threshold and you still want to leverage automated optimization? It’s time to consolidate.
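Auditing for this can be as simple as the sketch below. The ad set names and weekly conversion counts are made up, and nothing here touches any Facebook API – it just flags ad sets that fall under the floor mentioned above and are candidates for consolidation.

```python
# Hypothetical ad set names and weekly conversion counts; the floor comes from
# the 15-conversions-per-week recommendation discussed above.
WEEKLY_CONVERSION_FLOOR = 15

weekly_conversions = {
    "prospecting_lookalike_1pct": 42,
    "retargeting_30d_site_visitors": 11,
    "retargeting_atc_7d": 6,
}

below_floor = {name: count for name, count in weekly_conversions.items()
               if count < WEEKLY_CONVERSION_FLOOR}

if below_floor:
    print("Ad sets starving the algorithm of data (candidates to consolidate):")
    for name, count in below_floor.items():
        print(f"  {name}: {count} conversions/week")
    combined = sum(below_floor.values())
    print(f"Merged into one ad set they would deliver {combined}/week, "
          f"clearing the floor: {combined >= WEEKLY_CONVERSION_FLOOR}")
```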
I’ve seen measures as drastic as lumping your entire first-party data set into a single ad set prove effective. Facebook knows recency and frequency, it understands where in the conversion funnel your customer is, and it has more demographic and interest data than you can imagine – it’s safe to say it knows your customer. Where my trust falters, though, is attribution. Facebook is notorious for pushing spend to the lower funnel to drive view-through conversions, because it treats all conversions equally. This is why there’s still value in keeping some level of segmentation – you can infer (or know definitively through lift tests) how incremental each customer segment is, and leverage that knowledge to set frequency caps and restrict budget across the customer funnel where you see fit. How granular you want to go really depends on your business, your customer segments, and your campaign performance – it’s always worth testing, but this is why we marketers still have jobs! Applying nuance to the science is key.
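One way to operationalize that, sketched below in Python with entirely hypothetical lift-test numbers, is to convert each segment’s measured incremental conversions into a share of budget, then tighten frequency caps on the segments contributing the least.

```python
# Hypothetical lift-test results: conversions observed in the exposed group vs.
# the baseline implied by the holdout group for each segment.
lift_tests = {
    "recent_cart_abandoners": {"exposed": 500, "holdout_baseline": 450},
    "upper_funnel_prospects": {"exposed": 200, "holdout_baseline": 120},
}

# Incremental conversions per segment, and each segment's share of the total.
incremental = {name: t["exposed"] - t["holdout_baseline"] for name, t in lift_tests.items()}
total_incremental = sum(incremental.values())

for name, inc in incremental.items():
    budget_share = inc / total_incremental
    print(f"{name}: ~{inc} incremental conversions -> ~{budget_share:.0%} of budget; "
          f"tighten the frequency cap if this share is low")
```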