Attribution modelling has long been used by companies to make sure their marketing budgets are being spent in the best way.
Now, we may not officially be in a recession, but times are tough and budgets continue to be squeezed on all fronts, especially marketing.
So, agencies and brands need to be more focused than ever on how to ensure their marketing is performing effectively amid difficult times.
Below, Luke Regan, VP and Managing Partner at performance marketing agency DAC, takes us through why the popular methods of attribution modelling may no longer be the best way to gauge your marketing effectiveness, and offers some worthy alternatives…

Marketers are under more pressure than ever to improve results year on year. Meanwhile, marketing spend is reduced or not adjusted in line with inflation.
It comes as no surprise, then, that marketers often look to attribution modelling and ROAS (Return On Ad Spend) analysis to optimise – and justify – spend, measure the impact of campaigns and inform future decisions.
AdRoll found that 89% of UK marketers believe that attribution modelling is important to their marketing efforts.
Yet only four in 10 feel confident in their ability to accurately measure it. I think we have very different definitions of ‘accurate’.
Accurate attribution modelling is impossible and, used in isolation, it simply cannot tell the full story when it comes to marketing effectiveness.

There is a better way, in theory, if we were to hold our own industry to the rigour of medical science. Checks and measures such as peer review, double-blind studies, declarations of interest and repeatability, combined with media mix modelling, would offer a much more realistic picture of the overall impact of multiple marketing channels.
Put simply, we need to bring a blend of analyses and full transparency to marketing.
However, the reality is it’s far more convenient to focus on a single approach that suits the agency’s narrative, even if it may not be the best use of the client’s budget.
The issue with attribution modelling
Identifying the touchpoints a customer has interacted with to map their journey is a useful exercise. But whether you focus on first touch, last touch or somewhere in between, it’s impossible to say for sure that any single interaction is the overriding factor in a consumer making a purchase.
After all, there are myriad factors across multiple online and offline channels that could account for why a purchase is made at a particular time, many of which marketers are not privy to.
If someone books a holiday resort and a marketer can see that they’ve recently been served a digital display ad for the product, it’s easy to assume the ad was the key influence in the purchasing decision.
But perhaps the person has been aware of the brand for months, and has just had lunch with a friend who was singing the praises of the resort in question.
With so many variables in play, the influence of any particular touchpoint – including first/last touch – is impossible to quantify. Correlation does not equal causation.
As such, taking industry studies as read (particularly those sponsored by publishers) would be short-sighted. An over-reliance on web analytics or media mix modelling without incrementality experiments will typically yield misleading results – no matter what anyone tells us.
MMM and beyond
While it’s certainly not a panacea, media mix modelling is a sensible starting point for those spending an appreciable amount (for those with a low spend per channel, the sample size could well be too small).
First, an organisation needs to decide what key metric they’re modelling for. The most appropriate choice varies by brand and customer journey. For some it might be sales or leads, whereas for others it could be footfall.
Next, it’s a case of plotting that metric on a time series, establishing a baseline and factoring in anything else that could have influenced a fluctuation.

This means starting with the ad spend by channel and adding any price promotions, plus external factors such as major weather or news events, consumer confidence – anything likely to have influenced a change beyond media spend.
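As a rough illustration of that baseline-building step, here is a minimal sketch in Python. The file name, column names, channel list and the use of a plain OLS regression are assumptions for the example, not a description of any particular agency’s model.

```python
# Minimal media mix modelling sketch: regress the chosen KPI (here weekly sales)
# on ad spend per channel plus promotions and an external factor, to estimate a
# baseline and the average contribution of each driver. File and column names
# are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

# Hypothetical weekly data: one row per week with the KPI, spend per channel,
# a price-promotion flag and an external index (e.g. consumer confidence).
df = pd.read_csv("weekly_performance.csv", parse_dates=["week"])

kpi = "sales"  # the metric being modelled
drivers = [
    "search_spend", "display_spend", "social_spend",  # media spend by channel
    "promo_flag",                                      # price promotions
    "consumer_confidence",                             # external factor
]

X = sm.add_constant(df[drivers])   # the constant term acts as the organic baseline
y = df[kpi]

model = sm.OLS(y, X).fit()
print(model.summary())

# Rough decomposition: how much of the KPI each driver accounts for on average.
baseline = model.params["const"]
contribution = (df[drivers] * model.params[drivers]).mean()
print(f"Estimated weekly baseline: {baseline:,.0f}")
print(contribution.round(0))
```

In practice a media mix model would usually add adstock (carryover) and saturation transformations to the spend variables, but even a simple decomposition like this makes the baseline explicit before any experiment is run.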
Once this baseline has been established, it becomes much easier to factor out influences beyond media spend that are impacting ROAS by running comparative experiments across different geographical locations.
Surgical accuracy
These local experiments replicate the repeatable, double-blind element of a medical trial to further increase the accuracy of findings.
It’s easiest to illustrate how they work with a practical example, so let’s consider brand search and whether or not it’s incremental.
A brand could identify pairs of locations in the UK with a similar yield/sales contribution over a three-month period.

Then, run an A/B test by switching off brand search in the ‘A’ locations while keeping it on in the ‘B’ locations for two to three months.
Do you see any uplift in the B locations, or any degradation in the A locations, versus the established baseline?
If there’s no degradation in the A locations and no consistent uplift in the B locations, it would be fair to conclude that brand search isn’t incremental for that particular brand. That element of the spend can be reallocated elsewhere.
Don’t forget to re-run the test at some point as dynamics change year to year.
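To make the mechanics concrete, here is a minimal sketch of how such a matched-geo test could be evaluated. The file name, column names, date windows and the simple pairing rule (ranking locations by pre-period sales) are illustrative assumptions, not a prescribed methodology.

```python
# Minimal matched-geo incrementality sketch: group locations on pre-period sales,
# switch brand search off in the 'A' half, keep it on in the 'B' half, then compare
# each group's test-period sales against its own pre-period baseline.
import pandas as pd

# Hypothetical data: one row per location per week with a sales figure.
df = pd.read_csv("geo_weekly_sales.csv", parse_dates=["week"])

PRE_START, PRE_END = "2024-01-01", "2024-03-31"    # three-month matching window
TEST_START, TEST_END = "2024-04-01", "2024-06-30"  # two-to-three-month test window

pre = df[df["week"].between(PRE_START, PRE_END)].groupby("location")["sales"].sum()
test = df[df["week"].between(TEST_START, TEST_END)].groupby("location")["sales"].sum()

# Pair locations with the closest pre-period sales: sort, then take alternating
# entries as the 'A' (brand search off) and 'B' (brand search on) groups.
ranked = pre.sort_values().index.tolist()
group_a, group_b = ranked[0::2], ranked[1::2]

def lift(locations):
    """Test-period sales relative to the pre-period baseline for a set of geos."""
    return test[locations].sum() / pre[locations].sum() - 1

lift_a, lift_b = lift(group_a), lift(group_b)
print(f"A (brand search off): {lift_a:+.1%} vs baseline")
print(f"B (brand search on):  {lift_b:+.1%} vs baseline")
print(f"Estimated incremental effect of brand search: {lift_b - lift_a:+.1%}")
```

In a real test you would also compare each A location with its matched B partner and check the result for statistical significance before reallocating spend, but the principle is the same: measure against a pre-agreed baseline rather than crediting the channel by default.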
Performance marketing is defined by precision and provability. However, the means by which some draw their conclusions are questionable and, at times, murky.
If agencies and publishers want to prove their worth and demonstrate they are acting in the best interests of their clients, they need to become more open about methodologies and share their code with clients.
Equally, marketers that work with agencies need to understand these methods in order to ask the difficult questions.
If your agency can’t provide evidenced answers to show probity and transparency in their processes, it’s time to reconsider your choices.
Looking to the scientific disciplines for best practice is a good way for us all to demonstrate a commitment to openness and critical thinking.
Opening up our inner workings to scrutiny is generally not how things are done in marketing, but it would be a significant statement of intent if the most responsible agencies were willing to collaborate through peer review.
With budgets challenged like never before, genuine transparency in how we all measure effectiveness would be more than welcome.