Attribution Is a Lie: How I Actually Measure Channel Value
I am going to say something that will upset some of my friends in the analytics world. Every marketing attribution model I have ever worked with — first-touch, last-touch, linear, position-based, data-driven, Markov chain, Shapley value, whatever — is a polite fiction that teams agree to believe because the alternative is uncomfortable.
The uncomfortable alternative is this: we cannot actually measure, with anything close to precision, which channel caused which conversion. We can approximate. We can tell rough stories. But the confident dashboard numbers that CMOs present to boards are mostly numerology.
I have spent 12 years making marketing decisions partly on the back of those numbers, and I now believe we should use them very differently than most teams do. This post is how I actually measure channel value, what I have stopped pretending about, and what I have found to be more useful than attribution.
Why Attribution Is Structurally Broken
Three honest reasons, each one enough on its own.
We cannot observe most of the journey. A customer sees your LinkedIn post on Tuesday, reads your blog on Thursday, hears your company mentioned on a podcast Friday, searches your brand on Saturday, and signs up after clicking a Google ad on Sunday. The attribution system credits the Google ad. The other four touches were invisible. This is not a dataset with some missing rows. This is a dataset with mostly missing rows.
Cross-device tracking degrades every year. Apple privacy changes, cookie deprecation, users on five devices, users who clear history. A conversion attributed to “direct” is often a conversion whose real first touch was a channel that got disconnected by the time the user came back. The data is not just incomplete — it is biased toward whichever channel happens to be the last identifiable touchpoint.
Incrementality is not the same as attribution. The conversion you attributed to Google Ads might have happened anyway because the user was already going to search your brand. Branded paid search often has attribution credit it does not deserve, because the alternative — an organic click on the same query — is invisible to the attribution report.
None of this is news to a sophisticated marketer. What continues to surprise me is how many teams still run their budget decisions on attribution dashboards anyway. The dashboards look precise. They feel like data. They are, in most cases, measurement theater.

What I Actually Do Instead
I use attribution as a weak signal, not a decision. My real budget decisions sit on top of three different practices. None of them are as clean as a dashboard. All of them are more honest.
Practice 1: Geo and Time Holdouts
If you want to know whether a channel is producing incremental customers, turn it off in a subset of geographies or for a defined period and compare the difference. This is old-school. It is also the closest thing marketing has to a controlled experiment.
Example from last year. We had paid social running across all US states. I was suspicious it was mostly serving ads to people who would have found us anyway. We turned off paid social in five midwestern states for 60 days. Baseline conversion volume in those states dropped about 6%. The cost savings were about 19% of our total paid social spend.
Attribution dashboards had been telling us paid social was driving 21% of signups. The holdout test said it was driving more like 6–7%. The dashboard was 3x overconfident. That gap is not unusual. It is typical.
Downsides: you need enough volume to see the signal. You need to pick non-distorted geographies. You need to run the test long enough that seasonality does not confound it. For small businesses, this is hard. For any company above a few million in ARR, it is entirely doable and enormously clarifying.
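The holdout arithmetic above is simple enough to sketch in a few lines. This is illustrative only: the function name and the volume figures are made up to mirror the example, not real numbers from the test.

```python
# Sketch of the holdout math from the paid social example above.
# All figures are illustrative stand-ins, not real data.

def incremental_share(baseline_weekly, holdout_weekly):
    """Fraction of conversions the channel actually drove, estimated
    as the relative drop in volume while the channel was paused."""
    return (baseline_weekly - holdout_weekly) / baseline_weekly

baseline = 1000        # weekly signups in holdout geos, channel on
during_holdout = 935   # weekly signups with paid social paused (~6.5% drop)

true_share = incremental_share(baseline, during_holdout)  # ~0.065
dashboard_share = 0.21  # what last-touch attribution claimed

overstatement = dashboard_share / true_share  # ~3.2, the "3x" gap
```

The point of writing it down is that the whole estimate hinges on two numbers you can defend in a finance review, instead of a model nobody can audit.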
Practice 2: Post-Signup Source Survey
The single most useful data point I collect on any product I work with is a free-text survey question after signup: “How did you hear about us?” No dropdown. No multiple choice. Just a text field that takes 15 seconds to answer and one click to skip.
About 40–55% of users answer it. What they write rarely matches what the attribution tool says. Someone who signed up via a Google search after clicking a Google ad will often write “a friend told me about you” or “I saw you on LinkedIn last month.” That is the true origin of the customer journey. The Google ad was the vehicle, not the cause.
Over the course of a year, these text responses build a picture of which channels are actually putting you in people’s minds — what I call demand creation — versus which channels are closing the last click, which is demand capture.
Most attribution systems overweight capture and ignore creation. The surveys help you see creation, which is usually where your real brand value is being built.
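A first pass at sorting those free-text answers into creation versus capture can be a simple keyword bucketer. The keyword lists below are my assumptions for illustration; in practice the long tail of answers needs manual review, and the unclassified bucket is where the interesting reading lives.

```python
# Naive sketch: bucket "How did you hear about us?" answers into
# demand creation vs. demand capture. Keyword lists are assumptions.
import string

CREATION_HINTS = {"friend", "colleague", "podcast", "linkedin",
                  "twitter", "conference", "newsletter"}
CAPTURE_HINTS = {"google", "googled", "search", "searched", "ad", "ads", "bing"}

def bucket(answer: str) -> str:
    words = {w.strip(string.punctuation) for w in answer.lower().split()}
    if words & CREATION_HINTS:
        return "creation"       # the channel put you in their mind
    if words & CAPTURE_HINTS:
        return "capture"        # the channel closed the last click
    return "unclassified"       # route to manual review

responses = [
    "a friend told me about you",
    "saw you on LinkedIn last month",
    "googled invoicing tools",
]
print([bucket(r) for r in responses])
# ['creation', 'creation', 'capture']
```

Note the ordering: creation hints win ties, because a user who mentions a friend and a Google search is telling you the search was the vehicle, not the cause.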
Practice 3: Channel Margin Analysis, Not Channel Revenue Analysis
Instead of asking “how much revenue did this channel produce,” I ask “how much net profit did this channel produce after every real cost, including the hidden ones.”
The hidden costs are the ones that kill most channels you think are profitable.
- Support cost per customer acquired through the channel
- Refund rate and bad-fit churn within 60 days
- Payment processing costs (which vary by user type)
- Internal ops cost of managing the channel (campaign management, creative production, agency fees)
- Opportunity cost of the people who work on this channel vs. other channels
When I have run this analysis on real teams, roughly half of the paid channels that looked profitable on a revenue basis were unprofitable on a net margin basis. The money was being consumed by costs the marketing team was not paying attention to because they were accruing elsewhere on the P&L.
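The revenue-versus-margin gap is easiest to see as a worked example. Everything here is hypothetical: the field names and the dollar figures are stand-ins, not numbers from any real engagement.

```python
# Hedged sketch of the net-margin view of a channel.
# All figures and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ChannelEconomics:
    revenue: int
    ad_spend: int
    support_cost: int   # support load from customers this channel acquires
    refunds: int        # refunds + bad-fit churn inside 60 days
    processing: int     # payment processing fees
    ops_cost: int       # campaign management, creative, agency fees

    def net_margin(self) -> int:
        costs = (self.ad_spend + self.support_cost + self.refunds
                 + self.processing + self.ops_cost)
        return self.revenue - costs

# A channel that looks profitable on a revenue-minus-ad-spend basis...
paid_social = ChannelEconomics(
    revenue=100_000, ad_spend=60_000,
    support_cost=12_000, refunds=18_000,
    processing=3_000, ops_cost=15_000,
)
print(paid_social.revenue - paid_social.ad_spend)  # 40000: looks fine
print(paid_social.net_margin())                    # -8000: losing money
```

The marketing dashboard only ever shows the first number. The second one requires pulling costs from parts of the P&L the marketing team does not usually see, which is exactly why the channel survives.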

The Attribution Models I Still Use, Gently
I have not thrown attribution out entirely. I use it for two specific things.
Trend detection. If my last-touch paid search number has been 1,200 conversions per week for six months and then drops to 900, something changed. The absolute number is imprecise but the direction is informative. Attribution dashboards are good at telling you when something moved. They are bad at telling you why.
Creative and campaign comparison within a channel. Within paid search, comparing the performance of two ad groups using the same attribution logic is internally consistent. The absolute ROAS may be wrong, but the relative ranking is usually right. Use attribution to pick the better creative, not to allocate budget across channels.
That is the whole honest use of attribution in my current practice. Everything else is pageantry.
What to Say When Your CFO Asks for ROAS by Channel
This is the hardest part. You cannot walk into a finance review and say “attribution is a lie.” You will not have that job very long.
What I say instead, and what has worked, is this: “Here is the attribution view, for trend reference. Here are the three incrementality tests we have run this quarter, which are the basis for our budget recommendations. The attribution numbers and the incrementality numbers tell different stories on these three channels, and the incrementality numbers are the ones we are acting on.”
This language is diplomatic. It also shifts the conversation. Over time, the CFO starts to trust the incrementality tests more than the attribution dashboard. The dashboard becomes a reference tool rather than a decision tool. That is the goal.
If you cannot run incrementality tests because your volume is too small, the honest answer is that you are doing channel portfolio management on intuition supplemented by weak signals, and you should say so. Pretending otherwise corrupts every downstream decision you make.
The Framework I Leave Teams With
When I finish a consulting engagement, I usually leave the marketing team with a decision filter that has three questions, in order. We call it the “attribution override.”
- What does the attribution dashboard say about this channel? This is context, not conclusion.
- What does the most recent incrementality test or holdout tell us? If there is a conflict with question 1, question 2 wins.
- What do the post-signup surveys say about the role this channel plays in the journey? If question 3 conflicts with questions 1 and 2, you need to have a demand-creation versus demand-capture conversation.
This is not a magic formula. It is a discipline. Teams that run it quarterly make better budget decisions than teams that rely on the dashboard alone. Not because the framework is brilliant but because it forces them to acknowledge that attribution is a story, not a measurement.
That acknowledgment is half the battle. Once a marketing team internalizes that the dashboard is a convenient fiction, they start demanding better evidence for big decisions. And better evidence is what actually distinguishes the teams that compound over years from the teams that keep chasing whatever channel last looked good on a slide.
Written by
Marcus Webb
Marketing strategist with 12+ years of experience. I test tools so you do not waste money on software that does not deliver. More about me →