
In the remnant TV market, advertisers typically place media buys on a weekly basis with no guarantee of clearance. Furthermore, a spot that airs is seen by all viewers, unlike online ads in digital marketing, where random samples of viewers can be shown different ads. This makes a clean creative-test construct difficult to execute. What usually happens is that different creatives end up distributed across different networks, often with different spend. And when new creatives enter the mix, advertisers frequently find that some creatives aired on certain networks while the newer ones did not (or vice versa). The impact of network-rotation variability and an imperfect creative split should therefore not be ignored, as it can greatly distort the measured performance of creatives. This is what we refer to as ‘creative bias.’
In the illustrative graph above, Creative A outperformed Creative B, but Network A also outperformed Network B. When creative bias occurs, as in this case, it is hard to tell whether one creative outperformed the other because of a difference in messaging, or because the two aired on different networks and rotations. To allow an apples-to-apples comparison between TV creatives, it is therefore important to de-bias the effect of such factors.
At Tatari, we have implemented a model that de-biases performance in two major steps. First, comparisons between each pair of creatives are made across every network on which both creatives aired. This step is essential because it tells the model which attributes to weigh more heavily when calculating final performance. Second, differences in spend distribution are normalized to isolate the effect of the creative itself. This allows us to compare creatives that aired on different networks with different amounts of spend.
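To make the two steps concrete, here is a minimal sketch in Python/pandas. The data, the efficiency metric (cost per response), and the spend-based weighting scheme are all hypothetical assumptions for illustration; they are not Tatari's actual model or data.

```python
import pandas as pd

# Hypothetical airing log: spend and attributed responses per
# (creative, network) cell. Numbers are made up for illustration.
df = pd.DataFrame({
    "creative":  ["A", "A", "B", "B"],
    "network":   ["N1", "N2", "N1", "N2"],
    "spend":     [9000, 1000, 1000, 9000],
    "responses": [450, 20, 60, 300],
})
df["efficiency"] = df["spend"] / df["responses"]  # cost per response (lower is better)

# Step 1: head-to-head comparisons, restricted to networks where both
# creatives aired, so the network effect cancels within each pair.
pairs = df.merge(df, on="network", suffixes=("_a", "_b"))
pairs = pairs[pairs["creative_a"] < pairs["creative_b"]]
pairs["ratio"] = pairs["efficiency_a"] / pairs["efficiency_b"]

# Step 2: normalize away the unequal spend split by combining the
# within-network ratios under one common weight per network
# (here, total spend on that network).
weight = pairs["network"].map(df.groupby("network")["spend"].sum())
adjusted = (pairs["ratio"] * weight).sum() / weight.sum()
print(f"De-biased cost-per-response ratio, A vs. B: {adjusted:.2f}")
```

A ratio above 1 means Creative A costs more per response than Creative B once the network mix is held constant, regardless of how the two split their spend.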
Once our model completes these steps, we report creative performance in the form of adjusted efficiency. Adjusted efficiency shows advertisers how their creatives performed while minimizing the biases introduced by unequal spend distribution. As mentioned earlier, this bias is an inherent complexity of TV advertising and the nature of its media buying. True A/B testing is therefore not possible with linear TV, and advertisers who report creative performance without de-biasing can end up with incorrect conclusions.
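Continuing the sketch above, the snippet below places the de-biased ratio next to the raw pooled numbers. The reporting convention here (anchoring on Creative B's observed efficiency) is a hypothetical choice for illustration, not necessarily how adjusted efficiency is presented in practice.

```python
# Raw (pooled) cost per response, ignoring where the spend went.
totals = df.groupby("creative")[["spend", "responses"]].sum()
raw = totals["spend"] / totals["responses"]

# Adjusted view: anchor on Creative B and position A via the
# de-biased ratio computed in the sketch above.
print(f"Raw:      A={raw['A']:.2f}  B={raw['B']:.2f}")
print(f"Adjusted: A={raw['B'] * adjusted:.2f}  B={raw['B']:.2f}")
```

In this toy data the raw view favors Creative A, which concentrated its spend on the cheaper network, while the adjusted view favors Creative B, which was more efficient on every network where the two competed head to head. This reversal is exactly the kind of error that un-de-biased reporting can produce.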
