Can Attention Metrics Quantify the Value Gap Between TV and Digital Video?

Tim Cross 10 May, 2023 

Last month Lumen and TVision, two leaders in the attention measurement space, announced a new partnership which combines their two datasets, enabling advertisers to plan and track their campaigns based on attention data collected by the two companies.

While both businesses measure attention, TVision’s technology specialises in TV and CTV advertising, whereas Lumen focuses on desktop and mobile. Thus combining the two datasets gives advertisers a broad view of attention across the TV and digital elements of their campaigns. But this throws up a big question – is attention paid to a TV ad worth the same as attention paid to a YouTube ad? And as agencies and advertisers look for metrics which allow fair measurement across all forms of video, can attention help capture the differences between different mediums?

VideoWeek spoke with Mike Follett, managing director at Lumen Research, and Yan Liu, CEO and co-founder at TVision, to hear more about the differences in measuring attention on big and small screens, what really drives attention, and how agencies are using attention data.

What are the differences between Lumen’s attention data and TVision’s attention data?

Mike: Lumen and TVision basically do the same thing, but for different screens. So Lumen has developed technology that turns your phone or your desktop computer’s webcam into an eye-tracking camera. That means that we can work out what people are looking at when they’re looking at a screen. We understand what’s on the screen, and whether or not people looked at it.

TVision does a similar thing, but for TV. And attention measurement for TV is both simpler and more complicated! It’s simpler because when the ads are on, they fill the screen, so you don’t need to measure which exact part of the screen people are looking at. But the ways in which people watch TV are far more varied. When someone is online, we know they’ve got their phone or their computer screen in front of their face – things are a bit more complicated with TV. So the technology which TVision uses is slightly different.

But ultimately, the two datasets are exactly comparable.

Yan: Essentially, we have almost identical technologies and methodologies. The big difference is really the size of the screen, and each different screen has its uniqueness in terms of the measurement challenge. But the fact that we do the big screen and Lumen does the small screen means this is a really complementary partnership.

What’s the benefit to the market of bringing these datasets together?

Mike: We’ve been working together for a long time already, and listening to our customers as to what they want. One big part of what clients have been asking for is planning data. They’ve got a million pounds to spend, and they want to know how much they should put on TV, how much on YouTube, how much into app A versus app B.

So we provide agencies with planning data so they can make decisions before they spend any money at all. Companies like Dentsu, Havas, IPG and others will take our data and put it into their planning tools, and this gives them cross-media attention data so their planners and buyers can make informed decisions.

However, in addition to that, lots of clients have also been asking for campaign-specific reporting data. So the second thing we’ve done, alongside the planning data, is create a tag, similar to the tags viewability vendors provide, that can sit on digital campaigns, on CTV campaigns and on social media campaigns. We ingest data on each impression and apply a predictive model of attention, built on our attention data, to that individual campaign data. And this second area, where we can take CTV data and compare it to Facebook and YouTube on desktop and mobile, is really where we’re seeing the fastest growth for this joint venture.
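To give a rough sense of the kind of tag-based scoring Mike describes, here is a minimal sketch only: the field names, model weights and numbers below are hypothetical illustrations, not Lumen’s or TVision’s actual system. A predictive attention model applied to impression-level data could look something like this:

```python
# Hypothetical sketch: scoring impression-level log data with a
# pre-trained predictive attention model. Field names, weights and
# numbers are illustrative only, not Lumen's or TVision's system.
from dataclasses import dataclass

@dataclass
class Impression:
    platform: str             # e.g. "ctv", "youtube", "facebook"
    player_size_pct: float    # share of the screen occupied by the player (0-1)
    audio_on: bool
    time_in_view_secs: float  # viewability-style dwell time

def predicted_attentive_seconds(imp: Impression) -> float:
    """Toy model: estimate attentive seconds for a single impression."""
    # Baseline attention rates per platform (made-up numbers).
    base_rate = {"ctv": 0.65, "youtube": 0.40, "facebook": 0.20}.get(imp.platform, 0.25)
    audio_boost = 1.2 if imp.audio_on else 1.0
    size_factor = 0.5 + 0.5 * imp.player_size_pct
    return imp.time_in_view_secs * base_rate * audio_boost * size_factor

campaign = [
    Impression("ctv", 1.0, True, 30.0),
    Impression("youtube", 0.6, True, 12.0),
    Impression("facebook", 0.3, False, 4.0),
]
total = sum(predicted_attentive_seconds(i) for i in campaign)
print(f"Predicted attentive seconds for campaign: {total:.1f}")
```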

Yan: There’s a huge debate in the industry around the quality of impressions – how can you compare one impression on Facebook or TikTok versus one impression on linear TV or CTV?

Most people would agree that since TV is on a big screen, the audio is on, and there’s no interruption, that’s higher quality. But when you try to quantify the difference in quality, there’s a lot of disagreement! People working on the TV side might say a TV impression is one hundred times more valuable than a TikTok impression, and people working on the social side might say a TikTok impression is worth half as much as TV.

Our cross-platform attention matrix can really help the industry to bridge the gap. Ultimately, the reason why we run advertising is to get audiences’ attention. So if we can use attention as the qualifier, and then quantify how many seconds of attention each medium is getting, I think that’s very fair to every single media platform. Then brands can make their own choices based on the data we provide.
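As an illustration of attention as a common currency, the sketch below expresses each platform in attentive seconds per thousand impressions and cost per thousand attentive seconds. The figures are invented for the example, not TVision or Lumen data.

```python
# Hypothetical sketch: attentive seconds as a cross-platform currency.
# All numbers are illustrative only.
platform_stats = {
    # platform: (avg attentive seconds per impression, CPM in pounds)
    "linear_tv": (10.0, 18.0),
    "ctv":       (8.0, 25.0),
    "youtube":   (3.0, 9.0),
    "tiktok":    (1.5, 6.0),
}

for platform, (att_secs, cpm) in platform_stats.items():
    attentive_secs_per_1000_imps = att_secs * 1000
    # Cost per 1,000 attentive seconds lets a planner compare value across media.
    cost_per_1000_att_secs = cpm / att_secs
    print(f"{platform:10s}  {attentive_secs_per_1000_imps:>8.0f} att-secs per 1,000 imps   "
          f"£{cost_per_1000_att_secs:.2f} per 1,000 attentive seconds")
```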

Mike: I think the market has an intuitive understanding that a cinema impression is better than a TV impression, a TV impression is better than a YouTube impression, and so on. And in large part, I think those intuitions are right. But the question is how much better. And given that lots of agencies now have AV buying departments rather than TV buying departments, being able to compare these things in an accurate and dependable way seems to be of real value.

Can attention metrics fully quantify the differences between different mediums? For example, if a TV impression and a social video impression achieve the same attention score, should they be of equal value to advertisers?

Mike: Yan mentioned the equalising impact, and that’s part of what we provide – showing the differences in the sheer volume of attention on each platform. But the other important piece is to be able to link this stuff to outcomes. That’s why predictive models of attention are so important.

Lots of clients have asked us, is ten seconds of attention on TV worth the same as ten seconds of attention on YouTube? With a tag on an ad, once you’ve measured how much attention each ad got, you can survey those people who gave one second of attention, five seconds of attention, ten seconds of attention and so on, and measure outcomes. Are they more or less likely to remember the campaign, are they more or less likely to think positively about the brand?

This is why you can’t just use this data for the planning side; it’s important to use it for this campaign-specific, tag-based approach. With a tag you can work with a company like Dynata, Upwave, Lucid or whoever else, and launch brand uplift studies to measure the relationship between attention and outcomes.
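A minimal sketch of the kind of analysis Mike describes, using entirely illustrative data: survey respondents are bucketed by the attentive seconds measured for their exposure, and recall rates are compared across buckets.

```python
# Hypothetical sketch: linking measured attention to survey outcomes.
# Each record pairs the attentive seconds measured for a respondent's
# exposure with whether they later recalled the campaign in a survey.
# Data and bucket boundaries are illustrative only.
from collections import defaultdict

responses = [
    (0.5, False), (1.2, False), (2.8, True), (4.9, False),
    (5.5, True), (7.3, True), (9.8, True), (12.0, True),
]

buckets = defaultdict(list)  # attention bucket -> list of recall flags
for att_secs, recalled in responses:
    if att_secs < 2:
        bucket = "0-2s"
    elif att_secs < 5:
        bucket = "2-5s"
    else:
        bucket = "5s+"
    buckets[bucket].append(recalled)

for bucket in ("0-2s", "2-5s", "5s+"):
    flags = buckets.get(bucket, [])
    rate = sum(flags) / len(flags) if flags else 0.0
    print(f"{bucket:5s}  n={len(flags):2d}  recall rate = {rate:.0%}")
```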

Yan and I were both at the Advertising Research Foundation event in New York recently, presenting on the link between attention and outcomes. We showed some work we’ve done with PwC demonstrating that link across media – there is a good link, it’s a fairly stable link, and it’s well attested by the likes of PwC. But it does vary from campaign to campaign.

What are the main factors which drive attention in CTV environments?

Yan: I think there’s a big difference between desktop or mobile and the big screen. One big difference is that people can leave the room in a big-screen environment; it’s a much more passive experience. It’s common for people to leave the room, come back in, leave again, come back in again. So the two factors we have to think about are whether people are present in the room, and once they’re in the room, whether their eyes are on the screen.

And we’ve found that presence in the room is mainly driven by the quality of the content that people are watching. People want to watch TV shows, not commercials: the only reason they’re in the room is because of the content. And if the content is good they tend to stay in the room, if it’s bad they may either switch channels or leave. That’s pretty simple.

The other key factor is the duration of the ad pod. If you run a ten-minute ad break, everyone will leave the room at some point. But in the US, we’re seeing a lot of TV networks introducing one-minute ad slots, and including a countdown timer showing how much time is left. That makes people stick around, because they know that the show is about to restart.

Those are the two main factors which affect people’s presence in the room – there are others, such as daypart, but these are the main two.

Then when we’re looking at eyes-on-screen attention, a lot of that comes down to creative. If you think about it, when someone is in the room, they’ll often be doing something else during the commercial break. They might be drinking a beer, texting their friends, browsing their phone, or something like that. But if they hear something interesting from the TV, or something catches their eye, that might grab their attention.

Obviously for eyes-on-screen attention, you have to be in the room, so the quality of content and length of the ad break are important here too. But if you’re in the room, whether someone looks at the screen or not really depends on the creative.

Yan, you mentioned that some US networks are designing shorter ad breaks, which perform better for attention. Are you both seeing sellers starting to develop ad formats which are specifically designed to perform well against attention metrics?

Yan: Yes, that’s definitely happening. People are starting to realise that if you extend the ad break, you’re not going to get additional audience attention. So what they’ve started to do is push more on native ad formats, like product placements inside the show, or picture-in-picture formats, which are popular over here. Those are common in sports during short breaks in play; instead of switching to a commercial break, broadcasters run ads in a small window at the bottom of the screen.

And these are all becoming popular because they sustain high levels of attention. I think that’s good for everybody – audiences want to watch TV shows, marketers want to spend money on ads, but they want to make sure that those ads get audience attention. Now there is an incentive for publishers to improve attention through ad innovation.

Mike: We’re certainly seeing it happen in the UK too. The sports with the shortest ad breaks tend to have the highest attention. Cricket broadcasts will often have a single ad between overs. People know that the next over will start soon, so there’s not enough time to get up and do something else while that ad is playing. And I know Yan found the same from TVision’s panel over in India. People really pay attention to ads when they know they’re going to miss content if they leave the room.

Are you seeing more interest from the buy-side in creative optimisation or media optimisation, when it comes to optimising for attention?

Mike: We see both really. Some clients come to us saying they’ve optimised creative, and now they want to look at media; sometimes it’s the other way around.

What we find is that across most of the media that we look at, the media accounts for about 70 percent of the effects that we see, and the creative accounts for around 30 percent. But that’s in general – a really good ad can give you a massive share of attention, but that’s quite rare. For us, media accounts for the bulk of attention, when we’re looking at sheer volume of attention.

Yan: We’re also seeing interest in both sides, and typically those questions are coming from different teams.

On the creative side, it’s not just about the quality of a creative and how it draws attention, but also frequency. It’s very expensive to create a TV commercial, and often advertisers will test an ad in a focus group, which is also expensive. But that only measures one exposure. You don’t know how consumers will behave after the second, third, fifth, or tenth exposure. It might be that audiences like an ad the first time they see it, by the third time it’s getting boring, and by the tenth time they hate it!

With our measurement, we can start to understand how frequency affects the creative, so that’s a new value proposition to the buyer. And the ability to optimise against that is really getting traction. But at the same time buyers understand that media plays a huge role in attention as well, so they have to work together.
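A simple sketch of the frequency analysis Yan describes, with invented exposure logs rather than TVision data: averaging attentive seconds by exposure number shows whether a creative is wearing out.

```python
# Hypothetical sketch: tracking how attention to one creative changes
# with exposure frequency (wear-out). Exposure logs are illustrative only.
from collections import defaultdict

# (household_id, exposure_number, attentive_seconds) for one creative
exposures = [
    ("hh1", 1, 9.0), ("hh1", 2, 7.5), ("hh1", 3, 4.0),
    ("hh2", 1, 11.0), ("hh2", 2, 8.0), ("hh2", 3, 5.5), ("hh2", 4, 2.0),
    ("hh3", 1, 8.5), ("hh3", 2, 6.0),
]

by_frequency = defaultdict(list)
for _, exposure_number, att_secs in exposures:
    by_frequency[exposure_number].append(att_secs)

for n in sorted(by_frequency):
    secs = by_frequency[n]
    print(f"exposure {n}: avg attentive seconds = {sum(secs) / len(secs):.1f}")
```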

Can attention metrics capture the things which make a piece of creative really stand out, when it comes to capturing attention?

Mike: I used to work at DDB, so I’ve been on the creative side. Like I say, really amazing creative ads can have an enormous effect. If you think about the Guinness surfers ad, that ad is over twenty years old, but it sticks in your memory because it’s an amazing ad. And ads like that do have an astonishing outsized effect on attention.

Ads like that are a rarity. And I think the creative learnings we get from attention measurement breed quite a lot of respect for that creative magic. You can’t necessarily create a formula for it. The nature of creative thinking is that you’re thinking in a different direction.

But it is worthwhile thinking about what’s possible within each medium. Often people take best practice for making a TV ad, and apply that to Facebook or TikTok. Or they take what works with a publisher, and apply it to TV. And what the data from TVision and Lumen can show is that there are different shapes to attention on each medium. You should try to make the best possible TV ad – and we can give you hints and tips, but not promise anything on the creative side there. But you should also try to make the best possible Facebook ad, which is different, there are different rules and best practices. And on that side of things, we can get quite scientific, and it becomes much more replicable.



About the Author:

Tim Cross is Assistant Editor at VideoWeek.