The Modern Art of Marketing

Ridho Hidayat

The digital advertising market is booming. During this century, Google’s estimated ad revenue has increased year after year, more often than not by a double-digit percentage, reaching 116 billion dollars in 2018 [7]. What’s more, worldwide ad spending was estimated to surpass 300 billion dollars in 2019 [1], [8]. Internet advertising revenue already exceeds that of other advertising media, as can be seen in Figure 1. Have companies found the crucial key to success, or is reality not what it seems? In this article, I take a closer look at the world of digital advertising. More specifically, I present the conventional methodology used to measure advertising effects and add some critical notes and alternatives to it.

Figure 1: Advertising revenue in the US, as seen in [6].

From Mad Men to Math Men

For centuries, the field of advertising had little to do with science. Mad Men ruled the business. These marketers could spend 3 million dollars on a Super Bowl advertisement without having a clue whether or not it was effective. In fact, they did not even care, as long as their sales targets were hit. As William Bernbach, a famous American advertising creative director, stated in 1947: “Advertising is fundamentally persuasion and persuasion happens to be not a science, but an art” [4]. However, with the rise of the Internet and the exponential growth of data, Mad Men made way for Math Men. These modern marketers, who may call themselves data consultants, aim to measure the effects of advertising by extracting information from data using mathematical models and other tools. According to a survey, shown in Figure 2, they mainly establish advertising effectiveness based on the sales uplift and the number of impressions they measure. This information is used to steer marketing campaigns and spend budgets more efficiently. So, does this mean that the field of advertising has shifted from art to science? Surprisingly, this does not appear to be the case. Marketers who study the effects of advertisements are often more interested in the results of a study than in its methodology. Here is the problem with that mindset: a flawed methodology leads to flawed results, and the methodology does indeed turn out to be flawed quite often. Consequently, marketing analytics is often more a form of modern art than a form of science.

Figure 2: A survey of measuring advertising effects, as seen in [5]

Selection effect versus advertising effect

So what are these flaws that many marketers neglect? We focus on one of the most important ones: selection effects are often not distinguished from advertising effects. To explain what this means, consider a simple example. Imagine that a restaurant hires two people, Alice and Bob, to hand out promotional coupons. After some time, it appears that Alice has a huge conversion rate: almost everyone who receives her coupons ends up eating at the restaurant. Bob, on the other hand, appears to be much less successful; his conversion rate is far lower. Does this mean that Alice is a much better salesperson and the entire marketing budget should go to her? From this information alone, it is impossible to draw that conclusion. Why? A possible explanation for the difference in conversion rates is that Alice was handing out her coupons in the lobby of the restaurant. What appeared to be conversions resulting from Alice’s promotional efforts were in fact conversions from people who would have eaten at the restaurant anyway, coupon or not. This is called the selection effect. Obviously, the selection effect should be distinguished from the advertising effect: the actual effect of exposure to an advertisement.
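To make the selection effect concrete, here is a minimal simulation sketch with made-up numbers (it is not taken from any of the cited studies). In this toy world the coupons have no causal effect at all: everyone eats at the restaurant with the same baseline probability. Alice simply hands her coupons to people who were already walking in, while Bob hands them to random passers-by, and that selection alone produces the gap in conversion rates.

```python
import random

random.seed(42)

def simulate(n=100_000, baseline=0.05, coupons=1_000):
    """Hypothetical illustration of a pure selection effect.

    Every person eats at the restaurant with probability `baseline`,
    regardless of coupons, so the true advertising effect is zero.
    """
    # True = this person was going to eat at the restaurant anyway
    population = [random.random() < baseline for _ in range(n)]

    # Alice stands in the lobby: she only reaches people who eat anyway
    alice_recipients = [p for p in population if p][:coupons]
    # Bob hands out coupons to random passers-by
    bob_recipients = random.sample(population, coupons)

    alice_rate = sum(alice_recipients) / coupons
    bob_rate = sum(bob_recipients) / coupons
    return alice_rate, bob_rate

alice_rate, bob_rate = simulate()
print(f"Alice's conversion rate: {alice_rate:.1%}")  # ~100%: pure selection
print(f"Bob's conversion rate:   {bob_rate:.1%}")    # ~5%: the true baseline
```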

One might think that this is common sense and that big companies already account for it. Reality is far from it. Take eBay as an example: the most successful form of advertising it used was brand keyword advertising. Any time someone searched for the word ‘eBay’ on Google, eBay paid Google to make sure that the top result was a link to eBay’s website. Marketers at eBay showed that this generated more than twelve dollars of revenue for each dollar spent [2]. However, they did not take the selection effect into account. So when eBay temporarily stopped this advertising, Steve Tadelis, a professor of economics, got the opportunity to analyse the effect on the number of visitors to eBay. The result of his analysis can be seen in Figure 3. It turned out that people who normally visited eBay through the paid link now simply visited eBay through the organic link. The true increase in revenue from advertising was therefore not twelve dollars per dollar spent, but only 37 cents. In fact, eBay lost 63 cents on every dollar it spent on this advertising. Its “most successful way of advertising” did not yield a profit of 245 million dollars; it actually cost eBay 20 million dollars. Nor does the eBay example appear to be unique: the same happens at many companies [2].
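For the per-dollar figures quoted above, the correction is just a subtraction; the 245 million and 20 million dollar totals additionally depend on eBay’s actual spend and margins, which are not given here, so this sketch sticks to the per-dollar view.

```python
# Per advertising dollar, using the figures quoted above
attributed_revenue = 12.00   # revenue eBay's own attribution credited to brand keyword ads
incremental_revenue = 0.37   # revenue the advertising pause showed the ads actually caused

naive_return = attributed_revenue - 1.00   # +$11.00 per dollar, on paper
true_return = incremental_revenue - 1.00   # -$0.63 per dollar, in reality

selection_share = (attributed_revenue - incremental_revenue) / attributed_revenue
print(f"Naive return per ad dollar: {naive_return:+.2f} USD")
print(f"True return per ad dollar:  {true_return:+.2f} USD")
print(f"Share of attributed revenue that was pure selection: {selection_share:.0%}")  # ~97%
```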

Figure 3: the selection effect of eBay’s advertisements, as seen in [2].

Randomized controlled trials

We have discussed an important pitfall in measuring advertising effects. This raises the question: how should we measure online advertising effects? Doing so does not necessarily require a highly sophisticated econometric model. Instead, all we need is a randomized controlled trial (RCT). To explain this methodology, we take Gordon, Zettelmeyer, Bhargava and Chapsky [3] as an example. In this study, large field experiments were conducted at Facebook to measure advertising effects. A randomized controlled trial is created by splitting a group of people into a test group and a control group using random assignment. The control group is never exposed to the advertisement, while the test group is eligible to see it. However, some people in the test group may still not be exposed to the advertisement, for instance because they do not access Facebook during the study period. Therefore, three different groups are actually observed: control-unexposed, test-unexposed and test-exposed.
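The assignment logic itself is simple. Below is a minimal sketch (with hypothetical user IDs and an assumed reach probability, not Facebook’s actual system) of how the three observed groups arise: a random split into control and test, followed by exposure that can only happen, but is not guaranteed, within the test group.

```python
import random

random.seed(0)

users = list(range(100_000))   # hypothetical user IDs
reach_probability = 0.37       # assumed share of test users who actually see the ad

# Random assignment into control and test
assignment = {u: ("test" if random.random() < 0.5 else "control") for u in users}

# Exposure is only possible in the test group, and even there not everyone
# is reached (e.g. they never open Facebook during the study period)
exposed = {u: assignment[u] == "test" and random.random() < reach_probability for u in users}

# The three groups that are actually observed
groups = {"control-unexposed": 0, "test-unexposed": 0, "test-exposed": 0}
for u in users:
    if assignment[u] == "control":
        groups["control-unexposed"] += 1
    elif exposed[u]:
        groups["test-exposed"] += 1
    else:
        groups["test-unexposed"] += 1

print(groups)
```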

Figure 4: Results from RCT, as seen in [3].

As can be seen in Figure 4, the conversion rate of the control group is 0.033%. The conversion rate of the entire test group is 0.045%, with the conversion rates of the test-unexposed and test-exposed groups being 0.025% and 0.079%, respectively. What we are mainly interested in is the effect of the advertisement on the people who are exposed to it. This is called the average treatment effect on the treated (ATT). To estimate this effect, we first estimate the intention-to-treat (ITT) effect: the effect of the advertisement on the people who are eligible to see it. To do this, we simply subtract the conversion rate of the control group from the conversion rate of the test group: 0.045% − 0.033% = 0.012%. The ATT is then estimated by dividing the ITT (0.012%) by the fraction of consumers who were exposed to the advertisement (37%), resulting in an ATT of 0.033%.
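The same calculation fits in a few lines. With the rounded rates shown here, the ATT comes out at roughly 0.032%; the 0.033% quoted above presumably follows from the unrounded rates in [3].

```python
# Conversion rates from Figure 4, in percentage points
control_rate = 0.033
test_rate = 0.045        # whole test group: exposed and unexposed together
exposure_share = 0.37    # share of the test group that actually saw the ad

# Intention-to-treat effect: impact of being *eligible* to see the ad
itt = test_rate - control_rate       # 0.012 percentage points

# Average treatment effect on the treated: impact on those actually exposed
att = itt / exposure_share           # ~0.032 percentage points with these rounded inputs

print(f"ITT: {itt:.3f} p.p., ATT: {att:.3f} p.p.")
```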

While the theoretical framework behind this calculation is more elaborate, as can be seen in [3], the calculation itself using a randomized controlled trial is clearly not that complicated. So why is it that companies spend billions on online advertisements, yet do not seem interested in thoroughly researching the effects of that advertising? There are several reasons.

Results over reasoning

An important reason why companies often do not use randomized controlled trials to estimate advertising effects is, obviously, money. The opportunity costs of conducting such an experiment are large, since the company deliberately has to exclude part of its target group from exposure. Marketing teams need to hit their targets, so they are not eager to run these experiments.

Furthermore, the effect of advertising on sales is so small that even with a huge sample it remains difficult to get conclusive results; the margin of error is simply too big. If we take the accuracy of these experiments into account, the conclusion is often: “the null hypothesis that exposure to an advertisement has no causal effect on the conversion rate cannot be rejected”. Or, bluntly stated: “we do not know what the effect of this advertisement is”. Even though such a conclusion is certainly not a matter of incompetence, it is often interpreted that way. From a business perspective, these insights are not very valuable, because they do not tell you whether a campaign is profitable. Yet even if the effect of advertising is unknown, a company still needs to act, one way or the other. So marketers might as well go with their gut feeling and guess which advertising campaigns are going to have a positive effect.
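A rough back-of-the-envelope sketch shows how wide that margin of error is. The numbers below reuse the conversion rates from the Facebook example and assume, hypothetically, one million users per group:

```python
import math

# Hypothetical sample size, with the conversion rates from the example above (as fractions)
n_per_group = 1_000_000
control_rate = 0.00033
test_rate = 0.00045

lift = test_rate - control_rate   # 0.00012, i.e. 0.012 percentage points

# Standard error of the difference between two proportions
se = math.sqrt(
    control_rate * (1 - control_rate) / n_per_group
    + test_rate * (1 - test_rate) / n_per_group
)

margin = 1.96 * se  # half-width of a 95% confidence interval
print(f"Estimated lift: {lift * 100:.4f} p.p. "
      f"+/- {margin * 100:.4f} p.p. "
      f"({margin / lift:.0%} relative margin of error)")
```

Even with two million users in the experiment, the estimated lift comes with a margin of error of almost half its own size, and translating that lift into profit per advertising dollar stretches the uncertainty further.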

Lastly, there is a conflict of interest. Obviously, a company such as Nike benefits from knowing whether its advertising campaigns are profitable. However, Nike’s marketing division does not share that interest. The marketing division wants the largest possible budget, which is easier to obtain if it is “proven” that the advertising campaigns work wonders. Tadelis’ research got a lot of media attention when it was first published; nevertheless, only about ten percent of all marketers stopped using brand keyword advertising afterwards. They did not even start experimenting more to measure advertising effects correctly. Positive results make everyone happy, and who cares how they are obtained, right?

Long live the Mad Men

All in all, it appears that despite the shift to internet advertising and the introduction of marketing analytics, marketing remains a form of art rather than a science. As long as marketers keep looking for results that match their own views, the Math Men might as well be called Mad Men. Therefore, I would like to conclude this article with the following statement: the Mad Men are dead, long live the Mad Men.

March 20th, 2019

This article is heavily inspired by Dit is de nieuwe internetbubbel: online advertenties (“This is the new internet bubble: online advertising”) by Jesse Frederik and Maurits Martijn [2].

References

[1] — eMarketer, “eMarketer Releases New Global Media Ad Spending Estimates”, URL: https://www.emarketer.com/content/emarketer-total-media-ad-spending-worldwide-will-rise-7-4-in-2018 (2018)

[2] — Frederik, J., and Martijn, M., “Dit is de nieuwe internetbubbel: online advertenties”, De Correspondent (2019)

[3] — Gordon, B., Zettelmeyer, F., Bhargava, N., and Chapsky, D., “A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook”, SSRN Electronic Journal (2018)

[4] — Ignatius, A., “Advertising is an Art — and a Science”, Harvard Business Review (2013)

[5] — Lambert, B., “Report: How Modern Marketers Measure Advertising Effectiveness”, URL: https://blog.adstage.io/2017/07/11/measure-advertising-effectiveness (2017)

[6] — PricewaterhouseCoopers, “IAB internet advertising revenue report”, URL: https://www.iab.com/wp-content/uploads/2018/05/IAB-2017-Full-Year-Internet-Advertising-Revenue-Report.REV_.pdf (2017)

[7] — Statista, “Advertising revenue of Google from 2001 to 2018 (in billion U.S. dollars)”, URL: https://www.statista.com/statistics/266249/advertising-revenue-of-google/ (2019)

[8] — Statista, “Digital advertising spending worldwide from 2015 to 2020 (in billion U.S. dollars)”, URL: https://www.statista.com/statistics/237974/online-advertising-spending-worldwide/ (2019)

