
The Future Evolution of Marketing/Media Research

What will happen next in the advertising industry’s important research wing? Where is it all going? What will be the face of advertising/media/BI (Business Intelligence) research in 2015?

First, the drivers:

  • Decision makers want speed
  • They want answers to burning questions that specify the recommended decision with compelling rationale – so their job of taking that position and defending it will involve as little personal risk as possible;
  • They want all the variables and types of evidence reduced to utter simplicity – as in a well-designed graphic dashboard;
  • If they have a dashboard, they love to be able to play what-ifs with the recommended solution and see what happens to the graph, so that they truly do have a key role in the decision that gets made;
  • They need to be able to get their heads above all the weeds and up to where they can actually have a master vision – but the weeds are growing like a hydra, the weeds being the excess of nearly-relevant information.

In other words, when I started in the business we were looking desperately for any scrap of information and holding whatever we found to a high bar of validation; today, all too often, there is much too much information. One can have an assistant compile it all so one can scan it, but that’s about it. There is no way to actually absorb the ever-growing heap.

This new reality engenders a new way of functioning that is always high risk (as evidenced by CMOs being replaced every 23 months on average) and in which one has to operate like the Hollywood gunslingers – on gut intuition. Or as in the Hollywood story, where Columbia Pictures co-founder and head Harry Cohn could read the quality of a film based on watching butts twitch in seats.

So in other capitals across the country and around the world we have all adopted that methodology, except that we compile even more quantitative information as backup and proof of whatever it is our butts twitch to.

So the drivers have led, so far, to a relatively undesirable condition of rationalized guesswork. The researcher tries to work within this environment and to uplift it. Given researchers’ relative rank in organizations, they usually fail at this nowadays (if absolute success is the bar), though the result is still better than if no one were pushing that envelope.

The job going forward is to achieve absolute success by overturning the current rationalized guesswork mode and bringing in scientific decision making. What we already pretend to be doing.

Next, the needs:

  • Creative people need the kind of information conducive to generating Big Ideas;
  • Creative pre-tests need to be fast, highly predictive of actual cash register ROI, diagnostically rich and appropriate to being able to make quick fixes that will drive up ROI;
  • On-air cash register measurement of Creative, without the use of black box attribution methods, used to reallocate media so that the most sales-effective Creative executions run most if not all of the time;
  • Programming content needs the exact same kinds of pre-testing, except instead of brand advertiser ROI, the success metric is audience size weighted by the marketable CPM – once again devolving to a financial ROI equation;
  • Media (including in-store, CRM, place-based, social, and everything else) need to be measured in terms of how well they reach types of purchasers (heavy, disloyal, etc.) and how well they influence purchase behavior (this is even more important than measuring their reach overlaps since each one has to be bought separately);
  • Crossmedia reach overlaps and synergies need to be measured and validated, their changes tracked, and these information types baked in with all the other information so as to give the decision maker a simple integrated dashboard in which real (empirical), unmodeled, validated information has ultimate weight. The modeling (marketing mix and all other forms) needed to fuse everything together for the decision maker should be as validated and transparent (not black box) as possible, and should carry almost no weight in deciding which media vehicles to buy – whereas crossmedia overlaps and dollar ROI synergies are the most important factors in making the big planning allocations to media types. This unavoidable leaning of our weight on the modeling crutch is a soft spot to be studied and overcome;
  • All data and data fusion methods need to be validated against actual cash register ROI;
  • Data (and proposed decision) delivery from research to the line must be in the form of utter simplicity via dashboard where exec can play what-ifs and see how ROI forecast changes.
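To make the what-if idea in that last need concrete, here is a minimal sketch of the kind of calculation such a dashboard could run behind the scenes: the exec changes a media allocation and the ROI forecast recomputes. The media categories, effectiveness indices, and diminishing-returns response curve are all invented for illustration; a real system would derive them from the validated crossmedia and cash register ROI measurements described above.

```python
import math

# Hypothetical "sales return per dollar" indices by medium. These numbers
# are invented for illustration; a real dashboard would draw them from
# validated cash register ROI measurements.
EFFECTIVENESS = {"tv": 1.8, "digital": 2.2, "in_store": 1.4}

def roi_forecast(spend_by_medium):
    """Toy ROI forecast with diminishing returns (square-root response).

    A what-if on the dashboard is just a re-run of this function with a
    changed allocation, redrawing the forecast graph.
    """
    revenue = sum(
        eff * math.sqrt(spend_by_medium.get(medium, 0.0)) * 1000
        for medium, eff in EFFECTIVENESS.items()
    )
    cost = sum(spend_by_medium.values())
    return revenue - cost

# The exec's baseline plan, and a what-if shifting $100K from TV to digital.
base = {"tv": 500_000, "digital": 300_000, "in_store": 200_000}
what_if = {"tv": 400_000, "digital": 400_000, "in_store": 200_000}
print(roi_forecast(base), roi_forecast(what_if))
```

The point of the sketch is the interaction loop, not the response curve: any transparent, validated model could sit behind the same slider.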

Sounds pretty easy, doesn’t it? That was a joke of course.

Finally, the prognostications:

I hate to disappoint, but these really are more like prescriptions. The industry has taken some of my prescriptions in the past, but mixed with a heavy dose of countervailing competitive marketplace forces, which tends to change the outcome a bit away from the admittedly utopian picture I had painted of what could be done. So how can I accurately prognosticate what really will happen?

Here’s instead what I think should happen.

Creatives

Researchers need to do a much better job stoking the fires of the big minds to produce Big Ideas. The advertising business is about producing Big Ideas for money. The rest is just implementation.

By the Creatives I don’t just mean writers and art directors. Everyone is a Creative, to the extent that they are allowed to come up with and share Big Ideas. In some organizations, people are disempowered by not having their Big Ideas taken seriously – but these organizations are becoming more and more rare. Thank God.

Research presentation to Creatives – the people who need to make big planning decisions – has been, well, wanting – that’s probably the kindest word I can use.

People who make planning level decisions need all the information they can get about the people at the other end of the communications process who we are trying to influence. Right now they do get quite a bit. It does generate more insight than probably at any time in the past, including the phase of Motivational research. But it’s not yet enough, and it’s not absorbable and stimulating enough to the writers and art directors.

Instead of dashboards for the writers and artists, something like a ripomatic is used nowadays – both in selling new business and in pumping the Creative people. A ripomatic (or feelomatic, etc.) is a succession of clips – mostly video, a few still, with music – that tell the Creative about the target audience. One thing that could be added is the ability to drill down on one image or idea and get more information in the same emotive form on that facet of the picture – as in some of the early branching video CD-ROMs that IBM, BBC, British Telecom and others produced to show where video could go someday. There might be a dial where the Creative can slow down or speed up the images. And touchscreen or voice command to indicate what to drill down on.

Neuroscience should be able to show a picture of the target audience that is even more conducive to Big Ideas. Findings from neuroscience could be presented in the same video format to inspire the Creative – all findings can be pumped in through the Creative form of the same dashboard idea. Just to have a name I call it the Clashboard – the dashboard for Creative, which is branching video rather than Flash pages that remain static until one plays what-ifs.

The underlying historical reason for both the dashboard and the Clashboard is information overload. People in the advertising industry are no exception – we get even more information than the average person, and the average person is deluged. My book Freeing Creative Effectiveness is all about breaking out of EOP (Emergency Oversimplification Procedure), the condition that sets in when there is too much information – desperate shortcutting such as rationalized guesswork.

By focusing the eyes on a dashboard or Clashboard that is comprehensive and yet utterly simple, the mind can also begin to focus. All the information is in one place. There is no distraction of wondering where to get a missing piece of information – it is all there.

To be continued in next posting on April 24 – covering the Future of Media Research.

In my prior posting I reviewed Neuro-Insight as part of a series on validating our measures across the industry, with emphasis on cutting-edge new measures such as those in the neuroscience field. Next is a short posting by Chuck Young, CEO of Ameritest, a non-neuro copy testing company whose measures are nevertheless cutting edge and relate to the same mental function levels addressed by some in neuroscience. Researchers, if you have validated your measures, please send in your results and we will publish them here. We post every five days.

All the best,

Bill

3 Levels of Validation

In responding to Bill’s recent call for additional validation work on the new techniques of neuro-copy testers, I should point out that I share Bill’s enthusiasm for the new knowledge being generated by the exploding field of neuroscience.  But I also agree with the conclusion of the recent Advertising Research Report that neuroscience techniques should not be used as stand-alones, but in conjunction with the well-established self-report data currently used by mainstream copy testers.

At Ameritest we have been combining standard copytest metrics with our proprietary Picture Sorts® technique in a single on-line system for quite a while. And while some researchers might not categorize our diagnostic technique with the techniques that measure brain waves, skin conductance, heart rates or facial response, I would argue that our moment-by-moment measure of memory — even though it does not involve electrical apparatus — is just as important as attention and emotion for understanding how effective advertising works in the brain.

Moreover, our experience working with leading advertisers for many years has taught me that validation is not a one-dimensional construct.

Like the zoom lens of a camera, good copytesting research should be designed to help advertisers see how an ad is going to work when viewed over three different time scales:

  1. Short Term — predicting sales effects over a short-term period of a few weeks to a few months;
  2. Long Term — predicting an ad’s contribution to brand equity over the longer-term period of months to years;
  3. Up Close — diagnosing how an ad is actually working during the few seconds a consumer is interacting with it, in order to provide insights for optimization.

The Resources page of our website (www.Ameritest.net) is an open source for reporting the many experiments and studies we have conducted over the years to validate the effectiveness of our own ad research on all three levels of ad performance. To date, we have contributed over 60 articles and peer-reviewed papers to the ongoing research conversation. I hope that some of the experiments we have described might be useful as models for how neuro-researchers could approach the problem of validating the incremental value of some of the new technologies being applied to ad research.

To illustrate with one example of validation against short-term sales: Chapter VI of the Handbook of Advertising Research provides a case history of how standard copytesting measures of creative quality (Attention, Branding, Motivation), when combined with media information on share of voice, were able to explain over 60% of the change in same-store sales in the U.S. that McDonald’s reported publicly, to Wall Street, over the year-and-a-half period that was studied.
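As a toy sketch of that kind of model (invented data, a single combined predictor, and nothing resembling Ameritest’s actual methodology), one can regress period-by-period sales change on a creative-quality-times-share-of-voice index and read off the fraction of variance explained — the R-squared statistic, which is the same kind of figure as the “over 60%” cited above.

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: slope, intercept, R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return slope, intercept, r2

# Hypothetical monthly observations: combined creative-quality x
# share-of-voice index vs. % change in same-store sales.
creative_index = [0.8, 1.1, 1.5, 2.0, 2.4, 3.0]
sales_change = [0.5, 1.0, 1.6, 2.3, 2.4, 3.2]

slope, intercept, r2 = fit_line(creative_index, sales_change)
print(f"slope={slope:.2f}, R^2={r2:.2f}")
```

A real study would of course use multiple predictors and out-of-sample tests, but the validation logic — how much of the observed sales movement the copytest-plus-media model accounts for — is the same.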

As an example of validation to long-term brand equity, papers such as “Connecting Attention to Memory,” “Aesthetic Emotion and Long Term Ad Effects,” and “Why Ad Memories Fade” describe experiments that show how the short-term, moment-by-moment memory test that we employ in our system can be used to predict the four long-term brand memories that are laid down by the average thirty-second commercial.

Finally, as an example of how moment-by-moment diagnostics can be used to optimize the performance of commercials before putting them on air, you can read the “Spielberg Variables,” an article in the Harvard Business Review about how Unilever achieved an 87% success rate in improving average performers by re-editing and re-testing ads using the insights provided by our on-line picture sort diagnostics.

Test-retest would be a particularly fast and direct way of proving the added value of these new neuroscience and biometric techniques. In an age when a high school student with a laptop can do a creditable job of re-editing a commercial and uploading it to YouTube, I suggest that it might be useful for the ARF to sponsor a Challenge to copy testers in which they can prove the value of these new diagnostic insights by re-editing and re-testing some ads that have proven to be poor performers. A company like Bill’s TRA, which combines sales with media data, would be ideal for identifying a good set of ads to test.

Chuck Young