Tag Archives: Neuroscience

The Future Evolution of Marketing/Media Research

What will happen next in the advertising industry’s important research wing? Where is it all going? What will be the face of advertising/media/BI (Business Intelligence) research in 2015?

First, the drivers:

  • Decision makers want speed
  • They want answers to burning questions that specify the recommended decision with compelling rationale – so their job of taking that position and defending it will involve as little personal risk as possible;
  • They want all the variables and types of evidence reduced to utter simplicity – as in a well-designed graphic dashboard;
  • If they have a dashboard, they love to be able to play what-ifs with the recommended solution and see what happens to the graph, so that they truly do have a key role in the decision that gets made;
  • They need to be able to get their heads above all the weeds, up to where they can actually have a master vision – but the weeds are growing like a hydra, the weeds here being the excess of nearly relevant information.

In other words, when I started in the business we looked desperately for any scrap of information and then beat the heck out of it against a high bar of validation. Today, all too often, there is far too much information. One can have an assistant compile it all so one can scan it, but that's about it. There is no way to actually absorb the ever-growing heap.

This new reality engenders a new way of functioning that is always high risk (as evidenced by CMOs being replaced every 23 months on average) and in which one has to operate like the Hollywood gunslingers – on gut intuition. Or as in the Hollywood story, where Columbia Pictures co-founder and head Harry Cohn could read the quality of a film based on watching butts twitch in seats.

So, in other capitals across the country and around the world, we have all joined that methodology, except that we compile even more quantitative information as backup and proof of whatever it is our butts twitch to.

So far, then, the drivers have led to a relatively undesirable condition of rationalized guesswork. The researcher tries to work within this environment and to uplift it. Given the researcher's relative rank in most organizations, he or she usually fails at this nowadays (if absolute success is the bar), although the result is still better than if the researcher were not pushing that envelope.

The job going forward is to achieve absolute success by overturning the current rationalized-guesswork mode and bringing in scientific decision making – which is what we already pretend to be doing.

Next, the needs:

  • Creative people need the kind of information conducive to generating Big Ideas;
  • Creative pre-tests need to be fast, highly predictive of actual cash register ROI, diagnostically rich, and suited to making the quick fixes that will drive up ROI;
  • On-air cash register measurement of Creative, without black-box attribution methods, needs to be used to reallocate weight so that the most sales-effective Creative executions run most if not all of the time;
  • Programming content needs the exact same kinds of pre-testing, except instead of brand advertiser ROI, the success metric is audience size weighted by the marketable CPM – once again devolving to a financial ROI equation;
  • Media (including in-store, CRM, place-based, social, and everything else) need to be measured in terms of how well they reach types of purchasers (heavy, disloyal, etc.) and how well they influence purchase behavior (this is even more important than measuring their reach overlaps since each one has to be bought separately);
  • Crossmedia reach overlaps and synergies need to be measured and validated, their changes tracked, and these information types baked in with all the other information, so as to give the decision maker a simple integrated dashboard in which real (empirical, unmodeled, validated) information carries the ultimate weight. The modeling (marketing mix and all other forms) needed to fuse everything together for the decision maker should be as validated and transparent (not black box) as possible, and should carry almost no weight in deciding which media vehicles to buy – whereas crossmedia overlaps and dollar ROI synergies are the most important factors in making the big planning allocations to media types. This unavoidable leaning of our weight on the modeling crutch is a soft spot to be studied and overcome;
  • All data and data fusion methods need to be validated against actual cash register ROI;
  • Data (and proposed decision) delivery from research to the line must be utterly simple: a dashboard where the exec can play what-ifs and see how the ROI forecast changes (a minimal sketch of this what-if calculation follows the list).
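As a purely illustrative sketch (not any vendor's actual tool), here is the sort of calculation such a what-if dashboard might run behind the scenes: the executive changes the budget shares across media types, and the blended ROI forecast is recomputed from per-medium ROI estimates. All media names and numbers below are invented placeholders.

```python
# Minimal what-if sketch: recompute a blended ROI forecast when the planner
# reallocates budget shares across media types. The per-medium ROI numbers
# are invented placeholders, not validated data.

def blended_roi(allocation, roi_by_medium):
    """Weighted-average ROI forecast for a set of budget shares."""
    total = sum(allocation.values())
    return sum(share * roi_by_medium[medium]
               for medium, share in allocation.items()) / total

# Hypothetical ROI estimates (incremental sales dollars per media dollar).
roi_by_medium = {"tv": 1.8, "digital": 2.4, "in_store": 3.1, "print": 0.9}

baseline = {"tv": 0.55, "digital": 0.25, "in_store": 0.10, "print": 0.10}
what_if = {"tv": 0.40, "digital": 0.35, "in_store": 0.20, "print": 0.05}

print(f"Baseline plan ROI forecast: {blended_roi(baseline, roi_by_medium):.2f}")
print(f"What-if plan ROI forecast:  {blended_roi(what_if, roi_by_medium):.2f}")
```

A real system would of course also have to model crossmedia reach overlaps and dollar ROI synergies rather than treat each medium as independent, which is exactly the point of the crossmedia bullet above.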

Sounds pretty easy, doesn’t it? That was a joke of course.

Finally, the prognostications:

I hate to disappoint, but these really are more like prescriptions. The industry has taken some of my prescriptions in the past, but mixed them with a heavy dose of countervailing competitive marketplace forces, which tended to shift the outcome away from the admittedly utopian picture I had painted of what could be done. So how can I accurately prognosticate what really will happen?

Here’s instead what I think should happen.

Creatives

Researchers need to do a much better job stoking the fires of the big minds to produce Big Ideas. The advertising business is about producing Big Ideas for money. The rest is just implementation.

By the Creatives I don’t just mean writers and art directors. Everyone is a Creative, to the extent that they are allowed to come up with and share Big Ideas. In some organizations, people are disempowered by not having their Big Ideas taken seriously – but these organizations are becoming more and more rare. Thank God.

Research presentation to Creatives – the people who need to make the big planning decisions – has been, well, wanting; that is probably the kindest word I can use.

People who make planning level decisions need all the information they can get about the people at the other end of the communications process who we are trying to influence. Right now they do get quite a bit. It does generate more insight than probably at any time in the past, including the phase of Motivational research. But it’s not yet enough, and it’s not absorbable and stimulating enough to the writers and art directors.

Instead of dashboards for the writers and artists, something like a ripomatic is used nowadays – both in selling new business and in pumping the Creative people. A ripomatic (or feelomatic, etc.) is a succession of clips – mostly video, a few still, with music – that tell the Creative about the target audience. One thing that could be added is the ability to drill down on one image or idea and get more information in the same emotive form on that facet of the picture – as in some of the early branching video CD-ROMs that IBM, BBC, British Telecom and others produced to show where video could go someday. There might be a dial where the Creative can slow down or speed up the images. And touchscreen or voice command to indicate what to drill down on.

Neuroscience should be able to show a picture of the target audience that is even more conducive to Big Ideas. Findings from neuroscience could be presented in the same video format to inspire the Creative – all findings can be pumped in through the Creative form of the same dashboard idea. Just to have a name, I call it the Clashboard: the dashboard for Creatives, which is branching video rather than Flash pages that remain static until one plays what-ifs.
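To make the drill-down idea concrete, here is a toy sketch (my own illustration, not an existing product) of the branching structure a Clashboard implies: each clip about the target audience carries child clips that go deeper on one facet, and a touchscreen tap or voice command resolves to a drill-down. All names and file paths are hypothetical.

```python
# Toy sketch of a branching-video "Clashboard": each clip about the target
# audience can be drilled into for more detail on one facet of the picture.
# Titles, URIs, and content are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Clip:
    title: str                   # e.g. "Brand switching moments"
    media_uri: str               # pointer to the video/still asset
    playback_speed: float = 1.0  # the Creative's speed dial
    children: list = field(default_factory=list)

    def drill_down(self, facet):
        """Return the child clip whose title matches the requested facet, if any."""
        for child in self.children:
            if facet.lower() in child.title.lower():
                return child
        return None

root = Clip("Target audience overview", "clips/overview.mp4", children=[
    Clip("Morning routines", "clips/mornings.mp4"),
    Clip("Brand switching moments", "clips/switching.mp4"),
])

# A touchscreen tap or voice command would resolve to a drill-down like this:
detail = root.drill_down("switching")
print(detail.title if detail else "No deeper clip for that facet")
```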

The underlying historical reason for both the dashboard and the Clashboard is information overload. People in the advertising industry are no exception – we get even more information than the average person, and the average person is deluged. My book Freeing Creative Effectiveness is all about breaking out of EOP (Emergency Oversimplification Procedure), the condition that sets in when there is too much information: desperate shortcutting such as rationalized guesswork.

By focusing the eyes on a dashboard or Clashboard that is comprehensive and yet utterly simple, the mind can also begin to focus. All the information is in one place. There is no distraction of wondering where to find some missing piece of information – it is all there.

To be continued in next posting on April 24 – covering the Future of Media Research.

In my prior posting I reviewed Neuro-Insight as part of a series on validating our measures across the industry, with emphasis on cutting-edge new measures such as those in the neuroscience field. Next is a short posting by Chuck Young, CEO of Ameritest, a non-neuro copy testing company whose measures are nevertheless cutting edge and relate to the same levels of mental function addressed by some in neuroscience. Researchers, if you have validated your measures, please send them in and we will publish them here. We post every five days.

All the best,

Bill

3 Levels of Validation

In responding to Bill's recent call for additional validation work on the new techniques of neuro copy testers, I should point out that I share Bill's enthusiasm for the new knowledge being generated by the exploding field of neuroscience. But I also agree with the conclusion of the recent Advertising Research Foundation (ARF) report that neuroscience techniques should not be used as stand-alones, but in conjunction with the well-established self-report data currently used by mainstream copy testers.

At Ameritest we have been combining standard copytest metrics with our proprietary Picture Sorts® technique in a single online system for quite a while. And while some researchers might not categorize our diagnostic technique with the techniques that measure brain waves, skin conductance, heart rate or facial response, I would argue that our moment-by-moment measure of memory, even though it does not involve electrical apparatus, is just as important as attention and emotion for understanding how effective advertising works in the brain.

Moreover, our experience working with leading advertisers for many years has taught me that validation is not a one-dimensional construct.

Like the zoom lens of a camera, good copytesting research should be designed to help advertisers see how an ad is going to work when viewed over three different time scales:

  1. Short Term — predicting sales effects over a short-term period of a few weeks to a few months;
  2. Long Term — predicting an ad’s contribution to brand equity over the longer-term period of months to years;
  3. Up Close — diagnosing how an ad is actually working during the few seconds a consumer is interacting with it, in order to provide insights for optimization.

 

The Resources page of our website (www.Ameritest.net) is an open resource reporting the many experiments and studies we have conducted over the years to validate the effectiveness of our own ad research on all three levels of ad performance. To date, we have contributed over 60 articles and peer-reviewed papers to the ongoing research conversation. I hope that some of the experiments we have described might be useful as models for how neuro-researchers could approach the problem of validating the incremental value of some of the new technologies being applied to ad research.

To illustrate validation against sales over the short term, chapter VI of the Handbook of Advertising Research provides a case history of how standard copytesting measures of creative quality (Attention, Branding, Motivation), when combined with media information on share of voice, were able to explain over 60% of the change in same-store sales in the U.S. that McDonald's reported publicly to Wall Street over the year-and-a-half period that was studied.
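As a rough sketch of the kind of analysis behind a claim like "explained over 60% of the change in same-store sales" (this is not Ameritest's actual model, and every number below is invented), one could regress weekly sales change on a creative-quality score and share of voice, then report the R²:

```python
# Hedged sketch: regress same-store-sales change on a copytest score and
# share of voice, then report R^2 (share of variance explained).
# All values are synthetic, invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_weeks = 78  # roughly a year and a half of weekly observations

# Hypothetical predictors: composite creative-quality score and share of voice.
creative_quality = rng.normal(100, 15, n_weeks)
share_of_voice = rng.uniform(0.2, 0.6, n_weeks)

# Hypothetical outcome: % change in same-store sales, partly driven by the
# predictors plus noise the model cannot explain.
sales_change = (0.04 * creative_quality + 6.0 * share_of_voice
                + rng.normal(0, 2.0, n_weeks))

X = np.column_stack([np.ones(n_weeks), creative_quality, share_of_voice])
beta, *_ = np.linalg.lstsq(X, sales_change, rcond=None)

fitted = X @ beta
ss_res = np.sum((sales_change - fitted) ** 2)
ss_tot = np.sum((sales_change - sales_change.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.2f}")
```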

As an example of validation to long term brand equity, papers such as “Connecting Attention to Memory”, “Aesthetic Emotion and Long Term Ad Effects”, and “Why Ad Memories Fade,” describe experiments that show how the short term, moment-by-moment memory test that we employ in our system can be used to predict the four long term brand memories that are laid down by the average thirty second commercial.

Finally, as an example of how moment-by-moment diagnostics can be used to optimize the performance of commercials before putting them on air, you can read the “Spielberg Variables,” an article in the Harvard Business Review about how Unilever achieved an 87% success rate in improving average performers by re-editing and re-testing ads using the insights provided by our on-line picture sort diagnostics.

Test-retest would be a particularly fast and direct way of proving the added value of these new neuroscience and biometric techniques. In an age when a high school student with a laptop can do a creditable job of re-editing a commercial and uploading it to YouTube, I suggest that it might be useful for the ARF to sponsor a Challenge in which copy testers can prove the value of these new diagnostic insights by re-editing and re-testing some ads that have proven to be poor performers. A company like Bill's TRA, which combines sales with media data, would be ideal for identifying a good set of ads to test.

Chuck Young

 

Can We Sense The Extremisms In Our Own Culture?

We are what we become used to. Having become used to something, it is taken for granted. Then we don’t notice it any more.

All cultures have extremisms; that's what makes them cultures*. A perfectly balanced culture would be, by definition, boring. There could be no drama. Who would convene such a culture? Not human beings, certainly.

In the Cheyenne culture, courage and leadership are cultivated to the point that the individual is expected to stand against authority. This is their rite of passage. What is ours?

In what way is our culture extreme?

 

This is interactive; you can answer the question for yourself.

Rather than tell you what my view is, let me give you a clue. See if you can guess it – or better yet, see what you get when you cogitate the following riddle.

When and why did it become acceptable for there to be a “bug” – the channel’s logo, and sometimes text promoting other programs – over our TV shows?

What does that tell you about what (one of) our culture’s extremism(s) is?

What I get is that ours is such a mercantile culture that everything has to have a brand on it. We get branded as if with an iron when we pay to buy clothing that advertises some brand we may or may not care about. We should charge on a CPM basis. Finally, we had to put the brand on the TV screen itself, where it stays forever and is relieved only by commercials. This may increase commercial effectiveness and reduce program effectiveness accordingly.

Seriously, would Hollywood put a bug over its movies? Even in this mercantile culture, cinema remembers its roots: in drama one wants to immerse, suspend disbelief, and become the protagonist. The bug is a rude interruption to that self-pretend, and the bubble bursts or never forms. So we watch, to some measurable degree, less immersed than we would have been years ago.

For non-drama programming, the bug is to some extent less intrusive.

Biometrics should be easily able to detect this difference.

First Neuroscience Research Company to Submit Validation to BillHarveyBlog.com:

Neuro-Insight

What makes N-I different from all other suppliers is SST (Steady State Topography), the company’s own method, used worldwide today in cognitive neuroscience but in advertising research only by N-I. SST is a measure of neural processing speed at specific sites corresponding to parts of the brain, and metrics are calculated by indexing certain key relationships across sites – such as the SST relationship between the left and right prefrontal cortices, revealing approach-avoidance.

Because commercials involve split-second action, the otherwise superb fMRI (Functional Magnetic Resonance Imaging – similar to the MRI scans we get for medical purposes) technique is too slow to capture changes occurring in response to these fast-changing stimuli, leaving EEG and SST as the only choices. EEG uses electrodes just as SST does, but it captures different information: not neural processing speed but the size or magnitude of various EEG components, such as alpha activity.

For one thing, EEG is a noisy signal. Its low signal-to-noise ratio requires testing by repeating the commercial and then averaging results, ignoring the fact that what one then has is no longer the effect of one exposure. Surprise is no longer present in the repetitions. In Herb Krugman’s terms (Herb is a researcher famed for his work in advertising frequency), the subject after the first exposure is no longer asking What is that, but is now asking, What of it?

SST has a far higher signal-to-noise ratio than EEG, so one exposure is all the researchers need. The high-resolution, low-noise signal is also obviously ideal for research accuracy, while remaining insensitive to factors that can affect EEG such as head movements, muscle tension, blinks and eye movements.
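To see why a low signal-to-noise ratio forces the repetition that destroys single-exposure surprise, here is a small synthetic illustration (my own toy numbers, not EEG or SST data): averaging N independent noisy trials shrinks the noise by roughly the square root of N, so a low-SNR measure needs many repetitions to recover the signal that a high-SNR measure captures in one pass.

```python
# Illustration of why low-SNR measurements force trial averaging:
# averaging N independent noisy trials cuts noise by about sqrt(N).
# Purely synthetic numbers; not actual EEG or SST data.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 1000
signal = np.sin(np.linspace(0, 4 * np.pi, n_samples))  # the "true" response

def snr_after_averaging(n_trials, noise_std):
    """Empirical SNR when n_trials noisy recordings of the signal are averaged."""
    trials = signal + rng.normal(0, noise_std, (n_trials, n_samples))
    averaged = trials.mean(axis=0)
    noise = averaged - signal
    return signal.std() / noise.std()

print("Low-SNR measure, 1 exposure:   SNR ~", round(snr_after_averaging(1, 5.0), 2))
print("Low-SNR measure, 30 exposures: SNR ~", round(snr_after_averaging(30, 5.0), 2))
print("High-SNR measure, 1 exposure:  SNR ~", round(snr_after_averaging(1, 0.5), 2))
```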

At last year’s ARF Audience Measurement Symposium 5.0, I was serving on the ARF Program Committee and was selected to chair the session on Neuroscience.  Burt Manning, former Chairman/CEO of J. Walter Thompson and one of the industry’s great copywriters and thinkers, had introduced me to Dr. Richard Silberstein, founder/CEO of Neuro-Insight and Professor of Cognitive Neuroscience at Swinburne University in Melbourne. I invited Dr. Silberstein along with Innerscope’s Carl Marci, Sands Research’s Steve Sands, and CBS’s David Poltrack to become the neuroscience plenary panel for that symposium, moderated by Ameritest’s Chuck Young.

During that lively panel the neuroscientists all presented slides and Dr. Silberstein showed three case studies validating SST against sales, online traffic and correct product recall (financial services) respectively. In the most relevant sales case (Bird’s Eye frozen fish), the SST research suggested that a split-second change at a single point during the commercial caused a 130% increase in actual sales ROI.

Based on the extensive scientific validation evidence sent to me by N-I, and the cases shared at ARF last year, I would be interested as a researcher in using SST to help refine nearly-finished commercials before using them on air.

I hope more copy testers will come forward and send in their piles of evidence too; I will give them equal space here.

Best to all,

Bill

*Here’s one definition of “cultures” from Wikipedia: the distinct ways that people living in different parts of the world classified and represented their experiences, and acted creatively. I am characterizing “distinct” as “extreme”.

Where Will Neuroscience Make Its Greatest Contribution to Advertising?

At the recent Advertising Research Foundation (ARF) Re:THINK 2011 conference, ARF reported the results of its study of nine different suppliers’ tests of the same commercials. All nine suppliers utilized their own approach to the measurement of involuntary psychophysiological response to stimuli.

Later that day, two other suppliers who had decided against participation were probably patting themselves on the back for staying out of the study. Why? Because the report had the result of (slightly) dialing back what had been the industry’s excitement about these new tools. The general picture painted was: (1) there is still a lot of work to be done; (2) at least some of the suppliers had not done their homework to become better informed about the test campaigns themselves; and (3) counter to expectation, these practitioners in general appeared to be less rather than more scientific than the existing state of the art in copy testing.

The folks at ARF certainly didn’t set out to pour cold water – they went into this with high enthusiasm about the promise of neuroscience for advertising. What happened?

Perhaps the problem was that the ARF, in order to gain cooperation, promised not to identify the pros/cons of individual suppliers. This protocol had worked well for the Council on Research Excellence (CRE) in their study of set top box (STB) data/analysis suppliers last year, which probably would not have gained enough cooperation to go forward otherwise.

Now the learning experience for industry leadership is that composite supplier description/evaluation is a technique that must be carefully adapted on a case-by-case basis. In fact, the key difference between the two studies is that CRE did not cross the line from description into evaluation, whereas ARF did cross that line.

Possibly this was because the STB data analysis companies were more willing to disclose their techniques than the neuroscientists were. Perhaps ARF felt there would be nothing to report without evaluation, since in-depth technique description was not available. (Although I know of one supplier that provided 40 pages of such documentation.)

Today’s blog posting is motivated by the desire to see no slowdown in the development of the neuroscience field, for the advertising industry and in general. Some years ago we did some advertising neuroscience of our own in company with Dr. Richard Davidson, today one of the most respected and quoted neuroscientists in the world, and Dr. Daniel Goleman, best known for his best-selling book series on emotional intelligence, a term he popularized. That work convinced this writer that neuroscience can be of great value in advertising and media.

For example, in the research Drs. Davidson, Goleman and I conducted, we succeeded in using neuroscience to solve a conundrum that had baffled a leading drug company for years:

One of their big-spending TV over-the-counter brands had run a commercial years earlier that rang the bell so strongly there was no denying it had caused a substantial sales increase. For years, the agency tried to replicate the results with new commercials but never succeeded.

Neuroscience, however, was able to identify why the commercial was so effective, with such clarity that the agency was able to create a new commercial nearly as sales effective as its progenitor.

This case study is instructive in terms of how to derive greatest value from neuroscience in the context of advertising: instead of using biometrics to evaluate the power of a commercial, we used it to dissect the reason for a commercial’s power.

In other words, we used neuroscience diagnostically rather than evaluatively.

Instead of trying to answer the question “How well does it work?” we set out to answer a different question: “How (or why) does it work?”

Which is not to say that neuroscience cannot be used both ways, just that it’s possible the greatest increase in knowledge might come diagnostically. This is at least something worth looking into.

In the case just described, part of how the commercial worked is that it created the brain signature of the pain state in the viewer. By then segueing to a shot of the product package and the use of the product ending with a pain-free actor, the commercial ended with removal of the pain signature in the viewer’s brain.

Hence the viewer when next in the real pain state would subconsciously remember the product that removed the pain state. Classic problem-solution at the involuntary level rather than at the rational level.

So what is the generalizable clue? The concept of brain signatures for more complex states.

What if we as an industry are able to become aware of the brain signatures of brand gratitude, brand affinity, persuasion, purchase intent – signatures that can be validated against the same person’s change in brand purchase behavior?

What if we can also learn the brain signatures of specific blocks to a commercial’s success, such as lack of comprehension, disbelief, and distrust?
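If such signatures existed, validating one would reduce to something like the following hedged sketch: for a panel of respondents, correlate a hypothetical per-person signature score (say, brand affinity) with that same person's subsequent change in purchase behavior. Every value below is invented for illustration.

```python
# Hedged sketch of validating a hypothetical "brand affinity" brain signature
# against the same person's change in brand purchase behavior.
# All values are invented for illustration only.
import numpy as np

rng = np.random.default_rng(7)
n_people = 200

# Hypothetical signature score per respondent (arbitrary units).
affinity_score = rng.normal(0, 1, n_people)

# Hypothetical change in units purchased after exposure, loosely related
# to the signature plus behavior the signature cannot explain.
purchase_change = 0.5 * affinity_score + rng.normal(0, 1, n_people)

r = np.corrcoef(affinity_score, purchase_change)[0, 1]
print(f"Correlation between signature and purchase change: r = {r:.2f}")
```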

Neuroscience commercial testers are using the concept of brain signatures, but many seem to be stopping at purely evaluative signatures such as attention, arousal, and approach/avoidance, rather than the more complex diagnostic signatures suggested above, which tell more about why a commercial is or is not working.

In the interest of perhaps making a modest contribution to industry knowledge, and to supplement ARF’s composite report, we will provide a venue in upcoming blog postings for any interested neuro (and non-neuro) copy testers to communicate their validation work, which we will present with individual supplier identification and our own editorial commentary.

 

Briefly Observed News in the Media

  • On April 4, in an interview regarding Libya on Fox News, Dr. Henry Kissinger enunciated his recommended policy for US intervention in such situations. Because US resources are not infinite and are already overstretched in Iraq and Afghanistan, he proposed that the US only become involved in other countries that meet both of the following two criteria:

    • Humanitarian concerns, e.g. people being killed by their own government
    • US national strategic interests
  • On April 5, the media reported that, because of the situation in Japan, officials are considering expanding the emergency evacuation zone around the Indian Point nuclear power facility from ten miles to fifty miles – which would mean having to evacuate New York City. (Need I say more?)
  • Also on April 5, it was reported that Muammar Gaddafi is considering a deal to step down. Miraculously, he reached out to former Pennsylvania Congressman Curt Weldon, one American he trusted (we have written about the importance of trust before), who flew into Libya to meet with him. Weldon is the American with whom Gaddafi has spent more time than any other. I have heard it said that one person does not matter, but obviously that is not always the case. In the words of John Fitzgerald Kennedy, “One person can make a difference, and everyone should try.”

 

Best to all,

Bill