A Schooling in Polls: A Q&A with Strategic Research Professor Anne Danehy

As a professor of research at the College of Communication, Anne Danehy has shaped her curriculum around her impressive career in strategic opinion research. Danehy has consulted across a wide range of fields, from public affairs and public health to, later in her career, advertising and public relations firms. She came to BU as a full-time professor in 2013 to teach students the importance of strategic research in framing persuasive messaging.

 

Communication Research Professor Anne Danehy sheds light on her experience in political polling and strategic opinion research.

 

Nicole Toppino: As the President of your own company, Strategic Opinion Research Inc., can you share your perspective on the importance of research in aiding campaign strategy for public policy, advertising and PR firms?

Anne Danehy: It’s important to understand not only what people think about your product or your candidacy, but also what motivates them to vote for you or buy your product. By doing research, you can get inside people’s heads to understand what their motivations are, what their pain points are, and what is going to influence them.

 

NT: What inspired you to start teaching Mass Communication Research at BU?

AD: I’ve always been fortunate to feel like I’ve done meaningful work that’s made a difference and that has been great. But the more I started teaching, the more I felt like I was really making a difference and having more impact.

 

NT: With all of your experience in political polling, what is your stance on the failure of the polling results in the 2016 election? Are the polls featured on news networks really accurate?

AD: Everybody poo-poos traditional phone surveys, but they are still widely used in public opinion research because they rely on probability sampling: when you’re polling the general public, everybody has an equal chance of being selected. A good phone survey is very, very expensive now because we have really low response rates, and we need to find ways to increase that response rate.

So, then we’re seeing kind of a transition to online polls. It’s still a non-probability sampling methodology. So, technically, you really can’t calculate your margin of error, which is kind of a problem in terms of knowing how those results would differ if you’d interviewed everybody.

But at the end of the day, for the polls that had varying results, it had less to do with the data collection method and more to do with who they were sampling and how they were defining a ‘voter’ or a ‘likely voter.’

 

NT: How would you advise the American public in reading polls?

AD: Look at the sample size. Look at the margin of error. Look at exactly how the questions were worded. Look at how the sample was designed: what type it was, how they sampled, and how they defined their sampling unit. If it’s likely voters, what was the definition of a ‘likely voter’? And what was the specific sampling method: was it probability or non-probability sampling?
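As a back-of-the-envelope illustration of how sample size relates to the margin of error, here is a minimal sketch for a simple random (probability) sample, assuming the standard 95% confidence level and the worst-case 50/50 split:

```python
import math

def margin_of_error(sample_size: int, proportion: float = 0.5, z: float = 1.96) -> float:
    """Approximate margin of error for a simple random (probability) sample.

    Assumes a 95% confidence level (z = 1.96) and, by default, the
    worst-case proportion of 0.5, which gives the widest interval.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A typical national poll of about 1,000 respondents:
print(f"{margin_of_error(1000) * 100:.1f} points")  # roughly 3.1 percentage points
```

Note that this calculation only applies to probability samples; as Danehy points out above, a margin of error technically can’t be computed for non-probability online polls.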

 

NT: Can you think of a project where you felt like your research made the most impact?

AD: I was contacted by the Department of Mental Health, and they said, “We want to do some research with our clients who are dealing with severe mental illness.”

So, we conducted twelve focus groups across the state. As a result, they ended up doing a physician training program about the problems people with mental health issues were dealing with, and how they needed to be treated respectfully.

We can make some progress. I felt like, you know what, that actually made such a difference.

 

NT: You’ve conducted focus groups with many sub-groups. How do you prepare and make sure you don’t offend anyone?

AD: I think it’s honestly just about respecting people as individuals. I’ve done focus groups probably with more diverse populations than anyone—active drug users, active heroin users, people who suffer from severe mental illness, very low-income populations, homeless people, etc.

The commonality is that they’re all human, and the issues that somebody who might be different demographically experiences are some of the same exact issues that we all experience.

 

NT: When your research reveals negative perceptions about your client, how does relaying that information work?

AD: It’s very awkward. It’s really uncomfortable when you have to do this. But I’ve gotten to the point where I’m like, this is what they think of you.

I’ve worked for candidates who really don’t stand for anything, and I’m like, they’re going to lose, there’s nothing here. If you don’t have substance—if you don’t have a good product or you don’t have a substantive candidate—it doesn’t matter if you have all the window dressing. You can have the best ad, but it’s not going to work.

 

NT: Do you have any predictions on where the field of strategic research is heading? What developments can we expect to see?

AD: I think that we’re in an era where we’re experiencing so many technological changes that we just need to make sure that we continually adapt to those changes, while being able to still conduct really good quality research.

Is polling still going to be as useful ten years down the road? I don’t know.

There are new technologies and new research methodologies coming out that aim to gain deeper insight beyond just the political polls: the kind of in-depth information that lets you really understand how people are feeling about issues and how they’re responding to messages.

 

 

Nicole Toppino, Staff Writer

Nicole Toppino is a senior majoring in advertising at BU’s College of Communication. Originally from Los Angeles, Nicole enjoys traveling, loves nothing more than a good plate of pasta, and has a low-key-high-key obsession with the Great British Bake Off.
