
Surveys


This survey Hertz: lack of thought

I recently took a survey sponsored by the Hertz Corporation, intended to assess the appeal of several new approaches to services. This post discusses some of the problems I found, and why you should avoid creating your surveys like this one.

After asking about the number of times I had rented a vehicle, for what purpose, what type, and from which company, they asked how important price was to me when deciding which company to rent from the last time.

This is a pretty good question.

[Screenshot: PriceFactor2 – the price importance question]

Now they have an anchor for later questions about price. They know how important price was to me for this last rental. Note that the importance of price (or any other factor, come to that) is situational, certainly for me. The last car I rented prior to taking the survey was for a vacation on Maui for a family wedding. I didn’t want a fancy car, or at least I didn’t feel I could justify one for this trip – the wedding and all the fun things to do on Maui were going to be pretty expensive, and I expected to be driving on some substandard roads. I needed something flexible enough to take other members of the party – I expected people to be juggling activities. So I chose a four-door.

The next section of the survey covered the importance of various features, the mileage, and the condition of the vehicle. One question asked about the importance of general areas such as performance, cleanliness, audio features, fuel efficiency.
[Screenshot: FeaturesImportance2 – the feature importance question]

Most of the question options made sense to me, but mileage on the vehicle doesn’t seem like something that should be included in the list. From my perspective, the mileage is going to relate to the physical condition and the cleanliness of the vehicle. When I pick up the rental car the current mileage is just a minor item, and perhaps something to note for the contract, especially if there is a mileage limit on my rental. Perhaps I should be paying more attention, but I don’t remember ever thinking about it as I signed a contract. Perhaps somebody pointed out that this was a low mileage vehicle – I don’t remember. I guess I always expect a fairly low mileage on a rental car, especially from one of the major companies. Anyway, I wasn’t too surprised when I saw the first question that included the importance of mileage, although it struck me as a little odd in the same way I’m describing here. The next question asked me to rank the top three options of the nine that were provided previously; this still made sense.

Things went downhill from here. The next question asked for the maximum number of miles on a rental car that I would find acceptable and still be satisfied. Here’s the question:
[Screenshot: AcceptableMileage – the maximum acceptable mileage question]
Puzzled as I was by the notion that I could come up with an answer (note that the question text stresses “realistic”), I tried to enter “I don’t know” into the box in the forlorn hope that it would be accepted, despite the fact that the instructions read – please enter a whole number. My attempt generated an error message, repeating the directive, this time in bold red – Please specify a whole number. There was also a red warning at the top of the screen telling me that I needed to follow the instructions. These validation messages weren’t too surprising, but I was disappointed to find that I couldn’t indicate my real feelings. I next tried to enter “0” into the box. This generated a different error message – Sorry, but the value must be at least 100. I think this was the point at which I realized that this survey was going to provide material for an article. Expecting further fun and games, I decided not to waste too much more time on this one question. I entered 10,000 and was allowed to proceed to the next question.

Lesson 1. What’s the worst thing about this little battle with the question? The data they got from me is rubbish. If enough other people are equally uncaring, or like me have no realistic idea of the mileage that would be satisfactory, Hertz is making decisions on a very shaky foundation. Read on, it gets worse.

The introduction to the next section describes what I’ll see next – “…several scenarios – understanding your opinion when renting a vehicle. Please think about what makes a car rental experience enjoyable.” This sounded pretty good, but it turned out that they just wanted to torture me on the mileage issue in a different way.
[Screenshot: $270_60000 – the scenario question about a $270 rental with more than 60,000 miles]
“Uh oh”, went through my mind. “I wonder how they’ll play this out? Are we going to be negotiating on the mileage?” Yes that’s exactly what happened. I responded “Not very acceptable” to the first round of this question. I don’t really know whether it’s acceptable or not, but I’m pretty sure Hertz wants me to believe that it isn’t. I was actually hoping that I would just get a single question on the subject, but that’s not how it worked. The next question was exactly the same wording except that “had more than 60,000 miles on it” was replaced by “had 50,000 miles on it”. There was an additional instruction too – Please note that the question remains the same, but the NUMBER OF MILES ABOVE has changed. Isn’t there some form of torture based on telling the victim what’s going to happen? But now I’m curious and I want to see how long this can go on. 60,000, 50,000, 40,000, 35,000, 25,000, 20,000, 15,000, 10,000. I answered “Not very acceptable” every time. At 5,000 – yes, that’s the ninth repeat – I chose “Somewhat acceptable” and was allowed to move on to the next torture chamber.

Lesson 2. Why doesn’t this repetitive approach work? For one thing, it’s boring. Even if someone has a realistic idea of a good number (perhaps a rental car should have mileage as low as the regularly replaced vehicle at home), they still have to go through the whole performance to reach the acceptable number. And it’s a negotiation – “how low a mileage can I get for the same $270?” This is where annoyance and fatigue are going to build up. Bad data, an increased likelihood of dropping out, and a reduced likelihood of achieving representative results.

Idiosyncratically,

Mike Pritchard

 

Filed Under: Methodology, Pricing, Surveys


IT terminology applied to surveys

James Murray is principal of Seattle IT Edge, a strategic consultancy that melds the technology of IT with the business issues that drive IT solutions. When James gave me a list of things that are central for IT professionals, I thought it might be fun (and hopefully useful) to connect these terms with online surveys for market research.

[Warning: if you are a technical type interested in surveys, you might find this interesting. But if you aren’t in that category, I won’t be offended if you stop reading.]

Scalability

The obvious interpretation of scalability for IT applies to online surveys too. Make sure the survey tool you use is capable of handling the current and predicted usage for your online surveys.

  • If you use an SaaS service, such as SurveyGizmo or QuestionPro, does your subscription level allow you to collect enough completed surveys? This isn’t likely to be an issue if you host your own surveys (perhaps with an open-source tool like Lime Survey) as you’ll have your own database.
  • Do you have enough bandwidth to deliver the survey pages, including any images, audio or video that you need? Bandwidth may be more of an issue with self-hosted surveys. Bandwidth might fit more into availability, but in any case think about how your needs may change and whether that would impact your choice of tools.
  • How many invitations can you send out? This applies when you use a list (perhaps a customer list or CRM database), but isn’t going to matter when you use an online panel or other invitation method. There are benefits to sending invitations through a survey service (including easy tracking for reminders), but there may be a limit on the number of invitations you can send out per month, depending on your subscription level. You can use a separate mailing service (iContact for example), and some are closely integrated with the survey tool. Perhaps the owner of the customer list wants to send out the invitations, in which case the volume is their concern but you’ll have to worry about integration. Most market researchers should be concentrating on the survey, so setting up their own mail server isn’t the right approach; leave it to the specialists to worry about blacklisting and SPF records.
  • Do you have enough staff (in your company or your vendors) to build and support your surveys? That’s one reason why 5 Circles Research uses survey services for most of our work. Dedicated (in both senses) support teams make sure we can deliver on time, and we know that they’ll increase staff as needed.

Perhaps it’s a stretch, but I’d also like to mention scales for research. Should you use a 5-point, 7-point, 10-point or 11-point scale? Are the scales fully anchored (definitely disagree, somewhat disagree, neutral, somewhat agree, definitely agree)? Or do you just anchor the end points? IT professionals are numbers oriented, so this is just a reminder to consider your scales. There is plenty of literature on the topic, but few definitive answers.

Usability

Usability is a hot topic for online surveys right now. Researchers agree that making surveys clear and engaging is beneficial to gathering good data that supports high quality insights. However, there isn’t all that much agreement on some of the newer approaches. This is a huge area, so here are just a few points for consideration:

  • Shorter surveys are (almost) always better. The longer a survey takes, the less likely it is to yield good results. People drop out before the end or give less thoughtful responses (lie?) just to get through the survey. The only reason for the “almost” qualifier is that sometimes survey administrators send out multiple surveys because they didn’t include some key questions originally. But the reverse is the problem in most cases. Often the survey is overloaded with extra questions that aren’t relevant to the study.
  • Be respectful of the survey taker. Explain what the survey is all about, and why they are helping you. Tell them how long it will take – really! Give them context for where they are, both in the form of textual cues, and also if possible with progress bars (but watch out for confusing progress bars that don’t really reflect reality). Use survey logic and piping to simplify and shorten the survey; if someone says they aren’t using Windows, they probably shouldn’t see questions about System Restore (a minimal sketch of this kind of branching follows this list).
  • Take enough time to develop and test questions that are appropriate for the audience and the topic. This isn’t just a matter of using survey logic, but writing the questionnaire correctly in the first place. Although online survey data collection is faster than telephone, it takes longer to develop the questionnaire and test.
  • Gamification of surveys is much talked about, but not usually done well. For a practical, business-oriented survey taker, questions that aren’t as straightforward may be a deterrent. On the other hand, a gaming audience may greatly appreciate a survey that appears more attuned to them. Beyond the scope of this article, some research is being conducted within games themselves.
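To make the survey logic point concrete, here is a minimal sketch in Python. It is purely illustrative – real survey tools express this through their own branching rules, and the question IDs below are invented:

    # Minimal sketch of skip logic: only ask the System Restore question when an
    # earlier answer makes it relevant. Question IDs are hypothetical.
    def next_questions(answers):
        questions = ["q_overall_satisfaction"]
        if answers.get("q_operating_system") == "Windows":
            questions.append("q_system_restore_usage")  # hidden from non-Windows users
        questions.append("q_final_comments")
        return questions

    print(next_questions({"q_operating_system": "Mac OS"}))
    # ['q_overall_satisfaction', 'q_final_comments']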

Reliability

One aspect of reliability is uptime of the server hosting the survey tool. Perhaps more relevant to survey research are matters related to survey and questionnaire design:

  • Representativeness of the sample within the target population is important for quality results, but the target depends on the purpose of the research. If you want to find out if a new version of your product will appeal to a new set of prospects, you can’t just survey customers. An online panel sample is generally regarded as representative of the market.
  • How you invite people to take the survey also affects how representative the sample is. Self selection bias is a common issue; an invitation posted on the website is unlikely to work well for a general survey, but may have some value if you just need to hear from those with problems. Survey invitations via email are generally more representative, but poor writing can destroy the benefit.
  • As well as who you include and how you invite them, the number of participants is important. Assuming other requirements are met, a sample of 400 yields results that are within ±5% at 95% reliability. The confidence interval (±5%) means that the results from the sample will be within that range of the true population’s results. For the numerically oriented, that’s a worst-case number, true for a midpoint response; statistical testing takes this into account. The reliability number (95%) means that 19 out of 20 samples would produce results within that range. You can play with the sample size, or accept different levels of confidence and reliability. For example, a business survey may use a sample of 200 (for cost reasons) that yields results within ±7% at 95% reliability (a small calculation sketch follows this list).
  • Another aspect of reliability comes from the questionnaire design. This is a deep and complex subject, but for now let’s just keep it high-level. Make sure that the question text reflects the object of the question, and that the options are mutually exclusive, single thoughts, and exhaustive (with don’t know, none of the above, or other/specify as appropriate).
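For readers who like to check the arithmetic, here is a small sketch of the worst-case margin of error calculation behind those sample size figures, assuming simple random sampling and the usual normal approximation:

    from math import sqrt

    def margin_of_error(n, p=0.5, z=1.96):
        # Worst-case margin of error for a proportion p from a sample of n,
        # at the confidence level implied by z (1.96 is roughly 95%).
        return z * sqrt(p * (1 - p) / n)

    for n in (400, 200):
        print(f"n={n}: +/-{margin_of_error(n):.1%}")
    # n=400: +/-4.9%   n=200: +/-6.9%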

Security

Considerations for survey security are similar to those for general IT security, with a couple of extra twists.

  • Is your data secure on the server? Does your provider (or do you if you are hosting your own surveys) take appropriate precautions to make sure that the data is backed up properly and is guarded against being hacked into?
  • Does the connection between the survey taker and the survey tool need to be protected? Most surveys use HTTP, but SSL capabilities are available for most survey tools.
  • Are you taking the appropriate measures to minimize survey fraud (ballot stuffing)? What’s needed varies with the type of survey and invitation process, but can include cookies, personalized invitations, and password protection (a minimal sketch of the personalized-invitation approach follows this list).
  • Are you handling the data properly once exported from the survey tool? You need to be concerned with overall data in the same way that the survey tool vendor does. But you also need to look after personally identifiable information (PII) if you are capturing any. You may have PII from the customer list you used for invitations, or you may be asking for this information for a sweepstake. If the survey is for research purposes, ethical standards require that this private information is not misused. ESOMAR’s policy is simple – Market researchers shall never allow personal data they collect in a market research project to be used for any purpose other than market research. This typically means eliminating these fields from the file supplied to the client. If the project has a dual purpose, and the survey taker is offered the opportunity for follow up, this fact must be made clear.
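As an illustration of the personalized-invitation idea mentioned above, here is a minimal sketch. It is not any particular vendor’s implementation; the URL and function names are invented, and in practice the survey tool issues and checks the tokens for you:

    import secrets

    tokens = {}   # token -> invitee email, recorded when the invitation is sent
    used = set()  # tokens that have already been used to submit a response

    def issue_invitation(email):
        token = secrets.token_urlsafe(16)
        tokens[token] = email
        return f"https://survey.example.com/s/customer2011?t={token}"

    def accept_response(token):
        # Unknown or reused tokens are rejected - a basic guard against ballot stuffing
        if token not in tokens or token in used:
            return False
        used.add(token)
        return True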

Availability

No longer being involved in engineering, I’d have to scratch my head for the distinction between availability and reliability. But as this is about IT terms as they apply to surveys, let’s just consider making surveys available to the people you want to survey.

  • Be careful about question types that may work well on one platform and not another, or may not be consistently understood by the audience. For example, drag and drop ranking questions look good and have a little extra zing, but are problematic on smart phones. Do you tell the survey taker to try again from a different platform (assuming your tool detects properly), or use a simpler question type? This issue also relates to accessibility (section 508 of the Rehabilitation Act, or the British Disability Discrimination Act). Can a screen reader deal with the question types?
  • Regardless of question types, it is probably important to make sure that your survey is going to look reasonable on different devices and browsers. More and more surveys are being filled out on smartphones and iPads. Take care with fancier look and feel elements that aren’t interoperable across browsers. These days you probably don’t have to worry too much about people who don’t have JavaScript available or turned on, but Flash could still be an issue. For most of the surveys we run, Flash video isn’t needed, and in any case isn’t widely supported on mobile devices. HTML5 or other alternatives are becoming more commonly used.
  • Rather than relying on web surveys accessed from whatever compatible mobile device is at hand, consider other approaches to surveying. I’m not a proponent of SMS surveys; they are too limited, need multiple transactions, and may cost the survey taker money. But downloaded surveys on an iPad or smartphone have their place in situations where the survey taker isn’t connected to the internet.

I hope that these pointers are meaningful for the IT professional, even with the liberties I’ve taken. There is plenty of information available on each of these topics, and as you can tell, just as in the IT world, there are reasons to get help from a research professional. Let me know what you think!

Idiosyncratically,

Mike Pritchard

Filed Under: Fun, Methodology, Surveys, SurveyTip


Top holiday business activities

We asked entrepreneurs, consultants and small business owners how they were spending their time over the holiday period.

[Chart: Top Holiday Business Activities – 2010 year end]

The question asked about the TOP activity, so people needed to prioritize. The most popular answers were “planning next year” and “delivering to customers”, recognizing both looking forward and (presumably) the need to complete tasks. It would be interesting to see if planning is as popular at a time of year when New Year isn’t a factor. Reviewing last year wasn’t as common a response. Perhaps people are doing continual reviews (I doubt it), or more likely they have recognized the need and the opportunity for bigger shifts, and looking back isn’t as relevant.

An expert in collaborative strategy planning, Robert Nitschke of Arago Partners, tells me that many companies take until the end of Q1 to complete their strategic plan for the year. When will yours be done?

Idiosyncratically,
Mike Pritchard

Filed Under: Fun, Surveys Tagged With: QuickPoll, Surveys


Impact of cell phones on the 2010 Midterms – and beyond politics

Whether you are a political junkie or not, recent articles and analysis about mobile phones as part of data collection should be of interest to those who design or commission survey research. Cost, bias, and predictability are key issues.

In years gone by, cell phone users were rarely included in surveys. There was uncertainty about the likely reaction of potential respondents (“why are you calling me on my mobile when I have to pay for incoming calls?”, “is this legal?”). Although even early on surveyors were nervous about introducing bias by not including younger age groups, studies showed only insignificant differences beyond those associated with technology. When cell-phone-only households made up only 7% of the population, researchers tended to ignore them. Besides, surveying via cell phone cost more, due to requirements that auto-dialing techniques couldn’t be used, increased rejection rates, the need to compensate survey takers for their costs, and also a need for additional screening to reduce the likelihood of someone taking the survey from an unsafe place. Pew Research Center’s landmark 2006 study focused on cell phone usage and related attitudes, but also showed that the Hispanic population was more likely to be cell phone only.

Over the course of the next couple of years, Pew conducted several studies (e.g. http://people-press.org/report/391/the-impact-of-cell-onlys-on-public-opinion-polling ) showing that there was little difference in political attitudes between samples using landlines only and those using cell phones. At the same time, Pew pointed out that other non-political attitudes and behaviors (such as health risk behaviors) differed between the two groups. They also noted that cell-phone-only households had reached 14% in December 2007. Furthermore, while acknowledging the impact of cost, Pew studies also commented on the value of including cell phone sampling in order to reach certain segments of the population (low income, younger); see What’s Missing from National RDD Surveys? The Impact of the Growing Cell-Only Population.

Time marches on. Not surprisingly given the studies above, cell phone sample is now being included in more and more research. With cell-phone-only households now estimated at upwards of 25%, this increasingly makes sense. But not, apparently, for most political polls, despite criticism. The Economist, in an article of October 7, 2010 ( http://www.economist.com/node/17202427 ), summarizes the issues well. Cost of course is one factor, but it impacts different polling firms and types differently. Pollsters relying on robocalling (O.K., IVR or Interactive Voice Response if you don’t want to associate these types of polls with assuredly partisan phone calls) are particularly affected by cost considerations. Jay Leve of SurveyUSA estimates costs would double for firms to change from automated calling to the human interviewers that would be needed to call cell phones. And as the percentage of cell-phone-only households varies across states, predictability is even less likely. I suspect that much of this is factored into Nate Silver’s assessments on his FiveThirtyEight blog, but he is also critical of the pollsters for introducing bias (http://fivethirtyeight.blogs.nytimes.com/2010/10/28/robopolls-significantly-more-favorable-to-republicans-than-traditional-surveys/ ). Silver holds Rasmussen up as having a Republican bias due to their methodology, and recently contrasted Rasmussen results here in Washington State with Elway (a local pollster using human interviewers), who has a Democratic bias according to FiveThirtyEight.

I’ve only scratched the surface of the discussion. We are finally seeing some pollsters incorporating cell phones into previously completely automated polls and this trend will inevitably increase as respondents are increasingly difficult to reach via landlines. Perhaps the laws will change to allow automated connections to cell phones, but I don’t see this in the near future given the recent spate of laws to deter use while driving.

But enough of politics. I’m fed up with all the calls (mostly push, only a few surveys) because apparently my VOIP phone still counts as a landline. Still, I look forward to dissecting the impact of cell phones after the dust has settled from November 2nd.

What’s the impact for researchers beyond the political arena?

  • If your survey needs a telephone data collection sample for general population, you’d better consider including cell phone users despite the increased cost. Perhaps you can use a small sample to assess bias or representativeness, but weighting alone will leave unanswered questions without some current or recent data for comparison.
  • Perhaps it’s time to use online data collection for all or part of your sample. Online (whether invitations are conducted through panels, river sampling, or social media) may be a better way to reach most of the cell phone only people. Yes, it’s true that the online population doesn’t completely mirror the overall population, but differences are decreasing and it may not matter much for your specific topic. Recent studies I’ve conducted confirm that online panelists aren’t all higher income, broadband connected, younger people. To be sure, certain groups are less likely to be online, but specialist panels can help with, for example, Hispanic people.

The one thing you can’t do is to ignore the cell phone only households.

By the way, if you are in the Seattle area, you might be interested in joining me at the next Puget Sound Research Forum luncheon on November 18, when REI will present the results of research comparing results from landline, cell phone and online panel sample for projectability.  http://pugetsoundresearchforum.org/

Good luck with your cell phone issues!

Idiosyncratically,

Mike Pritchard

Filed Under: Methodology, News, Published Studies, Surveys Tagged With: News, Published Studies, statistical testing, Statistics


SurveyTip: Randomizing question answers is generally a good idea

Showing question answers in a random order reduces the risk of bias arising from their position.

To understand this, think of what happens when you are asked a question by a telephone interviewer. When the list of choices is presented for a single-choice question, you might think of the first option as more of a fit, or perhaps the last option is top-of-mind. The problem is even more acute when the person answering the survey has to comment on each of several attributes, for example when rating how well a company is doing for time taken to answer the phone, courtesy, quality of the answer, etc. As survey creators, we don’t know exactly how the survey taker will react to the order, so the easiest way is to eliminate the potential for problems by presenting the options in a random order. Telephone surveys with reasonable sample sizes are almost always administered with question options randomized for this reason, using CATI systems (computer-assisted telephone interviewing).

When we create a survey for online delivery, a similar problem exists. It could be argued that the survey taker can generally see all of the options, so why is a random order needed? But the fact is that we can’t predict how survey takers will react to the order of the options. Perhaps they give more weight to the option nearest the question, or perhaps to the one at the bottom. If they are filling out a long matrix or battery of ratings, perhaps they will change their scheme as they move down the screen. They might be thinking something like “too many highly rated, that doesn’t seem to fit how I feel overall, so I’ll change, but I don’t want to redo the ones I already did”. Often there is an effect from one option being next to another that can be minimized by separating them, which randomizing will do (randomly). The results from these options being next to each other would likely be very different:

  • Has a good return policy
  • Has good customer service
  • Items are in stock
  • Has good customer service

Some question types and situations are not appropriate for random ordering.  For example:

  • Where the option order is inherent, such as education level or a word based rating question (Likert scale)
  • Where there is an ‘Other’ or ‘Other – please specify’ option. It is often a good idea to offer an ‘Other’ option for a list of responses such as performance measures, in case the survey taker believes that the list provided isn’t complete, but the ‘Other’ should be the last entry (see the sketch after this list).
  • A very long list, such as a list of stores, where a random order is likely to confuse or annoy the survey taker.
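Here is a minimal sketch of what randomization with a pinned ‘Other’ option looks like behind the scenes. Most survey tools offer this as a setting, so this is only to illustrate the idea; the option list is invented:

    import random

    options = [
        "Has a good return policy",
        "Has good customer service",
        "Items are in stock",
        "Other (please specify)",
    ]

    def randomized_options(options, pinned_last="Other (please specify)"):
        shuffled = [opt for opt in options if opt != pinned_last]
        random.shuffle(shuffled)          # each survey taker sees a different order
        if pinned_last in options:
            shuffled.append(pinned_last)  # 'Other' stays as the last entry
        return shuffled

    print(randomized_options(options))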

As with other aspects of questionnaire development, think about whether randomization will be best for the questions you include.

Idiosyncratically,
Mike Pritchard

Filed Under: Questionnaire, Surveys, SurveyTip


LinkedIn B2B sample looks promising

One of the interesting presentations at yesterday’s Puget Sound Research Forum conference was from LinkedIn, covering their recently introduced sample services.

Key advantages for sample from LinkedIn as I see it:

  • Profiling information is entered by the LinkedIn user for reasons unconnected with survey taking. Regardless of how much of a problem you think lying on panel profiling or screening questionnaires might be, a LinkedIn user’s description of themselves is likely to be fairly accurate – and useful to a survey researcher. LinkedIn claims that their users inflate career history less than they do on resumes posted to job-seeking sites such as Monster, because the information is visible to colleagues.
  • This isn’t a panel. The primary reason for LinkedIn membership isn’t to take surveys. While response rates may be lower from LinkedIn than from panels, I really care about quality. Response rate figures are meaningless if you are talking to the wrong people, and a lower rate matters little as long as there isn’t a non-response bias. Surveys using LinkedIn sample still have the potential for response bias, of course, but the reasons are less to do with the sample than with questionnaire and invitation design.
  • LinkedIn says that they will minimize the number of invitations sent to users, perhaps with a limit of no more than 1 or 2 per month.  Although I’m skeptical about the actual numbers, I accept the point that LinkedIn’s focus isn’t sample and that frequent invitations would annoy members so I am optimistic that the LinkedIn sample will continue to be lightly surveyed.

Results shared with the audience seem to bear out the truth of the LinkedIn sample promise. A small telephone study validated the accuracy of status, title and start dates for LinkedIn members. Results from LinkedIn sample and a B2B panel for an online study of U.S. IT decision makers (a notoriously over-surveyed group) showed some interesting differences. In particular, the panel delivered a high percentage of completes between the hours of 3am and 7am. Other data supported the suspicion that many of the responses were from India and China, not from the U.S. Additionally, the panel respondents were more likely than the LinkedIn sample to complete the survey very quickly, suggesting that these were probably not the target audience. Of course, LinkedIn presented information that showed them in the best light, but it was convincing.

I’ll be looking at LinkedIn sample for B2B projects in future, both for my self-service (SurveysAlaCarte) and full-service clients.

Idiosyncratically,

Mike Pritchard

Filed Under: Surveys
