5 “A”s to Improve Your Quiz Response Rate: Analyze

Analyzing the data from your quiz tells you a lot about your quiz-takers and what they want. Find out how to do that!

I wrote an article a few weeks ago titled “5 Simple Ways to Improve Your Quiz Response Rate Right Now,” where I shared a simple 5-step framework to help you get more and better quiz responses. The 5 A’s to increase your response rate are:

1. Acknowledge

2. Ask

3. Ambience

4. Analyze

5. Appreciate

After I wrote the article, I realized that each of these A’s deserved a more in-depth article of its own, so I want to go through an in-depth analysis of each step in the framework.

First, make sure that you have Acknowledged who your customer is and what pain point you are trying to solve for them. Second, you need to Ask the right questions. Third, you need to set the right Ambience for your user. Today, I will walk through how to “Analyze” your results and iterate on your quiz to improve it.

How do you know whether your quiz is working?

First, we need to analyze the results of the quiz. Second, we need to iterate to increase future response rates.

I can often be too confident that my quiz is working because I was sure my initial hypothesis would pan out perfectly. Unfortunately, I am also often wrong, so I need the humility to analyze the results from an unbiased perspective.

Personally, I have been working on a Chapter 13 Calculator that allows individuals to understand their different debt relief options. There are two paths the user can take: 1) a rough estimate and 2) a precise estimate. The rough estimate has under 10 questions and takes about 2 minutes; the precise estimate has approximately 30 questions and takes about 5-10 minutes. The precise version is much more comprehensive and allows us to give much better guidance, but it is longer and requires an email.

For this experiment, I tested two different landing page experiences:

1. Go straight to the precise estimate quiz

2. Go to a landing page that allows them to decide whether to choose the precise estimate quiz or the rough estimate quiz

Interact has an amazing analytics feature that lets you measure differences in conversion data. What I love is the funnel view, which shows data points for how your conversions are tracking at each step.

Let’s get into the results of my quiz. For my quiz above, we had the following results:

1. When the user went straight to the precise estimate quiz, we had 63 submissions from 258 clicks.

2. When the user had the choice between the precise estimate quiz and the rough estimate quiz, we had 94 submissions from 155 clicks.

From there, I can plug these numbers into an A/B test calculator to determine whether my results are statistically significant. The outcome looks fairly obvious here, but I generally run my data through an A/B calculator just to confirm. There are many statistical significance calculators out there; I have found that Neil Patel’s A/B test calculator works well, and I like its user experience.
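If you want to see roughly what an A/B calculator is doing behind the scenes, a common approach is a two-proportion z-test. Here is a minimal Python sketch using the numbers above (the choice of test is my assumption for illustration, not Interact’s or Neil Patel’s documented method):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Test whether two conversion rates differ significantly."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Data from the experiment above
p_a, p_b, z, p = two_proportion_z_test(63, 258, 94, 155)
print(f"Straight-to-precise: {p_a:.1%} conversion")
print(f"Ability to choose:   {p_b:.1%} conversion")
print(f"z = {z:.2f}, p-value = {p:.2g}")
```

With these numbers, the conversion rates are roughly 24% versus 61%, and the p-value is far below the usual 0.05 threshold, which matches the significant result described below.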

Here were my results:

To go one step further, I like to go through the Questions and Answers tab in Interact Analytics, which goes into much more detail about impressions and answers so you can measure engagement percentage. Understanding engagement is vital to finding the drop-off points in your quiz. For us, we realized that the drop-off often happens when the user has to enter personally identifiable information such as an email or a phone number. You may need this data for one reason or another, but the Questions and Answers insights help you see all of your users’ interactions. You may also see drop-offs at more random points, which may mean your user was expecting a shorter quiz. Either way, understanding your engagement percentage is very useful when analyzing your results.

Iterate and repeat:

As you can see, the “ability to choose” version produces a statistically significant improvement. However, you have to understand what the data actually means. It’s true that the “ability to choose” version looks much better, but most of those individuals took only the rough estimate, so you may not want to switch over just because the topline number wins. You need to make sure you understand the data.

What did I do? I actually switched over to the statistically significant winner, because some folks want the precise estimate right away, and others will take the rough estimate and then move on to the precise estimate. We will see what happens.

It’s important to understand the data from an unbiased viewpoint while questioning your initial hypothesis. The goal is to continue testing and iterating until you are able to solve your customer’s pain point from the data collected.