CXL Institute Growth Marketing Minidegree review Part 3

Running Growth Experiments: Research, Testing, and Conversion Research

The ResearchXL method as developed by CXL’s Peep Laja.
The ResearchXL method

After understanding the basics in Part 1 and Part 2, I moved on to learn about running growth experiments. In this review I will talk about research, testing, and conversion research, taught by the founder of CXL, Peep Laja.

We start off with Peep Laja explaining the basics of testing and focusing on understanding the why behind it. He explains that in order to achieve greater optimization we should start with testing more effective changes, reducing the time it takes to reach a hypothesis, and improving the speed of experimentation.

He explains that we should start by asking the questions that matter: where are the problems, what are they, and why are they problems? Then turn known issues into hypotheses and prioritise tests and instant fixes.

After this we move to the ResearchXL (TM) method of arriving at what needs to be optimised. This was by far the most fun and eye-opening part. The method is based on:

  • Heuristic Analysis: a user-experience-based assessment of your website. We assess each page, in different browsers, against 4 characteristics
    — Clarity
    — Friction
    — Anxiety
    — Distraction
    We discuss some of the models of heuristic analysis later in the article.
  • Technical Analysis to identify functional problems
  • Digital Analytics to identify problem areas with Google Analytics or any other analytics tool. We want to understand where the money is leaking out. Dig for insights, correlations and friction areas. Check that all the connections are correct.
  • Qualitative Research to draw insights from surveys and user research. You can use on-site polls on key pages where we are solving a challenge.
  • User Testing to learn from the real target group, or people representing it. You need to understand how they behave while performing a specific task on your website.
  • Mouse Tracking is where you understand what people are doing on your site. Peep explains that hover maps are not as useful as scroll maps or even click analysis.

The main point he stresses is digging deeper into the existing website for learnings, rather than looking outside to decide what needs to be optimised.

Through this method, we arrive at an inward-looking list of areas where optimization can be attempted. We can then categorise these problems by urgency and impact. We need to bucket this list and rate it to prioritize.

We need to allocate all the findings in these 5 categories:

  • Test: If there is an obvious opportunity to shift behavior, expose insight or increase conversion.
  • Instrument: This can involve fixing, adding or improving tag or event handling in the analytics configuration.
  • Hypothesize: This is where we’ve found a page, widget or process that’s just not working well but we don’t see a clear single solution.
  • Just Do It: This is a bucket for issues where a fix is easy to identify or the change is a no-brainer.
  • Investigate: If an item is in this bucket, you need to ask questions or do further digging.

Then we go on to rank these items on a 1-to-5 scale, with 5 being the most critical tasks to be completed and 1 being the low-impact tasks. The key is to pick up the tasks that are leaking money or will result in the biggest value gains.
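The bucketing-and-ranking step above can be sketched in a few lines of code. A minimal illustration in Python; the findings, bucket assignments and scores here are invented for the example, not taken from the course:

```python
# Minimal sketch of bucketing and ranking conversion-research findings.
# Bucket names follow the five categories above; the findings are made up.
findings = [
    {"issue": "Checkout button hidden on mobile", "bucket": "Just Do It", "score": 5},
    {"issue": "Unclear pricing copy on landing page", "bucket": "Test", "score": 4},
    {"issue": "Missing add-to-cart event in analytics", "bucket": "Instrument", "score": 3},
    {"issue": "High drop-off on step 2 of signup", "bucket": "Hypothesize", "score": 5},
    {"issue": "Odd traffic spike from one region", "bucket": "Investigate", "score": 2},
]

# Work highest-impact items first (5 = most critical, 1 = low impact).
for finding in sorted(findings, key=lambda f: f["score"], reverse=True):
    print(f'[{finding["score"]}] {finding["bucket"]}: {finding["issue"]}')
```

Even a simple spreadsheet does the same job; the point is that every finding gets a bucket and a score before any test is built.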

After categorising and ranking

Now, how do we act on these problems? Peep explains it through a 6-step process:

  1. Conduct Research
  2. Build Hypotheses
  3. Create treatment
  4. Test treatment
  5. Analyze results
  6. Follow-up experiments
He also lists three metrics for measuring an optimization program:

  1. Testing velocity: how many tests are you conducting?
  2. Percentage of tests that provide a win: how many tests are successful?
  3. Impact per successful experiment: what is the impact of these tests?
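These three metrics multiply together into a rough estimate of a testing program's output. A sketch of that idea; the formula framing and the example numbers are my own illustration, not from the course:

```python
def program_impact(tests_per_month: int, win_rate: float, avg_lift: float) -> float:
    """Rough expected monthly conversion lift from a testing program:
    testing velocity x win rate x average impact per winning test."""
    return tests_per_month * win_rate * avg_lift

# e.g. 10 tests/month, a 20% win rate, and a 5% average lift per win
print(program_impact(10, 0.20, 0.05))  # roughly 0.1, i.e. ~10% cumulative lift
```

The useful part of the framing is that you can improve the program by moving any of the three levers, not just by chasing bigger individual wins.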

After this structured framework, he spoke about A/B tests, where I learnt the why of A/B testing, when such tests should be run, when to decide that a test is over, and some common mistakes made while running A/B tests.

For an average site, a normal testing period should last a minimum of 28 days, which counts as one business cycle. Running much longer than that will start polluting the data.
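Whether 28 days is actually enough also depends on traffic and the effect size you hope to detect. A standard two-proportion sample-size calculation (normal approximation) gives a sanity check; this is a textbook formula, not something specific to the course, and the baseline rate and lift below are hypothetical:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_baseline: float, rel_lift: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per variant for a two-sided z-test
    comparing two conversion rates (normal approximation)."""
    p2 = p_baseline * (1 + rel_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    pooled = (p_baseline + p2) / 2
    n = ((z_a * (2 * pooled * (1 - pooled)) ** 0.5
          + z_b * (p_baseline * (1 - p_baseline) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p2 - p_baseline) ** 2)
    return ceil(n)

# 3% baseline conversion rate, hoping to detect a 10% relative lift:
n = sample_size_per_variant(0.03, 0.10)
# days needed for a 50/50 split is then roughly 2 * n / daily_visitors
```

If the required sample can't be collected within one or two business cycles, the test is probably underpowered for that effect size.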

Peep then goes on to explain the Conversion Research methods. I understood that everything starts with the customer and the data. Without speaking to customers and analysing the right data, we won't reach any actionable insights.

Avoid the analysis-paralysis stage; we need actionable data.

One of the key topics he talks about is site walkthroughs. Through site walkthroughs, one can verify how the website performs across multiple browsers, devices and versions.

In my previous jobs, I often encountered a short-sighted approach: test where the traffic is, without comparing it to the revenue it brings in. Peep Laja explains that walking the customer journey, starting with "landing page zero", will reap higher benefits.

We then start our journey for an in-depth understanding of the heuristic analysis. Peep Laja talks about some established frameworks including The 7 Levels of Conversion by Web Arts, Invesp Conversion Framework, the LIFT framework and the MarketingExperiments Methodology heuristic approach.
This section gave me great exposure to multiple sources I wasn't aware of. It was a real eye opener!

The LIFT framework

Peep then explains the important points he looks for during the heuristic analysis of a website. I can summarise them here:

  • Clarity: Is it perfectly clear and understandable what’s being offered and how it works?
  • Relevancy: Understand the context and evaluate the page's relevancy for visitors. Do pre-click and post-click messages and visuals align?
  • Incentive: Is it clear what people are getting for their money? Is there some sort of believable urgency?
  • Friction: Evaluate all the sources of friction on the key pages.
  • Distraction: Pay attention to distracting elements on every high-priority page.
  • Buying phase: Understand buying phases and see if visitors are rushed into too big of a commitment too soon.

In the lifetime of a company or any project, we launch so many surveys, but how often do we nail the complete process? First and most importantly, clearly define the purpose of the survey. Moreover, surveys should:

  1. Use closed-ended questions
  2. Avoid leading questions
  3. Reach the heart of the problem

There are still some common mistakes people make while drafting a survey:

  1. Non-intuitive scales
  2. Mixing questions of behavior and attitude
  3. Questions that don’t communicate
  4. Long surveys
  5. Unnatural learning curve

Existing-customer surveys tend to assume a level of product knowledge; don't fall into that trap.

Now, even in the best survey environments, there may be certain biases or challenges one might face, some of them being:

  1. Reading the room: when the surveyor tells the client what they want to hear. (surveyor end)
  2. Order bias: questions appearing later tend to get better responses, so question order should always be reshuffled. (surveyor end)
  3. No summaries, or overly technical summaries. (user end)
  4. No debrief. (user end)
  5. Selective perception: push the client to be more objective. (surveyor end)

Communicate in the customer's own verbiage to get better results. Research before you survey.

How do we increase the response rate of a survey?

  1. Build a relationship with the customer
  2. Send out surveys promptly
  3. Incentivize the user

Qualitative surveys are just as important. They fuel the entire thought process behind copywriting, because the respondents are customers who converted recently and have no previous relationship with the company.

With qualitative surveys, our main goal is to learn about the customers: who these people are, what problems they are solving for themselves, what the voice of the customer sounds like, and so on.

Coding enables you to organize large amounts of text and to discover patterns that would be difficult to detect by reading alone. So always try to code the answers of a qualitative survey.
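In practice, coding can be as simple as tagging each open-ended answer with one or more theme codes during review, then counting the codes to surface the dominant patterns. A minimal sketch; the answers and theme codes below are invented for illustration:

```python
from collections import Counter

# Each open-ended answer, manually tagged with theme codes during review.
coded_answers = [
    ("Shipping took too long", ["delivery"]),
    ("I wasn't sure what the price included", ["pricing", "clarity"]),
    ("Great product, but checkout was confusing", ["checkout", "clarity"]),
    ("Couldn't tell if returns were free", ["returns", "clarity"]),
]

# Count how often each theme appears across all answers.
theme_counts = Counter(code for _, codes in coded_answers for code in codes)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

In this toy set, "clarity" would surface as the dominant theme even though no two respondents used the same words, which is exactly the pattern reading alone can miss.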

There was so much to cover in this module and every topic was an eye opener. I felt the impact of the knowledge that Peep Laja has amassed in his career. It really showed in the structure of the module and the many points he talks about:

  1. Talk to sales or customer support people to understand the top 10–20 questions they get asked and what concerns customers have.
  2. Trigger polls to capture the visitor's in-the-moment mindset. These can be triggered after a set time spent on a page or on exit intent. Experiment with the mix of leading and final questions.
  3. Live chat transcripts help in painting a picture and providing ideas. Analyse the transcripts from the last 30 days, or 3 months for a low-traffic website.
  4. The main benefit of user testing is to identify bottlenecks for users. User testing usually involves people observing recruited testers complete a given set of tasks on the website.
  5. There are 3 ways to run user testing — over the shoulder, monitored remote and unmonitored remote testing.
  6. Mouse tracking tools offer key insights into the user behaviour on your site. These can be through heat maps, click maps, user session recordings or scroll analysis.
  7. The Ring Model, developed by Craig Sullivan, focuses on the depth of engagement rather than page views. It helps in discovering where the flow is stuck.
  8. A Google Analytics health check needs to be completed before attempting any optimization work. Get experts to look at the GA setup, covering account, property and view settings, to avoid working with polluted data.
  9. Funnels and goal flows are critically important. Though different, both give details of what is happening with the traffic on the website.
  10. More than just looking at the number of page views and traffic we also need to look at country, region, city, age groups, visitor type, demographics and affinity categories.
  11. Internal site search is another way to optimize for conversions. You can test for results with and without site search. Check the Behavior -> Site Search -> Overview report; if it shows nothing, site search tracking is most likely not configured.
  12. Content reports help to identify pages that work better than others. We can look at Top Content to find highest-traffic, low-performance pages, Landing Pages to compare the bounce rate of top landing pages against the site average, and the Navigation Summary to see what people do on a specific page and which links get clicked.
  13. Run “conversions per browser” report, and segment by device category.
  14. Copywriting is by far the most important part and is sometimes rated higher than design in persuading a customer. There are 2 main parts to a copy: clarity with complete information, and the value of the products. Copy testing is the research you can do to collect data on your copy.
  15. Copy testing is mostly a qualitative survey in which we ask questions about friction and completeness. Try to use a scale so that you can quantify the qualitative data.

This track definitely provided key insights into areas I knew little about. It also formed a base for more advanced CRO techniques and practices, and I came to understand the finer details of running growth experiments. I am sure this will set me up nicely for the next topics, which include A/B testing and statistics fundamentals for testing.

If this makes you interested in growth marketing and you wish to learn more, I can definitely recommend CXL Institute's Growth Marketing Minidegree, or any of their other courses.

Startup Growth | Business Development and Sales | Creative thinker