You know the feeling: you’re at a birthday party and someone asks, ‘What exactly do you do?’ You answer ‘CRO specialist’ (or something similar), and you used to get ‘The what?’ in return. Recently, however, we’ve noticed that more and more people outside the field know about (or at least know of the existence of) CRO and experimentation. The field is growing, and with it the maturity of organizations.
Where a few years ago the question focused on how to set up a good experimentation process, it is now about the maturity of the company. To what extent does the company embrace a data-driven way of working? Is there a culture of experimentation, in which failure is allowed? Are hypotheses and learnings shared properly?
In this article, we take a closer look at trends in four disciplines within the CRO world:
1. Product Discovery CRO
We are increasingly seeing a shift to Product Discovery. Market demand is less focused on help with running an A/B testing program that optimizes what has already been built, and increasingly focused on pre-validation: how can we collect data and validate whether something works before we start building? We call this Product Discovery. This shift in demand shows that organizations are becoming more mature and that CRO is really becoming part of various business processes.
2. Embed experimentation in the product team process
The aforementioned ‘CRO for Product Discovery’ trend can also be viewed more broadly. More and more, experimentation is becoming part of the entire product team process. In both the Discovery and Delivery phases, plenty of experiments and studies are carried out. This close collaboration means you can run larger tests and implement winners quickly. Many organizations are not there yet, but we do expect more and more demand for this in 2023. Wondering how best to approach it? Then read this article.
3. More organizations are working towards a Center of Excellence
The growth in maturity is also reflected in the number of companies working towards a Center of Excellence (CoE). To scale experimentation up to an organization-wide way of working, a CoE must be set up. The CoE is responsible for making experimentation accessible and user-friendly throughout the organization, and it has its own budget and a team that ensures every team can experiment.
1. Results are not learnings
Organizations are A/B testing more and more, which is very good. The more we can substantiate with data (moving from gut feeling to evidence), the fewer assumptions we take to be true. However, we still don’t get enough out of the tests that are run. Unfortunately, we still often see conclusions like this in test reports:
This experiment has a significant uplift of 2.5% so we learned that the variant works better.
Wrong! You still don’t know why your variant works better, how you can build on it to get even more out of your test, or whether the same principle would work in other channels.
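As an aside, even the claim ‘a significant uplift of 2.5%’ rests on a statistical test. A minimal sketch of how such an uplift and its significance could be computed with a two-proportion z-test, using plain Python and hypothetical visitor and conversion counts:

```python
import math

def ab_test_result(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B test (hypothetical numbers).

    Returns the relative uplift of B over A and a two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    uplift = (p_b - p_a) / p_a
    # Pooled standard error under the null hypothesis of equal rates
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF (math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return uplift, p_value

# Hypothetical example: 5% baseline conversion, ~2.5% relative uplift
uplift, p = ab_test_result(conv_a=12500, n_a=250000, conv_b=12812, n_b=250000)
print(f"uplift: {uplift:.1%}, p-value: {p:.3f}")
```

Note that the numbers only tell you *that* the variant performed better, with a given uncertainty. The *why* still has to come from your hypothesis and analysis.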
Many people find it difficult to go beyond simply describing the results in the report (for example, an uplift in conversion). That is why the following mnemonic is useful: a good report contains the past, the present, and the future.
With these three elements you ensure that in 2023 you test the bigger picture instead of running loose, haphazard tests.
1. The past: hypothesis & substantiation
When drawing up a good report, it is crucial that you formulated a good hypothesis in advance. Without a hypothesis, it becomes difficult to draw conclusions afterwards and to test accordingly. When formulating your hypothesis, make sure you substantiate it properly with different sources of information.
Use the following checklist:
- Does your report describe what you expect?
- Does it mention the metric on which you will evaluate your experiment?
- Does it describe what changes were made in the variant (including images)?
- Has the substantiation for your expectation been described?
- What sources do you use as evidence?
- How does the current experiment build on previous findings?
2. The present: results & learnings
This section is about the actual results of your experiment. People tend to forget a losing test as quickly as possible and move straight on to the next test idea. This is a shame: you can learn a lot from losing tests, if you know how. Here, too, it is important to link back to your hypothesis. Describe how the results match or deviate from the expectations you had.
If your findings don’t match what you expected, this is a great opportunity to dive deeper and learn. Do this by properly investigating unexpected outcomes. At what layer did it go wrong? Was there any interaction at all with the element you modified? Also look closely at alternative explanations: if you put the element in a different position on the page, would it work better? If the information shown was not helpful, what information are visitors looking for?
Use the following checklist for the present:
- Is the outcome described?
- How is the outcome related to your hypothesis?
- What are possible explanations for an unexpected outcome?
- Tip: take another critical look at the designs.
- Tip: view different segments/devices and other metrics.
- How can you test alternative explanations?
3. The future: recommendations & follow-up tests
In this last section you describe the next steps. What do you recommend? Should the variant be implemented or not? How will you build on the findings? What are your follow-up tests? Can you make the effect of the winner even bigger by testing it on other pages or channels? How can you turn the learning from your winner into a follow-up test that tries to disprove it? There is always a next step to your report.
Use the following questions for the future:
- Do you want to implement the variant or not?
- If yes: how can you continue testing in the same direction?
- If no, what test ideas follow from your alternative explanations?
By applying the past, present and future in your reporting, you get the most out of your learnings. This way you test with an overarching strategy and you are no longer testing haphazardly.
In the field of data & analytics, we are building on last year’s trends and developments. We mention 3 in this article:
1. GDPR and the use of Google Analytics
This is an ongoing theme, as many DPAs (Data Protection Authorities) still have to rule on the use of Google Analytics. The Dutch DPA (Autoriteit Persoonsgegevens) has not yet made an official statement about this. A plausible outcome is that the Dutch DPA, like the Danish DPA, will not determine whether Google Analytics is banned, but will only test whether companies comply with the law. It is no coincidence that Google is pushing harder towards GA4, which has solved a number of privacy problems.
To be compliant, it is especially important that you give the user clear and complete information about the personal data at the moment you collect it. What can you do in the meantime?
- Make sure your cookie banner has the correct consent and information.
- Check whether your privacy & cookie statement contains the correct information.
- Be critical about which (personal) data you really need.
- Be critical of your tooling: if you no longer use something, remove it from the website.
Are you unsure about what applies to your company? Get advice from a lawyer. Look at it this way: if the DPA comes knocking on your door, you can at least show which active steps you have taken. That will matter!
2. From Universal Analytics to GA4
Hopefully no one has missed that a major change is coming to Google Analytics in 2023: the transition to GA4 must be completed by July 2023. A large proportion of users already have GA4 running (and if not: start as soon as possible!), but most of the industry still works in GA3’s interface. This is partly because GA4 did not yet contain all the functionality that has been standard in GA3 in recent years, and partly because the interface really works in a different way.
Krista Seiden, among others, talks about the many benefits of GA4. The BigQuery connection in particular will open up new possibilities for analysts. Allow for extra training and acclimatization time before you work with it as smoothly as with GA3.
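To make the BigQuery connection concrete: the GA4 export writes daily `events_*` tables that you can query with wildcard tables. A minimal sketch of building such a query in Python (the dataset id and date range are hypothetical; actually running it would require the google-cloud-bigquery client and an export-enabled project):

```python
def ga4_event_counts_query(dataset: str, start: str, end: str) -> str:
    """Build a SQL query over the GA4 BigQuery export (daily events_* tables).

    `dataset` is a hypothetical `project.dataset` id; dates are YYYYMMDD strings.
    """
    return f"""
    SELECT event_name,
           COUNT(*) AS events,
           COUNT(DISTINCT user_pseudo_id) AS users
    FROM `{dataset}.events_*`
    WHERE _TABLE_SUFFIX BETWEEN '{start}' AND '{end}'
    GROUP BY event_name
    ORDER BY events DESC
    """

sql = ga4_event_counts_query("my-project.analytics_123456", "20230101", "20230131")
print(sql)
# Executing it would use the google-cloud-bigquery client, e.g.:
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(sql).result()
```

This kind of raw event access (event_name, user_pseudo_id, event parameters) is exactly what GA3’s aggregated interface never offered, and is a big part of the extra analyst skill set mentioned above.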
3. Server side tagging & testing
More and more companies are switching to server-side tagging and testing. The big advantages are the ease of linking various data streams to the backend and the positive impact on site performance. With A/B testing it is often tempting to dive straight into the content, while improving speed can really do wonders for your conversion rate. Sometimes it pays to spend one or more sprints on site performance instead of running multiple A/B tests! Apart from the A/B testing process, it also does a lot of good for your SEO.
A disadvantage is that, as a user, you no longer see what is being measured. This makes QA and analysis a bit more complicated, for example. You should consider in advance which events you need and whether these are all measured server-side by default. Server-side tagging is also not the solution to all data privacy issues with Google Analytics: the fact that you no longer see what is being measured does not mean that you are suddenly allowed to measure everything without permission. Server-side tags must also comply with the privacy rules!
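One concrete flavor of server-side measurement is the GA4 Measurement Protocol, where your backend posts events to Google instead of the browser. A minimal sketch in plain Python of building such a request, with consent checked first (the measurement id, api secret, and client id are placeholder values):

```python
import json
from urllib.parse import urlencode

GA4_ENDPOINT = "https://www.google-analytics.com/mp/collect"

def build_ga4_request(measurement_id, api_secret, client_id,
                      event_name, params, consent_given):
    """Build a GA4 Measurement Protocol request for server-side sending.

    Returns (url, json_body), or None when the user gave no consent --
    server-side tags must respect the same privacy rules as client-side ones.
    """
    if not consent_given:
        return None
    query = urlencode({"measurement_id": measurement_id, "api_secret": api_secret})
    body = {
        "client_id": client_id,
        "events": [{"name": event_name, "params": params}],
    }
    return f"{GA4_ENDPOINT}?{query}", json.dumps(body)

# Placeholder ids; sending would be a plain HTTP POST of json_body to url
req = build_ga4_request("G-XXXXXXX", "my-api-secret", "123.456",
                        "purchase", {"value": 42.0, "currency": "EUR"},
                        consent_given=True)
```

Because the consent check now lives in your backend instead of in the visitor’s browser, QA means verifying this server logic, which is exactly the extra complexity described above.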
1. UX research post-COVID
A trend in UX research is remote research. Because of COVID, for a very long time we could not run physical usability tests in our UX lab. But we continued to do research, from a distance: with tools like Lookback we could keep learning about our users remotely. The big advantage turned out to be that respondents take part in the research from their own familiar environment, which in turn provided very good insights. Every disadvantage has its advantage. We expect remote usability research to remain widely used.
2. Cross-device UX
Cross-device UX is about visitors using two or more devices in their customer journey. We are constantly switching devices. The classic example is that visitors browse on mobile and buy on desktop, but that black-and-white picture is outdated: visitors orient themselves on mobile while on the train, view the product again at home on desktop and put it in their shopping cart, then convert on mobile and consult customer service via their laptop at work.
We expect cross-device UX to become an important skill for designers in 2023. A well-known example is being able to copy text from your iPhone to your MacBook, but there are many more possibilities that companies can make smart use of. How do we ensure that visitors have the same experience across all devices? Can we let visitors easily transfer their shopping cart from desktop to mobile and vice versa?
We hope that with this article you have been fully informed about the trends and developments that we foresee in the coming year. And we are curious about your additions! How will you get started with CRO in 2023? Let us know in the comments.
Source: Frankwatching by www.frankwatching.com.