Your website, as a core component of your business, needs testing and optimization just as much as any other part of your operation, but site testing can seem opaque and overly complex to anyone outside the web development team. It truly doesn’t need to be that difficult, though. At base, different types of testing provide you with different kinds of information and are useful at different stages in the life of your website. Just as with sales, accounting, or production, what data you want and how you gather it depends on what questions you need answered. While there is a wide range of types of website testing and an equally wide range of ways to do it, here is some information about three of the most common types, to help you understand what your developer is doing and why.
User Experience Testing
As the name implies, this is intended to evaluate how effective and easy the user experience your website offers is. This is a large task that can be approached in a number of ways, but at base, user experience testing tells you as a business how easy, enjoyable, and efficient it is for a user to accomplish what they (and you) want to accomplish using your website. It can be used for testing something with a very clear process, like a product selection and purchase workflow, but it can also be used to evaluate something a bit more freeform, like research and information gathering.

Originally (and still in many cases), user experience testing involved sitting a volunteer down at a computer with your website (typically a mock-up built for testing purposes rather than your current live site, for obvious reasons) and asking them to complete some task. The designer could then watch for points in the process where the user was tripped up or where things seemed to bog down, and would ask the user questions about their use of the site. Because of the resources involved in this type of testing, especially the time required and the need for volunteers without knowledge of the site to act as test users, it is best suited to the period before a large-scale site launch, when you have many design and navigation elements that need to be evaluated. Thankfully for us, there is also now a whole host of software and web services that allow you to do user experience testing remotely, without needing to bring in volunteers or spend days observing and compiling data. Some are as simple as a user survey to find out what organization method makes the most sense to your users, but many allow for complex click tracking of visitors to your website that can then be analyzed and compiled into a report showing the most difficult spots for users to navigate.
While remote testing may require at least a beta version of your site to be publicly accessible, it can allow you to gather data from a much larger number of users in less time, and it is an especially good choice once the majority of your design and content are set.
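To give a sense of what that compiled click-tracking data can show, here is a minimal sketch in Python. The session log and the step names are invented for illustration (real remote-testing services do this aggregation for you); the idea is simply that a sharp drop in the share of sessions reaching a given step flags a likely trouble spot.

```python
from collections import Counter

# Hypothetical click-stream log: (session_id, step_reached).
events = [
    ("s1", "landing"), ("s1", "product"), ("s1", "cart"), ("s1", "checkout"),
    ("s2", "landing"), ("s2", "product"),
    ("s3", "landing"), ("s3", "product"), ("s3", "cart"),
    ("s4", "landing"),
]

# The steps of the purchase workflow, in order.
funnel = ["landing", "product", "cart", "checkout"]

# Count how many sessions reached each step.
reached = Counter(step for _, step in events)

# Report the share of sessions that made it to each step.
total = reached[funnel[0]]
for step in funnel:
    print(f"{step:8s} {reached[step] / total:.0%}")
# Prints: landing 100%, product 75%, cart 50%, checkout 25%.
# The biggest drop-offs point at the pages worth a closer look.
```

In this made-up log, half the sessions never reach the cart, which is exactly the kind of pattern a report from a remote testing tool would surface.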
A/B Testing
Designed to test two or more different versions of what is essentially the same content, you may also see this called split testing. It is sometimes lumped under the larger heading of user experience testing, but it really serves a slightly different purpose. Because it is designed to test the effectiveness of very specific variables (e.g. the difference between two background colors or two different phrasings of the same call to action), A/B testing is typically your last stop on the road to a completed, effective website. The big elements have already been decided on and tested in more general user experience testing, and now it’s down to deciding on the small details. Now don’t get me wrong, you can use this type of testing at any point in the process to decide between two similar but different options, but think of it like asking your significant other which of your possible outfits they prefer. That’s a question you ask once you know where you’re going for the evening, once you’ve evaluated what is in your closet, and once you’ve narrowed it down to a few final contenders. I could go through that whole process with A/B testing from the very beginning if I wanted to (should I wear a dress or pants? should I wear long sleeves or short? should I wear black or a color?), but I’m pretty sure I could make most of those decisions on my own just fine, and it’s the final decision that’s the important one.

Also, because A/B testing typically requires two or more versions of a particular page to be live and randomly served to different users, the challenge becomes gathering data in a short enough window that other variables aren’t affecting the outcome. If factors pop up during the course of your testing that influence traffic to your site, or if changes are made to other portions of your site than the one you’re testing, then the A/B variable quickly becomes not the only thing influencing user behavior, and the data you’ve gathered is no longer reliable.
The key to effective A/B testing is to make sure the entire team is on board, so that everything else stays as static as possible during the testing period. So use it when it counts, and use it fast.
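For the curious, here is roughly how a developer might split traffic between two versions of a page. This is a hedged sketch rather than any particular tool's implementation, and the visitor IDs are invented: the point is that hashing a visitor ID (instead of flipping a coin on every request) keeps the assignment stable, so a returning visitor always sees the same variant and the data stays clean.

```python
import hashlib

def assign_variant(visitor_id: str, variants=("A", "B")) -> str:
    """Deterministically assign a visitor to one page variant.

    Hashing the visitor ID means the split is roughly even across
    visitors, but any single visitor always lands in the same bucket.
    """
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# A returning visitor sees the same version of the page every time.
assert assign_variant("visitor-42") == assign_variant("visitor-42")
```

Each variant's page views and conversions are then tallied separately, which is why changes elsewhere on the site mid-test muddy the comparison.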
User Surveys
Like A/B testing, this is sometimes considered part of general user experience testing, and it’s understandable why, but I like to think of it as its own thing, if for no other reason than that the people who answer surveys are self-selecting, so you’re looking at a different data pool than with something randomized like A/B testing or remote user experience testing. When you ask people to take a survey, and you leave whether they take it entirely up to them, you typically end up with responses from either extreme end of the spectrum: people who either love or hate what you’ve done enough to be compelled to comment on it. Some companies compensate for this by offering contest entries for completing surveys, but this runs the risk of gathering results that people aren’t taking the time to really consider, just clicking whatever answer for the sake of getting their contest entry. One potentially effective method is the type of pop-up survey request used by ShopRunner on many of their participating retailers’ websites. Rather than asking you to take the survey immediately, the survey only appears once you’ve finished with the website, which is a much more appealing option for most people. It also reduces some of the self-selection bias by only asking a customer to take a survey after they’ve made a purchase and potentially had a wonderful or frustrating experience that would compel them to want to answer. In all of these instances, the key to balancing your extremely positive and negative responses is to get enough data that they become outliers and fade into the noise.
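That last point about noise can be shown with a little arithmetic. In this entirely hypothetical example, a handful of extreme ratings dominates a small voluntary sample, but once prompted responses fill in the middle of the scale, the same extremes barely move the average.

```python
# Hypothetical 1-5 satisfaction scores. In a small voluntary sample,
# only the strongly opinionated bother to respond.
small_sample = [1, 1, 5, 5, 5]

# With prompted responses (e.g. a post-purchase pop-up), more
# middle-of-the-road customers answer too.
large_sample = [3] * 90 + small_sample

def mean(scores):
    return sum(scores) / len(scores)

print(round(mean(small_sample), 2))  # 3.4, swung by a handful of extremes
print(round(mean(large_sample), 2))  # 3.02, the extremes fade into the noise
```

The individual extreme responses haven't changed; they've simply become outliers in a large enough pool.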
While site testing can seem overwhelming, we hope this information helps you feel more prepared to tackle the subject, hopefully with a developer you trust! And remember, the details may be different, but the concepts are the same as for any other kind of market testing or hypothesis testing. You’ve probably done this before and may not even know it! And while we’re on the subject, we’ve all had that one site that was a dream to use, or that one that required a graduate degree in forensic investigation to navigate, so share your best and worst user experience stories with us in the comments. We’ll all learn a little more about what (and what not) to do!