How To Optimize Your E-Commerce Site

Selling online is not just smart; it's essential. As TWICE reported last month, online orders have finally surpassed in-store purchases.

But simply having an e-commerce site doesn’t guarantee that people will buy from you. Today’s shoppers expect seamless, intuitive, personalized experiences, and with technology and tactics evolving at head-spinning speed, brands must continuously adapt to keep pace with users’ expectations. Optimizing your site — that is, tweaking and testing its design and functionality — can improve the customer experience and drive revenue.

A/B testing is a great place to start. Also known as split testing, A/B testing is the process of exposing your website visitors to two variations of an experience and then tracking their behavior to determine which variation performs better. Armed with results, you can then improve your online shopping experience, knowing those tweaks have been validated through testing.

More About A/B Testing

In an A/B test, you take a webpage (or app, or email), clone it, and modify the clone so that you have two versions: A, the control, and B, the variation. Then you split your traffic, routing visitors to one version or the other, and track which performs better.
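Under the hood, the split is typically deterministic: each visitor is hashed into a bucket so that they see the same version on every visit. Here is a minimal sketch in Python, assuming each visitor carries a stable ID; the names are illustrative, not any particular platform's API.

    import hashlib

    def assign_variation(visitor_id: str, experiment: str, split: float = 0.5) -> str:
        """Deterministically assign a visitor to 'A' (control) or 'B' (variation).

        Hashing the visitor ID together with the experiment name keeps each
        visitor in the same bucket across visits, and keeps assignments
        independent across experiments.
        """
        digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
        bucket = int(digest, 16) / 16**32  # map the hash to a float in [0, 1)
        return "A" if bucket < split else "B"

    # A given visitor always lands in the same bucket for a given experiment.
    print(assign_variation("visitor-12345", "home-page-banner-test"))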

Commonly tested elements include home-page layout, primary navigation, product-page layouts, and banner presentation and content. The key, though, is to test significant parts of the experience that receive sufficient views, so that an incremental improvement will impact the bottom line.

The benefits are twofold: You can use A/B testing results to make informed decisions before you make changes to your user interface or user experience, de-risking the process. You can also validate proposed product features to ensure that any changes will be an improvement over what was in place originally.

Think about Amazon: Rather than overhaul its site periodically, which is what many brands do, the mega-retailer continually tests new features, layouts and content, validating changes before universally rolling them out to its millions of active customers. Compared to a brand-new site, the tweaks are subtle but profitable.

Tools of the Trade

Testing requires two tools: an analytics tool and a testing platform. By identifying traffic patterns through your site and showing where visitors drop off before converting, an analytics tool, such as Google Analytics or Adobe Analytics, helps you determine which testing opportunities will have the most impact. A testing platform, like those from Monetate, Optimizely and Qubit, splits traffic, tracks primary test metrics, and determines when tests have reached statistical significance. Most also have a what-you-see-is-what-you-get (WYSIWYG) editor so that you can run simple tests without developer support.
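To make "statistical significance" concrete: for conversion-rate tests, platforms generally apply something like a two-proportion z-test to the control and variation. A minimal sketch in Python; the conversion numbers below are invented for illustration.

    from math import sqrt
    from statistics import NormalDist

    def conversion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
        """Two-sided p-value for the difference in conversion rate between
        control (A) and variation (B), via a two-proportion z-test."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # 400 of 10,000 control visitors converted vs. 460 of 10,000 on the variation.
    p = conversion_z_test(400, 10_000, 460, 10_000)
    print(f"p-value: {p:.3f}")  # ~0.037, below the conventional 0.05 threshold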

Takeaway: If you integrate your testing platform with your analytics tool, you can glean additional insights about your customers in each variation by seeing what other actions they are taking on your site.

Testing Strategy

It is critical to develop a testing strategy. A strategy ensures that you’ll test meaningful variations in parts of the shopping experience that will have the greatest impact. It will also inform your future testing. Your strategy should include:

  1. Goals. Define the principal goals of your site and business that you want to optimize.
  2. Themes. Articulate broad themes that capture issues or challenges from your users’ point of view. For example, if there is a major drop-off at checkout and usability testing shows that your forms are problematic, your theme could be: make checkout easier. Within your themes, identify hypotheses that you can test by adjusting content, UI and UX.
  3. Experiments. Estimate the level of effort and level of impact for each of your hypotheses and build a roadmap around the most meaningful prospects (see the prioritization sketch after this list).
  4. Actions. Take action at the end of every test, whether you make a change on the site, a change in your business practice or develop a new test proposal based on the findings.
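One lightweight way to build that roadmap is to score each hypothesis on expected impact and effort and rank by the ratio. A hypothetical sketch in Python; the hypotheses and scores are invented for illustration.

    # Score each hypothesis on expected impact and effort (1-5 scales).
    hypotheses = [
        {"name": "simplify checkout form", "impact": 5, "effort": 2},
        {"name": "reorder home-page banners", "impact": 3, "effort": 1},
        {"name": "rebuild primary navigation", "impact": 4, "effort": 5},
    ]

    # Rank by impact-to-effort ratio to sketch a testing roadmap.
    roadmap = sorted(hypotheses, key=lambda h: h["impact"] / h["effort"], reverse=True)
    for h in roadmap:
        print(f'{h["name"]}: score {h["impact"] / h["effort"]:.1f}')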

Takeaway: Make sure you share your optimization plans, methodology, justifications and results with peers and senior management to get buy-in. And be sure to reevaluate your framework periodically to tweak goals, generate new themes and adjust your long-term roadmap.

Avoid These Testing Pitfalls

There are mistakes you can make when A/B testing. Among the biggest:

  1. Don’t focus on easy changes. Prioritize tests that promise meaningful gains for your company over obvious, low-effort tweaks.
  2. Don’t crowdsource a testing laundry list. It’s not a good idea to ask various people to send over their test ideas. Instead, collaborate to build a meaningful strategy.
  3. Don’t close tests too early. Ending a test after a short period may not account for seasonality or other major circumstances on the site. Run tests long enough to reach statistical significance and gather sufficient data (see the sample-size sketch after this list).
  4. Don’t treat losing or neutral tests as a waste of time. You can gain great insight from losing or inconclusive tests. Was your variation not substantial enough to make a difference? Can you rethink the underlying assumptions that failed and create a new test?
  5. Don’t allow long lulls between tests. If you’re not testing regularly, you’re leaving money on the table. Even finding and implementing small winners continuously can make a substantial difference to your bottom line over time.
  6. Don’t test insignificant variations. These include button colors and one-word changes. Instead, identify substantial usability or content improvements that could significantly improve your customer experience.
  7. Don’t test too many variations. Each additional variation increases the time it takes to reach statistical significance, and variations added just to fill out a test tend to be arbitrary. Identify the critical elements under consideration and build one or two variations backed by a strong hypothesis.
  8. Don’t test low-traffic areas. It will take too long to see reliable results, and even if you have a winner, it will apply to a lower-volume part of your site, so the net benefits are lower.
  9. Don’t run concurrent tests on overlapping parts of your site. Testing is a scientific process that requires minimizing uncontrollable variables so that you can trust the outcomes. If you are running more than one test on the same part of your experience, you will have a difficult time attributing lift to a particular experiment.
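How long is "long enough"? The standard two-proportion sample-size formula gives a rough floor, and it also shows why low-traffic pages (pitfall 8) take so long to test. A sketch in Python, with illustrative numbers:

    from math import ceil
    from statistics import NormalDist

    def sample_size_per_variation(base_rate: float, relative_lift: float,
                                  alpha: float = 0.05, power: float = 0.8) -> int:
        """Approximate visitors needed per variation to detect a relative lift
        in conversion rate, using the standard two-proportion formula."""
        p1 = base_rate
        p2 = base_rate * (1 + relative_lift)
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
        z_beta = NormalDist().inv_cdf(power)           # desired statistical power
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return ceil(variance * ((z_alpha + z_beta) / (p2 - p1)) ** 2)

    # Detecting a 10% relative lift on a 3% base conversion rate takes roughly
    # 53,000 visitors per variation -- a long wait on a low-traffic page.
    print(sample_size_per_variation(0.03, 0.10))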

Implement What You Learned

Most testing platforms enable you to route 100 percent of your traffic to your winning variation, ensuring that all of your users see that treatment moving forward. Keep in mind, though, that you should have your development team implement the winning treatment directly in your codebase to avoid performance issues.

Going forward, maintain a log of past and current tests that includes information on what makes the variations different, who was included in the experiment, and the resulting metrics. And force yourself to write up a report at the end of each test that includes insights gained and next steps; you might be surprised to realize just how advantageous an A/B testing strategy can be.
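A test log doesn't need to be elaborate; even a simple structured record covering those fields works. A hypothetical sketch in Python, with the fields mirroring the list above and the example contents invented for illustration:

    from __future__ import annotations

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class TestRecord:
        """One entry in an experiment log: what changed, who saw it, what happened."""
        name: str
        hypothesis: str
        variations: list[str]      # what makes each variation different
        audience: str              # who was included in the experiment
        start: date
        end: date | None = None
        metrics: dict[str, float] = field(default_factory=dict)
        insights: str = ""         # end-of-test write-up and next steps

    # Example entry (contents invented for illustration).
    log = [TestRecord(
        name="checkout-form-simplification",
        hypothesis="Fewer form fields will reduce checkout drop-off",
        variations=["control: 12 fields", "variation: 6 fields"],
        audience="all desktop visitors entering checkout",
        start=date(2016, 3, 1),
    )]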

Nik Budisavljevic is an e-commerce strategist with Blue Acorn, an award-winning e-commerce agency.
