Testing and Improving Your Product Page

As you know, the more platforms your app runs on and the more stores it is listed in, the more money you can theoretically make. Being in multiple stores has another advantage: when it comes to testing and improving your app product pages, it lets you run multiple tests and compare multiple changes simultaneously.

Just as you are in the business of continually improving the performance of your app, improving your app’s product page follows a similar cycle. The approach mirrors typical A/B and multivariate testing, except that most stores do not give you a way to test different descriptions side by side within a single listing. Testing should therefore follow something like this process:

  1. Develop several different product descriptions and/or screenshots.
  2. Use a different variant on each store your app is on, being on as many stores as needed to test all variants simultaneously.
  3. Evaluate the results once each variant reaches a statistically significant sample size (1,000+ viewers).
  4. “Promote” your best-performing variant to your highest-trafficked store product page; it is now your “control” (see the sketch after this list for one way to make that call).
  5. Analyze all results, make changes, and repeat steps 1 through 4 until you achieve acceptable results.
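
To make steps 3 and 4 concrete, here is a minimal sketch of how you might compare per-store results and decide whether the leading variant is safe to promote. The store names and numbers are purely hypothetical, and the two-proportion z-test is just one common way to check that a difference is not noise:

```python
# Minimal sketch: rank hypothetical per-store variants by conversion rate,
# then test whether the leader's edge over the runner-up is significant.
from math import sqrt
from statistics import NormalDist

# (store/variant label, views, downloads) -- all numbers are illustrative
results = [
    ("Store A, description 1", 1200, 66),
    ("Store B, description 2", 1500, 98),
    ("Store C, description 3", 1100, 52),
]

def z_test_two_proportions(x1, n1, x2, n2):
    """Two-proportion z-test: returns the z statistic and two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                        # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Rank variants by conversion rate, then compare the top two.
ranked = sorted(results, key=lambda r: r[2] / r[1], reverse=True)
(best, n1, x1), (second, n2, x2) = ranked[0], ranked[1]
z, p = z_test_two_proportions(x1, n1, x2, n2)

print(f"Best variant: {best} at {x1 / n1:.1%} conversion")
print(f"vs. runner-up {second}: z = {z:.2f}, p = {p:.3f}")
if p < 0.05:
    print("Difference looks real: promote this variant as your control.")
else:
    print("Difference may be noise: keep collecting views before promoting.")
```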

No two mobile stores are the same; each has its own audience with different demographics. If you are advertising or marketing your app on a specific store, that also skews your demographic sample. All of this needs to be noted and accounted for when you evaluate results from each app store.

It is worth noting that on many app stores, without a suitable promotional effort, it can be difficult to reach a statistically significant sample size. The effort may not always pay off, but spending just 15 minutes every week or two promoting your app on a given store can produce surprising results.

Determining a statistically significant sample size is tricky and a study of its own. Under ideal test conditions, a sample of 1,040 viewers gives you a 99% confidence level that your true download rate falls within +/-4 percentage points of your sample’s rate. So if you observed a 5% download rate, the true rate could range anywhere from 1% to 9%.
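
For the curious, here is a minimal sketch of the arithmetic behind that 1,040 figure. It uses the standard sample-size formula with the most conservative assumption of a 50% baseline rate; the function names are illustrative:

```python
# Minimal sketch of the sample-size arithmetic: n = z^2 * p(1 - p) / e^2,
# with p = 0.5 as the worst case (largest required sample).
from math import ceil, sqrt
from statistics import NormalDist

def required_sample(confidence=0.99, margin=0.04, p=0.5):
    """Viewers needed so the observed rate is within +/-margin of the true rate."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~2.576 for 99%
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

def margin_of_error(n, confidence=0.99, p=0.5):
    """Margin of error for a sample of n viewers at the given confidence."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return z * sqrt(p * (1 - p) / n)

print(required_sample())                 # 1037 viewers -> the "1,040" above
print(f"{margin_of_error(1_040):.1%}")   # ~4.0% at 99% confidence
print(f"{margin_of_error(10_000):.1%}")  # ~1.3% -- why bigger samples help
```

Note how quickly the margin of error shrinks as the sample grows, which is exactly why the larger viewer counts discussed below produce a more reliable conversion rate.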

With time and advertising, however, you will be able to base your tests on 10,000 to even 100,000 viewers, yielding a far more reliable average conversion rate. It is known, for example, that the “largest online merchant” takes great pains to ensure that everything its visitors first see when viewing its store “converts”. Given its traffic volume, any product whose performance is not up to par “almost immediately” gets swapped out for something else, on a continuous, rotating basis.

When you are starting from scratch, you will likely want to test several completely different formats, with the objective of finding the one that works best for you. These could be long, medium, and short versions; story-based or feature-based descriptions; or variants that trigger end-user emotions by playing on competitiveness, intrigue and curiosity, or simply having fun.

If you already have several apps, start with one of your lower-performing apps to see if you can improve its conversion rate before touching what is already making you money. While we always want to improve performance, you also don’t want to break what is already working, and a LOT of companies do exactly that. As you gather more data and get better at writing app descriptions, you can begin testing your better-performing apps.

In all regards, multivariate testing is a marathon, not a sprint.

Project Manager at the Opera Mobile Store, providing sales and marketing support, content development, and research.
