September 8th, 2011
Pay Per Click
In my last blog post, I outlined the two main types of tests that PPC marketers should use, along with some of the metrics they should regularly apply those tests to. Now that you've conducted your tests and gathered your data, you're all done, right? Well, not unless the data you've gathered is statistically significant. Otherwise, any conclusions you draw may simply reflect random chance.
First, let’s define statistical significance. Generally speaking, a result is called statistically significant if it is unlikely to have occurred by chance. The larger the sample size of your test or study, the more confidence you can put in the statistical significance of the results. For all of you math-centric people out there, I realize there is a lot more that goes into statistical significance. The goal here is to outline how statistical significance applies to PPC marketing in particular, and how people can use it to make decisions based on their PPC test results.
One way to determine statistical significance is to conduct a z-test. For split tests, a two-proportion z-test compares the conversion rates of two variants and tells you whether the difference between them is larger than you'd expect from chance alone. Personally, I find running z-tests by hand a bit too cumbersome in most scenarios, so I tend to utilize automated tools like Split Tester (for ad copy split testing) to tell me whether or not my tests are statistically significant. Most PPC professionals agree that 100 conversions is the threshold for a "valid" test for ads, landing pages, etc. For any of your PPC tests, you should aim for a 95% confidence level, which is the standard benchmark used in academic statistical studies. This means you can be 95% confident that one tested element truly outperforms the other.
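If you'd rather run the math yourself instead of relying on a tool, here's a minimal sketch of a two-proportion z-test in Python. The click and conversion numbers below are hypothetical, purely for illustration:

```python
import math

def z_test_proportions(conv_a, clicks_a, conv_b, clicks_b):
    """Return (z, p) for a two-sided two-proportion z-test."""
    p_a = conv_a / clicks_a
    p_b = conv_b / clicks_b
    # Pooled conversion rate under the null hypothesis (no real difference).
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical split test: Ad A gets 120 conversions from 2,000 clicks,
# Ad B gets 90 conversions from 2,000 clicks.
z, p = z_test_proportions(120, 2000, 90, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below 0.05 corresponds to the 95% confidence benchmark mentioned above; in this example the difference clears that bar, so you could declare Ad A the winner with reasonable confidence.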
Experiment with different tests, analyze your results, and then test again. Just make sure you are picking winners based on accurate data!