Data mistakes, part 1: Collection issues

In the rush to collect data and modify marketing campaigns, marketers can lose sight of what quality collection really looks like. To draw useful insights from your data, you need to collect clean, useful information, not just vacuum up every imaginable data point and try to glean meaning from them. But how do you ensure you’re actually doing that?

Avoid the following data disasters in your next campaign by incorporating these best practices into your data collection.

Inconsistent data collection

If data is not collected in the same manner and under the same conditions throughout a campaign, error is introduced into any conclusions drawn from it. That means you should not change data collection methods halfway through a campaign, not if you want to draw useful comparisons between data from both periods. Switching collection methods or parameters may not seem like a big deal, but it can dramatically skew results.

Instead, if and when you change your data collection approach, make it a clean break. Do so between campaigns when possible, and when you must compare data between the two approaches, make a note of the potential error introduced by these changes. You may even want to collect data in two overlapping systems for a period of time to get a sense of how much error there may be.
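If you do run two systems in parallel, even a quick comparison over the overlap window tells you how much disagreement to expect. Below is a minimal Python sketch, assuming you have exported daily session counts from each system; the figures are illustrative placeholders, not real campaign data.

```python
# Compare daily session counts reported by two tracking systems
# over the same overlap period. Sample figures are illustrative.
old_system = [1200, 1150, 1310, 1280, 1400, 990, 1050]  # daily sessions, old setup
new_system = [1125, 1098, 1255, 1190, 1337, 940, 1012]  # same days, new setup

# Per-day relative difference, using the old system as the baseline.
daily_diff = [(new - old) / old for old, new in zip(old_system, new_system)]

avg_diff = sum(daily_diff) / len(daily_diff)
worst_diff = max(abs(d) for d in daily_diff)

print(f"Average discrepancy: {avg_diff:+.1%}")  # roughly -5% with the sample data
print(f"Worst single day:    {worst_diff:.1%}")
```

If the discrepancy is stable from day to day, you can note it as a rough correction factor when comparing numbers across the switchover; if it swings wildly, treat comparisons across the change as unreliable.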

Muddy testing practices

If too many elements of a campaign are changed at once, even robust data collection and analysis cannot determine which changes caused gains or drops in performance. What’s more, when you modify multiple elements simultaneously, you accrue disparate data points that can skew your overall results. To ensure your tests yield valuable insights, use a more strategic approach: gradual A/B testing.

In gradual A/B testing, you change only one aspect of your campaign at a time, even if that change is as small as a single form field or the wording of a call to action. By making these individual alterations, you can measure the impact of each one and compare the two results directly. This is valuable when you want to test one part of a whole system, like the subject line of an email or the color of a button on your website.
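To make "compare the two results" concrete, here is a minimal Python sketch that evaluates a single-variable test with a standard two-proportion z-test; the visitor and conversion counts are illustrative, and in practice you would pull them from your analytics platform.

```python
import math

# Compare two variants that differ in ONE element (e.g. the call to
# action). The counts below are illustrative placeholders.
visitors_a, conversions_a = 5000, 250   # control
visitors_b, conversions_b = 5000, 310   # variant with the new CTA

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Two-proportion z-test: is the difference larger than chance explains?
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / std_err

# Two-sided p-value from the normal distribution.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"Control: {rate_a:.2%}  Variant: {rate_b:.2%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests a real effect
```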

Gradual A/B testing is generally the best practice to follow, as it gives you the most detailed, actionable insights. However, some situations call for radical A/B testing, in which you treat two separate wholes as the two variables: when your traffic is too low to practically run a series of gradual tests, when gradual testing is not showing meaningful results, or when you need to see large growth immediately. For example, you might test how a downloadable white paper performs against a video on your website at driving engagement. Remember, a radical test doesn’t teach you anything about an individual component, like the title of your white paper or the screen size of your video. Instead, it shows how each offer performs as a whole.
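One way to judge whether your traffic is too low for gradual testing is a rough sample-size estimate before you start. The Python sketch below uses the standard two-proportion approximation at 95% confidence and 80% power; the baseline conversion rate and target lift are illustrative assumptions you would replace with your own numbers.

```python
import math

# Roughly how many visitors does each variant need to detect a given
# lift? Standard two-proportion approximation with 95% confidence
# (z = 1.96) and 80% power (z = 0.84).
baseline = 0.05   # current conversion rate (illustrative: 5%)
lift = 0.01       # smallest improvement worth detecting (+1 point)
z_alpha, z_power = 1.96, 0.84

p1, p2 = baseline, baseline + lift
p_avg = (p1 + p2) / 2

n = ((z_alpha * math.sqrt(2 * p_avg * (1 - p_avg))
      + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / lift ** 2

print(f"~{math.ceil(n):,} visitors per variant")
```

If that number dwarfs your monthly traffic, a radical test of two whole experiences may be the more practical choice.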
Have you hit a snag in your data collection? We’d be happy to talk about what you can do to improve it in your campaigns. Contact us today.
