Running a Google Ads campaign can be a case of you’re damned if you do and you’re damned if you don’t. If the campaign turns out to be a flop, you’re obviously in trouble, but if it succeeds, you may encounter other challenges. Campaigns that are successful over a long period become harder and harder to improve, making it difficult to keep delivering performance and adding value.
In my experience, new Google Ads accounts offer quick wins and easy results. Over time, those opportunities dry up and more innovation is needed. One of the ways we’ve been able to keep delivering performance on already well-performing accounts is through campaign drafts and experiments.
In this case study, we had been running a campaign for a large legal client for over five years. The results over that period were phenomenal and we experienced tremendous growth. The Google Ads account was in a mature state: we were happy with performance and CPA levels, but were challenged to keep driving lead growth. In this competitive industry, it was important to constantly test features and push new boundaries. Over the course of a year, we conducted 80 experiments testing a wide range of features. We’ll walk you through some of these tests, the results we saw, and what we learned.
Campaign drafts and experiments
Before doing so, a brief summary of campaign drafts and experiments is in order. We’ll call these “experiments” for short. This Google Ads feature helped us resolve the issue and continue to deliver performance in a mature account. The basic process of using the tool is:
- Clone an existing campaign as a new draft
- Make the changes you want in this draft to test some assumptions
- Run this draft with the original campaign for a while
- Split traffic between draft and campaign (usually 50/50) as an A/B test
- Monitor results in real time throughout the test; Google indicates when results are statistically significant
- Apply the draft’s changes to the original campaign, or discard the draft, in one click
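The significance call in the steps above comes down to comparing the conversion rates of the two arms of the 50/50 split. Google surfaces this in the interface, but as a rough sketch of the underlying math (the click and conversion numbers below are made up for illustration), a two-proportion z-test looks like this:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, clicks_a, conv_b, clicks_b):
    """Two-sided z-test comparing the conversion rates of two campaign arms."""
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (conv_b / clicks_b - conv_a / clicks_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: the original campaign converts 200/5000 clicks (4.0%),
# the draft converts 260/5000 (5.2%)
z, p = two_proportion_z_test(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the difference clears the usual 0.05 threshold; with thinner traffic the same percentage lift often wouldn’t, which is why the tool makes you wait for significance rather than eyeballing the split.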
Google has a detailed guide to setting this up, which is the best resource to follow.
The tool has given us the freedom to rethink the way we manage an account and engage our clients. We can now sit down with a client and come up with a set of hypotheses to test. These hypotheses are designed to align with the client’s future goals and push performance boundaries. Clients are involved in a fully transparent decision-making process: they can follow it from the formulation of the question and hypotheses through to the results.
Experiments also provide a safe environment for implementing and testing new features. When account performance has been strong, we are often hesitant to rock the boat. But we still have to try new features. Take, for example, the recent introduction of machine learning features and tools in Google Ads, such as automated bidding strategies and responsive ads. Handing the keys over to an ML algorithm can be daunting. Although ML can provide incremental performance improvements, there is a risk that these algorithms will underperform and account performance will suffer. Experiments let you minimize these risks through testing.
What we tested
In consultation with our client, we conducted a series of experiments, run continuously throughout the year. Some of the main hypotheses we tested were:
- Automated bidding (Maximize Conversions) will generate more conversions than manual bidding.
- Automated bidding (Target CPA) will deliver better conversion volume performance than what we currently get with manual bidding.
- A more granular campaign structure based on SKAGs (single keyword ad groups) will increase the campaign’s Quality Score
- Responsive display ads will deliver better CTR than static banners
- Responsive search ads will deliver better CTR than expanded text ads
- A new, less cluttered landing page will deliver better conversion rates
- A new landing page with a different hero image will provide better conversion rates
- Ad copy with a question rather than a statement in the first headline will provide better CTR
- Showing ads at a lower position will provide a better conversion rate
- A 20% bid increase on desktop only will improve conversion rate
Note that the hypotheses are specific: each tests a single outcome and uses a specific metric for evaluation.
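A side benefit of hypotheses this specific is that you can estimate up front how much traffic an experiment needs before the chosen metric can deliver a verdict. As a rough illustration (the baseline rate and target lift below are hypothetical, not figures from this account), the standard sample-size formula for comparing two proportions:

```python
from math import ceil

def clicks_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate clicks needed in EACH arm of a 50/50 split to detect a
    conversion-rate lift from p1 to p2 at 5% significance (z_alpha = 1.96)
    with 80% power (z_beta = 0.84)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Hypothetical example: detect a lift from a 4% to a 5% conversion rate
print(clicks_per_arm(0.04, 0.05))
```

The smaller the lift you want to detect, the more clicks each arm needs, which is one reason long-running experiments on a single, well-defined metric are more practical than trying to read several metrics at once from a short test.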
In addition to campaign experiments, we also ran several “ad variation tests”. These differ slightly from campaign experiments in that they can run across multiple campaigns. They’re beyond the scope of this article, but we highly recommend running them as well.