Microsoft Advertising Experiments Takes Cue From IT For Ad Campaigns

New Microsoft ‘Experiments’ Feature Lets Brands Test Ad Campaigns

by Laurie Sullivan, Staff Writer @lauriesullivan, July 23, 2019


Microsoft, with its long history of teaching businesses how to test IT changes across their enterprises, has developed an Experiments feature for its advertising platform that will allow brands to test campaigns in a similar way. The feature should roll out in the near future.  

Experiments provides a duplicate environment in which to test changes, similar to the way IT professionals validate changes to enterprise platforms. The feature lets marketers monitor the impact of changes by creating a duplicate version of a campaign, much as IT professionals stand up a separate server running the same software to test changes before moving them into production.

Experiments makes it possible to run an A/B test to determine the impact of an update without affecting the original campaign.

Changes that marketers can test include ad copy, such as messages and calls-to-action; landing page URLs, to determine whether different landing pages result in better performance; and bidding strategies and modifiers, to test bid adjustments or allocate a percentage of a campaign's budget toward a smart-bidding tactic.

Microsoft Advertising worked with numerous brands during the initial pilot. Performics Media Director Brian Hogue said Experiments helped the agency act on the results of an automated bid-strategy test.

In a blog post, Subha Hari, senior program manager and editor, and Piyush Naik, principal program manager at Microsoft Advertising, explain the steps involved in using the new feature: start with a clear hypothesis for the test cycle; go to the Experiments tab and select the campaign; name the experiment; enter a start and end date; and set the percentage of the original campaign's daily budget and ad traffic to allocate to the experiment.
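Microsoft has not published how its traffic allocation works under the hood, but a percentage-based split of this kind is typically done by hashing each visitor into a stable bucket. The minimal Python sketch below illustrates the general technique; the function name, user IDs, and hash-based assignment are hypothetical, not Microsoft's implementation:

```python
import hashlib

def assign_to_experiment(user_id: str, experiment_split: float) -> bool:
    """Deterministically route a share of traffic to the experiment copy.

    Hashing the user ID yields a stable bucket in [0, 1], so the same
    user always sees the same variant for the life of the experiment.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to [0, 1]
    return bucket < experiment_split

# Example: a 50% split, the starting allocation Hari and Naik recommend.
traffic = [f"user-{i}" for i in range(10_000)]
experiment = [u for u in traffic if assign_to_experiment(u, 0.50)]
print(f"{len(experiment) / len(traffic):.1%} of traffic routed to the experiment")
```

The deterministic hash matters because it keeps each visitor in the same arm across sessions, which prevents the two halves of the test from contaminating each other.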

Hari and Naik recommend setting the experiment split at 50% and running the experiment identical to the original campaign for the first two weeks, effectively an A/A test, before introducing the changes to be compared in A/B mode.

They stress the importance of validating that performance between the two halves does not differ in any statistically significant way before moving on to A/B testing.
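The blog post does not prescribe a specific statistical test, but a standard way to run that validation is a two-proportion z-test on the conversion counts from each half. The Python sketch below shows the idea; the conversion numbers are invented for illustration:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))               # two-sided p-value

# Hypothetical two-week A/A results: original campaign vs. its 50% copy.
p_value = two_proportion_z_test(conv_a=412, n_a=10_250, conv_b=398, n_b=10_180)
if p_value < 0.05:
    print(f"p = {p_value:.3f}: halves differ; investigate before A/B testing")
else:
    print(f"p = {p_value:.3f}: no significant difference; safe to begin the A/B test")
```

If the A/A phase shows a significant difference, the split itself is skewed, and any subsequent A/B result would be unreliable until the cause is found.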

MediaPost.com: Search Marketing Daily
