One of the most interesting sessions I attended at Adobe Summit this year examined how to build an optimization program across a series of different brands or divisions under a single organization. Large companies often run conglomerates of businesses, some of which may even occupy similar spaces in the marketplace. The core challenge for these companies is how to manage testing and optimization programs across multiple brands while maintaining governance over activities, assets, and brand messaging within different regions or markets. One session at Summit offered some best practices for doing just that.

A fundamental best practice underscored in the session is standardizing an effective testing process. Although tests among brands or regions and divisions may differ, the methods for executing a test, evaluating its success, and communicating the results should be similar. It is no longer sufficient to test brands under a single organization separately, which requires extensive post-test analysis of disparate data to piece together cross-organizational results. Conducting multibrand testing within a single interface streamlines the process, ensures that critical steps are not overlooked, and minimizes the potential for errors in the results. For example, the quality assurance (QA) phase of your test design is a critical component of every test you conduct. Minor bugs in the test design can yield inaccurate results, which can skew the decision-making process. In a sense, QA for a test rollout can be more rigorous than for a product rollout, because errors cannot be tolerated given their potential impact. Test designs are fragile environments, and errors in experiences or in tracking results can skew your reports.
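To make the QA phase concrete, part of pre-launch validation can be automated with simple sanity checks on a test's configuration. The sketch below is purely illustrative; the configuration schema, field names, and specific checks are assumptions for this example, not a feature of Adobe Target or any other product.

```python
# Hypothetical pre-launch QA checks for an A/B test configuration.
# The configuration schema and the individual checks are illustrative.

def qa_check(config):
    """Return a list of human-readable problems; an empty list means pass."""
    problems = []
    variants = config.get("variants", [])
    if len(variants) < 2:
        problems.append("a test needs a control and at least one variant")
    split = sum(v.get("traffic_pct", 0) for v in variants)
    if split != 100:
        problems.append(f"traffic split must total 100%, got {split}%")
    if not config.get("success_metric"):
        problems.append("no success metric defined; results cannot be evaluated")
    if not all(v.get("tracking_tag") for v in variants):
        problems.append("every variant needs a tracking tag, or reports will skew")
    return problems

config = {
    "variants": [
        {"name": "control", "traffic_pct": 50, "tracking_tag": "exp1-a"},
        {"name": "new-hero", "traffic_pct": 50, "tracking_tag": "exp1-b"},
    ],
    "success_metric": "conversion_rate",
}
print(qa_check(config))  # an empty list: this configuration passes
```

Checks like these catch exactly the kind of minor design bug, such as a missing tracking tag, that would otherwise silently skew reports.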

Another best practice outlined in this session involved aligning people across the organization and across brands, and establishing an experienced core team that can manage the program, prioritize efforts, and offer guidance and strategy. This context and collaboration across company-wide campaigns and global teams become much easier within a unified interface, such as that provided by Adobe Marketing Cloud, in which assets, data, tests, and results can easily be shared and monitored. Before recent advancements in cloud technology, dispersed departments would often conduct siloed testing, with the ability to communicate their learnings limited to a handful of cross-organizational meetings designed to review the results.

In today’s multibranded business, test optimization requires an optimization “steering committee.” This core team can manage and evaluate the learnings from all of the brands and divisions, as well as “steer” them on the right course based on optimization best practices, revenue performance, and the company’s overall business direction and brand goals.

In this Summit session, the speaker outlined how their initial testing program was getting good results but was failing to publicize those results throughout the company, which eroded stakeholders’ trust in the program. Trust is one of the most important factors in a successful program, so a process was put in place to validate test results and to align the company around its optimization activities.

The process begins with project norms: decisions the main stakeholders make about the program, such as how many tests will be run, how the product teams will be supported, and what types of tests will yield the most ROI. Next, ideas are gathered from throughout the company. Although the ideas may be disconnected or siloed, taking ideas from within the company and turning them into tests that show the benefit of the testing process helps to build confidence and trust in the program. Prioritizing test ideas relies on a cost-benefit analysis to determine which will be the most effective in terms of execution and results.
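The cost-benefit prioritization step can be sketched as a simple scoring function: rank each idea by estimated benefit relative to estimated effort. This is a minimal illustration of the idea; the field names, 1–5 scales, and example ideas below are all assumptions, not a methodology described in the session.

```python
# Illustrative cost-benefit scoring for test ideas.
# Field names, scales, and sample ideas are assumptions for this sketch.

from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    expected_impact: int   # estimated benefit, 1 (low) to 5 (high)
    execution_cost: int    # estimated effort, 1 (low) to 5 (high)

    @property
    def score(self) -> float:
        # Higher impact and lower cost rank first.
        return self.expected_impact / self.execution_cost

def prioritize(ideas):
    """Return ideas sorted so the best cost-benefit ratio comes first."""
    return sorted(ideas, key=lambda i: i.score, reverse=True)

ideas = [
    TestIdea("Homepage hero copy", expected_impact=3, execution_cost=1),
    TestIdea("Checkout redesign", expected_impact=5, execution_cost=4),
    TestIdea("Footer link color", expected_impact=1, execution_cost=1),
]

for idea in prioritize(ideas):
    print(f"{idea.name}: {idea.score:.2f}")
```

Even a rough ratio like this makes the trade-off explicit: a cheap, moderately impactful test can outrank a high-impact test that is expensive to execute.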

Once the program has been outlined and ideas generated, tests are designed and validated to ensure proper design and accurate results. From here, the optimization roadmap is finalized and the tests are implemented. For each test there is a separate process that consists of test planning and design, development and QA, communication of the test to stakeholders, a final test, a live testing period, and finally analysis and additional answers. The key to the individual test process, and the process as a whole, is built-in accountability and ownership at each step: specific people are responsible for the integrity of the program and the process at every stage, ensuring buy-in throughout the organization and better execution of the program.
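The per-test process with per-step ownership could be modeled as an ordered checklist, where each stage must be signed off by its named owner before the next can begin. The stage names below follow the sequence described above; the owner roles and the workflow structure are hypothetical, added here only to illustrate built-in accountability.

```python
# Sketch of a per-test workflow with built-in accountability:
# each stage has a named owner and must be signed off in order.
# Stage names follow the article; owner roles are hypothetical.

STAGES = [
    ("planning_and_design", "test strategist"),
    ("development_and_qa", "developer"),
    ("stakeholder_communication", "program manager"),
    ("final_test", "QA lead"),
    ("live_testing", "analyst"),
    ("analysis", "analyst"),
]

class TestWorkflow:
    def __init__(self, name):
        self.name = name
        self.completed = []          # stages signed off so far, in order

    def sign_off(self, stage, owner):
        expected_stage, expected_owner = STAGES[len(self.completed)]
        if stage != expected_stage:
            raise ValueError(f"expected stage '{expected_stage}', got '{stage}'")
        if owner != expected_owner:
            raise ValueError(f"stage '{stage}' is owned by '{expected_owner}'")
        self.completed.append(stage)

    @property
    def is_live_ready(self):
        # A test may go live only after the final pre-launch check passes.
        return "final_test" in self.completed

wf = TestWorkflow("homepage hero test")
wf.sign_off("planning_and_design", "test strategist")
wf.sign_off("development_and_qa", "developer")
```

Because each sign-off names a responsible owner and enforces the step order, skipping QA or going live early fails loudly rather than silently, which is the point of accountability at each step.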

The final concept examined in this session was embedded execution: an organizational culture in which all program participants adhere to a standard process. With tools like the new Adobe Target interface, involving more members of the organization in optimization is easier than ever. Rather than test design and communication being confined to individual slices of the larger company, the new user interface allows colleagues to participate in the testing process while being governed and coordinated by the core team.

Posting content, reports, concepts, and notifications within a unified environment and architecture allows the sharing of ideas and knowledge gleaned from across the organization’s brands. The lessons from successes in one part of the organization can be infused into another, creating embedded execution, where the benefits of all activities in the program are leveraged across the entire organization.

Many businesses talk about goals such as coordinated collaboration, automated personalization, and master marketing profiles as a sort of future state. However, as we saw at this year’s Summit, these features and capabilities exist in today’s marketing organizations. They’re easy to pick up and use in concert for better, faster, smarter optimization and personalization, allowing company-wide optimization programs to become a reality.