First, let’s bust two common myths:

  • You do not have to build a structured data layer to use a tag management system like Adobe’s Dynamic Tag Management.
  • You may benefit from a structured data layer whether or not you use a tag management system.

Confused? Heard something different? Let’s look under the hood and find out.

Webpage tags are used with many systems and platforms in digital marketing, analytics, and application development. The use of these tags for marketing is growing so quickly that it’s hard for large companies to manage these technologies effectively and efficiently. Tag management systems (TMS) like Adobe’s Dynamic Tag Management (DTM) are being adopted rapidly as executives start to understand the strategic advantages of using a TMS to manage these technology components, the same components that enable vital strategic programs like testing and optimization, content personalization, remarketing, retargeting, campaign optimization, customer feedback, and more.

Of course, these “tags” are often nothing more than a way to collect data about our readers, prospects, and customers. This is the data that allows us to successfully measure, analyze, improve, and control our digital initiatives. To be effective, the people, processes, and systems that use and move this critical data from the website to the various endpoints along the chain need to be efficient and consistent. This is our data supply chain, and data collection is the first set of links in the chain.

Data Collection

In software development and in various Web standards, it’s common to separate complex systems into different layers. This is nothing more than splitting the pieces into tiers that relate to each other in different ways. For example, in an HTML document it’s common to separate and think of the HTML code as a “structural” layer, the style rules of CSS documents as a “presentation” layer, and the functions of JavaScript code or tags as a “behavioral” layer.

Using DTM can give us efficient control and great power over each of these “layers,” and to fully leverage this power and control, it’s important to consider the data collection layer, the first links in our data supply chain. The data collection layer simply consists of the data we care about in our page elements, visitor actions, application states, and events in our websites and other digital environments. These elements, actions, states, and events generate the data that feed our Web analytics tools, our remarketing platforms, our digital campaigns, and our other digital investment opportunities. This is our metadata, the information building blocks we all need to collect, manage, and manipulate in order to report on, analyze, and optimize our online businesses.

How we implement and manage the collection of this data has a significant impact on the value we can earn from our digital investments over time.

Page Elements and Visitor Actions

Webpage elements make up the first part of our data collection layer. Page elements are simply the text, images, and other components in our webpages: our markup, code, and other digital resources like images or videos. Page elements help us answer questions like “how many people clicked on the new hero image on the home page during the recent holiday promotional campaign?” The homepage hero image is the page element of interest here, but of course the image itself isn’t nearly as interesting as the number of clicks it received. To capture the click events and send those event counts to our Web analytics or other systems, we first need to identify the right image before we can register or count the click event.

Although this is an overly simple and common example, identifying other page elements and visitor interactions with those elements can sometimes be a bit more involved. Identifying and selecting specific page elements is sometimes called traversing the Document Object Model (DOM) and is often done with JavaScript or jQuery code. The DOM is basically an org chart or “tree” of the different elements in a webpage.
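
For example, selecting a specific element with plain JavaScript or jQuery might look like the following sketch; the id and class names here are hypothetical:

    // Plain JavaScript: walk the DOM to the element of interest.
    // "hero" and "promo" are hypothetical id/class values.
    var heroImage = document.querySelector('#hero img.promo');

    // The jQuery equivalent of the same selection.
    var $heroImage = jQuery('#hero img.promo');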

Once we identify and select elements, we can then capture or “handle” visitor interactions with those elements and send metadata about those elements and interactions to various tools or systems, like Web analytics tools, voice-of-customer/survey tools, or third-party remarketing or retargeting systems. The good news about capturing data by traversing the DOM is that it can be easy, and it can be even easier with DTM: simply use the dropdown identifiers and CSS selectors for your page elements, and you’re done.
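
As a rough sketch, registering a handler and capturing the interaction might look like this; the selector is a hypothetical example, and the console.log call stands in for whatever analytics call your implementation uses:

    // Register a click handler on the selected element (hypothetical selector).
    document.querySelector('#hero img').addEventListener('click', function () {
      // In practice, this is where the event would be sent to analytics,
      // survey, or remarketing systems; console.log is only a stand-in.
      console.log('hero image clicked');
    });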

[Screenshot: DTM Conditions]

The potential bad news here is that the HTML markup of many large websites is often poorly formed, invalid, or difficult to access using common DOM traversal and selection methods. This method of data collection can also be fragile and can break when pages are redesigned or content is updated; when the markup of the page (or application) changes, our data collection has to change in sync to remain consistent. If the markup changes and no one changes the data collection, we could end up with inconsistent reporting and issues in analysis and validation that are difficult to troubleshoot and correct.

Yes, jQuery can make DOM traversal easier, but it won’t help us obtain the src value from an img element with a specific id attribute if a developer deleted it from the page with the last release.
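
A minimal sketch of that failure mode, with a hypothetical element id:

    // Easy with jQuery, but silently fragile: .attr() returns
    // undefined when "#promo-img" no longer exists in the markup.
    var src = jQuery('#promo-img').attr('src');
    if (src === undefined) {
      // Nothing is sent downstream, and reporting quietly breaks.
    }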

Visitor Actions and Applications

On most websites today, the line between “pages” and “applications” is blurry. Although websites used to have static pages of text and image content that linked to more static pages of text and images, we now have a much more dynamic experience online. Pages, text, images, and videos shrink or expand in response to screen sizes and device types (responsive design). Full applications now run completely in our browsers instead of on our desktops (Gmail, for example).

Clicks, swipes, opens, likes, and other interactions that readers, prospects, and customers have with our Web content and applications can often be captured as described above, using DOM event handlers registered to specific page elements. Capturing events and interactions that happen when a visitor interacts with a Web application component or feature can be more difficult than capturing simple text or image content interactions, depending on the application design. For example, it’s common to capture text submitted in a form and send it to our Web analytics, CRM, or other systems. It’s also common to use JavaScript to validate or process the form input itself. Capturing form or other application data through the DOM can be challenging, depending on the specific implementation method used, especially as the application code and/or JavaScript in the page executes and interacts with other parts of this behavioral layer.
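
As an illustration, capturing a form value at submit time might look like this sketch; the form id and field name are hypothetical:

    // Capture a field value when the visitor submits the form.
    // "signup-form" and "email" are hypothetical names.
    document.querySelector('#signup-form').addEventListener('submit', function () {
      var email = this.querySelector('input[name="email"]').value;
      // Hand the value off to analytics, CRM, or other collection code here.
    });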

How Can We Make This Easier?

Unique id Attributes

One basic way to improve our data collection process is to make DOM traversal and selection easier. Adding a unique id attribute to each unique container element in our page markup or application code can really help improve the efficiency and effectiveness of any data collection implemented using DOM traversal methods.

For example, we might have a slider in the hero image location on a key landing page. This is typically a large image that slides, rotates, or otherwise changes every few seconds. It’s also common for the hero container element to be marked up as a <div> or <section> in the HTML. Adding a unique id attribute to this container <div> or <section> can make it much easier to identify the elements of interest within the container and to enable data capture for visitor interactions with those elements. <div id="hero"> or <section id="hero"> is one example. This makes DOM traversal easier simply because we can start at the container element with the id, instead of starting higher up in the markup or code.
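
With that id in place, traversal can start at the container instead of the top of the document; the slide class names below are hypothetical:

    // Start at the uniquely identified container...
    var hero = document.getElementById('hero');
    // ...then select descendants relative to it (hypothetical class names).
    var activeSlide = hero.querySelector('.slide.active');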

Custom Data Attributes

Front-end developers also use another method to add metadata to pages and applications. Adding custom data attributes to individual page elements like paragraphs, sections, images, or div containers is just like adding a “label” to these elements.

Adding these custom data attributes helps us identify specific elements of interest in our pages and applications. In our hero image example, the original business question involved a holiday promotional campaign. When planning and deploying the image assets for this campaign, the developers could easily add a custom data attribute to each image, allowing the individual promotions to be linked to visitor interactions with those image assets. Standard markup for these images could be <img src="http://example.com/image1.jpg" width="600" height="250" alt="Image1">, and adding our data attribute simply means adding data-campaign="holidaypromo" to the markup.
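
Reading that attribute back out is then straightforward; this sketch assumes the markup above:

    // <img src="http://example.com/image1.jpg" width="600" height="250"
    //      alt="Image1" data-campaign="holidaypromo">
    var img = document.querySelector('img[data-campaign]');
    var campaign = img.dataset.campaign; // "holidaypromo"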

Although this approach can be effective, it does require more careful planning than the unique id additions on container elements. Because this involves adding metadata to individual elements and not just unique container elements, it requires more thought and planning to ensure consistent taxonomies and implementation across our sites. Some also consider it a more fragile method of adding metadata, especially if the communication within and between different Web teams and business units is not timely and managed consistently.

A Data Collector or Data Object

As pages and Web applications are planned, developed, tested, and deployed, we can ensure that the metadata we want to capture is within the page, screen, or application view. By presenting the exact data we want to capture, at the exact time we want to capture it, we enable one of the most robust, accurate, and consistent forms of data collection currently in use.

[Screenshot: JavaScript data object]

Two methods commonly used to implement this capability are JSON values and JavaScript objects with properties and values in the markup. In either case, this just means we are surfacing the appropriate values at the appropriate time so our data collection code can pick them up and send them to the appropriate system in a very efficient, effective, and consistent manner. Most front-end developers are familiar with this approach and can usually implement it as part of their existing development work. Again, it’s important to plan and document the data we want to capture before development begins, or at least early in the development process.
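
A minimal sketch of such an object, with hypothetical property names, surfaced in the markup before the collection code runs:

    <script>
      // Hypothetical data object rendered into the page for collection code.
      var pageData = {
        pageName: 'home',
        campaign: 'holidaypromo',
        visitorType: 'prospect'
      };
    </script>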

Back-end developers who work with CMS templates and other server-side code can also help with this type of metadata. If the CMS can be programmed to dynamically populate the markup of our pages, screens, and views with metadata that might otherwise not be available client-side, we can have all the required metadata available in the right place, in the right format, at the right time for data collection.
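
For example, a CMS template might render that same object with server-side values; the templating placeholders below are purely illustrative:

    <script>
      // Values filled in server-side by the CMS template (illustrative syntax).
      var pageData = {
        pageName: '{{ page.name }}',
        author: '{{ page.author }}',
        category: '{{ page.category }}'
      };
    </script>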

Better Data Enables Better Decisions

In practice, we usually see and use all of the data collection methods mentioned above in combination. Few large-company websites have built out a complete data collector or data object model with all the data they want to capture from their webpages and applications. Whichever data collection strategies we choose, adequate planning, documentation, and timely communication across teams can go a long way in helping us ensure that the first link in our data collection supply chain is a strong one.

The People Factor

Scraping the DOM, selecting custom data attributes, and working with data objects are just three methods of working with the metadata within our “data layer.” In practice, any one of these implementations may be too time consuming, too expensive, or too fragile for a particular team to consistently implement and manage over time. The processes, politics, and people in big organizations typically have a greater effect on the degree of success with one or more of these methods than anything related to the particular technology in question.

The Road to Standardization

Like most things on the Web, new technologies and techniques can start out as bleeding-edge, gain wider adoption, and eventually become “standards” or “best practices.” The use of a structured data object with a specific syntax for object names, property names, formats, and value types is a long way from “standard,” but there has been a good start. The Customer Experience Digital Data Community Group hosted by the W3C has put out two reports on its data layer work. The Digital Data Layer 1.0 “Final Report” details the group’s work toward eventually standardizing a data layer format using a JavaScript data collection object. The Customer Experience Digital Data Acquisition draft details its work toward specifying the parameters for communicating this data to digital analytics and other tools or systems.
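
To give a flavor of the proposed format, a heavily abbreviated object following the report’s digitalData naming might look like this; treat the exact values as placeholders:

    <script>
      // Abbreviated sketch of the CEDDL 1.0 root object and naming.
      var digitalData = {
        pageInstanceID: 'HomePage-Production',
        page: {
          pageInfo: {
            pageID: 'Home Page',
            destinationURL: 'http://example.com/index.html'
          }
        }
      };
    </script>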

Both reports, and all the work by this group, represent an excellent effort by many individuals and help move the conversation forward when discussing data layers and the tools and systems that use the data. However, a standard is only as good as its adoption: if no one complies, or only a few comply, the standard loses much of its value. But this work is still an excellent starting point.

Using DTM with or without a Data Layer

The really good news is that you can use DTM today regardless of where or how your source data exists on the Web. In DTM, there are several ways to identify, select, and capture metadata from webpages and applications. Data Elements can be an easy and useful way to capture metadata, regardless of where that data exists in the page. Any time there are values we’ll refer to more than once within DTM, we should definitely consider creating a Data Element to represent and persist those values within DTM.
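
Once defined, a Data Element can also be read from custom code through DTM’s _satellite object; the element name here is a hypothetical example:

    // Read a DTM Data Element from custom page or rule code.
    // "pageName" is a hypothetical Data Element name.
    var pageName = _satellite.getVar('pageName');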

Page Load, Event Based, and Direct Call rules can also make it easier and more efficient to identify, select, and capture metadata, whether or not you decide to use Data Elements. DTM is flexible, so it’s easy to use the system regardless of where your data layer elements exist.
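
Direct Call rules, for instance, are fired explicitly from your own code via _satellite.track(); the rule name below is a hypothetical example:

    // Trigger a DTM Direct Call rule from application code.
    // "hero-click" is a hypothetical Direct Call rule name.
    _satellite.track('hero-click');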

In a future post, we’ll look at specific ways to do this with DTM.
