Adobe DITA World 2018 – Summary of the sessions of Day 2 – by Danielle M. Villegas

Hello everyone! My name is Danielle M. Villegas, and I’m the “resident blogger” at Adobe DITA World 2018.

It was another smashing day at Adobe DITA World 2018! As on the first conference day, the event was hosted and moderated by Adobe TechComm Evangelist Stefan Gentz, with Matt Sullivan as co-moderator. While it was rainy and gray outside my window all day, it was bright with learning and conversations all about DITA! The day included more about integration with Adobe Experience Manager, case studies, intelligent content, and more about the benefits of using DITA!

Unifying Marketing and Technical Content

Succeeding in the Age of Accountability


Day 2 started out with a keynote presentation by Sarah O’Keefe of Scriptorium, who talked about how marketing communications and technical communication are currently siloed, but can work together if they are properly aligned. She pointed out that customers see your content as a single entity, but we know that it’s not even close to that. It’s usually disjointed because different departments do their own things.

The “gatekeeper” of content—namely the cost of distribution—is generally gone. Years ago, it was different with print, but the Internet eliminated that. The distinction between pre-sales and post-sales content is also fading, due to the lack of gatekeepers. As a result, consumers have the control now, as they can decide what they want to read and when. That includes looking at technical content before the sale instead of after it. In some instances, consumers will look at alternative third-party content, which can sometimes be better than the corporate content.

The solution is pulling all the pieces together. Too many silos are reflected on the authoring side, especially if you have too many platforms. Marketing content and technical content are not the same, but they are more different than they should be. We need content alignment, and we need to think about how content is being delivered out of multiple places. Content alignment needs to come together at the points of design, terminology and usage, taxonomy, search, links, information, timing, and localization. Terminology is often a corporate style issue, while information, timing, and localization are the strategic elements that most need focus to make this alignment happen with the right tools.

To bring about content alignment, you need to build out a content model and implement that model, or variations of it, in each authoring system—this is a content strategy. You need to align design for content consumption by defining UX standards, building publishing layers to meet those standards, and defining a linking strategy. This is difficult to do! Terminology and usage need to be defined in style guidelines and distributed across all models. Corporate taxonomy needs to be defined by building out a subset of the taxonomy for each content type and using that taxonomy to support connections across content. Creating a search system is not easy either: you must establish search parameters, map search terms across different websites, and potentially create a federated search. Common challenges with content alignment include duplication of effort, competing priorities, different functionality in each tool, link management, and lots of copying and pasting.

However, Adobe Experience Manager (AEM) is a valid solution to solving most alignment issues, as it is built for marketing purposes and has “XML Documentation” – Adobe’s DITA CCMS – built-in to support technical content using DITA.

The advantages include having one infrastructure, a single repository, a unified presentation, incremental publishing and translation, and increased content velocity. The disadvantage of going with only one solution is that you get both the best and the worst of that tool, but the advantages outweigh the disadvantages: with one tool and tighter integration, there are simply fewer problems.

The case for unified content comes down to the idea that customers make buying decisions based on all of your content; they hold power, not you. As a result, we must bring content into alignment with how customers use it.

Sarah will post some resources, including AEM and DITA sources, on the Scriptorium site. She also mentioned Learning DITA Live 2019 coming in February, and the free DITA training at www.learningdita.com.


Adobe Experience Manager as an Authoring Platform

A Mitel Perspective


Chander Aima and Ramabhadran P of Mitel gave the next talk about how Mitel moved towards using Adobe Experience Manager as an authoring platform for their company. Mitel is a global leader in business communications and collaboration, including unified communications solutions and services in the telephony space.

Mitel’s main issues stemmed from their setup. There was no centralized repository for source control. Authors were using multiple authoring tools and different structures and templates across various product documentation sets. Reuse of content across documents was low and laborious, and they primarily delivered PDF documents to their customers. Disparate systems and processes made standardization a challenge, and writers had to be cross-trained on the various authoring tools. This all resulted in disjointed branding.

Mitel knew that the solution was finding a new content management system (CMS), so they looked for an integrated component content management system (CCMS) that could be installed on a private cloud yet could be accessed by all, and would provide for the migration of legacy content into DITA. They also wanted to ensure that they could future-proof the variety of outputs they would produce from the same source, with the ease of customization for those outputs. Lastly, they wanted the ability to reuse content within or across products. They wanted a CCMS that would allow the use of their existing authoring tools and would simplify their translation workflow. Ultimately, their goal for the legacy authoring environment was to go from many tools and environments down to one.

Their solution was two-fold. First, they migrated legacy content to DITA using automation. Workflows using Word, HTML, and both structured and unstructured FrameMaker STS files could be converted to DITA. RoboHelp files were converted to MadCap Flare files, and from there into DITA XML. FrameMaker and AEM’s out-of-the-box capabilities needed a little bit of tinkering to handle the Word, FrameMaker, RoboHelp, Flare, and HTML files, but converting 100 pages took only a one-day effort. Once files were in DITA, the FrameMaker Add-on for XML Documentation for AEM produced a customized output, and other plug-ins were developed to enable a variety of outputs. Mitel also used the DITA Open Toolkit (DITA-OT), an open-source solution that helped with customizing outputs, offered robust error reporting, and generated output fast. The main issue with DITA-OT was that it required an XML developer to customize, maintain, update, and troubleshoot the plug-ins.
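To give a sense of what that DITA-OT plug-in work looks like, here is a minimal, hypothetical plugin.xml sketch. The plug-in id and transtype name are invented for illustration and are not Mitel’s actual code; only the element names come from the DITA-OT plug-in mechanism.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of a custom DITA-OT output plug-in (illustrative names only) -->
<plugin id="com.example.html5.branded">
  <!-- Depend on the built-in HTML5 plug-in so we can extend it -->
  <require plugin="org.dita.html5"/>
  <!-- Register a new transformation type that inherits from html5;
       XSLT overrides and CSS would hang off further extension points -->
  <transtype name="html5-branded" extends="html5" desc="Branded HTML5 output"/>
</plugin>
```

Even a small file like this has to be written, versioned, and debugged by someone comfortable with XML and the toolkit, which is exactly the developer dependency Mitel described.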

On the other hand, they also had FrameMaker Publishing Server (FMPS) to help with output generation. Seamless integration with AEM allowed for customized, structured FrameMaker templates and provided a shorter learning curve, with multiple outputs produced from the same .sts files. Mitel felt that FMPS’s output speed was slower compared to DITA-OT and that its error reporting was less robust, but both were improving.

In summary, the ease of migrating legacy content to DITA using AEM workflows, AEM’s built-in XML editing tools, and seamless integration with FrameMaker meant a shorter learning curve and faster orientation for writers. Supporting multiple rendition engines, DITA-OT and FMPS, gave them the most flexibility. Thanks to the ease of customization with FMPS, the rendition engine could be tailored to generate output in various formats in a matter of a few days.

The Q&A discussion talked a lot about the need for DITA-OT developers to help with the customizations, and there aren’t many out there. But in the end, it was worth the cost to automate so many of their processes!


DITA for Marketing. And why it is a good idea.

How marketers can leverage the power of DITA


In the third session of the day, Juhee Garg of Adobe talked about how DITA is ripe for marketing communications now. She started off by saying that in a recent Adobe survey, 50% of respondents used DITA and 30% were moving to DITA. Not many respondents were in the marketing space, but now the time is right in marketing!

Trends have been changing rapidly, and there is a need to adapt to new trends quickly. This is reflected in the shift in content consumption patterns. First, with the World Wide Web, content needed to be interactive, interlinked, and searchable. Then came social networks, and with increasing numbers of people on social media, information had to be small, chunkable, and snackable so that it could be shared easily. Smartphones make up 70% of internet traffic, so mobile content had to be more precise and squeeze into a smaller space. Now, we are contending with augmented reality (AR) and virtual reality (VR), as well as chatbots, IoT, and other interactive platforms that use marketing information.

There’s a growing need to personalize the customer journey. As the customer moves through the journey, different content needs to be served through different channels. The bottom line is that you have to serve different content at different phases, and different people are involved at different stages of the journey as well, based on the context of where the user is coming from. Another emerging trend is self-service content. While this is not a new concept, more and more consumers rely on it now. Users want to chart their own trajectory and find what they are looking for, and searches with links aren’t necessarily enough. A chat, manned by a person or a bot, can surface information faster than reading, as can assistants like Alexa, Siri, or Cortana.

Adaptive content is the need of the hour: content that adapts to the needs of end users to present them with the right content. It must be semantically rich and structured. The content “knows what it knows”: the meaning of the content is carried within the content itself.

Contextual information can be a product image, a product overview, a data sheet, or tech details tied to specific owners or platforms, for example. Other tagged attributes could be device, audience, use case, and specific geography. The idea is to create content once but deploy it everywhere, with different device variations and different deliverables based on role or geography, as well as for different stakeholders, users, and phases in the customer journey.

Whether DITA is your solution depends on how strong your need for content automation is. If you need to produce automated, personalized deliverables, like emailers to different audiences that can be updated automatically, that is something DITA can do.

Tech docs are no longer only post-sales material, as more technical content is slipping into marketing docs now. You want content integrity between both sides. The trick is finding a common format that works for both groups to enable seamless sharing. Most marketers don’t think DITA is for them, but it really is. By simplifying the schema, you can constrain DITA, or use Lightweight DITA, to remove elements that aren’t relevant for marketing authors. It’s a matter of simplifying the authoring experience. The AEM editor is like any rich web editor: you can still customize and add most elements you want without having to think about schema restrictions.
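For a sense of how small that simplified authoring surface can be, here is a hedged sketch of a Lightweight DITA (XDITA) topic a marketing author might write. The topic id, product name, and key name are invented; the element set is the minimal one LwDITA allows.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative XDITA topic; names are invented, not Adobe content -->
<topic id="product-teaser">
  <title>Acme Widget Pro</title>
  <body>
    <p>Meet the fastest widget we have ever shipped.</p>
    <!-- The tagline is pulled in by keyref, so marketing and
         tech comm can share one source for it -->
    <p><ph keyref="product-tagline"/></p>
  </body>
</topic>
```

The point is that an author sees only titles, paragraphs, and phrases; the schema restrictions and the reuse machinery stay out of sight.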

AEM is used by marketing for their websites through Assets (DAM), Sites (WCMS), Forms (adaptive forms), and XML Documentation (DITA). AEM is part of the Adobe Experience Cloud, which also includes Audience Manager, Analytics, Target, and Campaign (email marketing). The entire ecosystem supports DITA as an XML format, so a marketer is working in XML already, but transparently. This provides users with a unified enterprise CMS with DITA support.

Juhee took a moment at this point to demonstrate AEM, which is totally web-based. She showed the DITA map and its documentation in AEM, then showed how to push topics into marketing pages and make changes. Because everything is authored in DITA, you can easily pull fragments into marketing content and publish as needed.


Mixed Marriages …

Using Adobe FrameMaker and Adobe InDesign together in Technical Communications


Charles Cooper of The Rockley Group was the next presenter, and he spoke about how Adobe FrameMaker and Adobe InDesign can work together in technical communications. Charles started off his talk by explaining what intelligent content is, which is the back-end content strategy that consumers don’t know is going on. He recommended looking at the two types of content strategy for more information.

Intelligent content has five main characteristics, which are that the content needs to be modular, structured, reusable, format-free, and semantically rich.

Going into more detail on those characteristics, Charles broke it down this way:

  • Modular content is componentized content designed for reuse (topics, tasks, concepts, etc.).
  • Structured content is content designed to be both human- and machine-readable, like readable metadata, and futureproofing content to some extent.
  • Content reuse involves reusing existing components to develop new content products.
  • Format-free content carries no embedded formatting information, allowing content to be prepared automatically for any device.
  • Semantically-rich content includes metadata that describes what the content is, what it’s about and more.

The challenges include business silos and tool silos, and the fact that time is money, and it’s not getting easier to find either of them. To paraphrase Charles, it’s never going to be all unicorns and sparkles. He quoted author Douglas Adams: “People are a problem.” They are resistant to change and don’t always want to adapt.

Several groups can work together well for content sharing, but it depends on who the natural allies are for each group, and on finding where the common “pain points” lie between them. Marketing, business, content creators, sales, translation/localization, publishing, maintenance (those who maintain the content later), customer service, and even users and customers can all provide feedback on how to improve the content, and use the content, as their needs layer over each other.

Structured content is key to making this happen. Both FrameMaker and InDesign can produce content in structured and unstructured ways, although that doesn’t always happen, of course.

The typical workflow for creating a project in both tools is to create an outline, then create the content and apply visual styling, followed by review and approval (which can happen several times in the process, and too often dwells on the visual design rather than the correctness of the content), then publishing, and finishing with another review and archiving.

When comparing FrameMaker and InDesign, both are highly powerful and capable tools: robust, able to create and manage long documents with ease, document-focused, and historically oriented toward text-heavy content. The main difference is the audience: technical communicators use FrameMaker for manuals, while marketers and designers use InDesign.

The key to working with shared documents, shared topics, and components, and retrieving those parts and assembling them, is planning ahead. You can’t and shouldn’t “wing” it, especially in documentation. Documentation is now part of the product, so you need to have it designed.

Be aware of your corporate or departmental culture. Technology is easy; culture is hard – and strange, and more confusing the deeper you look into it. But you’re going to have to deal with it. “One size fits all” rarely does. There are challenges—it’s a paradigm shift. It requires a change of focus: topics instead of documents, futureproofing content, ensuring content accuracy rather than obsessing over appearance, looking at customer needs instead of internal needs, and sharing rather than hoarding. We need out-of-the-box thinking!

There are no bad tools on the market these days; all have their strengths and weaknesses, so match your needs against their capabilities. Do an analysis before you choose a tool, starting with the tools you have, and do it one step at a time. Don’t try to do everything at once! Change is inevitable, and training will be needed. Pitfalls to watch for include personal ownership of content, putting personal or departmental needs ahead of customer needs, creating content for one particular output or so specific to a particular need that it isn’t reusable, and unwillingness to apply metadata to content as it’s created.

Intelligent content provides guidance. You can share content between departments using different tools, but you have to plan for it. You have to take many things into account, such as a shared vision, corporate and departmental culture, and acceptance of out-of-the-box thinking. Planning is essential, as success hinges on it becoming part of the business culture—not an add-on or a top-down edict. It takes time and forethought, but it yields better quality, faster approvals, corporate sharing, and happier customers.

FrameMaker’s strength is the rigidity of its structure, whereas InDesign’s is its flexibility in design. InDesign also uses its own XML-based format (IDML), and an InDesign author could easily learn to work with the XML and make the adjustments. Charles himself often uses InDesign more for output publishing.


From Challenge comes Opportunity

Why BlackBerry’s Enterprise Documentation team made the move to XML Documentation for Adobe Experience Manager


Marco Cacciacarro, Senior Technical Writer at BlackBerry, presented another case study on why BlackBerry’s technical writing team migrated to AEM. BlackBerry does its best to be on the bleeding edge with its content management and documentation. Marco works within the Enterprise Documentation team (not the phones!).

It started with BlackBerry’s transition away from its smartphone business towards being more of an enterprise IoT company. The transition had a significant impact on writing teams who were used to a heavily customized CMS and websites built from scratch; the structure they relied on no longer existed. Since they lacked control over their primary tools, the natural solution was to find a new CMS that was easy to use and maintain, offered the necessary features to integrate with their processes, provided options for customization with very little work required, and gave them complete control over how they offered content on a new website. They also wanted closer alignment with BlackBerry.com: BlackBerry’s web management team already used AEM, as part of a larger initiative to bring all company websites into AEM. After attending a demo of XML Documentation for AEM, Adobe’s solution stood out as an answer to their situation. After a two-month proof-of-concept trial, the team moved forward with building their new AEM CMS!

The challenge was that the team could not rely on IT to help with the migration. To mitigate this, they set a realistic timeline and formed working groups, each with an assigned task: installation and configuration, CMS migration, localization, web migration (web redirects and modifications), UX and transform design, and training. They also developed relationships with other teams that could help them, including the web management team, the web infrastructure IT team, and business solution experts for executive support. They planned for a lengthy test migration—similar to the POC trial but at a larger scale—making sure to report, record, and prioritize every issue. They involved as many team members as possible through demos, working meetings, discussions, and assigned tasks, developing team familiarity with the tool over time. They also felt it was important to plan for a self-serve migration process, as writers are best suited to migrate their own content; having only a few people handle migration would limit opportunities to learn. This let them develop a step-by-step migration process that everyone could follow, so the whole team could become comfortable with the new system.

Due to the sheer scale of the migration, they found they had to limit migration to content for active products and the most recent versions, and to evaluate their current offerings and streamline them based on effective reuse, document length, necessity, duplication, and deprecation of the content. They followed an aggressive timeline to implement in less than one calendar year and broke the project into several phases to ensure it could be done while balancing their regular workload. They took advantage of tech skills outside their team, as they needed help with a scripting solution and with PDF and AEM site output formats. They reached out to other groups in the organization, like a developer co-op student who created and tested excellent scripts, and to the web management team for work on output formats. Good relationships with other teams gave them access to essential skills.

Adapting their processes to the new tool was an opportunity to scrutinize those processes. Collaboration was key: testing a feature, coming up with options, demonstrating to the working group, and discussing until reaching consensus. The collaborative approach made them confident in their decisions.

Technical setbacks are almost a certainty—test, test, test! Specific, clear test cases, developed with enough testing time, allowed them to catch and resolve blockers before they became a problem. They learned to be flexible and readjust plans by deciding what was critical and gating versus what was merely nice to have.

Adobe played an active role in their migration project with bi-weekly touchpoint meetings and dedicated forums. Adobe was open to suggestions for future improvements, which created a mutually beneficial relationship and a high level of engagement that made a big difference.

The feature highlights that BlackBerry finds most valuable include a clean XML editor, content review tools, map building and output, intelligent versioning, and the interface with AEM web authoring.


Conversational DITA

Using DITA for Chatbots and Voice Assistants


Joe Gollner of [A] and Kristen Toole of Adobe kicked off the next-to-last session of the day, talking about Conversational DITA and how it can be used for chatbots and voice assistants. Chatbots and voice assistants were a very hot topic at last year’s DITA World event, so hearing about Adobe’s approach, with Joe’s insights, was going to be interesting.

Kristen led the discussion by saying that chatbots and voice assistants are at a tipping point, and that adding them to your library of channels changes things. Kristen was tasked with building the Adobe business case for bots. She found that despite robust SEO efforts, customers continue to struggle to find content. In fact, there was a 46% increase in support chat volume from Q3 2016 to Q4 2017. Customer communication preferences are changing. To serve Adobe customers better, they needed to stop relying on static self-help content as the only mechanism to deflect support calls. Bots are not yet as widespread as people think, but that doesn’t mean you can’t start figuring out the best way to get them going. There’s a need to keep innovating instead of always having one-to-one discussions. The next generation of creatives are their target audience, and they don’t like phone calls, so there’s a need to stop depending on static self-help pages. For customers, conversational interfaces offer contextual, proactive, and personalized support. The goal was to ensure users have an easy, efficient, and delightful experience solving issues with their Adobe apps and services. For Adobe employees, a conversational interface can amplify their voices, automate repetitive tasks, and easily collect timely and contextual data to reduce problem resolution time.

A small but ambitious pilot was launched using conversational snippets and links to HelpX articles. They couldn’t tax or change the current authoring infrastructure, so AEM fragments or DITA weren’t an option. The content was written in a spreadsheet and rendered to JSON manually through a custom encoder. It was a big success, which prompted the next phase.
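To make the spreadsheet-to-JSON step concrete, here is a rough sketch of what such an encoder might do. The column names, intents, and JSON shape are invented for illustration; Adobe’s actual encoder was not described in the session.

```python
import csv
import io
import json

def rows_to_bot_json(csv_text):
    """Turn spreadsheet rows (intent, snippet, helpx_url) into a
    chatbot-ready JSON document. Field names are illustrative only."""
    responses = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        responses.append({
            "intent": row["intent"],       # what the user is asking about
            "text": row["snippet"],        # the conversational snippet
            "link": row["helpx_url"],      # pointer to the full HelpX article
        })
    return json.dumps({"responses": responses}, indent=2)

# Hypothetical sheet contents exported as CSV
sheet = """intent,snippet,helpx_url
reset_password,You can reset your password from your account page.,https://helpx.example.com/reset
install_app,Download the installer from the Creative Cloud desktop app.,https://helpx.example.com/install
"""

print(rows_to_bot_json(sheet))
```

The pain point Kristen described follows directly from this shape: every snippet and every metadata value lives in a cell with no validation, which is what the move to Conversational DITA was meant to fix.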

Writing for this kind of content and technology is a new area of specialization. It was determined that Lightweight DITA might work for the Adobe bot strategy, which also needed to avoid creating a content silo, using oversimplified authoring tools, cutting off access to review and localization workflows, and arduous QA steps. With further analysis, the content had to become more conversational through DITA. DITA could be adapted into “Conversational DITA”, built for system sustainability, interoperable metadata, authoring ease of use, and responsiveness to change, with chatbots, voice assistants, and a knowledge base as the outputs.

The authoring experience would need to make it easy for a writer to write short conversational snippets for bots, make applying intent and entity metadata foolproof, and check that content and metadata are valid before they get picked up for the build. (No more Excel sheets!) Conversational DITA is intended to optimize the authoring experience to produce the best, most cost-effective chatbot content possible. An authoring environment, preview services, reusable content, management services, a chatbot emulator, reporting services, and metadata resources all feed into that authoring experience.

Chatbot content begs for reuse, with dialog components, terminology, references, and snippets from documentation sources. Chatbot interactions quickly shift from fairly formulaic elements to highly specific details about the product feature or process in question. If content components and metadata units are kept simple, then reuse is made easy and intuitive with the resulting content being manageable and processable even at scale.

Quality issues often turn up too late. Writers write and testers test, but what happens when content doesn’t appear at the right time? With such a manual writing process and a lack of real-time metadata checks, this was a real headache for Adobe’s testers and reviewers, not to mention their writers. Integrated feedback helped to enhance quality. The reuse orientation of Conversational DITA is an important aspect of the quality process: reusing content components, dialog components, and metadata units that are known to be valid is a step in the right direction.

Chatbots want to be specific—applicability, an awareness of what content is about, is a key objective. Adobe needed to scale the metadata model in a way that keeps up with the cadence of customers coming up with new problems worth solving, especially with products and offers that evolve at least as quickly. A simple and scalable approach to metadata is metadata resources: the data comes from external systems, which in turn control it, and is intended to be useful to downstream processes. Metadata must be reusable across content sets, from source systems, and by downstream processes. Metadata resources must be like any other content component—portable and reusable.

What about voice assistants? As with chatbots, but even more so, there is a lot going on behind the scenes to create a satisfying experience. Speech Synthesis Markup Language (SSML) is what’s used to format content for voice assistants. Voice-compatible content is a specialization that supports variant content treatments (analogous to localization), specializes interaction maps and dialogs, incorporates just enough SSML, and leverages reusable content components and reusable metadata units, all published as consumer-consumable content.
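“Just enough SSML” can be pictured with a small fragment of the kind a voice-ready rendition might emit. The wording is invented; only the SSML elements themselves come from the standard.

```xml
<!-- Illustrative SSML output for a voice assistant (invented wording) -->
<speak>
  <p>To reset your password, open the
     <emphasis level="moderate">Account</emphasis> page.</p>
  <break time="500ms"/>
  <p>Then choose <say-as interpret-as="characters">PIN</say-as> reset.</p>
</speak>
```

Pauses, emphasis, and spell-out hints like these are exactly the variant treatments that a voice specialization layers onto content the chatbot channel already uses.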

Chatbots and voice assistants push us to move more quickly down the path of intelligent modularity and genuinely breaking free of legacy publishing paradigms. They force us to focus on what is most important: usability, interoperability, precision, scalability, and adaptability—synonyms for simplicity.

As for the use of DITA: DITA is rightly understood as an application, but it also embodies architectural features and practices that address fundamental needs in managing, reusing, and publishing content. Conversational DITA focuses on the latter and leverages the former where it fits.

For the enterprise, look at the bigger picture—the master content model, the master semantic model, and how it all fits into the intelligent content lifecycle.

Looking forward, Adobe is doing more small pilots and bots, attempting to use natural language along with Conversational DITA strategies.


Stop Making Manuals!

Make the move from manuals to information products


Wim Hooghwinkel of iDTP ended the day by imploring us to stop making manuals and start writing information products instead!

Wim made the argument by explaining how DITA components can be used in various ways to create more than just manuals: other, more customer-friendly outputs can be built from the same foundational content, broken up into “blocks,” with selected components pulled and rearranged to form a new type of documentation.

Wim showed several animations of how other components were added to the main content core, which he called the “actual content.” Common content would be placed either at the beginning or end of the actual content, and would include metadata to enrich it, along with other front matter (like the title, product name, and introduction). The actual content would be product-specific; within that, you can have project-specific content. Shared content, which could also be included, would be used to enrich the content you already have—images, terms, statements, and instructions, whether project-specific or product-specific. Some of this can be template-driven.

Content can be text, images, objects, etc. that can be named, renamed, reused, and reassembled in several configurations. A unit can be a topic or a folder, or be seen as a project: a DITA map, a folder structure in a CMS, HTML navigation, a portal structure, or a topic. The actual content is the part that needs to be written, whereas the common content and shared content are the reusable parts. Global content and generic content work into this, too. Treat each type of content differently and store it separately: global content, common content, shared content, and generic content.

To help better differentiate the types of content, Wim broke it down this way:

  • Global content is used as-is, in any context. It may have variable parts, but these are independent of where it’s used. An example would be something like a copyright statement or legal content: created once, with controlled maintenance and a single instance, and used to build end-user products.
  • Common content is made of reusable topics, with variables filled in by keyrefs.
  • Shared content also employs reuse in other parts and components. It’s not used directly, but is a content source for keyrefs or conrefs, used in multiple places to build content topics.
  • Generic content is global content with context-based variables, created and maintained separately, and used dynamically.
  • Actual content describes the product to the end users. It’s the main content with context.
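The keyref and conref mechanisms behind these content types can be sketched in a few lines of DITA. The topic ids, key names, and file name below are invented for illustration; only the attribute syntax is standard DITA.

```xml
<!-- Shared content: a warehouse topic that is never published directly
     (hypothetical file shared-warnings.dita) -->
<topic id="shared-warnings">
  <title>Shared warnings</title>
  <body>
    <p id="hot-surface">Do not touch the housing during operation.</p>
  </body>
</topic>

<!-- Actual content: pulls the warning in by conref and a
     product name in by keyref (resolved through the map) -->
<topic id="install-guide">
  <title>Installing the <ph keyref="product-name"/></title>
  <body>
    <p conref="shared-warnings.dita#shared-warnings/hot-surface"/>
  </body>
</topic>
```

Swapping which keys and conref sources a map resolves is what lets the same blocks be rearranged into different information products.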

Once content has been classified, the next step is to create deliverables. Wim used his animation with all the content components and showed how different combinations produced different deliverables.

What’s next? DITA will help build those blocks in a dedicated scheme. By creating the “blocks,” you aren’t just writing manuals; you can create multiple information products by choosing and arranging the blocks. Wim also showed how this could work with a chatbot configuration as well.


Summary

And that’s a wrap! It was a full day, and now we are two-thirds of the way through our conference! Tomorrow’s lineup looks like it will be equally as exciting.

Danielle M. Villegas

Danielle M. Villegas is a technical communicator who has most recently worked with the International Rescue Committee (IRC), MetLife, Novo Nordisk, and BASF North America, with a background in content strategy, web content management, social media, project management, e-learning, and client services. She is also an adjunct instructor at NJIT, and has her own consultancy, Dair Communications. Danielle is best known in the technical communications world for her blog, TechCommGeekMom.com, which has continued to flourish since it was launched during her graduate studies at NJIT in 2012. She has presented webinars and seminars for Adobe, the Society for Technical Communication (STC), the IEEE ProComm, the Institute for Scientific and Technical Communication (ISTC)’s TCUK conference, and at Drexel University’s eLearning Conference. She has written articles for Adobe, STC Intercom, STC Notebook, the Content Rules blog, The Content Wrangler, and InSyncTraining as well. You can also follow Danielle on Twitter: @techcommgeekmom
