Adobe DITA World 2017 – Day 3 Summary by Danielle M. Villegas

Hello again, everyone! Your Adobe DITA World 2017 “resident blogger” Danielle M. Villegas is here again. I hope you liked my summaries for Day 1 and Day 2 of Adobe DITA World 2017.

Again, Adobe TechComm Evangelist Stefan Gentz and Adobe Solutions Consulting Manager Dustin Vaughn opened the room and welcomed the audience. At 9 am Pacific Time we started with our Day 3 keynote speaker, Toni Ressaire!

The day started with an informal poll. Stefan Gentz asked the audience how they were accessing the conference in Adobe Connect. It looks like many access Connect through both the browser and the plug-in app. Many ways to connect! We had another full day with very active Chat Pod discussions in Adobe Connect, and we learned so much more about DITA and how it can be integrated into what we all do.

Before anything else, there was a change in the middle of the schedule today. Val Swisher, CEO of Content Rules, Inc., was unable to present her talk, “The Holy Trifecta of Content – Combining structure, terminology management, and translation to achieve success,” today, as she has a home in one of the areas affected by the severe wildfires in Northern California and had to attend to the situation there. Lucky for all of us, Val will be rescheduling with Adobe in the near future to present this webinar on its own. Having attended another version of this particular talk of Val’s, I can tell you that you will be in for a real treat, and it’ll be worth the wait. We hope everything works out okay, Val!

Dustin Vaughn, one of Adobe DITA World’s co-hosts and the Solutions Consulting Manager for Adobe TechComm, stepped in at the last minute with a great, interactive talk called, “From Chaos to Collaboration – Connecting the dots in DITA Projects,” which participants enjoyed, and is described in more detail below. Thanks for stepping in on short notice, Dustin!

Now, on with the excitement of Day 3!


[Keynote] Toni Ressaire: “Dungeons & Dragons for Marketing & TechComm – Contextualization and Molecular Content in the Information 4.0 World”


Toni Ressaire, CEO at pub.ink / Université de Strasbourg, started her talk by explaining why she is interested in AI, VR and chatbots. Her interest started out with a course in the VR world about 10–12 years ago, but people weren’t that interested at the time. But now, people are!

We know customers read marketing content before they buy, but content matters in the aftermarket too, so it’s all one content strategy.

How do they determine context in the gaming industry? The example Toni used is Dungeons & Dragons, which originally started out as a board game but evolved into online games. In Dungeons & Dragons, players roleplay characters and develop a story of their own creation. Looking at this online game helps us understand how playing the game can frame how writing for chatbots, VR, and AI can work. It’s all about contextual content – a look at the mono-dimensional world versus the multi-dimensional world. User journeys are going to be more complex, and users will expect to make their own decisions.

How can we begin to develop a content delivery model that prepares us for future machine conversations? There are several elements that are needed, specifically:

  • Context

  • Personas

  • Molecular content

  • Semantics

  • Mindmaps

  • Chatbot conversation building example

The user journeys here are called scenarios, which are a series of personal experiences based on the evolving context of the user’s path of actions and choices.

How do you write content in an unpredictable environment? The key is building blocks of intelligent content. The context of the content is not static, but an observation of a particular time, space, and intent – what does the user want to do?

In a game, you, the user, start the game, but the starting point is up to you. You decide how to take the journey. You are given a goal and some hints on how to get to that goal, but no other directions. It’s important to know your location, so you know where you are in the process. You might take some side trips along the way, or you might want to explore a bit and see what else is out there.

How can we predict the next context? We can figure that out through research. In AI, sensors will detect the elements of each person’s context. If you would like further information about this, Toni recommended reading the material on Stephan Sigg’s website. Sensors are already used in industry: data is gathered and given to the people who need it to make decisions. Eventually, sensors will be sensing us directly – but we’re not there yet.

Software examples of this are emerging. We’ll be figuring out levels of experience with it, such as whether the user is an expert, novice, or first-time user, or if the user is a writer, developer, or project manager.

Designing for context now involves some imagination in creating the user journey or possible scenarios, defined personas, and information offerings.

To break it down more, here are the elements that are needed as you write up your contextual content:

  • Personas – Semi-fictional characters based on real users. You can create them by being specific, and including market research and real data. Personas help us begin to define the context.

  • Molecular content – Traditional content is linear. Molecular content is intended to be read by machines and delivered in multi-dimensional contexts. It allows the user or a machine to amalgamate content to fit a specific context. It’s the smallest coherent unit of information, combined with other units to build a larger piece of content.

    • If you’re using DITA, you might already have molecular content, but you might have to change it going forward to be more personalized.

    • Keep in mind that content is not linear in intelligent content.

  • Anticipating the user journey – Try to work out the possible scenes, places, situations, and where the user may go. Again, you need to use your imagination and information already identified in personas. Try to anticipate various paths ahead of time.

Building conversation with users and bots involves all these elements and more – personas, molecular content, semantics, mindmaps, context, and testing based on user response. Toni showed us one tool she uses, Twinery, to help imagine and test scenarios; it does not include the communication layer for bot interaction that understands tasks and responds appropriately.

We still tend to write for a whole subject. We still have a document mentality, when we need a molecular content mindset instead. This will help us build better semantic content to take us into the future.

(Danielle’s Note: Toni’s talk reminded me of Jay David Bolter’s talks about hypertext theory which I read about in graduate school. It also reminded me of a game related to The Hitchhiker’s Guide to the Galaxy that the BBC made available online a few years ago that somewhat follows a little bit of what Toni’s talking about here. It’s a little crude, but you’ll get the main feel of what she’s talking about by playing this game (even if you don’t know the book or movie). Play the game, if you can!)

Click on the Slide Title to download the full presentation (PDF):




Eliot Kimber: “XSLT Magic Tricks with DITA and FrameMaker”


Now, I will admit, this talk from Eliot Kimber, Principal Solutions Architect at Contrext Solutions, was very deep-level, programmatic stuff that mostly went over my head. Additionally, Eliot talked very fast, making it hard to catch all the details, so hopefully I captured most of what I understood! This is for the diehard, deep DITA users.

XSLT is a standard programming language for manipulating XML documents. It’s a mature standard with lots of tool support, and it allows the same transformations to be used in many environments. It takes one or more XML documents as input, and produces as output one or more documents of any kind, including XML, HTML, plain text, JSON, and so on.

Which XSLT engine should people use? Eliot recommends Saxon or Xalan, although he prefers Saxon as a default, as it supports XSLT 2.0. FrameMaker comes with both Saxon and Xalan. You can upgrade Saxon independently if you want.

Why would you use XSLT in FrameMaker?

  • It’s an alternative to read/write rules. It’s applied before read rules and after write rules, and it can make it easier to adjust aspects of the XML on import and export.

  • It can generate additional outputs. You can use XSLT on export to generate additional outputs like HTML files, reports, etc. It’s a viable alternative to using DITA-OT.

  • You can adjust XML, such as moving elements around or adding elements or attributes needed by FrameMaker. You can group and sort elements (such as glossary terms), add index entries based on markup, adjust text details, and control line breaking in examples (add zero-width spaces or non-breaking spaces).

  • You can validate editorial and business rules. XSLT will allow you to check things not normally checkable with DTDs and EDDs (such as rules for content, co-occurrence rules), check rules not defined in DTD or EDD that reflect local usage, require elements not required by DITA or in a specific order; check rules for IDs, use of keys, etc.

  • Generalize on import / specialize on export – this enables the use of specialized content with a generic DITA FrameMaker app. “Generalization” means transforming specialized elements into one of their ancestor types, and it is always reversible.

To set this up in FrameMaker:

  • From the Structured Application Designer, go to Advanced Settings

  • Specify the XSLT transforms to use for pre-processing and post-processing, or set up a transformations XML file and refer to that from the structured application.

Some XSLT Basics:

  • Style sheets – the style sheet is the transformation’s root document, containing templates and other top-level declarations.

  • Templates – match elements and modes. They apply rules to elements that match and can generate literal result elements.

  • Nodes and Templates – XSLT treats XML docs as trees of nodes made up of elements, attributes, and text nodes. The input document is processed as a tree starting with the document root node, and then templates are applied to nodes. The first template that matches a node handles it.

  • Match Templates and Context – Templates use XPath expressions to match elements (and other types of nodes).

  • Default template – matches any node and applies templates to its child nodes.

  • Context Node – a node that matches a template is that template’s context node. “.” in XPath expression refers to the context node.

  • Applying templates to nodes – within a template, process additional nodes by applying templates to those nodes.

  • Selecting specific nodes – you can select specific nodes with the select attribute of “xsl:apply-templates”

  • Identity transformations – take a doc as input, and produce the equivalent doc as output. This forms the base for most of what you’ll want to do with FrameMaker.
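
To make that last point concrete, here is the classic XSLT identity transform, roughly as it appears in most XSLT references; a FrameMaker-specific transform would layer its own templates on top of this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- The classic identity transform: copies every node of the input
     document unchanged. More specific templates added to the style
     sheet override this default behavior for selected elements. -->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="2.0">
  <xsl:template match="@* | node()">
    <xsl:copy>
      <xsl:apply-templates select="@* | node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```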

XSLT modules organize a style sheet into two or more files. One file is always the top-level module. Modules can import or include other modules; for simple transforms, always use “import.” Don’t put catch-all templates in top-level modules, as they can’t be overridden by imported modules.

“Magic Tricks”

  • First Trick: Make notes into hazard statements – add the transform to the structure application, then try it

  • Second Trick: Move figure titles to the bottom of the figure. DITA content models put the figure title at the top of the figure, but typical print layout puts it at the bottom. It’s a nice feature to have this in the editor. To do this, hack the DTDs, then create a pair of transforms, then update the structure application for each topic type, add pre- and post-processing entries in the settings, and try it. The challenge with this one is the DOCTYPE, as XSLT doesn’t allow dynamic setting of the main result file’s DOCTYPE values.
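
As a rough sketch of what the first trick might look like (the element details here are illustrative, not Eliot’s actual code), a single template layered on an identity transform could rewrite danger notes as hazard statements on import:

```xml
<!-- Illustrative sketch only: rewrite <note type="danger"> as a DITA
     <hazardstatement> on import. A real transform would fill in the
     full messagepanel content model as needed. -->
<xsl:template match="note[@type = 'danger']">
  <hazardstatement type="danger">
    <messagepanel>
      <typeofhazard><xsl:apply-templates/></typeofhazard>
      <howtoavoid>Follow the safety instructions for this product.</howtoavoid>
    </messagepanel>
  </hazardstatement>
</xsl:template>
```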

Many XSLT resources out there! Check things out online, as books are a little too deep.

Click on the Slide Title to download the full presentation (PDF):




Dustin Vaughn: “From Chaos to Collaboration: Connecting the dots in DITA Projects”


As mentioned earlier, Dustin Vaughn, Solutions Consulting Manager at Adobe, jumped in at the last moment to give this presentation. During much of the talk, he had the participants respond in the Adobe Connect chat pod to make this an interactive session.

Often, we have problems trying to get everyone to cooperate towards a goal. Dustin used the example of trying to row a tandem canoe: without cooperation, you don’t go anywhere. When working on projects, technical communicators often have to work with others, most likely subject matter experts (SMEs). SME contributors can include a wide range of individuals with different technical backgrounds, so they each have different preferences for how they contribute to projects. Once you have all this different content in different formats, how do you get it into DITA? Copy and paste is common, but not really the best way. Smart-pasting in FrameMaker is one option.

The bottom line is that it’s an expensive, manual, continual content process. But things change, so there’s a cost to be considered to keep things current. In the content creation process, it’s best to hide the complexity, make it available across platforms, and remove the need to continually transform the content to DITA.

Once content is in your system, you need to review and validate that it’s as right as it needs to be. PDF is one tool that content reviewers can use, and other tools work as well. Sometimes content is shared through email, shared drives, etc. The idea is that multiple people can access it at the same time, with contribution depending on access rights. You want those comments and suggestions! The problems arise with paper-and-ink reviews (printing everything out and using the red pens): the physical process is slower and not really collaborative, and geographically diverse teams make it even harder. Even so, you can also have problems with too many hands on the keyboard. Serious project management is needed to ensure that edits are included in a judicious way.

The optimal review process includes real-time collaboration, simplified collaboration tools for casual contributors, making it easy to manage for the IT department, platform independence, an automated workflow, and an audit trail.

The last important component in the collaboration process is managing the translation and localization. This can be an extremely painful, manual process, and it’s mostly done using external vendors for review. There are many different ways that the content gets to the external language reviewers, and extracting their changes and notes can also be painful, requiring a careful process not to overwrite already approved translated text.

Solutions for this workflow include automating where possible, making sure that content delivered to and received from the translation vendor is in the format they expect, and ensuring that content is approved before integrating the translation back into the source.

Click on the Slide Title to download the full presentation (PDF):




Jang F. M. Graat: “DITA is too complex! Let’s make it easy!”


Jang F. M. Graat, CEO at Smart Information Design, stated that the most common reasons why businesses resist DITA are:

  • Authors are scared of XML – but modern tools like Adobe FrameMaker hide the tags and attributes.

  • DITA has too many elements – each business domain adds their own semantics

  • Does one size fit all? – this comes down to a discussion of minimalism vs. specialization in DITA

Minimalism is defined as being barely adequate, or the least possible. Specialization is about adapting to special conditions. When writing content, specialization is endless, as there is always a growing number of elements.

So what is minimalism? More elements mean less usability. You need to constrain your elements just to what YOU need. Carry only what you need – use different contents for different jobs.

When writing in DITA, constraints can come to the rescue! This can be done by filtering out what is not required and using a minimal set of elements for maximal clarity. You should define the required elements, making sure your authors include essential information. You should also define the structure of your information to improve readability and completeness.

Constraints are DITA’s future, and perfect for every occasion. Constraints live in mod files, which are easy to switch on and off, but still too hard to edit. Sophisticated tools hide complexity, and no tech specialists should be required. We can do this by limiting the available elements without requiring formal DITA constraints. All can be done within FrameMaker options.
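
For readers who haven’t seen one, a formal DITA constraint module (the mechanism Jang’s FrameMaker-only approach sidesteps) is just a .mod file that overrides a parameter entity. This fragment is purely illustrative, not from Jang’s setup:

```dtd
<!-- Illustrative constraint: a DITA constraint module restricts
     content by overriding a parameter entity in a .mod file. Here
     the block elements allowed in a section are cut down to a
     minimal set; entity names follow the DITA DTD conventions. -->
<!ENTITY % section.content
          "(title?,
            (p | ul | ol | note | codeblock)*)">
```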

At this point in the talk, Jang started demonstrating how this is done with FrameMaker. Since there are many ways to configure FrameMaker, he showed us his set-up which was highly simplified.

You can use domain modules as text insets, with conditional text to allow excluding domains. This is easy to configure in FrameMaker’s powerful conditional expression builder. It’s also easy to create constraints using variables to include or remove elements. There is no need to confuse authors: unused attributes can be hidden yet kept. This can be made even easier using ExtendScript to make changes in the EDD.

Jang is in the process of creating a website called www.smartdita.com, so check back at that website for more ideas on how to do these kinds of changes in the near future.

Jang’s approach is not dumbing down the standard, but using only what you need within the current standard in a more sophisticated way.


Jacquie Samuels & Bernard Aschwanden, CEO at Publishing Smarter: “Content is Content: End Users Need MarComm AND TechComm, DITA Can Help Create Both”


This presentation is based on a whitepaper that Jacquie Samuels, Consultant at Publishing Smarter, and Bernard Aschwanden, CEO at Publishing Smarter, wrote together, and you can find the slides on Slideshare. Bernard presented on his own, as Jacquie was unfortunately unavailable to attend.

Content Marketing has overlapped with technical writing. Technical content MUST be part of your marketing. Bernard showed a demo of a QR code to a video of how to use a product to show his point.

The informed consumer does their research first. It starts with problem recognition, then an information search, followed by an evaluation and selection of alternatives, and concluded with the purchase decision through to the post-purchase experience. With much of this, there’s usually some sort of documentation, which can include video, audio, or whatever else helps people learn before they buy.

Content convergence is a matter of mixing the pre-sale, or marketing, documentation with the post-sale, or technical communication documentation. Content creators need to identify the end users early, target for their needs, and provide an always-on dialogue aligned with the end user. Multichannel publishing can provide content in many appropriate formats. Differences do exist, but ultimately, it’s one customer. They should be looking for a seamless and consistent experience from start to end.

Integration matters! Consumers can make informed decisions when content provides everything they need, so they can decide what they want and what they need.

DITA is for different readers, so let’s make it friendly. You can build your own content based on DITA. That way, you can change what customers get to see.

To go further, training and support – in addition to MarComm and TechComm – can create a consistent user experience, providing one user and one set of content. The end user doesn’t know the difference of who wrote it. Again, they just want the seamless experience and consistent content.

The initial output of content could apply to all users, but by using the dynamic content filter, it allows the user to see whatever’s relevant to them. Think of shopping experiences where you narrow down your choices through filters.

Implementation can be done through publishing with Adobe FrameMaker, as it’s integrated out of the box and offers many output choices. You can mix, match, and repeat information to create customized experiences. Using the cheapest available tool is not always cost-effective in the long term, after all! You need to be able to bring all the information together from all aspects. Sometimes customer-created content can even be better than manufacturer content. That shouldn’t happen, as it shows that the marketing information didn’t help and that diagrams alone didn’t do the job, even if both were accurate.

User experience gets lost in TechComm and MarComm. We need to be more vigilant about the customer experience and having the ability to go seamlessly from one point of content to another for your product. Adobe, as a company, concentrates on content creation, which is crucial as a core business asset. Businesses are bought for their software content and the products they make, not just the customer base.

When making your users happier, focus on user documentation that’s actually helpful for the user, not technical documentation. Create content, making sure that the content provides value, is seamless, and not locked away from the public. Make your content a business asset.

Jacquie Samuels and Bernard Aschwanden have also published a whitepaper: The Convergence of Technical Communication and Marketing Communication.

You can download the whitepaper here.

Click on the Slide Title to download the full presentation (PDF):




Joe Pairman: “Give Your DITA Wings with Taxonomy and Modern Web Design”


Joe Pairman, Lead Consultant at Mekon, started by explaining that companies want their content to be effective. But for it to be effective, it needs to be found and viewed in the first place!

Where is content found? Typically, it’s found on the web via a search engine. For one of Joe’s clients, 80% of their page views came from organic search, and he found this to be typical.

What kind of pages do Google or other search engines rank highly?

  • Mobile-friendly sites, not highly designed sites

  • Fast loading sites, not bloated with useless code or irrelevant information. Sites need to be “mobile-friendly.” Does Google rank your website as “mobile-friendly”? You can check it here: https://search.google.com/test/mobile-friendly

  • Alternative pages with extremely simple HTML – parallel pages with hardly anything on them, especially on mobile devices

If fancy visual isn’t important, what is? The content itself, for a start.

What kind of content gets ranked highly and found?

  • Purposeful content

  • Content that matches users’ intent

  • Specifically, one of four intent types

    • I want to know

    • I want to go

    • I want to do

    • I want to buy

  • Two of these probably match what our content is aiming at

This can be achieved by knowing which DITA structures to use, and how.

How do simple search results use DITA structures? They look at the title and at snippets, which are often the first 170 characters of your <shortdesc> transformed to HTML in the meta description. <shortdesc> is a very important chunk of info: SERP link previews tend to use it, and it may improve your ranking in the search results. For site searches, link previews are just as useful. Well-structured procedural steps are important for the “I want to do” intent.
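
Since snippets often take the first ~170 characters of the <shortdesc>, you can sanity-check your own short descriptions before publishing. This small Python sketch (the helper is hypothetical, not part of any DITA toolchain) extracts a topic’s <shortdesc> and truncates it the way a meta description would be:

```python
import xml.etree.ElementTree as ET

def meta_description(topic_xml: str, limit: int = 170) -> str:
    """Extract <shortdesc> text and truncate it like a search snippet."""
    root = ET.fromstring(topic_xml)
    sd = root.find(".//shortdesc")
    if sd is None:
        return ""
    # Flatten the element's text and normalize internal whitespace.
    text = " ".join("".join(sd.itertext()).split())
    if len(text) <= limit:
        return text
    # Cut at the last word boundary before the limit and add an ellipsis.
    return text[:limit].rsplit(" ", 1)[0] + "…"

topic = """<topic id="t1">
  <title>Replacing the filter</title>
  <shortdesc>Replace the filter every three months to keep the unit
  running efficiently.</shortdesc>
</topic>"""
print(meta_description(topic))
```

A check like this makes it obvious when a short description is going to be cut off mid-thought in a search result.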

However, goal-oriented DITA needs help. Each topic focuses on a goal, but how do we analyze? Taxonomy is the key. Taxonomy is a way to keep track of things that are important to your organization, a way to keep track of a name for those things, and a way to indicate some broad relationships for how they fit together. Joe showed what it looks like in categorizing items using taxonomy.

Taskonomy (note the “task” at the beginning of the word) simplifies things by organizing content around the tasks users need to do. The term was coined in 1982, so it’s not a made-up word! Content creators should start with top-level tasks that can bridge roles, tools, and environments, and that may correspond to the usage lifecycle. Deeper in the hierarchy, users need to see content related to their task – not just detailed procedures, but also overviews and scenarios. This may involve re-architecting content towards one user goal or session per page or URL. Focused pages reduce duplication and effort. For example, instead of a comprehensive user guide per product, convert to a single instance of each task, reducing product-specific tasks. Other benefits of taskonomy include top-level tasks that unify the content, and detail pages that can carry properties such as version, task, and module.

Taxonomy also provides the logic for auto-related links. These are more relevant than fully automated related links, and they need far less creation and maintenance effort than manual links.

There is the matter of publishing the content to the web. There are two approaches: either a dedicated DITA portal, or publishing directly to a web content management system (CMS). There are pros and cons for both, but a web CMS typically gives consistency to the branding/SEO, and the ability to show DITA next to related MarComm content. Doing it the “DIY” (Do-It-Yourself) way is very expensive. The XML Documentation Add-on for Adobe Experience Manager solves these problems.

Search engines do face challenges getting users to the right page and finding useful snippets to show relevancy to the users. DITA metadata and inline taxonomy can help search engines.

schema.org is an initiative by the big search engines to create and support a common set of schemas for structured data markup on web pages. The website provides some common help information, such as possible metadata for “Offer” (price, etc.), and how to generate this markup. Tools such as the Schema app can help with manual entry and creating manual relationships, etc. It can also integrate with e-commerce platforms. In DITA, we already classify chunks of content, so for more granular content, be sure to look at chunks.
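
As a concrete illustration (the values here are invented, not from Joe’s slides), schema.org markup is typically emitted as JSON-LD. This Python sketch shows what a generated “Offer” block might look like:

```python
import json

def offer_jsonld(name: str, price: str, currency: str) -> str:
    """Build a minimal schema.org Offer as JSON-LD for embedding in a page."""
    data = {
        "@context": "https://schema.org",
        "@type": "Offer",
        "itemOffered": {"@type": "Product", "name": name},
        "price": price,
        "priceCurrency": currency,
    }
    # In a real pipeline this string would be wrapped in a
    # <script type="application/ld+json"> element on the output page.
    return json.dumps(data, indent=2)

print(offer_jsonld("Example Widget", "19.99", "USD"))
```

A publishing pipeline would fill these properties from DITA metadata rather than hard-coded values.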

To generate schema.org markup automatically from a DITA Source, you need to author in AEM (or any integrated editor), add the metadata, generate output from the map as normal, and publish it.

What does this look like in Google? It could be rich snippets displayed in a desktop search, content in the “carousel” of a mobile search, or images in a mobile image search.

In the end, it’s best to use DITA to shape your information to its purpose. Using taxonomy, and maybe even taskonomy, to define, explore, and manage user tasks can help. Tag your content with taxonomy concepts for internal and external findability. Publish your content to the web, because that 80% of page views from organic search is counting on it! Also, consider using schema.org markup.

Click on the Slide Title to download the full presentation (PDF):




Keith Schengili-Roberts: “Why Agile and DITA Work So Well Together”


Before agile, there was waterfall. This workflow started with requirements and analysis, then proceeded through stages of design, coding, testing, and maintenance. It’s a sequential design process that is still widely used, and any tech writing typically fell between the coding and testing phases – “Just document what’s there.” The majority of documentation teams out there still follow this model, and Keith Schengili-Roberts, Market Researcher and DITA Evangelist, noted that there’s nothing wrong with that.

The problems with waterfall, however, are that it is prone to failure, and does not deal with change or adapting to customer needs gracefully.

Thus, the Agile Manifesto of 2001 was created by a group of software developers. The following points were made to be the mission of this Manifesto:

  • Individuals and interactions over processes and tools

  • Working software over comprehensive documentation

  • Customer collaboration over contract negotiation

  • Responding to change over following a plan

These points mark a clear difference from the traditional waterfall approach to docs.

Agile has several implications for the documentation process. Content creators have to work more closely with developers. Documentation may support broader communication, such as between teams, customers, the audit process, etc. Work cycles are faster, and feedback is more critical. Efficient documentation tools make things easier, like single-sourcing, structured content, and CMS.

In practice, content creators work more closely with developers by providing early feedback on the product as constant change and iterations are in play. That’s a good thing!

Agile is not for every business environment. Pseudo-agile documentation teams are actually common, usually involving a mix of agile and waterfall. Agile thrives in environments where short release cycles are possible, and it appears to be rare in highly regulated environments or heavy machinery industries, for example. It’s usually found in environments where business factors are doing the pushing.

Agile also thrives in the “shaping quadrant” of industry types, namely those environments that are unpredictable, but where expectations of customers can be modified or shaped. Rapid testing of releasable products helps shape the market and customer expectations. Keith pointed out that there is a lot of overlap between firms who utilize Agile and those who use DITA. DITA is clearly popular among agile teams for its structured content use.

The reasons why DITA works well with Agile include the following:

  • Agile and content reuse – no need to rewrite what already exists. Content consistency and single-sourcing are built in. Reuse is a big deal to be more efficient.

  • Topic reuse improves content consistency and is an additional benefit of content reuse.

  • Agile user stories map well to DITA task topics. Scrum-based agile often calls upon user stories to help craft development, often taking the form of various procedures that users will want to accomplish. This fits well with DITA task topic types. A possible DITA-and-Agile best practice for writing tasks emerged: encapsulate the context for a task, describe the expected outcomes for individual steps or the conclusion, and then use concept topics to link between tasks.
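
To show the mapping, here is a hypothetical user story (“As an operator, I want to restart the pump after maintenance”) recast as a minimal DITA task topic; the ID and wording are invented for illustration:

```xml
<!-- Illustrative only: a user story expressed as a DITA task topic.
     Context, steps, and result map naturally onto the story's
     precondition, procedure, and expected outcome. -->
<task id="restart-pump">
  <title>Restarting the pump</title>
  <shortdesc>Restart the pump after routine maintenance.</shortdesc>
  <taskbody>
    <context><p>Perform this task after replacing the filter.</p></context>
    <steps>
      <step><cmd>Close the intake valve.</cmd></step>
      <step><cmd>Press and hold the reset button for three seconds.</cmd>
        <stepresult>The status light blinks green.</stepresult></step>
    </steps>
    <result><p>The pump resumes normal operation.</p></result>
  </taskbody>
</task>
```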

Agile epics are collections of related user stories that comprise the complete workflow for a type of user. From a DITA standpoint, epics can be used to help refine conditional processing for audiences. User types may change during development, but the agility and flexibility of DITA make it possible to change quickly.

DITA best practices advocate focusing on the user. This follows from how user stories map to DITA task topics and lead to an emphasis on core tasks where the user doesn’t have to wade through irrelevant content in order to “get things done.” Similarly, the other main topic types of concept, reference, and troubleshooting are deliberately structured boxes that should make a good technical writer think about what information the user needs.

As technical writers, you become part of the feedback loop. The tight integration of technical writers with development teams opens possibilities starting with early feedback on product development.

DITA topics make documentation project measurement easy. Within a CMS, it’s also possible to track how “done” or completed topics are within a map. Non-structured documents are harder to track due to lack of granularity. The best practice of minimalism reduces waste: genuinely useful information will be available when and where it’s needed, easy to find, immediately useful, and concise and to the point. Separation of content from formatting saves considerable time, as time is spent writing rather than formatting.

Feedback is part of Agile. In fact, documentation feedback is an Agile development requirement. Using DITA greatly reduces the turnaround time of topic-based reviews with SMEs.

Agile documentation and the “definition of done” can be a hot topic. For documentation review to work, it needs to be part of the definition of done, which means the writer needs to be fearless at Scrum meetings and say when something is done and when it’s not. Only document what’s necessary! Track online usage of published documents and prioritize user-favored content; this also fits minimalist writing principles and Lean Agile practices. Short descriptions direct users to content, so writing <shortdesc> elements, already a DITA best practice, pays off in an Agile environment too.
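A minimal sketch of the short-description practice mentioned above; in generated output the short description typically doubles as link preview and search-result text (the topic content is hypothetical):

```xml
<concept id="about-profiles">
  <title>About user profiles</title>
  <!-- The shortdesc orients the reader before they open the topic -->
  <shortdesc>Profiles store per-user preferences and control which
  features appear after login.</shortdesc>
  <conbody>
    <p>Further detail goes here.</p>
  </conbody>
</concept>
```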

DITA was built with multi-channel publishing in mind; it reduces the time and resources that would otherwise be spent on additional tools, and the DITA Open Toolkit (DITA-OT) was designed with exactly this in mind.

Separating content management from authoring is ideal, since the information architects and managers are usually several iterations ahead, planning out future topics to be authored. Creating a map with stub topics that SMEs and writers can fill in could be helpful.
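One way to sketch the stub-topic idea: an architect seeds the map with placeholder topic references that writers and SMEs flesh out during the sprint (the filenames and titles here are hypothetical):

```xml
<map>
  <title>Release 2.0 documentation plan</title>
  <topicref href="install-overview.dita" type="concept"/>
  <!-- Stubs created ahead of authoring; writers fill these in later -->
  <topicref href="upgrade-from-1x.dita" type="task"/>
  <topicref href="api-limits.dita" type="reference"/>
</map>
```

Because the map is just XML, the plan itself can live in version control alongside the topics and evolve iteration by iteration.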

Documentation doesn’t happen by magic. DITA will not solve the problems of an understaffed technical documentation team, especially if writers can’t attend meetings, are falling behind, and so on. We need to help make documentation the “glue” for publications; the solution is to recognize this need up front and allow for it in the overall doc plan.

Click on the Slide Title to download the full presentation (PDF):




Day 3 Conclusions

And that was that! These three days brought a whirlwind of topics, showing thousands of people around the globe the range and depth of how DITA can be used. We had a good time learning and listening to all the presenters, and interacting with each other in the chat pod. It’s always great to learn more about how DITA is evolving, and this year’s Adobe DITA World did not disappoint, with top-notch presenters on hand over three days.

I hope you’ve enjoyed these summaries, and you’ve found them helpful. Thanks, and see you at Adobe DITA World 2018!

Danielle M. Villegas

Danielle M. Villegas is a technical communicator who has most recently worked with the International Rescue Committee (IRC), MetLife, Novo Nordisk, and BASF North America, with a background in content strategy, web content management, social media, project management, e-learning, and client services. She is also an adjunct instructor at NJIT, and has her own consultancy, Dair Communications. Danielle is best known in the technical communications world for her blog, TechCommGeekMom.com, which has continued to flourish since it was launched during her graduate studies at NJIT in 2012. She has presented webinars and seminars for Adobe, the Society for Technical Communication (STC), the IEEE ProComm, the Institute of Scientific and Technical Communicators (ISTC)’s TCUK conference, and at Drexel University’s eLearning Conference. She has written articles for Adobe, STC Intercom, STC Notebook, the Content Rules blog, The Content Wrangler, and InSyncTraining as well. You can also follow Danielle on Twitter: @techcommgeekmom

4 thoughts to “Adobe DITA World 2017 – Day 3 Summary by Danielle M. Villegas”

  1. Thanks for the great writeups, Danielle. You distilled the essence from an information-packed conference!

    Just one small clarification regarding my talk — there would actually be other ways to generate pages with Schema.org markup from DITA, but we have made it very easy and smooth in AEM (by building on the well-designed customization points available in the XML Documentation Add-on).

    Looking forward to Adobe DITA World 2018!
