Scriptorium Publishing

content strategy consulting

New LearningDITA course: Using maps and bookmaps

April 4, 2016

More than 900 people have signed up for our free DITA courses on LearningDITA—thank you! You’ve had a basic introduction to DITA and learned how to write concept, task, reference, and glossary topics. Now you can learn how to collect those topics and establish relationships among them with our newest course: Using maps and bookmaps.

The course and supporting videos were created by a Scriptorium team: Simon Bate, Jake Campbell, and me. The supporting slide deck on DITA maps was created by Pam Noreault, Tracey Chalmers, and Julie Landman.

Flickr: Ducklings by Tom Stovall at Meadowlark Botanical Gardens

This course includes four lessons on:

  • Creating a map
  • Creating a bookmap
  • Advanced DITA map and bookmap concepts
  • Relationship tables

You’ll learn how to create maps and bookmaps in DITA and add relationship tables to these maps. With guided step-by-step practice and on-your-own exercises, you’ll gain hands-on experience and familiarity with maps.
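For instance, a minimal DITA map with a relationship table might look like the following sketch (the file names and titles here are invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<map>
  <title>Widget user guide</title>
  <!-- Each topicref points to a topic file; nesting creates the hierarchy -->
  <topicref href="widget-overview.dita" type="concept">
    <topicref href="installing-widget.dita" type="task"/>
    <topicref href="widget-specs.dita" type="reference"/>
  </topicref>
  <!-- The reltable links related topics without hard-coding
       cross-references into the topics themselves -->
  <reltable>
    <relrow>
      <relcell><topicref href="installing-widget.dita"/></relcell>
      <relcell><topicref href="widget-specs.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```

The nested topicrefs define the navigation structure, while the relationship table generates links between related topics at publish time.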

You’ll also learn about the ways maps can enhance the publication process. Sprinkled throughout each lesson are tips and best practices for publishing your DITA content using maps and bookmaps. There will be more information about publishing in future LearningDITA courses, so stay tuned!

Got any feedback or suggestions for future courses? We welcome any content you’d like to contribute to the ditatraining GitHub repository! Check our project roadmap for information on current and future courses and let us know what else you’d like to see. For notifications about new content on the site, sign up for announcements (you can also sign up during site registration).

Special thanks to our LearningDITA sponsors: oXygen XML Editor, The Content Wrangler, and easyDITA.

Webcast: The technology is the easy part! Leading through change

March 28, 2016

Change is constant in technical communication. Whether dealing with new technology, shifts in organizational structures, or growing business requirements, content creators must be able to adapt. In this webcast recording, a panel of content experts—Jack Molisani of The LavaCon Conference and ProSpring Staffing, Erin Vang of Dolby Laboratories, Sarah O’Keefe of Scriptorium, and moderator Toni Mantych of ADP—answer questions and give advice about dealing with change in the industry.

Some of their words of wisdom include…

Advice for employees and job seekers in tech comm. Job titles are shifting constantly, and the skills required to make it in the industry are expanding. It’s not enough to be just a writer or an editor anymore—you need to add value to your organization by proving you can meet customer needs and solve business problems. Knowing the value of your content and being able to communicate it to your company’s C-level executives will help you succeed. And if you’re just entering the tech comm industry, subject matter expertise, effective use of social media, and a willingness to learn will help you get your foot in the door.

Tips for dealing with change management. Just as change is a given in tech comm, so is change resistance. Lack of information, talent, or interest within an organization can lead to fear of change, so it’s important for managers to lead their teams through it by providing education and modeling confidence. When you’re faced with inevitable changes, sometimes it can be difficult to adapt to them while remaining true to your core values—but avoiding change means you won’t learn anything or make any progress, so it’s better to make imperfect decisions than none at all.

How do consultants fit into the picture? If you’re having trouble communicating your content needs to your managers, or they refuse to listen to you, it might be time to bring in a consultant—especially given the growing trend of companies fostering long-term relationships with outside consultants or contractors. As technology continues to evolve, many organizations are using outside experts instead of investing in training for in-house employees. Organizations are engaging consultants not just for specific projects but for long-term follow-on support. If you’re struggling with change in your organization, a consultant might be the best person to help you manage it.

Do I need a content strategy consultant?

March 21, 2016

Do you need a content strategy consultant? If the following signs are uncomfortably familiar to you, the answer is yes:

  • You have contradictory content across departments. Customers get frustrated when the specifications in product literature don’t match what’s in the sales content they read earlier. They then call support to clear up the contradictions. It’s much more efficient to create the content once and reuse it across departments. Increased consistency and accuracy follow.

  • Content lags product releases, particularly in international markets. Gaps between the releases of a product and its content indicate your processes aren’t nimble enough—especially for the globalized industries of the 21st century. See our premium post about localization strategy (free registration required).
  • Content is not in formats the users want. You need to give customers options in how they engage with your content. And merely switching from PDF to some online format is not the best way to handle that.
    Oh, please. Can’t I solve these problems without a consultant?

    Yes, you can solve these issues on your own. But your chances of success are far greater with help from a content strategy consultant:

    • A seasoned consultant has solved the same kinds of problems for multiple clients. That experience means you get an informed solution. You avoid pitfalls you wouldn’t have anticipated on your own.

      Oh, please! (flickr: Steve Brace)

    • Consultants who have worked with many vendors know which tools and technologies are good fits for particular requirements and corporate profiles. A content strategy consultant can also act as a firewall against less than accurate claims from vendors about solution capabilities. (But be aware of any reselling agreements a consultant has with vendors.)
    • Developing a content strategy requires a dedicated effort. Strategy (and implementation later) cannot be done in the margins of day-to-day work. Often, it’s not feasible to shift internal resources away from existing work to the new strategy. Hiring a content strategy consultant will expedite strategy development without sacrificing resources for existing projects.
    • If a new content strategy requires you to convert source content from one file format to another, a consultant can help you more quickly identify and implement the best migration paths. A good consultant will also keep exit strategies in mind when developing your new processes. It’s smart to have an escape hatch, even if you don’t use it until years later.
    • Change management is the key to any successful project, including content strategy projects. A consultant will recognize the warning signs when people aren’t excited about the project and will have techniques for mitigating change resistance.
    • Often, executives are more receptive to recommendations coming from an outside party—even when the recommendations are identical to what employees have suggested. Is it fair? Not really. But don’t rely on this increased receptiveness to get a content initiative approved. Instead, work with your consultant to hone a strong business case with specifics on the return on investment. A consultant’s project experience can ensure your business case is realistic and compelling.

    So, do you need a content strategy consultant? Contact us.

    QA in techcomm: you need a test bed

    March 14, 2016

    When I first started as a QA tech at a small game company, I was immediately thrown into the QA test bed. It was a place where we could test production-ready content without being interrupted by ongoing development or server restarts. Functionality was well-documented and could be used to test against our users’ bug reports.

    When I started working at Scriptorium, one of my first tasks was to develop a content test bed to run alongside our PDF transform to help improve it. For example, one part of our test bed is a massive thread pitch table. It will readily flow across multiple pages, includes both vertical and horizontal straddles, and has varying column widths. If we run into a problem with large tables, I can run that content through, confident that I’ll be able to reproduce the issue and deal with it accordingly.

    Making a test bed

    A test bed is a useful tool when managing your content. In the context of content strategy, a test bed is a set of topics that represents a broad section of your content. However, a test bed is more than just a sample of your production-ready content; to be truly useful, a test bed should be:

    • Modular: You should be able to add and remove chunks of content from your test set. This allows you to quickly set up use cases to verify your output. You also need to be able to quickly trim your test bed to cut down on processing time, unless a particular test requires a large document.
    • Well-maintained: As your content grows and changes, so too should your test bed. Otherwise, you run the risk of missing critical errors that can creep in when you do your testing.
    • Representative: You should have examples of all major requirements for a particular publication. If you have a table that needs to be formatted in a specific way depending on its context, make sure that you have that example in your test bed. Also keep in mind errors that turned up in the past, so you can keep an eye out for bugs that might creep back into your workflow.
    • Content-neutral: Make the actual content of your test bed generic. While it can be useful to have your actual content in your test bed, it can also make it problematic if you need to hand that test bed off to an outside entity, like a friendly consulting group. If you need to have “real” content in your test bed, make sure that it’s already publicly available or otherwise non-sensitive.
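As a sketch, a modular test bed can be as simple as a DITA map whose branches group test topics by the feature they exercise (the file names below are hypothetical); trimming a run is then just a matter of commenting out branches:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<map>
  <title>PDF transform test bed</title>
  <!-- Each branch exercises one formatting feature;
       comment a branch out to shorten a test run -->
  <topichead navtitle="Tables">
    <topicref href="table-straddles.dita"/>
    <topicref href="table-multipage.dita"/>
  </topichead>
  <topichead navtitle="Lists">
    <topicref href="list-deep-nesting.dita"/>
  </topichead>
  <!-- Regression cases for bugs that have crept in before -->
  <topichead navtitle="Regressions">
    <topicref href="footnote-in-table-cell.dita"/>
  </topichead>
</map>
```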

    Using a test bed

    Once you have a solid test bed in place, it can provide benefits both internally and externally. Internally, if you’re using a platform to generate output, you can run your test bed through it to verify any changes made to that platform. Likewise, if you implement a major change to the format of your content, you can use your test bed to more accurately scope the impact of that change. You can also use it as an example to familiarize new authors with your content. Externally, you have content that you can hand off quickly, without spending time compiling a set of appropriate documents that represent your project needs.

    With a bit of maintenance as your documentation needs change, your test bed can become a versatile and powerful tool when both developing and maintaining your content.

    Translation and the complexity of simple content

    March 7, 2016

    Translating content for foreign markets can be an expensive and time-consuming endeavor. While it’s important to keep costs in check, the critical element to watch is quality. The only sure-fire way to ensure quality in translation is to build it into your source.

    GIGO: Garbage in, garbage out

    That’s not what we meant! (flickr: JD Hancock)

    Developing content is often seen as a necessary evil. It’s easy to justify cutting corners to deliver it quickly or create it using fewer resources. After all, it’s just content, right?

    Not quite.

    More often than not, even technical content is being used in the pre-sales cycle. People want to evaluate a product or service before purchasing, and one of the easiest ways to do that is by searching for information online. Content quality in this case might make or break a sale.

    Content quality is also a major factor in retaining existing customers. People are extremely happy when they can easily find answers to their questions without having to call support. Likewise, the easier the answer is to understand, the better.

    So the idea that content is a necessary evil is a fallacy. Content plays a critical role in attracting and retaining happy customers; it is a business cornerstone, and that’s true in every market. And no matter how skilled your translators are, the quality of their work is a direct reflection of the quality of your source content. To keep translation costs down, you need to make their job simple.

    The complexity of simple content

    Simple content does not mean dumbed-down or bare bones. In this context, simple content means streamlined content.

    • Messaging is clear and concise
    • Words are carefully chosen
    • Content is written in discrete chunks (topics)
    • Those chunks are written once and reused wherever appropriate
    • Content is consistently formatted (or better, tagged in XML)

    In short, the entirety of content development is closely monitored and skillfully performed.

    The effort involved is anything but simple, but the benefits far outweigh the work. All of the heavy lifting is done on the source content side, simplifying the translation process.

    When your source content is clear, concise, and complete, it can be translated easily. When content is written once and reused, translators only need to translate it once. When content is consistently tagged, formatting translated content becomes automatic.
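In DITA, for example, write-once reuse is typically handled with content references (conref); the file name and ids below are invented for illustration:

```xml
<!-- warnings.dita: the single source for a reusable warning -->
<concept id="warnings">
  <title>Shared warnings</title>
  <conbody>
    <note id="voltage-warning" type="warning">Disconnect power before
    servicing the unit.</note>
  </conbody>
</concept>

<!-- Any other topic pulls the note in by reference, so translators
     only ever see (and translate) the single source copy -->
<note conref="warnings.dita#warnings/voltage-warning"/>
```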

    As a result, your content quality is consistent across all languages, your translation costs are reduced, and the translation work is completed more quickly, allowing you to accelerate your time to market.

    When strategy meets the storm

    February 8, 2016

    Perfect weather for flying! (image via Flickr: estudiante)

    Just before the blizzard that crippled a significant portion of the East Coast, I was returning from a business trip. I did eventually make it home, but the return flight included a bonus three-day layover in Charlotte, NC.

    I’ll spare you many of the details, but a few key events and situations really stand out from that trip. The lessons learned are applicable to any corporate strategy, content or not.

    Protocols that look good on paper may not fare well in practice

    Every strategy can be broken down into a series of protocols that need to be followed. Which protocols you follow will change based on the situation, but they all feed into the overarching strategy that drives your business goals.

    Many of these protocols are designed to account for specific situations. However, following them to the letter every time may do more harm than good. A certain amount of discretion is needed to alter protocol to effectively handle tricky situations.

    My flight home began without incident. We boarded on time, and were set to take off with a full flight. After everyone was seated, a flight attendant made an announcement: five people needed to volunteer their seats for additional crew members due to the impending East Coast storm. Volunteers would be given an alternate flight and a voucher toward future travel.

    Caïn by Henri Vidal, Tuileries Garden, Paris, 1896.

    I’m sure this is how everyone on that plane felt… passengers AND crew! (photo: wikipedia)

    As you can imagine, no one came forward. We were all concerned about the storm and wanted to reach our destinations before we were stranded. The airline’s protocol called for volunteerism, and so we sat for an hour waiting for volunteers, which stirred great concern among the passengers and ultimately caused many to miss their connections. Finally, either the airline or the crew made the executive decision to deplane the last people to purchase tickets, board the additional crew, and take off. But by then it was too late; many of us missed our connecting flights and were stuck riding out the storm on an extended layover.

    While the protocol of soliciting volunteers might work in normal situations, the storm should have prompted more creative solutions, or at least a shorter window between asking for volunteers and removing the last few ticket holders from the plane. The crew would have boarded, the majority of passengers would have made their connections, and fewer people would have been stranded. Following protocol to the letter in this case caused more harm than good: customers were upset and stranded (so many that the airline ran out of available hotels), and those stranded customers then needed new flights in a backlog already chaotic from cancellations.

    When load testing meets reality

    Implementing any system requires a great deal of testing and tweaking. But true testing doesn’t begin until you’re using it live and encountering unforeseen or extreme situations. After launching a new system, it’s important to also have failsafes in place when (not if) the unforeseeable happens. Fallback systems are great, but two of the best failsafes to use are human communication and collaboration.

    On the day of my final flight home, the flight situation was understandably a nightmare. The airports on the East Coast were finally re-opening after their storm shutdowns, flights were still being cancelled and delayed due to weather conditions and missing crew, and displaced passengers were very unhappy.

    As I queued in the very long customer service line, I also called the main customer service number. There was an hour wait on the phone, so I opted for a call-back when it was my turn to talk to an agent. Meanwhile I made it to the service counter and began looking for earlier direct flights home. We found one, but it was full. I asked about standby, and after much fiddling with the system, the agent gave up. She could not put me on standby without voiding my later, valid ticket. (!!!)

    I begrudgingly kept my later flight and went off in search of coffee. Then I received a callback from the support line. They confirmed that I actually was on the standby list for the early flight, and that my other ticket was still valid should I not make it on the earlier flight.

    I quickly gathered my belongings and headed to the gate to confirm. Alas, their system did not show me on standby. But this agent worked some voodoo magic and was able to get me on standby and retain my later flight as backup.

    Obviously there was a communication breakdown between the airline’s main system and the airport hubs, and the local agents were left to muddle through as best they could. For some reason, they could not reach anyone to confirm the discrepancies, troubleshoot, or even report a system error. There needs to be a communication bridge between those using a system and those managing it.

    People are your greatest corporate asset

    So in the end, I made it home on that earlier flight. My standby status earned me a seat just before boarding began. It was all thanks to that one gate agent who ensured that their local systems showed that I was both on standby and had a valid later flight if needed. She truly went above and beyond, checking in on me from time to time and even rooting for me as my name climbed higher on the standby list.

    There are details about this horrible trip that I won’t forget. Some are mentioned in this post, and others are best left unmentioned to hopefully fade with time. But what I may never forget is that one gate agent’s actions, from her refusal to let a system glitch prevent her from doing what should normally be doable to the high-five she gave me as I boarded my flight.

    Technology fails happen. Unforeseeable events happen that can shake normal workflows. In fact, I’m sure that other airlines had issues during this storm. But it’s the human to human interaction that can build or destroy a customer’s impression of a company. Empower your workforce to put their best foot forward, and do everything possible to enable them to creatively solve problems when needed. It just might be your only saving grace with an unhappy customer.

    The second wave of DITA

    February 1, 2016

    You have DITA. Now what?

    More companies are asking this question as DITA adoption increases. For many of our customers, the first wave of DITA means deciding whether it’s a good fit for their content. The companies that choose to implement DITA find that it increases the overall efficiency of content production with better reuse, automated formatting, and so on.

    Now, clients are looking for the second wave of DITA: they want to connect DITA content to other information and explore innovative ways of using information. The focus shifts from cost reduction to quality improvements with questions like:

    • How will our content strategy evolve as DITA evolves?
    • How do we make the most of our DITA implementation?
    • How do we tailor our DITA implementation to better suit our needs?
    • What can DITA do for us beyond the basics?
    • What other content sources are available and how can we integrate them with our baseline DITA content?
    • What new information products can we create using DITA as a starting point?
    • How can we improve the customer experience?

    The second wave of DITA can go in two directions. In the apocalyptic scenario, the overhead and complexity of DITA exceeds any business value, so the organization looks for ways to get out of DITA. But if you measure implementation cost against business value before any implementation work begins, this scenario is unlikely. Instead, you can reap the benefits of a successful implementation and start exploring what else DITA can do for your business.

    Will you thrive or wipe out in the second wave? // flickr: jeffrowley

    Extending DITA beyond the basics

    Your first DITA implementation must achieve your objectives with minimum complexity. When the shock of using the system wears off, you can consider new initiatives:

    • Building additional specializations
    • Using advanced DITA techniques to accommodate complex requirements
    • Delivering new output files
    • Refining your reuse strategy

    Integrating with other systems

    In the first wave, organizations usually focus on getting their content in order—migrating to DITA and topic-based authoring, setting up reuse, establishing efficient workflows, and managing the staff transition into new systems.

    In the second wave of DITA, the new baseline is a functioning, efficient content production process, and attention turns to requirements that increase the complexity of the system. For example, a company might combine DITA content with information in a web CMS, a knowledge base, an e-learning system, or various business systems.

    Moving additional content types into the DITA CCMS is only one option to foster collaboration. Organizations can align content across different authoring systems. Another integration opportunity is combining business data (such as product specifications or customer information) with DITA content. Software connectors that allow disparate systems to exchange information are a huge opportunity in the second wave of DITA. You can share information as needed without forcing everyone into a single system.

    Focusing on the business value of content

    The emphasis is shifting. In the first wave, organizations focused on reducing the cost of producing content by improving operational efficiency. In effect, they built systems that reduced or eliminated waste in content manufacturing. In the second wave of DITA, the focus is on the business value of the content. After setting up the assembly line, the organization can build cars, er, content, with more and more features that authors and consumers need.

    Some trends in this area include the following:

    • In localization, a shift from authoring in a single source language toward multilingual authoring. Product expertise is not confined to employees who are proficient in English. If your subject matter expert is most comfortable in Chinese, why not allow her to work in that language?
    • In management, an increasing recognition of the value of good content, and a demand for improvements.
    • In content creation, a greater recognition of the importance of content strategy and an increasing focus on the big picture.

    DITA is a constantly evolving technology, and to get the most value out of your implementation, you must ensure that your content strategy evolves with it. Don’t stop at letting DITA solve your content problems—take advantage of the second wave of DITA and explore the many other ways it can advance your business.

    We had some interesting discussion about the second wave of DITA during our 2016 content trends webcast, and we’d like to continue that in the comments. Are you in DITA and figuring out what comes next? Let us know.

    The cost of DITA specialization

    January 18, 2016

    One of the greatest benefits of using DITA is specialization. However, specialized DITA is more challenging and expensive to implement than standard, out-of-the-box DITA, which is something you should consider before you take the plunge.

    In this follow-up post to Making metadata in DITA work for you and Tips for developing a taxonomy in DITA, you’ll learn about the cost of specialization, and how to decide whether it’s worthwhile for your business.

    Know what’s involved

    Is being a special snowflake worth the cost? (flickr: Dmitry Karyshev)

    You’ve determined that DITA is the best solution for your company’s content, but now you have a choice to make—whether or not to specialize. Specialization means customizing the existing structure of DITA by adding, modifying, or removing elements to suit your needs.

    The first step in your decision should be learning about what’s involved in the specialization process. If you specialize, you will need to:

    • Create a content model, or framework that shows how your content will be structured in DITA.
    • Develop the specialization, including custom DTDs, elements, and attributes.
    • Test the specialization with your content.
    • Make sure that your conversion process, output transforms, and tools work with your specialization (or modify them accordingly).
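At the markup level, DITA specialization works through the class attribute, which records each new element’s ancestry so that standard DITA processing still applies. A hypothetical supplies domain might look like this in an authored instance (in practice, the class values are supplied as defaults by your custom DTD rather than typed by authors):

```xml
<!-- Hypothetical domain specialization: supplyList derives from ul and
     supplyItem from li; the "+" and the trailing space are required syntax -->
<supplyList class="+ topic/ul supplies-d/supplyList ">
  <supplyItem class="+ topic/li supplies-d/supplyItem ">Torque wrench</supplyItem>
  <supplyItem class="+ topic/li supplies-d/supplyItem ">Thread-pitch gauge</supplyItem>
</supplyList>
```

Because the ancestry is declared, any processor that knows how to format a standard list can fall back to treating a supplyList as a ul, while your own transforms can give it special handling.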

    Implementing a DITA specialization will cost more—in terms of both time and expense—than standard DITA. Make sure you account for the added effort of specialization, especially if you’re on a tight schedule or budget.

    Analyze your content

    The structure of your content can help give you an idea of whether or not specialization is the best option for you. As you review your existing content, ask yourself:

    • How is your content structured? Keep in mind that even if your content is currently stored in an unstructured format, it probably still has an implied structure.
    • How closely does your content match the structure of standard DITA? If your content fits within standard DITA except for a few cases, it will likely be more cost-effective to rewrite those pieces of content than to create a specialization to handle them. However, if your structure differs significantly from standard DITA, you can probably make a strong case for specialization.
    • How consistent is your content? It can be tempting to use specialization to accommodate an inconsistent structure with numerous edge cases. But just because you can create specialized DITA doesn’t mean you should—especially if reworking your content to be more consistent is cheaper than specializing around it.
    • What semantic value does your content need? Can you assist your content creators by using element names that are more meaningful to them? If you’re in an industry with very specific language, such as pharmaceuticals, or if your company has a large, complex system of product names and categories, it might make sense to specialize—particularly when it comes to metadata.
    • How will your content be tracked? Do you or your audience need to search for and extract specific pieces of content (for example, a list of supplies from a datasheet for a certain product)? If so, creating a specialization that allows for semantic tagging might be the best (or only) way to accomplish this.

    Estimate the costs

    You’ve determined that your company could benefit from specialization based on the structure of your content. Now you’ll need to evaluate the costs involved in specialization so that you can present a strong business case for it. Here are some costs that could occur when you implement a DITA specialization:

    • Development costs. Do you have people in your organization who have the DITA knowledge and skill it takes to create your specialization? If so, you’ll need to account for their time and effort in your budget, especially if they already have other responsibilities. If not, you’ll need to reach out to an external resource (such as a consultant) or try to hire someone.
    • Conversion costs. Do you have legacy content that you plan to convert to DITA? How much? If you have enough content that you’ll be using a conversion vendor, ask them to estimate how much it will cost to convert your content using your specialization.
    • Output costs. What types of output will you need? How will your specialization affect the development of your output transforms? Depending on the nature of your specialization, your transforms may be more difficult or time-consuming to create than they would be with standard DITA.
    • Tool costs. What kind of support do the content management systems and authoring tools you’re considering have for your specialization? How difficult will it be to manage and update the specialization once your content is integrated? These factors can not only help you estimate the costs, but can also help you choose the right tools.
    • Localization costs. Do you need to translate your content into other languages? If so, keep in mind that the tool chain for any localization vendors you use must support your specialization, which could affect both vendor selection and implementation costs.
    • Testing costs. You’ll need to test your specialization at various stages throughout the implementation process, so make sure to allow for the cost of the time involved.

    Specialization isn’t cheap or easy, and the decision to implement it shouldn’t be taken lightly. However, if it’s the best approach for your content, the costs involved are probably worthwhile. Now that you have a better understanding of the factors and costs of DITA specialization, you can make a more informed decision about whether or not to specialize—and support that decision with a stronger business case.

    Top eight signs it’s time to move to XML

    January 11, 2016

    How do you know it’s time to move to XML? Consult our handy list of indicators.

    Italian version (Alessandro Stazi, January 28, 2016)

    This list is in rough order from problems to opportunities. That is, the first few items describe situations where the status quo is problematic. Later in the list, you’ll see items that are more focused on improving the quality of the information you are delivering.

    1. Overtaxed system


    Is your system fast enough? // flickr: Eirien

    Your current system (tools and workflow) worked well in the past, but now it’s overburdened. Tasks take too long because you don’t have enough software licenses or enough people, or because your workflow has too many manual steps.

    XML-based content is not the only way to solve this problem, but you can use an XML-based system to improve the efficiency of your operation:

    • XML content files have a smaller footprint than the equivalent binary files (because formatting is not stored in each XML file but instead centralized in the publishing layer).
    • You can use a variety of editors with XML files. Software developers might use their favorite programming text editors. Full-time content creators likely prefer an XML authoring tool. Getting software is less of a problem because not everyone needs a (potentially expensive) authoring tool.
    • Content creators spend a shocking amount of time on formatting tasks—up to 50% of their time. XML-based publishing replaces the ongoing formatting tasks with a one-time effort to create stylesheets.
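As an illustration of that first point, here is a minimal DITA-style task topic (the topic ID and content are hypothetical). Nothing about fonts, spacing, or layout is stored in the file; formatting is applied later by the stylesheets:

```xml
<!-- replace-filter.dita: hypothetical example. The file carries only
     semantic markup; all formatting decisions live in the publishing
     layer, which keeps the source files small and stable. -->
<task id="replace-filter">
  <title>Replacing the air filter</title>
  <taskbody>
    <steps>
      <step><cmd>Power off the unit.</cmd></step>
      <step><cmd>Remove the filter cover.</cmd></step>
      <step><cmd>Insert the new filter.</cmd></step>
    </steps>
  </taskbody>
</task>
```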

    2. File management problems

    Box labeled Fragile: Do Not Drop that has been dropped and crushed.

    Not good. // flickr: urbanshoregirl

    Your days are filled with administrative problems, such as the following:

    • Trying to manage increasingly fragile authoring tools in which formatting spontaneously combusts when you so much as breathe near your computer. (I’m looking at you, Microsoft Word.)
    • Searching through shared network drives, local file storage, and random archives for a specific file and, most important, the latest version of that file.

    File administration is overhead at its worst.

    The authoring tool problems are addressed by the simplicity of XML files—formatting is applied later in the process, so it cannot muck up your source files. (Note: Some software offers the option of saving to “MySoftware XML” format. In most cases, that XML does include formatting information, which destroys much of the value of an XML-based approach.)

    The file search problem is a source and version control problem. The best solution for content is a component content management system (CCMS), in which you can track and manage the files. If, however, you cannot justify a CCMS for your organization, consider using a software source control system. Because XML files are text, you can use a common system such as Git or Subversion to manage your files. This approach doesn’t give you all the features of a CCMS, but the price is appealing. It’s also possible to manage binary files in a source control system, but you will experience additional limitations. (For example, you cannot compare file versions using the source control system.)

    3. Rising translation and localization demands

    Box with "no volcar" label.

    No volcar. // flickr: manuuuuuu

    Your “creative” workarounds to system limitations were acceptable when you only translated a few minor items into Canadian French, but now the company delivers in a dozen languages (with more expected next year), and correcting these problems in every language is getting expensive and time-consuming.

    Localization is by far the most common factor that drives organizations into XML. The cost savings from automated formatting across even a few language variants are compelling. Furthermore, because most organizations use outside vendors for translation, it’s quite easy to quantify the cost of translation—you can just look at the vendor’s invoices.

    4. Complex conditions

    Most unstructured authoring tools offer a way to label information as belonging to a specific content variant and produce two or more versions of a document from a single file. For example, by flagging test answers as “Instructor,” a teacher could generate both a test and an instructor’s answer key from a single file.

    In software documentation, a requirement to label information as belonging to a high-end “professional” version as opposed to the baseline product is common. Authors can then create documents for the baseline version and for the superset professional version from a single source.
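In DITA, for example, this basic flagging is done with profiling attributes such as audience (the content below is a hypothetical illustration):

```xml
<!-- Both the student test and the instructor's answer key are
     generated from this single source. A build-time filter includes
     or excludes the element flagged audience="instructor". -->
<p>What is the boiling point of water at sea level?</p>
<p audience="instructor">Answer: 100 degrees Celsius (212 degrees Fahrenheit).</p>
```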

    With more complex variants, however, the basic flagging and filtering is insufficient. Consider, for example, a product that has the following variants:

    • U.S. product and European product with different safety requirements
    • Product used in different environments, like factories, mines, and retail establishments
    • Optional accessories, which change the way the product works
    • Shared components that vary slightly from one product to another

    In this example, you would need to create the following types of filters and have the ability to generate combinations of filters:

    • Regulatory environment
    • Operating environment
    • Accessory
    • Product (to identify variance in shared components)

    In XML, you can use metadata to create these flags and filter on various combinations.
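As a sketch of how this works in DITA (all values here are hypothetical), each variant axis becomes a profiling value, and a DITAVAL filter file selects the combination you want at build time:

```xml
<!-- In the topic: this section applies only to the European
     mining variant of the product. -->
<section props="eu mine">
  <title>Safety requirements for underground use</title>
  <p>Additional grounding is required in mining environments.</p>
</section>
```

```xml
<!-- eu-mine.ditaval: build-time filter for the EU mining variant.
     Values not excluded here are included by default. -->
<val>
  <prop att="props" val="us" action="exclude"/>
  <prop att="props" val="factory" action="exclude"/>
  <prop att="props" val="retail" action="exclude"/>
</val>
```

Generating a different variant is then just a matter of running the same source through a different DITAVAL file.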

    5. No off-the-shelf solution meets your requirements

    If your output requirements are exotic, it’s quite likely that no authoring/publishing tool will give you the delivery format you need out of the box. For example, you might need to deliver warning messages in a format that the product software can consume. Or you might need information in strings that are compatible with web applications, perhaps in PHP or Python. JSON is increasingly required for data exchange.

    If you are faced with building a pipeline to deliver an unusual format, starting from XML will be easier and less expensive than starting from any proprietary system.
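Because XML is a neutral starting point, even an unusual target like JSON can be reached with a small transform. As a sketch (the element names are hypothetical), an XSLT stylesheet can serialize a file of error messages into a JSON object:

```xml
<!-- messages-to-json.xsl: hypothetical sketch. Reads a source file of
     <msg id="...">text</msg> elements and emits a JSON object keyed
     by message ID. A production transform would also escape quotes
     and other special characters in the message text. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/messages">
    <xsl:text>{</xsl:text>
    <xsl:for-each select="msg">
      <xsl:text>"</xsl:text><xsl:value-of select="@id"/>
      <xsl:text>": "</xsl:text><xsl:value-of select="."/>
      <xsl:text>"</xsl:text>
      <xsl:if test="position() != last()">
        <xsl:text>, </xsl:text>
      </xsl:if>
    </xsl:for-each>
    <xsl:text>}</xsl:text>
  </xsl:template>
</xsl:stylesheet>
```

Starting from a proprietary binary format, by contrast, usually means exporting to an intermediate format first, which adds cost and fragility to the pipeline.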

    6. More part-time content creators

    In many XML environments, the full-time content staff is augmented with part-time content creators, often subject matter experts, who contribute information. This helps alleviate the shortage of full-time content people. Another strategy is to use XML to open up collaboration across departments. For example, tech comm and training departments can share the load of writing procedural information. Interchange via XML saves huge amounts of copying, pasting, and reformatting time.

    Part-time content creators have a different perspective on authoring than full-timers. Their tolerance for learning curves and interface “challenges” generally decreases with the following factors:

    • Level of expertise. Subject-matter experts want to get in, write what they need to, and get out.
    • Level of compensation. Put too many obstacles in front of a volunteer, and your volunteer will simply drop out.
    • Scarcity of knowledge. The fewer people who understand the topic, the more likely it is that your part-time content creators will resist any workflow changes.

    The solution is to focus on WIIFM (“What’s in it for me?”). If the content creator is accustomed to managing content in complex spreadsheets with extensive, time-consuming copy and paste, an XML system with bulletproof versioning and reuse will be quite popular.

    7. Metadata

    Text is no longer just text. You need the ability to provide supplemental data about text components. For example, you need to be able to identify the abstract section for each magazine article. Or you want to create a link from a book review to a site where you can purchase the book. Conveniently, a book’s ISBN provides the unique identifier you need, but you don’t want to display the ISBN in the review itself, so you need metadata.

    Most unstructured tools let you specify metadata for a document (often, in something like “Document Properties”). XML lets you assign metadata to any document or document component, so you can include more detailed background information. (And you can use that metadata to filter the output results.)
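In DITA, for instance, the ISBN can travel with the topic as prolog metadata (the topic and ISBN value below are hypothetical) without ever appearing in the rendered review:

```xml
<topic id="book-review-example">
  <title>Review: A Field Guide to Structured Content</title>
  <prolog>
    <metadata>
      <!-- Stored with the topic and available to the publishing
           layer (for example, to build a purchase link), but not
           displayed in the output. -->
      <data name="isbn" value="978-0-00-000000-0"/>
    </metadata>
  </prolog>
  <body>
    <p>The review text goes here.</p>
  </body>
</topic>
```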

    8. Connectivity requirements

    In some contexts, your text connects to other systems. These might include the following:

    • For a repair procedure, a link from a part to your company’s parts inventory, so that you can see whether the part is available and order it if necessary.
    • For software documentation, the ability to embed error messages and UI strings both in content and in the software itself.
    • For medical content, the ability to accept information from medical devices and change the information displayed accordingly. (For example, a high blood pressure reading might result in a warning being displayed in your medical device instructions.)

    Does your organization show signs of needing XML? Can you justify the investment? Try our XML business case calculator for an instant assessment of your potential return on investment.