The Darwin Information Typing Architecture (DITA) provides an XML architecture for technical communication. Although implementing DITA is likely to be faster and easier than building your own XML architecture from the ground up, DITA is not suitable for everyone.
It is important to consider carefully whether you have a business case for DITA.
Localization provides easy cost justification
If you have localization in your workflow, you can probably justify the cost of DITA implementation. Typically, 30–50 percent of total localization cost in a traditional workflow is for desktop publishing. That is, after the files are translated from English into the target language, there is work to be done to accommodate text expansion and pagination changes.
With XML-based publishing, you can greatly reduce desktop publishing costs. If DTP charges drop to around 10 percent of the total, you save $20,000–$40,000 per $100,000 in localization costs.
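The arithmetic behind this estimate can be sketched in a few lines. The figures are the assumptions above (DTP at 30–50 percent of localization cost, dropping to about 10 percent), not fixed values:

```python
def localization_savings(total_cost, dtp_pct_before, dtp_pct_after=10):
    """Savings when desktop publishing falls from its current share of
    localization cost (in percent) to roughly 10 percent under XML."""
    return total_cost * (dtp_pct_before - dtp_pct_after) / 100

# DTP is typically 30-50 percent of a traditional localization budget.
low = localization_savings(100_000, 30)   # saves $20,000 per $100,000
high = localization_savings(100_000, 50)  # saves $40,000 per $100,000
print(low, high)
```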
We highly recommend starting with localization costs; they are typically easy to quantify.
Increasing content reuse
There are two important assumptions made in justifying DITA through increased reuse:
- You have content that could be reused, but you are not currently reusing it.
- Implementing DITA will help you to increase the amount of reuse in your organization.
If reuse is not happening because of technical limitations of the current workflow, implementing DITA should help. If, however, reuse does not occur because of other factors, such as rival workgroups that refuse to share information, then implementing DITA will not solve the problem. You will need to work with your authors to improve the working relationships.
The content reuse calculation is as follows:
- Assume you have 10,000 topics.
- The development cost per topic is 4 hours @ $50/hour. Each topic costs $200.
- If you can increase reuse by 5 percent, that means you have 500 topics that are not written (because they are reused instead). 500 topics @ $200 per topic = $100,000 in content development savings.
- You will also recoup additional savings during localization because the reused topics are only translated once.
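The calculation above can be expressed directly, using the same assumed figures (10,000 topics, 4 hours per topic at $50/hour, a 5 percent reuse gain):

```python
def reuse_savings(total_topics, hours_per_topic, hourly_rate, reuse_gain_pct):
    """Content-development savings from topics that are reused instead of
    rewritten, using the assumptions from the calculation above."""
    cost_per_topic = hours_per_topic * hourly_rate        # 4 h x $50 = $200
    reused_topics = total_topics * reuse_gain_pct // 100  # 5% of 10,000 = 500
    return reused_topics * cost_per_topic

print(reuse_savings(10_000, 4, 50, 5))  # 100000
```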
Reuse opportunities to consider include the following:
- Software error messages. Error messages in the software and in the documentation could be sourced from one location.
- Product specifications in product database and datasheets. Datasheets can use information from the product development database.
- Product descriptions. You may be able to share a product overview between marketing and technical communication.
- Training. Instructional designers often source information out of technical documentation, and automating this process could yield great improvements in productivity.
- Tech support. The support organization may be a source for new technical information.
When estimating the cost of manual reuse in your current workflow, consider the following:
- Time required to copy and paste across multiple documents.
- Error rates from manual reuse.
- Paying content professionals for copying and pasting rather than for valuable content creation.
- The likelihood that your best content creators will lose interest and look for more challenging work elsewhere.
Problems with content reuse
These problems are not DITA-specific, but when you begin to reuse content across an entire workgroup (or organization), it becomes critical that the writers collaborate. They must also write to a consistent style standard so that information can be combined without jarring stylistic inconsistencies.
XML is not a panacea for process issues and collaboration problems. You must address these types of problems before you implement XML.
ROI depends on teamwork
The return on investment that you see from DITA depends on your team. A team whose members like and respect each other will do a much better job in any environment, including DITA. A functional team will do the following:
- Share topics
- Communicate updates
- Minimize content ownership issues
- Smooth out conflicts
- Cooperate on assignments
All of these behaviors are required to ensure that reuse and content sharing work properly. Dysfunctional teams will hoard information, refuse to communicate updates, and so on, reducing the ROI on your DITA investment.
Supporting complex conditions
Complex conditional processing requirements include the following:
- Multiple conditional dimensions, such as platform, customer, audience, and product
- Huge number of possible variations
- Dynamic versioning instead of static publishing of a limited number of versions
You may be able to justify conditional processing with improved quality. Better support for complex conditions will allow you to eliminate redundancy in your content and provide more targeted information. It could also help you meet customer requirements for personalized documentation and enable more versioning than the current toolset supports.
The cost argument centers on automating variant content delivery. If, for example, you have 40 variants and you must configure and publish them one at a time, this might take one hour per variant, or 40 hours of work per deliverable, per release. With dynamic publishing, you would publish just once. Of course, this does not take into account the programming effort required to enable dynamic publishing. We have also assumed that the task of tagging information as conditional is the same in unstructured and structured environments.
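To see why variant counts explode, multiply the sizes of the conditional dimensions. The dimension sizes below are hypothetical; they simply reproduce the 40-variant example:

```python
from math import prod

def variant_count(dimension_sizes):
    """Distinct output variants implied by independent conditional
    dimensions (platform, customer, audience, and so on)."""
    return prod(dimension_sizes)

# Hypothetical dimensions: 4 platforms x 5 customer types x 2 audiences.
variants = variant_count([4, 5, 2])  # 40 variants
manual_hours = variants * 1          # one hour per variant, per release
print(variants, manual_hours)        # dynamic publishing: one publish run
```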
Implementing dynamic publishing is going to be expensive. It’s unlikely that you can justify this with cost savings. More often, the justification here is that you need this feature and you can’t accomplish the required configuration in unstructured content.
Accelerating time to market
Instead of looking at efficiency in your publishing workflow, you can also take a look at reduced time to market and how the earlier availability of documentation might allow you to ship product earlier. If a product sells a modest $1 million per year, then each week of availability is worth about $20,000. If you can deliver your content sooner and thereby accelerate the delivery of the first language or reduce the delays in shipping localized versions, you can potentially get your revenue faster.
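The per-week figure comes from dividing annual revenue by the weeks in a year; the $1 million product is the example from this section:

```python
def weekly_revenue(annual_revenue, weeks_per_year=52):
    """Approximate revenue attached to each week a product is available."""
    return annual_revenue / weeks_per_year

# A product selling $1 million per year earns roughly $20,000 per week,
# which is the approximate value of shipping one week earlier.
print(round(weekly_revenue(1_000_000)))
```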
The possibilities of new publishing architectures
In addition to the efficiency gains, creating content in an XML/DITA architecture opens up some interesting possibilities for new content. In particular, we know that mobile devices are going to be important, but it’s not yet clear whose technology might win. Storing your information in XML gives you great flexibility to implement new output types as needed. There is, of course, a development cost associated with each new output, but you don’t necessarily have to wait for a vendor to open up a new deliverable—you have the option of building your own. This power is especially appealing in larger organizations, which often have the resources for tools experts.
Integrating with user-generated content
With content in XML, you can transform your content into HTML and then present it side-by-side with user-generated content, such as forum posts on your web site. You can then use metadata to support unified search across all of the content—the professional, reviewed, official information and the information provided by customers. Again, the implementation of this sort of system is nontrivial, but this could be worthwhile if you have an energetic user community.
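As a toy illustration of the transform step, Python's standard library can turn topic XML into an HTML fragment. The topic markup below is a deliberately simplified sketch, not full DITA:

```python
import xml.etree.ElementTree as ET

# A deliberately tiny DITA-style topic; real topics are far richer.
TOPIC = (
    '<topic id="reset"><title>Resetting the device</title>'
    '<body><p>Hold the power button for ten seconds.</p></body></topic>'
)

def topic_to_html(topic_xml):
    """Turn a topic into an HTML fragment that could sit alongside
    user-generated content such as forum posts."""
    topic = ET.fromstring(topic_xml)
    title = topic.findtext("title")
    body = "".join(f"<p>{p.text}</p>" for p in topic.iter("p"))
    return f'<article id="{topic.get("id")}"><h2>{title}</h2>{body}</article>'

print(topic_to_html(TOPIC))
```

In a real system, the transform would be an XSLT or DITA Open Toolkit plugin, but the principle is the same: one XML source, many HTML outputs.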
Just-in-time publishing
Instead of publishing a book or an entire deliverable, you publish topics as content is ready. There are definitely some challenges with this approach, but it's appealing both for web publishing and for localization, where you can do incremental localization instead of book-based localization.
If you do just-in-time publishing, you may also be able to decouple your content deliveries from your product (often, software) delivery dates.
Responding to analytics
Analytics on a web site let you measure how topics are used, and you can then act on this information. You might look at popular and unpopular topics, unsuccessful searches, and which topics are getting the most comments. You can, of course, do analytics without XML, but the underlying XML foundation allows you to publish incremental updates in response to the analytics.
Implementation costs vary by organization, of course, but our 2009 survey showed an average implementation cost of around $100,000. You need to show at least that much in cost savings from the various factors we've discussed in this document.
Factors that increase implementation cost include the following:
- Software integration: Integrating many complex systems so that they work together.
- Complex formatting requirements, especially PDF: Good PDF is expensive.
- Inconsistent legacy files: Migration will be expensive if the original source files are bad.
- Narrative rather than topic-based legacy files: Converting narrative content into topics requires restructuring, not just retagging.
- Content management system: Selecting, licensing, and integrating a CMS adds cost.
- People: Change resistance and inertia are common.
Change management is critical. Your implementation will not succeed if you cannot get support from the authors. Risk factors here include the following:
- Dysfunctional teams. Address this with improved communication, building trust, and providing a roadmap early.
- Information hoarding during implementation. Address this by rewarding information-sharing, avoiding communication bottlenecks, and documenting decisions.
- Tool-specific blinders. Ask for open minds and let people experiment with trial versions. Acknowledge that XML tools are less mature than established desktop publishing tools.
- Using XML/DITA to clone an existing, problematic workflow. Before implementation, examine the existing workflow, think about its best and worst features, identify new requirements that the current workflow cannot meet, and understand that the new workflow greatly affects authors.