If you just want the slides, they are embedded below via Slideshare.
Tomorrow (February 5) at noon Eastern time, I’m doing a webinar, DITA 101–Why the Buzz?
This is a basic introduction to the Darwin Information Typing Architecture, an XML standard for technical communication content. If you’re wondering about this DITA “thing,” and want to get some basic information, this is the session for you.
Also, the price is right, as it’s free (register here). Audio will be Internet-based, so you don’t even have the expense of a phone call.
Many thanks to MadCap Software, who is organizing and sponsoring this series of free webinars. These sessions are “tool-independent” — they are not going to be pitches for MadCap products.
I have to mention Simon Bate’s new Hacking the DITA OT white paper again. It’s crammed with useful tips and tricks on how to get started configuring DITA output to your satisfaction. It’s not free, but at $20 for an instant download, it’s pretty cheap.
Conferences are more expensive than our $20 white paper, but they also give you the opportunity to talk with people face-to-face. My next conference event is DocTrain West (Palm Springs, CA). I have two sessions:
You can register for the event at a $400 savings until February 17. I hope to see you there.
STC Intercom, January 2009
As the many-to-many communication of blogs, forums, and the like grows in volume, official product information will become just one of many sources available to readers. Product owners who isolate their official information from the conversation run the risk of not being heard at all.
XML authoring can help close the documentation gap between official and user-generated content, integrating the two and ensuring the official voice is in the mix.
Download the PDF (125 K)
I estimate that about 80 percent of our consulting work is XML implementation. And about 80 percent of our XML implementation work is based on DITA. So we spend a lot of time with DITA and the DITA Open Toolkit.
I’m starting to wonder, though, whether the adoption rate of DITA and the DITA Open Toolkit is going to diverge.
For DITA, what we hear most often is that it’s “good enough.” DITA may not be a perfect fit for a customer’s content, but our customer doesn’t see a compelling reason to build the perfect structure. In other words, they are willing to compromise on document structure. DITA structure, even without specialization, offers a reasonable topic-based solution.
But for output, the requirements tend to be much more exacting. Customers want any output to match their established look and feel requirements precisely.
Widespread adoption of DITA leads to a sort of herd effect, with safety in numbers. Not so for the Open Toolkit — output requirements vary widely and people are reluctant to contribute back to the Open Toolkit, perhaps because look and feel is considered proprietary.
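Adjusting the Open Toolkit's look and feel typically means layering your own XSLT over the stylesheets it ships with. A minimal sketch of what such an override might look like for XHTML output — the file name and import path are invented, and parameter names and paths vary by toolkit version:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- custom.xsl (hypothetical): imports the toolkit's XHTML stylesheet
     and overrides a couple of parameters to apply corporate branding. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Import path depends on where the Open Toolkit is installed. -->
  <xsl:import href="../xsl/dita2xhtml.xsl"/>
  <!-- Point the generated pages at a corporate CSS file. -->
  <xsl:param name="CSSPATH" select="'branding/'"/>
  <xsl:param name="CSS" select="'corporate.css'"/>
</xsl:stylesheet>
```

Because the customization lives in a separate file rather than in edits to the toolkit itself, it survives toolkit upgrades — which is also why so little of this work ever flows back into the Open Toolkit: each organization's override layer encodes its own proprietary look and feel.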
The pattern we’re seeing is that customers adopt the Open Toolkit when:
Customers tend to adopt non-Open Toolkit solutions when:
The software vendors seem to be encouraging this trend. In part, I think they would like to find some way to get lock-in on DITA content. Consider the following:
The strategy of supporting DITA structure through a proprietary publishing engine actually makes a lot of sense to me. From a customer point of view, you can:
It’s not until you’re ready to publish that you move into a proprietary environment.
To me, the interesting question is this: Will the use of proprietary publishing engines be a temporary phenomenon, or will the Open Toolkit eventually displace them in the same way that DITA is displacing custom XML structure?
Originally published in STC Intercom, April 2008
DITA is a free, pre-made XML document structure. That statement can lead to a few erroneous assumptions: if it’s free, then it will cut down on costs, and if it’s pre-made, it will cut down on labor. There are several things to consider when choosing a DITA solution. Does your staff have the skills to author in a DITA environment? Will additional training be required? Does DITA even match your content model, and if it doesn’t, is it worth the effort to change?
Sarah’s conclusion? “DITA may be free, but it’s not cheap.”
Download the PDF (950 K)
Mark Wallis of IBM ISS on how to run a successful DITA pilot. Some great information in this presentation on how to reduce risks.
He recommends selecting your pilot project based on the following items:
They had one person out of a group of twelve, a “senior in name only” writer, leave because of this transition.
The ideal team for a pilot will need cross-functional and complementary skills:
Some advice on planning your content. (And it’s worth noting here that these apply to good writing and topic-oriented content rather than to DITA tools.)
Some interesting discussion of “task support clusters,” which include conceptual overviews, related tasks, deep concept, and reference information. (Michael Hughes did a presentation on this earlier today, which I unfortunately was not able to attend.)
They set up a DITA War Room in a small conference room and met at least daily (1.5 to 2 hours per day. Yikes). They set weekly goals and used small tasks to build momentum.
There was also heavy use of an internal wiki to put up initial “straw man” design, then revise, comment, and discuss.
Implementation deliverables were split out into smaller tasks, such as:
For the third time, he points out that they are no longer documenting how to use a check box, so I guess I’ll mention it.
Choosing the DITA toolset
Task Modeler (free) for building and managing ditamaps, defining relationships between topics, and creating skeleton topics (stub files).
DITA-compliant editor to edit your topics.
Compiler (part of open source toolkit). Compiler? What are they compiling? HTML Help? Oh. He just referred to Ant as a compiler. Ohhhhhkay.
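To make the toolset concrete, here is a sketch of the kind of skeleton ditamap a tool like Task Modeler produces — the map title and topic file names are invented for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<!-- Hypothetical skeleton map: file names are illustrative only. -->
<map title="Widget User Guide">
  <topicref href="installing_widget.dita" type="task"/>
  <topicref href="widget_concepts.dita" type="concept"/>
  <!-- A relationship table links related topics without
       hard-coding cross-references inside the topics themselves. -->
  <reltable>
    <relrow>
      <relcell><topicref href="widget_concepts.dita"/></relcell>
      <relcell><topicref href="installing_widget.dita"/></relcell>
    </relrow>
  </reltable>
</map>
```

The "compiler" step is then just an Ant build against this map in the Open Toolkit — in the 1.x-era toolkits, something along the lines of an `ant -Dargs.input=guide.ditamap dita2xhtml` invocation, though the exact parameters and targets depend on the toolkit version.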
Proof of concept
They picked a subset of the pilot to do the proof of concept.
The presenter’s boss is quoted as saying, “There’s no such thing as bad weather, only insufficient clothing.” I’m guessing that she’s never been to Minnesota in winter.
The objectives for the proof of concept:
They learned that deliverable formats matter because they must deliver several different formats.
Purchase toolsets only for pilot team.
After completing proof of concept (successfully!), invest in tools for the remaining writers.
They used their wiki to capture conventions and guidelines.
They paid attention to the change management issues. He doesn’t mention it here, but I would assume that the combination of an acquisition by IBM plus the requirement to change the authoring environment could have caused significant angst. Their approach included presentations, wiki content, email discussions, and online training.
At the point of transition, DITA boot camp was offered.
They used collaborative walkthroughs, or reviews, to help standardize their content development. Interesting. This sounds as though it could be a) threatening and b) an unbelievable time sink. But just maybe it might also c) help improve the content.
Other lessons learned
Think more, write less. (Don’t document the obvious, don’t document common user interface convention, write only if you’re really adding value.)
Don’t squander your ignorance. (If something makes you stumble in the interface, that will probably also cause problems for your users, so capture it.)
The more structured your content, the easier the transition to DITA.
Documenting the obvious teaches readers to ignore your text, so don’t document the obvious.
The handouts are available here: http://www.writersua.com/ohc/suppmatl/
Originally published in STC Intercom, November 2007
XML can benefit a publishing workflow in many ways: it improves content reuse and consistency, and it can automate much of the process. That all sounds wonderful, but XML is not the logical answer for everyone.
Implementing a structured authoring solution requires a significant change from the familiar desktop publishing routine to new tools, technologies, and processes. Switching to XML is going to cost time and money. Depending on your needs, it may not be the most efficient solution.
Download the PDF (350 K)
My post about tekom generated some interesting comments, including this one, which I will address in pieces:
Thanks for this info. I’ve been lobbying my company to send me to Tekom for the last few years, unsuccessfully. I submitted 2 times for presentations but both were rejected. Our company is in Concord, Massachusetts, USA.
Could you discuss the benefits to North American writers attending such an international event. Are there things you learned there you will not learn anywhere else (business/tech stuff of course )
Over on O’Reilly’s Radar blog, Andy Oram has a fascinating article about the demise (!) of the Information Age and what will be next:
[T]he Information Age was surprisingly short. In an age of Wikipedia, powerful search engines, and forums loaded with insights from volunteers, information is truly becoming free (economically), and thus worth even less than agriculture or manufacturing. So what has replaced information as the source of value?

The answer is expertise. Because most activities offering a good return on investment require some rule-breaking–some challenge to assumptions, some paradigm shift–everyone looks for experts who can manipulate current practice nimbly and see beyond current practice. We are all seeking guides and mentors.
What comes after the information age? (be sure to read the comments, too)
It’s an interesting idea, but I don’t think we’re getting away from the Information Age into the Expertise Age. After all, expertise is just a specialized (useful!) form of information.
In the comments, Tim O’Reilly points out that the real change is in how information is gathered and distributed with “the rise of new forms of computer mediated aggregators and new forms of collective curation and communication.”
I believe that we are still firmly in the Information Age because information has not yet become a commodity product. There is, however, clearly a shift happening in how information is created and delivered. I think it’s helpful to look at communication dimensions:
Technical support is the most expensive option; it’s also often the most relevant. Technical writing is more efficient (because the answer to the question is provided just once), but also less personal and therefore less relevant.
Many technical writers are concerned about losing control over their content. For an example of the alarmist perspective, read JoAnn Hackos’s recent article on wikis. Then, be sure to read Anne Gentle’s rebuttal on The Content Wrangler.
Keep in mind, though, that you can’t stop people from creating wikis, mailing lists, third-party books, forums, or anything else. You cannot control what people say about your products, and it’s possible that the “unauthorized” information will reach a bigger audience than the Official Documentation(tm). You can attempt to channel these energies into productive information, but our new information age is the Age of Uncontrolled Information.
Furthermore, the fact that people are turning to Google to find information says something deeply unflattering about product documentation, online help, and other user assistance. Why is a Google search more compelling than looking in the help?
What do killer Internet applications have in common?
Web 2.0: harness network effects to get better the more people use them.
Each of these companies is building a database whose value grows in proportion to the number of participants — a network-effect-driven data lock-in. (gulp)
Law of conservation of attractive profits
And thus, if digital content is becoming cheap, what’s next? What’s adjacent?
For publishers, the question is: where is value migrating to?
Curating user-generated content
What job does a book do? What is a book’s competition?
Search is most important benefit of content being online
“Piracy is progressive taxation”
DRM: “Like taking a cat to a vet” (hold them very carefully and loosely!)
More options = happier users