News, Opinion

ePub + tech pub = ?

At Scriptorium earlier this week, we all watched live blogs of the iPad announcement. (What else would you expect from a bunch of techies?) One feature of the iPad that really got us talking (and thinking) is its support of the ePub open standard for ebooks.

ePub is basically a collection of XHTML files zipped up with some baggage files. Considering that a lot of technical documentation groups already create HTML output as a deliverable, it’s likely not a huge step further to create an ePub version of the content. There is a transform for DocBook to ePub; a similar effort is underway for DITA. You can also export InDesign files to ePub.
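For those who haven’t looked inside one, a minimal ePub 2 package looks roughly like this (the OEBPS file names are conventional rather than required); the XHTML files carry the content and the rest is the baggage:

mimetype                  the literal text "application/epub+zip", stored uncompressed
META-INF/container.xml    points to the package file below
OEBPS/content.opf         metadata, manifest of files, and reading order (spine)
OEBPS/toc.ncx             navigation map (table of contents)
OEBPS/*.xhtml             the content itself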

While the paths to creating an ePub version seem pretty straightforward, does it make sense to release technical content as an ebook? I think a lot of the same reasons for releasing online content apply (less tree death, no printing costs, and interactivity, in particular), but there are other issues to consider, too: audience, how quickly ebook readers and software become widespread, how the features and benefits of the format stack up against those of PDF files and browser-based help, and so on. And there’s also the issue of actually leveraging the features of an output instead of merely doing the minimum of releasing text and images in that format. (In the PDF version of a user manual, have you ever clicked an entry in the table of contents only to discover the TOC has no links? When that happens, I assume the company that released the content was more interested in using the format to offload the printing costs on to me and less interested in using PDF as a way to make my life easier.)

The technology supporting ebooks will continue to evolve, and there likely will be a battle to see which ebook file format(s) will reign supreme. (I suspect Apple’s choice of the ePub format will raise that format’s prospects.) While the file formats get shaken out and ebooks continue to emerge as a way to disseminate content, technical communicators would be wise to determine how the format could fit into their strategies for getting information to end users.

What considerations come to your mind when evaluating the possibility of releasing your content in ePub (or other ebook) format?

Tools

White paper on whitespace (and removing it)

When I first started importing DITA and other XML files into structured FrameMaker, I was surprised by the excessive whitespace that appeared in the files. Even more surprising (in FrameMaker 8.0) were the red comments, displayed via the EDD, saying that some whitespace was invalid (these no longer appear in FrameMaker 9).

The whitespace was visible because of an odd decision by Adobe to handle all XML whitespace as if it were significant. (XML divides the world into significant and insignificant whitespace; most XML tools treat whitespace as insignificant except where necessary…think <codeblock> elements.) This approach to whitespace exists in both FrameMaker and InDesign.

At first I handled the whitespace on a case-by-case basis, removing it by hand or through regular expressions. Eventually, I realized this was a more serious problem and created an XSL transform to eliminate the whitespace as part of preprocessing. Because the XSL is acceptable to Xalan (not that hard to arrange), the transform can be integrated into a FrameMaker structured application.
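If you want a feel for the approach, the core of such a transform is an identity template plus one template that discards whitespace-only text nodes. This is a bare-bones sketch, not the stylesheet described below, and the <codeblock> exception is just one example of where whitespace must be preserved:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

<!-- Copy everything through unchanged (identity transform). -->
<xsl:template match="@*|node()">
  <xsl:copy>
    <xsl:apply-templates select="@*|node()"/>
  </xsl:copy>
</xsl:template>

<!-- Drop whitespace-only text nodes, except where whitespace is significant. -->
<xsl:template match="text()[not(normalize-space())][not(ancestor::codeblock)]"/>

</xsl:stylesheet>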

I figured this whitespace problem must be affecting (and frustrating) more than a few of you out there, so I made the stylesheet available on the Scriptorium web site. I also wrote a white paper, “Removing XML whitespace in structured FrameMaker documents,” that describes the XSL that went into the stylesheet and how to integrate it with your FrameMaker structured applications.

The white paper is available on the Scriptorium web site. Information about how to download the stylesheet is in the white paper.

If the stylesheet and whitepaper are useful to you, let us know!

Opinion, Webinar

Behold, the power of free

Lately, our webcasts are getting great participation. The December event had 100 people in attendance (the registered number was even higher), and the numbers for the next few months are strong, as well. Previous webcasts had attendance of A Lot Less than 100. What changed? The webcasts are now free. (Missing an event? Check our archives.)

We’re going in a similar direction with white papers. We charge for some content, but we also offer a ton of free information.

The idea is that free (and high-quality) information raises our profile and therefore later brings in new projects. I’m not so sure, though, that we have any evidence that supports this theory yet.

So, I thought I’d ask my readers. Do you evaluate potential vendors based on offerings such as webcasts and white papers? Are there other, more important factors?

PS Upcoming events, including several DITA webcasts, are listed on our events page.

Opinion

2010 predictions for technical communication

It’s time for my (apparently biennial) predictions post. For those of you keeping score at home, you can see the last round of predictions here. Executive summary: no clear leader for DITA editing, reuse analyzers, Web 2.0 integration, global business, Flash. In retrospect, I didn’t exactly stick my neck out on any of those. Let’s see if I can do better this year.

Desktop authoring begins to fade

Everyone else is talking about the cloud, but what about tech comm? Many content creation efforts will shift into the cloud and away from desktop applications and their monstrous footprints (I’m looking at you, Adobe). When your content lives in the cloud, you can edit from anywhere and be much less dependent on a specific computer loaded with specific applications.

I expect to see much more content creation migrate into web applications, such as wiki software and blogging software. I do not, at this point, see much potential for the various “online word processors,” such as Buzzword or Zoho Writer, for tech comm. Creating documents longer than four or five pages in these environments is painful.

In the ideal universe, I’d like to see more support for DITA and/or XML in these tools, but I’m not holding my breath for this in 2010.

The ends justify the means

From what we are seeing, the rate of XML adoption is steady or even accelerating. But the rationale for XML is shifting. In the past, the benefits of structured authoring—consistency, template enforcement, and content reuse—have been the primary drivers. But in several newer projects, XML is a means to an end rather than a goal—our customers want to extract information from databases, or transfer information between two otherwise incompatible applications. The project justifications reach beyond the issues of content quality and instead focus on integrating content from multiple information sources.

Social-ism

Is the hype about social media overblown? Actually, I don’t think so. I did a webcast (YouTube link) on this topic in December 2009. The short version: Technical communicators must now compete with information being generated by the user community. This requires greater transparency and better content.

My prediction is that a strategy for integrating social media and official tech comm will be critical in 2010 and beyond.

Collaboration

The days of the hermit tech writer are numbered. Close collaboration with product experts, the user community, and others will become the norm. This requires tools that are accessible to non-specialists and that offer easy ways to manage input from collaborators.

Language shifts

There are a couple of interesting changes in language:

  • Content strategy rather than documentation plan
  • Decision engine (such as Hunch, Wolfram Alpha, and Aardvark) rather than search engine

What are your predictions for 2010?

XML

Handling XSL:FO’s memory issue with large page counts

Formatting Objects (FO) processors (FOP, in particular) often fail with memory errors when processing very large documents for PDF output. Typically in XSL:FO, the body of a document is contained in a single fo:page-sequence element. When FO documents are converted to PDF output, the FO processor holds an entire fo:page-sequence in memory so that it can perform pagination adjustments over the span of the sequence. Very large page counts can therefore result in memory overflows or Java heap space errors.
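A common workaround (not necessarily the approach described in the full post) is to generate several smaller fo:page-sequence elements, one per chapter for example, so the processor only has to hold one chapter’s pages in memory at a time. Here is a minimal sketch of that idea in the XSLT that produces the FO; the book and chapter element names and the page master name are hypothetical:

<xsl:template match="book">
  <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
    <!-- layout-master-set omitted for brevity -->
    <!-- One page-sequence per chapter keeps the processor's working set small;
         page numbering continues across sequences by default. -->
    <xsl:for-each select="chapter">
      <fo:page-sequence master-reference="body-pages">
        <fo:flow flow-name="xsl-region-body">
          <xsl:apply-templates select="node()"/>
        </fo:flow>
      </fo:page-sequence>
    </xsl:for-each>
  </fo:root>
</xsl:template>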

Content strategy

The State of Structure

In early 2009, Scriptorium Publishing conducted a survey to measure how and why technical communicators are adopting structured authoring.

Of the 616 responses:

  • 29 percent of respondents indicated that they had already implemented structured authoring.
  • 16 percent indicated that they do not plan to implement structured authoring.
  • 14 percent were in the process of implementing structured authoring.
  • 20 percent were planning to do so.
  • 21 percent were considering it.
This report summarizes our findings on topics including the reasons for implementing structure, the adoption rate for DITA and other standards, and the selection of authoring tools.

Download PDF file (2 MB, 56 pages)

Discuss this document in our forum

Tools

Adding a DOCTYPE declaration on XSL output

In a posting a few weeks ago I discussed how to ignore the DOCTYPE declaration when processing XML through XSL. What I left unaddressed was how to add the DOCTYPE declaration back to the files. Several people have told me they’re tired of waiting for the other shoe to drop, so here’s how to add a DOCTYPE declaration.

First off: the easy solution. If the documents you are transforming always use the same DOCTYPE, you can use the doctype-public and doctype-system attributes in the <xsl:output> directive. When you specify these attributes, XSL inserts the DOCTYPE automatically.
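For example, if every document you transform is a DITA topic, a single <xsl:output> declaration does the job (the public and system identifiers shown are the standard ones for the DITA topic DTD):

<xsl:output method="xml"
    doctype-public="-//OASIS//DTD DITA Topic//EN"
    doctype-system="topic.dtd"/>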

However, if the DOCTYPE varies from file to file, you’ll have to insert the DOCTYPE declaration from your XSL stylesheet. In DITA files (and in many other XML architectures), the DOCTYPE is directly related to the root element of the document being processed. This means you can detect the name of the root element and use standard XSL to insert a new DOCTYPE declaration.

Before you charge ahead and drop a DOCTYPE declaration into your files, understand that a literal DOCTYPE declaration is not well-formed content inside a stylesheet. If you try to emit it literally, your XSL processor will complain. Instead, you’ll have to:

  • Use entities for the less-than (“<” – “&lt;”) and greater-than (“>” – “&gt;”) signs, and
  • Disable output escaping so that the entities are actually emitted as less-than or greater-than signs (output escaping will convert them back to entities, which is precisely what you don’t want).

There are at least two possible approaches for adding DOCTYPE to your documents: use an <xsl:choose> statement to select a DOCTYPE, or construct the DOCTYPE using the XSL concat() function.

To insert the DOCTYPE declaration with an <xsl:choose> statement, use the document’s root element to select which DOCTYPE declaration to insert. Note that the entities “&gt;” and “&lt;” in the code below aren’t HTML errors in this post; they are exactly what you need to type in the stylesheet. Also note that the DOCTYPE statement text in this template is left-aligned so that the output DOCTYPE declarations will be left-aligned. Most parsers seem to tolerate whitespace before the DOCTYPE declaration, but I prefer to err on the side of caution:


<xsl:template match="/">
<xsl:choose>
<xsl:when test="name(node()[1]) = 'topic'">
<xsl:text disable-output-escaping="yes">
&lt;!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd"&gt;
</xsl:text>
</xsl:when>
<xsl:when test="name(node()[1]) = 'concept'">
<xsl:text disable-output-escaping="yes">
&lt;!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd"&gt;
</xsl:text>
</xsl:when>
<xsl:when test="name(node()[1]) = 'task'">
<xsl:text disable-output-escaping="yes">
&lt;!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN" "task.dtd"&gt;
</xsl:text>
</xsl:when>
<xsl:when test="name(node()[1]) = 'reference'">
<xsl:text disable-output-escaping="yes">
&lt;!DOCTYPE reference PUBLIC "-//OASIS//DTD DITA Reference//EN" "reference.dtd"&gt;
</xsl:text>
</xsl:when>
</xsl:choose>
<xsl:apply-templates select="node()"/>
</xsl:template>

The preceding example contains statements for the topic, concept, task, and reference topic types; if you use other topic types, you’ll need to add additional statements. Rather than write a statement for each DOCTYPE, a more general approach is to process the name of the root element and construct the DOCTYPE declaration using the XSL concat() function.


<xsl:variable name="ALPHA_UC" select="'ABCDEFGHIJKLMNOPQRSTUVWXYZ'"/>
<xsl:variable name="ALPHA_LC" select="'abcdefghijklmnopqrstuvwxyz'"/>
<xsl:variable name="NEWLINE" select="'&#x0A;'"/>

<xsl:template match="/">
<xsl:call-template name="add-doctype">
<xsl:with-param name="root" select="name(node()[1])"/>
</xsl:call-template>
<xsl:apply-templates select="node()"/>
</xsl:template>

<!-- Create a doctype based on the root element -->
<xsl:template name="add-doctype">
<xsl:param name="root"/>
<!-- Create an init-cap version of the root element name. -->
<xsl:variable name="initcap_root">
<xsl:value-of
select="concat(translate(substring($root,1,1),$ALPHA_LC,$ALPHA_UC),
               translate(substring($root,2),$ALPHA_UC,$ALPHA_LC))"
/>
</xsl:variable>
<!-- Build the DOCTYPE by concatenating pieces.
Note that XSL syntax requires you to use the &quot; entity for
quotation marks ("). -->

<xsl:variable name="doctype"
select="concat('!DOCTYPE ',
$root,
' PUBLIC &quot;-//OASIS//DTD DITA ',
$initcap_root,
'//EN&quot; &quot;',
$root,
'.dtd&quot;')"/>
<xsl:value-of select="$NEWLINE"/>
<!-- Output the DOCTYPE surrounded by < and >. -->
<xsl:text disable-output-escaping="yes">&lt;</xsl:text>
<xsl:value-of select="$doctype"/>
<xsl:text disable-output-escaping="yes">&gt;</xsl:text>
<xsl:value-of select="$NEWLINE"/>
</xsl:template>

The one caveat about this approach is that it depends on a consistent portion of the public ID form (“-//OASIS//DTD DITA “). If there are differences in the public ID for your various DOCTYPE declarations, those differences may complicate the template.

So there you have it: DOCTYPEs in a flash. Just remember to use disable-output-escaping="yes" and use entities where appropriate and you’ll be fine.

Opinion

Would you use just a gardening trowel to plant a tree?

As technical communicators, our ultimate goal is to create accessible content that helps users solve problems. Focusing on developing quality content is the priority, but you can take that viewpoint to an extreme by saying that content-creation tools are just a convenience for technical writers:

The tools we use in our wacky profession are a convenience for us, as are the techniques we use. Users don’t care if we use FrameMaker, AuthorIt, Flare, Word, AsciiDoc, OpenOffice.org Writer, DITA or DocBook to create the content. They don’t give a hoot if the content is single sourced or topic based.

Sure, end users probably don’t know or care about the tools used to develop content. However, users do have eagle eyes for spotting inconsistencies in content, and they will call you out for conflicting information in a heartbeat (or worse, just abandon the official user docs altogether for being “unreliable”). If your department has implemented reuse and single-sourcing techniques that eliminate those inconsistencies, your end users are going to have a lot more faith in the validity of the content you provide.

Also, a structured authoring process that removes the burden of formatting from authors gives tech writers more time to focus on providing quality content to the end user. Yep, the end user doesn’t give a fig that the PDF or HTML file they are reading was generated from DITA-based content, but because the tech writers creating that content focused on just writing instead of writing, formatting, and converting the content, the information is probably better written and more useful.

Dogwood // flickr: hlkljgk

All this talk about tools makes me think about the implements I use for gardening. A few years ago, I planted a young dogwood tree in my back yard. I could have used a small gardening trowel to dig the hole, but instead, I chose a standard-size shovel. Even though the tree had no opinion on the tool I used (at least I don’t think it did!), it certainly benefited from my tool selection. Because I was able to dig the hole and plant the tree in a shorter amount of time, the tree was able to develop a new root system in its new home more quickly. Today, that tree is flourishing and is about four feet taller than it was when I planted it.

The same applies to technical content. If a tool or process improves the consistency of content, gives authors more time to focus on the content, and shortens the time it takes to distribute that content, then the choice and application of a tool are much more than mere “conveniences.”

Conferences, Webinar

Coming attractions for October and November

On October 22, join Simon Bate for a session on delivering multiple versions of a help set without making multiple copies of the help:

We needed to generate a help set from DITA sources that applied to multiple products. However, serious space constraints prevented us from using standard DITA conditional processing to create multiple, product-specific versions of the help; there was only room for one copy of the help. Our solution was to create a single help set in which the appropriate content is displayed when the help is opened.
In this webcast, we’ll show you how we used the DITA Open Toolkit to create a help set with dynamic text display. The webcast introduces some minor DITA Open Toolkit modifications and several client-side JavaScript techniques that you can use to implement dynamic text display in HTML files. Minimal programming skills necessary.

Register for dynamic text display webcast

I will be visiting New Orleans for LavaCon. This event, organized by Jack Molisani, is always a highlight of the conference year. I will be offering sessions on XML and on user-generated content. You can see the complete program here. In addition to my sessions, I will be bringing along a limited number of copies of our newest publication, The Compass. Find me at the event to get your free copy while supplies last. (Otherwise, you can order online Real Soon Now for $15.95.)

Register for LavaCon (note, early registration has been extended until October 12)

And last but certainly not least, we have our much-anticipated session on translation workflows. Nick Rosenthal, Managing Director, Salford Translations Ltd., will deliver a webcast on cost-effective document design for a translation workflow on November 19 at 11 a.m. Eastern time:

In this webcast, Nick Rosenthal discusses the challenges companies face when translating their content and offers some best practices for managing your localization budget effectively, including XML-based workflows and ways to integrate localized screen shots into translated user guides or help systems.

Register for the translation workflow webcast

As always, webcasts are $20. LavaCon is just a bit more. Hope to see you at all of these events.

Opinion

A strident defense of mediocre formatting

In addition to a gratuitous (and entertaining) swipe at “noisome” DITA “fanboys,” Roger Hart argues that we need to reconsider the disadvantages of automated formatting:

The thing is, [separation of content and formatting has] all been taken rather stridently to heart in certain quarters, leading to a knee jerk reaction whenever author-controlled formatting/pagination/lineation is mentioned as anything other than bleak, sulphurous devilry. This is twaddle. […]

Uncertainty in meaning is anathema to user intelligibility. If we’re going to make sure we’re not writing poetry, there’s definitely value in having poetry’s level of control over semantic blocks.

Of course, it’s fully possible that this is an expensive distraction.

Possible? It’s definitely expensive. It’s possible that it’s a distraction.

I think Hart perhaps unintentionally put his finger on the real issue: value. How much value (in the form of improved comprehension) is added to a technical document when you are able, in the words of commenter Brian Harris, to “lovingly handcraft” each page?

How much value (in the form of cost avoidance) is added to an organization when you are able to spit out a reasonably formatted document in a few minutes?

Actually, I have a different question. How far should we take this argument? Here’s an example of the pinnacle of handcrafting:

[Image: a page from the Book of Kells]
Can we all agree that this might perhaps take handcrafting a little too far?

Compared to the Book of Kells (above), the Gutenberg Bible looks quite pedestrian:

[Image: a page from the Gutenberg Bible]

You can just imagine the scribes with their quills, lapis, gold leaf, and other implements muttering, “That Gutenberg and his noisome fanboys. He can’t even render two colors without our help. Poser. It’ll never last.”

Formatting automation removes cost from the process of creating and delivering content. For technical documents that change often and are perhaps delivered in multiple languages, it removes a lot of cost. Let’s assume that handcrafted pages can improve ease of reading and comprehension with careful copy-fitting and adjusted spacing (Hart’s article mentions “headings, line breaks, intra-word, etc”). This increases the cost of the content.

What happens when content is expensive? Fewer people get to see it.

The number of books in Europe went from roughly 50,000 before Gutenberg to 12 million 50 years later.

I think we can all agree that e-books offer none of the typographic sophistication in question here. Bill Gates (yes, that Bill Gates) wrote in 1999:

It is hard to imagine today, but one of the greatest contributions of e-books may eventually be in improving literacy and education in less-developed countries. Today people in poor countries cannot afford to buy books and rarely have access to a library. 

Essentially, we can produce documents inexpensively and give more people access to them as a direct result of lower cost, or we can climb on our typographic high horse and whine about word spacing.

I’m with the noisome fanboys.
