
Content management


Interview with a vampire: interviewing to find processes that drain efficiency

When you’re considering an overhaul of your publishing workflow, you may find yourself becoming a metaphorical version of Van Helsing, the vampire-hunting character from Bram Stoker’s Dracula (and the many, many movies based on the Dracula story). You need to find all the efficiency-draining aspects of your current process and eliminate them.


Talk amongst yourselves…introducing forums.scriptorium.com

Our web site now has forums for discussions of technical communication issues. We want to give you, our readers, a venue where you can set your own agenda instead of just responding to our blog posts.

Given Scriptorium’s particular interests, I expect to see a lot of emphasis on publishing automation and XML. But frankly, we don’t know exactly what might happen. Communities often develop in unexpected ways. It will be up to you—and us—to figure out what direction these forums go.

(We have an internal pool on how long before Godwin’s law is applied.)

The forums are available in our main site navigation. There are also RSS feeds so you can subscribe to a topic or category of interest. Or, if you prefer, you can get email notifications for new forum posts.

And how do we feel about this launch? We’re…perfectly calm.

Please join the conversation.


The elephant in the room—publishers and e-books

Two years ago, Nate Anderson wrote this on Ars Technica:

The book business, though far older than the recorded music business, is still lucky enough to have time on its side: no e-book reader currently offers a better reading experience than paper.

That’s what makes Apple’s iPad announcement so important. Books will now face stiff competition from e-books as the e-book experience improves.

Elephant in the room // flickr: mobilestreetlife


Meanwhile, the publishing industry (with the notable exception of O’Reilly Media) is desperately trying to avoid the inevitable. (For a slightly happier take, see BusinessWeek.)

Publishers are supposed to filter, edit, produce, distribute, and market content. Pre-Internet, all of these things were difficult and required significant financial resources. Today, many are easy and all are cheap.

There’s only one other thing.

Content.

But the revenue split between publishers and authors does not—yet—reflect the division of labor. The business relationships are still built on the idea that authors can’t exist without publishers. In fact, it’s the reverse that’s true.

Yes, only the big publishers can get your book into every bookstore in the country. But I’ve got news for you: unless your name is on an elite shortlist with the likes of Dan Brown, John Grisham, Nora Roberts, and J.K. Rowling, that distribution advantage probably doesn’t matter.

If you know your audience, you can reach them at least as well as a big publisher can. And you need to reach a lot fewer people to succeed as an independent. The general rule of thumb is a 10-to-1 ratio. You’ll make the same amount selling 10,000 books through a traditional publisher as 1,000 books on your own.

It’s not so difficult to hire freelancers (especially in this economy) to edit and produce your book, if that’s not your cup of tea. Distribution is doable—Amazon is easy, bookstores a little more challenging. This is where e-books will accelerate the change—the challenges of shelf space and returns simply disappear.

And even if you have a publisher, they will expect you to do most of the marketing.

So, what will successful publishers look like in 2020?

  • They will provide editorial and production support for writers who do not want to deal with technical issues.
  • They will support authors in marketing by helping them with blogging platforms and other social media efforts.
  • They will get a much smaller cut of revenues than they currently do.

Actually, that looks a lot like Lulu.


    ePub + tech pub = ?

    At Scriptorium earlier this week, we all watched live blogs of the iPad announcement. (What else would you expect from a bunch of techies?) One feature of the iPad that really got us talking (and thinking) is its support of the ePub open standard for ebooks.

    ePub is basically a collection of XHTML files zipped up with some baggage files. Considering a lot of technical documentation groups create HTML output as a deliverable, it’s likely not a huge step further to create an ePub version of the content. There is a transform for DocBook to ePub; there is a similar effort underway for DITA. You can also save InDesign files to ePub.
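
The “XHTML files zipped up with some baggage files” structure is easy to see if you rename an .epub file to .zip and open it. A minimal package looks something like this (the OEBPS directory and the individual file names are common conventions, not requirements of the spec):

mimetype                  (the literal text "application/epub+zip"; must be the first, uncompressed entry)
META-INF/container.xml    (points the reading system to the package file)
OEBPS/content.opf         (manifest of files plus the reading order, or "spine")
OEBPS/toc.ncx             (table of contents for navigation)
OEBPS/topic1.xhtml        (the content itself)

The container.xml file is the fixed entry point; per the Open Container Format spec, it identifies where the package file lives:

<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>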

    While the paths to creating an ePub version seem pretty straightforward, does it make sense to release technical content as an ebook? I think a lot of the same reasons for releasing online content apply (less tree death, no printing costs, and interactivity, in particular), but there are other issues to consider, too: audience, how quickly ebook readers and software become widespread, how the features and benefits of the format stack up against those of PDF files and browser-based help, and so on. And there’s also the issue of actually leveraging the features of an output instead of merely doing the minimum of releasing text and images in that format. (In the PDF version of a user manual, have you ever clicked an entry in the table of contents only to discover the TOC has no links? When that happens, I assume the company that released the content was more interested in using the format to offload the printing costs on to me and less interested in using PDF as a way to make my life easier.)

    The technology supporting ebooks will continue to evolve, and there likely will be a battle to see which ebook file format(s) will reign supreme. (I suspect Apple’s choice of the ePub format will raise that format’s prospects.) While the file formats get shaken out and ebooks continue to emerge as a way to disseminate content, technical communicators would be wise to determine how the format could fit into their strategies for getting information to end users.

    What considerations come to your mind when evaluating the possibility of releasing your content in ePub (or other ebook) format?


    White paper on whitespace (and removing it)

    When I first started importing DITA and other XML files into structured FrameMaker, I was surprised by the excessive whitespace that appeared in the files. Even more surprising (in FrameMaker 8.0) were the red comments displayed via the EDD that said that some whitespace was invalid (these no longer appear in FrameMaker 9).

The whitespace was visible because of an odd decision by Adobe to handle all XML whitespace as if it were significant. (XML divides the world into significant and insignificant whitespace; most XML tools treat whitespace as insignificant except where necessary…think <codeblock> elements.) This approach to whitespace exists in both FrameMaker and InDesign.

At first I handled the whitespace on a case-by-case basis, removing it by hand or with regular expressions. Eventually, I realized this was a more serious problem and created an XSL transform to eliminate the whitespace as part of preprocessing. By using XSL that was acceptable to Xalan (not that hard), the transform can be integrated into a FrameMaker structured application.
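
To sketch the preprocessing idea (this is not the stylesheet described in the white paper below, just a minimal illustration): XSLT’s built-in <xsl:strip-space> instruction discards whitespace-only text nodes, and an identity template copies everything else through unchanged. Elements where whitespace really is significant, such as DITA’s <codeblock>, can be listed in <xsl:preserve-space>:

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Drop whitespace-only text nodes everywhere... -->
  <xsl:strip-space elements="*"/>
  <!-- ...except where whitespace is significant. -->
  <xsl:preserve-space elements="codeblock pre lines"/>
  <!-- Identity template: copy everything else as-is. -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>

Because this is plain XSLT 1.0, it runs under Xalan and can be wired into a FrameMaker structured application’s read/write rules as a preprocessing step.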

I figured this whitespace problem must be affecting (and frustrating) more than a few of you out there, so I made the stylesheet available on the Scriptorium web site. I also wrote a white paper, “Removing XML whitespace in structured FrameMaker documents,” that describes the XSL that went into the stylesheet and how to integrate it with your FrameMaker structured applications.

    The white paper is available on the Scriptorium web site. Information about how to download the stylesheet is in the white paper.

If the stylesheet and white paper are useful to you, let us know!


    Adding a DOCTYPE declaration on XSL output

    In a posting a few weeks ago I discussed how to ignore the DOCTYPE declaration when processing XML through XSL. What I left unaddressed was how to add the DOCTYPE declaration back to the files. Several people have told me they’re tired of waiting for the other shoe to drop, so here’s how to add a DOCTYPE declaration.

    First off: the easy solution. If the documents you are transforming always use the same DOCTYPE, you can use the doctype-public and doctype-system attributes in the <xsl:output> directive. When you specify these attributes, XSL inserts the DOCTYPE automatically.
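
For example, to stamp every output file with the DITA topic DOCTYPE, these two attributes on <xsl:output> are all you need (adjust the public and system identifiers to match your own DTD):

<xsl:output method="xml"
    doctype-public="-//OASIS//DTD DITA Topic//EN"
    doctype-system="topic.dtd"/>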

    However, if the DOCTYPE varies from file to file, you’ll have to insert the DOCTYPE declaration from your XSL stylesheet. In DITA files (and in many other XML architectures), the DOCTYPE is directly related to the root element of the document being processed. This means you can detect the name of the root element and use standard XSL to insert a new DOCTYPE declaration.

    Before you charge ahead and drop a DOCTYPE declaration into your files, understand that the DOCTYPE declaration is not valid XML. If you try to emit it literally, your XSL processor will complain. Instead, you’ll have to:

    • Use entities for the less-than (“<” – “&lt;”) and greater-than (“>” – “&gt;”) signs, and
    • Disable output escaping so that the entities are actually emitted as less-than or greater-than signs (output escaping will convert them back to entities, which is precisely what you don’t want).

    There are at least two possible approaches for adding DOCTYPE to your documents: use an <xsl:choose> statement to select a DOCTYPE, or construct the DOCTYPE using the XSL concat() function.

    To insert the DOCTYPE declaration with an <xsl:choose> statement, use the document’s root element to select which DOCTYPE declaration to insert. Note that the entities “&gt;” and “&lt;” aren’t HTML errors in this post, they are what you need to use. Also note that the DOCTYPE statement text in this template is left-aligned so that the output DOCTYPE declarations will be left aligned. Most parsers seem to tolerate whitespace before the DOCTYPE declaration, but I prefer to err on the side of caution:


<xsl:template match="/">
  <xsl:choose>
    <xsl:when test="name(node()[1]) = 'topic'">
      <xsl:text disable-output-escaping="yes">
&lt;!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd"&gt;
</xsl:text>
    </xsl:when>
    <xsl:when test="name(node()[1]) = 'concept'">
      <xsl:text disable-output-escaping="yes">
&lt;!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd"&gt;
</xsl:text>
    </xsl:when>
    <xsl:when test="name(node()[1]) = 'task'">
      <xsl:text disable-output-escaping="yes">
&lt;!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN" "task.dtd"&gt;
</xsl:text>
    </xsl:when>
    <xsl:when test="name(node()[1]) = 'reference'">
      <xsl:text disable-output-escaping="yes">
&lt;!DOCTYPE reference PUBLIC "-//OASIS//DTD DITA Reference//EN" "reference.dtd"&gt;
</xsl:text>
    </xsl:when>
  </xsl:choose>
  <xsl:apply-templates select="node()"/>
</xsl:template>

    The preceding example contains statements for the topic, concept, task, and reference topic types; if you use other topic types, you’ll need to add additional statements. Rather than write a statement for each DOCTYPE, a more general approach is to process the name of the root element and construct the DOCTYPE declaration using the XSL concat() function.


<xsl:variable name="ALPHA_UC" select="'ABCDEFGHIJKLMNOPQRSTUVWXYZ'"/>
<xsl:variable name="ALPHA_LC" select="'abcdefghijklmnopqrstuvwxyz'"/>
<xsl:variable name="NEWLINE" select="'&#x0A;'"/>

<xsl:template match="/">
  <xsl:call-template name="add-doctype">
    <xsl:with-param name="root" select="name(node()[1])"/>
  </xsl:call-template>
  <xsl:apply-templates select="node()"/>
</xsl:template>

<!-- Create a doctype based on the root element -->
<xsl:template name="add-doctype">
  <xsl:param name="root"/>
  <!-- Create an init-cap version of the root element name. -->
  <xsl:variable name="initcap_root">
    <xsl:value-of
      select="concat(translate(substring($root,1,1),$ALPHA_LC,$ALPHA_UC),
                     translate(substring($root,2),$ALPHA_UC,$ALPHA_LC))"
    />
  </xsl:variable>
  <!-- Build the DOCTYPE by concatenating pieces.
       Note that XSL syntax requires you to use the &quot; entities for
       quotation marks ("). -->
  <xsl:variable name="doctype"
    select="concat('!DOCTYPE ',
                   $root,
                   ' PUBLIC &quot;-//OASIS//DTD DITA ',
                   $initcap_root,
                   '//EN&quot; &quot;',
                   $root,
                   '.dtd&quot;')"/>
  <xsl:value-of select="$NEWLINE"/>
  <!-- Output the DOCTYPE surrounded by < and >. -->
  <xsl:text disable-output-escaping="yes">&lt;</xsl:text>
  <xsl:value-of select="$doctype"/>
  <xsl:text disable-output-escaping="yes">&gt;</xsl:text>
  <xsl:value-of select="$NEWLINE"/>
</xsl:template>

    The one caveat about this approach is that it depends on a consistent portion of the public ID form (“-//OASIS//DTD DITA “). If there are differences in the public ID for your various DOCTYPE declarations, those differences may complicate the template.

So there you have it: DOCTYPEs in a flash. Just remember to use disable-output-escaping="yes" and use entities where appropriate, and you’ll be fine.


    To bid or not to bid—a vendor’s guide to RFPs

    Request for Proposal (RFP) documents usually arrive in the dead of night, addressed to sales@scriptorium or sometimes info@scriptorium.

    Dear Vendor,

    LargeCompany is looking for a partner who can work magic and walk on water. Please refer to the attached RFP.

    Signed,

    Somebody in Purchasing

    Our instinct is to crack open the RFP and start writing a proposal. But over time, we’ve learned to take a step back and evaluate the RFP first to ensure that it’s worth our time.

    In this post, I’ve outlined some of the issues that we consider before responding to an RFP.


    Would you use just a gardening trowel to plant a tree?

    As technical communicators, our ultimate goal is to create accessible content that helps users solve problems. Focusing on developing quality content is the priority, but you can take that viewpoint to an extreme by saying that content-creation tools are just a convenience for technical writers:

    The tools we use in our wacky profession are a convenience for us, as are the techniques we use. Users don’t care if we use FrameMaker, AuthorIt, Flare, Word, AsciiDoc, OpenOffice.org Writer, DITA or DocBook to create the content. They don’t give a hoot if the content is single sourced or topic based.

    Sure, end users probably don’t know or care about the tools used to develop content. However, users do have eagle eyes for spotting inconsistencies in content, and they will call you out for conflicting information in a heartbeat (or worse, just abandon the official user docs altogether for being “unreliable”). If your department has implemented reuse and single-sourcing techniques that eliminate those inconsistencies, your end users are going to have a lot more faith in the validity of the content you provide.

    Also, a structured authoring process that removes the burden of formatting content from the authoring process gives tech writers more time to focus on providing quality content to the end user. Yep, the end user doesn’t give a fig that the PDF or HTML file they are reading was generated from DITA-based content, but because the tech writers creating that content focused on just writing instead of writing, formatting, and converting the content, the information is probably better written and more useful.

    Dogwood // flickr: hlkljgk


    All this talk about tools makes me think about the implements I use for gardening. A few years ago, I planted a young dogwood tree in my back yard. I could have used a small gardening trowel to dig the hole, but instead, I chose a standard-size shovel. Even though the tree had no opinion on the tool I used (at least I don’t think it did!), it certainly benefited from my tool selection. Because I was able to dig the hole and plant the tree in a shorter amount of time, the tree was able to develop a new root system in its new home more quickly. Today, that tree is flourishing and is about four feet taller than it was when I planted it.

    The same applies to technical content. If a tool or process improves the consistency of content, gives authors more time to focus on the content, and shortens the time it takes to distribute that content, then the choice and application of a tool are much more than mere “conveniences.”


    Ignoring DOCTYPE in XSL Transforms using Saxon 9B

Recently I had to write some XSL transforms in which I wanted to ignore the DOCTYPE declarations in the source XML files. In one case, I didn’t have access to the DTD (and the files wouldn’t have validated even if I did). In the other case, the XML files were DITA files, but I had no need or interest in validating them; I simply needed to run a transform that modified some character data in the files.

    In the first case, I ended up writing a couple of SED scripts that removed and re-inserted the DOCTYPE declaration. By the time I encountered the second case, I wanted to do something less ham-fisted, so I started investigating how to direct Saxon to ignore the DOCTYPE declaration.

    My first thought was to use the -x switch in Saxon. Perhaps I didn’t use it correctly, but I couldn’t get it to work. Even though I was using a non-validating parser (Piccolo), Saxon kept telling me that the DTD couldn’t be found.

    I went back to the drawing board (aka Google) and found a note from Michael Kay that said, “to ignore the DTD completely, you need to use a catalog that redirects the DTD reference to some dummy DTD.” Michael provided a link to a very useful page in the Saxon Wiki that discussed using a catalog with Saxon. After a bit of experimentation, I got it working correctly. In this blog post, I’ve distilled the information to make it useful to others who need to ignore the DOCTYPE in their XSL.

    Before I describe the catalog implementation, I’d like to point out a simple solution. This solution works best when a set of XML files are in a single directory and all files use the same DOCTYPE declaration in which the system ID specifies a file:

<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">

    In this case, you don’t need a catalog. It’s easier to create an empty file named “topic.dtd” (a dummy DTD) and save it in the same directory as the XML files. The XML parser looks first for the system ID; if it finds a DTD file, it uses it. Case closed.

    However, there are many cases in which this simple solution doesn’t work. The system ID (“topic.dtd” in the previous example) might specify a path that cannot be reproduced on your machine…or the XML files could be spread across multiple directories…or there could be many different DOCTYPEs…or…

In these cases, it makes more sense to set up a catalog file. To specify a catalog with Saxon, you must use the XML Commons Resolver from Apache (resolver.jar). You can download the resolver from SourceForge. The good news is, if you have the DITA Open Toolkit installed on your machine, you already have a copy of the resolver.jar file; it’s in %DITA-OT%\lib\resolver.jar. You specify the class path for the resolver in the Java command using the -cp switch (shown below).

The resolver requires you to specify a catalog.xml file, in which you map the public ID (or system ID) in the DOCTYPE declaration to a local DTD file. The catalog.xml file I created looks like this:

<catalog prefer="public" xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <public publicId="-//OASIS//DTD DITA Topic//EN" uri="dummy.dtd"/>
  <public publicId="-//OASIS//DTD DITA Concept//EN" uri="dummy.dtd"/>
  <public publicId="-//OASIS//DTD DITA Task//EN" uri="dummy.dtd"/>
  <public publicId="-//OASIS//DTD DITA Reference//EN" uri="dummy.dtd"/>
</catalog>

    Note that the uri attribute in each entry points to a dummy DTD (an empty file). The file path used for the dummy.dtd file is relative to the location of the catalog file.

    Putting it all together, I created a DOS batch file to run Java and invoke Saxon:

java -cp c:\saxon9\saxon9.jar;C:\DITA-OT1.4.3\lib\resolver.jar ^
  -Dxml.catalog.files=catalog.xml ^
  net.sf.saxon.Transform ^
  -r:org.apache.xml.resolver.tools.CatalogResolver ^
  -x:org.apache.xml.resolver.tools.ResolvingXMLReader ^
  -y:org.apache.xml.resolver.tools.ResolvingXMLReader ^
  -xsl:my_transform.xsl ^
  -s:my_content.xml

    The Java -cp switch adds class paths for the saxon.jar and resolver.jar files. The -D switch sets the system property xml.catalog.files to the location of the catalog.xml file.

    The switches following the Java class (net.sf.saxon.Transform) are Saxon switches.

    • -r – class of the resolver
    • -x – class of the source file parser
    • -y – class of the stylesheet parser

Note: I’m using Windows (DOS) syntax here. If you are using Unix (Linux, Mac), separate the paths in the class path with a colon (:) and use the backslash (\) as the line continuation character.
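
For reference, a Unix version of the same invocation might look like this (the /opt paths are just an assumption; adjust them to wherever the jar files live on your machine):

java -cp /opt/saxon9/saxon9.jar:/opt/DITA-OT1.4.3/lib/resolver.jar \
  -Dxml.catalog.files=catalog.xml \
  net.sf.saxon.Transform \
  -r:org.apache.xml.resolver.tools.CatalogResolver \
  -x:org.apache.xml.resolver.tools.ResolvingXMLReader \
  -y:org.apache.xml.resolver.tools.ResolvingXMLReader \
  -xsl:my_transform.xsl \
  -s:my_content.xml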

When you run Saxon this way, you’ll notice two things: first, Saxon doesn’t complain about the DTD (yay!); second, there is no DOCTYPE declaration in the output. I’ll address how to add the DOCTYPE declaration back to the output XML file in my next blog post.


    HTML 5: Browser Wars Reprise?

    Recently, I ran across an article by Rob Cherny in Dr. Dobb’s Journal. He suggests that the added features in HTML 5 combined with an end to the development of XHTML point to a brighter standards-based future. He sees closed solutions like Flash, Silverlight, and JavaFX being supplanted directly by HTML 5 code. His view is that the web owes its success to standards.

It’s tempting to agree. Standards certainly allow for collaborative growth, though I’m not the least bit convinced that collaborative growth is the foundation of the web’s success. I believe that success is really traceable to the simplicity and flexibility of HTML, and each new version takes us further from that simplicity.

    Through the browser war years we saw the impact of new features in HTML—incompatibility among browsers. My sense is that the success of Flash is largely due to the fact that Adobe owns both ends of the problem. They create the tools that generate Flash code as well as the viewer. Web developers can pretty much assume that what they see, when they build a Flash-based solution, is what the end user will see.

    I fear that we will head right back to the bad old days if HTML 5’s complex capabilities are widely employed. I suspect that ‘wait and see’ will last a pretty long time. I have other concerns about HTML 5—more on that later. What do you think—will your organization take advantage of these new capabilities as soon as they are available?


    Font snobbery? (I don’t think so.)

    For its 2010 catalog, IKEA used Verdana font instead of the customized Futura it’s used for years. To say people noticed the switch would be an understatement:

    “Ikea, stop the Verdana madness!” pleaded Tokyo’s Oliver Reichenstein on Twitter. “Words can’t describe my disgust,” spat Ben Cristensen of Melbourne. “Horrific,” lamented Christian Hughes in Dublin. The online forum Typophile closed its first post on the subject with the words, “It’s a sad day.” On Aug. 26, Romanian design consultant Marius Ursache started an online petition to get Ikea to change its mind. That night, Verdana was already a trending topic on Twitter, drawing more tweets than even Ted Kennedy.

    As a fan of IKEA and its products, I can understand the reaction. If you showed me a page out of an IKEA catalog with just text and prices (and no pictures or funky product names, of course!), I could tell you in a heartbeat that the content was from IKEA.

    Verdana may be easier to read if you’re looking at the IKEA catalog online, but that font lacks the designer-y flair of Futura. Because IKEA is known for its affordable cutting-edge design, Verdana just doesn’t seem to quite fit the bill.

    This situation reminds me of a comment a friend made about a failed hotel in Raleigh, NC. He said, “Did you see the awful Brush Script on the hotel’s sign? Those people clearly didn’t know how to run a business.” I doubt the Brush Script killed the hotel, but that bad design decision gave my friend (and probably many others) a very unfavorable impression about the company.

    Earlier this week, Sarah O’Keefe and I were doing some web research and came upon a web site that used Comic Sans. My reaction to that site was less than positive. I loathe Comic Sans, and I find it hard to take any company seriously that uses a font that emulates text in a comic book.

    A company’s use of fonts can become iconic–think about the fonts used by Coca-Cola and FedEx in their logos, for example. Font choice does have an effect on how people perceive content, a product, or a company.

    I don’t think reactions to fonts are limited to just those who work in publishing and design. No snobbery here at all. (But if noticing fonts makes me a card-carrying font snob, you better believe that card would have no Comic Sans on it.)

    For more about the impact of fonts, check out the documentary Helvetica:


    Error message melodrama

    The Shanghai Tech Writer blog has posted a screen capture of a rather ominous error message in FrameMaker:

    The licensing subsystem has failed catastrophically. You must reinstall or call customer support.

    I have never been the unfortunate recipient of that particular message in the many years I’ve worked with FrameMaker. If I did encounter that message, I would fully expect it to be accompanied by the shrieking strings from the Psycho shower scene. The use of “catastrophically” is a bit over the top. The fact I need to reinstall or contact customer support sets the tone enough, thank you very much–no soundtrack or scary adverb required.

    The editor in me wants “catastrophically” removed from that message. If that bit of text came across my desk for review, I would have pushed back hard on the use of that word. It’s bad enough the user has to get a solution to the error, and referring to the problem as “catastrophic” is certainly not doing the user any favors.


    Our first experience with print on demand (POD)

    It’s been a little over a month since we released the third edition of Technical Writing 101. The downloadable PDF version is the primary format for the new edition, and we’ve seen more sales from outside the U.S. because downloads eliminate shipping costs and delays.

Selling Technical Writing 101 as a PDF file has made the book readily available to a wider audience (and at a lower price of $20, too). However, we know that a lot of people still like to read printed books, so we wanted to offer printed copies—but without the expense of printing books, storing them, and shipping them out.

    We have published several books over the past nine years, and declining revenue from books made it difficult for us to justify spending thousands of dollars to do an offset print run of 1000+ copies of Technical Writing 101 and then pay the added expense of preparing individual books for shipment as they are ordered. Storage has also been a problem: we have only so much space for storing books in our office, and we didn’t want to spend money on climate-controlled storage for inventory. (Book bindings would melt and warp without air conditioning during our hot, humid summers here in North Carolina.) For us, the logical solution was print on demand (POD): when a buyer orders the book, a publishing company prints a copy using a digital printing process and then ships it.

    We chose Lulu.com for our first experiment with POD, and so far, we have been happy with the quality of the books from there. We are still exploring our options with POD and may try some other companies’ services in the future, but based on our experience so far, I can offer two pieces of advice:

    • Follow the specs and templates provided by the printer, and consider allowing even a bit more wiggle room for interior margins. The first test book I printed had text running too close to the binding, so I made some adjustments to add more room for the interior margins before we sold the book to the public.
• Look at the page sizes offered by the different POD publishers before choosing a size. If you choose a page size that multiple POD publishers support, you’ll have more flexibility in using another publisher’s services in the future, particularly if they offer other services (distribution, etc.) that better suit your needs. Also, ensure the page size you choose is supported when printing occurs in a country other than your own; some publishers have facilities and partners in multiple countries. In an attempt to minimize the amount of production work for the third edition, I chose a page size for Technical Writing 101 that was the closest match to the footprint of the previous edition’s layout. However, I likely would have chosen a different page size if I had known more about the common sizes across the various POD companies. The page size I chose at Lulu is not supported by CreateSpace, which is Amazon’s POD arm. When you publish through CreateSpace, you get distribution through Amazon.com, which isn’t necessarily the case with other POD publishers. (I’ve read several blog posts about how some authors use the same sets of files to simultaneously publish books through multiple POD firms to maximize the distribution of their content.)

    In these tight economic times, POD publishing makes a lot of sense, particularly when you want to release content in print but don’t want to invest a lot of money in printing multiple copies that you have no guarantee of selling. The POD model certainly was a good match for Technical Writing 101, so we decided to give it a try.

    I’ll keep you updated on our experiences with POD publishing in this blog. If you have experience with POD, please leave a comment about how it’s worked for you.


    A different take on Twittering and technical writers

    by Sheila Loring

    Technical writers abound on Twitter as do blog posts on how Twitter can make you a better tech writer.

I’d Rather Be Writing has an alternate take in the article Following the NBA Can Make You a Better Writer. Tom Johnson uses the analogy of Kobe Bryant and LeBron James playing their respective positions on the court. He argues that unless you’re a one-person shop, you’re doing yourself a disservice by trying to be a Jack- or Jill-of-all-trades. Play up your strengths, and minimize your weaknesses, tech writers. Read Tom’s article for more.

    Read More
    Content management

    Technical writing and social networks

    There is an interesting thread on techwr-l about using social networking sites to deliver product information. In the thread, Geoff Hart notes a generation gap between those who turn to unofficial online resources and those who rely on product documentation:

    The young’uns go to the net and social networks more than we older folk, who still rely on developer-provided documentation. We ignore this change at our peril. Cheryl Lockett Zubak had a lovely anecdote at WritersUA a few years ago about how she and her son both set out to solve an iPod problem; they both found the solution in roughly equal amounts of time, but she found it in Apple’s documentation, while her son found it on YouTube.

    My experience as a user straddles both relying on official docs and information available elsewhere. When my iPod locked up a few years ago, I found decent information on Apple’s web site, but the best resource for my particular problem turned out to be on YouTube. A user had made a video showing step-by-step what to do.

    The dilemma of official docs vs. Web 2.0 information partially boils down to a question of audience. As part of planning and developing content, technical communicators should evaluate and keep in mind the audience, and that audience consideration now needs to extend to how a company distributes the content. I don’t think there are cut-and-dried answers here; for example, it’s unwise to assume that everyone over a certain age is unaware of, or doesn’t use, social networks and other Web 2.0 resources. Ignoring unofficial information channels is certainly not the solution, however.

    Read More
    Content management

    Don’t type, drag to the cmd window

    I spend a good deal of time with a Windows cmd.exe window open on my desktop. If I’m not running the DITA OT, I’m testing some Perl script, or Ant, or Python, or who knows.

    A few years ago (in the Windows 98 days), I discovered a nifty cmd window trick. People are consistently amazed when I demonstrate it to them. Now I’m going to share it with you.

    Say you need to change directory to some long and gnarly path name. You could type the whole thing in. Or, if you have Windows Explorer open on your desktop, you can:

    1. Type “cd ” in the cmd window (the space is important).
    2. Go to Windows Explorer and find the folder you want to navigate to.
    3. Drag and drop the folder from Windows Explorer to the cmd window.

    Hey presto! The path name is copied to the cmd window. What’s more, if there are spaces in the path, the path is automatically quoted.

    Now you can click in the cmd window and press Enter to perform the command.

    Cool! No more typing long path names for this ToolSmith.

    This works for filenames too. If I’m running a Perl script that needs to work on a file way down my directory tree, I type “perl myScriptName.pl “, then drag and drop the file name from Windows Explorer into my cmd window.
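    The automatic quoting follows the standard Windows command-line rules: an argument containing spaces gets wrapped in double quotes, while one without spaces is left alone. As a quick sketch (using a hypothetical folder path, not one from the post), Python’s standard-library subprocess.list2cmdline applies those same rules, so you can see what the drag-and-drop would produce:

    ```python
    import subprocess

    # A hypothetical path containing spaces, like one you might drag from Explorer
    path = r"C:\My Long Folder\sub dir"

    # list2cmdline applies Windows command-line quoting rules:
    # arguments containing spaces are wrapped in double quotes
    print(subprocess.list2cmdline([path]))  # "C:\My Long Folder\sub dir"

    # A path without spaces needs no quoting and is passed through unchanged
    print(subprocess.list2cmdline([r"C:\short\path"]))  # C:\short\path
    ```

    The same quoting is why the dragged path works unmodified after a `cd ` or a `perl myScriptName.pl ` prefix: the shell sees the whole path as a single argument.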

    I’ll keep adding more ToolSmith’s Tricks as I use them. What’s your favorite trick?

    Read More
    Content management

    WMF…that’ll shut ’em up

    Which graphics formats should you use in your documentation? For print, the traditional advice is EPS for line drawings and TIFF for screen captures and photographs. That’s still good advice. These days, you might choose PDF and PNG for the same purposes. There are caveats for each of these formats, but in general, these are excellent choices.

    Of course, everybody knows to stay away from WMF, the Windows Metafile Format. WMF doesn’t handle gradients, can’t have more than 256 colors, and refuses to play nice with anything other than Windows.

    Think you’re too good to hang out with WMF? For your print and online documentation, perhaps. But it may be a great choice to give to your company’s PowerPoint users.

    Are you familiar with this scenario? PowerPoint User saw some graphics in your documentation and thought they would work for some sales presentations. The screen captures are easy; you just give PowerPoint User PNGs or BMPs or whatever. It’s the line drawings that are the problem. PowerPoint User doesn’t have Illustrator and has never heard of EPS. PowerPoint User says, “Can you give me a copy of those pictures in a format that I can use in PowerPoint? Oh, and can you make that box purple and change that font for me first? And move that line just a little bit? And make that line thicker? And remove that entire right side of the picture and split it into two pictures?”

    You want PowerPoint User to reuse the graphics; you’re all about reuse. But you have dealt with PowerPoint User before, and you know you will never get your real job done if you get pulled into the sucking vortex of PowerPoint User’s endless requests.

    The secret is to give PowerPoint User the graphics in a format that can be edited from within PowerPoint (or Word): WMF. Here’s the drill that will make you a hero:

    1. Save your graphics as WMF.
    2. Place each WMF on a separate page in a PowerPoint or Word file.
    3. Tell PowerPoint User to double-click a graphic to make it editable. (If you think your PowerPoint User is really dumb, you can double-click the graphic yourself and respond to the dialog box asking whether you want to make the drawing editable before saving the file, but nobody is that dumb.)

    WMF. It will make PowerPoint User go away…happy!

    Read More
    Content management

    And now, a word from FrameMaker product management…

    Posted today on the Adobe TechComm blog by Aseem Dokania, FrameMaker product manager:

    I have noticed discussions on some blogs and mailing lists regarding the future of FrameMaker. Let me assure you, as the Product Manager of FrameMaker, that FrameMaker is here to stay. We would do what it takes to keep FrameMaker at the leading edge of technology.

    Aseem also requests feedback, and I know my readers have opinions, so get those comments going, either here or directly on his post.

    Read More
    Content management

    A student reviews a web-based class

    In the November 2006 newsletter of the STC UK chapter, Mark Buffery of Salford Translations writes about his experience with the web-based version of our XML and Structured Authoring class.

    The course consists of 4 half-day sessions (approximately 2 hours in length each), and is presented as a web-based meeting with all participants in direct communication with one another through a telephone conference call. This enables the tutor to field any questions raised during their presentations, as and when they are raised. Once logged on, each participant can view the tutor’s screen in real-time as they demonstrate and talk them through the various functionalities being discussed. This was the first time I had ever attended a webinar, and I was not sure just how effective this would be.

    I firmly believe that, in theory, classroom training is better than web-based training. The trouble is that classroom training is also much more expensive than web-based training. Typically, the cost of travel (at least) doubles the basic tuition expense, and when you take into account the time spent traveling to and from the training site, costs are even higher. Web-based training allows you to fit the training into your regular workday. You do miss out on the many delights of the airport security line, but I think you can probably manage to contain your disappointment.

    Being in direct vocal contact over the telephone was useful, and a better compromise than I had imagined (being used to the more traditionally reciprocal teaching environment of the classroom or lecture theatre). However, once we had been online for a few minutes, it did not seem so strange.

    If you’re considering this or other courses with Scriptorium, please read his article for an overview of how things work from the student’s point of view.

    One common question that Mark does not touch on is class times. Our commitment to our students is that we will make every attempt to schedule the class so that class meetings are during regular business hours for each student. Most often, that results in an 11 a.m. to 1 p.m. meeting at our local time (U.S. East Coast). If we have only East Coast and European students, we move the time earlier; if participants are west of us, we move the time later. So far, we have not had any participants dial in from east Asia or Australia, but please feel free to sign up and we’ll make sure we meet at a time that’s reasonable for you.

    Read More