Opinion

Cardinal sin of blog (and technical) writing: making the reader feel stupid

Want to get me riled up? You can easily achieve that by making me feel stupid while reading your blog.

I read a lot of blogs about technology, and I’ll admit that I’m on the periphery of some of these blogs’ intended audiences. That being said, there is no excuse for starting a blog entry like this:

Everyone’s heard of Gordon Moore’s famous Law, but not everyone has noticed how what he said has gradually morphed into a marketing message so misleading I’m surprised Intel doesn’t require people to play the company jingle just for mentioning it.

Well, I must not be part of the “everyone” collective because I had to think long and hard about “Gordon Moore’s famous law,” and I drew a blank. (Here’s a link for those like me who can’t remember or don’t know what Moore’s Law is.)

Making a broad generalization such as “everyone knows x” is always dangerous. This is true in blog writing as well as in technical writing. In our style guide, we have a rule that writers should “not use simple or simply to describe a feature or step.” By labeling something as simple, you’re guaranteed to offend someone who doesn’t understand the concept. For example, while editing one of our white papers about the DITA Open Toolkit, I saw the word “simply” and took great delight in striking through it. From where I stand, there is little that is “simple” about the toolkit, particularly when you’re talking about configuring output.

Don’t get me wrong: I’m not saying that a blog entry, white paper, or user manual about very technical subjects has to explain every little thing. You need to address the audience at the correct level, which can be a delicate balancing act with highly technical content: overexplaining can also offend the reader. For example, in a user manual, it’s probably wise to explain up front the prerequisite knowledge. Also consider offering resources where the reader can get that knowledge: that way, you get around explaining concepts but still offer assistance to readers who need it.

In the case of online content and blog entries, you can link to definitions of terms and concepts: readers who need help can click the links to get a quick refresher course on the information, and those who don’t can keep on reading. The author of the blog post in question could have inserted a link to Moore’s Law. Granted, he does define the law in the second paragraph, but he lost me with the “everyone has heard” bit at the start.

That “everyone has heard” assumption brings me back to my primary point: don’t make your readers feel stupid, particularly by making sweeping assumptions about what “everyone” knows or by labeling something as “simple.” Insulted readers move on—and may never come back.


Opinion

Information as a right

Bear with me in a post that’s going to be even less coherent than usual. (And that’s on the heels of the Great Graphic Debacle.)

Is access to information a right or a privilege?

In democracies, we believe that citizens have a right to their government’s information.

U.S. citizens are likely familiar with the Freedom of Information Act (FoIA) and the various sunshine and open meeting laws. In 2005, India passed a Right to Information Act, which “requires every public authority to computerise their records for wide dissemination and to proactively publish certain categories of information so that the citizens need minimum recourse to request for information formally.” Other countries have similar legislation; the Right2Info organization “brings together information on the constitutional and legal framework for the right of access to information as well as case law from more than 80 countries, organized and analyzed by topic.”

In the absence of a compelling government interest (the FoIA has nine, which include national security and personnel privacy issues), governmental information should be available to citizens. (This does assume, of course, that we are talking about governments who acknowledge that they are accountable to their citizens.)

If governments have an obligation to make information accessible to their citizens, does that equate to a right to the information? What about equal access to information? Is that a right?

For example, if certain public information is readily available only on the Internet, does it follow that a citizen has a general right to Internet access? This question was actually considered by the European Parliament last year, in the context of a new French law that cuts off Internet access to repeat offenders who infringe on copyrights with file-sharing:

Opponents of the legislation have responded by suggesting that Internet access is fundamental to liberty, an argument that suffered a setback on Wednesday as the European Parliament voted against codifying Internet access as a basic human right. (Is Internet Access a Fundamental Right?, CNet.com, May 6, 2009)

There are also interesting developments in financial information. The U.S. Securities and Exchange Commission (SEC) requires publicly traded companies to make certain information available to the public. This information is delivered through the EDGAR (Electronic Data Gathering, Analysis, and Retrieval) system.

Currently, the official submission format for EDGAR data is plain text or HTML, but the SEC is phasing in the use of an XML vocabulary called XBRL (Extensible Business Reporting Language).

“The purpose of the XBRL mandate is to make corporate financial information more easily available to stockholders.” (The XBRL mandate is here: Is IT ready?, Ephraim Schwartz, InfoWorld, November 25, 2008)

So in addition to mandating disclosure of corporate financial information, the SEC is now mandating easier access to the disclosed information. (A simple implication of XBRL is that you could more easily find executive compensation numbers.)
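To see what that means in practice, here is a minimal sketch in Python of pulling a tagged fact out of an XBRL-style filing. The element names are simplified stand-ins for illustration, not the actual SEC taxonomy:

import xml.etree.ElementTree as ET

# A toy XBRL-style fragment. Real filings use the official taxonomies;
# these element names are invented stand-ins.
filing = """
<xbrl xmlns:gaap="http://example.com/gaap">
  <gaap:OfficerCompensation contextRef="FY2008" unitRef="USD">
    1250000
  </gaap:OfficerCompensation>
</xbrl>
"""

root = ET.fromstring(filing)
ns = {"gaap": "http://example.com/gaap"}
for fact in root.findall("gaap:OfficerCompensation", ns):
    # Because the fact is tagged, a program finds it directly instead
    # of scraping free-form text or HTML.
    print(int(fact.text.strip()))  # -> 1250000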

But what about non-governmental, non-regulated information? Is there a right to access? The business model of analyst firms (Gartner Group), business research companies (Dun & Bradstreet, Hoover’s), and, for that matter, the entire publishing industry (!!) says no. If you want information, you pay.

But look at the evolution of government philosophies and with that, content disclosure requirements. A king who reigns by divine right discloses what he wants to. A democratically elected leader must justify a lack of disclosure. It seems clear that we have shifted to the idea that access to government information is a right.

Will commercial information evolve in the same direction? There are actually some developments that point toward information as a right. In particular, the idea that information must be accessible—that information presentation should not exclude those with visual impairments or other disabilities—begins to build a foundation for equal access to information as a right.

What do you think? Will the right to information access be considered a bedrock principle in 50 or 100 years?

Opinion

The good manager litmus test: process change

For Kai Weber, a good manager is pivotal in making a job satisfying:

It’s the single most important factor in my satisfaction with a job. Nothing else shapes my memory and my judgment of a past job as much.

What really tests the mettle of a manager is how he or she handles process change. A good manager is absolutely critical when a documentation department switches to new authoring and publishing processes, particularly when moving from a desktop publishing environment to an XML-based one. Without good management, the implementation of new processes will likely fail. (I’ve seen bad management kill an implementation, and it’s ugly.)

So, what does a good manager do to ensure a smooth(er) transition? From my point of view, they will take the following actions (and this list is by no means all-encompassing):

  • Demonstrate the value of the change to both upper management and those in the trenches. A manager can often get the approval from upper management on a workflow change by showing cost savings in localization expenses, for example; (less) money talks to those higher up on the corporate chain. But mentions of reduced costs don’t usually warm the hearts of those who are doing the work. A good manager should show team members how the new process eliminates manual drudgery that everyone hates, explain how authors can focus more on writing good content instead of on secondary tasks (such as formatting), and so on. Demonstrating how the team’s work experience improves is more important than showing improvements in the bottom line—even though the cost savings are a result of those very changes. There is also the angle of professional development for a staff moving to a new environment; more on that in the next bullet.
  • Ensure those working in the new process understand the new tools and technologies by offering training/knowledge transfer. A good manager knows that changing processes and not including some sort of training as part of the transition is foolish; knowledge transfer should be part of the project cost. Sure, not all companies can afford formal classroom training, but there are less expensive options to consider. Web-based training is very cost effective, particularly when team members are geographically dispersed. Another option is training one or two team members and then having them share their expertise with the rest of the group (“train the trainer”). The benefits of knowledge transfer are two-fold: team members can ramp up on the new processes in less time (thereby more quickly achieving the cost savings that upper management likes so much), and the team members themselves gain new skills in their profession. A good manager recognizes that training benefits both the company as a whole and individual employees (and he or she can help team members recognize how they benefit in the long term professionally from learning new skills).
  • Know the difference between staff members who are bringing up legitimate issues with the new workflow and those who are being recalcitrant just to maintain the status quo. During periods of change, a manager will get pushback from staff. That’s a given. However, that pushback is a very healthy thing because it can point out deficiencies in the new process. A good manager will take feedback, consider it, and modify the process when there are weaknesses. Addressing genuine feedback in such a manner can also help a manager win “converts” to the new process.  However, there may be an employee (or two) who won’t be receptive to change, regardless of how well the manager has explained the change, how much training is offered, and so on. In these cases, the manager may need to consider other assignments for that employee: for example, maintaining legacy documentation in the old system, particularly when that employee’s domain knowledge is too important to lose. There are more unpleasant options (including termination) the manager may need to consider if the recalcitrant team member isn’t providing other value to the organization as a whole. Basically, a good manager won’t let one individual poison the working environment for everyone else.

I will be the first to say that these tasks are not easy, particularly dealing with an employee who is utterly against change. But managers need to address all of the preceding issues to ensure a smooth transition and to keep the work environment positive and productive for the staff as a whole.

I won’t even pretend I have covered all the issues managers need to address when a department changes workflows, and each company will face its own particular challenges because of differences in corporate culture, and so on. If you’ve been through a workflow transition, please share your experiences in the comments: I’d love to hear from both managers and team members on what worked well (and what didn’t) in how management handled the changes.

PS: You can read a more detailed discussion about managing process change in our white paper, Managing implementation of structured authoring (HTML and PDF).

Conferences Opinion

Conferences versus social media

The information you can get from a conference presentation is usually available online—in blogs, webcasts, forums, and/or white papers. So why should you invest the time and the money to attend an event in person? In the end, there’s something very powerful about eating and drinking with a group of people. (And no, alcohol is not required, although it doesn’t hurt. Until the next day, when it hurts a lot.)

The value of conferences, which is not (yet) replicated online, is in the “hallway track”—the discussions that happen between the formal sessions:

“[B]eing able to establish a one-to-one personal connection with other professionals in your field is critical to being a success.” (Dave Taylor in The Critical Business Value of Attending Conferences)


“I’ve found that time and again, I’ll hear speakers or audience members or participate in conversations and lie awake that night jam-packed with new ideas (some that don’t even correspond remotely to the concepts discussed that day). Conferences are a brainstorming paradise and a terrific opportunity for new ideas to come bubbling to the surface.” (Rand Fishkin, The Secret Value of Attending Conferences)

Scriptorium has quite a few social media “features”:

  • This blog, started in 2005
  • Webcasts, 2006 (recordings available for recent events)
  • Forums, this week (currently in the “awkward silence” phase. Help us out by posting, please!)
  • Twitter

But there’s something missing. I’ve attended and presented quite a few webcasts, and I can tell you that it’s actually far more difficult to deliver a compelling webcast than a compelling conference presentation. As the presenter, you lose the ability to gauge your audience’s body language. As an attendee, you have the temptation of your email and other distractions. The audio coming through your computer or phone is somehow not real—it’s easy to forget that there’s an actual person on the other end giving the presentation online. (There’s also the problem that many webcasts are sales pitches rather than useful presentations, but let’s leave that for another time.)

In my experience, it’s much easier to sustain online friendships with people that I have met in real life. Even a brief meeting at a conference means that I will remember a person as “that red-haired woman with the funky scarf” rather than as an email ID or Twitter handle. So, I think it’s important to go to conferences, meet lots of people, and then sustain those new professional relationships via social media.

In other words, conferences and social media complement each other. Over time, I think we’ll see them merge into a new interaction model. For example, we are already seeing Twitter as a real-time feedback engine at conference events. (Here’s an excellent discussion of how presenters should handle this.) Joe Welinske’s WritersUA is experimenting with a community site tied to the conference.

What are your thoughts? How important are conferences to your career?

Humor Opinion

Finding the blogging superhero in yourself

Power blogger.

That’s a new phrase to me, and it was new to Maria Langer, too, as she noted in her An Eclectic Mind blog. As part of a podcast panel, she was asked to offer advice on how to become a power blogger. Some of her fellow panelists mentioned the quantity of posts, but Maria’s focus was elsewhere:

The number of blog posts a blogger publishes should have nothing to do with whether he’s a power blogger. Instead, it should be the influence the blogger has over his readership and beyond. What’s important is whether a blog post makes a difference in the reader’s life. Does it teach? Make the reader think? Influence his decisions? If a blogger can consistently do any of that, he’s a power blogger.

I couldn’t agree more. I appreciate reading any blog that gives me useful information or analysis that hadn’t occurred to me. For example, I recently had issues with a new PC I’m using at home as a media center. It was not picking up all the channels in my area, and an excellent blog post helped me solve the problem with little fuss. To me, that author is a power blogger.

What I frankly find irritating—and certainly not worth my time—are blogs that are basically what I’ll call “link farms”: posting links or excerpts from other blogs with no valuable information added. I’m quite the cynic, so when I stumble upon such a blog, I figure the blogger is merely trying to generate Google hits and ad revenue, is lazy, or both. Quantity—particularly when said quantity is composed of rehashed material from other bloggers—does not a power blogger make.

When it comes to contributing to this blog, I try to write posts that have at least one nugget of helpful information, analysis, or humor, and I think that’s true of the posts from my coworkers. (At the risk of sounding like I’m bragging about my coworkers, I can’t tell you how many times I’ve read one of their posts and thought, “That’s smart!” or “That’s cool!”) Frankly, I’d rather not write anything at all than publish something just because it’s been a few days since I posted. And I have a lot more respect for bloggers who write quality posts once in a while than for those who put out lots of material borrowed from elsewhere.

And on that note, I’ll leave you with a short clip showing superheroes using their powers for a practical solution. (See, I’m trying to entertain you, too!)

Opinion

The elephant in the room—publishers and e-books

Two years ago, Nate Anderson wrote this on ars technica:

The book business, though far older than the recorded music business, is still lucky enough to have time on its side: no e-book reader currently offers a better reading experience than paper.

That’s what makes Apple’s iPad announcement so important. Books will now face stiff competition from e-books as the e-book experience improves.

Elephant in the room // flickr: mobilestreetlife

Meanwhile, the publishing industry (with the notable exception of O’Reilly Media) is desperately trying to avoid the inevitable. (For a slightly happier take, see BusinessWeek.)

Publishers are supposed to filter, edit, produce, distribute, and market content. Pre-Internet, all of these things were difficult and required significant financial resources. Today, many are easy and all are cheap.

There’s only one other thing.

Content.

But the revenue split between publishers and authors does not—yet—reflect the division of labor. The business relationships are still built on the idea that authors can’t exist without publishers. In fact, it’s the reverse that’s true.

Only the big publishers can get your book into every bookstore in the country. However, I’ve got news for you: Unless your name is on an elite shortlist with the likes of Dan Brown, John Grisham, Nora Roberts, and J.K. Rowling, it probably doesn’t matter.

If you know your audience, you can reach them at least as well as a big publisher can. And you need to reach a lot fewer people to succeed as an independent. The general rule of thumb is a 10-to-1 ratio. You’ll make the same amount selling 10,000 books through a traditional publisher as 1,000 books on your own.
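The arithmetic behind that rule of thumb is worth spelling out. The per-copy figures below are illustrative assumptions, not actual contract terms:

# Illustrative arithmetic behind the 10-to-1 rule of thumb.
# The dollar figures are assumptions, not real contract terms.
list_price = 20.00                   # cover price
traditional_cut = list_price * 0.05  # assume a ~5% royalty per copy
independent_cut = list_price * 0.50  # assume ~50% kept after costs

print(10_000 * traditional_cut)  # 10000.0 from 10,000 copies, traditional
print(1_000 * independent_cut)   # 10000.0 from 1,000 copies, independent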

It’s not so difficult to hire freelancers (especially in this economy) to edit and produce your book, if that’s not your cup of tea. Distribution is doable—Amazon is easy, bookstores a little more challenging. This is where e-books will accelerate the change—the challenges of shelf space and returns simply disappear.

And even if you have a publisher, they will expect you to do most of the marketing.

So, what will successful publishers look like in 2020?

  • They will provide editorial and production support for writers who do not want to deal with technical issues.
  • They will support authors in marketing by helping them with blogging platforms and other social media efforts.
  • They will get a much smaller cut of revenues than they currently do.

Actually, that looks a lot like Lulu.

    News Opinion

    ePub + tech pub = ?

    At Scriptorium earlier this week, we all watched live blogs of the iPad announcement. (What else would you expect from a bunch of techies?) One feature of the iPad that really got us talking (and thinking) is its support of the ePub open standard for ebooks.

    ePub is basically a collection of XHTML files zipped up with some baggage files. Considering a lot of technical documentation groups create HTML output as a deliverable, it’s likely not a huge step further to create an ePub version of the content. There is a transform for DocBook to ePub; there is a similar effort underway for DITA. You can also save InDesign files to ePub.
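    As a sketch of how little magic is involved, here is roughly what a minimal ePub container looks like if you assemble one by hand in Python. The file names and content under OEBPS are invented for illustration, and a real ePub also needs the OPF package manifest (and, for ePub 2, an NCX table of contents) that this skeleton only points to:

import zipfile

# Skeleton of an ePub container: a zip of XHTML plus "baggage" files.
# File names and content are invented; the OPF manifest referenced in
# container.xml is not built here.
with zipfile.ZipFile("manual.epub", "w") as epub:
    # The mimetype entry must come first and be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip",
                  compress_type=zipfile.ZIP_STORED)
    # container.xml points reading systems at the package manifest.
    epub.writestr("META-INF/container.xml", """<?xml version="1.0"?>
<container version="1.0"
           xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>""")
    # The payload is ordinary XHTML -- the deliverable many doc groups
    # already produce.
    epub.writestr("OEBPS/topic1.xhtml",
                  "<html><body><h1>Installing the widget</h1></body></html>")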

    While the paths to creating an ePub version seem pretty straightforward, does it make sense to release technical content as an ebook? I think a lot of the same reasons for releasing online content apply (less tree death, no printing costs, and interactivity, in particular), but there are other issues to consider, too: audience, how quickly ebook readers and software become widespread, how the features and benefits of the format stack up against those of PDF files and browser-based help, and so on. And there’s also the issue of actually leveraging the features of an output instead of merely doing the minimum of releasing text and images in that format. (In the PDF version of a user manual, have you ever clicked an entry in the table of contents only to discover the TOC has no links? When that happens, I assume the company that released the content was more interested in using the format to offload the printing costs onto me and less interested in using PDF as a way to make my life easier.)

    The technology supporting ebooks will continue to evolve, and there likely will be a battle to see which ebook file format(s) will reign supreme. (I suspect Apple’s choice of the ePub format will raise that format’s prospects.) While the file formats get shaken out and ebooks continue to emerge as a way to disseminate content, technical communicators would be wise to determine how the format could fit into their strategies for getting information to end users.

    What considerations come to your mind when evaluating the possibility of releasing your content in ePub (or other ebook) format?

    Opinion

    Unedited content will get you deleted

    flickr: Nics events

    The abundance of information today forces content consumers to filter out redundant and unworthy information—much like an editor would. That, however, doesn’t mean content creators can throw up their hands and send out unreviewed content for readers to sort through. Instead, authors (and particularly their managers) need to understand how editing skills can ensure their information doesn’t get filtered out:

    [A]re we getting any better at editing in a broader context, which is editing ourselves? Or to rephrase it, becoming a better critic of our own work? Penelope Trunk (again) lists the reasons why she works with an editor for whatever she writes in public:

    • Start strong – cut boring introduction
    • Be short – and be brave
    • Have a genuine connection – write stuff that matters to the readers
    • Be passionate – write stuff that matters to you
    • Have one good piece of research – back your idea up

    They have one thing in common: difficult to do on our own.

    Granted, some of those bullet points don’t completely apply to technical writing, but it is hard to edit your own work, regardless of the kind of content. For that very reason, folks at Scriptorium get someone else to review their writing. Whether the content is in a proposal, book, white paper, important email to a client, or a blog post, we understand that somebody else’s feedback is generally going to make that information better.

    The same is true of technical content. A lot of documentation departments may no longer hire dedicated editors, so peer reviewers handle editing tasks. Electronic review tools also make it easier than ever to offer feedback: even a quick online review of content by another writer will likely catch some potentially embarrassing typos and yield suggestions to make information more accessible to the end user. (You can read more about the importance of editing in a PDF excerpt from the latest edition of Technical Writing 101.)

    With so much competing information out on the Internet, companies can’t afford to have their official documentation ignored because it contains technical errors, misspellings, and other problems that damage the content’s credibility. Even if you don’t have the time or budget for a full-blown edit, take just a little time to have someone do a quick technical review of your work. Otherwise, end users seeking information about your product will likely do their own editing—in their minds, they’ll delete you as a source of reliable information. And that’s a deletion that’s hard to STET.

    PS: Software that checks spelling and grammar is helpful, but it’s not enough: it won’t point out technical inaccuracies.

    Opinion Webinar

    Behold, the power of free

    Lately, our webcasts are getting great participation. The December event had 100 people in attendance (the registered number was even higher), and the numbers for the next few months are strong, as well. Previous webcasts had attendance of A Lot Less than 100. What changed? The webcasts are now free. (Missing an event? Check our archives.)

    We’re going in a similar direction with white papers. We charge for some content, but we also offer a ton of free information.

    The idea is that free (and high-quality) information raises our profile and therefore later brings in new projects. I’m not so sure, though, that we have any evidence that supports this theory yet.

    So, I thought I’d ask my readers. Do you evaluate potential vendors based on offerings such as webcasts and white papers? Are there other, more important factors?

    PS Upcoming events, including several DITA webcasts, are listed on our events page.

    Opinion

    2010 predictions for technical communication

    It’s time for my (apparently biennial) predictions post. For those of you keeping score at home, you can see the last round of predictions here. Executive summary: no clear leader for DITA editing, reuse analyzers, Web 2.0 integration, global business, Flash. In retrospect, I didn’t exactly stick my neck out on any of those. Let’s see if I can do better this year.

    Desktop authoring begins to fade

    Everyone else is talking about the cloud, but what about tech comm? Many content creation efforts will shift into the cloud and away from desktop applications and their monstrous footprints (I’m looking at you, Adobe). When your content lives in the cloud, you can edit from anywhere and be much less dependent on a specific computer loaded with specific applications.

    I expect to see much more content creation migrate into web applications, such as wiki software and blogging software. I do not, at this point, see much potential for the various “online word processors,” such as Buzzword or Zoho Writer, for tech comm. Creating documents longer than four or five pages in these environments is painful.

    In the ideal universe, I’d like to see more support for DITA and/or XML in these tools, but I’m not holding my breath for this in 2010.

    The ends justify the means

    From what we are seeing, the rate of XML adoption is steady or even accelerating. But the rationale for XML is shifting. In the past, the benefits of structured authoring—consistency, template enforcement, and content reuse—have been the primary drivers. But in several newer projects, XML is a means to an end rather than a goal—our customers want to extract information from databases, or transfer information between two otherwise incompatible applications. The project justifications reach beyond the issues of content quality and instead focus on integrating content from multiple information sources.

    Social-ism

    Is the hype about social media overblown? Actually, I don’t think so. I did a webcast (YouTube link) on this topic in December 2009. The short version: Technical communicators must now compete with information being generated by the user community. This requires greater transparency and better content.

    My prediction is that a strategy for integrating social media and official tech comm will be critical in 2010 and beyond.

    Collaboration

    The days of the hermit tech writer are numbered. Close collaboration with product experts, the user community, and others will become the norm. This requires tools that are accessible to non-specialists and that offer easy ways to manage input from collaborators.

    Language shifts

    There are a couple of interesting changes in language:

    • Content strategy rather than documentation plan
    • Decision engine (such as Hunch, Wolfram Alpha, and Aardvark) rather than search engine

    What are your predictions for 2010?


    News Opinion

    Are you ready for mobile content?

    A report from Morgan Stanley states that mobile Internet use will be twice that of desktop Internet and that the iPhone/smartphone “may prove to be the fastest ramping and most disruptive technology product / service launch the world has ever seen.” That “disruption” is already affecting the methods for distributing technical content.

    With users having Internet access at their fingertips anywhere they go, Internet searches will continue to drive how people find product information. Desktop Internet use has greatly reshaped how technical communicators distribute information, and having twice as many people using mobile Internet will only push us toward more online delivery—and in formats (some yet to be developed, I’d guess) that are compatible with smaller smartphone screens.

    The growing number of people with mobile Internet access underscores the importance of high Internet search rankings and a social media strategy for your information. If you haven’t already investigated optimizing your content for search engines and integrating social media as part of your development and distribution efforts, it’s probably wise to do that sooner rather than later. Also, have you looked at how your web site is displayed on a smartphone?

    If you don’t consider the impact of the mobile Internet, your documentation may be relegated to the Island of Misfit Manuals, where change pages and manuals in three-ring binders spend their days yellowing away.

    Opinion

    Angst and authority

    Clay Shirky has a fascinating post on the concept of algorithmic authority: the idea that large systems, such as Google PageRank or Wikipedia, have authority (that is, credibility) because of the way the system works. In other words, a page that is returned first in a Google search is assumed by the searcher to be more credible because it is ranked first.
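    The mechanics make Shirky’s point easier to see. Here is a toy version of the PageRank idea in Python; the three-page link graph and the 0.85 damping factor are illustrative, not Google’s actual parameters:

# Toy PageRank: a page's authority is fed by the authority of the
# pages that link to it. Graph and damping factor are illustrative.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
rank = {page: 1 / len(links) for page in links}

for _ in range(50):  # power iteration until the ranks settle
    new_rank = {page: (1 - 0.85) / len(links) for page in links}
    for page, outlinks in links.items():
        share = 0.85 * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

print(rank)  # "c" ranks highest: both other pages link to it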

    That made me think about authority in technical content.

    As an in-house technical writer, your words have authority and your content carries the corporate logo. But although this should theoretically increase your credibility, it seems that the reverse is true. Consider, for instance, the following hypothetical book titles:

    • XYZ User’s Guide—This document, produced by the makers of XYZ, is shipped in the product box (or downloaded as a PDF with the software)
    • XYZ Classroom in a Book—This document is available in bookstores and is produced by XYZ Press
    • XYZ: The Complete Reference*—This document is available in bookstores and is produced by a third-party publisher

    Which of these books would you turn to for help? What are your expectations of each document?

    I believe that credibility, and thus authority, increases with distance from the product’s maker. In other words, the third-party book has more authority than either of the other two. Credibility is compromised by close association with the organization that makes the product.

    When we apply this concept to information on the web, the implications are troubling for professional content creators who work inside corporations. If corporate authorship decreases authority, we get this result:

    online help < user forums on corporate site < user forums on third-party site

    Will people looking for user assistance gravitate toward independent third-party sites? What does that mean for in-house authors? How can you increase your credibility as a corporate technical communicator?

    * Feel free to substitute your favorite book series title: XYZ for Dummies, XYZ: The Missing Manual, The Complete Idiot’s Guide to XYZ, XYZ Annoyances, …. I should probably also mention that I have written both a Dummies book and a Complete Reference.

    Opinion

    To bid or not to bid—a vendor’s guide to RFPs

    Request for Proposal (RFP) documents usually arrive in the dead of night, addressed to sales@scriptorium or sometimes info@scriptorium.

    Dear Vendor,

    LargeCompany is looking for a partner who can work magic and walk on water. Please refer to the attached RFP.

    Signed,

    Somebody in Purchasing

    Our instinct is to crack open the RFP and start writing a proposal. But over time, we’ve learned to take a step back and evaluate the RFP first to ensure that it’s worth our time.

    In this post, I’ve outlined some of the issues that we consider before responding to an RFP.

    Opinion

    Would you use just a gardening trowel to plant a tree?

    As technical communicators, our ultimate goal is to create accessible content that helps users solve problems. Focusing on developing quality content is the priority, but you can take that viewpoint to an extreme by saying that content-creation tools are just a convenience for technical writers:

    The tools we use in our wacky profession are a convenience for us, as are the techniques we use. Users don’t care if we use FrameMaker, AuthorIt, Flare, Word, AsciiDoc, OpenOffice.org Writer, DITA or DocBook to create the content. They don’t give a hoot if the content is single sourced or topic based.

    Sure, end users probably don’t know or care about the tools used to develop content. However, users do have eagle eyes for spotting inconsistencies in content, and they will call you out for conflicting information in a heartbeat (or worse, just abandon the official user docs altogether for being “unreliable”). If your department has implemented reuse and single-sourcing techniques that eliminate those inconsistencies, your end users are going to have a lot more faith in the validity of the content you provide.

    Also, a structured authoring process that removes the burden of formatting content from the authoring process gives tech writers more time to focus on providing quality content to the end user. Yep, the end user doesn’t give a fig that the PDF or HTML file they are reading was generated from DITA-based content, but because the tech writers creating that content focused on just writing instead of writing, formatting, and converting the content, the information is probably better written and more useful.

    Dogwood // flickr: hlkljgk

    All this talk about tools makes me think about the implements I use for gardening. A few years ago, I planted a young dogwood tree in my back yard. I could have used a small gardening trowel to dig the hole, but instead, I chose a standard-size shovel. Even though the tree had no opinion on the tool I used (at least I don’t think it did!), it certainly benefited from my tool selection. Because I was able to dig the hole and plant the tree in a shorter amount of time, the tree was able to develop a new root system in its new home more quickly. Today, that tree is flourishing and is about four feet taller than it was when I planted it.

    The same applies to technical content. If a tool or process improves the consistency of content, gives authors more time to focus on the content, and shortens the time it takes to distribute that content, then the choice and application of a tool are much more than mere “conveniences.”

    Opinion

    Fear the peer

    (This post is late. In my defense, I had the flu and the glow of the computer monitor was painful. Also, neurons were having trouble firing across the congestion in my head. At least, that’s my medical explanation for it. PS I don’t recommend the flu. Avoid if possible.)

    Which of these scenarios do you think is most intimidating?

    1. Giving a presentation to a dozen executives at a prospective client, which will decide whether we get a project or not
    2. Giving a presentation to 50 people, including half a dozen supportive fellow consultants
    3. Giving a presentation to 400 people at a major conference

    I’ve faced all three of these, and while each scenario presents its own set of stressors, the most intimidating, by far, is option #2.

    In general, I’m fairly confident in my ability to get up in front of a group of people and deliver some useful information in a reasonably interesting fashion. But there is something uniquely terrifying about presenting in front of your peers.

    Torches // Flickr: dantaylor

    At LavaCon, I faced the nightmare—a murderers’ row of consultants in the back of the room, fondling various tweeting implements.

    Here are some of the worst-case scenarios:

    • No new information. I have nothing to say that my colleagues haven’t heard before, and they could have said it better.
    • Disagreement. My peers think that my point of view is incorrect or, worse, my facts are wrong.
    • Boring. I have nothing new to say, my information is wrong, and I’m not interesting.

    Of course, my peers were gracious, participated in the session in a constructive way, and said nice things afterwards. I didn’t even see any cheeky tweets. (I’m looking at you, @scottabel.)

    All in all, I’d have to say that it’s a lot more fun to sit in the back of someone else’s presentation, though. Neil Perlin handled his peanut gallery deftly, asking questions like, “With the exception of the back row, how many of you enjoy writing XSLT code?”

    Rahel Bailie said it best, I think. After completing her excellent presentation, she noted that presenting in front of peers is terribly stressful because, “I really want you to like it.”

    Opinion

    Don’t forget localization

    I was reading a list of seven tips for improving technical writing, and the first tip gave me pause:

    1. Analogy – provide a comparison or analogy to describe how something abstract works.

    Not everyone is as familiar with the system as you are. Try to help the reader along by giving as much direction as possible so they see the bigger picture.

    Once they understand how the system works at a high level, they will have more confidence in reading the more technical details.

    If your content is going to be localized, comparisons and analogies are going to be problematic because they are often culturally specific. Here’s a good example of how an analogy had to be changed in marketing material so that it made sense to audiences in different parts of the world:

    When the Walt Disney World Resort created promotional material for a North American audience, it stated that the resort is 47 square miles or “roughly half the size of Rhode Island.”

    Outside of North America, many people don’t know about Rhode Island, and this analogy would have no meaning. Walt Disney wisely chose to customize the material for each target market. For instance, in the UK version, the material states that the resort is “the size of greater Manchester,” and in Japan, the resort is described as the size of the subway system.

    Disney may have the deep pockets to pay for rewriting marketing content for various audiences, but I suspect there are few technical documentation departments these days that have the money or resources to reformulate analogies for different regions. You’re better off avoiding analogies altogether when writing technical content.

    Opinion

    Strategy < tactics < execution

    I read Execution: The Discipline of Getting Things Done several years ago, and much of this post is based on the information in that book. 

    Because of a Series of Troublesome Committees, I find myself thinking about three big-picture concepts: strategy, tactics, and execution:

    • Strategy is the overall plan. For example, one strategy for getting new projects at Scriptorium is to establish ourselves as experts in our chosen field.
    • Tactics are specific actions to achieve the plan. Our tactics include writing articles and delivering conference presentations that buttress our claim of expertise.
    • Execution is what happens after you pick a strategy and develop some tactics. That’s when we write the articles and attend the conferences.

    Each of these stages is a prerequisite for the next. That is, you start by developing a strategy and can then pick your tactics. Finally, you have to execute on the plan.

    You can fail at every point in the process:

    • Choose the wrong strategy, and not much else matters. Great tactics and excellent execution will not rescue you if you have chosen the wrong approach.
    • If you have the right strategy, but the wrong tactics, you may have some limited success, but poor tactics will work against you.
    • Worst of all, though, is bad execution. You pick the right strategy and the right tactics, and then sabotage the whole thing with poor follow-through or lousy performance. For example, writing an article full of bad grammar or delivering a boring, technically inaccurate presentation would be bad execution for us.

    At every stage, you face constraints. For instance, if your budget is limited, you might not be able to justify the most expensive tactic, even though it might have been the most effective. But once you work through your constraints and choose your tactics, there’s really no excuse for doing those activities badly.

    Some things to keep in mind when working with technical communicators:

    • Forget the spin. Exorcising marketing and PR messages from technical content is a core job skill for many of us. Simple, honest, and straightforward messages are better. If you try to spin your message, expect a scornful response.
    • Language matters. Writers become writers because they care about language. If you want their respect, you need to show that you also care about language. At a minimum, grammar and mechanics need to be accurate. If you can go beyond the basics and demonstrate graceful writing, you will score bonus points.

    Opinion

    A strident defense of mediocre formatting

    In addition to a gratuitous (and entertaining) swipe at “noisome” DITA “fanboys,” Roger Hart argues that we need to reconsider the disadvantages of automated formatting:

    The thing is, [separation of content and formatting has] all been taken rather stridently to heart in certain quarters, leading to a knee jerk reaction whenever author-controlled formatting/pagination/lineation is mentioned as anything other than bleak, sulphurous devilry. This is twaddle. […]

    Uncertainty in meaning is anathema to user intelligibility. If we’re going to make sure we’re not writing poetry, there’s definitely value in having poetry’s level of control over semantic blocks.

    Of course, it’s fully possible that this is an expensive distraction.

    Possible? It’s definitely expensive. It’s possible that it’s a distraction.

    I think Hart perhaps unintentionally put his finger on the real issue: value. How much value (in the form of improved comprehension) is added to a technical document when you are able, in the words of commenter Brian Harris, to “lovingly handcraft” each page?

    How much value (in the form of cost avoidance) is added to an organization when you are able to spit out a reasonably formatted document in a few minutes?

    Actually, I have a different question. How far should we take this argument? Here’s an example of the pinnacle of handcrafting:

    Book of Kells image
    Can we all agree that this might perhaps take handcrafting a little too far?

    Compared to the Book of Kells (above), the Gutenberg Bible looks quite pedestrian:

    Gutenberg Bible image

    You can just imagine the scribes with their quills, lapis, gold leaf, and other implements muttering, “That Gutenberg and his noisome fanboys. He can’t even render two colors without our help. Poser. It’ll never last.”

    Formatting automation removes cost from the process of creating and delivering content. For technical documents that change often and are perhaps delivered in multiple languages, it removes a lot of cost. Let’s assume that handcrafted pages can improve ease of reading and comprehension with careful copy-fitting and adjusted spacing (Hart’s article mentions “headings, line breaks, intra-word, etc”). This increases the cost of the content.

    What happens when content is expensive? Fewer people get to see it.

    Books in Europe went from 50,000 before Gutenberg to 12 million 50 years later.

    I think we can all agree that e-books offer none of the typographic sophistication in question here. Bill Gates (yes, that Bill Gates) wrote in 1999:

    It is hard to imagine today, but one of the greatest contributions of e-books may eventually be in improving literacy and education in less-developed countries. Today people in poor countries cannot afford to buy books and rarely have access to a library. 

    Essentially, we can produce documents inexpensively and give more people access to them as a direct result of lower cost, or we can climb on our typographic high horse and whine about word spacing.

    I’m with the noisome fanboys.

    Opinion

    HTML 5: Browser Wars Reprise?

    Recently, I ran across an article by Rob Cherny in Dr. Dobb’s Journal. He suggests that the added features in HTML 5 combined with an end to the development of XHTML point to a brighter standards-based future. He sees closed solutions like Flash, Silverlight, and JavaFX being supplanted directly by HTML 5 code. His view is that the web owes its success to standards.

    It’s tempting to agree. Standards certainly allow for collaborative growth. I’m not the least bit convinced, though, that collaborative growth is the foundation of the web’s success. I believe that the web’s incredible success is really traceable to the simplicity and flexibility of HTML. Each new version takes us further from that simplicity.

    Through the browser war years we saw the impact of new features in HTML—incompatibility among browsers. My sense is that the success of Flash is largely due to the fact that Adobe owns both ends of the problem. They create the tools that generate Flash code as well as the viewer. Web developers can pretty much assume that what they see, when they build a Flash-based solution, is what the end user will see.

    I fear that we will head right back to the bad old days if HTML 5’s complex capabilities are widely employed. I suspect that ‘wait and see’ will last a pretty long time. I have other concerns about HTML 5—more on that later. What do you think—will your organization take advantage of these new capabilities as soon as they are available?

    Opinion

    Font snobbery? (I don’t think so.)

    For its 2010 catalog, IKEA used the Verdana font instead of the customized Futura it had used for years. To say people noticed the switch would be an understatement:

    “Ikea, stop the Verdana madness!” pleaded Tokyo’s Oliver Reichenstein on Twitter. “Words can’t describe my disgust,” spat Ben Cristensen of Melbourne. “Horrific,” lamented Christian Hughes in Dublin. The online forum Typophile closed its first post on the subject with the words, “It’s a sad day.” On Aug. 26, Romanian design consultant Marius Ursache started an online petition to get Ikea to change its mind. That night, Verdana was already a trending topic on Twitter, drawing more tweets than even Ted Kennedy.

    As a fan of IKEA and its products, I can understand the reaction. If you showed me a page out of an IKEA catalog with just text and prices (and no pictures or funky product names, of course!), I could tell you in a heartbeat that the content was from IKEA.

    Verdana may be easier to read if you’re looking at the IKEA catalog online, but that font lacks the designer-y flair of Futura. Because IKEA is known for its affordable cutting-edge design, Verdana just doesn’t seem to quite fit the bill.

    This situation reminds me of a comment a friend made about a failed hotel in Raleigh, NC. He said, “Did you see the awful Brush Script on the hotel’s sign? Those people clearly didn’t know how to run a business.” I doubt the Brush Script killed the hotel, but that bad design decision gave my friend (and probably many others) a very unfavorable impression about the company.

    Earlier this week, Sarah O’Keefe and I were doing some web research and came upon a web site that used Comic Sans. My reaction to that site was less than positive. I loathe Comic Sans, and I find it hard to take any company seriously that uses a font that emulates text in a comic book.

    A company’s use of fonts can become iconic–think about the fonts used by Coca-Cola and FedEx in their logos, for example. Font choice does have an effect on how people perceive content, a product, or a company.

    I don’t think reactions to fonts are limited to just those who work in publishing and design. No snobbery here at all. (But if noticing fonts makes me a card-carrying font snob, you better believe that card would have no Comic Sans on it.)

    For more about the impact of fonts, check out the documentary Helvetica.

    Opinion

    In defense of English majors: we can understand business issues, too

    In his latest blog entry, Neil Perlin explains how important it is for technical writers to have an understanding of business issues. With such knowledge, they can contribute to cost justifications for decisions that affect them directly. I couldn’t agree more with that. It is absolutely in writers’ best interests (and a matter of self-preservation) to understand processes and costs.

    I strongly disagree, however, with the following assertion:

    Writers from fine arts or English backgrounds can rarely discuss cost-justification in finance terms, so they have little input on buying decisions.

    I am an English major, and I freely admit I am more of a “words” person than a “numbers” person. That being said, I am no slouch in the finance department. (Calculus is another matter, though.) I know many people with degrees in English and the liberal arts who are quite adept at understanding The Big Picture and developing business cases. Lumping all of us into a “can rarely discuss cost-justification” group is unfair.

    Now I need to remind myself not to group software developers into a “can rarely write a coherent procedure” category. (It’s easy to make generalizations when you’re not the target of them.)

    Opinion

    Error message melodrama

    The Shanghai Tech Writer blog has posted a screen capture of a rather ominous error message in FrameMaker:

    The licensing subsystem has failed catastrophically. You must reinstall or call customer support.

    I have never been the unfortunate recipient of that particular message in the many years I’ve worked with FrameMaker. If I did encounter that message, I would fully expect it to be accompanied by the shrieking strings from the Psycho shower scene. The use of “catastrophically” is a bit over the top. The fact I need to reinstall or contact customer support sets the tone enough, thank you very much–no soundtrack or scary adverb required.

    The editor in me wants “catastrophically” removed from that message. If that bit of text came across my desk for review, I would have pushed back hard on the use of that word. It’s bad enough that the user has to track down a solution to the error; referring to the problem as “catastrophic” is certainly not doing the user any favors.

    Opinion

    Authoring tools do matter

    “I can write in anything.”
    “The tool doesn’t matter.”
    “I can learn any new tool.”

    Most of the time, I agree. But then, there are the exceptions.

    One of our customers is using FrameMaker to produce content that is delivered in HTML. (They use structured FrameMaker, generate XML, and then transform via XSLT into HTML.) Their rationale for using FrameMaker was:

    • The project was on an extreme deadline.
    • The writers already knew FrameMaker.
    • FrameMaker is already installed on the writers’ systems.

    All valid points.

    But.

    We have had a continuous stream of requests from the writers to make adjustments to the FrameMaker formatting. Things like “the bullets seem a little too far from the text; can you move them over?”

    FrameMaker is being used as an authoring tool only. FrameMaker formatting is discarded on export; HTML formatting is controlled mainly by CSS. However, even after repeated explanations, we continue to receive requests to modify the FrameMaker formatting.
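    For the curious, here is a stripped-down sketch of that kind of pipeline in Python, using the lxml library. The element names, class name, and stylesheet are invented for illustration; the point is that the source carries structure only, so nothing a writer adjusts in the authoring tool’s formatting survives the trip:

from lxml import etree

# Minimal stand-in for the real pipeline: structured XML in, HTML out.
# Appearance is left entirely to a CSS rule for ul.steps; the source
# and the stylesheet say nothing about bullets, indents, or leading.
xslt = etree.XML("""\
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="steps">
    <ul class="steps"><xsl:apply-templates/></ul>
  </xsl:template>
  <xsl:template match="step">
    <li><xsl:value-of select="."/></li>
  </xsl:template>
</xsl:stylesheet>
""")

source = etree.XML("<steps><step>Insert tab A into slot B.</step></steps>")
html = etree.XSLT(xslt)(source)
print(etree.tostring(html).decode())
# -> <ul class="steps"><li>Insert tab A into slot B.</li></ul>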

    In this specific case, the authoring tool does matter. Writers are focusing on the wrong set of issues (leading, kerning, print formatting), none of which is actually relevant for the output.

    Why are they focused on this stuff? Because they can. It seems to me that moving authors to a WYSIOO (what you see is one option) tool, such as oXygen or XMetaL, instead of a WYSIWYG tool (FrameMaker) would eliminate the obsession with irrelevant formatting.

    Opinion

    This is the future of technical communication

    First, read this article in the New York Times about the struggle to keep a reporter’s kidnapping quiet:

    For seven months, The New York Times managed to keep out of the news the fact that one of its reporters, David Rohde, had been kidnapped by the Taliban. But that was pretty straightforward compared with keeping it off Wikipedia. 

    Now, think about these issues as applied to technical communication. Let’s assume that your organization has online community — forums and a wiki, maybe. Technical communicators are responsible for monitoring and managing the community. Under what circumstances do you delete information? How do you respond when:

    • Information is inaccurate
    • Information is unflattering
    • Both

    What if the information is accurate but incomplete?
    What if someone describes a way of using your product that could cause injury, even though it’s technically possible? Do you delete the information? Do you add a comment warning of possible injury? What if the reader sees the original post but not the comment?

    In the absence of safety concerns, I think that accuracy must win. Thus, as the information curator, you have a responsibility to correct inaccurate information. If the inaccuracy is truly dangerous, you may need to edit the post directly. Make sure that you disclose what you’ve done with brackets. For example:

    I like riding my scooter down mountains, especially without guardrails. Wheee! [This is a really bad idea because You Might Die. -moderator]

    or

    I like [really bad idea redacted by moderator]. Wheee!

    Deleting unflattering (but accurate) information will probably backfire on the organization. Instead of censoring negative content, try addressing the concern being identified. Think of an impolite forum post as customer feedback. Does the poster have a valid point? Can you fix the problem that’s been identified?

    I hate your scooters. They don’t come in enough colors. And they suck. 

    What colors would you like to see? We do have two dozen available, see this list.
    – Joe in TechComm

    The life-or-death issues around Mr. Rohde’s kidnapping are relatively straightforward. We are likely to have much more difficult judgment calls in typical technical communication. Imagine, for example, that information were being suppressed because it criticized security arrangements and not because of safety concerns for the reporter. In that case, I think we can agree that Wikipedia’s response would have (and should have) been different. What would an equivalent scenario look like in your organization?
