
Cardinal sin of blog (and technical) writing: making the reader feel stupid

Want to get me riled up? You can easily achieve that by making me feel stupid while reading your blog.

I read a lot of blogs about technology, and I’ll admit that I’m on the periphery of some of these blogs’ intended audiences. That being said, there is no excuse for starting a blog entry like this:

Everyone’s heard of Gordon Moore’s famous Law, but not everyone has noticed how what he said has gradually morphed into a marketing message so misleading I’m surprised Intel doesn’t require people to play the company jingle just for mentioning it.

Well, I must not be part of the “everyone” collective because I had to think long and hard about “Gordon Moore’s famous law,” and I drew a blank. (Here’s a link for those like me who can’t remember or don’t know what Moore’s Law is.)

Making a broad generalization such as “everyone knows x” is always dangerous. This is true in blog writing as well as in technical writing. In our style guide, we have a rule that writers should “not use simple or simply to describe a feature or step.” If you label something as simple, you are guaranteed to offend someone who doesn’t understand the concept. For example, while editing one of our white papers about the DITA Open Toolkit, I saw the word “simply” and took great delight in striking through it. From where I stand, there is little that is “simple” about the toolkit, particularly when you’re talking about configuring output.

Don’t get me wrong: I’m not saying that a blog entry, white paper, or user manual about very technical subjects has to explain every little thing. You need to address the audience at the correct level, which can be a delicate balancing act with highly technical content: overexplaining can also offend the reader. For example, in a user manual, it’s probably wise to explain up front the prerequisite knowledge. Also consider offering resources where the reader can get that knowledge: that way, you get around explaining concepts but still offer assistance to readers who need it.

In the case of online content and blog entries, you can link to definitions of terms and concepts: readers who need help can click the links to get a quick refresher course on the information, and those who don’t can keep on reading. The author of the blog post in question could have inserted a link to Moore’s Law. Granted, he does define the law in the second paragraph, but he lost me with the “everyone has heard” bit at the start.

That “everyone has heard” assumption brings me back to my primary point: don’t make your readers feel stupid, particularly by making sweeping assumptions about what “everyone” knows or by labeling something as “simple.” Insulted readers move on—and may never come back.


Information as a right

Bear with me in a post that’s going to be even less coherent than usual. (And that’s on the heels of the Great Graphic Debacle.)

Is access to information a right or a privilege?

In democracies, we believe that citizens have a right to their government’s information.

U.S. citizens are likely familiar with the Freedom of Information Act (FOIA) and the various sunshine and open meeting laws. In 2005, India passed a Right to Information Act, which “requires every public authority to computerise their records for wide dissemination and to proactively publish certain categories of information so that the citizens need minimum recourse to request for information formally.” Other countries have similar legislation; the Right2Info organization “brings together information on the constitutional and legal framework for the right of access to information as well as case law from more than 80 countries, organized and analyzed by topic.”

In the absence of a compelling government interest (the FOIA has nine exemptions, which include national security and personnel privacy issues), governmental information should be available to citizens. (This does assume, of course, that we are talking about governments that acknowledge they are accountable to their citizens.)

If governments have an obligation to make information accessible to their citizens, does that equate to a right to the information? What about equal access to information? Is that a right?

For example, if certain public information is readily available only on the Internet, does it follow that a citizen has a general right to Internet access? This question was actually considered by the European Parliament last year, in the context of a new French law that cuts off Internet access to repeat offenders who infringe on copyrights with file-sharing:

Opponents of the legislation have responded by suggesting that Internet access is fundamental to liberty, an argument that suffered a setback on Wednesday as the European Parliament voted against codifying Internet access as a basic human right. (Is Internet Access a Fundamental Right?, May 6, 2009)

There are also interesting developments in financial information. The U.S. Securities and Exchange Commission (SEC) requires publicly traded companies to make certain information available to the public. This information is delivered through the EDGAR (Electronic Data Gathering, Analysis, and Retrieval) system.

Currently, the official submission format for EDGAR data is plain text or HTML, but the SEC is phasing in the use of an XML vocabulary called XBRL (Extensible Business Reporting Language).

“The purpose of the XBRL mandate is to make corporate financial information more easily available to stockholders.” (The XBRL mandate is here: Is IT ready?, Ephraim Schwartz, InfoWorld, November 25, 2008)

So in addition to mandating disclosure of corporate financial information, the SEC is now mandating easier access to the disclosed information. (A simple implication of XBRL is that you could more easily find executive compensation numbers.)
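To make the difference concrete, here is a rough sketch of what tagged disclosure enables. The fragment below is a simplified, hypothetical XBRL-style document (the element names are stand-ins, not the actual us-gaap taxonomy; real EDGAR filings also use namespaces and dated context declarations), but it shows why tagged data is easier to query than a plain-text or HTML filing:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified XBRL-style fragment; real filings use the
# us-gaap taxonomy with namespaces and context/unit declarations.
filing = """
<xbrl>
  <Revenues contextRef="FY2009" unitRef="USD">1500000</Revenues>
  <ExecutiveCompensation contextRef="FY2009" unitRef="USD">250000</ExecutiveCompensation>
</xbrl>
"""

root = ET.fromstring(filing)
# With tagged data, a specific figure is one query away -- no scraping
# of free-form text required.
compensation = int(root.findtext("ExecutiveCompensation"))
print(compensation)  # 250000
```

With plain text or HTML, the same lookup would mean pattern-matching prose that every company formats differently; the tag makes the meaning machine-readable.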

But what about non-governmental, non-regulated information? Is there a right to access? The business model of analyst firms (Gartner Group), business research companies (Dun & Bradstreet, Hoover’s), and, for that matter, the entire publishing industry (!!) says no. If you want information, you pay.

But look at the evolution of government philosophies and, with it, content disclosure requirements. A king who reigns by divine right discloses what he wants to. A democratically elected leader must justify a lack of disclosure. It seems clear that we have shifted to the idea that access to government information is a right.

Will commercial information evolve in the same direction? There are actually some developments that point toward information as a right. In particular, the idea that information must be accessible—that information presentation should not exclude those with visual impairments or other disabilities—begins to build a foundation for equal access to information as a right.

What do you think? Will the right to information access be considered a bedrock principle in 50 or 100 years?


The good manager litmus test: process change

For Kai Weber, a good manager is pivotal in making a job satisfying:

It’s the single most important factor in my satisfaction with a job. Nothing else shapes my memory and my judgment of a past job as much.

What really tests the mettle of a manager is how he or she handles process change. A good manager is absolutely critical when a documentation department switches to new authoring and publishing processes, particularly when moving from a desktop publishing environment to an XML-based one. Without good management, the implementation of new processes will likely fail. (I’ve seen bad management kill an implementation, and it’s ugly.)

So, what does a good manager do to ensure a smooth(er) transition? From my point of view, they will take the following actions (and this list is by no means all-encompassing):

  • Demonstrate the value of the change to both upper management and those in the trenches. A manager can often get the approval from upper management on a workflow change by showing cost savings in localization expenses, for example; (less) money talks to those higher up on the corporate chain. But mentions of reduced costs don’t usually warm the hearts of those who are doing the work. A good manager should show team members how the new process eliminates manual drudgery that everyone hates, explain how authors can focus more on writing good content instead of on secondary tasks (such as formatting), and so on. Demonstrating how the team’s work experience improves is more important than showing improvements in the bottom line—even though the cost savings are a result of those very changes. There is also the angle of professional development for a staff moving to a new environment; more on that in the next bullet.
  • Ensure those working in the new process understand the new tools and technologies by offering training/knowledge transfer. A good manager knows that changing processes and not including some sort of training as part of the transition is foolish; knowledge transfer should be part of the project cost. Sure, not all companies can afford formal classroom training, but there are less expensive options to consider. Web-based training is very cost effective, particularly when team members are geographically dispersed. Another option is training one or two team members and then having them share their expertise with the rest of the group (“train the trainer”). The benefits of knowledge transfer are two-fold: team members can ramp up on the new processes in less time (thereby more quickly achieving the cost savings that upper management likes so much), and the team members themselves gain new skills in their profession. A good manager recognizes that training benefits both the company as a whole and individual employees (and he or she can help team members recognize how they benefit in the long term professionally from learning new skills).
  • Know the difference between staff members who are bringing up legitimate issues with the new workflow and those who are being recalcitrant just to maintain the status quo. During periods of change, a manager will get pushback from staff. That’s a given. However, that pushback is a very healthy thing because it can point out deficiencies in the new process. A good manager will take feedback, consider it, and modify the process when there are weaknesses. Addressing genuine feedback in such a manner can also help a manager win “converts” to the new process. However, there may be an employee (or two) who won’t be receptive to change, regardless of how well the manager has explained the change, how much training is offered, and so on. In these cases, the manager may need to consider other assignments for that employee: for example, maintaining legacy documentation in the old system, particularly when that employee’s domain knowledge is too important to lose. There are more unpleasant options (including termination) the manager may need to consider if the recalcitrant team member isn’t providing other value to the organization as a whole. Basically, a good manager won’t let one individual poison the working environment for everyone else.

I will be the first to say that these tasks are not easy, particularly dealing with an employee who is utterly against change. But managers need to address all of the preceding issues to ensure a smooth transition and to keep the work environment positive and productive for the staff as a whole.

I won’t even pretend I have covered all the issues managers need to address when a department changes workflows, and each company will face its own particular challenges because of differences in corporate culture, and so on. If you’ve been through a workflow transition, please share your experiences in the comments: I’d love to hear from both managers and team members on what worked well (and what didn’t) in how management handled the changes.

PS: You can read a more detailed discussion about managing process change in our white paper, Managing implementation of structured authoring (HTML and PDF).


Technology matters

[Update, March 5: corrected the graphic. It now shows that increased expertise does not produce increased value on the limited curve and does produce increased value on the unlimited curve.]

It’s the third rail of technical writing debates: writing ability or technical expertise? And this week, I ran across two articles that argue that good writing is the key to successful technical writing.

I agree that good writing is important. It’s just that I think that domain expertise and tools expertise are also important. To succeed as a technical communicator, you need all three of these qualifications. (A healthy sense of skepticism about any information that you are given is also helpful. Trust, but verify.)

Here, we have Sandhya, the outgoing President of STC’s India chapter:

If I’ve managed to make a minor dent in a paradigm shift away from the importance of tools and years of experience to the importance of basic technical communication and leadership skills, I’d be thrilled. (Sandhya, 7 Habits of Highly Effective Technical Communicators, INDUS)

These skills are not mutually exclusive, and technical writers need all of them. An excellent writer with more experience is better than an excellent writer with less experience. An average writer with great tools knowledge is better than an average writer with average tools knowledge.

That said, I think there’s a point of diminishing returns.

Diminishing returns for extra tools knowledge

The value curve for writing ability follows the “unlimited” line. But the value curve for tools expertise is different. Once a writer exceeds the baseline required tools knowledge, there’s not much additional value in additional tools expertise. That’s the limited curve. (The curve for domain expertise depends on the topic, I think. If you write about consumer software, you’re probably on the limited curve. If you write about highly specialized topics (biochemistry, semiconductors, nuclear medicine), domain expertise is probably on the unlimited curve.)
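One way to model the two curves (the baseline value and the numbers here are arbitrary, chosen only to illustrate the shapes, not measured from anything):

```python
def writing_value(expertise):
    # "Unlimited" curve: value keeps rising with ability.
    return expertise

def tools_value(expertise, baseline=3.0):
    # "Limited" curve: value flattens once the required baseline is met.
    return min(expertise, baseline)

# Past the baseline, extra tools knowledge adds nothing...
print(tools_value(4.0), tools_value(8.0))  # 3.0 3.0
# ...while extra writing ability keeps paying off.
print(writing_value(4.0), writing_value(8.0))  # 4.0 8.0
```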

Here is another perspective from Ramana Murthy:

A good product documentation is one that helps users achieve their goals easily, irrespective of the tool it has been authored with – be it RoboHelp, Author-it or the unglamorous Microsoft Word. Product documentation does not arrive with a label like “Developed with the best documentation tools”; nor are there instances of customers preferring product documentation authored with a particular tool. (Ramana Murthy, Technical Communication: Content is the key, tcworld)

True, but it’s also irrelevant. The corporation that is paying for content to be created may care a great deal if option A allows you to create content better, faster, or (especially) cheaper than option B.

The tools and technologies you choose for your content-creation efforts matter because they affect the quality and the development cost of your final deliverables. And therefore, in addition to writing ability, technical communicators must master the required tools, technologies, and templates at the appropriate level.


Sleepless in Seattle—our agenda at WritersUA

Simon Bate and I will be attending WritersUA this year.

I will be mainly camped in Scriptorium’s exhibit booth. Hours for that are Monday 8:00 am – 6:00 pm and Tuesday 8:00 am – 5:30 pm. Please stop by when you get a chance. Simon will be joining me, but is also presenting on XSL Techniques for XML-to-XML Transformations on Monday at 3:25. Here’s a bit of the description:

In a recent project, we used XSL to correct markup and fix conversion errors in 55,000 XML files containing 2000-year-old Greek texts. The clean-up work included correcting errors in the Greek numbering system, converting text-based markup to XML, replacing or repairing missing markup, and ensuring the accuracy of our work in such a large document set. This session uses this work to illustrate how XML-to-XML transforms differ from XML-to-output transforms. Along the way we describe some XSL techniques we created for processing XML data in which there is a close relationship between the content and the markup.
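The core pattern in most XML-to-XML cleanup work is an identity transform with targeted overrides: copy everything through unchanged, and intervene only where the markup needs fixing. The session covers this in XSL; here is a rough Python analogue of the same shape (the element names and the specific correction are invented for illustration, not taken from the project):

```python
import xml.etree.ElementTree as ET

def transform(elem):
    """Identity transform with one override: copy the tree unchanged,
    except for a targeted fix on <num> elements."""
    new = ET.Element(elem.tag, dict(elem.attrib))
    new.text, new.tail = elem.text, elem.tail
    # Hypothetical correction: strip a stray trailing quote mark that a
    # conversion step left behind in numeric values.
    if elem.tag == "num" and new.get("value", "").endswith("'"):
        new.set("value", new.get("value")[:-1])
    for child in elem:
        new.append(transform(child))
    return new

src = ET.fromstring("""<p>Section <num value="6'"/> begins here.</p>""")
out = transform(src)
```

In XSL, the equivalent is a copy template (`xsl:copy` over `@*|node()`) plus one matching template for the elements that need repair, which is what makes the approach scale to tens of thousands of files.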

This year, we’re bringing swag in the form of free copies of The Compass, a printed compilation of Scriptorium white papers. For WritersUA, we have two new white papers, and the book is now almost 200 pages long. (Our white papers are also available, for free, in HTML and PDF format.)

If that’s not a sufficiently sweet enticement, you can also expect local chocolates. The leading contender is currently Fran’s, but I’m open to suggestions, especially from Seattle locals. (We generally pick up chocolate once we arrive rather than attempting to ship it. Ask me sometime about the Great Truffle Shipping Debacle.)

Simon and I are both scheduling private meetings during the event. If you are a current or prospective client of ours, or if you just want to talk, let us know and we’ll set something up.


Conferences versus social media

The information you can get from a conference presentation is usually available online—in blogs, webcasts, forums, and/or white papers. So why should you invest the time and the money to attend an event in person? In the end, there’s something very powerful about eating and drinking with a group of people. (And no, alcohol is not required, although it doesn’t hurt. Until the next day, when it hurts a lot.)

The value of conferences, which is not (yet) replicated online, is in the “hallway track”—the discussions that happen between the formal sessions:

“[B]eing able to establish a one-to-one personal connection with other professionals in your field is critical to being a success.” (Dave Taylor in The Critical Business Value of Attending Conferences)

“I’ve found that time and again, I’ll hear speakers or audience members or participate in conversations and lie awake that night jam-packed with new ideas (some that don’t even correspond remotely to the concepts discussed that day). Conferences are a brainstorming paradise and a terrific opportunity for new ideas to come bubbling to the surface.” (Rand Fishkin, The Secret Value of Attending Conferences)

Scriptorium has quite a few social media “features”:

  • This blog, started in 2005
  • Webcasts, 2006 (recordings available for recent events)
  • Forums, this week (currently in the “awkward silence” phase. Help us out by posting, please!)
  • Twitter

But there’s something missing. I’ve attended and presented quite a few webcasts, and I can tell you that it’s actually far more difficult to deliver a compelling webcast than a compelling conference presentation. As the presenter, you lose the ability to gauge your audience’s body language. As an attendee, you have the temptation of your email and other distractions. The audio coming through your computer or phone is somehow not real—it’s easy to forget that there’s an actual person on the other end giving the presentation online. (There’s also the problem that many webcasts are sales pitches rather than useful presentations, but let’s leave that for another time.)

In my experience, it’s much easier to sustain online friendships with people that I have met in real life. Even a brief meeting at a conference means that I will remember a person as “that red-haired woman with the funky scarf” rather than as an email ID or Twitter handle. So, I think it’s important to go to conferences, meet lots of people, and then sustain those new professional relationships via social media.

In other words, conferences and social media complement each other. Over time, I think we’ll see them merge into a new interaction model. For example, we are already seeing Twitter as a real-time feedback engine at conference events. (Here’s an excellent discussion of how presenters should handle this.) Joe Welinske’s WritersUA is experimenting with a community site tied to the conference.

What are your thoughts? How important are conferences to your career?


Talk amongst yourselves…introducing our forums

Our web site now has forums for discussions of technical communication issues. We want to give you, our readers, a venue where you can set your own agenda instead of just responding to our blog posts.

Given Scriptorium’s particular interests, I expect to see a lot of emphasis on publishing automation and XML. But frankly, we don’t know exactly what might happen. Communities often develop in unexpected ways. It will be up to you—and us—to figure out what direction these forums go.

(We have an internal pool on how long before Godwin’s law is applied.)

The forums are available in our main site navigation. There are also RSS feeds so you can subscribe to a topic or category of interest. Or, if you prefer, you can get email notifications for new forum posts.

And how do we feel about this launch? We’re…perfectly calm.

Please join the conversation.


Finding the blogging superhero in yourself

Power blogger.

That’s a new phrase to me, and it was new to Maria Langer, too, as she noted in her An Eclectic Mind blog. As part of a podcast panel, she was asked to offer advice on how to become a power blogger. Some of her fellow panelists mentioned the quantity of posts, but Maria’s focus was elsewhere:

The number of blog posts a blogger publishes should have nothing to do with whether he’s a power blogger. Instead, it should be the influence the blogger has over his readership and beyond. What’s important is whether a blog post makes a difference in the reader’s life. Does it teach? Make the reader think? Influence his decisions? If a blogger can consistently do any of that, he’s a power blogger.

I couldn’t agree more. I appreciate reading any blog that gives me useful information or analysis that hadn’t occurred to me. For example, I recently had issues with a new PC I’m using at home as a media center. It was not picking up all the channels in my area, and an excellent blog post helped me solve the problem with little fuss. To me, that author is a power blogger.

What I frankly find irritating—and certainly not worth my time—are blogs that are basically what I’ll call “link farms”: posting links or excerpts from other blogs with no valuable information added. I’m quite the cynic, so when I stumble upon such a blog, I figure the blogger is merely trying to generate Google hits and ad revenue, is lazy, or both. Quantity—particularly when said quantity is composed of rehashed material from other bloggers—does not a power blogger make.

When it comes to contributing to this blog, I try to write posts that have at least one nugget of helpful information, analysis, or humor, and I think that’s true of the posts from my coworkers. (At the risk of sounding like I’m bragging about my coworkers, I can’t tell you how many times I’ve read one of their posts and thought, “That’s smart!” or “That’s cool!”) Frankly, I’d rather not write anything at all than publish something just because it’s been a few days since I posted. And I have a lot more respect for bloggers who write quality posts once in a while than for those who put out lots of material that is borrowed from elsewhere.

And on that note, I’ll leave you with a short clip showing superheroes using their powers for a practical solution. (See, I’m trying to entertain you, too!)


The elephant in the room—publishers and e-books

Two years ago, Nate Anderson wrote this on ars technica:

The book business, though far older than the recorded music business, is still lucky enough to have time on its side: no e-book reader currently offers a better reading experience than paper.

That’s what makes Apple’s iPad announcement so important. Books will now face stiff competition from e-books as the e-book experience improves.

Elephant in the room // flickr: mobilestreetlife

Meanwhile, the publishing industry (with the notable exception of O’Reilly Media) is desperately trying to avoid the inevitable. (For a slightly happier take, see BusinessWeek.)

Publishers are supposed to filter, edit, produce, distribute, and market content. Pre-Internet, all of these things were difficult and required significant financial resources. Today, many are easy and all are cheap.

There’s only one other thing.


But the revenue split between publishers and authors does not—yet—reflect the division of labor. The business relationships are still built on the idea that authors can’t exist without publishers. In fact, it’s the reverse that’s true.

Only the big publishers can get your book into every bookstore in the country. However, I’ve got news for you: Unless your name is on an elite shortlist with the likes of Dan Brown, John Grisham, Nora Roberts, and J.K. Rowling, it probably doesn’t matter.

If you know your audience, you can reach them at least as well as a big publisher can. And you need to reach a lot fewer people to succeed as an independent. The general rule of thumb is a 10-to-1 ratio. You’ll make the same amount selling 10,000 books through a traditional publisher as 1,000 books on your own.

It’s not so difficult to hire freelancers (especially in this economy) to edit and produce your book, if that’s not your cup of tea. Distribution is doable—Amazon is easy, bookstores a little more challenging. This is where e-books will accelerate the change—the challenges of shelf space and returns simply disappear.

And even if you have a publisher, they will expect you to do most of the marketing.

So, what will successful publishers look like in 2020?

  • They will provide editorial and production support for writers who do not want to deal with technical issues.
  • They will support authors in marketing by helping them with blogging platforms and other social media efforts.
  • They will get a much smaller cut of revenues than they currently do.

Actually, that looks a lot like Lulu.
