Conferences Opinion

The Ideal Tech Comm Association?

There’s been a ton of discussion about the various organizations, especially STC, recently. With established associations, it can be difficult to take a completely fresh look because of the constraints of structure, organization, and tradition.

So, I thought I’d ask this question: What does your ideal association for technical communicators look like?

Opinion

But will it blend?

In choral music, “blend” refers to bringing together a diverse group of voices into a pleasing sound in which no single voice is dominant. As technical communication moves into a more collaborative approach to content, it occurs to me that both writers and musicians need to blend. Here are some choral archetypes and their writerly equivalents:

  • Soloists—singers with big, powerful voices or writers with distinctive styles—are a challenge to blend. The singers must reduce their volume to match the voices around them and sing the music without adding ornamentation. Writers must refrain from their favorite distracting rhetorical flourishes.
  • Section anchors—journeyman singers who know their parts cold and provide support to others singing the same part. Section anchors may not have the vocal quality of a soloist, but they are competent singers who are always on pitch, learn the music quickly, and follow the director. On the writing side, this is a person who writes competently, knows the product being documented, always follows the style guide, and learns quickly. In a writing group, these may be the team leaders. They are not necessarily the flashiest or the most gifted writers, but their content ranges from acceptable to excellent.
  • Supporting players—these singers lack confidence, but can learn their part and sing it, provided that they have support from a section anchor. Left to their own devices, they may drift off into another part (usually abandoning harmony to sing the melody line). But as long as someone nearby is singing their part with them, they can stay on pitch and contribute their voices. This equates to writers, often with less experience, who need support, encouragement, and editing to stay within the style guide. They need help in most aspects of the content creation process. Over time, supporting players can improve and grow into more confident section anchors both in writing and in singing.
  • Blissfully tone deaf—Fortunately, many people who are tone deaf (or simply cannot write) are aware of their limitation. But if you’ve spent any time at all in a volunteer choir, you’ve probably experienced people who make up for their lack of pitch by singing louder. In a writing context, your best bet for the tone-deaf (short of a new job!) is to give them assignments that minimize actual writing, such as creating basic reference information (not a lot of room to maneuver).

Our challenge, as writers, is that we have been accustomed to working solo, and now we must learn to blend our authorial voice into the larger group. The skills that make great soloists are not the same skills that make great contributors.

Opinion

Cardinal sin of blog (and technical) writing: making the reader feel stupid

Want to get me riled up? You can easily achieve that by making me feel stupid while reading your blog.

I read a lot of blogs about technology, and I’ll admit that I’m on the periphery of some of these blogs’ intended audiences. That being said, there is no excuse for starting a blog entry like this:

Everyone’s heard of Gordon Moore’s famous Law, but not everyone has noticed how what he said has gradually morphed into a marketing message so misleading I’m surprised Intel doesn’t require people to play the company jingle just for mentioning it.

Well, I must not be part of the “everyone” collective because I had to think long and hard about “Gordon Moore’s famous law,” and I drew a blank. (Here’s a link for those like me who can’t remember or don’t know what Moore’s Law is.)

Making a broad generalization such as “everyone knows x” is always dangerous. This is true in blog writing as well as in technical writing. In our style guide, we have a rule that writers should “not use simple or simply to describe a feature or step.” By labeling something as simple, you’re guaranteed to offend someone who doesn’t understand the concept. For example, while editing one of our white papers about the DITA Open Toolkit, I saw the word “simply” and took great delight in striking through it. From where I stand, there is little that is “simple” about the toolkit, particularly when you’re talking about configuring output.
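A rule like that is also easy to enforce mechanically. Here’s a minimal sketch of a checker that flags “simple” and “simply” during review; the script is my own illustration, not a tool we actually use, and the word list would come from your style guide.

```python
# Minimal sketch: flag style-guide words ("simple", "simply") in text files.
# Illustrative only -- extend the pattern with your own forbidden words.
import re
import sys

FORBIDDEN = re.compile(r"\b(simple|simply)\b", re.IGNORECASE)

for path in sys.argv[1:]:
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if FORBIDDEN.search(line):
                print(f"{path}:{lineno}: consider striking: {line.strip()}")
```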

Don’t get me wrong: I’m not saying that a blog entry, white paper, or user manual about very technical subjects has to explain every little thing. You need to address the audience at the correct level, which can be a delicate balancing act with highly technical content: overexplaining can also offend the reader. For example, in a user manual, it’s probably wise to explain up front the prerequisite knowledge. Also consider offering resources where the reader can get that knowledge: that way, you get around explaining concepts but still offer assistance to readers who need it.

In the case of online content and blog entries, you can link to definitions of terms and concepts: readers who need help can click the links to get a quick refresher course on the information, and those who don’t can keep on reading. The author of the blog post in question could have inserted a link to Moore’s Law. Granted, he does define the law in the second paragraph, but he lost me with the “everyone has heard” bit at the start.

That “everyone has heard” assumption brings me back to my primary point: don’t make your readers feel stupid, particularly by making sweeping assumptions about what “everyone” knows or by labeling something as “simple.” Insulted readers move on—and may never come back.


Opinion

Information as a right

Bear with me in a post that’s going to be even less coherent than usual. (And that’s on the heels of the Great Graphic Debacle.)

Is access to information a right or a privilege?

In democracies, we believe that citizens have a right to their government’s information.

U.S. citizens are likely familiar with the Freedom of Information Act (FoIA) and the various sunshine and open meeting laws. In 2005, India passed a Right to Information Act, which “requires every public authority to computerise their records for wide dissemination and to proactively publish certain categories of information so that the citizens need minimum recourse to request for information formally.” Other countries have similar legislation; the Right2Info organization “brings together information on the constitutional and legal framework for the right of access to information as well as case law from more than 80 countries, organized and analyzed by topic.”

In the absence of a compelling government interest (the FoIA has nine, which include national security and personnel privacy issues), governmental information should be available to citizens. (This does assume, of course, that we are talking about governments who acknowledge that they are accountable to their citizens.)

If governments have an obligation to make information accessible to their citizens, does that equate to a right to the information? What about equal access to information? Is that a right?

For example, if certain public information is readily available only on the Internet, does it follow that a citizen has a general right to Internet access? This question was actually considered by the European Union parliament last year, in the context of a new French law that cuts off Internet access to repeat offenders who infringe on copyrights with file-sharing:

Opponents of the legislation have responded by suggesting that Internet access is fundamental to liberty, an argument that suffered a setback on Wednesday as the European Parliament voted against codifying Internet access as a basic human right. (Is Internet Access a Fundamental Right?, CNet.com, May 6, 2009)

There are also interesting developments in financial information. The U.S. Securities and Exchange Commission (SEC) requires publicly traded companies to make certain information available to the public. This information is delivered through the EDGAR (Electronic Data Gathering, Analysis, and Retrieval) system.

Currently, the official submission format for EDGAR data is plain text or HTML, but the SEC is phasing in the use of an XML vocabulary called XBRL (Extensible Business Reporting Language).

“The purpose of the XBRL mandate is to make corporate financial information more easily available to stockholders.” (The XBRL mandate is here: Is IT ready?, Ephraim Schwarz, InfoWorld, November 25, 2008)

So in addition to mandating disclosure of corporate financial information, the SEC is now mandating easier access to the disclosed information. (A simple implication of XBRL is that you could more easily find executive compensation numbers.)
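Part of what “easier access” means in practice: once facts are tagged in XML, a few lines of code can pull out a specific number instead of someone scraping it from a formatted report. This is a hypothetical sketch; the element name, namespace, and file name are invented, and real XBRL taxonomies are far richer.

```python
# Hypothetical sketch: extracting one tagged fact from an XBRL-style filing.
# The element name, namespace URI, and file name are invented.
import xml.etree.ElementTree as ET

NS = {"us-gaap": "http://fasb.org/us-gaap/2009-01-31"}

root = ET.parse("filing.xml").getroot()
for fact in root.findall("us-gaap:OfficersCompensation", NS):
    print(fact.get("contextRef"), fact.text)
```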

But what about non-governmental, non-regulated information? Is there a right to access? The business model of analyst firms (Gartner Group), business research companies (Dun & Bradstreet, Hoover’s), and, for that matter, the entire publishing industry (!!) says no. If you want information, you pay.

But look at the evolution of government philosophies and with that, content disclosure requirements. A king who reigns by divine right discloses what he wants to. A democratically elected leader must justify a lack of disclosure. It seems clear that we have shifted to the idea that access to government information is a right.

Will commercial information evolve in the same direction? There are actually some developments that point toward information as a right. In particular, the idea that information must be accessible—that information presentation should not exclude those with visual impairments or other disabilities—begins to build a foundation for equal access to information as a right.

What do you think? Will the right to information access be considered a bedrock principle in 50 or 100 years?

Opinion

The good manager litmus test: process change

For Kai Weber, a good manager is pivotal in making a job satisfying:

It’s the single most important factor in my satisfaction with a job. Nothing else shapes my memory and my judgment of a past job as much.

What really tests the mettle of a manager is how he or she handles process change. A good manager is absolutely critical when a documentation department switches to new authoring and publishing processes, particularly when moving from a desktop publishing environment to an XML-based one. Without good management, the implementation of new processes will likely fail. (I’ve seen bad management kill an implementation, and it’s ugly.)

So, what does a good manager do to ensure a smooth(er) transition? From my point of view, they will take the following actions (and this list is by no means all-encompassing):

  • Demonstrate the value of the change to both upper management and those in the trenches. A manager can often get the approval from upper management on a workflow change by showing cost savings in localization expenses, for example; (less) money talks to those higher up on the corporate chain. But mentions of reduced costs don’t usually warm the hearts of those who are doing the work. A good manager should show team members how the new process eliminates manual drudgery that everyone hates, explain how authors can focus more on writing good content instead of on secondary tasks (such as formatting), and so on. Demonstrating how the team’s work experience improves is more important than showing improvements in the bottom line—even though the cost savings are a result of those very changes. There is also the angle of professional development for a staff moving to a new environment; more on that in the next bullet.
  • Ensure those working in the new process understand the new tools and technologies by offering training/knowledge transfer. A good manager knows that changing processes and not including some sort of training as part of the transition is foolish; knowledge transfer should be part of the project cost. Sure, not all companies can afford formal classroom training, but there are less expensive options to consider. Web-based training is very cost effective, particularly when team members are geographically dispersed. Another option is training one or two team members and then having them share their expertise with the rest of the group (“train the trainer”). The benefits of knowledge transfer are two-fold: team members can ramp up on the new processes in less time (thereby more quickly achieving the cost savings that upper management likes so much), and the team members themselves gain new skills in their profession. A good manager recognizes that training benefits both the company as a whole and individual employees (and he or she can help team members recognize how they benefit in the long term professionally from learning new skills).
  • Know the difference between staff members who are bringing up legitimate issues with the new workflow and those who are being recalcitrant just to maintain the status quo. During periods of change, a manager will get pushback from staff. That’s a given. However, that pushback is a very healthy thing because it can point out deficiencies in the new process. A good manager will take feedback, consider it, and modify the process when there are weaknesses. Addressing genuine feedback in such a manner can also help a manager win “converts” to the new process. However, there may be an employee (or two) who won’t be receptive to change, regardless of how well the manager has explained the change, how much training is offered, and so on. In these cases, the manager may need to consider other assignments for that employee: for example, maintaining legacy documentation in the old system, particularly when that employee’s domain knowledge is too important to lose. There are more unpleasant options (including termination) the manager may need to consider if the recalcitrant team member isn’t providing other value to the organization as a whole. Basically, a good manager won’t let one individual poison the working environment for everyone else.

I will be the first to say that these tasks are not easy, particularly dealing with an employee who is utterly against change. But managers need to address all of the preceding issues to ensure a smooth transition and to keep the work environment positive and productive for the staff as a whole.

I won’t even pretend I have covered all the issues managers need to address when a department changes workflows, and each company will face its own particular challenges because of differences in corporate culture, and so on. If you’ve been through a workflow transition, please share your experiences in the comments: I’d love to hear from both managers and team members on what worked well (and what didn’t) in how management handled the changes.

PS: You can read a more detailed discussion about managing process change in our white paper, Managing implementation of structured authoring (HTML and PDF).

Opinion

Technology matters

[Update, March 5: corrected the graphic. It now shows that increased expertise does not produce increased value on the limited curve and does produce increased value on the unlimited curve.]

It’s the third rail of technical writing debates: writing ability or technical expertise? And this week, I ran across two articles that argue that good writing is the key to successful technical writing.

I agree that good writing is important. It’s just that I think that domain expertise and tools expertise are also important. To succeed as a technical communicator, you need all three of these qualifications. (A healthy sense of skepticism about any information that you are given is also helpful. Trust, but verify.)

Here, we have Sandhya, the outgoing President of STC’s India chapter:

If I’ve managed to make a minor dent in a paradigm shift away from the importance of tools and years of experience to the importance of basic technical communication and leadership skills, I’d be thrilled. (Sandhya, 7 Habits of Highly Effective Technical Communicators, INDUS)

These skills are not mutually exclusive, and technical writers need all of them. An excellent writer with more experience is better than an excellent writer with less experience. An average writer with great tools knowledge is better than an average writer with average tools knowledge.

That said, I think there’s a point of diminishing returns.

[Graphic: Diminishing returns for extra tools knowledge]

The value curve for writing ability follows the “unlimited” line. But the value curve for tools expertise is different. Once a writer exceeds the baseline required tools knowledge, there’s not much additional value in additional tools expertise. That’s the limited curve. (The curve for domain expertise depends on the topic, I think. If you write about consumer software, you’re probably on the limited curve. If you write about highly specialized topics (biochemistry, semiconductors, nuclear medicine), domain expertise is probably on the unlimited curve.)
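One way to formalize the two curves (my own sketch, not taken from the graphic): treat the limited curve as a saturating function and the unlimited curve as roughly linear.

```latex
% Assumed formalization, for illustration only: value from tools expertise
% saturates at V_max, while value from writing ability keeps growing.
\[
v_{\text{limited}}(x) = V_{\max}\left(1 - e^{-kx}\right),
\qquad
v_{\text{unlimited}}(x) = c\,x
\]
```

Past the baseline, the slope of the limited curve falls toward zero while the unlimited curve keeps its constant slope: exactly the diminishing returns described above.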

Here is another perspective from Ramana Murthy:

A good product documentation is one that helps users achieve their goals easily, irrespective of the tool it has been authored with – be it RoboHelp, Author-it or the unglamorous Microsoft Word. Product documentation does not arrive with a label like “Developed with the best documentation tools”; nor are there instances of customers preferring product documentation authored with a particular tool. (Ramana Murthy, Technical Communication: Content is the key, tcworld)

True, but it’s also irrelevant. The corporation that is paying for content to be created may care a great deal if option A allows you to create content better, faster, or (especially) cheaper than option B.

The tools and technologies you choose for your content-creation efforts matter because they affect the quality and the development cost of your final deliverables. And therefore, in addition to writing ability, technical communicators must master the required tools, technologies, and templates at the appropriate level.

Conferences Opinion

Conferences versus social media

The information you can get from a conference presentation is usually available online—in blogs, webcasts, forums, and/or white papers. So why should you invest the time and the money to attend an event in person? In the end, there’s something very powerful about eating and drinking with a group of people. (And no, alcohol is not required, although it doesn’t hurt. Until the next day, when it hurts a lot.)

The value of conferences, which is not (yet) replicated online, is in the “hallway track”—the discussions that happen between the formal sessions:

“[B]eing able to establish a one-to-one personal connection with other professionals in your field is critical to being a success.” (Dave Taylor in The Critical Business Value of Attending Conferences)


“I’ve found that time and again, I’ll hear speakers or audience members or participate in conversations and lie awake that night jam-packed with new ideas (some that don’t even correspond remotely to the concepts discussed that day). Conferences are a brainstorming paradise and a terrific opportunity for new ideas to come bubbling to the surface.” (Rand Fishkin, The Secret Value of Attending Conferences)

Scriptorium has quite a few social media “features”:

  • This blog, started in 2005
  • Webcasts, 2006 (recordings available for recent events)
  • Forums, this week (currently in the “awkward silence” phase. Help us out by posting, please!)
  • Twitter

But there’s something missing. I’ve attended and presented quite a few webcasts, and I can tell you that it’s actually far more difficult to deliver a compelling webcast than a compelling conference presentation. As the presenter, you lose the ability to gauge your audience’s body language. As an attendee, you have the temptation of your email and other distractions. The audio coming through your computer or phone is somehow not real—it’s easy to forget that there’s an actual person on the other end giving the presentation online. (There’s also the problem that many webcasts are sales pitches rather than useful presentations, but let’s leave that for another time.)

In my experience, it’s much easier to sustain online friendships with people that I have met in real life. Even a brief meeting at a conference means that I will remember a person as “that red-haired woman with the funky scarf” rather than as an email ID or Twitter handle. So, I think it’s important to go to conferences, meet lots of people, and then sustain those new professional relationships via social media.

In other words, conferences and social media complement each other. Over time, I think we’ll see them merge into a new interaction model. For example, we are already seeing Twitter as a real-time feedback engine at conference events. (Here’s an excellent discussion of how presenters should handle this.) Joe Welinske’s WritersUA is experimenting with a community site tied to the conference.

What are your thoughts? How important are conferences to your career?

Humor Opinion

Finding the blogging superhero in yourself

Power blogger.

That’s a new phrase to me, and it was new to Maria Langer, too, as she noted in her An Eclectic Mind blog. As part of a podcast panel, she was asked to offer advice on how to become a power blogger. Some of her fellow panelists mentioned the quantity of posts, but Maria’s focus was elsewhere:

The number of blog posts a blogger publishes should have nothing to do with whether he’s a power blogger. Instead, it should be the influence the blogger has over his readership and beyond. What’s important is whether a blog post makes a difference in the reader’s life. Does it teach? Make the reader think? Influence his decisions? If a blogger can consistently do any of that, he’s a power blogger.

I couldn’t agree more. I appreciate reading any blog that gives me useful information or analysis that hadn’t occurred to me. For example, I recently had issues with a new PC I’m using at home as a media center. It was not picking up all the channels in my area, and an excellent blog post helped me solve the problem with little fuss. To me, that author is a power blogger.

What I frankly find irritating—and certainly not worth my time—are blogs that are basically what I’ll call “link farms”: posting links or excerpts from other blogs with no valuable information added. I’m quite the cynic, so when I stumble upon such a blog, I figure the blogger is merely trying to generate Google hits and ad revenue, is lazy, or both. Quantity—particularly when said quantity is composed of rehashed material from other bloggers—does not a power blogger make.

When it comes to contributing to this blog, I try to write posts that have at least one nugget of helpful information, analysis, or humor, and I think that’s true of the posts from my coworkers. (At the risk of sounding like I’m bragging about my coworkers, I can’t tell you how many times I’ve read one of their posts and thought, “That’s smart!” or “That’s cool!”) Frankly, I’d rather not write anything at all than publish something just because it’s been a few days since I posted. And I have a lot more respect for bloggers who write quality posts once in a while than for those who put out lots of material borrowed from elsewhere.

And on that note, I’ll leave you with a short clip showing superheroes using their powers for a practical solution. (See, I’m trying to entertain you, too!)

Opinion

The elephant in the room—publishers and e-books

Two years ago, Nate Anderson wrote this on ars technica:

The book business, though far older than the recorded music business, is still lucky enough to have time on its side: no e-book reader currently offers a better reading experience than paper.

That’s what makes Apple’s iPad announcement so important. Books will now face stiff competition from e-books as the e-book experience improves.

[Photo: Elephant in the room // flickr: mobilestreetlife]

Meanwhile, the publishing industry (with the notable exception of O’Reilly Media) is desperately trying to avoid the inevitable. (For a slightly happier take, see BusinessWeek.)

Publishers are supposed to filter, edit, produce, distribute, and market content. Pre-Internet, all of these things were difficult and required significant financial resources. Today, many are easy and all are cheap.

There’s only one other thing.

Content.

But the revenue split between publishers and authors does not—yet—reflect the division of labor. The business relationships are still built on the idea that authors can’t exist without publishers. In fact, it’s the reverse that’s true.

Only the big publishers can get your book into every bookstore in the country. However, I’ve got news for you: Unless your name is on an elite shortlist with the likes of Dan Brown, John Grisham, Nora Roberts, and J.K. Rowling, it probably doesn’t matter.

If you know your audience, you can reach them at least as well as a big publisher can. And you need to reach a lot fewer people to succeed as an independent. The general rule of thumb is a 10-to-1 ratio. You’ll make the same amount selling 10,000 books through a traditional publisher as 1,000 books on your own.
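The back-of-the-envelope arithmetic (the cover price and royalty rate here are assumptions for illustration): at a $20 cover price, a typical 10% royalty pays about $2 per copy through a publisher, while a direct sale returns something close to the full $20.

```latex
% Rough check of the 10-to-1 rule of thumb (assumed numbers):
\[
10{,}000 \times \$2 \;=\; \$20{,}000 \;=\; 1{,}000 \times \$20
\]
```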

It’s not so difficult to hire freelancers (especially in this economy) to edit and produce your book, if that’s not your cup of tea. Distribution is doable—Amazon is easy, bookstores a little more challenging. This is where e-books will accelerate the change—the challenges of shelf space and returns simply disappear.

And even if you have a publisher, they will expect you to do most of the marketing.

So, what will successful publishers look like in 2020?

  • They will provide editorial and production support for writers who do not want to deal with technical issues.
  • They will support authors in marketing by helping them with blogging platforms and other social media efforts.
  • They will get a much smaller cut of revenues than they currently do.

Actually, that looks a lot like Lulu.

    News Opinion

    ePub + tech pub = ?

    At Scriptorium earlier this week, we all watched live blogs of the iPad announcement. (What else would you expect from a bunch of techies?) One feature of the iPad that really got us talking (and thinking) is its support of the ePub open standard for ebooks.

    ePub is basically a collection of XHTML files zipped up with some baggage files. Considering a lot of technical documentation groups create HTML output as a deliverable, it’s likely not a huge step further to create an ePub version of the content. There is a transform for DocBook to ePub; there is a similar effort underway for DITA. You can also save InDesign files to ePub.
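To make “XHTML zipped up with baggage files” concrete, here is a minimal sketch of packaging a single topic as an ePub with Python’s standard library. The file names and metadata are invented, and a strictly valid EPUB 2 package also needs an NCX table of contents, which this sketch omits.

```python
# Minimal sketch of an ePub package: XHTML content plus "baggage files"
# (container.xml and an OPF manifest/spine) in a zip. Illustration only.
import zipfile

xhtml = """<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Installing the widget</title></head>
  <body><h1>Installing the widget</h1><p>Step 1: ...</p></body>
</html>"""

container = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""

opf = """<?xml version="1.0"?>
<package version="2.0" xmlns="http://www.idpf.org/2007/opf" unique-identifier="bookid">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>Widget User Guide</dc:title>
    <dc:language>en</dc:language>
    <dc:identifier id="bookid">example-widget-guide</dc:identifier>
  </metadata>
  <manifest>
    <item id="topic1" href="topic1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine>
    <itemref idref="topic1"/>
  </spine>
</package>"""

with zipfile.ZipFile("guide.epub", "w") as epub:
    # The mimetype entry must come first and be stored uncompressed.
    epub.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
    epub.writestr("META-INF/container.xml", container, zipfile.ZIP_DEFLATED)
    epub.writestr("content.opf", opf, zipfile.ZIP_DEFLATED)
    epub.writestr("topic1.xhtml", xhtml, zipfile.ZIP_DEFLATED)
```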

    While the paths to creating an ePub version seem pretty straightforward, does it make sense to release technical content as an ebook? I think a lot of the same reasons for releasing online content apply (less tree death, no printing costs, and interactivity, in particular), but there are other issues to consider, too: audience, how quickly ebook readers and software become widespread, how the features and benefits of the format stack up against those of PDF files and browser-based help, and so on. And there’s also the issue of actually leveraging the features of an output instead of merely doing the minimum of releasing text and images in that format. (In the PDF version of a user manual, have you ever clicked an entry in the table of contents only to discover the TOC has no links? When that happens, I assume the company that released the content was more interested in using the format to offload the printing costs on to me and less interested in using PDF as a way to make my life easier.)

    The technology supporting ebooks will continue to evolve, and there likely will be a battle to see which ebook file format(s) will reign supreme. (I suspect Apple’s choice of the ePub format will raise that format’s prospects.) While the file formats get shaken out and ebooks continue to emerge as a way to disseminate content, technical communicators would be wise to determine how the format could fit into their strategies for getting information to end users.

    What considerations come to your mind when evaluating the possibility of releasing your content in ePub (or other ebook) format?

    Opinion

    Unedited content will get you deleted

    [Photo: flickr: Nics events]

    The abundance of information today forces content consumers to filter out redundant and unworthy information—much like an editor would. That, however, doesn’t mean content creators can throw up their hands and send out unreviewed content for readers to sort through. Instead, authors (and particularly their managers) need to understand how editing skills can ensure their information doesn’t get filtered out:

    [A]re we getting any better at editing in a broader context, which is editing ourselves? Or to rephrase it, becoming a better critic of our own work? Penelope Trunk (again) lists the reasons why she works with an editor for whatever she writes in public:

    • Start strong – cut boring introduction
    • Be short – and be brave
    • Have a genuine connection – write stuff that matters to the readers
    • Be passionate – write stuff that matters to you
    • Have one good piece of research – back your idea up

    They have one thing in common: difficult to do on our own.

    Granted, some of those bullet points don’t completely apply to technical writing, but it is hard to edit your own work, regardless of the kind of content. For that very reason, folks at Scriptorium get someone else to review their writing. Whether the content is in a proposal, book, white paper, important email to a client, or a blog post, we understand that somebody else’s feedback is generally going to make that information better.

    The same is true of technical content. A lot of documentation departments may no longer hire dedicated editors, so peer reviewers handle editing tasks. Electronic review tools also make it easier than ever to offer feedback: even a quick online review of content by another writer will likely catch some potentially embarrassing typos and yield suggestions to make information more accessible to the end user. (You can read more about the importance of editing in a PDF excerpt from the latest edition of Technical Writing 101.)

    With so much competing information out on the Internet, companies can’t afford to have their official documentation ignored because it contains technical errors, misspellings, and other problems that damage the content’s credibility. Even if you don’t have the time or budget for a full-blown edit, take just a little time to have someone do a quick technical review of your work. Otherwise, end users seeking information about your product will likely do their own editing—in their minds, they’ll delete you as a source of reliable information. And that’s a deletion that’s hard to STET.

    PS: Software that checks spelling and grammar is helpful, but it’s not enough: it won’t point out technical inaccuracies.

    Opinion Webinar

    Behold, the power of free

    Lately, our webcasts are getting great participation. The December event had 100 people in attendance (the registered number was even higher), and the numbers for the next few months are strong, as well. Previous webcasts had attendance of A Lot Less than 100. What changed? The webcasts are now free. (Missing an event? Check our archives.)

    We’re going in a similar direction with white papers. We charge for some content, but we also offer a ton of free information.

    The idea is that free (and high-quality) information raises our profile and therefore later brings in new projects. I’m not so sure, though, that we have any evidence that supports this theory yet.

    So, I thought I’d ask my readers. Do you evaluate potential vendors based on offerings such as webcasts and white papers? Are there other, more important factors?

    PS: Upcoming events, including several DITA webcasts, are listed on our events page.

    Opinion

    2010 predictions for technical communication

    It’s time for my (apparently biennial) predictions post. For those of you keeping score at home, you can see the last round of predictions here. Executive summary: no clear leader for DITA editing, reuse analyzers, Web 2.0 integration, global business, Flash. In retrospect, I didn’t exactly stick my neck out on any of those. Let’s see if I can do better this year.

    Desktop authoring begins to fade

    Everyone else is talking about the cloud, but what about tech comm? Many content creation efforts will shift into the cloud and away from desktop applications and their monstrous footprints (I’m looking at you, Adobe). When your content lives in the cloud, you can edit from anywhere and be much less dependent on a specific computer loaded with specific applications.

    I expect to see much more content creation migrate into web applications, such as wiki software and blogging software. I do not, at this point, see much potential for the various “online word processors,” such as Buzzword or Zoho Writer, for tech comm. Creating documents longer than four or five pages in these environments is painful.

    In the ideal universe, I’d like to see more support for DITA and/or XML in these tools, but I’m not holding my breath for this in 2010.

    The ends justify the means

    From what we are seeing, the rate of XML adoption is steady or even accelerating. But the rationale for XML is shifting. In the past, the benefits of structured authoring—consistency, template enforcement, and content reuse—have been the primary drivers. But in several newer projects, XML is a means to an end rather than a goal—our customers want to extract information from databases, or transfer information between two otherwise incompatible applications. The project justifications reach beyond the issues of content quality and instead focus on integrating content from multiple information sources.
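A concrete (and hypothetical) illustration of XML as a means rather than a goal: a few lines that dump database rows into XML for some otherwise incompatible downstream tool. The table, column, and file names are invented.

```python
# Hypothetical sketch: XML as a transfer format between two systems.
# Table, column, and file names are invented for illustration.
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect("parts.db")
root = ET.Element("parts")
for part_no, name in conn.execute("SELECT part_no, name FROM parts"):
    part = ET.SubElement(root, "part", number=str(part_no))
    part.text = name
ET.ElementTree(root).write("parts.xml", encoding="utf-8", xml_declaration=True)
```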

    Social-ism

    Is the hype about social media overblown? Actually, I don’t think so. I did a webcast (YouTube link) on this topic in December 2009. The short version: Technical communicators must now compete with information being generated by the user community. This requires greater transparency and better content.

    My prediction is that a strategy for integrating social media and official tech comm will be critical in 2010 and beyond.

    Collaboration

    The days of the hermit tech writer are numbered. Close collaboration with product experts, the user community, and others will become the norm. This requires tools that are accessible to non-specialists and that offer easy ways to manage input from collaborators.

    Language shifts

    There are a couple of interesting changes in language:

    • Content strategy rather than documentation plan
    • Decision engine (such as Hunch, Wolfram Alpha, and Aardvark) rather than search engine

    What are your predictions for 2010?


    News Opinion

    Are you ready for mobile content?

    A report from Morgan Stanley states that mobile Internet use will be twice that of desktop Internet and that the iPhone/smartphone “may prove to be the fastest ramping and most disruptive technology product / service launch the world has ever seen.” That “disruption” is already affecting the methods for distributing technical content.

    With users having Internet access at their fingertips anywhere they go, Internet searches will continue to drive how people find product information. Desktop Internet use has greatly reshaped how technical communicators distribute information, and having twice as many people using mobile Internet will only push us toward more online delivery—and in formats (some yet to be developed, I’d guess) that are compatible with smaller smartphone screens.

    The growing number of people with mobile Internet access underscores the importance of high Internet search rankings and a social media strategy for your information. If you haven’t already investigated optimizing your content for search engines and integrating social media as part of your development and distribution efforts, it’s probably wise to do that sooner rather than later. Also, have you looked at how your web site is displayed on a smartphone?

    If you don’t consider the impact of the mobile Internet, your documentation may be relegated to the Island of Misfit Manuals, where change pages and manuals in three-ring binders spend their days yellowing away.

    Opinion

    Angst and authority

    Clay Shirky has a fascinating post on the concept of algorithmic authority: the idea that large systems, such as Google PageRank or Wikipedia, have authority (that is, credibility) because of the way the system works. In other words, a page that is returned first in a Google search is assumed by the searcher to be more credible because it is ranked first.
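The mechanics behind that ranking are easy to sketch. Here is a toy version of the PageRank idea, power iteration over a made-up three-page link graph; the graph and damping factor are illustrative assumptions, and the production system is vastly more complex.

```python
# Toy sketch of the PageRank idea: pages that collect more incoming links
# from other well-linked pages accumulate higher "authority" scores.
links = {                                    # page -> pages it links to
    "official-help": ["corporate-forum"],
    "corporate-forum": ["official-help", "third-party-forum"],
    "third-party-forum": ["corporate-forum"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with equal scores
damping = 0.85                               # assumed damping factor

for _ in range(50):                          # iterate until scores settle
    new = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        for target in outgoing:
            new[target] += damping * rank[page] / len(outgoing)
    rank = new

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

Whatever lands at the top of that list is what searchers treat as credible, which is precisely the point about algorithmic authority.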

    That made me think about authority in technical content.

    As an in-house technical writer, your words have authority and your content carries the corporate logo. But although this should theoretically increase your credibility, it seems that the reverse is true. Consider, for instance, the following hypothetical book titles:

    • XYZ User’s Guide—This document, produced by the makers of XYZ, is shipped in the product box (or downloaded as a PDF with the software)
    • XYZ Classroom in a Book—This document is available in bookstores and is produced by XYZ Press
    • XYZ: The Complete Reference*—This document is available in bookstores and is produced by a third-party publisher

    Which of these books would you turn to for help? What are your expectations of each document?

    I believe that credibility and thus authority increases with distance from the product’s maker. In other words, the third-party book has more authority than either of the other two. Credibility is compromised by close association with the organization that makes the product.

    When we apply this concept to information on the web, the implications are troubling for professional content creators who work inside corporations. If corporate authorship decreases authority, we get this result:

    online help < user forums on corporate site < user forums on third-party site

    Will people looking for user assistance gravitate toward independent third-party sites? What does that mean for in-house authors? How can you increase your credibility as a corporate technical communicator?

    * Feel free to substitute your favorite book series title: XYZ for Dummies, XYZ: The Missing Manual, The Complete Idiot’s Guide to XYZ, XYZ Annoyances, …. I should probably also mention that I have written both a Dummies book and a Complete Reference.

    Opinion

    To bid or not to bid—a vendor’s guide to RFPs

    Request for Proposal (RFP) documents usually arrive in the dead of night, addressed to sales@scriptorium or sometimes info@scriptorium.

    Dear Vendor,

    LargeCompany is looking for a partner who can work magic and walk on water. Please refer to the attached RFP.

    Signed,

    Somebody in Purchasing

    Our instinct is to crack open the RFP and start writing a proposal. But over time, we’ve learned to take a step back and evaluate the RFP first to ensure that it’s worth our time.

    In this post, I’ve outlined some of the issues that we consider before responding to an RFP.

    Qualifications

    Are we qualified to do the work that the RFP is asking for? More importantly, are we qualified in the customer’s eyes? Many RFPs delineate evaluation criteria, which may not be relevant (in our opinion) to the task at hand. For example, we once saw an RFP for an Interleaf-to-FrameMaker conversion project that demanded expert knowledge of Interleaf. I would think that expert knowledge of FrameMaker would be more important, but the customer wanted Interleaf knowledge. FrameMaker was mentioned in passing. We decided in that case that our expert knowledge of the Interleaf-to-FrameMaker conversion process(es) should compensate for our minimal Interleaf expertise.

    But if the RFP demands expertise that we simply do not have, and if that expertise, however irrelevant, will be weighted heavily in the evaluation, investing in a proposal is risky. The fact that we know we can do the project well is irrelevant if that customer will discard our proposal for lacking a required element.

    Some days you’re the windshield, some days you’re the bug

    It’s critically important to figure out whether an RFP is really looking for a vendor or not. Especially in large organizations and for larger projects, multiple bids are often required. The customer may already know exactly who they want to work with, but their purchasing department requires a minimum number of bids (usually three). Thus, the customer writes an RFP to get “column fodder”—the sacrificial second and third proposals to go along with the vendor that they really want.

    We have experience on both sides of this issue. Sometimes, we’re the preferred vendor; sometimes, we’re the one getting the unwinnable RFP. Identifying cases where we have no chance of getting the business is critical to avoid wasting our time. Unfortunately, it’s also a bit of a black art. But here are some factors to consider:

    • Did the RFP parachute in from nowhere, or have we had previous contact with the organization?
    • Are there unusual requirements in the RFP? (For example, we decided against responding to an RFP that stated that close proximity to the customer would be a significant weighting factor. Not-so-coincidentally, one of our competitors just happened to have an office nearby.)
    • Is the project a marginal fit for our services? (In an effort to find additional vendors, the purchasing department might do a quick Google search and then broadcast the RFP. For instance, we get quite a few RFPs requesting outsourced technical writing services. Not the best fit for us.)
    • Is the customer unwilling to have a conference call to discuss questions we have about the RFP?

    Most importantly, if your intuition tells you that there’s a problem, pay attention.

    Funding

    First and most importantly, is the project funded? In one painful case, we wrote a proposal and made the trek to the customer’s office (which was, naturally, on the other end of the country) to present our solution. (This site visit was explicitly required under the terms of the RFP.) Six weeks later, we found out that the evaluation committee had presented their recommendation to upper management and had been denied funding. In other words, the people who issued the RFP did not have approval for the project.

    Assuming that funding is in place, there is the question of whether the funding will be sufficient. If the customer thinks their problem can be solved for $20,000 and our proposal totals $100,000, the sticker shock alone will probably be enough to kill our chances. This is why we often ask prospects about their expectations: how long do they think the project will take? How many people do they think should be assigned to the project? In some cases, we ask directly what the budget is.

    How are bids evaluated?

    RFPs that demand a list of services and hourly rates are a bad sign. Interestingly, government RFPs usually spell out evaluation criteria, using language such as this:

    Selection of an offeror for contract award will be based on an evaluation of proposals against four factors. The factors in order of importance are: technical, past performance, cost/price and Small Disadvantaged Business (SDB) participation. Although technical factors are of paramount consideration in the award of the contract, past performance, cost/price and SDB participation are also important to the overall contract award decision. All evaluation factors other than cost or price, when combined, are significantly more important than cost or price. In any case, the Government reserves the right to make an award(s) to that offeror whose proposal provides the best overall value to the Government.

    Corporate RFPs seem to avoid any mention of evaluation criteria, other than to state that certain items are required in a proposal or the proposal will be deemed “non-responsive.” The dreaded “non-responsive” label is attached to a proposal that does not meet the requirements of the RFP. For instance, if the RFP indicates that a response must include resumes and resumes are not included, the entire proposal could be discarded as non-responsive.

    Evaluating risk

    The decision whether or not to bid on an RFP comes down to evaluating the risk. Factors include:

    • Do we have other RFPs that look more appealing?
    • Are the deadlines reasonable (both for the proposal and for the project described therein)?
    • How many others will respond to the proposal?
    • How well does the customer understand the issues?
    • Does the RFP demonstrate an understanding of the problem the customer is trying to solve?
    • Has the customer already decided on their preferred solution and do we agree with the approach they want to use?
    • Are the evaluation criteria clear and relevant?
    • What is the pricing structure?
    • Does the project look interesting to us?
    • Do we want to work with this customer?

    Of course, even with all these factors, the decision often hinges on intuition. Perhaps there’s something not quite right about the RFP. Turning away work (or potential work) is always difficult, but I think we’re getting better at sniffing out RFPs where we truly stand no chance of success. Or perhaps, despite several risk factors, we believe that we can work well with the customer and make a useful contribution.

    Opinion

    Would you use just a gardening trowel to plant a tree?

    As technical communicators, our ultimate goal is to create accessible content that helps users solve problems. Focusing on developing quality content is the priority, but you can take that viewpoint to an extreme by saying that content-creation tools are just a convenience for technical writers:

    The tools we use in our wacky profession are a convenience for us, as are the techniques we use. Users don’t care if we use FrameMaker, AuthorIt, Flare, Word, AsciiDoc, OpenOffice.org Writer, DITA or DocBook to create the content. They don’t give a hoot if the content is single sourced or topic based.

    Sure, end users probably don’t know or care about the tools used to develop content. However, users do have eagle eyes for spotting inconsistencies in content, and they will call you out for conflicting information in a heartbeat (or worse, just abandon the official user docs altogether for being “unreliable”). If your department has implemented reuse and single-sourcing techniques that eliminate those inconsistencies, your end users are going to have a lot more faith in the validity of the content you provide.

    Also, a structured authoring process that removes the burden of formatting content from the authoring process gives tech writers more time to focus on providing quality content to the end user. Yep, the end user doesn’t give a fig that the PDF or HTML file they are reading was generated from DITA-based content, but because the tech writers creating that content focused on just writing instead of writing, formatting, and converting the content, the information is probably better written and more useful.

    [Photo: Dogwood // flickr: hlkljgk]

    All this talk about tools makes me think about the implements I use for gardening. A few years ago, I planted a young dogwood tree in my back yard. I could have used a small gardening trowel to dig the hole, but instead, I chose a standard-size shovel. Even though the tree had no opinion on the tool I used (at least I don’t think it did!), it certainly benefited from my tool selection. Because I was able to dig the hole and plant the tree in a shorter amount of time, the tree was able to develop a new root system in its new home more quickly. Today, that tree is flourishing and is about four feet taller than it was when I planted it.

    The same applies to technical content. If a tool or process improves the consistency of content, gives authors more time to focus on the content, and shortens the time it takes to distribute that content, then the choice and application of a tool are much more than mere “conveniences.”
