Calculating document quality (QUACK)
[I am working on a white paper version of the presentation I just gave at the STC Summit in Dallas. This is an excerpt. If you didn’t get a chance to see the session, I’m doing it as a webcast in mid-June (event details) and also (presumably updated) at the tekom conference in November.]
Creating a useful measurement system for document quality requires you to go deeper than just pages per hour. (For software developers, the equivalent sloppy metric is “lines of code per month.”)
We recommend developing a measurement system based on the following ducky components:
- Quality: This measures correct grammar, mechanics, adherence to the style guide, consistency, and similar properties. Writing quality is more important for an audience of low-literacy users, English as a second language users (assuming the content is in English), and picky users, such as English teachers. Writing quality is generally less important for an audience of highly motivated specialists; for example, software developers reading very technical API documentation.
- Usability: Where writing quality measures the ease of comprehension of the text, graphics, or other content by the intended audience, usability measures the ease of access to the information. To measure usability, you look at factors such as the document navigation system (headers and footers for print; breadcrumbs and the like for online). Did the author employ the proper medium for a particular piece of content? For example, are illustrations provided instead of—or in addition to—lengthy textual explanations? Is the content presented in an attractive, appealing way? Are simulations and video available? High usability is especially important if users can simply choose not to use the product. For example, for a consumer product, such as a cell phone, high usability is important because consumers have lots of options. For products that people must use as part of their job, usability is important to ensure that people can get the job done.
- Accuracy: Does the content describe the product’s features correctly? This factor is especially important for high-stakes documents, such as how to use a machine that delivers radioactive isotopes for nuclear medicine. A mistake in casual game–playing instructions is not of much concern.
- Completeness: Are all of the product features documented? Game documentation often includes only the bare minimum and allows players to discover features for themselves as they play the game. On the other hand, regulated products, such as medical devices, are required to have complete documentation.
- Conciseness: Documents should have as much content as required, and no more. Verbose documents are more difficult to understand, and they increase the cost of localization and printing. This principle is closely related to minimalism.
The overall equation is as follows:
(Q+U+A+C+K)/Cost
Consider your specific environment when refining the calculation. For example, you might divide 100 total quality points among the five measurements in different ways depending on your industry.
| Metric | Regulated documentation | Consumer documentation |
|---|---|---|
| Quality | 9 | 30 |
| Usability | 10 | 30 |
| Accuracy | 40 | 10 |
| Completeness | 40 | 10 |
| Conciseness | 1 | 20 |
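To make the weighting concrete, here is a minimal sketch of how the weighted (Q+U+A+C+K)/Cost calculation might look in Python. The function name, the 0-to-1 raw scores, and the cost figure are all hypothetical illustrations, not values from the post; only the "regulated documentation" weights come from the table above.

```python
def quack_score(scores, weights, cost):
    """Weighted (Q+U+A+C+K)/Cost.

    scores  -- dict of raw scores per metric, each on a 0-1 scale
    weights -- dict of points per metric (should total 100)
    cost    -- cost of producing the document, in whatever unit you track
    """
    total = sum(scores[metric] * weights[metric] for metric in weights)
    return total / cost

# The "regulated documentation" column from the table above.
weights = {"quality": 9, "usability": 10, "accuracy": 40,
           "completeness": 40, "conciseness": 1}

# Hypothetical raw scores: 0.0 = fails entirely, 1.0 = perfect.
scores = {"quality": 0.9, "usability": 0.8, "accuracy": 0.95,
          "completeness": 0.9, "conciseness": 1.0}

print(quack_score(scores, weights, cost=10.0))
```

A consumer-documentation team would swap in the second column of weights; the point is that the same five raw scores can yield very different overall numbers depending on what your industry values.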
Your biggest cost factor is probably the cost of technical communicators. You may also want to factor in other costs, such as software and hardware. Unless you are printing actual paper books, your production costs are probably minimal. (Web servers are cheap!)
What are your thoughts? Do these factors give you a way to calculate your overall document quality?
Larry Kunz
Sarah, I really wanted to include a duck pun here but I’m drawing a complete blank. Anyway, I think your QUACK approach has promise.
The ideal measurement is simply “How well does the document satisfy its intended use?” But that’s very hard, if not impossible, to quantify. If we focus on the realm of what’s possible, then your five QUACK criteria get us about as close as we can to a meaningful measurement. And I really like the idea of weighting the criteria based on industry.
Unfortunately, some of these are still pretty subjective. But maybe that’s not really a problem. The way I define these for my organization doesn’t have to match the way you define them for your organization. Do you see it that way too?
Sarah O'Keefe
@Larry Yes, my thought is that you take the five QUACKtors (sorry), and figure out how to weight them for your particular organization — or perhaps even differently for different document types. I provided two examples, but I would assume that your specific situation would be different.
Larry Kunz
I’m thinking that not just the weighting would differ, but also the way that we score each metric. Under Usability, for example, how do you assign a numeric value to the “document navigation system” or to whether the author “employed the proper medium”?
You can probably find a way to do that — a way that works for you. And I can find a way that works for me. If my way and your way are different, who cares? Each of us still has a tool for comparing documents within our own organizations.
Or have I misunderstood?
Sarah O'Keefe
No, I think we’re in vehement agreement. 🙂
Patti Vaitaitis
I really like the idea! I have struggled for the last few years to try to explain to non-Tech Comm people how you can judge the overall quality of a piece of documentation.
That said, I think I see where Larry is going with the need for a little more quantification for some of the QUACKtors. But, my hunch is that there are some scoring and rating techniques that English teachers use to evaluate student essays, right? To me, this is a very similar case. Once you figure out what those are, then the scoring should be relatively easy and objective.
Just some thoughts…
Mark Waite
Can you explain further what you hope to gain by reducing a complex, intellectual property centered activity like creating documentation (or creating software) to a single valued score?
Who will use the score, and how will they use it? Will writers use it to compare within their team? Will managers use it to decide the value of writers? Will it be ignored?
How can / will the score be misused to harm otherwise excellent team members whose writing fails to fit the model?
Have you considered Robert Austin’s “Measuring and Managing Performance in Organizations” as a source of the dangers of attempting to oversimplify intellectual property work?
Have you considered Cem Kaner’s “A Short Course in Metrics and Measurement Dysfunction” as another source of insights into the risks inherent in applying scores to writing?
Have you considered Michael Bolton’s “The Metrics Minefield” for another view of the risks associated with metrics on creative endeavors?
One of Bolton’s quotes is “I Don’t Hate Numbers”, followed by “I love numbers so much that I can’t stand to see them abused as they are by people in our profession”. Is this a case of abusing numbers by assigning them unjustified values?
Could you get the same (or better) results from a scoring “rubric” which guides a skilled reviewer in assessing the value of a document?
I’m not claiming that we should not measure (or score papers in school, or grade creative writing in school). I think there are more axes than 5 to consider, and more ways to evaluate the quality of a document than assigning weighted measures to each of the five axes.