More cowbell!

About a year ago, we added Google Analytics to our web site. I did some research to see which posts were the most popular in the past year:

  1. The clear winner was our FrameMaker 9 review. With 21 comments, I think it was also the most heavily commented post. Interestingly, the post itself is little more than a pointer to the PDF file that contains the actual review.
  2. InDesign CS4 = Hannibal post, which discussed InDesign’s encroachment on traditional FrameMaker features.
  3. A surprise…a post from 2006 in which Mark Baker discussed the merits (or lack thereof) of DITA in To DITA or not to DITA.

Our readers appear to like clever headlines, because I don’t think content quality alone explains the high numbers for some of these posts.

We noticed this pattern recently, when a carefully crafted, meticulously written post was ignored in favor of a throwaway post dashed off in minutes with a catchy title (Death to Recipes!).

For useful, thoughtful advice on blogging, I refer you to Tom Johnson and Rich Maggiani. I, however, have a new set of blogging recommendations:

  1. Write catchy titles
  2. Have an opinion, preferably an outrageous one
  3. More cowbell


Our first experience with print on demand (POD)

It’s been a little over a month since we released the third edition of Technical Writing 101. The downloadable PDF version is the primary format for the new edition, and we’ve seen more sales from outside the U.S. because downloads eliminate shipping costs and delays.

Selling Technical Writing 101 as a PDF file has made the book readily available to a wider audience (and at a lower price of $20, too). However, we know that a lot of people still like to read printed books, so we wanted to offer printed copies—but without the expense of printing books, storing them, and shipping them out.

We have published several books over the past nine years, and declining revenue from books made it difficult for us to justify spending thousands of dollars to do an offset print run of 1000+ copies of Technical Writing 101 and then pay the added expense of preparing individual books for shipment as they are ordered. Storage has also been a problem: we have only so much space for storing books in our office, and we didn’t want to spend money on climate-controlled storage for inventory. (Book bindings would melt and warp without air conditioning during our hot, humid summers here in North Carolina.) For us, the logical solution was print on demand (POD): when a buyer orders the book, a publishing company prints a copy using a digital printing process and then ships it.

We chose Lulu.com for our first experiment with POD, and so far, we have been happy with the quality of the books it produces. We are still exploring our options with POD and may try other companies’ services in the future, but based on our experience so far, I can offer two pieces of advice:

  • Follow the specs and templates provided by the printer, and consider allowing even a bit more wiggle room for interior margins. The first test book I printed had text running too close to the binding, so I made some adjustments to add more room for the interior margins before we sold the book to the public.
  • Look at the page sizes offered by the different POD publishers before choosing a size. If you choose a page size that multiple POD publishers support, you’ll have more flexibility in using another publisher’s services in the future, particularly if they offer other services (distribution, etc.) that better suit your needs. Also, ensure the page size you choose is supported when printing occurs in a country other than your own; some publishers have facilities and partners in multiple countries. In an attempt to minimize the amount of production work for the third edition, I chose a page size for Technical Writing 101 that was the closest match to the footprint of the previous edition’s layout. However, I likely would have chosen a different page size if I had known more about the common sizes across the various POD companies. The page size I chose at Lulu is not supported by CreateSpace, which is Amazon’s POD arm. When you publish through CreateSpace, you get distribution through Amazon.com, which isn’t necessarily the case with other POD publishers. (I’ve read several blog posts about how some authors use the same sets of files to simultaneously publish books through multiple POD firms to maximize the distribution of their content.)

In these tight economic times, POD publishing makes a lot of sense, particularly when you want to release content in print but don’t want to invest a lot of money in printing multiple copies that you have no guarantee of selling. The POD model certainly was a good match for Technical Writing 101, so we decided to give it a try.

I’ll keep you updated on our experiences with POD publishing in this blog. If you have experience with POD, please leave a comment about how it’s worked for you.


A different take on Twittering and technical writers

by Sheila Loring

Technical writers abound on Twitter, as do blog posts on how Twitter can make you a better tech writer.

I’d Rather Be Writing has an alternate take in the article Following the NBA Can Make You a Better Writer. Tom Johnson uses the analogy of Kobe Bryant and LeBron James playing their respective positions on the court. He argues that unless you’re a one-person shop, you’re doing yourself a disservice by trying to be a Jack- or Jill-of-all-trades. Play up your strengths, and minimize your weaknesses, tech writers. Read Tom’s article for more.


Technical writing and social networks

There is an interesting thread on techwr-l about using social networking sites to deliver product information. In the thread, Geoff Hart notes that there is a generation gap between those who turn to unofficial online resources and those who rely on product documentation:

The young’uns go to the net and social networks more than we older folk, who still rely on developer-provided documentation. We ignore this change at our peril. Cheryl Lockett Zubak had a lovely anecdote at WritersUA a few years ago about how she and her son both set out to solve an iPod problem; they both found the solution in roughly equal amounts of time, but she found it in Apple’s documentation, while her son found it on YouTube.

My experience as a user straddles official docs and information available elsewhere. When my iPod locked up a few years ago, I found decent information on Apple’s web site, but the best resource for my particular problem turned out to be on YouTube. A user had made a video showing step-by-step what to do.

The dilemma of official docs vs. Web 2.0 information partially boils down to a question of audience. As part of the process for planning and developing content, technical communicators should evaluate and remember the audience, and that audience consideration now needs to extend to how a company distributes the content. I don’t think there are cut-and-dried answers here; for example, it’s unwise to assume that all folks over a certain age are unaware of or don’t use social networks and other Web 2.0 resources. Ignoring unofficial information channels is certainly not the solution, however.


Death to Recipes!

I love food. I enjoy cooking and I especially enjoy eating. One of my favorite web sites is epicurious.com, and the kitchen shelf devoted to cookbooks sags alarmingly. Many Saturday mornings, you will find me here.

But I am not happy about how recipes have insinuated themselves into my work life. For some reason, the recipe is the default example of structured content. Look at what happens when you search Google for xml recipe example. Recipes are everywhere, not unlike high fructose corn syrup. Unfortunately, I am not immune to the XML recipe infiltration myself.

I understand the appeal. Recipes are:

  • highly structured content
  • well understood

But I think the example is getting a little tired and wilted. Let’s try working with something new. Try out a new kind of lettuce, er, example. This week, I’m trying to write a very basic introduction to structured authoring, and I’m paralyzed by my inability to think of any non-recipe examples.

I’m considering using a glossary as an example. After all, it’s a highly structured piece of content whose organization is well understood. Maybe I’ll use food items as my glossary entries. Baby steps…
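
A glossary entry is also easy enough to sketch in markup. In DITA’s glossentry topic type, for example, a (food-flavored, of course) entry might look roughly like this:

    <glossentry id="blanch">
      <glossterm>blanch</glossterm>
      <glossdef>To plunge food briefly into boiling water, then into
      ice water to stop the cooking.</glossdef>
    </glossentry>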

PS It’s totally unrelated, but this article about two chefs eating their way through Durham (“nine restaurants in one night, at least five hours of eating and drinking”) is quite fun.


I am not a Pod Person

Confession time: I don’t like podcasts.

And I think I know why.

I am a voracious reader. And by voracious, I mean that I often cook with a stirring spoon in one hand and a book in the other. I go through at least a dozen books a month (booksfree is my friend).

So why don’t I like podcasts?

  1. They’re inconvenient. I don’t have a lot of uninterrupted listening time, other than at the gym. And frankly, there’s a bizarre cognitive dissonance in listening to Tom Johnson interview Bogo Vatovec while I’m lifting weights. I tried listening to a crafting podcast, but that was worse — my brain can’t handle auditory input describing crocheting techniques while simultaneously operating an elliptical machine. So I went back to Dr. Phil on the gym TV. It may rot my brain, but at least it doesn’t hurt.
  2. They’re inefficient. I can listen to a 30-minute podcast, or I can skim the equivalent text in 90 seconds.

I’ve been thinking about what would make a podcast more appealing to me, and realized that it’s not really the medium I object to; it’s my inability to control the delivery.

I’ll become a podcasting proponent when I perceive these properties:

  1. Better navigation. Podcasts, like other content, need to be divided into logical chunks. These chunks should be accessible via a table of contents and an index.
  2. Ability to skim. Podcasts need to provide the audio equivalent of flipping pages in a book or scrolling through a document while only reading the headings.

Depending on the software you use to consume podcasts, you may already have some of these features. For instance, a colleague told me that he listened to my recent DITA webinar at five times the normal speed:

I wanted to let you know about something in particular. I listened to it at 5x fast fwd in Windows Media Player while drinking a coke. My heart is still racing. You should try it. :o)

Do you enjoy podcasts? Do you have any special techniques for managing them efficiently?


DITA isn’t magic

The WritePoint staff blog makes a very good point about DITA: it isn’t a magic wand that fixes documentation problems. Also, it’s worth noting that:

… DITA didn’t introduce something completely new. DITA incorporates achievements made in a wide variety of approaches to organizing content that were being proactively conducted starting from 1960’s.

Don’t get me wrong: DITA can be a good solution for many departments that want to set up an XML-based single-sourcing environment. Just don’t expect that a twitch of your nose will convert your legacy content or make the output from the Open Toolkit match your formatting requirements.


Publishing DITA without the DITA Open Toolkit: A Trend or a Temporary Detour?

I estimate that about 80 percent of our consulting work is XML implementation. And about 80 percent of our XML implementation work is based on DITA. So we spend a lot of time with DITA and the DITA Open Toolkit.

I’m starting to wonder, though, whether the adoption rate of DITA and the DITA Open Toolkit is going to diverge.

For DITA, what we hear most often is that it’s “good enough.” DITA may not be a perfect fit for a customer’s content, but our customer doesn’t see a compelling reason to build the perfect structure. In other words, they are willing to compromise on document structure. DITA structure, even without specialization, offers a reasonable topic-based solution.

But for output, the requirements tend to be much more exacting. Customers want any output to match their established look and feel requirements precisely.

Widespread adoption of DITA leads to a sort of herd effect with safety in numbers. Not so for the Open Toolkit — output requirements vary widely and people are reluctant to contribute back to the Open Toolkit, perhaps because look and feel is considered proprietary.

The pattern we’re seeing is that customers adopt the Open Toolkit when:

  • They intend to deploy onto multiple servers, and open source avoids licensing headaches.
  • The Open Toolkit provides a useful starting point for their output format.

Customers tend to adopt non-Open Toolkit solutions when:

  • They need attractive PDF. Getting this result from the Open Toolkit isn’t quite impossible, but it’s hard. There are other options that are faster, cheaper, and easier.
  • They need a format that the Open Toolkit doesn’t provide. The most common requirement here is web-based help. Getting from the XHTML output in the Open Toolkit over to a sophisticated tri-pane help system with table of contents, index, and search… well, let’s just say that it keeps me gainfully employed. AIR is another platform that we need to support.
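
For concreteness, “using the Open Toolkit” here means an Ant invocation along roughly these lines (the parameter names come from the Toolkit’s standard build; the map name is a hypothetical placeholder):

    # From the Toolkit's install directory: build XHTML from a map.
    ant -Dargs.input=userguide.ditamap -Dtranstype=xhtml -Doutput.dir=out
    # Swapping in -Dtranstype=pdf2 produces PDF instead.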

The software vendors seem to be encouraging this trend. In part, I think they would like to find some way to get lock-in on DITA content. Consider the following:

  • Adobe FrameMaker can output lovely PDF from DITA content directly, with no Open Toolkit involved. You can also use the Open Toolkit to generate formats such as HTML.
  • ePublisher Pro uses the Open Toolkit under the covers, but provides a GUI that attempts to hide the complexities.
  • MadCap’s software will support DITA (initially) by importing DITA content and letting you publish through MadCap’s existing publishing engine.
  • Several other vendors provide support for publishing DITA, but do not use the Open Toolkit at all.

The strategy of supporting DITA structure through a proprietary publishing engine actually makes a lot of sense to me. From a customer’s point of view, you can:

  • Set up an XML-based authoring workflow
  • Manage XML content

It’s not until you’re ready to publish that you move into a proprietary environment.

To me, the interesting question is this: Will the use of proprietary publishing engines be a temporary phenomenon, or will the Open Toolkit eventually displace them in the same way that DITA is displacing custom XML structure?


The Golden Rule of technical writing

I stumbled upon a list of tips for technical writers, and I was glad to see tip 7:

Understand Your Target Audience. Write and revise your content according to how your target audience thinks and understands things. Getting into their heads–knowing how their minds process information, how they might react, what they feel is important–allows you to customize your content to tailor-fit their needs.

I would put that tip at the top of the list, but that’s a quibble.

Sarah and I mention the topic of audience a lot in our Technical Writing 101 book; I think it is the most important thing for writers to remember as they create content. You can have an elegant XML-based publishing system that generates all sorts of output with the push of a button, but if your information doesn’t address the needs of users, all the work put into the content and into the process itself is wasted.

That waste becomes even more acutely painful when a user abandons your information and finds helpful content on a blog, wiki, or forum. The contributors of that information probably don’t know (or even care) that they followed the Golden Rule of technical documentation: Audience, audience, audience.


Content creation isn’t just for tech writers

We’ve seen an increase in the number of clients who need documentation processes that include input from part-time contributors (particularly engineers). XML-based workflows make it easier to handle this sort of input. Part-time contributors can enter their information into forms or can edit XML documents in an editor that doesn’t require them to know a thing about publishing tools.
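
The appeal is that contributors supply structure and content, not formatting. Here is a minimal sketch of the kind of document a part-time contributor might edit (element names invented for illustration, not taken from any particular DTD):

    <!-- Hypothetical structure; a real workflow would use its own DTD or schema. -->
    <procedure id="reset-firmware" owner="engineering">
      <title>Resetting the firmware</title>
      <step>Power down the device.</step>
      <step>Hold the reset button for ten seconds.</step>
      <step>Power the device back up.</step>
    </procedure>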

UC Irvine seems to have picked up on this trend in collaboration: the school’s extension program just announced a technical writing class for engineers:

“This course is designed to provide students with writing skills tailored for the science and engineering fields and to correct common problems,” said Jessica Scully, M.J., instructor of the course. “It covers the importance of writing for a particular audience, and applies journalism skills to help students effectively create a focused and concise document.”

The benefits of such a program go beyond engineering. Improving the quality of developers’ writing would likely reduce the cost of creating a unified voice in content (which in turn would lead to a smoother localization process). And last but not least, the end users (internal or external) would get better documentation.

This class could also help engineers gain an appreciation of the skill sets technical writers bring to an organization. That being said, it would be unfortunate if a company made the short-sighted mistake of thinking that sending engineers to a class like this would transform them into instant technical communicators.


The Age of … Expertise?

Over on O’Reilly’s Radar blog, Andy Oram has a fascinating article about the demise (!) of the Information Age and what will be next:

[T]he Information Age was surprisingly short. In an age of Wikipedia, powerful search engines, and forums loaded with insights from volunteers, information is truly becoming free (economically), and thus worth even less than agriculture or manufacturing. So what has replaced information as the source of value? The answer is expertise. Because most activities offering a good return on investment require some rule-breaking–some challenge to assumptions, some paradigm shift–everyone looks for experts who can manipulate current practice nimbly and see beyond current practice. We are all seeking guides and mentors.

What comes after the information age? (be sure to read the comments, too)

It’s an interesting idea, but I don’t think we’re getting away from the Information Age into the Expertise Age. After all, expertise is just a specialized (useful!) form of information.

In the comments, Tim O’Reilly points out that the real change is in how information is gathered and distributed with “the rise of new forms of computer mediated aggregators and new forms of collective curation and communication.”

I believe that we are still firmly in the Information Age because information has not yet become a commodity product. There is, however, clearly a shift happening in how information is created and delivered. I think it’s helpful to look at communication dimensions:

  • Traditional technical writing is one-to-many. One person/team writes, many people consume it.
  • Wikis are many-to-many. Many people write; many people use the information.
  • Mailing lists are many-to-one. Many people respond to one person’s question.
  • Technical support is one-to-one. One person calls; one person responds.

Technical support is the most expensive option; it’s also often the most relevant. Technical writing is more efficient (because the answer to the question is provided just once), but also less personal and therefore less relevant.

Many technical writers are concerned about losing control over their content. For an example of the alarmist perspective, read JoAnn Hackos’s recent article on wikis. Then, be sure to read Anne Gentle’s eponymous rebuttal on The Content Wrangler.

Keep in mind, though, that you can’t stop people from creating wikis, mailing lists, third-party books, forums, or anything else. You cannot control what people say about your products, and it’s possible that the “unauthorized” information will reach a bigger audience than the Official Documentation(tm). You can attempt to channel these energies into productive information, but our new information age is the Age of Uncontrolled Information.

Furthermore, the fact that people are turning to Google to find information says something deeply unflattering about product documentation, online help, and other user assistance. Why is a Google search more compelling than looking in the help?


Why XML and structured authoring is a tough transition

Found on technicalwriter’s blog:

There are several applications that incorporate features for DITA use, such as XMetal and Altova Authentic, but how much value do they provide? (Looking over the online documentation for XMetal, you will see some pretty shaky formatting and copyfitting.)

There may well be formatting and copyfitting issues. Wouldn’t surprise me at all. But talk about missing the forest for the trees!

DITA/XML/structured authoring are important because they improve how information is stored. To question their value because somebody produced documentation using them that doesn’t look so great…let’s try an analogy:

Last week, I went to a restaurant and the food was terrible. I looked in the kitchen and saw Calphalon pots and pans. I conclude that you should not buy Calphalon because the food they produce is terrible.

The quality of your food is determined by things such as the quality of the ingredients and the skill of the chef. The pan you choose does contribute — it helps to use the right size and a high-quality pan. But dismissing DITA because one example doesn’t look quite right is pretty much like dismissing Calphalon because somebody once cooked something that didn’t taste very good in it.

PS I like Calphalon. And I have produced my share of problematic entrees.
PPS DITA is not right for everybody.


chutzpah

Look in the dictionary, see a reference to MadCap Software. Their latest:

MadCap Blaze is the heir apparent to Adobe FrameMaker.

I haven’t seen Blaze, and as far as I know, it is not yet available in beta. Therefore, this claim seems just a tiny bit premature.

Also, Blaze is going to have tight integration with XML Paper Specification, otherwise known as “PDF-Killer.”

I blogged about XPS early on, when it was code-named “Metro.” I’m very skeptical about XPS; dislodging PDF will take a huge effort. I’m puzzled by MadCap’s focus on XPS.


“Perception is reality”

Once upon a time, a long time ago, a wise manager told me this in response to some whining from me. Things were happening, life was unfair, and I couldn’t understand why my wonderful contributions weren’t being appreciated.

“Perception is reality.”

The perception was wrong, and reality was irrelevant. Never mind whether I was doing a fantastic job — upper management didn’t see it that way, and their evaluations are based on their perception.

It seems that RoboHelp has a similar problem. Ellis Pratt writes on the Cherryleaf Technical Authors’ Blog: “The challenge for Adobe, I believe, is to develop a better product and to try and rebuild relationships that haven’t been nurtured properly for the past four or five years. Maybe it’s time they read ‘The Tipping Point’.”


Oh, this is not a good idea

[Update: According to Aseem, comments are back on and turning them off was unintentional.]

In an earlier post, I linked to a blog posting from the Adobe Product Manager for FrameMaker, who requested product suggestions via meetings and email. But, unsurprisingly, the requests went into the comments. And most of the commenters are asking for a Mac version. And now we have this (from a comment on my post):

It appears the ability to comment on that post has been turned off. If I had been allowed to comment, here is what I would have written.
[another request for Mac support with a detailed recommendation on how to do it]

I suppose that it’s possible that Adobe’s blog system limits each entry to 16 comments?

<crickets>

Probably not.

I don’t think that a flood of “gimme back my Mac” was what Aseem was looking for. (Hi, Aseem!)

Blogs are a two-way conversation. Sometimes, the person you’re talking with changes the subject. And hitting the mute button is really not the best way to deal with that.

[I will now await a flood of comments that will make me eat my words.]


Authoring styles and art

Norm Walsh tackles topic-oriented authoring and makes a comparison to art.

Imagine that instead of authors, we were painters. In the narrative style, a painter (or perhaps a group of painters) begins at one side of the canvas and paints it from beginning to end (from left-to-right and top-to-bottom). They may not paint it in a strictly linear fashion, but the whole canvas (the narrative whole) is always clearly in view.

Interesting point, and he uses an image of a Vincent van Gogh painting, chopped into unattractive bits, to illustrate what goes wrong in topic-oriented authoring. The flow of the picture is lost.

But what if your content more closely resembles something by Mondrian?

[image: one of Piet Mondrian’s grid paintings]

Writing useful technical documentation is really, really hard. Using a narrative flow makes it a little easier to ensure that you’ve got the big picture — missing information jumps out at you just as Norm’s chopped-up painting shows.

But topic-based authoring has advantages, too.

Do you need those connections from piece to piece, or can individual parts stand on their own?

Are your documents Mondrian or Van Gogh?

I hope for your sake that the product you’re documenting does not resemble Jackson Pollock’s work.


If it’s not Scottish, it’s…

[refresh your memory here]

The Aberdeen Group has released a new report, entitled The Next-Generation Product Documentation Report: Getting Past the ‘Throw-It-over-the-Wall’ Approach. (Could that title be any less, um, Scottish?) Access is free until February 23 when you provide your email address.

The Content Wrangler is not impressed:

The folks at Aberdeen do not truly understand the market, despite many interviews with thought leaders in the documentation arena. […] The survey appeared to be designed to obtain results for each of the sponsors […], instead of questions designed to paint an accurate picture of the documentation industry without regard for the concerns of sponsors.

I lost interest because I think their basic hypothesis is wrong:

Causing a missed product launch because of incomplete product documentation is the nightmare of every documentation manager.

The vast majority of documentation groups that I’ve worked for/in/with don’t worry about “causing a missed product launch.” If the documentation isn’t ready, the product will still ship. The documentation may be incomplete, inaccurate, unreviewed, or otherwise problematic, but the product will ship.

I know that there are some industries (medical devices come to mind) where documentation is in fact regulated just as the product itself is. But the vast majority of technical writers that I’ve worked with are concerned with triage — how much of the doc can be completed by the (generally ridiculous) deadline?

Am I missing something? Is there a world of documentation managers out there who stress about actual product delays because of documentation?


My First Blog Ethics Challenge

So today, this is all over the various tech comm lists:

If you’ve got a blog that appeals to technical communication professionals, we’d got a special offer for you. Blog about [deleted] and we’ll send you a free [shameless sponsor] T-shirt courtesy of [shameless sponsor].

[boring details snipped]

Supplies are limited. T-shirts XL only.

XL only?!? In that case, forget it.

Now, if they were offering chocolate…

On a completely unrelated note, I’d like to mention that I’ll be presenting at the STC Trans-Alpine Chapter conference on April 18-20. In Switzerland, which as you probably know is famous for watches, banks, and <cough> chocolate.


Somebody does NOT like DITA

From Jon Bosak’s closing keynote at XML 2006:

Another ancient subject that seems to be popping up again is the idea of modular document creation. This is one of those concepts that comes through about once a decade, seduces all the writing managers with the prospect of greater efficiency, takes over entire writing departments for a couple of years, and then falls out of favor as people finally realize that document reuse is not a solvable problem in document delivery but rather an intractable problem in document writing — which is, how to retain any sense of logical connection between pieces of information while writing as if your target audience consisted entirely of people afflicted with ADD.

I don’t think I agree completely, but he does have a point.

I could go on at length about this, but instead I’ll simply leave you with the observation that my personal love affair with modular documentation occurred in 1978 and that I haven’t seen a thing since then that would change the conclusions I reached about it almost thirty years ago. This is not to say that I’m trying to discourage the technical writing community whence I came from their enthusiasm for the modular authoring technology du jour, since engagement in such efforts is virtually guaranteed to buy tech writers a few years in which they can act like software engineers and present themselves as engaged in cutting-edge informational technology development rather than plain old technical writing. That strategy has worked great for some of us.

I think perhaps the arguments for and against single-source publishing are a better place to look. There is a school of thought that argues that single sourcing results in inferior deliverables, both in print and online. But the cost savings from single sourcing are so compelling that nobody really argues for hand-crafting printed and online materials separately any more. (Based on my experience, I think that the quality difference between material that is single sourced (well) and material that is hand-crafted (well) is quite small; perhaps around 10 percent. But that last 10 percent is extremely expensive.)

With XML/DITA/modular documentation, there is a similar cost argument. Document reuse and especially localization workflows benefit from modular documentation. For localization teams, getting content in topics rather than monolithic books can result in incremental localization and thus the ability to “sim-ship”: to ship the product in the source language and target languages simultaneously. This, in turn, means a global product launch and a shorter wait for revenue from the markets for which localization is required.

Thus, requirements to accelerate product delivery and to save money on localization (because of more efficient reuse) are going to drive implementation of modular documentation. The argument that non-modular documentation is better documentation will become irrelevant.


Play nicely, and share your code

[updated to fix broken link]

Quadralay’s new ePublisher Pro was released with, shall we say, minimal documentation. The user guide describes how to manipulate the basic interface, but details on how to go under the covers and customize the XSL transformations that make up the core of the product are absent.

It appears that the company is trying to address this shortcoming with a wiki. There are some concerns about this approach, though. Char James-Tanny points out that “no one seems to have the rights to any of the material that’s posted.” And Bill Swallow writes this in waxing techcomm: “If the intent is to supply users with a means of online support/reference, I think it would be best to triage the contributed content, have it validated by a company representative, and then published.”

I think the wiki approach raises a larger question–how much documentation should a product creator be responsible for? A product like ePublisher Pro provides a configuration platform–the customization possibilities are endless. For advanced customization, ePublisher Pro is more comparable to a software development environment than a menu-driven application. Documenting a “development platform” is very, very tricky.

Nonetheless, there are some things that Quadralay should have provided and hasn’t. These include:

  • An inventory of the XSL transformation files provided with the product and an overview of what each file does.
  • Examples of how to perform common customizations that cannot be accomplished inside the user interface. (A sketch of what such a customization might look like follows below.)
  • Documentation of the Quadralay-provided XSLT extension functions.

Posting these inside the wiki would be a nice start.
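
To make the second item in that list concrete: customizations of this sort generally amount to small XSLT overrides. A hypothetical sketch (the imported file name and the class names are invented, not Quadralay’s actual stylesheets):

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Pull in the stock transformation, then override one rule. -->
      <xsl:import href="stock-pages.xsl"/>
      <!-- Override: render "note" paragraphs inside a styled div. -->
      <xsl:template match="p[@class = 'note']">
        <div class="note-box">
          <xsl:apply-templates/>
        </div>
      </xsl:template>
    </xsl:stylesheet>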

Quadralay has tried before to put the expert user community to work (anyone remember the WebWorks Publisher forums on their web site?). But speaking as a consultant, I’m not likely to post into the wiki when the ownership of that code is so unclear. Furthermore, we already have the wwp-users mailing list, which has over 3,000 members. Why bother with the wiki?

Finally, there is a massive disclaimer as part of the wiki:

All projects, code snippets, suggestions presented in this medium are colloborative [sic] materials expressed by both Quadralay personnel and WebWorks power users. Material taken from this medium and implemented into your existing production workflow or testing environments should be carefully considered and is done at your own risk. Although our product support consultants can and will place material on this medium to faciliate collaboration between Quadralay and its customers, Support Incidents submitted through webworks.com regarding issues with implementation of this material will not be accepted. Support for the implementations expressed here will only be supported through this medium.

In other words, if you use information posted on the wiki by Quadralay to customize your project, and it doesn’t work, Quadralay support will not help you.

Boo.

I understand the concerns surrounding wikis and the ability of anyone to edit a page, but it seems this could actually be resolved quite easily. The wiki can be set up with Official and Unofficial pages. Official pages are built by Quadralay employees, are editable only by Quadralay employees, and Quadralay support will provide support for those pages. Unofficial pages are those created by ePublisher Pro users; they are the “use at your own risk” section of the wiki.


A student reviews a web-based class

In the November 2006 newsletter of the STC UK chapter, Mark Buffery of Salford Translations writes about his experience with the web-based version of our XML and Structured Authoring class.

The course consists of 4 half-day sessions (approximately 2 hours in length each), and is presented as a web-based meeting with all participants in direct communication with one another through a telephone conference call. This enables the tutor to field any questions raised during their presentations, as and when they are raised. Once logged on, each participant can view the tutor’s screen in real-time as they demonstrate and talk them through the various functionalities being discussed. This was the first time I had ever attended a webinar, and I was not sure just how effective this would be.

I firmly believe that, in theory, classroom training is better than web-based training. The trouble is that classroom training is also much more expensive than web-based training. Typically, the cost of travel (at least) doubles the basic tuition expense, and when you take into account the time spent traveling to and from the training site, costs are even higher. Web-based training allows you to fit the training into your regular workday. You do miss out on the many delights of the airport security line, but I think you can probably manage to contain your disappointment.

Being in direct vocal contact over the telephone was useful, and a better compromise than I had imagined (being used to the more traditionally reciprocal teaching environment of the classroom or lecture theatre). However, once we had been online for a few minutes, it did not seem so strange.

If you’re considering this or other courses with Scriptorium, please read his article for an overview of how things work from the student’s point of view.

One common question that Mark does not touch on is class times. Our commitment to our students is that we will make every attempt to schedule the class so that class meetings are during regular business hours for each student. Most often, that results in an 11 a.m. to 1 p.m. meeting at our local time (U.S. East Coast). If we have only East Coast and European students, we move the time earlier; if participants are west of us, we move the time later. So far, we have not had any participants dial in from east Asia or Australia, but please feel free to sign up and we’ll make sure we meet at a time that’s reasonable for you.
