Scriptorium Publishing

The temperature check

December 24, 2013

This anonymous guest post is part of the Blog Secret Santa project. You can find all of the Secret Santa posts, including one written by Sarah O’Keefe, on Santa’s list of 2013 gift posts.


I recently took a trip to the emergency room, and there it was: The How Are You Feeling chart. Ten yellow faces, ranging from terrified shriek to cheerful giggle. In case you’re wondering, I picked #7, but that’s neither here nor there.

People don’t always have words for what they’re feeling, but the feelings are still there. And just like they matter when you’re huddled on a plastic chair in the ER, they matter when you’re working on content, too.

We have so much to keep track of in our content strategy projects. Things like: *Who do we want to reach?* and *What kinds of content do we need to reach them?* and *Which resources do we have at our disposal?*

We want our content to be relevant, up to date, aligned with business objectives, and so much more. ALL of those things are important. So how can we integrate feelings? I’d like to suggest something I call the Temperature Check. It’s not quite the same as the How Are You Feeling chart, but it’s close.

I started using the Temperature Check during a recent client workshop, when I noticed people’s expressions changing as we talked through our content plan. Sometimes they smiled and nodded. Sometimes they looked irritated. Sometimes their eyes glazed over.

In a brief burst of inspiration, I pulled up our draft plan and asked them to describe how they felt about each part of it: Love it, Hate it, or Meh.

Hello, feelings. It was like a key that unlocked important new insights into what we were doing. Though we had already put our plan through several other filters, this exercise helped us see nuances we had missed before. We took a closer look at the things marked “Hate it” and asked why. Some of them we hated because we’d failed at them before, or because our process for getting them done was painful. We talked about how we could change that. Some were things we had convinced ourselves we “should” do, which led to an excellent discussion about whether we were being lazy, or whether there was a good reason to eliminate them from our plan. Negative emotions have a whole lot of wisdom in them, if only we’ll listen.

Next, we explored how we could make the Meh stuff better. Here again, we discovered some things we thought we should do but, when we were honest, didn’t want to. We wondered out loud why this was such a pervasive issue. Were we trying too hard to imitate strategies others had found successful, but that didn’t seem quite right for our culture? What were we trying to prove, anyway?

Content ownership issues came up as well. Not everyone had the same feelings about the same things. For instance, someone who felt ambivalent about a particular part of the plan found that someone else was feeling the love. Bingo, new owner.

Finally, we used the things tagged “Love it” to understand what the team was truly jazzed about. It wasn’t just the sparkly stuff – some of us loved the nerdy bits, data and spreadsheets and the like, while others gravitated to the more social elements. And surprise, surprise – that led us back to the beginning, talking about what it would take to move more of the “Hate it” stuff to the “Love it” category.

Did the Temperature Check change our strategy? Well, duh.

People are the ones who make content, and manage it, and share it – and those people have feelings. Those feelings affect the quality of the content they make and maintain, and the effectiveness of all of our strategizing and plotting and planning.

What’s happening on your teams? Do you see frowny faces or excited ones? The difference matters.

Give the Temperature Check a try and see what happens.

Perils of DITA publishing, part 6: EPUB and Kindle

October 8, 2012

In which we jump through flaming hoops for EPUB and Kindle.

Plugin central

For the EPUB version of Content Strategy 101, we decided to create a DITA Open Toolkit plugin instead of going the usual route of hand-compiling the HTML, manifest, and table of contents package. Among other things, we wanted an automated way to handle the indexing, as well as the part and chapter numbering and cover image. Starting with the OT’s base XHTML plugin, we created overrides to the default XSLT templates to incorporate our in-house styles and custom fonts. This is standard fare, and easily achieved with small changes to the base plugin. Considering the extent of the work required to create the full EPUB package, however, we opted to create an altogether new transform type in the plugin.xml file and went to work from there.
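
Our plugin descriptor itself isn’t reproduced here, but in the DITA-OT 1.x plugin mechanism a new transform type gets registered along these lines. This is only a sketch: the plugin id and file names below are placeholders, not our actual ones.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical descriptor; the id and file names are placeholders -->
<plugin id="com.example.epub">
  <!-- Register a new "epub" transformation type with the toolkit -->
  <feature extension="dita.conductor.transtype.check" value="epub"/>
  <!-- Pull our ANT targets into the toolkit's build -->
  <feature extension="dita.conductor.target.relative" file="conductor.xml"/>
  <!-- Layer our overrides onto the base XHTML XSLT -->
  <feature extension="dita.xsl.xhtml" file="xsl/epub2xhtml.xsl" type="file"/>
</plugin>
```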

Manifest destiny

Every EPUB requires manifest (OPF) and table of contents (NCX) files — the EPUB spec covers both — so we knew we needed additional ANT targets to crawl through the DITA sources and collect the pertinent information. This required creating a new build file for our plugin that would pull from the ANT targets in the base XHTML transforms and add our new targets alongside.
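
As a rough sketch of that wiring: the plugin’s conductor file chains the toolkit’s own dita2xhtml target to the new EPUB targets. The epub.* target names below are invented for illustration; only dita2xhtml comes from the base toolkit.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of the plugin's ANT conductor file; the epub.* target names are invented -->
<project name="epub-conductor">
  <!-- By convention, transtype "epub" maps to a "dita2epub" target.
       It reuses the toolkit's dita2xhtml target, then runs our own steps. -->
  <target name="dita2epub"
          depends="dita2xhtml, epub.generate.toc, epub.generate.opf, epub.generate.index, epub.package"/>
</project>
```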

Creating an EPUB can be like jumping through flaming hoops. (flickr: imaginedhorizons)

For the EPUB TOC, we created an ANT target to loop through the bookmap and write the headings you see in the left-hand column of the EPUB reader. A couple of challenges arose here, as our part and cover information was stored in <data> elements in the bookmap, so there weren’t HTML files to link to. We got around this by writing additional templates to create HTML files for the cover image and parts information based on corresponding metadata in the map file. We also wanted the TOC file to display the chapter and part numbers, so we had to do some wrangling to make our transforms pick off the chapters and parts and to number them appropriately without interfering with the order of the TOC itself. We did this by counting the parts and chapters and checking their positions against the number of preceding siblings.
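
Stripped way down, the chapter-numbering logic looked something like the sketch below. The mode name is made up, and the real build also rewrites the .dita hrefs to the generated .html file names and offsets the play order to account for the cover and parts entries.

```xml
<!-- Sketch only: number chapters by counting preceding chapter siblings in the bookmap -->
<xsl:template match="*[contains(@class, ' bookmap/chapter ')]" mode="epub-toc">
  <xsl:variable name="chapter-number"
      select="count(preceding-sibling::*[contains(@class, ' bookmap/chapter ')]) + 1"/>
  <!-- In the real build, playOrder is offset for the cover and part entries,
       and the .dita href is rewritten to the generated .html file name -->
  <navPoint id="chapter_{$chapter-number}" playOrder="{$chapter-number}">
    <navLabel>
      <text>Chapter <xsl:value-of select="$chapter-number"/>: <xsl:value-of select="@navtitle"/></text>
    </navLabel>
    <content src="{@href}"/>
  </navPoint>
</xsl:template>
```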

The manifest file has two parts, the manifest itself and the spine block that corresponds to the TOC NCX file and sets the “play order” for the book. For the manifest section, file order isn’t as big a deal, but you must account for every single file in the package, and you must designate the cover image as such using the @properties attribute in the cover’s item entry (more on this at the Threepress blog). To achieve this, we added another ANT target to accomplish two things: 1) peek in the source directories and drum up a list of all HTML, images, CSS, and font files and give them unique IDs that correspond to the entries in the spine block, and 2) crawl the bookmap itself to set the order of the spine block, including entries for the cover and parts files generated at runtime.
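
The generated OPF ends up in roughly this shape; the ids and file names here are illustrative only.

```xml
<!-- Sketch of the generated OPF; ids and file names are illustrative -->
<manifest>
  <!-- Every file in the package gets an entry; the cover image is flagged explicitly -->
  <item id="cover-image" href="images/cover.jpg" media-type="image/jpeg" properties="cover-image"/>
  <item id="cover" href="cover.html" media-type="application/xhtml+xml"/>
  <item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml"/>
  <item id="chapter01" href="chapter01.html" media-type="application/xhtml+xml"/>
  <item id="book-css" href="css/book.css" media-type="text/css"/>
</manifest>
<!-- The spine mirrors the bookmap order: cover page first, then parts and chapters -->
<spine toc="ncx">
  <itemref idref="cover"/>
  <itemref idref="chapter01"/>
</spine>
```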

Index your heart out

DITA indexing has its own attendant heartaches, which Alan Pringle heroically overcame for this book. For the EPUB, to keep it (relatively) simple, we created a further ANT task to crawl the DITA files and compile all the index entries into an intermediate file for grouping and sorting. From there, the index gets written to an HTML file for inclusion in our manifest and TOC files.
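
The collection step amounts to little more than flattening every indexterm into that intermediate file before grouping and sorting. A minimal sketch (the intermediate element names are our own invention):

```xml
<!-- Rough sketch: flatten every indexterm into one intermediate file,
     ready for grouping and sorting before the HTML index page is written -->
<xsl:template match="/" mode="collect-index">
  <index-entries>
    <xsl:for-each select="//*[contains(@class, ' topic/indexterm ')]">
      <entry sort-as="{normalize-space(.)}">
        <xsl:value-of select="normalize-space(.)"/>
      </entry>
    </xsl:for-each>
  </index-entries>
</xsl:template>
```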

Fun with mimetype

Next, in our plugin build file, we created an ANT task to clean up the output directories and create the EPUB package. An .epub file is just an ordinary ZIP file by another name, and of course zipping packages is no problem in ANT. However, every EPUB requires not just a mimetype file, but an uncompressed mimetype file. When zipping in ANT, you can set compression to zero to forgo compression entirely, but we did want to compress the HTML, CSS, images, and fonts — everything except the mimetype. To get this right, we did a triple zip: we zipped the mimetype with zero compression into its own file, zipped all the other content with compression into a second file, then combined the two as ANT ZipFileSets. ANT’s ZIP task, among other things, lets you combine existing ZIP files while retaining any level of compression. Because the content files were already compressed, we simply combined the uncompressed mimetype ZIP file and the compressed content ZIP file while retaining the source compression, and voila: EPUB. Once we had our final product, we used EpubCheck to validate our work.
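
In ANT terms, the triple zip looks roughly like this (the property names are placeholders); zipping the mimetype archive in first keeps it at the top of the final package.

```xml
<!-- Sketch of the triple zip; property names are placeholders -->
<!-- 1. The mimetype alone, stored with no compression -->
<zip destfile="${epub.temp}/mimetype.zip" compress="false"
     basedir="${epub.src}" includes="mimetype"/>
<!-- 2. Everything else, compressed as usual -->
<zip destfile="${epub.temp}/content.zip"
     basedir="${epub.src}" excludes="mimetype"/>
<!-- 3. Combine the two, keeping each entry's original compression -->
<zip destfile="${output.dir}/${book.name}.epub" keepcompression="true">
  <zipfileset src="${epub.temp}/mimetype.zip"/>
  <zipfileset src="${epub.temp}/content.zip"/>
</zip>
```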

Here, Kindle Kindle …

For the Kindle version, we had to make minor tweaks to the EPUB, the most significant of which was removing the custom fonts for the sake of keeping it dirt simple. From there, we used KindleGen (Amazon’s command-line utility) to create a MOBI file directly from our EPUB, and we were ready to go.

Hit me with your thoughts on Twitter (@ryan_fulcher) or in the comments below, and check back soon for more from our ongoing Perils of DITA publishing series.

Migration for the un(der)funded

August 10, 2012

Content migration from format A to format B is a challenge in the best of times. And then there are the worst of times, like the depressing situation in this message (published with permission from the author):

How do you suggest handling this situation: We are a 5-person team that is already maxed out resource-wise, and who is working in an Agile environment with overlapping sprints. We are currently working in unstructured FrameMaker, although we are very conscientious about our tag use. We are being asked by corporate to move to DITA after the first of the year. We don’t feel we have the time or the resources and we will not be afforded the luxury of having someone else convert our documents. Help!

To summarize, a small team is being asked to take on conversion of their content without any support or accommodation. This is a failure of management. (“But,” splutter the managers reading this, “what if they are exaggerating how busy they are in order to avoid the DITA conversion??”) If your team is actively trying to avoid DITA, that’s still the fault of management, because you have not sold them on the process. So this is either a legitimate “not enough resources” problem or it’s a change management problem. Both of those are the manager’s responsibility.

DITA migration: migrating geese spell out DITA (source for illustration: flickr, thirdworld)

Let’s assume that our anonymous correspondent is not the only person facing this problem. Here are some suggestions (and we welcome additional input from our readers, anonymous or otherwise):

  1. Talk to management about the resource issue. Are they willing to negotiate on some other responsibilities to make room for the conversion project?
  2. 99% of our customers tell us that they are “very conscientious” about their tag use. Sometimes, that description is even accurate. Files with clean tagging are much easier to convert automatically than the other kind. Preconversion cleanup is much more efficient than postconversion cleanup.
  3. Know what you have. Topic-based content is much easier to convert to DITA, whereas conversion of poorly organized, repetitious content defeats the purpose altogether. If your content isn’t ready (from an information architecture standpoint), concentrate on getting it where it needs to be before worrying about conversion paths.
  4. Dig in to the free stuff. There are lots of great free tools out there that work quite well (see below). With a little customization, most of these can be, er, persuaded to do almost anything.
  5. Be reasonable. No conversion is perfect.

Tools

Google *DITA conversion free* and watch the results pile up. And although there are file formats that can’t be converted directly to DITA, most anything can be converted to an intermediate format that can then be converted to DITA. The most important consideration, again, is the organization of the content from the outset.

With an eye toward converting from the two most common source formats (FrameMaker and Microsoft Word), there are several good options for basic conversion that should work right out of the box (and again, bear in mind that configuration and customization can go a long, long way toward improving your results).

FrameMaker-to-DITA:

  • FrameMaker conversion tables. A good conversion table will do the work for you. Map your paragraph styles to elements via the table, then run your content against it and voilà: FrameMaker will structure your unstructured content, which you can then save as XML. Conversion tables are fairly simple to set up, and, depending on your source content, can produce solid DITA output.
  • FrameScript and ExtendScript. If you’re using a FrameMaker release earlier than 10, FrameScript (US $149.95) is a reasonably cheap macro scripting program that can help clean up your content both before and after structuring. If you’re using FrameMaker 10 or later, ExtendScript functionality comes built in. Note that scripts for the two are not interchangeable, and that ExtendScript, while free, has its own woes (see Simon Bate’s post on his early experiences with ExtendScript).
  • HTML-to-DITA via the DITA Open Toolkit. Save your FrameMaker files as HTML, clean them up using HTML Tidy (free, GUI wrappers available, can be run against multiple source files with standard batch commands), then run them through the DITA-OT’s h2d tool (also free).

MS Word-to-DITA:

  • DITA for Publishers Word-to-DITA plugin for the DITA Open Toolkit. Define a style-to-tag map and run your Word files through this plugin to produce workable DITA output. Customization can be quite tricky, but the plugin itself is a solid solution for basic conversion.
  • HTML-to-DITA (again). Much the same as with FrameMaker—save as HTML, clean files using HTML Tidy, then run them through the DITA-OT’s h2d tool (a rough sketch follows this list).
  • Word-to-FrameMaker-to-DITA. In the unlikely event that you have both of these tools at your disposal, slurp the Word documents into FrameMaker, then use a conversion table to structure your content.
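
For the HTML-to-DITA route mentioned in both lists, the last step can be scripted with ANT’s xslt task. This is only a sketch: it assumes the h2d.xsl stylesheet that shipped in older toolkits’ demo/h2d directory, and the directory names are placeholders — adjust the paths for your own install.

```xml
<!-- Rough sketch; assumes the h2d.xsl from older toolkits' demo/h2d directory -->
<project name="html2dita" default="h2d">
  <target name="h2d">
    <!-- Transform every Tidy-cleaned HTML file into a DITA topic -->
    <xslt basedir="clean-html" destdir="dita-topics" includes="*.html" extension=".dita"
          style="${dita.ot.dir}/demo/h2d/h2d.xsl"/>
  </target>
</project>
```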

And remember, when you are unhappily retagging your former coworker’s “very clean” files…at least it’s not Interleaf.

Webcast: Six easy ways to control your localization costs

July 17, 2012

In this webcast recording, Bill Swallow, the manager of the GlobalScript division at LinguaLinx, discusses some of the ways you can cut your localization costs while still delivering quality content.

Update 9/26/2014: Bill now works for Scriptorium.

The convergence of information science and tech comm

July 2, 2012

Until I started working at Scriptorium, my educational and work background was in information and library science.


I have worked at the circulation and reference desks in academic libraries at both the community college and four-year university levels. Over the last few months, I’ve been amazed at how much of what I learned in those jobs goes hand in hand with tech comm.

People ask me what information science is, and I usually respond jokingly with “computer science without the math.”  However, it is obviously much more than that.  It covers web design, database design and management, information security, information search and retrieval, and the list goes on.

Library science is also an extremely broad field that ranges from cataloging books to collection development to reference and instruction services. Today’s libraries are at the cutting edge of technology, believe it or not. They work with web 2.0 tools, social media, and the latest gadgets.

There are many different types of libraries, but the major types are academic, public, school library media centers, and corporate.

The biggest misconception of what a librarian does can be summed up in this statement:

You mean you have to have a master’s to check out books and shush people???

Now how do these disciplines work with tech comm? Well, basically, tech comm sits on the flip side of the information life cycle from information and library science. Technical communicators create the containers of information, whether it’s a PDF, HTML, an e-book, or some other form of electronic content.

Librarians and information professionals are responsible for searching and retrieving content from the resources that technical communicators create. They have to search knowledge bases, navigate websites, and access databases.

Something to think about: not everyone has the level of technology expertise that we do. That is why it is important to structure content in a way that makes it simple to understand and navigate. This makes our lives as middlemen easier and helps end users gain access to top-quality information. [Ed. Note: Did you know that a middleman was originally a “maker of girdles”?]

You’d be surprised at how many people don’t automatically know how to search to get the answers they actually need.  Understandable really, because Google is full of junky websites trying to outrank each other.

I still have my toes in the library world, moonlighting as a reference librarian for NCKnows, a statewide chat-based reference service. As I get deeper into the tech comm industry in my day job, the questions I answer for NCKnows serve as a constant reminder to create content that is intuitive and easy to use.

Cloudy with a chance of WOMBAT

June 6, 2012

For remote work, file management in the cloud is way easy. Other methods, not so much…

Remote control

For the past year, I’ve been working remotely from sunny Austin, TX. Flying back and forth has become the monthly norm — I can definitely tell you where the four least-crowded Starbucks are in DFW — but for the day-to-day, we’ve kept the workflow rolling through a combination of cloud tools and brute, manual labor (read: 7-ZIP + Basecamp). Consequently, I’ve had some time to think about what works well and what doesn’t in the context of file management for remote work. As you might have guessed, the cloud can be a tremendous help.

Go cloud or go home

This raises the question: why not go totally cloud? The short answer is: it’s complicated. I’m dealing with whatever our clients throw at us — Microsoft Word files and random Excel spreadsheets, FrameMaker files, the occasional XML or DITA file, you name it. Any team currently working with a random gob of files like this will tell you that going all-cloud would be nice, but… The main reason is that options for cloud integration are limited. For Microsoft Office docs, you could go with Microsoft SharePoint, which can get costly. Or you could go with the Microsoft Office 365 web apps or Google Documents (the cloud office suite list goes on and on), which are definitely feasible, but only if you have the time and inclination to upload, convert, and review. FrameMaker 10 integrates with FrameMaker Server as well as a host of content management systems, which is really great, but only if you have the budget for FrameMaker Server and/or a full-blown CMS. For other local-only files, there’s, unfortunately, not a whole lot to be done. Pre-cloud, my usual procedure looked something like:

  1. ZIP files
  2. Upload archive(s) to Basecamp or FTP
  3. Bang head on desk

Ideas, anyone?

There’s always SVN. Using a program like TortoiseSVN or oXygen’s Syncro SVN client in conjunction with a cloud-hosted repository like Unfuddle works quite well, especially for non-binary files. But SVN was intended as a version control technology, so using it like this is sort of bending the rules. Oh well, I say. It does work. Sure, the problem remains that it’s all too easy to bring files into conflict when someone touches something without diligently committing changes, or when someone starts working on a local copy without first updating from the repository. Then you have to break out the diffing program, which is never fun. And while it all still looks an awful lot like the old 7-Zip + Basecamp method I mentioned above, it definitely beats massive ZIP uploads.

Heavy cloud, no rain (flickr: Robyn's Nest)

The other solution is file syncing. At the risk of sounding like an advertisement, using Dropbox or Google Drive makes it ridiculously simple (and free). With a shared Dropbox directory, you can work on active files straight from the directory itself, without having to worry about uploading to a repo or your team members not having the latest and greatest. Additional storage is generally super cheap, should you exceed the limits. The only hang-up to note with this solution is that older versions of files become irrecoverable. There is a way around this, however: couple it with a cloud backup solution like Jungle Disk (again, dirt cheap) and include the Dropbox directory in your daily backup. This gives you the ability to fetch older versions of your files in a snap, as with SVN, but without the hassle of dealing with a repo. It can get messy if you’re not careful with how you set everything up, but most solutions like this can be set to run automatically, in the background, on a daily basis (or more frequently), which definitely gets a thumbs up.

Pass the Tylenol…

Overall, working remotely has been a great learning experience, and I have a much better understanding of the pitfalls of remote work. File management, rudimentary though it may be, can be a supreme headache — at best, something to remain attentive to; at worst, a complete and total WOMBAT that’ll have you lamenting the endless 404 that has become your workday. Most likely, however, there’s some brew of cloud services that can help you and your organization save a little time and dough. Feel free to hit me in the comments or on Twitter (@ryan_fulcher) if you’d like to discuss further solutions and hybrids.

In vinegar veritas

As for me, Austin has been great (food trucks! live music!), but I’m glad to be heading back to NC (trees! seasons! groundwater!). It’s funny because, as a native North Carolinian, I showed up in Austin with a hefty chip on my shoulder about what barbecue actually is, but the Texas style has definitely grown on me. And while I do dig Austin, as the joke goes, there’s no place like 127.0.0.1.

Learning DITA

May 17, 2012

I started this internship at Scriptorium with very little knowledge of DITA other than the basic definition of it.  One of the major goals of this internship is to learn DITA since it plays such a major role in tech comm.

I have been working on a training exercise that involves using DITA in oXygen. The exercise started with the top-level, bare-bones aspects of DITA. All I had to do was create topics and add them to the DITA map.

From there I have worked my way into more specific tags and menu functions.  Learning DITA never stops.  You can always go deeper.  I have spent a lot of time asking questions and searching the Help files.

One thing I highly recommend doing before learning DITA is to learn HTML.  DITA uses a lot of the basic HTML element tags such as <p>, <ul>, <li>, etc.  Knowing HTML gave me a foundation to work with so that the prospect of learning a new language didn’t seem as daunting.

I also recommend familiarizing yourself with XML and its structure. Again, knowledge of HTML comes in handy here because XML uses element tags. The difference between XML and HTML is that in XML, element tags are defined by the developer rather than predefined.
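
For example, a minimal DITA topic looks reassuringly familiar if you already know HTML:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">
<topic id="sample_topic">
  <title>A sample topic</title>
  <body>
    <!-- The paragraph and list elements work much like their HTML counterparts -->
    <p>Paragraphs use the same tag name as HTML.</p>
    <ul>
      <li>So do unordered lists</li>
      <li>and their list items.</li>
    </ul>
  </body>
</topic>
```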

I am still trying to grasp the lingo and understand what each function is.  Misunderstanding the terms for various aspects of DITA such as elements or attributes can make it challenging to ask questions about them. oXygen is helpful though because it shows when certain element tags are not allowed. I think once I actually see the final output, it will make more sense to me.

The only major frustration I have with oXygen is its Help section. My search terms didn’t seem to bring up relevant results, and I couldn’t enlarge the font in the Help viewer, even though font size is easy to adjust in the main interface.

I worked around the search and font problems by going to the user manual on the oXygen website, where I can choose HTML or PDF format and enlarge the font as much as I need.

The “learn as I go” approach has been effective for me.  I am a visual learner. I have to interact with the content in order to fully understand it.

Any thoughts or recommendations on the best ways to learn and understand DITA?

Integrating accessibility features into technical content

April 16, 2012

Accessibility is a term commonly associated with the process of making content available for people with vision, hearing, and mobility impairments. 

In reality, it should also include making content accessible to everyone, regardless of ability or background.

Hello, my name is Holly Mabry.  I joined Scriptorium as an intern in the middle of February.  One of my biggest interests is working with accessibility issues in relation to electronic information.

One of the biggest mistakes that developers make with accessibility is treating it as an afterthought, bolted on after the product is finished. It would save a lot of time and money to build accessibility into the project development cycle from the start.

Here are a few things to think about when developing websites, apps, or any other accessible electronic content:

  • Keep it simple. Say what you want to say and be done with it. Include well-defined headings and separate your content into readable chunks. Often, flashy extras designed to make information look pretty just create frustration for screen reader users.
  • Good color contrast. Foreground colors should be easy to read over the background colors. AccessColor is a good color contrast analyzer for websites.
  • Captioning or transcripts. Videos should be captioned or include a transcript alternative. YouTube offers a captioning option. This extra step helps the deaf and hard of hearing, and it also lets people in a setting where audio is not appropriate still get the most from the video.
  • Labels. All images and tables that display useful content should have a description that conveys the basic point the image or table is trying to make. Screen readers such as JAWS for Windows rely on these descriptions. They are also helpful in browsers with images disabled.
  • Text size adjustment. Most modern browsers allow text size adjustments through the Ctrl+plus and Ctrl+minus keyboard shortcuts. Use text wrapping to prevent the text from crowding when it is enlarged.
  • Keyboard navigation. Provide keyboard access points so that screen reader users and other non-mouse users can navigate through the page. Include a skip-to-content option so that keyboard users don’t have to slog through unwanted information to reach the main content (see the sketch after this list).
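
Here’s a small sketch of what the last two items can look like in practice; the file names, ids, and class names are made up for illustration.

```html
<!-- Sketch only: file names, ids, and class names are made up -->
<body>
  <!-- Keyboard and screen reader users can jump straight past the navigation -->
  <a href="#main-content" class="skip-link">Skip to main content</a>
  <ul class="site-nav">
    <!-- site navigation links here -->
  </ul>
  <div id="main-content">
    <!-- The alt text states the point of the image, not just its file name -->
    <img src="support-calls-chart.png"
         alt="Bar chart showing support call volume falling steadily over 2011"/>
  </div>
</body>
```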

Additional resources

Web Content Accessibility Guidelines:  Official set of web accessibility guidelines from the World Wide Web Consortium.

Section508.gov: Resources for understanding and implementing Section 508. This law requires federally affiliated websites and other electronic content to be accessible to people with disabilities.

I also keep a fairly exhaustive list of accessibility-related resources on my blog, Accessibility and Technology Geek, under the Accessibility Resources page.