XML product literature

October 27, 2014

Your industrial products become part of well-oiled machines. Unfortunately, your workflow for developing product literature may not be as well-oiled.

Using desktop publishing tools (such as InDesign) to develop product literature means you spend a lot of time applying formatting, designing complex tables, and so on. These time-consuming, manual chores:

  • Lengthen the time it takes to get information to customers
  • Make it difficult to update information quickly
  • Provide many opportunities for introducing errors into content

XML-based workflows can solve these challenges in the development of product literature for machinery and industrial components. This post provides three examples of how XML can improve your processes for developing product literature:

  • Creating specification sheets
  • Managing common content across models, product lines, and departments
  • Handling OEM rebranding

Creating specification sheets

Putting together spec sheets and datasheets in a traditional desktop publishing environment is just painful. It’s easy to introduce significant errors by merely transposing digits in a part number, for example, and let’s not get into the horror of composing tables in a DTP tool. By the time you finish the layout, the information may be outdated—and customers haven’t even seen it yet!

“Machine Human” Maria in Metropolis (1927)

Part numbers, product dimensions, and the like often exist in a database (or multiple databases). It’s better to extract that content from the database as some kind of markup and then insert it into the source files for product literature.

The exact process will vary depending on the database and other tools involved, but generally, you want a workflow that extracts the information from the database and formats it automatically. By eliminating the need for human intervention (typing information, applying formatting, and so on), you reduce the possibility of introducing errors and shorten the amount of time it takes to get content to customers.
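
The markup details will differ from shop to shop, but as a sketch, an export script might wrap each database record in DITA reference markup like this (the product, part number, and values are invented for illustration), which your stylesheets then format automatically:

<reference id="pump-a100-specs">
   <title>A-100 pump specifications</title>
   <refbody>
      <properties>
         <property>
            <proptype>Part number</proptype>
            <propvalue>PN-10042</propvalue>
         </property>
         <property>
            <proptype>Flow rate</proptype>
            <propvalue>40 L/min</propvalue>
         </property>
      </properties>
   </refbody>
</reference>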

Because the workflow is automated, you can also release updates more frequently. If you release specs or datasheets in electronic format (web pages, for example), you could set up nightly updates to distribute the latest information.

Managing common content across models, product lines, and departments

The different models of a product usually have shared features, and those common features can stretch across product lines that contain the same parts.

In a traditional desktop publishing environment, it’s very easy to end up with multiple versions of content about a particular part or feature because there is no “single source of truth.”

A modular content workflow eliminates this problem: you develop chunks of content and mix and match them for a particular information product (a user manual or web page, for example) according to product features. Generally, a component content management system (CCMS) manages the chunks, and authors can search the CCMS to find the modules of content they need.

Sharing content modules has two big benefits: reuse reduces the amount of time it takes to develop content, and customers see consistent information within and across product lines.

Content chunks can also be shared across departments. For example, a table with specs for a part can appear in a user guide, a trade show handout, and on the web site. Even though that table may be presented with different formatting in those information products, the XML source is still the same for all. That’s the great benefit of XML-based content: formatting (usually applied through automated processes) is completely separate from the content itself.
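
In DITA, for example, this kind of sharing typically happens through a content reference (conref), which pulls an element out of a shared file by its ID. A minimal sketch (the file name and IDs are invented):

<table conref="shared/specs.dita#specs/a100-spec-table"/>

Each deliverable carries this one-line reference; the table itself lives in a single shared topic.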

You really need XML at the core of your content to implement industrial-strength (ahem) modular processes. One XML content standard, the Darwin Information Typing Architecture (DITA), was designed specifically for developing modular technical content. Even if an XML standard isn’t an exact fit for your requirements, you can adapt and modify it. After all, the X in XML stands for “extensible.”

Handling OEM rebranding

If your company provides components to other companies in an OEM relationship, an XML workflow streamlines the rebranding of content.

The separation of content and formatting inherent in XML workflows means you don’t have to open and modify multiple source files to change logos, corporate fonts and colors, and so on. Instead, you create a new automated formatting process (possibly using your company’s transformation as a starting point), or you apply the other company’s existing formatting transformation if that company already works in XML. The correct formatting is applied automatically, saving both companies a great deal of time and dramatically shortening the time to market for OEM equipment.

The separation of content and formatting has another huge benefit: decreased localization costs and faster release of localized content, because you eliminate the manual reformatting work associated with translating content.

XML workflows also provide mechanisms for quickly switching out company and product names, addresses, and so on. The modular nature of many XML workflows also enables a partner company to select just the chunks of information they need about an OEM component.

Even if two companies are using two different “flavors” of XML, scripting can automate the conversion. It is much easier to convert XML to XML than to convert content from one desktop publishing program to another.
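
As a sketch of what that scripting might look like, here is an XSLT stylesheet that copies everything unchanged and renames only the elements whose names differ between the two flavors (the element names partNumber and partno are invented):

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
   <!-- Copy everything as-is by default -->
   <xsl:template match="@*|node()">
      <xsl:copy>
         <xsl:apply-templates select="@*|node()"/>
      </xsl:copy>
   </xsl:template>
   <!-- Rename one company's element to the other's equivalent -->
   <xsl:template match="partNumber">
      <partno>
         <xsl:apply-templates select="@*|node()"/>
      </partno>
   </xsl:template>
</xsl:stylesheet>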

 

Desktop publishing tools are wonderful for creating visually rich information. But for product literature, you need a system that produces attractive content, speeds up content production, eliminates tedious reformatting work, and streamlines translation.

XML is a better fit for product literature.

Need more information about how XML product literature can help your company? Contact us.

DITA localization for output (premium)

October 20, 2014

The first step in DITA localization is to translate the actual content of your DITA files. The second step is to address DITA localization requirements for your output. This article provides an in-depth explanation of the localization support in the DITA Open Toolkit.

The DITA Open Toolkit (DITA OT) includes several DITA localization features. When you set up your publishing system (and whenever you add new languages), you need to do the following:

  • Check the language-specific strings files
  • Ensure that language- or locale-specific images are accessible
  • Select typefaces for the target language

(Most of the information in this post applies to all versions of the DITA Open Toolkit. Information about specific file paths applies to the DITA OT version 1.8.)

Not all translatable strings are found in your content [Flickr: othree]

Check the language-specific strings files

When generating output, the DITA OT inserts text strings, such as “Chapter” or “Appendix”, types of admonitions (“Note”, “Warning”, “Caution”), text and slogans on the cover pages, and copyright messages. When the output is intended for a specific language, these pieces of text must match the output language. You want “Chapter 4” to render as “Capítulo 4” in Spanish, as “Chapitre 4” in French, or as “第4章” in Japanese.

To handle this, the strings used by the DITA OT are externalized; that is, they are stored in language-specific files that are separate from the rest of the XSL transforms. Each language (or language and locale) has one or more separate files. Usually, a core plugin provides a base set of strings; plugins built on that core plugin can then add their own strings. Within these files, each string has an identifier, which is not translated, and the string itself.

A large number of these strings are provided by the core DITA OT. For HTML-based transforms, the DITA OT supplies strings files for over 50 languages and locales; for PDF, support for 14 languages is included.

The default translated strings may not meet your needs. The words used in the strings may not align with the word choice, tone, emphasis, or punctuation your organization requires. Also, the PDF strings files are not consistently populated; some strings that are defined in the English files may be missing from the files for other localizations.

Additionally, there may be some strings for which there are no definitions in the core plugin strings files.

Work with your localization team to check the locale-specific strings files provided by the DITA OT. You may have to do this for strings used with both the core HTML and PDF plugins. If the editor or language checker recommends a change, you (or the localizer) should:

  • Identify the strings in the core strings files that you need to change.
  • Copy the elements that define those strings to the corresponding plugin strings file.
  • Change the string definition in the copied element to the new string.
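
For example, if your Spanish reviewer wants a different label for notes, copy the element that defines it into your plugin’s strings-es-es.xml file and change only the text (the identifier and wording here are illustrative; use the ones from the core file):

<str name="Note">Aviso</str>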

When generating output for new localizations, check the DITA OT log file for missing string errors. These will be in the target “transform.topic2fo.main” with the task identifier “[xslt]”. If you find that there are missing strings, you’ll need to add them to the plugin strings file, using the English definitions as a basis for the translation.

File structure for HTML strings files

As of DITA OT version 1.8, the language-specific strings files for the core HTML-based transforms are stored in %DITA-OT%/xsl/common. The file names are in the form strings-xx-yy.xml, where xx-yy is the language identifier as defined by IETF RFC 4646 and implemented by the ISO 639-1 language codes (this is the same language code as used in the xml:lang attribute). An additional file strings.xml (in the same folder) lists the language files that are currently in use.

Each HTML strings file has the form:

<?xml version="1.0" encoding="utf-8"?>
<strings xml:lang="xx-yy">
   <str name="identifier">String</str>
   …
</strings>

Note that the file’s root element (<strings>) contains the xml:lang attribute, which specifies the language (as does the name of the file). Within the root element are one or more <str> elements. Each <str> element has a unique identifier (name attribute); contained in the <str> element is the text that is pushed into your output. The contents of the name attribute should NEVER be translated.

The file strings.xml has the form:

<?xml version="1.0" encoding="utf-8"?>
<langlist>
   <lang xml:lang="xx-yy" filename="strings-xx-yy.xml"/>
   ...
</langlist>

The strings.xml file contains one <lang> element for each supported language.

File structure for PDF strings files

As of DITA OT version 1.8, the strings files for the core PDF-based transforms are stored in %DITA-OT%/plugins/org.dita.pdf2/cfg/common/vars. The file names are in the form xx.xml, where xx is the language identifier as defined by IETF RFC 4646 and implemented by the ISO 639-1 language codes.

Each PDF strings file has the form:

<?xml version="1.0" encoding="UTF-8"?>
<vars xmlns="http://www.idiominc.com/opentopic/vars">
   <variable id="identifier">String</variable>
   ...
</vars>

Each file contains one or more <variable> elements. Each <variable> element has a unique identifier (id attribute); contained in the <variable> element is the actual string. Some PDF strings may include one or more parameters which allow the transform to insert text into the strings. For example, the Italian strings file contains this entry for a figure title:

<variable id="Figure"> Figura <param ref-name="number"/>: <param ref-name="title"/></variable>

Note that the variable id attribute and the param element’s ref-name attribute should NEVER be translated.

Make sure the translator understands that their job is only to translate the contents of the <str> or <variable> elements. They should not translate the attributes (apart from modifying the contents of the xml:lang attribute), nor should they translate the comments (any text surrounded by “<!--” and “-->”).

Additionally, the strings may contain spaces or non-breaking spaces (usually represented with the entity “&#160;”); these should remain exactly as they are in the original (as much as possible).

Most strings files contain comments and notes to the translator. In particular, some strings files contain paths to images; most of these are accompanied by a note NOT to translate the paths.

Additionally, the strings files may contain URLs for partner organizations or language- or locale-specific web sites. You may want to examine the contents of the strings files and determine which URLs should be made locale-specific and which should be left untouched.

When the strings files are returned from the translator, add the translated (and renamed) strings file to the plugin folders as described.

For HTML-based plugins you must also:

  • Ensure that the translator correctly modified the xml:lang attribute on the <strings> element in the file containing the translated strings.
  • Update the plugin-specific strings.xml file so that it contains a reference to the translated strings file. (You should run the integrator after updating this file.)

For PDF-based plugins you must also:

  • Ensure that all strings in the English strings file exist in the strings file for your localization. If they don’t, you’ll need to provide those strings in your plugin’s strings files.

Ensure that locale- or language-specific images are available

Just as the DITA OT inserts strings into output when necessary, it can also insert icons and other images as required; for example, icons for admonitions (notes and hazard statements) and company logos in page headers or footers.

Most icons and images are intended for use in all languages. But sometimes, specific icons are required for a locale or language. Common reasons include:

  • Icons or images that include language-specific text
  • Icons or images that are culturally sensitive

What do you have to do?

If you need to substitute images based on the output language, do the following:

  • Ensure that locale- or language-specific image files are available in the appropriate artwork folder
  • Ensure that the paths to the output location of these image files are saved as strings in the language-specific strings files, as in the sketch below. Generally, the path to each image will be the same except for the file name.
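
For example, an HTML strings file might carry the path to a locale-specific caution icon in an entry like this (the identifier and file name are invented for illustration):

<str name="caution-image-path">images/caution-de-de.png</str>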

Select typefaces for the target language

To generate PDF files, the transforms need typeface specifications. The DITA OT allows us to define classes of typefaces (“logical fonts”) that are associated with specific types of text. For instance, you might define that your body text uses a serif font, titles use a heavy-weight sans serif font, and that running heads use a lighter form of that same sans serif font.

Each of the logical fonts is associated with a physical font. The physical fonts are often determined by the style guidelines for your company or organization; they ensure that your information products project a consistent look and feel.

The fonts you select must support all characters used by the target localization.

If you are creating a localization for a language that requires extensive use of a non-Western character set, you may need to:

  • Identify typefaces that are associated with your organization’s look and feel in specific locales.
  • Specify how those typefaces are to be associated with specific text applications. That is, the fonts that will be used for body text, titles, heads, and so on.
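
In the DITA OT’s default PDF plugin, these logical-to-physical associations are defined in a font-mappings.xml file. A trimmed sketch (the physical font names are examples, not recommendations):

<font-mappings>
   <font-table>
      <logical-font name="Serif">
         <physical-font char-set="default">
            <font-face>Times New Roman</font-face>
         </physical-font>
         <physical-font char-set="Japanese">
            <font-face>MS Mincho</font-face>
         </physical-font>
      </logical-font>
   </font-table>
</font-mappings>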

Summary

When localizing your DITA content, remember that DITA OT plugins do contain localized information. The strings, images, icons, and fonts that are a part of your final work products must be translated or localized with the same care and cultural sensitivity as your content.

For more information about localizing your plugins, localizing your content, or developing a content strategy to facilitate localization, contact us at www.scriptorium.com/contact-us.

The case for XML marketing content

October 14, 2014

What’s the first thing that comes to your mind when I say “XML and content”? If large technical documents and back-end databases pop into your mind, you’re in good company. But many content-heavy groups can benefit from adopting XML. Marketing is one of these groups.

If you’ve ever worked in or with a marketing department, you are painfully aware of the vast amount of content that’s produced. From web and social media to brochures, catalogs, and product sheets, marketing content comes in all shapes and sizes.

To be effective, marketing content needs to echo the same core information while catering to different audiences and purposes (product sheets, web content, and other promotional material, for example). The design of the finished content may vary widely, but the information needs to be current and accurate.

This can be quite a lot to manage, regardless of how large or small the team is, and it takes time and diligence to ensure that the correct information is used at all times. Even with centralized project folders and shared information sources, the chance of human error is high. One change, such as a version number or a small product update, may need to be made in dozens of places. Chasing all of these uses down is time-consuming and inefficient.

One benefit of XML marketing content is the separation of form and content, which allows you to focus on the message rather than the look and feel of one particular deliverable. With meaningful tagging, you mark up content by what it is (“product tag line” vs. “14pt italic”) and then render it in a variety of ways. The focus stays on the content itself, leaving the look and feel to templates and transforms. Once you flow the content into a template, you can still adjust the formatting while reaping the benefit of managed, centralized content.
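
A minimal sketch of what meaningfully tagged marketing content might look like (the element names and copy are invented for illustration):

<product-blurb>
   <tagline>Built to outlast the machines around it.</tagline>
   <benefit>Runs 20,000 hours between service intervals.</benefit>
</product-blurb>

The same <tagline> element can render as large italic type on a product sheet and as a heading on the web site, without anyone retyping it.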

XML marketing content can help you break the copy/paste habit

Another benefit is content reuse and the conditional inclusion and exclusion of content. Not all content is created equal; sometimes you need to omit some information in favor of other content. You could certainly manage this with cut and paste and a bit of editing, but then you’d be managing multiple sets of content. With all content in one place and tagged for specific uses, you can assemble what you need through content references and conditionally exclude portions that you don’t need.
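
In DITA, for example, conditional exclusion works through profiling attributes on the content plus a filter file applied at build time. A minimal sketch (the attribute value and copy are invented):

<p audience="distributor">Volume pricing is available through your regional sales office.</p>

A one-line rule in a DITAVAL filter file then drops that paragraph from outputs aimed at other audiences:

<prop att="audience" val="distributor" action="exclude"/>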

If you localize your content, you’ll benefit from significant cost savings. Content reuse not only reduces the number of words requiring translation; it can reduce the chance of fuzzy matches against translation memory that are usually introduced by formatting inconsistencies and manual line breaks. And, since XML is raw text, there are no DTP-associated costs or delays accompanying your translation.

An XML workflow can benefit any group with many content deliverables, hefty translation requirements, and the need to reuse information in multiple places. If you’re feeling overwhelmed with your existing workflow, contact us to see if XML might be a good fit for you.

Content strategy and engineers

September 29, 2014

When developing a content strategy, you consider marcom, techcomm, and other groups whose primary role is creating content.

But don’t forget about engineering. Just ask the NASA Mohawk Guy.

Bobak Ferdowsi of NASA’s Jet Propulsion Laboratory—a.k.a. NASA Mohawk Guy of the Curiosity mission—recently pointed out how important communication and content are in his job:

Most people don’t realize how much…communication…an engineering job requires. I think about the things I end up sometimes spending a little more time on: for example, PowerPoint, making slides. … You still have to tell other people what you are doing, try to convince people of a certain approach to something, or demonstrate why one decision is more important than another. That communication skill is a real part of the job that most people don’t see.

Self-portrait by Curiosity Rover (photo by NASA/JPL-Caltech/Malin Space Science Systems via Wikimedia Commons)

Ferdowsi recognizes the big part content plays in engineering and other departments perceived “as always cranking away on things, and turning wrenches.” Unfortunately, a lot of employees in the more content-heavy groups (marcom and techcomm, in particular) don’t always share his wisdom.

It’s too easy to follow stereotypical thinking: Engineers aren’t professional writers, so their content is of little value. I’ve encountered that attitude often in my career, and I’ll admit to succumbing to such stupid thoughts on occasion myself.

Bottom line: you cannot implement a successful content strategy without considering the content contributions of engineers, support, and other groups who create content as a secondary part of their jobs.

It shouldn’t take a rocket scientist to figure that out.

Content strategy burdens: cost-shifting

September 22, 2014

In assessing an organization’s requirements, it’s important to identify content strategy burdens. That is, what practices or processes impose burdens on the organization? A content strategy burden might be an external cost, such as additional translation expense, or it might be an internal cost, such as a practice in one department that imposes additional work on a different department. A key to successful content strategy is to understand how these burdens are distributed in the organization.

Pain points are areas where an organization is inefficient or has unpleasant workarounds. When department A chooses a workflow without consulting department B, it may create an internal pain point. For example, a software engineering group might use practices that cause a lot of trouble for their QA group. Or HR creates a policy that IT has to enforce, but the infrastructure isn’t available to do so.

Over and over, we see this pattern in our client companies. We discover that one department has a tremendously inefficient approach to one of their responsibilities. As we dig into the issue, we discover that the efficient approach is blocked to them because of another department’s lack of cooperation.

“Wait, you retyped these specifications? Aren’t they in a database somewhere?”

“Yes, but the product manager refuses to give us access to the database because he doesn’t want to pay for another database license.”

“How much is the license?”

“$500.”

“How many hours did it take to retype this information?”

“40, and we have to redo it every quarter.”

So…the process takes longer, costs more, and is less accurate. At even $50 per hour, those 160 hours of retyping each year cost $8,000. But the other manager saves $500.

This is a trivial example of a potentially huge problem. In content strategy assessments, we look for high-value information that flows through an organization. Common examples of this include the following:

  • Product specifications (developed by product management; used by engineering to build products and in customer-facing content)
  • Repair procedures (used by support organizations, quality organizations, and in customer-facing content)
  • Product descriptions (used throughout the organization)
  • Technical illustrations (used in a variety of customer-facing content and in the product design/manufacturing process)
  • Product interface labels (used in the product interface and customer-facing content)

The goal for each of these items is as follows:

  1. Store information in a known location (“single source of truth”).
  2. Automate reuse so that it is accurate (no copying and pasting, rekeying, manual editing of copies, and so on).
  3. Make information updates in the original content, not in the downstream copies.

Managers must be held accountable not just for the performance of their individual departments, but for their cooperation and collaboration across the organization. This requires executive management to understand the dependencies and conflicting priorities, not just to tell a line employee to “do some content strategy.” If one department refuses to make information available in a format that other departments can use efficiently, that’s a problem.

The organization’s content strategy must be defined and agreed to at the executive level. Executives are responsible for making sure that their departments have the resources they need to implement the strategy. Otherwise, the content strategy burden will fall disproportionately on the department with the least political clout.

And here, for your listening pleasure, is something VERY related…

Content without a face: anonymity, egos, and corporate content

September 15, 2014

The novels of Italian author Elena Ferrante are getting a lot of attention, but “Elena Ferrante” doesn’t actually exist. The writer behind the pen name prefers anonymity and shies away from publicity. Creators of corporate content should take a few pointers from the author when seeking recognition for their work.

Regardless of the kind of information a content creator develops for a company—marketing, technical, training, and so on—the final information product must support the company’s business goals (which usually revolve around making money, saving money, or both).

Anonymity has its place in corporate content (photo by W H, flickr)

Note those business goals don’t include “show the world that content creator John Smith is a brilliant writer” or “enable Jane Doe to show her dazzling proficiency in using Tool X to design content.” Content creators who approach corporate content as a way to get personal recognition are not doing themselves (or their employers) any favors.

There’s nothing wrong with satisfaction from a job well done or expecting to be properly compensated for one’s work. However, the primary purpose of the content produced at work is to support the company and its goals (and, by extension, the company’s success in the marketplace). Corporate content and the processes surrounding it are not about the content creators themselves—unless those authors are the subject of a piece in the company newsletter, and then the content is all about them.

It can be difficult for content creators (and all employees, really) to keep their eyes on the corporate goals while doing their day-to-day work and meeting deadlines. It can be even harder for their managers to offer gentle reminders of those goals when authors get too wrapped up in their daily grind.

So, what are corporate content creators to do? First, check that ego at the office door. Easier said than done, I know. My first draft of this blog post was ripped to shreds. I managed to survive.

To show the world writing or design prowess and to gain personal recognition for it, content creators are better off developing projects on their own time. When authors find other channels for their creative skills, they get to set the goals because those outlets are theirs. Pen names and shying away from publicity are completely optional!

P.S. Don’t equate anonymity in corporate content development with “bland and voiceless.” For example, the marketing group can’t afford to crank out dull, cookie-cutter content. Differentiating the company from its competitors with a distinct voice is key. Otherwise, the marketing content sabotages a primary business goal: making money.

P.P.S. Here’s where my mind went to get the headline for this blog post:

XML workflow costs (premium)

September 8, 2014

Everyone wants to know how much an XML workflow is going to cost. For some reason, our prospective clients are rarely satisfied with the standard consultant answer of “It depends.”

This post outlines typical XML projects at four different budget levels: less than $50,000, $150K, $500K, and more than $500K.

The companies described are fictional composites. You should not make any major budgetary (or life) decisions based on these rough estimates. Your mileage may vary. Insert any other disclaimers I have forgotten.

First, some context. The numbers I’m quoting here include the following:

  • Software licenses, such as a content management system, authoring tools, linguistic analysis, translation management software, and others
  • Software installation and configuration
  • Content migration from an outside vendor
  • Content strategy, implementation, and training services from external consultants (like Scriptorium)

They do not take into account the following “soft” costs:

  • Employee time spent on managing the project, reviewing deliverables, researching options, and negotiating with vendors
  • Lost productivity during the transition
  • Costs from any staff turnover that might occur

They also do not include IT costs:

  • Hardware costs (that said, server costs are usually an insignificant fraction of the overall implementation budget)
  • Network infrastructure costs (network bandwidth or latency issues need to be addressed if authors are storing content in shared repositories rather than their local file systems)
  • IT resources to install, configure, and maintain a new system

$50,000 or less

Count your pennies; you’re going to need them // flickr: epsos

This mid-sized organization has ten or fewer content creators who want to reduce the amount of time spent on formatting and increase their reuse percentage. Translation is required for FIGS (French, Italian, German, Spanish) and CJK (Chinese, Japanese, Korean).

The company uses a source control system to manage file versioning. XML is implemented using DITA with no specialization. The reuse strategy is straightforward and mostly at the topic level. Conditional processing is needed for a few audience variants. The organization pushes output to PDF and HTML, and content is published and translated a few times a year.

The localization vendor provides some support for translation management efforts.

The company moved away from a help authoring tool or a desktop publishing tool, perhaps with single sourcing, because of increasing scalability problems. Over the next several years, the company expects to increase the number of languages that must be supported to more than 20.

Small XML workflow costs

  • Authoring software: $5,000
  • Information architecture/reuse strategy: $5,000
  • PDF and HTML stylesheets: $19,000
  • Content migration: $15,000
  • Training: $6,000

$150,000(ish)

This organization has 20 content creators and two production editors spread across four offices in three countries (and two continents). Authors create content in English and French. Translation is required into over two dozen languages, including Russian, Arabic, and Thai.

The translation effort is costing several million dollars per year, and at least 30% of that effort is in reformatting work. Although there is a lot of reuse potential, small inconsistencies mean that reuse in translation is only about 10%. The goal is to increase the translation memory usage to around 30%. Industry benchmarks indicate that this number is conservative; similar companies are reporting over 50%.

The company implements (relatively) inexpensive content management and translation systems and a reuse strategy intended to maximize reuse down to the sentence level. They choose DITA as the content model and specialize attributes (two new ones) and elements (five new ones) to support company-specific content requirements. Output is PDF and mobile-compatible HTML. Both outputs are required for all languages, so the stylesheets must include support for all languages.

Medium XML workflow costs

  • Content management and translation management systems (including authoring software): $75,000
  • Information architecture/reuse strategy: $15,000
  • PDF and HTML stylesheets: $25,000
  • Content migration: $25,000
  • Training: $10,000

$500,000(ish)

This organization has 50 content creators in half a dozen locations worldwide. Authors create content in English only. Translation is required for more than 30 languages.

The company implements relatively expensive content management and translation management systems, along with linguistic support software, which helps make content consistent as it is written. The cost is easily justified because of the large number of authors: a 10% increase in author efficiency across 50 authors is equivalent to 5 extra full-time employees, or roughly $500,000 per year at a fully loaded cost of $100,000 per person.

The planning phase for this XML workflow takes six months. The company builds out a formal business case and a content strategy document. These serve as the roadmap for the XML workflow. Vendor selection includes a formal Request for Proposal process.

The company reviews existing content and determines that rewrites are needed. This reduces migration costs.

Training costs are reduced by using a single delivery of a train-the-trainer class, along with live, web-based instruction instead of in-person classroom training.

The company is phasing out PDF delivery, but needs a basic PDF stylesheet, along with HTML output. The company is also building an app for iOS devices that customers can use to display content. Search functionality is a big concern because there is so much information available.

Large XML workflow costs

  • Content strategy, along with information architecture and reuse strategy: $50,000
  • Content management and translation management systems (including authoring software and linguistic support): $350,000
  • PDF and HTML stylesheets: $50,000
  • Mobile app: $30,000
  • Content migration: $40,000
  • Training: $15,000

More than $500,000

Take the $500,000ish version and expand it with more authors, more languages, and more output requirements. The line item that most commonly results in a very expensive implementation is integration—such as a requirement to deliver XML content combined with data from product lifecycle management (PLM) or enterprise resource planning (ERP) software.

It’s quite easy to spend $500,000 just on software.

Difficult output challenges can also increase the cost.

 

What level of spending makes sense for you? Consult our XML business case calculator to find out.

Three factors for evaluating localization vendors

September 3, 2014

Localizing content can be a frustrating and expensive effort. In addition to per-word costs and turnaround times, keep these three key factors in mind when choosing a vendor.

Keep your content in check throughout the localization cycle // pixabay: Nemo

Technological aptitude

Your localization vendor should fit seamlessly into your content development workflow. They must be able to work with your source content in its raw form. Whether you have a template-driven WYSIWYG authoring environment or an XML-based one, your localization vendor should work with your files without extra conversion work. Most translation tools in use today can handle nearly any source format, whether it’s Word, InDesign, FrameMaker, or DITA. When you have XML-based content or very rigid templates in place, there should be no need for DTP services: formatting is handled by your templates or your transforms, not by the vendor.

Before you engage a localization vendor, send them samples for a test translation. Evaluate what you receive back, and if there are issues, determine whether they are correctable. DTP charges should be limited to reformatting text or recreating a layout that didn’t exist in the source (translating from a PDF, for example); they should not apply when your content is completely separate from the visual design.

Subject matter expertise

You’ve spent a good deal of time, effort, and money producing your content. You’ve not only invested in technology, training, and workflow development; you have carefully written your content, edited it, and sent it through reviews for technical and stylistic correctness. Your content isn’t a necessary evil in delivering product or supporting customers; it’s an asset that should be treated with care.

Take the time to determine if your localization vendor truly knows your domain. Ask for translator credentials to determine whether they have direct or past experience working in your industry. Ask them to discuss examples of past work in your domain. (You may not get actual copy to review, but you can learn a lot by how they address your questions.) If the vendor seems knowledgeable about your company’s industry and has translators who are also adept, send a small job first to evaluate the vendor’s actual work.

Skipping this level of investigation is detrimental to your business. Erroneously translated content can result in confusing text, frustrated end users, and in cases where the content is used in dangerous settings, bodily harm or death.

Knowledge of local or regional demands

Subject matter expertise isn’t the only important element in producing effective translation. You also need to be mindful of your audience. A big part of this is understanding not just the core language they speak but the nuances local culture adds. There are also many local and regional regulations that stipulate what you must (and cannot) say about your product.

To ensure that your content is understandable and usable for all of your audiences, your localization vendor must be aware of the audiences’ local needs and use translators who are well versed in local dialects and regulations. You must communicate not only your language needs, but also the locations where the languages are spoken. For example, you may need to translate into German, but your German-speaking audiences may live and work in Dresden, Germany, as well as DuPage County, Illinois.

You’ve worked hard to produce quality content that strengthens your company’s reputation. Make sure it reaches all of your audiences at that same level of quality. Do you have questions or need help? Contact us.

Fall 2014 Scriptorium conference round-up

August 25, 2014

We have a full schedule of stellar conferences coming up this fall. We hope to see you at one or more of these events.

Let’s start with the big news. You should be able to recognize us at these events, as we have finally updated our web site photos and profiles. Yes, after only six years, we took some new portraits.

Lavacon: Bring on the zombies

That’s probably not the official conference theme (and we hope it’s unrelated to the photo news), but Bill Swallow is presenting Content Strategy vs. The Undead, and Alan Pringle will be at the Scriptorium booth talking about content strategy—and handing out chocolate (of course). Come very early and there might be a few donuts.

October 12-15, Portland, Oregon

Lavacon conference web site

Information Development World

At IDWorld, Sarah O’Keefe is presenting Risky Business: The Challenge of Content Silos. Gretyl Kinsey will be at the booth with chocolate. No word on whether there will be any reenactments of famous movie scenes.

October 22-24, San Jose, California

IDWorld conference web site

tekom/tcworld

Sarah and Alan will team up for a workshop on Adapting Content for the US Market.

November 11-13, Stuttgart, Germany

tekom/tcworld conference web site

Gilbane Conference

Sarah will be participating on a content strategy panel.

December 2-4, Boston, Massachusetts

Gilbane Conference web site

 

As always, we are delighted to meet with you at any time. Contact us to set up a meeting, or just find us during an event and introduce yourself.