
Sturm und DITA-Drang at tekom

November 16, 2015

This year’s tekom/tcworld conference reinforced the ongoing doctrinal chasm between North American technical communication and German technical communication.

I am speaking, of course, of the proper role of DITA in technical communication. If any.

Executive summary: There may be a valid use case for German non-DITA CMS systems, but vendors are shooting themselves in the foot with factually inaccurate information about DITA as a starting point for their arguments.

The program this year included several presentations, in both English and German, that provided the German perspective on DITA.

The DERCOM effect

We also heard a great deal from a new organization, DERCOM. Founded in 2013, this organization is an association of German CMS manufacturers (the acronym sort of works in German) and includes Schema, Docufy, Empolis, Acolada, and three other member companies.

DERCOM has released a position paper entitled “Content Management und Strukturiertes Authoring in der Technischen Kommunikation” or (as you might expect) “Content Management and Structured Authoring in Technical Communication.” This document is available both in German and in English translation. Unfortunately, the link seems to be obfuscated. Go to the main DERCOM page and find a clickable link under “News.” DERCOM member Noxum has a direct link for the German version.

Uwe Reissenweber explicitly introduced his presentation as providing the official position of DERCOM. (He used the German word “Lobbyist,” but perhaps “advocate” would be a better English translation, since “lobbyist” is so loaded with negative political connotations.) Marcus Kesseler, by contrast, said that he was speaking for Schema rather than for DERCOM in his individual presentation.

Here is what I observed across the various presentations:

  • There was remarkable uniformity in the statements made by the various DERCOM members, even when they said they were speaking for their employer rather than the association.
  • There were a number of talking points that were repeated over and over again.
  • The descriptions of DITA were so technically inaccurate that they destroyed the credibility of the speakers’ entire argument and made it rather difficult to extract valid information.

For example, Uwe Reissenweber asserted that the DITA specialization mechanism, if used to create new elements (as opposed to removing them), does not allow for interoperability with other environments. That is, once you create new, specialized elements, you can no longer exchange your content with other organizations. This statement is technically inaccurate and betrays a fundamental misunderstanding of specialization. When you create a new element (for example, warning), you base it on an existing element (for example, note). Because DITA maintains inheritance information, a downstream user knows that the warning element is based on note and can process it as a regular note element via a fallback mechanism. This is a critical—and unique—feature of the DITA architecture.

Marcus Kesseler asserted that vendor lock-in with DITA-based content is no different from lock-in with a system (like his) that uses a proprietary content model, because so much of the business logic is tied up in the CMS rather than the content model. The overall accuracy of this statement depends on how tightly business processes and other information are bound into the CMS. But it seems indisputable that it would be easier to move DITA content from CMS A to CMS B (with any attendant business logic issues) than it would be to move XML Flavor A from CMS A to XML Flavor B in CMS B. In the second case, you have to move all of the business logic and worry about possible incompatibilities between XML Flavor A and XML Flavor B.

“You can’t learn specialization in an afternoon.” This is a completely true statement from Uwe Reissenweber, to which I say, with great professionalism, “SO WHAT??” Surely we are not advocating the idea that anything that takes more than an afternoon to learn cannot be worth the effort.

After hearing these statements and others (see my Twitter feed for increasingly agitated coverage), it became difficult to take any of the presenters’ arguments seriously. And this is unfortunate, because I do want to understand their position. Kesseler, for example, displayed a chart in which he made the case that business logic is embedded either in the CMS or possibly in the DITA Open Toolkit, but not in the core DITA topics.
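Returning to the specialization claim, here is a minimal sketch of what specialized markup looks like. The element and module names are invented for illustration, but the class attribute is the real DITA mechanism for recording inheritance:

    <!-- A warning element specialized from note. The class attribute
         records the ancestry; any DITA-aware processor that does not
         recognize "warning" falls back to treating it as a note. -->
    <warning class="+ topic/note warn-d/warning ">
      Disconnect the power supply before opening the case.
    </warning>

Generalizing such content back to standard DITA (warning becomes note again) is a mechanical operation, which is exactly why specialized content remains exchangeable.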

His Schema co-founder, Stefan Freisler, believes that only 5–10% of the return on investment realized from a CMS system comes from the content model. Instead, the vast majority of the value resides in the workflow layer.

These are interesting points and worthy of further discussion.

DITA support in DERCOM CMSs?

Eliot Kimber, who has a lot more patience than I do (also, I had a scheduled meeting), stayed through a heated post-presentation Q&A with Kesseler. Kimber had this to say in his trip report:

It was an entertaining presentation with some heated discussion but the presentation itself was a pretty transparent attempt to spread fear, uncertainty, and doubt (FUD) about DITA by using false dichotomies and category errors to make DITA look particularly bad. This was unfortunate because Herr Kesseler had a valid point, which came out in the discussion at the end of his talk, which is that consultants were insisting that if his product (Schema, and by extension the other CCMS systems like Schema) could not do DITA to a fairly deep degree internally then they were unacceptable, regardless of any other useful functionality they might provide.

This lack of support is another starting point for discussion. (I would also note that it’s often not the Evil Consultants, but rather the Opinionated Clients, who are insisting on DITA.)

With a proprietary content model, you are putting your trust and a good bit of your ability to execute in the hands of your CMS vendor. Provided that the vendor does a good job of introducing new features that meet your needs, you could have a long and mutually beneficial relationship. But what if your vendor starts to falter? What if they are acquired and change their strategy to something that doesn’t meet your requirements? DERCOM members assert, first, that they are better at adapting information models to the needs of their clients and, second, that the content model provides only a small part of the value of the CMS.

Do you throw your lot in with a vendor, their release cycle, their software development/R&D efforts, or do you choose to rely on a standard and therefore rely on the OASIS technical committee, with all of the advantages and disadvantages of the committee-based standards-building process?

If the content model is largely irrelevant to the CMS functionality, why not just have the best of both worlds and support DITA inside the Very Superior DERCOM systems? Some of the vendors are doing just that. Empolis supports DITA both in its core CLS offering and in a new, low-cost SaaS system that is under development.

It remains an exercise for the reader to understand why the other vendors are not following suit. Eliot says this:

DITA poses a problem for these products to the degree that they are not able to directly support DITA markup internally, for whatever reason, e.g., having been architected around a specific XML model such that supporting other models is difficult. So there is a clear and understandable tension between the vendors and happy users of these products and the adoption of DITA. Evidence of this tension is the creation of the DERCOM association, which is, at least in part, a banding together of the German CCMS vendors against DITA in general, as evidenced by the document “Content Management and Structured Authoring in Technical Communication – A progress report”, which says a number of incorrect or misleading things about DITA as a technology.

During the German-language Intelligente Information panel, Professor Sissi Closs pointed out the importance of DITA as a multiplier. She mentioned that widespread adoption of DITA would lead to a network effect, in which the standard becomes more valuable because more and more people are using it and therefore training, support, community, and qualified employees are more readily available.

Some statistics

In North America, DITA is the clear leader in XML-based content work. We estimate that at least 80% of structured content implementations use DITA. The equivalent number for Germany is in the 5–10% range, based on research done by tcworld.

Chart from Reissenweber’s presentation, attributed to tcworld as of 2015 (my English translation). In each grouping, the upper bar is for software companies and the lower bar for industrial companies.


Scriptorium’s perspective

For Scriptorium consulting projects, we use a standard methodology with roots in management consulting. In the assessment phase, we develop the following:

  • Business goals for the organization
  • Content strategy to support the identified business goals
  • Needs analysis
  • Gap analysis
  • Requirements
  • Implementation plan, ROI, and budget

The decision whether or not to use DITA is generally made in the requirements phase. Most North American companies, at this point in time, assume that DITA is the path of least resistance because of the large numbers of authoring tools, CMS systems, and supporting systems (like translation management and content delivery platforms) that support it.

DERCOM companies will have difficulty making inroads into this market unless they can affirm that they provide DITA support. Any advantages that they might have in workflow or editorial management are irrelevant if the DITA requirement rules them out as prospective vendors. Additionally, most of these vendors do not have much presence in North America, so support, training, and maintenance are a risk.

Classic case of disruption

In Germany, the DERCOM vendors are clearly dominant at this time. However, their continued insistence that their technology is superior and that the upstart DITA-based options should be ignored follows the classic pattern seen with disruptive technologies. When a disruptive technology offers a clear advantage that is different from the main feature set of the incumbent approach, the incumbents have great difficulty responding to the new challenge.

In the case of DITA, the disruptions are in the following areas:

  • A wide variety of options at each point in the tool chain (authoring tool, content management system, translation management system, content delivery portal, and others)
  • Incremental implementation. Because DITA can work both on a file system and in a CCMS, organizations can opt for an incremental implementation, where pilot projects are built and validated on the file system before CCMS selection begins. (See the sketch after this list.)
  • Open standard. Interested parties can participate in the standards-development process through OASIS. Vendor assessment is based on conformance to the standard, which makes evaluation across vendors easier for the buyer. Content exchange is easier to implement for the buyer.
  • The ecosystem of DITA architecture, software vendors, consultants, open-source community, and more. (Take a look at the DITA projects available just on GitHub.)
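As a concrete illustration of the file-system starting point (filenames invented), a pilot deliverable can be nothing more than a map and a handful of topics in a directory:

    <!-- pilot.ditamap: for a pilot, the "repository" can be a folder
         of files under ordinary version control -->
    <map>
      <title>Pilot project: installation guide</title>
      <topicref href="installing.dita"/>
      <topicref href="configuring.dita"/>
      <topicref href="troubleshooting.dita"/>
    </map>

The same map can move into a CCMS later without any change to the content model.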

NOTE: Translations from German are mine. If the original authors would like to provide a better or alternate translation, please leave a comment. My tweets of German presentations are real-time translations by me. I apologize in advance for any errors.


Content strategy triage

October 26, 2015

Who lives? Who dies? Who do you fight to save?

This article is a summary of a presentation first delivered October 21, 2015, at LavaCon in New Orleans.

Triage is a medical concept. If you have a large number of patients flooding in for treatment, you use triage to decide who gets treated first (or at all). Patients are color-coded as follows:

  • Red (immediate): Need immediate treatment.
  • Yellow (delayed): Need treatment, but can wait.
  • Green (minor): Can wait longer than yellow.
  • Black (expectant): either deceased or cannot survive their injuries.

If you think this is a bit morbid, you’re right. If you immediately wonder how to combine the concept of triage with content strategy, read on!

In theory, you want to find the perfect project in which you have high visibility, high return on investment (ROI), and low risk.

Venn diagram with circles for high visibility, high ROI, and low risk

Content strategy, the theory

The reality is that you are probably going to have to settle for two out of three:

Same Venn diagram as previous, but the intersection of all three circles is labeled "As if..."

Content strategy, in practice

We divide content strategy efforts into three broad phases:

  • Pilot
  • Main effort
  • Ongoing effort

For a pilot project, you want high visibility and low risk. This maps to the red triage tag: do it immediately!

Another Venn diagram with highlighting where low risk intersects with the other two factors.

Ideal pilot project is high visibility and low risk. High ROI and low risk is acceptable.

The purpose of a pilot project is to survive and advance.

Pilot projects do not need to prove cost savings, or show huge improvements. They are intended to convince the Money People that your content strategy has merit, and that they should continue to invest in it. Therefore, the ideal pilot is the smallest, easiest possible project that will gain momentum for the overall effort. A high-visibility improvement that is easy to achieve is a good choice.

After the pilot phase, you move into the main phase of the project. Here, you want to focus on high-impact projects. You take a few more risks because you expect a high ROI. These projects should focus on visibility and ROI. This is the yellow triage tag: things that are important but can wait a little while.

Venn diagram with intersection of high visibility and high ROI highlighted.

The intersection of visibility and ROI is where you find the main project effort.

If you look at content strategy over time, you can see a pattern emerging. Early on, you need low risk and high visibility. Later, you focus more on ROI. You can also gradually increase the complexity of your projects as your content strategy team gains experience and knowledge.

Bar chart shows risk, ROI, visibility, and complexity. Risk lowest, then complexity, then ROI. Visibility is very high.

For a pilot project, you need high visibility and low complexity.

You can gradually increase the risk of your projects and the overall complexity as your team gains knowledge and experience.

Same four bars as previous, but now risk and complexity are high. Visibility and ROI are lower.

As you gain content strategy experience, you can increase the complexity of your projects.


Content strategy triage helps you assess your projects. Measure the following factors:

  • Risk
  • ROI
  • Visibility
  • Complexity

Map each project against these factors, and you will be able to determine which projects should be done early on and which can (and should) wait until you gain more experience.

One of the keys is to figure out which projects should be black-tagged. They may or may not be dead already, but you cannot save them. What sort of content strategy challenges might cause you to walk away instead of attempting a rescue, so that you don’t sacrifice the other content strategy patients on whom you could have spent your resources?

The commodity trap

October 13, 2015

In a recent post on lean content strategy, I wrote about a focus on waste reduction:

After creating a nice automated XML-based process, waste in formatting is eliminated, and we declare victory and go home. Unfortunately, the organization is now producing irrelevant content faster, and the content organization is now positioned as only a cost center.

Is your content perceived as a commodity?

Given a critical mass of content, a move away from desktop publishing (DTP) and into XML publishing offers compelling benefits—faster publishing, more efficient translation, efficient reuse, and so on. (Try the XML business case calculator to see whether it makes sense for your situation.)

Over the past decade or so, many organizations have moved into XML. For the most part, they have implemented what we might call XML Workflow Version 1, which has the following characteristics:

  • Focus on automation, especially in translation, as the justification for the change.
  • Refactoring content to improve consistency, which improves efficiency for authors and translators.
  • Reducing formatting edge cases that are difficult to automate.

All of these improvements are useful and necessary, but they focus on how information is encoded. Many organizations are now experiencing pricing pressure from management. Because the content creators have shown that they could be more efficient, management assumes that there must be more efficiency gains available.

Because the justification for XML Workflow Version 1 positioned content as a commodity, management now assumes that content is a commodity.

If you are in the commodity trap, you will experience the following:

  • Pressure to lower content creator costs via staff reductions, outsourcing, and so on
  • A lack of interest in content initiatives other than cost reduction
  • A flat or declining budget
  • A focus on lowest-cost suppliers across all aspects of content and localization and on commodity metrics, such as price per word
  • No budget for staff development

So how do you avoid the commodity trap?

First, it is a fact that XML Workflow Version 1 is mostly about efficiency—and many content groups need to be more efficient. When negotiating for a shift to XML, however, make sure that your argument includes XML Workflow Version 2, in which you can begin to use XML in more sophisticated ways. For instance:

  • Integrate XML-based content with information derived from business systems (such as SAP)
  • Deliver content to other business systems (such as software source code) in a compatible format to provide for better integration and collaboration across the organization
  • Improve the semantics of content (for example, embed an ISBN with a book reference or a part number with a part reference) and provide for useful cross-linking (see the sketch after this list)
  • Focus on touchpoints in the customer journey and how to deliver information that supports the journey
  • Improve the localization and globalization process to deliver information that meshes with each locale, rather than just a somewhat awkward translation
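As a sketch of the semantics point, compare visual-only markup with semantic markup. The partref element here is hypothetical, invented for illustration; DITA and other vocabularies offer real equivalents:

    <!-- Visual only: the part number is just text in a sentence -->
    <p>Replace the filter (part 4411-A) every six months.</p>

    <!-- Semantic: the part number is machine-readable, so it can be
         validated against a parts database and cross-linked.
         (partref is an invented element for illustration.) -->
    <p>Replace the <partref partno="4411-A">filter</partref>
       every six months.</p>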

Efficiency in content creators is a means to an end. By freeing content creators from formatting responsibilities and from copying/pasting/verifying repeated information, you can make them available for more high-value tasks. Avoid the commodity trap by ensuring that your content vision goes beyond efficiency and automation.

Lean content strategy

September 28, 2015

Lean manufacturing begat lean software development which in turn begat lean content strategy.

What does lean content strategy look like?

Here are the seven key principles of lean software development.

  1. Eliminate waste
  2. Build quality in
  3. Create knowledge
  4. Defer commitment
  5. Deliver fast
  6. Respect people
  7. Optimize the whole

How well do they map over to content strategy?

1. Eliminate waste

Waste bin labeled TEAM WASTE

Eliminate waste // flickr: jeffdjevdet

Interestingly, many content strategy efforts focus only on eliminating waste.

Here are some common types of waste in content:

  • Waste in formatting (formatting and reformatting and re-reformatting)
  • Waste in information development (end users do not want or need what’s being produced)
  • Waste in delivery—information cannot be used by the end user because it’s not in the right language or the right format
  • Waste in review—oh, so much waste in the review cycles

Too often, strategy projects end with waste reduction. After creating a nice automated XML-based process, waste in formatting is eliminated, and we declare victory and go home. Unfortunately, the organization is now producing irrelevant content faster, and the content organization is now positioned as only a cost center. Typically, the next step is that executive management demands additional, ongoing cost reductions rather than looking at possible quality improvements. Eliminating waste cannot be the only priority. (I expanded on this theme in The commodity trap.)

Ellis Pratt has a great lightning talk overview of types of waste in lean content strategy. I believe that he is the first person to combine the concept of lean manufacturing/lean software development with content strategy.

2. Build quality in

How do you measure quality in content? “I know it when I see it” is really not a good answer. Some content quality factors include:

  • Writing quality—mechanics and grammar
  • Usability—the ease of access to information
  • Technical accuracy
  • Completeness
  • Conciseness

All of which Scriptorium notoriously put together into the QUACK quality model.

Building quality in means that the process of creating content supports a high-quality end result. Accountability in content reviews is one technique; content validation, to ensure that content conforms to required structures, is another. Authoring-assistance software can help with writing quality.
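As one way to automate that validation, here is a minimal sketch in ISO Schematron; the editorial rule itself is invented for illustration:

    <schema xmlns="http://purl.oclc.org/dsdl/schematron">
      <pattern>
        <!-- Editorial policy encoded as a validation rule: every
             topic must open with a short description so that
             readers can skim effectively -->
        <rule context="topic">
          <assert test="shortdesc">
            Every topic must include a short description.
          </assert>
        </rule>
      </pattern>
    </schema>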

The process of creating and managing content should assist the content creator in producing high-quality information.

3. Create knowledge

The fundamental purpose of content is of course to create and disseminate knowledge. As an aspect of lean content strategy, we can identify several groups that need knowledge:

  • End users need information to use products successfully.
  • Content creators need to accumulate domain knowledge, process knowledge, and tools knowledge to become better at their jobs.
  • The user community needs a way to share knowledge.

Any content strategy must include ways to support knowledge creation inside and outside the organization.

4. Defer commitment

Our basic process for content strategy is to first identify key business requirements, and then build out an appropriate solution. The temptation, however, is to make critical decisions first, especially in tool and technology selection. Defer commitment means that you should:

  • Store content in a flexible format that allows for multiple types of output.
  • Keep your options open on deliverable formats.
  • Be open to adding new content based on user feedback or other new information.
  • Assess localization requirements regularly as business conditions change. Look at a list of supported languages as an evolving set, not as set in stone forever.

Also identify areas where commitment is required. If your content needs to meet specific regulatory requirements, these requirements change very slowly. Don’t defer a commitment to a legal requirement.

5. Deliver fast

This is true across the entire effort: content creation, management, review, delivery, and governance. Reexamine those six-month production cycles and lengthy review cycles, and find ways to shorten them.

Keep up with new products and new output requirements. Don’t let the market pass you by.

6. Respect people

Lots to think about in this area, but here are some basics:

  • Content creators: Respect their hard-won product and domain expertise.
  • End user: Respect the end user’s time and provide efficient ways to get information. Do not insult end users with useless information, like “In the Name field, type your name.”
  • Reviewer: Respect their limited time and help to focus reviews on adding value.

7. Optimize the whole

Optimizing inside a content team will only take you so far. The content team must reach into other parts of the organization, where they can:

  • Identify the origin of information and use it. For example, if product specifications are stored in a product database, then product datasheets should pull information directly from the database. Here’s what they should not do: Export from the product database to an Excel file, send the Excel file via email to the content creator, have the content creator copy and paste from the Excel file to the product data sheet file. (See the sketch after this list.)
  • Identify content reuse across the organization and eliminate redundant copies.
  • Understand silos and why they occur. Find ways to eliminate or align silos.
  • Reduce the number of content processes in the organization.
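One concrete pattern for pulling from a single source instead of copying and pasting is a content reference. This sketch uses the real DITA conref mechanism, with invented filenames and IDs:

    <!-- In specs.dita (topic id="specs"): the single source of
         truth, maintained from the product database -->
    <p id="max-voltage">Maximum input voltage: 24 V DC</p>

    <!-- In the datasheet: pull the value by reference; it updates
         automatically when the source changes -->
    <p conref="specs.dita#specs/max-voltage"/>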


Lean content strategy. What do you think?

Roles and responsibilities in XML publishing

September 14, 2015

The roles and responsibilities in an XML (and/or DITA) environment are a little different from those in a traditional page layout environment. Figuring out where to move people is a key part of your implementation strategy.

Flamenco dancers and singer on a dark stage

In an unstructured (desktop publishing) workflow, content creators need a variety of skills. The three most important are:

  1. Domain knowledge (expertise about the product being documented)
  2. Writing ability (duh)
  3. Knowledge of the template and formatting expertise in the tool being used

For a structured workflow, the first two stay the same, but paragraph and character styles are replaced by elements. Formatting expertise is less critical—the formatting is embedded in a stylesheet, which is applied to content when it is time to create output. Knowledge of copyfitting and production tricks is no longer relevant and can even be detrimental if the content creator insists on trying to control the output by overriding default settings.

The content creator needs less template and formatting expertise, especially if the content is highly structured and provides guidance on what goes where. Generally, content creators need to focus more on how to organize their information and less on how to format it.

The role of the technical editor (assuming you are lucky enough to have one) also changes. Document structure is enforced by the software, so technical editors can focus on overall organization, word choice, and grammar. Technical editors are often responsible for reviewing large amounts of content. This perspective can be helpful in establishing an information architecture.

Speaking of information, we have the information architect, who is responsible for determining how information should be organized and tagged. Typical tasks for the information architect are:

  • Developing guidelines for topic-based authoring (for example, how big should a topic be?).
  • Establishing rules for tagging. For example, when should an author use the <cite> tag and when the <i> tag? (See the sketch after this list.)
  • Organizing shared content and establishing guidelines for reuse.
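For instance, a tagging rule might reserve <cite> for the titles of cited works and <i> for purely visual italics. A minimal sketch using these standard DITA elements:

    <!-- Semantic: the title of a cited publication -->
    <p>See <cite>The Chicago Manual of Style</cite> for details.</p>

    <!-- Visual only: italics carry no semantic meaning here -->
    <p>Never unplug the device <i>while it is writing</i>.</p>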

The equivalent responsibilities were typically handled by the technical editor and the production editor in an unstructured workflow.

In an unstructured workflow, production editors are responsible for finalizing the layout/composition of unstructured content. They typically have deep expertise in the publishing tool and know all of the tricks to make output look good. Very often, production editors are permitted to override templates to copyfit pages and make the final result look better.

The role of the stylesheet programmer is new in an XML workflow and replaces the production editor. The stylesheet programmer creates a script that transforms XML directly into output (such as PDF or HTML). In effect, the handiwork of the production editor is replaced by a script. Stylesheet programmers need a thorough understanding of XML and especially of publishing scripts, such as XSLT, but they need almost no domain knowledge.
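As a minimal sketch of what such a stylesheet does (assuming a generic DITA-like topic with a title and paragraphs; element names are illustrative), XSLT maps each element to an output format:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Wrap the whole topic in an HTML page -->
      <xsl:template match="/topic">
        <html><body><xsl:apply-templates/></body></html>
      </xsl:template>
      <!-- The topic title becomes a heading; the formatting decision
           lives in the stylesheet, not in the content -->
      <xsl:template match="title">
        <h1><xsl:apply-templates/></h1>
      </xsl:template>
      <!-- Paragraphs pass through as HTML paragraphs -->
      <xsl:template match="p">
        <p><xsl:apply-templates/></p>
      </xsl:template>
    </xsl:stylesheet>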

Here are the typical roles in a structured workflow:

Role                   Tags     Publishing   Domain
Content creator        User     User         Advanced
Information architect  Expert   User         Basic
Stylesheet programmer  User     Expert       Basic
Reviewer               None     None         Expert

Did we miss any? What do you think?

Portions excerpted from our Structured authoring and XML white paper.

Design versus automation: a strategic approach to content

August 10, 2015

Design and automation are often positioned as mutually exclusive–you have to choose one or the other. But in fact, it’s possible to deliver content in an automated workflow that uses a stellar design. To succeed, you need a designer who can work with styles, templates, and other building blocks instead of ad hoc formatting.

More content across more devices requires scalability–and that means more automation. A strategic approach to content needs to incorporate both design and automation as constraints and find the right balance between the two.


First, a few definitions.

Design–specifically graphic design–is the process of deciding how information is presented to the person who needs it. The design effort may include text, graphics, sound, video, tactile feedback, and more, along with the interaction among the various types of information delivery. In addition to the content itself, design also includes navigational elements, such as page numbers, headers and footers, breadcrumbs, and audio or video “bumpers” (to indicate the start or end of a segment). Sometimes, the designer knows the final delivery device, such as a kiosk in a train station or a huge video board in a sports stadium. In other cases, the delivery is controlled by the end user–their phone, desktop browser, or screen reader.

Automation is a workflow in which information is translated from its raw markup to a final packaged presentation without human intervention.


Design and automation are not mutually exclusive.


Instead, think of design and automation as different facets of your content. Each quadrant of the design-automation relationship results in different types of documents. High design and low automation is where you find coffee table books. High automation and low design encompasses invoices, bad WordPress sites, and 30 percent of the world’s data sheets. Low design/low automation is just crap–web sites hand-coded by people with no skills, anything written in Comic Sans, and the other 70 percent of data sheets. (Seriously, where did people get the idea that using InDesign without any sort of styles was appropriate for a collection of technical documents? But I digress…)

The interesting quadrant is the last one: high design and high automation. In this area, you find larger web sites, most fiction books, and, increasingly, marketing content (moving out of lovingly handcrafted as automation increases) and technical content (moving out of “ugly templates” and up the design scale).

Design/automation on X/Y coordinates. Automation is the X axis; design is the Y axis.

Design and automation are different facets of content creation.

The world of structured content inhabits a narrow slice on the extreme right.

Automation on the X axis; design on the Y axis. Structured content is a band in the high-automation area.

Structured content goes with high automation.

Design gets a similar swath of the top.

Automation on the X axis; design on the Y axis. Design-centered content is a band in the high-design area.

Design-centered content at the top of the design region.

When you combine a requirement for high design with a requirement for high automation, you get The Region of Doom.

Same grid as before. The region of doom is the top right corner, where you have extreme design and extreme automation requirements.

You can accommodate 90% design and 100% automation or 90% automation and 100% design, but if you are unwilling to compromise on either axis, expect to spend a lot of money.

A better strategy is to focus on the 90% area. By eliminating 5–10% of the most challenging design requirements, or by allowing for a small amount of post-processing after automated production, you can get an excellent result at a much lower cost than what the Region of Doom requires.

The intersection of design and automation bars in the upper right is the best value.

Small compromises in design and/or automation result in big cost savings.


When we discuss design versus automation, we are really arguing about when to implement a particular design. An automated workflow requires a designer to plan the look and feel of the document and provide templates for the documents. The publishing process is then a matter of applying the predefined design to new content.

The traditional design process ingests content and then people apply design elements to it manually.

In other words, automation requires design first, and this approach disrupts the traditional approach to design. Like any disruptive innovation, this new approach is inferior at first to the “old way” (hand-crafting design). As the technology improves, it takes over more and more use cases.

Disruptive technology first takes over the low end of the market, and then gradually moves up to more demanding users.

Evolution of disruptive technology over time, public domain image found at Wikimedia

Travis Gertz writes an impassioned defense of editorial design in Design Machines: How to survive the digital apocalypse:

Editorial designers know that the secret isn’t content first or content last… it’s content and design at the same time.

[…] When we design with content the way we should, design augments the message of the content.

[…] None of these concepts would exist if designed by a content-first or content-last approach. It’s not enough. This level of conceptual interpretation requires a deep understanding of and connection to the content. A level we best achieve when we work with our editors and content creators as we design. This requirement doesn’t just apply to text. Notice how every single photo and illustration intertwines with the writing. There are no unmodified stock photos and no generic shots that could be casually slipped into other stories. Every element has a purpose and a place. A destiny in the piece it sits inside of.

He provides wonderful examples of visuals entwined with content, mostly from magazine covers. And here is the problem:

  • As Gertz acknowledges earlier in the article, for many small businesses, basic templates and easy web publishing are a step up from what they are otherwise able to do. Their choice is between a hand-coded, ugly web page (or no web presence at all), and a somewhat generic design via SquareSpace or a somewhat customized WordPress site. Magazine-level design is not an option. In other words, automation gives small businesses the option of moving out of the dreaded Meh quadrant.
  • What is the pinnacle of good design? Is it a design in which graphics and text form a seamless whole that is better than the individual components? Many designers forget that not everyone can handle all the components. A fun audio overlay is lost on a person who is hard of hearing. Without proper alternate text, a complex infographic or chart is not usable by someone who relies on a screen reader.
  • The vast majority of content does not need or deserve the high-design treatment.
  • An advantage of visual monoculture is that readers know what to expect and where.
  • All of these examples are for relatively low volumes of content. I don’t think these approaches scale.


What do you think? Scriptorium builds systems that automate as much as possible, and then uses additional resources as necessary for the final fit and finish. Only some limited subset of content is worth this investment. I know that I have a bias toward efficient content production so that companies can focus on better information across more languages.

For more on this topic, come see my session at Big Design Dallas in mid-September.

Waterfall content development? You’re doing it wrong.

June 29, 2015

Product development, content development, and localization processes are too often viewed as a waterfall process.

This is not at all accurate.

Waterfall: product to content to localization is highly unlikely.

Waterfall. As if.

Product development decisions do feed into content and content into localization, but, in the other direction, localization decisions also drive product development. For example:

  • Every time you add a new locale, you need to make sure that your product can support the local language, regulations, currency, and so on. Regulatory requirements might drive product decisions—the European Union, for example, has strict requirements around personal data protection.
  • In a country with multiple official languages (like Switzerland or Canada), you may need to provide a way to switch a product interface from one language to another.
  • If you want to deliver content as part of the product, you need to make sure that your product has enough storage space available for product content.
  • If you want to deliver web-based content with periodic updates, how do you handle the connection from the product to the content?
  • What if you want to have troubleshooting instructions that use product information directly? How do you integrate the instructions with the product status? How do you do that in 57 different languages?

This leads me to the unified theory for development:

Product, content, and localization maturity are interdependent. A mature process in one area requires alignment with mature processes in other areas.

Instead of waterfall, think of content, localization, and product as sides of a pyramid.


Three-face pyramid with content, product, and localization on the faces.

Content, product, and localization all contribute to the overall development process.

Your development process is a slice across the pyramid that intersects all of the faces.

Three-face pyramid with horizontal slice.

If your processes are equally mature, development works well.

What you want is a horizontal slice—the development process in sync across all of the faces. If the processes are out of alignment and do not have similar levels of maturity, you end up with an unstable platform that is impossible to stand on. Problems on the lower (less mature) side will cause instability everywhere.

Three-face pyramid with angled slice.

Different levels of process maturity make for an unstable development platform.

Here are some examples of misaligned processes:

  • Localization needs to deliver languages that the product didn’t account for. Suddenly, you have a need for Thai characters, and no way to embed them in the software.
  • Product is cloud-based software that is updated weekly. Content development process can only provide PDF that is updated every six months.
  • Product ships with all languages enabled, but localization process requires another eight months to provide content in all languages.
  • Content development process is frenetic and unpredictable. Localization costs skyrocket because of lack of style guides, consistent terminology, and formatting templates.

When we develop content and localization strategies, we must assess the maturity of the product development process. The right content strategy may require the organization to make changes in product development and vice versa. Product, content, and localization strategies all need to be aligned at similar levels of maturity.

Put another way: Your content strategy can’t get too far ahead of your product strategy.

Webcast: Risky business: the challenge of content silos

June 25, 2015

In this webcast recording, Sarah O’Keefe discusses how content silos make it difficult to deliver a consistent, excellent customer experience. After all the hard work that goes into landing a customer, too many organizations destroy the customer’s initial goodwill with mediocre installation instructions and terrible customer support.

Do you have a unified customer experience? Do you know what your various content creators are producing? Join us for this thought-provoking webcast.

Localization strategy and the customer journey (premium)

June 8, 2015

This premium post is a recap of a presentation delivered by Sarah O’Keefe at Localization World Berlin on June 4, 2015. It describes how and why to align localization strategy to the customer journey.

The new buzzword in marketing is the customer journey. What does this mean for localization?

The customer journey describes the evolving relationship between a company and a customer. For instance, a simple customer journey might include the following stages:

  • Prospect: conducts product research
  • Buyer: purchases product
  • Learner: needs help to figure out how to use the product
  • User: uses the product
  • Customer: owns the product, uses it occasionally
  • Upgrader: needs new features or has worn out the product and needs a new one
  • Repeat customer: buys the next version

Funnel with trade show, research, SEO, engagement, white paper, prospects, and emails going in at the top. Purchase is the output at the bottom of the funnel.

The marketing funnel ends with a purchase.

The idea of the customer journey is replacing the sales and marketing funnel, in which the end state (the bottom of the funnel) is a purchase.

Instead, the customer journey acknowledges a more complex relationship with the customer.

Stages of customer journey in a circle: research, buyer, learner, user, customer, upgrader

The customer journey continues after buying.

In the traditional marketing funnel, content is critical before the “Buy” decision. After that, content is not important. In a customer journey, all stages are critical and consistency is important.

Content is required at each stage of the customer journey:

  • Research: web site, marcomm, and white papers
  • Buyer: e-commerce and proposals
  • Learner: training
  • User: documentation
  • Customer: knowledge base, support
  • Upgrader: what’s new

Unfortunately, delivering this content with consistency can be quite difficult because it is created in lots of different places in the organization.

Corporate organization chart shows content being developed in different locations: training is under the CIO, proposals are under the COO, and so on.

The organizational chart makes consistent content difficult.

The localization maturity model is helpful here. The original was developed by Common Sense Advisory, but I have created a slight variant:

Instead of reactive, managed, optimized, negligent, and so on, we have anger, denial, bargaining, depression, and acceptance

With apologies to Common Sense Advisory, I have simplified their maturity model

Minimum viable localization is somewhere between level 1 and level 2 (between reactive and repeatable in the true maturity model). In most organizations, the localization maturity is different for different types of content. If you think about your content and map it against the customer journey, it probably looks something like the following:

Marketing content gets a 2; user documentation gets a -3 on the CSA localization maturity model.

Localization maturity varies by content type

Seen from the customer journey point of view, the problems are obvious. The prospect and the buyer get pretty good localized content, the learner gets something acceptable, and the user/customer gets the dregs. The company then attempts to redeem itself with better delivery for the upgrader/potential repeat customer as the customer moves into the next buying cycle. This seems like a dangerous approach. A better strategy is to bring all of the content into alignment at the same maturity level.

Consistency is critical. We recommend starting with the following:

  • Consider using a single vendor to make consistency easier. At a minimum, avoid fragmented, siloed localization efforts.
  • Work on voice and tone in source and target languages. Assess how they are different for different kinds of information.
  • Implement consistent terminology.

For some localization service providers (LSPs), the need for consistency presents a business opportunity. A customer might choose a single vendor to make consistent content delivery a little easier. For a specialist LSP, this could be a problem. For example, a company that focuses on transcreation of marketing content would not be well-positioned to take on technical training materials. A company that specializes in a particular industry, such as biotechnology, might be in a position to argue for more investment by their customers.

For localization buyers, here are some recommendations:

  • Establish long-term vendor relationships. Commodity buying is not going to get you the quality you need to support a great customer journey.
  • Make sure the translation memory is available, updated, and shared among all your vendors.
  • Consider assigning LSPs by product rather than content type.

Localization strategy needs to change to support a customer journey. Here are some basic tips:

  • Understand your (or your client’s) customer journey
  • Understand localization requirements at each point in the journey
  • Develop a strategy that addresses each requirement
  • Ensure that you have terminology management, translation memory, and other assets in place across the enterprise
  • Different parts of the customer journey need different approaches to voice and tone. Include those in your customer journey planning.
  • Different locales may have different customer journeys. Align your translation priorities accordingly.

The customer journey is only as good as the weakest link in the content and localization chain.


Renovate or rebuild? Construction as a content strategy model

May 26, 2015

Does your house have good bones? Ugly paint, terrible carpet, and dated appliances are all fixable. But if a room is too small, a door is in the wrong place, or the rooms don’t match your requirements (need a downstairs master bedroom?), then you have a serious problem.

Content can also have good bones. Or not.

The content audit is like a home inspection. What information do you have already? Is it the right information? How is it put together? What sort of issues are there in the content? Do you need to update your kitchen, er, content?

Meeting building codes

If you don’t meet building codes, you are going to have a serious problem with your city building inspector when you try to sell your house.

Your content strategy needs to take into account the content building codes, which may include the following:

  • Requirements for accessibility
  • Regulatory requirements
  • Reader expectations

Build these into your upfront construction plan, or face huge expenses later when you fail your inspection.

The foundation

You need a strong foundation for your content. Unfortunately, that can be difficult because foundation requirements vary by industry.

Back to our house analogy: Here in the Piedmont region of North Carolina, we have clay soil. It’s fabulous for making bricks and growing tobacco, and very little else. With clay soil, you generally build either slab or pier-and-beam foundations. Very few houses have basements, unless they are built on the side of a hill. Unlike California, we don’t worry too much about earthquake-proof foundations. In coastal areas, we worry about hurricane tie-downs and possibly flooding.

With content foundations, you have a couple of different problems: Only a few lucky industries have bedrock content architecture that you can rely on. Everywhere else, you can count on seismic shifts in your requirements every few years. There will be new content platforms that you must support, new regulatory requirements, new languages, changes in available skill sets, and new requirements for integration across the organization. (You thought you were building a house for your family, but now your quirky cousin from Maine is living with you. And she brought two tons of books with her.)


Overbuilding

Kitchen with missing appliances. Remodeling in progress.

Kitchen remodeling // flickr: sellis

One of the biggest home renovation risks is overbuilding. A little granite here, an appliance upgrade there, a few dozen custom cabinets with inlaid wood, an inability to say no to those organic bamboo floors, and suddenly your kitchen looks like something out of Food & Wine. It’s fabulous, but you are the Queen of Takeout. Why do you have a built-in wok and a triple oven?

Some organizations need industrial-strength content. Others just need to reface the cabinets or maybe clean the range for once. Before you start your content strategy work, understand your neighborhood. Will you get your investment in content back? What is your ROI? Will additional investment give you better results?

Scorched earth: when is razing appropriate?

In some housing markets, complete tear-downs are common. The house is sold, and then the new owners raze the building and build something brand-new on the same site. Tear-downs usually occur in desirable areas where land is very expensive—the land is relatively valuable compared to the building.

Perhaps you have a great web site URL and terrible content that populates it? Or perhaps your content is so dingy that it would be cheaper to start over. If you need to redesign all aspects of content production from who creates it to its delivery mechanism to the information architecture, it may be less expensive to start over than to try to glue your new design onto avocado-colored 70s content.

Cosmetic fixes are cheap

All renovations are deeply painful for the content owners and the inhabitants of the house. Basic fixes, like new paint or a new look-and-feel for your output, are faster, cheaper, and easier than moving walls or breaking down content silos.

But is a cosmetic fix going to fix your problem?

A word about rental property

If you live in rental property, your redesign options are limited. You can paint and move furniture around, but you probably can’t do more. If you are publishing content on a host platform, like Facebook, Medium, or Pinterest, you are constrained by what your landlord allows.