Scriptorium Publishing

content strategy consulting

QA in techcomm: you need a test bed

March 14, 2016

When I first started as a QA tech at a small game company, I was immediately thrown into the QA test bed. It was a place where we could test production-ready content without being interrupted by ongoing development or server restarts. Functionality was well-documented, so we could test it against our users’ bug reports.

When I started working at Scriptorium, one of my first tasks was to develop a content test bed to run alongside our PDF transform to help improve it. For example, one part of our test bed is a massive thread pitch table. It will readily flow across multiple pages, includes both vertical and horizontal straddles, and has varying column widths. If we run into a problem with large tables, I can run that content through, confident that I’ll be able to reproduce the issue and deal with it accordingly.

Making a test bed

A test bed is a useful tool when managing your content. In the context of content strategy, a test bed is a set of topics that represents a broad cross-section of your content. However, a test bed is more than just a sample of your production-ready content; to be truly useful, a test bed should be:

  • Modular: You should be able to add and remove chunks of content from your test set. This allows you to quickly set up use cases to verify your output. You also need to be able to quickly trim the size of your test bed to cut down on processing time for tests, unless that test involves a large document.
  • Well-maintained: As your content grows and changes, so too should your test bed. Otherwise, you run the risk of missing critical errors that can creep in when you do your testing.
  • Representative: You should have examples of all major requirements for a particular publication. If you have a table that needs to be formatted in a specific way depending on its context, make sure that you have that example in your test bed. Also keep in mind errors that turned up in the past, so you can keep an eye out for bugs that might creep back into your workflow.
  • Content-neutral: Make the actual content of your test bed generic. While it can be useful to have your actual content in your test bed, it can be problematic if you need to hand that test bed off to an outside entity, like a friendly consulting group. If you need to have “real” content in your test bed, make sure that it’s already publicly available or otherwise non-sensitive.

Using a test bed

Once you have a solid test bed in place, it can provide benefits both internally and externally. Internally, if you’re using a platform to generate output, you can run your test bed through it to verify any changes made to that platform. Conversely, if you implement a major change to the format of your content, you can use your test bed to more accurately scope the impact of that change. You can also use it as an example to familiarize new authors with your content. Externally, you have content that you can hand off quickly, without spending time compiling a set of appropriate documents that represent your project needs.

With a bit of maintenance as your documentation needs change, your test bed can become a versatile and powerful tool when both developing and maintaining your content.

Translation and the complexity of simple content

March 7, 2016

Translating content for foreign markets can be an expensive and time-consuming endeavor. While it’s important to keep costs in check, the critical element to watch is quality. The only sure-fire way to ensure quality in translation is to build it into your source.

GIGO: Garbage in, garbage out

Storm Trooper action figures startled by a Storm Trooper trash bin

That’s not what we meant! (flickr: JD Hancock)

Developing content is often seen as a necessary evil. It’s easy to justify cutting corners to deliver it quickly or create it using fewer resources. After all, it’s just content, right?

Not quite.

More often than not, even technical content is being used in the pre-sales cycle. People want to evaluate a product or service before purchasing, and one of the easiest ways to do that is by searching for information online. Content quality in this case might make or break a sale.

Content quality is also a major factor in retaining existing customers. People are extremely happy when they can easily find answers to their questions without having to call support. Likewise, the easier the answer is to understand, the better.

So the idea that content is a necessary evil is a logical fallacy. Content plays a critical role in attracting and retaining happy customers. It is a business cornerstone, and this is true in all markets. Despite your translators’ best efforts, the quality of their translation work is a direct reflection of the quality of your source content. And to keep translation costs down, you need to make their job simple.

The complexity of simple content

Simple content does not mean dumbed-down or bare bones. In this context, simple content means streamlined content.

  • Messaging is clear and concise
  • Words are carefully chosen
  • Content is written in discrete chunks (topics)
  • Those chunks are written once and reused wherever appropriate
  • Content is consistently formatted (or better, tagged in XML)

In short, the entirety of content development is closely monitored and skillfully performed.

The effort involved is anything but simple, but the benefits far outweigh the work. All of the heavy lifting is done on the source content side, simplifying the translation process.

When your source content is clear, concise, and complete, it can be translated easily. When content is written once and reused, translators only need to translate it once. When content is consistently tagged, formatting translated content becomes automatic.

As a result, your content quality is consistent across all languages, your translation costs are reduced, and the translation work is completed more quickly, allowing you to accelerate your time to market.

When strategy meets the storm

February 8, 2016

plane in a snowstorm

Perfect weather for flying! (image via Flickr: estudiante)

Just before the blizzard that crippled a significant portion of the East Coast, I was returning from a business trip. I did eventually make it home, but the return flight included a bonus three-day layover in Charlotte, NC.

I’ll spare you many of the details, but a few key events and situations really stand out from that trip. The lessons learned are applicable to any corporate strategy, content or not.

Protocols that look good on paper may not fare well in practice

Every strategy can be broken down into a series of protocols that need to be followed. Which protocols you follow will change based on the situation, but they all feed into the overarching strategy that drives your business goals.

Many of these protocols are designed to account for specific situations. However, following them to the letter every time may do more harm than good. A certain amount of discretion is needed to alter protocol to effectively handle tricky situations.

My flight home began without incident. We boarded on time, and were set to take off with a full flight. After everyone was seated, a flight attendant made an announcement: five people needed to volunteer their seats for additional crew members due to the impending East Coast storm. Volunteers would be given an alternate flight and a voucher toward future travel.

Caïn by Henri Vidal, Tuileries Garden, Paris, 1896.

I’m sure this is how everyone on that plane felt… passengers AND crew! (photo: wikipedia)

As you can imagine, no one came forward. We were all concerned about the storm and wanted to reach our destinations before we were stranded. The airline’s protocol called for volunteerism, so we sat for an hour waiting for volunteers, which stirred great concern among the passengers and ultimately caused many to miss their connections. Finally, an executive decision was made, by either the airline or the crew, to deplane the last people to purchase tickets, board the additional crew, and take off. But by then it was too late; many of us missed our connecting flights and were left to ride out the storm on an extended layover.

While the proper protocol to solicit volunteers might work in normal situations, the storm factor should have allowed for other creative solutions, or at least a shorter timeline between asking for volunteers and removing the last few ticket holders from the plane. The crew would have boarded, the majority of passengers would have made their connections, and fewer people would have been stranded. Following protocol in this case caused more harm than good; customers were upset, customers were stranded (so many that the airline ran out of available hotels), and stranded customers now required new flights in an already chaotic backlog due to cancellations.

When load testing meets reality

Implementing any system requires a great deal of testing and tweaking. But true testing doesn’t begin until you’re using it live and encountering unforeseen or extreme situations. After launching a new system, it’s important to have failsafes in place for when (not if) the unforeseeable happens. Fallback systems are great, but two of the best failsafes are human communication and collaboration.

On the day of my final flight home, the flight situation was understandably a nightmare. The airports on the East Coast were finally re-opening after their storm shutdowns, flights were still being cancelled and delayed due to weather conditions and missing crew, and displaced passengers were very unhappy.

As I queued in the very long customer service line, I also called the main customer service number. There was an hour wait on the phone, so I opted for a call-back when it was my turn to talk to an agent. Meanwhile I made it to the service counter and began looking for earlier direct flights home. We found one, but it was full. I asked about standby, and after much fiddling with the system, the agent gave up. She could not put me on standby without voiding my later, valid ticket. (!!!)

I begrudgingly kept my later flight and went off in search of coffee. Then I received a callback from the support line. They confirmed that I actually was on the standby list for the early flight, and that my other ticket was still valid should I not make it on the earlier flight.

I quickly gathered my belongings and headed to the gate to confirm. Alas, their system did not show me on standby. But this agent worked some voodoo magic and was able to get me on standby and retain my later flight as backup.

Obviously there was a breakdown in systems communication between the airline’s main system and the airport hubs, and the local agents were left to muddle through as best they could. For some reason, they could not contact anyone to confirm the discrepancies, troubleshoot, or even report a system error. There needs to be a communication bridge between those using the system and those managing it.

People are your greatest corporate asset

So in the end, I made it home on that earlier flight. My standby status earned me a seat just before boarding began. It was all thanks to that one gate agent who ensured that their local systems showed that I was both on standby and had a valid later flight if needed. She truly went above and beyond, checking in on me from time to time and even rooting for me as my name climbed higher on the standby list.

There are details about this horrible trip that I won’t forget. Some are mentioned in this post, and others are best left unmentioned to hopefully fade with time. But what I may never forget is that one gate agent’s actions, from her refusal to let a system glitch stop her from doing what should have been routine to the high-five she gave me as I boarded my flight.

Technology fails happen. Unforeseeable events happen that can shake normal workflows. In fact, I’m sure that other airlines had issues during this storm. But it’s the human-to-human interaction that can build or destroy a customer’s impression of a company. Empower your workforce to put their best foot forward, and do everything possible to enable them to creatively solve problems when needed. It just might be your only saving grace with an unhappy customer.

The second wave of DITA

February 1, 2016

You have DITA. Now what?

More companies are asking this question as DITA adoption increases. For many of our customers, the first wave of DITA means deciding whether it’s a good fit for their content. The companies that choose to implement DITA find that it increases the overall efficiency of content production with better reuse, automated formatting, and so on.

Now, clients are looking for the second wave of DITA: they want to connect DITA content to other information and explore innovative ways of using information. The focus shifts from cost reduction to quality improvements with questions like:

  • How will our content strategy evolve as DITA evolves?
  • How do we make the most of our DITA implementation?
  • How do we tailor our DITA implementation to better suit our needs?
  • What can DITA do for us beyond the basics?
  • What other content sources are available and how can we integrate them with our baseline DITA content?
  • What new information products can we create using DITA as a starting point?
  • How can we improve the customer experience?

The second wave of DITA can go in two directions. In the apocalyptic scenario, the overhead and complexity of DITA exceeds any business value, so the organization looks for ways to get out of DITA. But if you measure implementation cost against business value before any implementation work begins, this scenario is unlikely. Instead, you can reap the benefits of a successful implementation and start exploring what else DITA can do for your business.

A huge wave and a tiny surfer.

Will you thrive or wipe out in the second wave? // flickr: jeffrowley

Extending DITA beyond the basics

Your first DITA implementation must achieve your objectives with minimum complexity. When the shock of using the system wears off, you can consider new initiatives:

  • Building additional specializations
  • Using advanced DITA techniques to accommodate complex requirements
  • Delivering new output files
  • Refining your reuse strategy

Integrating with other systems

In the first wave, organizations usually focus on getting their content in order—migrating to DITA and topic-based authoring, setting up reuse, establishing efficient workflows, and managing the staff transition into new systems.

In the second wave of DITA, the new baseline is a functioning, efficient content production process, and attention turns to requirements that increase the complexity of the system. For example, a company might combine DITA content with information in a web CMS, a knowledge base, an e-learning system, or various business systems.

Moving additional content types into the DITA CCMS is only one option to foster collaboration. Organizations can align content across different authoring systems. Another integration opportunity is combining business data (such as product specifications or customer information) with DITA content. Software connectors that allow disparate systems to exchange information are a huge opportunity in the second wave of DITA. You can share information as needed without forcing everyone into a single system.

Focusing on the business value of content

The emphasis is shifting. In the first wave, organizations focused on reducing the cost of producing content by improving operational efficiency. In effect, they built systems that reduced or eliminated waste in content manufacturing. In the second wave of DITA, the focus is on the business value of the content. After setting up the assembly line, the organization can build cars, er, content, with more and more features that authors and consumers need.

Some trends in this area include the following:

  • In localization, a shift from authoring in a single source language toward multilingual authoring. Product expertise is not confined to employees who are proficient in English. If your subject matter expert is most comfortable in Chinese, why not allow her to work in that language?
  • In management, an increasing recognition of the value of good content, and a demand for improvements.
  • In content creation, a greater recognition of the importance of content strategy and an increasing focus on the big picture.

DITA is a constantly evolving technology, and to get the most value out of your implementation, you must ensure that your content strategy evolves with it. Don’t stop at letting DITA solve your content problems—take advantage of the second wave of DITA and explore the many other ways it can advance your business.

We had some interesting discussion about the second wave of DITA during our 2016 content trends webcast, and we’d like to continue that in the comments. Are you in DITA and figuring out what comes next? Let us know.

The cost of DITA specialization

January 18, 2016

One of the greatest benefits of using DITA is specialization. However, specialized DITA is more challenging and expensive to implement than standard, out-of-the-box DITA, which is something you should consider before you take the plunge.

In this follow-up post to Making metadata in DITA work for you and Tips for developing a taxonomy in DITA, you’ll learn about the cost of specialization, and how to decide whether it’s worthwhile for your business.

Know what’s involved

Is being a special snowflake worth the cost? (flickr: Dmitry Karyshev)


You’ve determined that DITA is the best solution for your company’s content, but now you have a choice to make—whether or not to specialize. Specialization means customizing the existing structure of DITA by adding, modifying, or removing elements to suit your needs.

The first step in your decision should be learning about what’s involved in the specialization process. If you specialize, you will need to:

  • Create a content model, or framework that shows how your content will be structured in DITA.
  • Develop the specialization, including custom DTDs, elements, and attributes.
  • Test the specialization with your content.
  • Make sure that your conversion process, output transforms, and tools work with your specialization (or modify them accordingly).

Implementing a DITA specialization will cost more—in terms of both time and expense—than standard DITA. Make sure you account for the added effort of specialization, especially if you’re on a tight schedule or budget.

Analyze your content

The structure of your content can help give you an idea of whether or not specialization is the best option for you. As you review your existing content, ask yourself:

  • How is your content structured? Keep in mind that even if your content is currently stored in an unstructured format, it probably still has an implied structure.
  • How closely does your content match the structure of standard DITA? If your content fits within standard DITA except for a few cases, it will likely be more cost-effective to rewrite those pieces of content than to create a specialization to handle them. However, if your structure differs significantly from standard DITA, you can probably make a strong case for specialization.
  • How consistent is your content? It can be tempting to use specialization to accommodate an inconsistent structure with numerous edge cases. But just because you can create specialized DITA doesn’t mean you should—especially if reworking your content to be more consistent is cheaper than specializing around it.
  • What semantic value does your content need? Can you assist your content creators by using element names that are more meaningful to them? If you’re in an industry with very specific language, such as pharmaceuticals, or if your company has a large, complex system of product names and categories, it might make sense to specialize—particularly when it comes to metadata.
  • How will your content be tracked? Do you or your audience need to search for and extract specific pieces of content (for example, a list of supplies from a datasheet for a certain product)? If so, creating a specialization that allows for semantic tagging might be the best (or only) way to accomplish this.

Estimate the costs

You’ve determined that your company could benefit from specialization based on the structure of your content. Now you’ll need to evaluate the costs involved so that you can present a strong business case. Here are some costs you may incur when implementing a DITA specialization:

  • Development costs. Do you have people in your organization who have the DITA knowledge and skill it takes to create your specialization? If so, you’ll need to account for their time and effort in your budget, especially if they already have other responsibilities. If not, you’ll need to reach out to an external resource (such as a consultant) or try to hire someone.
  • Conversion costs. Do you have legacy content that you plan to convert to DITA? How much? If you have enough content that you’ll be using a conversion vendor, ask them to estimate how much it will cost to convert your content using your specialization.
  • Output costs. What types of output will you need? How will your specialization affect the development of your output transforms? Depending on the nature of your specialization, your transforms may be more difficult or time-consuming to create than they would be with standard DITA.
  • Tool costs. What kind of support do the content management systems and authoring tools you’re considering have for your specialization? How difficult will it be to manage and update the specialization once your content is integrated? These factors can not only help you estimate the costs, but can also help you choose the right tools.
  • Localization costs. Do you need to translate your content into other languages? If so, keep in mind that the tool chain for any localization vendors you use must support your specialization, which could affect both vendor selection and implementation costs.
  • Testing costs. You’ll need to test your specialization at various stages throughout the implementation process, so make sure to allow for the cost of the time involved.

Specialization isn’t cheap or easy, and the decision to implement it shouldn’t be taken lightly. However, if it’s the best approach for your content, the costs involved are probably worthwhile. Now that you have a better understanding of the factors and costs of DITA specialization, you can make a more informed decision about whether or not to specialize—and support that decision with a stronger business case.

Top eight signs it’s time to move to XML

January 11, 2016

How do you know it’s time to move to XML? Consult our handy list of indicators.

Versione italiana (Alessandro Stazi, January 28, 2016)

This list is in rough order from problems to opportunities. That is, the first few items describe situations where the status quo is problematic. Later in the list, you’ll see items that are more focused on improving the quality of the information you are delivering.

1. Overtaxed system


Is your system fast enough? // flickr: Eirien

Your current system (tools and workflow) worked well in the past, but now it’s overburdened. Tasks take too long because you don’t have enough software licenses or enough people, or because your workflow has too many manual steps.

XML-based content is not the only way to solve this problem, but you can use an XML-based system to improve the efficiency of your operation:

  • XML content files have a smaller footprint than the equivalent binary files (because formatting is not stored in each XML file but instead centralized in the publishing layer).
  • You can use a variety of editors with XML files. Software developers might use their favorite programming text editors. Full-time content creators likely prefer an XML authoring tool. Getting software is less of a problem because not everyone needs a (potentially expensive) authoring tool.
  • Content creators spend a shocking amount of time on formatting tasks—up to 50% of their time. XML-based publishing replaces the ongoing formatting tasks with a one-time effort to create stylesheets.
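The “formatting lives in one central place” idea can be sketched in a few lines of Python. This is an illustration of the principle only, not a real DITA Open Toolkit transform; the element names and style mapping are invented for the example.

```python
import xml.etree.ElementTree as ET

# A minimal topic: the source carries structure, not formatting.
topic = """<topic>
  <title>Replacing the filter</title>
  <p>Turn off the unit before servicing.</p>
  <p>Remove the access panel.</p>
</topic>"""

# Every formatting decision lives in one place (the "stylesheet"),
# so changing the look never touches the source files.
STYLES = {"title": "h1", "p": "p"}

def render(xml_text):
    """Apply the centralized style mapping to a topic."""
    root = ET.fromstring(xml_text)
    parts = []
    for child in root:
        tag = STYLES[child.tag]
        parts.append(f"<{tag}>{child.text}</{tag}>")
    return "\n".join(parts)

print(render(topic))
```

Restyling every deliverable means editing the one mapping, not each of a thousand source files, which is where the 50% formatting time goes.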

2. File management problems

Box labeled Fragile: Do Not Drop that has been dropped and crushed.

Not good. // flickr: urbanshoregirl

Your days are filled with administrative problems, such as the following:

  • Trying to manage increasingly fragile authoring tools in which formatting spontaneously combusts when you so much as breathe near your computer. (I’m looking at you, Microsoft Word.)
  • Searching through shared network drives, local file storage, and random archives for a specific file and, most important, the latest version of that file.

File administration is overhead at its worst.

The authoring tool problems are addressed by the simplicity of XML files—formatting is applied later in the process, so it cannot muck up your source files. (Note: Some software offers the option of saving to “MySoftware XML” format. In most cases, that XML does include formatting information, which destroys much of the value of an XML-based approach.)

The file search problem is a source and version control problem. The best solution for content is a component content management system (CCMS), in which you can track and manage the files. If, however, you cannot justify a CCMS for your organization, consider using a software source control system. Because XML files are text, you can use common systems such as Git or Subversion to manage your files. This approach doesn’t give you all the features of a CCMS, but the price is appealing. It’s also possible to manage binary files in a source control system, but you will experience additional limitations. (For example, you cannot compare binary file versions using the source control system.)

3. Rising translation and localization demands

Box with "no volcar" label.

No volcar. // flickr: manuuuuuu

Your “creative” workarounds to system limitations were acceptable when you only translated a few minor items into Canadian French, but now the company delivers in a dozen languages (with more expected next year), and correcting these problems in every language is getting expensive and time-consuming.

Localization is by far the most common factor that drives organizations into XML. The cost savings from automated formatting across even a few language variants are compelling. Furthermore, because most organizations use outside vendors for translation, it’s quite easy to quantify the cost of translation—you can just look at the vendor’s invoices.

4. Complex conditions

Most unstructured authoring tools offer a way to label information as belonging to a specific content variant and produce two or more versions of a document from a single file. For example, by flagging test answers as “Instructor,” a teacher could generate both a test and an instructor’s answer key from a single file.

In software documentation, a requirement to label information as belonging to a high-end “professional” version as opposed to the baseline product is common. Authors can then create documents for the baseline version and for the superset professional version from a single source.

With more complex variants, however, the basic flagging and filtering is insufficient. Consider, for example, a product that has the following variants:

  • U.S. product and European product with different safety requirements
  • Product used in different environments, like factories, mines, and retail establishments
  • Optional accessories, which change the way the product works
  • Product components shared across different products, with small changes

In this example, you would need to create the following types of filters and have the ability to generate combinations of filters:

  • Regulatory environment
  • Operating environment
  • Accessory
  • Product (to identify variance in shared components)

In XML, you can use metadata to create these flags and filter on various combinations.
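In DITA, this is done with conditional processing attributes (such as audience, platform, and props) plus a DITAVAL filter file. The sketch below mimics the mechanism in plain Python; the attribute values and step text are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Each element carries metadata flags; unflagged elements apply to every build.
source = """<procedure>
  <step>Unpack the unit.</step>
  <step audience="eu">Verify the CE marking.</step>
  <step platform="factory">Bolt the unit to the floor.</step>
  <step audience="us" platform="retail">Install the display stand.</step>
</procedure>"""

def build(xml_text, **wanted):
    """Keep a step only if every flag it carries matches the build profile."""
    root = ET.fromstring(xml_text)
    kept = []
    for step in root:
        if all(wanted.get(k) == v for k, v in step.attrib.items()):
            kept.append(step.text)
    return kept

# A US retail build: the EU-only and factory-only steps are filtered out.
print(build(source, audience="us", platform="retail"))
```

Because each flag is independent, any combination (US factory, EU retail, and so on) comes from the same source with a different build profile.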

5. No off-the-shelf solution meets your requirements

If your output requirements are exotic, it’s quite likely that no authoring/publishing tool will give you the delivery format you need out of the box. For example, you might need to deliver warning messages in a format that the product software can consume, or strings that are compatible with web applications, perhaps in PHP or Python. JSON is increasingly required for data exchange.

If you are faced with building a pipeline to deliver an unusual format, starting from XML will be easier and less expensive than starting from any proprietary system.
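As a rough sketch of such a pipeline, here is XML-sourced warning text delivered as JSON. The message catalog, IDs, and field names are hypothetical; the point is that a text-based XML source can feed any downstream format with a few lines of glue code.

```python
import json
import xml.etree.ElementTree as ET

# Warning messages maintained once, in XML (element names are illustrative).
messages = """<messages>
  <msg id="E100" severity="error">Disk full</msg>
  <msg id="W200" severity="warning">Low battery</msg>
</messages>"""

# Transform the catalog into the JSON a web application might consume.
root = ET.fromstring(messages)
payload = {m.get("id"): {"severity": m.get("severity"), "text": m.text}
           for m in root}
print(json.dumps(payload, indent=2))
```

The same source could just as easily emit PHP arrays or Python string tables; only this last transformation step changes per target.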

6. More part-time content creators

In many XML environments, the full-time content staff is augmented with part-time content creators, often subject matter experts, who contribute information. This helps alleviate the shortage of full-time content people. Another strategy is to use XML to open up collaboration across departments. For example, tech comm and training departments can share the load of writing procedural information. Interchange via XML saves huge amounts of copying, pasting, and reformatting time.

Part-time content creators have a different perspective on authoring than full-timers. Their tolerance for learning curves and interface “challenges” generally depends on the following factors:

  • Level of expertise. Subject-matter experts want to get in, write what they need to, and get out.
  • Level of compensation. Put too many obstacles in front of a volunteer, and your volunteer will simply drop out.
  • Scarcity of knowledge. The fewer people who understand the topic, the more likely your part-time content creators are to resist any workflow changes.

The solution is to focus on WIIFM (“What’s in it for me?”). If the content creator is accustomed to managing content in complex spreadsheets with extensive, time-consuming copy and paste, an XML system with bulletproof versioning and reuse will be quite popular.

7. Metadata

Text is no longer just text. You need the ability to provide supplemental data about text components. For example, you need to be able to identify the abstract section for each magazine article. Or you want to create a link from a book review to a site where you can purchase the book. Conveniently, a book’s ISBN provides the unique identifier you need, but you don’t want to display the ISBN in the review itself, so you need metadata.

Most unstructured tools let you specify metadata for a document (often, in something like “Document Properties”). XML lets you assign metadata to any document or document component, so you can include more detailed background information. (And you can use that metadata to filter the output results.)
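The book-review example can be sketched as follows. The isbn attribute and the purchase URL are invented for illustration (this is not a standard DITA attribute); the point is that component-level metadata can drive behavior without ever appearing in the displayed text.

```python
import xml.etree.ElementTree as ET

# A review whose <book> element carries the ISBN as metadata.
review = """<review>
  <p>I loved <book isbn="9780131103627">The C Programming Language</book>.</p>
</review>"""

root = ET.fromstring(review)
book = root.find(".//book")

# The metadata builds a purchase link; the ISBN never shows up in the prose.
link = f'<a href="https://example.com/buy/{book.get("isbn")}">{book.text}</a>'
print(link)
```

A document-level “Document Properties” field could not do this, because the ISBN belongs to one component (the book title), not to the review as a whole.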

8. Connectivity requirements

In some contexts, your text connects to other systems. These might include the following:

  • For a repair procedure, a link from a part to your company’s parts inventory, so that you can see whether the part is available and order it if necessary.
  • For software documentation, the ability to embed error messages and UI strings both in content and in the software itself.
  • For medical content, the ability to accept information from medical devices and change the information displayed accordingly. (For example, a high blood pressure reading might result in a warning being displayed in your medical device instructions.)

Does your organization show signs of needing XML? Can you justify the investment? Try our XML business case calculator for an instant assessment of your potential return on investment.

How to get budget for content strategy

January 4, 2016 by

One common roadblock to content strategy is a lack of funding. This post describes how to get budget, even in lean years (and recently, they have all been lean years!).

1. Establish credibility

Well before you ask for anything, you need a good reputation in your organization. You need management to know that you are:

  • Smart
  • Reliable
  • Productive
  • Great at solving problems
  • Not a complainer

Does your executive team occasionally ask for miracles? Make it happen, and be sure that they understand what you had to do to pull off the miracle.

Find ways to improve content that are inexpensive but have a real impact on cost and quality. For example, build out some decent templates that help people create content more efficiently and with higher quality even in your current, yucky system.

If you must complain about things, do so very far away from the Money People.

2. Identify a problem that the Money People care about

Your problems are the wrong problems. For example:

Keyboard with green "Budget" key in place of regular Shift key.

Content strategy needs budget // flickr: jakerust

  • The content publishing process is inefficient and causes stress for the whole team during every release.
  • I hate this authoring tool, and I want to work in that cool new authoring tool.
  • Our content is not consistent from one writer to another.

These are all small-potatoes, internal problems. If you want funding for content strategy work, you need to communicate with executive management in ways that they understand.

Hint: They understand M-O-N-E-Y. So reframe those same problems in terms of money:

  1. For each release, the content publishing process takes 40 hours, per document, per language. We have two releases per year, with 5 documents, and 20 languages. That means we are spending 40 × 2 × 5 × 20 = 8,000 hours, or roughly $400,000 per year, on content publishing.
  2. Our organization has identified mobile-friendly content as a priority. Using our current authoring tools, we have to rework information to make it mobile-friendly. If we switch to a different tool, we could deliver mobile-friendly content immediately and automatically.
  3. Customers must currently search the technical support articles and the technical content separately. As a result, 20% of support phone calls are for information that is available in the technical content, but is not being found by customers.
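As a sanity check, the publishing-cost arithmetic from item 1 fits in a few lines of Python. The $50/hour labor rate is an assumption implied by the figures in the text (8,000 hours mapping to roughly $400,000 per year), not a universal constant.

```python
# Publishing-cost arithmetic from item 1.
hours_per_doc_per_language = 40
releases_per_year = 2
documents = 5
languages = 20
hourly_rate = 50  # assumption: implied by 8,000 hours ~ $400,000/year

hours_per_year = (hours_per_doc_per_language * releases_per_year
                  * documents * languages)
annual_cost = hours_per_year * hourly_rate
print(hours_per_year, annual_cost)  # 8000 400000
```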

3. Show ROI

After identifying the problem, show a solution that makes financial sense:

  1. An automated publishing workflow would eliminate that yearly recurring cost. The cost to implement it is roughly $150,000, so we come out ahead during the first automated release.
  2. The cost of the rework is roughly $100,000 per year, and delays delivery by four weeks. Investing in NewAuthoringTool will cost $50,000 for licensing and $30,000 for training, so the $80,000 investment pays for itself within the first year.
  3. We want to improve the search facility to reduce the calls that can and should be solved by a search of technical content. Our technical support budget is $5M per year, so 20% is roughly $1M. We need $250,000 in funding to implement the new search, so we will break even in year 1 if we can reduce the not-found calls by 25%.
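A simple payback-period calculation, using only the dollar figures quoted above, shows why these proposals are easy to defend. This is a simplified sketch: it ignores discounting, ongoing maintenance, and ramp-up time.

```python
# Payback period: years until cumulative savings cover the upfront cost.
def payback_years(upfront_cost, annual_savings):
    return upfront_cost / annual_savings

# Proposal 1: $150,000 to automate away $400,000/year of publishing work.
print(payback_years(150_000, 400_000))  # 0.375 -> pays off before year end

# Proposal 3: $250,000 for search, saving 25% of $1M in misdirected calls.
print(payback_years(250_000, 0.25 * 1_000_000))  # 1.0 -> breaks even in year 1
```

Framing each initiative this way puts it on the same footing as the other projects competing for the same funds.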

You will compete with other projects for limited funds, but a business-focused approach to content initiatives will ensure that your project is at least competitive with the other projects.




The best of 2015

December 21, 2015 by

Let’s wrap up 2015 with a look back at popular posts from the year.

Scriptorium wishes you the best for 2016!

Buyer’s guide to CCMS evaluation and selection (premium)

“What CCMS should we buy?”

It’s a common question with no easy answer. We provide a roadmap for evaluating and selecting a component content management system (registration required).

Localization, scalability, and consistency

A successful content strategy embraces consistency and plans for scaling up in the future—which in turn means localization is more efficient.

To see how consistent XML-based content can save your company time and money, check out our business case calculator.

The talent deficit in content strategy

Content strategy is taking hold across numerous organizations. Bad content is riskier and riskier because of the transparency and accountability in today’s social media–driven world.

But now, we have a new problem: a talent deficit in content strategy.

Tech comm skills: writing ability, technical aptitude, tool proficiency, and business sense

“Technical Writing is only about what software you know!”

This comment from a LinkedIn post has it partially right: technical writers should have expertise with authoring software. But to be successful, they need a balance of skills.

DITA 1.3 overview

Curious about the additions to DITA in the version 1.3 spec? Here’s a quick rundown on scoped keys, cross-deliverable linking, and more.

Structured authoring: breaking the WYSIWYG habit

It can be difficult to switch from desktop publishing to structured authoring—especially breaking out of desktop publishing’s WYSIWYG authoring mode.

You can shake your WYSIWYG habits with these tips.

The Force Awakens: Content strategy and project hype

December 14, 2015 by

With the most anticipated film of the year—Star Wars: The Force Awakens—coming out this week, I couldn’t help but think about movie hype and how sometimes it leads to disappointment.

The same thing can happen when hype builds around content strategy. Excitement about implementing a new strategy can be good for an organization, especially when the alternative is hostility or resistance to change. But too much enthusiasm can have unintended consequences and result in failure. Here are some of the pitfalls of project hype and how you can avoid them.

Problem #1: Choosing tools too early

Star Wars at Heroes Con 2015 (photo by Gretyl Kinsey)

If your company uses outdated tools that make your content development process slow, tedious, and draining, you may be desperate for new ones. But over-excitement about upgrading your tools can make you more vulnerable to the “ooh, shiny!” factor when you’re looking at new options—and more likely to choose tools without proper investigation. Selecting tools too hastily increases your chances of getting stuck in a workflow that’s just as inefficient as your current one.

In The Empire Strikes Back, Luke Skywalker learned a valuable lesson about choosing the wrong tools for the job when he ignored Yoda’s advice to leave behind his weapons for a Jedi training exercise. You don’t want to make a similar mistake.

The solution: Take a step back and remember that your content strategy should come first. What are your business goals? What does your content development team need to achieve those goals, and what factors stand in the way? Your content strategy should inform your tool choice, not the other way around.

Problem #2: Burning through your budget

When you’re overly eager to start implementing your content strategy, you might be more likely to overspend—especially in the early stages of the project, when you still have most or all of your budget available. If you go over budget in an early part of your implementation, it’s easy to tell yourself that you’ll make up for the loss in a later phase, whether or not that’s actually feasible. (We’ve seen companies fall into this mindset, but it’s a trap!)

Unexpected budget cuts or changes can crop up at any time in an organization. Implementations can also uncover costs you didn’t anticipate at first—maybe your conversion will cost more than you initially estimated, or your output requirements will become more complex halfway through the project. If you’ve been spending over budget in your excitement, your project will have a more difficult time surviving a sudden blow to the budget, and you might not be able to implement the solution that your organization really needs.

The solution: Plan carefully, spend wisely, and always include a budget backup plan in your content strategy. After all, you don’t want to end up like Han Solo, who spent most of the original Star Wars trilogy in debt to Jabba the Hutt.

Problem #3: Ignoring the sequence of your strategy

Implementations don’t always go exactly as planned, and they can sometimes hit a delay or come to a standstill. In your eagerness to keep things moving, you might be tempted to catch up in other areas if one part of your implementation starts lagging behind. However, content strategies usually involve phases with dependencies, and it’s important to pay attention to the order of these phases before you try to change it.

Jumping ahead to one stage of your implementation to offset lag in another—or trying to implement phases in tandem rather than in sequence to save time—can cause more problems than the delays themselves. The order of your content strategy matters. For example, before you can train your team, you need to know what tools you’ll be using, but before you can choose your tools, you need to define your content goals. If you skip ahead to a phase of your project without completing that phase’s prerequisites, you will most likely waste time and resources redoing that phase properly later.

The solution: Stick to your strategy. If your project falls behind, find out what’s causing the delay and address that directly. Resist the temptation to skip ahead. Can you imagine what would have happened if the rebels tried to destroy the Death Star before Princess Leia brought them the plans?

Problem #4: Intimidating your team

If you’re the only one who’s excited about your content strategy implementation, there’s nothing wrong with trying to motivate the rest of your team—but be careful not to go overboard. If everyone around you is facing an intimidating learning curve or major changes to their everyday work experience, hyping up the project could backfire. If you make your team feel pressured, however unintentionally, they might respond with even stronger resistance to change, which is the opposite of what you want.

The solution: Be a leader. Give others the education, encouragement, and support they need to be on board with the implementation. Emphasize the ways your new strategy will improve things for your content creation team—and, if they have concerns, listen and address them. Remember Leia’s words of wisdom about the Empire’s heavy-handed rule of the galaxy and take a more diplomatic approach with your team.

It’s great to be enthusiastic about your new content strategy, but too much hype can set you up for failure. As Yoda told Luke, “Adventure… excitement… a Jedi craves not these things.” A calmer, more controlled approach will lead to a more successful implementation—and help you channel the Force of your positive energy into something you can use to your advantage.