Scriptorium Publishing

content strategy consulting

The Rocky Road to Dublin, er, DITA

June 27, 2016

For LavaCon Dublin, Sarah O’Keefe and I delivered a case study presentation on some of the roadblocks we have encountered in implementing DITA at ADP. This article summarizes the key points of the presentation. The presentation and this blog do not represent the views of ADP, LLC.

Presentation slide with title "DITA Implementation Therapy Session" and a large gray DRAFT plastered across the slide

An early title, later replaced with something a bit more professional, and with a bit more local color

ADP, LLC is a global provider of cloud-based Human Capital Management (HCM) solutions and a leader in business outsourcing services, analytics, and compliance expertise. A centralized technical communication group within ADP undertook to move a group of 60 writers, mostly using RoboHelp, to structured authoring in DITA and a component content management system (CCMS).

It was relatively easy to build a business case, since the tools in place simply could not support the business demands for technical content. The primary driving factors were reducing localization cost, increasing reuse, and improving content quality.

However, the implementation was considerably more difficult. Some of the key challenges were:

  • Complex conditions
  • Resource constraints and expertise gaps
  • Matrixed reporting structure

Complex conditions

Traffic sign showing the "magic roundabout" at Swindon, which is a roundabout with five subordinate roundabouts

Complexity is expensive. Photo credit: dangerousroads.org

ADP has complex products with many variants. For example, some products are available as part of an integrated suite or as stand-alone products. Within these products, users with different roles have access to different functionality; for example, a manager can review timesheets for all direct reports, but an employee can only modify his or her own timesheet. And different content is sometimes needed for different geographic regions.

In building out a classification scheme for the content and for the various conditions, it became clear that we had to balance the value of reuse against the increasing complexity of conditions.

Increased reuse reduces the cost of creating content, but overly complex conditions increase the cost of authoring and maintaining content and the risk of tagging and publishing errors. Finding the optimal balance between these factors became an ongoing issue.
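
To make this concrete, here is a minimal sketch of how conditions stack up on a single element. The attribute values are hypothetical, not ADP’s actual classification scheme:

<!-- Hypothetical conditions: product packaging, user role, and region
     each add an axis that authors must tag and filters must handle. -->
<p product="suite standalone" audience="manager" props="region(us ca)">
  Review the timesheets for your direct reports.
</p>

Every additional axis multiplies the combinations that ditaval filters, reviews, and output testing have to cover, which is where the authoring and maintenance costs creep in.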

Resource constraints and expertise gaps

Like all projects, this one had resource constraints—not enough people and not enough funding to complete the implementation quickly. Furthermore, as is common with new DITA implementations, the people assigned to the DITA implementation team had little or no previous experience with DITA or XML. The organization had no reason to hire for those skills when the tool of choice was RoboHelp. Any team members with DITA knowledge would have acquired it in a previous position.

Increasing DITA expertise is a critical need, but it takes time to transfer knowledge to the implementation team, and that can prolong implementations dramatically. Hiring a DITA expert is an appealing option, but that takes time, too. New hires also lack social capital within the organization. They do not know the company culture, and they don’t have the connections necessary to get things done.

ADP sought to fill the resource and expertise gaps to some degree with consultants. To maximize the value of the consulting engagements, ADP prioritized knowledge transfer—ensuring that internal staff learned from the consultants—and quick wins that would build momentum.

Matrixed reporting structure

The ADP DITA implementation team is part of a matrixed reporting structure, in which team members are accountable to the DITA project lead but “report” to a different manager. Because the demands of a DITA project rise and fall, especially in the early stages, a matrixed or dual reporting structure is common. The team members are assigned to the implementation project for a percentage of their time and are expected to complete regular assignments in the rest of their time. Addressing and resolving conflicts between the needs of the two assignments is often challenging. Excellent planning, coordination, and communication by the leaders is a must.

The rocky, winding road

In a session that preceded ours at LavaCon, the presenter asked for a show of hands from people whose projects went as planned. We made a similar point in our presentation.

The roadblocks we discussed certainly added some twists and turns to ADP’s experience. Key takeaways that can help teams mitigate—or at least cope with—these twists include:

  • Start building expertise as early as possible
  • Insist on knowledge transfer if you engage consultants
  • Develop clear roles and expectations for team members
  • Have a clear communication plan if you work in a matrixed environment

Is your content overhead or a customer delight?

June 20, 2016

Delight is the difference between what you and your team cost, and the revenue you directly (or indirectly) produce (or protect). This concept is as important to charities as hedge funds.

Andy Kessler & Bruce Clarke

You may not think that “delighting” customers is part of your content creation responsibilities. But when customer delight is defined in terms of revenue and costs, it suddenly becomes a critical part of your job.

Determining whether your content is merely overhead or a customer delight may seem like a losing fight: it’s too subjective! However, there are questions you can ask to measure how delightful your content is, including:

  • Does the support team repeatedly answer questions addressed in the content? If customers are contacting support with queries that are (or should be) addressed in content, your content doesn’t explain things well, is hard to find, or both.
  • Where does your publicly available content show up in search engine results? If your content is not at the top of the search results, that means someone else’s content is getting all the attention. Your content probably needs an SEO tuneup. (I know some companies cannot open up their content on the web for competitive and security reasons. Even so, those companies may need a web page to direct users to an official resource, such as a customer-only portal.)
  • What do web analytics show? Web stats can show what content is popular and what isn’t getting any attention. If content isn’t getting read, can you do something to make it more useful, or should you refocus your efforts on the content customers are reading?
  • Do you have a customer feedback loop? Is there a way customers can send comments and questions about specific content? The mechanism can be as basic as a link to an email address that sends the comments to particular content creators. If you do a formal analysis of your content, be sure to include customers as part of the discovery process. Interviews with customers can be very illuminating, particularly when done by a third-party consultant (like me!) who may elicit more candid responses.
  • How do your partners or resellers use your content? If they are writing their own “cheat sheet” versions of official content for their customers (or are translating it because your company does not), your content is failing. You also lose control over how your product/service is being presented.

Measuring content use—and customers’ satisfaction with that content—is critical to proving delight. Without that customer delight, your work as a content creator is just expendable overhead.

Content strategy patterns in the enterprise

June 13, 2016

What factors affect content strategy decisions? Every client has a different combination of requirements, and of course there are always outliers. But based on our consulting experience, here are some factors that affect content strategy decisions.

Is the content the product?

detail from Book of Kells

Book of Kells // flickr: Patrick Lordan

If yes, the content design will be relatively more important. The organization will want content development and publishing workflows that provide for maximum flexibility in order to deliver the highest possible quality in the final product.

Are the writers professional writers?

Full-time content creators may have a preferred way of doing things, but they usually have experience with a variety of tools, and understand that different tools are appropriate for different organizations.

Are the writers volunteers or paid professionals? Does writing the content demand specialized knowledge?

Domain knowledge is always important. If your writers have extremely specialized knowledge, are volunteering their time, or both, then they effectively have veto power over the authoring environment. Tread with care.

Are there regulatory or compliance requirements?

If so, you can expect a company that is relatively more willing to invest in content (since a failure could mean Serious Consequences), but these companies also tend to move slowly and be risk-averse. Review workflows will be relatively more important for regulated companies.

How many languages are supported or need to be supported?

More languages means more interest in governance because mistakes and inefficiencies are multiplied across each language.

Can misuse of the product injure or kill people?

If the product is potentially dangerous, the organization will look for ways to minimize the risk. At the most basic level, this results in documents with pages and pages of hazard warnings. More advanced organizations work on product design to minimize operational hazards and design their content to support correct product usage. Compliance and regulatory requirements may also come into play.

How many people contribute content? Are they full-time or part-time contributors?

A huge pool of part-time content contributors usually means looking for a lightweight, easy-to-use authoring tool that does not require per-seat licensing. A large group of full-time writers usually means investing in automation because even small productivity gains are valuable.

subjectScheme explained

June 6, 2016

Your project is coming along nicely. You have your workflow ready, your style guides are composed, and things are looking up. However, you have complex metadata needs that are starting to cause problems. You need a way to ensure that authors are only using valid attribute values, and that your publication pipeline isn’t going to suffer. This is a situation that calls for a subjectScheme.

In a note element, the type attribute only allows specific values.

By default, most DITA attributes can contain any text value. If you have very specific needs for your attribute metadata, it can be helpful to allow only certain values. A subjectScheme allows you to define a list of values that are then associated with a specific attribute. When you include a subjectScheme in your map, it acts like an editor, going through your document and ensuring that your attribute values are valid.

Anatomy of a subjectScheme

A subjectScheme map consists of a root <subjectScheme> element that contains the following:

  • a <subjectdef> element, which defines your allowed values
  • an <enumerationdef> element, which binds your allowed values to a specific attribute

Take a look at this sample subjectScheme:

<subjectScheme>

  <subjectdef keys="apps">
    <subjectdef keys="internal"/>
    <subjectdef keys="external">
      <subjectdef keys="allowedex"/>
      <subjectdef keys="disallowedex"/>
    </subjectdef>
    <subjectdef keys="all"/>
  </subjectdef>

  <enumerationdef>
    <attributedef name="app"/>
    <subjectdef keyref="apps"/>
  </enumerationdef>

</subjectScheme>

We need to assign values to an attribute named app. The <attributedef> element identifies that attribute, and the <subjectdef> element after it assigns the allowed values. The nested <subjectdef> elements above the <enumerationdef> list five values (internal, external, allowedex, disallowedex, and all) that we consider valid. If we take this and reference it in a map, any app attribute containing anything other than those five values will cause the document to fail validation.

Also, notice that the <subjectdef> elements that define allowedex and disallowedex are nested in the <subjectdef> element that defines external. This indicates that allowedex and disallowedex are types of external apps, creating a semantic link between these types.

To use a subjectScheme in a map, you need to use the following format when referencing it:

<topicref href="filename" format="ditamap" type="subjectScheme"/>
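
For context, here is a minimal sketch of a map that references a subjectScheme alongside ordinary topics (the filenames are hypothetical):

<map>
  <title>Sample publication</title>
  <!-- The subjectScheme map that constrains the app attribute -->
  <topicref href="app-values.ditamap" format="ditamap" type="subjectScheme"/>
  <!-- Ordinary content topics -->
  <topicref href="installing.dita"/>
  <topicref href="configuring.dita"/>
</map>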

Effects on conditional filtering

We’ve already discussed conditional filtering in a previous blog post. If you’re using a ditaval filter file to conditionally process content, the relationship between values defined in your subjectScheme map comes into play.

Let’s say that you have a ditaval file which includes the following:

<val>
  <prop action="include" att="app" val="internal"/>
  <prop action="exclude" att="app" val="external"/>
  <prop action="include" att="app" val="all"/>
</val>

If you process your content with this filter, elements with an app value of internal or all are included, and elements with an app value of external are excluded. However, elements with an app value of allowedex or disallowedex are still included in your output, and you would need to add specific rules to handle those values.

If your map included a subjectScheme, though, the only elements that would come through would be those with an app value of internal or all. This is because of the semantic relationship that is defined within the subjectScheme. Because external apps are excluded and because the subjectScheme defines allowedex and disallowedex as a type of external app, allowedex and disallowedex are also excluded.
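
Here is how that cascade plays out on individual elements, using hypothetical paragraphs tagged with the app attribute:

<!-- With the subjectScheme referenced in the map and the ditaval above: -->
<p app="internal">Included: internal is explicitly included.</p>
<p app="external">Excluded: external is explicitly excluded.</p>
<p app="allowedex">Also excluded: the subjectScheme defines allowedex as a
kind of external app, so it inherits the exclusion.</p>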

By using a subjectScheme map to enrich your attribute metadata, you not only gain a way to define valid values, but also a way to create relationships between them.

Localization testing: it’s not just translation

May 30, 2016

It takes considerable planning and effort to run a successful localization project, from following best practices to evaluating vendors to finding and fixing the weakest link in the localization chain. But the localization process does not end when you receive the translations. Localization testing is necessary for ensuring that your content and products are ready for a distributed global release.

People commonly assume that a quick proofread of the localized content is all that is needed before release, since it’s “just a translation” of the completed source material. This assumption is wrong. In fact, localized content needs to be treated with as much care and attention as the source from which it was derived.

test tubes

Testing is critical for achieving successful results. (image source: wikimedia)

When developing your source—whether it’s a manual, marketing material, or even an application—you likely (hopefully!) test it in some manner. As Jake Campbell recently blogged, product and content need a test bed and use cases to test with. Your localization testing should be conducted using the same criteria and scenarios as your source material.

Functional testing

The first step in localization testing is to ensure that everything is correct and is functioning properly. After a thorough content review and approval of the translations, the content needs to be applied to the products and content deliverables for functional testing.

During testing, check the following:

  • Does the content render? Make sure that the correct language displays, that there are no encoding issues, and that there are no special characters dropping out.
  • Does it render properly? Check for layout and formatting issues, text expansion concerns, font use, and so on.
  • Is it easy to navigate? Ensure that all navigation controls are clearly labeled and understandable in the target language, that any alphabetically sorted lists are in the correct order, and that content flow and usability conform to the target language expectations (particularly important for right-to-left languages).
  • Do all features work? Finally, make sure that everything functions as expected. Check all menus and dialog boxes, test the index and search features using terms and phrases common to the target languages, and proof all content in context to make sure it is still correct and understandable.

For subsequent translations, much of this can be smoke tested. But the content itself should be reviewed for completeness and correctness every time, in every language.

Testing against use cases

Once the localized content passes functional testing, it must be tested for usability and relevance. These tests rely on use cases and scripted scenarios.

The use cases you employ may vary from language to language and from location to location, but they should generally follow the same contexts used for the source language tests. These tests will ensure that your localized content and products are relevant, understandable, and useful.

Use real-life scenarios that people will encounter while using the products and content. All of these scenarios need to be tested in every language to make sure that the experience is very similar from language to language (some differences may be required depending on local requirements), and that instructions and next steps are clear.

Plan accordingly

Be realistic about scale, timelines, and effort when factoring localization testing into your project cycle. Every test designed for your source language needs to be applied to each target language. Some aspects of localization testing can be expedited based on known validity of content and the extent of changes from release to release. However, proper testing—even when expedited—takes time and effort to conduct.

If you are using third parties to conduct the testing (such as partners in your target markets), they need to follow the same test scripts and validate on the same criteria as you. This is critical for tracking quality and pinpointing the source of any issues.

Do you have other tips for localization testing? Please share them in the comments!

Do you know a content strategy concrete head?

May 23, 2016

In lean management, a concrete head is someone resistant to change. In my years working on content strategy projects, I have noticed many people are resistant to changing how they develop and distribute content—sometimes without even knowing it.

Easter Island head statue

Don’t be a content strategy concrete head (flickr: William Warby)

If you hear any of the following things, there is a good chance your team includes a content strategy concrete head.

Disclaimer: I have heard the following thoughts expressed by multiple people working for different clients. This list is not focused on a particular client or two. Believe me.

“I don’t mind copying and pasting content.”

Copying and pasting is more efficient than writing something from scratch—in the short term. But creating another version of the same (or nearly identical) content sets up another maintenance headache. When a feature or product name changes, authors have to track down every mention and make the change. What are the chances they will miss one? Or two? Pretty high, especially if the information is across departments and developed in different content tools.

Reliance on copying and pasting is a sign your content needs a reuse strategy.
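
One common DITA remedy is a conref, where the shared text lives in one place and is pulled in by reference everywhere else. Here is a minimal sketch with made-up filenames and IDs:

<!-- In shared-phrases.dita (topic id="shared-phrases"): the single source -->
<ph id="product-name">WidgetWorks Pro</ph>

<!-- In any topic that needs the name: reference it instead of pasting it -->
<p>Log in to <ph conref="shared-phrases.dita#shared-phrases/product-name"/>
and open your dashboard.</p>

When the name changes, you update one element, and every reference picks up the change at the next build.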

“I won’t give up the authoring tool I’m using now.”

It’s great that an author has mastered Tool You’ll Pry from My Cold, Dead Hands™. Yes, that tool may have served the authoring team and the company well. But business requirements change, and if a tool no longer supports the company’s overall requirements, it’s time to consider other options.

Ferocious loyalty to a tool can be a career-limiting move.

“The minute you put in a real process, things become unmanageable.”

The ad hoc processes in place may be working for individual authors, but probably not for the company as a whole.

Implementing consistent, repeatable processes can be inconvenient. But content creators must balance the short-term pain against the need to adapt for company growth.

Automatic dismissal of any process as “unmanageable” is really code for “I don’t want to be bothered.”

And speaking of not being bothered…

“Changing process is fine as long as it doesn’t affect what I’m doing.”

People are not really supporting change when they shift the burden of change onto others. Successful content strategies encompass the entire organization—not just a department or two. No department is an island.

“We put a PDF file out on the web. It’s searchable and easy to use.”

The PDF file is a dependable format, and it will likely be around for a while. However, reading a letter- or A4-sized PDF on a smartphone is not optimal. Also, searching a PDF for specific information is more difficult than, say, using a search field to find information across a set of web pages.

Putting a PDF file, help set, or any other content deliverable on the web is not the same thing as making content findable and useful. Find out how customers are accessing your content (or would like to), and adjust your content distribution methods accordingly.

What else have you heard from a content strategy concrete head? Put it in a comment, please!

QA in techcomm: creating use cases

May 16, 2016

A few months ago, I wrote about how you could benefit from having a test bed for your content. I made mention of use cases several times, but what are they, and how can you make them effective?

When you’re drawing up requirements for a project, a use case generally means the situation in which an element of that project is useful. In QA, though, a use case is an example or procedure to verify a feature.

Just knowing what you need to test isn’t enough, though; having use cases that are too narrow in scope or that are poorly constructed can complicate testing. This can lead to missing issues or inefficiency in testing.

Types of use cases

Not all use cases are created equal. When I put together test materials, I try to include a few different kinds of use cases.

  • Simple: A standard implementation of a feature which answers the question “does this work?”
  • Complex: An implementation where multiple use cases interact, verifying whether that interaction causes problems. This is useful if you have a platform or environment that requires a flattened document structure, as opposed to a nested document structure like XML.
  • Failure: An implementation where the use case is either not allowed or set up incorrectly. This is useful if you want to test things like error reporting or fallback processing (see the sketch after this list).
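
As a sketch of what failure cases can look like in DITA content (the broken markup is deliberate), a test topic might include values and references that should trip validation or fallback handling:

<!-- Failure case: "warning-severe" is not a valid note type, so a
     correctly configured pipeline should flag it during validation. -->
<note type="warning-severe">This note exists to test error reporting.</note>

<!-- Failure case: a cross-reference to a target that does not exist,
     to verify how broken links are reported or rendered. -->
<xref href="no-such-topic.dita"/>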

Keep them separated

When you set up a use case within your content, whether at the document or block level, try to keep it clearly separated from other use cases. This is important because you can locate and verify your use cases more easily if they’re self-contained.

By keeping them isolated, you can also add or remove your use cases without having to worry about the impact that this will have on the rest of your test bed. There’s nothing more frustrating than removing an obsolete use case only to find that it’s tangled up with other content.

Keep it real

Lorem ipsum text is a useful way to fill large swaths of space to check general formatting. When setting up a specific use case, however, it is often more useful to create content that more closely approximates your actual content.

The most immediate benefit is recognizability. It’s not unusual for the eye to slide across the page when trying to read placeholder text, which complicates things when you need to focus on checking a specific use case.

Pictured: your validation team trying to check your use cases. Flickr: Kevin Krejci

The other benefit is organization. Seeing nearly-real content when verifying your output lets you know that this section needs more attention than just general formatting. Also consider ease of locating specific cases: you need to check a use case within a large test bed, but if it’s all lorem ipsum, how do you find it?

With well-constructed use cases, you’ll find that your test bed becomes an even more valuable asset than before.

LearningDITA and Oxygen XML Web Author

May 9, 2016

Since Scriptorium first announced the availability of LearningDITA.com, we have had more than 1,100 subscribers to our free online DITA courses. To complete the exercises in LearningDITA, we have recommended that students install an XML editor. This has been a hurdle for some students, who cannot or do not want to download and install an editor.

We’re happy to say this limitation is now gone.

One of the recent developments from Syncro Soft—the makers of the <oXygen/> XML Editor—is the release of the Oxygen XML Web Author. This web-based tool allows anyone with a modern browser to use Oxygen XML Author to edit and review XML or DITA content. Borrowing from Syncro Soft’s own materials:

“You can collaborate with other members of your team to contribute and modify content from anywhere on desktops, tablets, and mobile devices. The adaptive and innovative user interface is designed to allow you to interact with XML content in the most efficient and productive way possible.”

To demonstrate the capabilities of the Oxygen XML Web Author, Syncro Soft offers a free online demo version.

Now, with the availability of Oxygen XML Web Author, anyone can complete the exercises for LearningDITA in a full-featured editing environment. It is no longer necessary to download and install an editing tool.

I was even able to edit DITA content from my phone.

Simon editing DITA content using Oxygen XML Web Author from his phone.

We tested the exercises for LearningDITA on the demo version of the Oxygen XML Web Author and found that it works great!

When we wrote the exercise instructions for the LearningDITA courses, our focus was on using an installed editor with content that is stored locally. When using the Oxygen XML Web Author, you work on content that is shared in the cloud through Dropbox, Google Drive, WebDAV, or GitHub. We focus on using Dropbox, although it is possible to use one of the other cloud tools.

A new page on the LearningDITA site describes how working with shared content is different. The page outlines the differences in some procedures and includes short videos to show how easy it really is.

We hope the availability of the Oxygen XML Web Author and its free online demo help many more people get started on the path to LearningDITA.

Virtual meeting etiquette

May 2, 2016

Let’s take a break from content strategy and talk a bit about virtual meeting etiquette. I’ve heard plenty of virtual meeting horror stories from friends and colleagues. There are tales of barking dogs, screaming children, loud ambient office noise, and yes, even the dreaded toilet flush (I have no words). But I haven’t heard of any cases quite like a recent one I experienced…

My virtual assistant—Siri—interrupted my meeting.

Me: Please don't interrupt me. Siri: OK, maybe not.

Thanks?

The majority of my meetings are virtual. Prior to a meeting, I turn off all notifications, set any messaging applications to Do Not Disturb, and silence my mobile phone.

Well into one recent meeting, Siri suddenly piped in, trying to be useful. LOUDLY.

My phone was on silent at the edge of my desk, but at some point someone must have said something that sounded like “Hey Siri”. I had completely forgotten about that feature (mainly because it never works when I try to use it).

Me: Will you please be quiet? Siri: I'm just trying to help.

You’re not helping.

I quickly fumbled for my phone, turned it off completely, and then proceeded to apologize to the other meeting attendees. While it certainly raised some chuckles, it was embarrassing and highly annoying. Fortunately the meeting continued with no further interruption.

No matter how much you prepare, something can always go wrong. Fortunately, the bizarre cases (I’m still not talking to you, Siri) are easily forgivable, though not always forgettable. The blatantly preventable interruptions and faux pas are neither forgivable nor forgettable.

Here are some guidelines for proper virtual meeting etiquette:

  • Arrive early, especially for web meetings. You may need to download a meeting client or update. Even if your virtual meeting is a conference call, everyone arriving early ensures that the meeting starts on time.
  • Turn off all audible notifications and silence all devices. We all use some combination of Skype, Slack, Messenger, email clients, phones, or other tools during our work day. Silencing them reduces interruptions and allows you to focus on the meeting at hand.
  • Act like it’s a face-to-face meeting. Don’t do anything you wouldn’t do in a conference room with the attendees. If you must make noise, please mute yourself first.
  • Use video appropriately. While video can help keep all attendees on their best behavior, it can cause bandwidth issues. Also, remote attendees in other time zones may no longer be in the office, and might appreciate some degree of privacy while in the meeting.
  • Only invite those who absolutely must attend. Too often, virtual meetings include far more people than necessary. Any type of meeting takes time and attention away from other responsibilities. Be mindful of others’ schedules and only include those whose presence is critical.
  • Plan your presentations well. Make sure everyone who is expected to present is given ample notice prior to the meeting. Conduct a dry run using the virtual meeting software to ensure that the presentation will display properly and that everyone running the presentation knows how to use the interface.

And yes, disable your virtual assistants!

Do you have other etiquette suggestions for virtual meetings? Have you experienced poor etiquette in a virtual meeting? Please share your stories and advice in the comments!

tcworld China recap

April 25, 2016

The tcworld China event took place in Shanghai April 18 and 19. I was there to present on content strategy and advanced DITA (yes, I hear your gasp of surprise), but for me, the most interesting part of the trip was getting a chance to connect with the technical communication community in China.

Technical Communication in Chinese

“technical communication” in Chinese

There were more than 100 attendees at the event. Most of the people I met were from Shanghai, Beijing, and Shenzhen. There were also participants from other cities, like Nanjing, and from Japan and Singapore.

For those of us completely ignorant of Chinese geography (which I’m embarrassed to say included me until I found out about this trip), here is a basic map:

I don’t recommend making a strategic decision based on my single week in China, but nonetheless, here are some observations.

Blending authoring and localization

In several conversations, I heard about a blended authoring/localization workflow. Technical writers create information in Chinese and work with the engineers to have this information reviewed and approved. Once the Chinese document is finalized, the same technical writers rewrite the information in English. The English document becomes the starting point for localization into all other languages.

English as a pivot language is common in many places, but the difference here is that a single technical writer is expected to create both the Chinese and the English versions of a document. This means that the technical writers must be able to write in both languages.

Academic background

Chinese universities are just beginning to offer technical writing courses. These courses are often intended for engineers. Technical writing is not currently available as an academic major. Like North American technical writers, Chinese technical writers have varied educational backgrounds. The most common is a university degree in English or a related subject like English translation. Engineering or computer science majors also may end up in technical writing.

In English, we usually refer to people “falling into” technical writing, and German has the word “Quereinsteiger”; that is, “a person who climbs in sideways.” In Germany, however, a large percentage of technical communicators have university-level education in technical communication, and there is also a robust certification process.

It remains to be seen which approach the technical communication industry in China will choose, or whether China will choose a third way.

Business relevance

I delivered a presentation on content strategy in technical communication at the event. My key message, as always, was that you need to have business reasons as the driving force behind your content strategy decisions.

tcworld China slide: Chocolate factory with a sign on the wall reading 400kg chocolate every three minutes. Caption for the slide is Justify your approach.

I also spent some time discussing why cheap content is really expensive—product returns, legal exposure, and inefficient content processes all increase the cost of producing information.

tcworld China slide: Two chocolate bunnies with their ears bitten off. Caption is The myth of cheap content

Both of these messages seemed to resonate with the audience, but there was concern about how to get management support for any new content initiatives.

Several people told me that, in China, organizations are often not ready to invest in content or content strategy. Their corporate culture is to keep operational costs as low as possible. This makes the argument for content strategy investment, even with compelling ROI, a difficult one. That said, it is clear that some companies are shifting their strategy toward innovation—they are delivering cutting-edge products rather than commodities.

A view of the Bund and the river at night

Shanghai at night

There is an informal Association of Shanghai Technical Communicators, which communicates mainly via WeChat. If you can read Chinese, that would definitely be something to explore.

Platform differences

At home, I rely heavily on Slack (internal business), Twitter (mostly business), and Facebook (business and personal) for social media, along with email, Skype, web meeting tools, and more. Inside China, people use different platforms, such as WeChat (similar to Twitter). In part, this is because of the Great Firewall. Facebook, for example, is not officially allowed in China, and I expected to be blocked from using it.

What I found, however, was that in some locations I could use the Facebook mobile app over a cellular connection (but not Wi-Fi). In other locations, it appeared to be wide open. I had very little luck getting Twitter to work anywhere.

This presents a business problem for us. We want to continue to connect with the Chinese technical communication industry, but the social media tools we use are not appropriate for making those connections. Information posted on Twitter will not reach people in China, but the social media applications used in China are not widely used outside of China. We have a platform divide.

Communication challenges

Finally, I want to talk about some of the communication challenges I ran into. A colleague told me that the biggest challenge in China is that you are functionally illiterate. Even though many signs are provided in both Chinese and English, this is quite true. Upon arrival, I hopped in a taxi and told the driver my hotel. But because the hotel name is different in Chinese, it wasn’t until I showed him the written address, in Chinese, that he understood where I needed to go. (Based on advice from colleagues, I was prepared with the necessary version of the address.)

Shanghai was actually easier in this regard than Shenzhen, where I also spent a couple of days. (This is probably a good spot to mention that Yuting Tang of tekom did a fantastic job organizing various outings, providing translation, and acting as a general fixer for me and other speakers. And I had a great time just hanging out with her! Without her, Shenzhen would have been a big challenge.)

In Shanghai, I had a twelve-hour time difference with my office in North Carolina. With the conference filling the Shanghai day, I generally had only a few hours in the evening for synchronous communication. That is, after I got back from one of our epic dining adventures and before I fell into bed, I could check in with the office as needed. For a week-long stay, this wasn’t particularly critical. For an ongoing business relationship, though, it introduces obvious challenges. One (China-based) colleague had to leave an evening get-together to attend an 8 p.m. meeting. Another (visiting) colleague had previously scheduled a webcast, so he found himself at his computer at 11 p.m. local time. There’s not much that can be done about the time zones, but best practices like rotating meeting times (so that everyone shares the pain of the occasional 11 p.m. meeting) show some respect for your team members.

 

I thoroughly enjoyed my time in China, and I was delighted to meet a few of the people working in technical communication across the country. I also made a significant dent in the country’s dumpling inventory. Many thanks to Michael Fritz at tekom for the invitation!

Dumplings!

Totally worth the trip.