The rise of Web 2.0 technology provides a platform for user-generated content. Publishing is no longer restricted to a few technical writers—any user can now contribute information. But the information coming from users tends to be highly specific, whereas technical documentation is comprehensive but less specific. The two types of information can coexist and improve the overall user experience.
In early 2009, Scriptorium Publishing conducted a survey to measure how and why technical communicators are adopting structured authoring.
Of the 616 responses:
29 percent of respondents indicated that they had already implemented structured authoring.
16 percent indicated that they do not plan to implement structured authoring.
14 percent were in the process of implementing structured authoring.
20 percent were planning to do so.
21 percent were considering it.
This report summarizes our findings on topics including the reasons for implementing structure, the adoption rate for DITA and other standards, and the selection of authoring tools.
A report from Morgan Stanley states that mobile Internet use will be twice that of desktop Internet and that the iPhone/smartphone “may prove to be the fastest ramping and most disruptive technology product / service launch the world has ever seen.” That “disruption” is already affecting the methods for distributing technical content.
With users having Internet access at their fingertips anywhere they go, Internet searches will continue to drive how people find product information. Desktop Internet use has greatly reshaped how technical communicators distribute information, and having twice as many people using mobile Internet will only push us toward more online delivery—and in formats (some yet to be developed, I’d guess) that are compatible with smaller smartphone screens.
The growing number of people with mobile Internet access underscores the importance of high Internet search rankings and a social media strategy for your information. If you haven’t already investigated optimizing your content for search engines and integrating social media as part of your development and distribution efforts, it’s probably wise to do that sooner rather than later. Also, have you looked at how your web site is displayed on a smartphone?
If you don’t consider the impact of the mobile Internet, your documentation may be relegated to the Island of Misfit Manuals, where change pages and manuals in three-ring binders spend their days yellowing away.
(This post is late. In my defense, I had the flu and the glow of the computer monitor was painful. Also, neurons were having trouble firing across the congestion in my head. At least, that’s my medical explanation for it. PS I don’t recommend the flu. Avoid if possible.)
Which of these scenarios do you think is most intimidating?
Giving a presentation to a dozen executives at a prospective client, a presentation that will decide whether we get the project or not
Giving a presentation to 50 people, including half a dozen supportive fellow consultants
Giving a presentation to 400 people at a major conference
I’ve faced all three of these, and while each scenario presents its own set of stressors, the most intimidating, by far, is option #2.
In general, I’m fairly confident in my ability to get up in front of a group of people and deliver some useful information in a reasonably interesting fashion. But there is something uniquely terrifying about presenting in front of your peers.
At LavaCon, I faced the nightmare—a murderers’ row of consultants in the back of the room, fondling various tweeting implements.
Here are some of the worst-case scenarios:
No new information. I have nothing to say that my colleagues haven’t heard before, and they could have said it better.
Disagreement. My peers think that my point of view is incorrect or, worse, my facts are wrong.
Boring. I have nothing new to say, my information is wrong, and I’m not interesting.
Of course, my peers were gracious, participated in the session in a constructive way, and said nice things afterwards. I didn’t even see any cheeky tweets. (I’m looking at you, @scottabel.)
All in all, I’d have to say that it’s a lot more fun to sit in the back of someone else’s presentation, though. Neil Perlin handled his peanut gallery deftly, asking questions like, “With the exception of the back row, how many of you enjoy writing XSLT code?”
Rahel Bailie said it best, I think. After completing her excellent presentation, she noted that presenting in front of peers is terribly stressful because, “I really want you to like it.”
In addition to our November event on localization, we are adding another webcast in December. I’ll be presenting Strategies for coping with user-generated content on December 8 at 11 a.m. Eastern time via GoToWebinar. This event is free but registration is required.
Here’s the description:
The rise of Web 2.0 technology provides a platform for user-generated content. Publishing is no longer restricted to a few technical writers—any user can now contribute information. But the information coming from users tends to be highly specific, whereas technical documentation is comprehensive but less specific.
The two types of information can coexist and improve the overall user experience. User-generated content also offers an opportunity for technical writers to participate as “curators”—by evaluating and organizing the information provided by end users.
Remember, there’s no charge to attend, but you do need to register.
On October 22, join Simon Bate for a session on delivering multiple versions of a help set without making multiple copies of the help:
We needed to generate a help set from DITA sources that applied to multiple products. However, serious space constraints prevented us from using standard DITA conditional processing to create multiple, product-specific versions of the help; there was only room for one copy of the help. Our solution was to create a single help set in which the appropriate content is selected for display when the help is opened.
In this webcast, we’ll show you how we used the DITA Open Toolkit to create a help set with dynamic text display. The webcast introduces some minor DITA Open Toolkit modifications and several client-side JavaScript techniques that you can use to implement dynamic text display in HTML files. Minimal programming skills are required.
I will be visiting New Orleans for LavaCon. This event, organized by Jack Molisani, is always a highlight of the conference year. I will be offering sessions on XML and on user-generated content. You can see the complete program here. In addition to my sessions, I will be bringing along a limited number of copies of our newest publication, The Compass. Find me at the event to get your free copy while supplies last. (Otherwise, you can order online Real Soon Now for $15.95.)
Register for LavaCon (note, early registration has been extended until October 12)
And last but certainly not least, we have our much-anticipated session on translation workflows. Nick Rosenthal, Managing Director of Salford Translations Ltd., will deliver a webcast on cost-effective document design for a translation workflow on November 19 at 11 a.m. Eastern time:
In this webcast, Nick Rosenthal discusses the challenges companies face when translating their content and offers best practices for managing your localization budget effectively, including XML-based workflows and ways to integrate localized screen shots into translated user guides or help systems.
We have opened up free access to two of our white papers:
Hacking the DITA Open Toolkit, available in HTML or PDF (435 KB, 19 pages)
FrameMaker 8 and DITA Technical Reference, available in PDF (5 MB, 55 pages)
These used to be paid downloads.
Why the change of heart? Most of our business is consulting. To get consulting, we have to show competence. These white papers are one way to demonstrate our technical expertise.
(By this logic, our webcasts should also be free, but I’m not ready to go there. Why? We have fixed costs associated with the webcast hosting platform. Plus, once we schedule a webcast, we have to deliver it at the scheduled time, even if we’d rather be doing paying work. By contrast, we can squeeze in white paper development at our convenience.)
What are your thoughts? We are obviously not the only organization dealing with this issue…
If you are reading this, then we have succeeded in migrating our web site over to WordPress.
Of course, the process of managing our own content always takes a back seat to working with our customers’ content, so the process took longer than you might expect.
We did learn a couple of things, most of which should sound awfully familiar if you are working on your own content strategy:
It’s not until you try to move into a new system that you recognize all the mistakes you made in the previous system.
PHP stands for Picky Hypochondriac Programming. I had several cases where code absolutely refused to work for no apparent reason. I had the resident PHP expert (Simon) look it over. Eventually, I gave up and retyped the code, and then it worked.
Learn to work with the tool and not against it. I have to credit a former coworker, Bruce Bicknell, for this little gem, which he originally applied to Word versus FrameMaker. When moving from Dreamweaver-based HTML to WordPress, take some time to learn best practices for WordPress. Don’t try to impose your existing Way of Doing Things onto the new system. It’s inefficient and it probably won’t work.
Content migration is always awful. To transfer our blog, I found a Blogger-to-WordPress converter. That worked pretty well, except that a couple of posts now have my name on them even though I didn’t write them. Transferring comments was a travesty that involved the support people at Haloscan (helpful) and cleaning out random comment triplication (gross manual labor).
But I hope you like the new site and blog. Please poke around and leave us feedback.
DITA XML is of little use to readers unless it’s converted to some kind of output. The DITA Open Toolkit (DITA OT) provides transforms and scripts that convert DITA to PDF output and a long list of other formats.
Producing PDF output from DITA content can be challenging. DITA XML is converted to an XSL-FO file, a combination of content and formatting instructions. You must know XSL-FO to customize the PDF, even just to add simple content such as headers and footers, logos, and so on.
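To give a sense of what that entails, here is a minimal XSL-FO sketch of a running footer with a page number (the page-master name is invented for illustration, and the usual fo namespace prefix is assumed):

<fo:page-sequence master-reference="body-page">
  <!-- static-content repeats in the named region on every page -->
  <fo:static-content flow-name="xsl-region-after">
    <fo:block text-align="center" font-size="9pt">
      Page <fo:page-number/>
    </fo:block>
  </fo:static-content>
  <!-- the topic content itself flows into the body region -->
  <fo:flow flow-name="xsl-region-body">
    <fo:block>Topic content goes here.</fo:block>
  </fo:flow>
</fo:page-sequence>

Even a change this small means knowing about page masters, region names, and the rest of the FO vocabulary.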
To forgo the programming, you can choose a page layout or help authoring tool, but these tools also have pitfalls. Page layout programs have varying degrees of DITA support. Help authoring tools let you style the PDF through CSS, but you can’t fine-tune page layout as you can in page layout programs.
These are just a few examples we discuss in our white paper “Creating PDF files from DITA content.” Read the white paper online (in HTML or PDF).
Conversation and Community: The Social Web for Documentation (XML Press, ISBN: 9780982219119) by Anne Gentle provides technical communicators with a roadmap for integrating social media — blogs, wikis, and much more — into their content development efforts. This is critical because, as Anne notes in the preface, “professional writers now have the tools to collaborate with their audience easily for the first time in history.”
Anne provides overviews of all the major social media concepts — from aggregation to syndication, wikis, discussion, presence, and much more. But it is Chapter 3, “Defining a Writer’s Role with the Social Web,” that will make this book a classic. Here, Anne lays out a detailed strategy for determining whether and how to introduce social media in an organization. Consider this:
It’s important to find a balance between allowing an individual’s authentic voice to speak on behalf of an organization and the requirements of institutional messaging and brand preservation. […] It’s also possible that you are ahead of the curve and need to help others see ways to apply social technologies for the company.
She goes on to explain just how to accomplish these things.
Wikis and blogs each get a chapter of their own, in which Anne discusses how to start and maintain these types of environments.
After reading so much of Anne’s work on her blog, it’s a bit odd to see her writing on paper in an actual book. The feeling that I’ve wandered into the wrong medium is augmented by extensive footnotes, most of which point to web site resources, and the many examples of web-based content (such as videos or interactive mashups). However, it’s likely that the book’s target audience is more comfortable with paper.
Conversation and Community: The Social Web for Documentation provides an excellent introduction to wikis, blogs, forums, and numerous other social media technologies for the professional content creator. There is valuable (and perhaps career-preserving) information about how to develop a strategy for user-generated content that is compatible with your organization’s corporate culture.
If you think that community participation in your documentation is coming soon, read this book immediately. If you think that it’s not coming, you’re wrong, and you especially need to read this book.
This post is Part 2 of our Flare 5 DITA feature review. Part 1 provides an overview and discusses localization and map files.
Cross-references and other links
I imported DITA content that contained three xref elements (I shortened the IDs below for readability):
Reference to another step in the same topic:
<stepresult>
Result of step. And here’s a reference to the <xref href="task1.xml#task_8F2F9" type="li" format="dita" scope="local">third step</xref>.
</stepresult>
Reference to another topic:
<stepresult>
Result text. And here’s a link to the other task topic:
<xref href="task2.xml#task_8F2F94" type="task" format="dita" scope="local"></xref>.
</stepresult>
Link to web site:
<cmd>
Here’s another step. Here’s a link with external scope:
<xref href="https://scriptorium.com" scope="external" format="html">www.scriptorium.com</xref>
</cmd>
All three came across in the WebHelp I generated from Flare:
On the link to the topic, Flare applied a default cross-reference format that included the word “See” and the quotation marks around the topic’s name. You can modify the stylesheet for the Flare project to change that text and styling.
Relationship tables
DITA relationship tables let you avoid the drudgery of manually inserting (and managing!) related topic links. Based on the relationships you specify in the table, related topic links are generated in your output.
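For example, a minimal relationship table might look like this (the topic file names are invented for illustration). Each topic in a row gets generated links to the topics in the other cells of that row:

<reltable>
  <relheader>
    <relcolspec type="concept"/>
    <relcolspec type="task"/>
  </relheader>
  <relrow>
    <!-- the concept output links to the task, and vice versa -->
    <relcell><topicref href="about_widgets.xml"/></relcell>
    <relcell><topicref href="installing_widgets.xml"/></relcell>
  </relrow>
</reltable>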
I imported a simple map file with a relationship table into Flare and created WebHelp. The output included the links to the related topics. I then tinkered with the project’s stylesheet and its language skin for English to change the default appearance and text of the heading for related concepts. The sentence-style capitalization and red text for “Related concepts” in the following screen shot reflect my modifications:
conrefs
DITA conrefs let you reuse chunks of content. I created a simple conref for a note and then imported the map file with one DITA file that contains the actual note and a second file that references the note via a conref.
Flare happily imported the information and turned the conref into a Flare snippet. It’s worth noting that the referencing, while equivalent, is not the same. In my source DITA files, I had this:
aardvark.xml contains:
<note id="feeding_note">Do not feed the animals.</note>
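A second file then pulls the note in by pointing at that ID with a conref attribute (the file name and topic ID here are illustrative):

<!-- zookeeping.xml reuses the note stored in aardvark.xml -->
<note conref="aardvark.xml#aardvark_topic/feeding_note"/>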
Thus, we have two instances of the content in the DITA files — the original content and the content reference. In Flare, we end up with three instances — the snippet and two references to the snippet. In other words, Flare separates out the content being reused into a snippet and then references the snippet. This isn’t necessarily a bad thing, but it’s worth noting.
Specialization
Specialized content is not officially supported at this point. According to MadCap, it worked for some people in testing, but not for others. If you need to publish specialized DITA content through Flare, you might consider generalizing back to standard DITA first.
Conditional processing
When you import DITA content that contains attribute values, Flare creates condition tags based on those values. I imported a map file with a topic that used the audience attribute: one paragraph had that attribute set to user, and another had the attribute set to admin. When I looked in the Project Organizer at the conditions for the WebHelp target, conditions based on my audience values were listed:
I set Audience.admin to Exclude and Audience.user to Include, and then I created WebHelp. As expected, the output included the user-level paragraph and excluded the admin-level one.
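For reference, the marked-up paragraphs in the source looked something like this (the paragraph text is invented for illustration):

<p audience="user">Choose File > Save to store your settings.</p>
<p audience="admin">Edit the configuration file on the server instead.</p>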
DITA support level
Flare supports DITA v1.1.
Our verdict
If you’re looking for a path to browser-based help for your DITA content, you should consider the new version of Flare. Without a lot of effort, we were able to create WebHelp from imported DITA content. Flare handled DITA constructs (such as conrefs and relationship tables) without any problems in our testing. Our only quibble was with the TOC entries in the WebHelp (as mentioned in Part 1), and we’ve heard that MadCap will likely be addressing that issue in the future.
We didn’t evaluate how Flare handles DITA-to-PDF conversion. However, if the PDF process in Flare works as smoothly as the one for WebHelp, Flare could provide a compelling alternative to modifying the XSL-FO templates that come with the Open Toolkit or adopting one of the commercial FO solutions for rendering PDF output.
[Disclosure: Scriptorium is a Certified Flare Instructor.] [Full disclosure: We’re also an Adobe Authorized Training Center, a JustSystems Services Partner, a founding member of TechComm Alliance, a North Carolina corporation, and a woman-owned business. Dog people outnumber cat people in our office. Can I start my post now?]
These days, most of our work uses XML and/or DITA as foundational technologies. As a result, our interest in help authoring tools such as Flare and RoboHelp has been muted. However, with the release of Flare 5, MadCap has added support for DITA. This review looks at the DITA features in the new product. (If you’re looking for a discussion of all the new features, I suggest you wander over to Paul Pehrson’s review. You might also read the official MadCap press release.)
The initial coverage reminds me a bit of this:
(My web site stats prove that you people are suckers for video. Also, I highly recommend TubeChop for extracting a portion of a YouTube video.)
Let’s take a look at the most important Flare/DITA integration pieces.
New output possibilities
After importing DITA content into Flare, you can publish to any of the output formats that Flare supports. Most important, in my opinion, is the option to publish cross-browser, cross-platform HTML-based help (“web help”) because the DITA Open Toolkit does not provide this output. We have created web help systems by customizing the Open Toolkit output, and that approach does make sense in certain situations, but the option to publish through Flare is appealing for several reasons:
Flare provides a default template for web help output (actually, three of them: WebHelp, WebHelp Plus, and WebHelp AIR)
Customizing Flare output is easier than configuring the Open Toolkit
I took some DITA files, opened them in Flare, made some minimal formatting changes, and published to WebHelp. The result is shown here:
Not bad at all for 10 minutes’ work. I added the owl logo and scriptorium.com in the header, changed the default font to sans-serif, and made the heading purple. Tweaking CSS in Flare’s visual editor is straightforward, and changes automatically cascade (sorry) across all the project files.
Ease of configuration
Flare wins. Next topic. (Don’t believe me? Read the DITA Open Toolkit User Guide — actually, just skim the table of contents.)
Language support
The Open Toolkit wins on volume and for right-to-left languages; Flare wins on easy configuration. (I’m detecting a theme here.)
Out of the box, both Flare and the Open Toolkit provide strings (that is, localized output for interface elements such as the “Table of Contents” label) for simplified and traditional Chinese, Danish, Dutch, English, Finnish, French, German, Italian, Japanese, Korean, Norwegian, Portuguese, Spanish, Swedish, and Thai (I have omitted variations such as Canadian French).
Beyond that, we have the following:
Right-to-left language support: Only in the Open Toolkit
Language strings provided by the Open Toolkit but not by Flare: Arabic, Belarusian, Bulgarian, Catalan, Czech, Greek, Estonian, Hebrew, Croatian, Hungarian, Icelandic, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Slovak, Slovenian, Serbian, Turkish, and Ukrainian
Ease of adding support for a new language: Flare wins. In the Open Toolkit, you modify an XML file (see the sketch below); in Flare, you use the Language Skin Editor (although it looks as though you could modify the resource files directly if you really wanted to)
Thus, if you need Hebrew or Arabic publishing, you can’t use Flare. The Open Toolkit also provides default support for more languages.
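For the curious, adding a language to the Open Toolkit boils down to two small XML edits. This is a sketch from memory of Toolkit 1.x, so treat the file names and paths as approximate:

<!-- new file xsl/common/strings-xx.xml: one entry per generated interface string -->
<strings xml:lang="xx">
  <str name="Table of Contents">Your translated TOC label</str>
</strings>

<!-- then register the new file in xsl/common/strings.xml -->
<lang xml:lang="xx" filename="strings-xx.xml"/>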
Map files
I imported a map file into Flare and published. Then, I changed the map file to include a simple nested ditamap. Here is what I found:
Flare recognized the map file and the nested map file and built TOC files in Flare with the correct relationships.
Inexplicably, the nested map file was designated the primary TOC. I speculate that this might be because the nested map file was first in alphabetical order. I changed the parent map file to be the primary TOC to fix this. I don’t know what would happen for a more complex set of maps, but I am concerned.
Flare inserted an extra layer into the output TOC where the nested map is found.
The titles generated in the TOC are different in Flare than they are through the DITA Open Toolkit (see below).
I generated XHTML output for my map file (the nested map is the “The decision to implement” section) through the DITA Open Toolkit. Then, I imported the same map file into Flare, generated WebHelp, and compared the resulting TOCs.
Notice that:
The TOC text is different (!!). The DITA Open Toolkit uses the text of the topic titles from inside the topic files. Flare uses the text of the @navtitle attribute in the map file. My topic titles and @navtitles don’t match because I created the map file, then changed a bunch of topic titles. The map file didn’t keep up with the new titles (because it doesn’t matter in the Open Toolkit), but it appears to matter for Flare. The entry in the map file for the first item looked something like this (the href and titles below are illustrative):
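<!-- Flare builds its TOC entry from @navtitle; the Open Toolkit reads the title element inside the topic file, which I had since rewritten -->
<topicref href="survey_results.xml" navtitle="Survey results"/>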
Full disclosure: We’re an XMetaL Services Provider and have no particular affiliation with oXygen.
I’m in the fortunate situation of having access to both XMetaL 5.5 and oXygen 9.3. Both are excellent XML editors for different reasons. I’d hate for Scriptorium to make me choose one over the other.
From the viewpoint of authoring XML and XSLT, here are my top five features of both editors:
oXygen
Apply XSLT on the fly: You can associate an XML file with an XSLT stylesheet and transform the XML within oXygen. Goodbye, command line! XMetaL will convert the document to a selected output format, but you don’t choose the XSLT; that hasn’t been a big concern for me.
Indented code: The pretty-print option makes working with code so easy. You can set oXygen to do this automatically when you open a file or on demand. The result is code indented according to the structure. XMetaL doesn’t have pretty print.
Autocompleting tags: As you type an element, oXygen pops up a list of elements beginning with the typed string. You press Enter when you find the right tag, and the end tag is inserted for you. The valid attributes at any particular point are also shown in a drop-down list. XMetaL doesn’t have autocompleting tags.
Find/replace in one or more documents: I’ve often needed to search and replace strings in an entire directory. In XMetaL, you can only find and replace in the current document.
Comparing two documents or directories: Compare files by content or timestamp. In a directory, you can even filter by type so only XML files, for example, are compared. XMetaL doesn’t offer this feature.
XMetaL
Auto-tagging content: You can copy and paste content from an unstructured document (a web page, for example), and XMetaL automatically wraps the content in elements. Even tables and lists are wrapped correctly. This can be handy if you have a few documents to convert. In oXygen, the content is pasted as plain text.
Auto-assignment of ID attributes: Never worry about coming up with unique IDs. XMetaL will assign them to the types of elements you select. Warning: The strings are quite long, as in “topic_BBEC2A36C97A4CADB130784380036FD6.” oXygen inserts IDs only on the top-level element, but full support will be added in version 10.3.
Auto-insertion of basic elements: When you create a document, XMetaL inserts placeholders for elements such as title, shortdesc, body, and p. It’s a small convenience. oXygen will also insert elements if you have Content Completion selected in the Preferences.
WYSIWYG view of tables: The table is displayed as you’d see it in a Word or FrameMaker document. In oXygen, all you see are the table element tags.
Reader-friendly tag view: The tags are a bit easier to read in XMetaL than in oXygen. In XMetaL, the opening and closing tags are displayed on one line when possible. This feature saves space on the page and makes the document easier to read in tag view. For example, you might have a short sentence wrapped in p tags. In XMetaL, the p tags are displayed on the same line. In oXygen, the p tags are always on separate lines. This is another convenience that doesn’t sound like a big deal, but it really makes a difference while you’re authoring.
oXygen and XMetaL have so many other strengths. I’ve just chosen my top five features.
What I’d like to see in XMetaL: The ability to indent code, the ability to drag and drop topics in the map editor.
What I’d like to see in oXygen: The ability to view a table (lines and all) in the WYSIWYG view instead of just the element tags.
So how do I choose which editor to use at a particular moment? When I’m casually authoring in XML, I choose XMetaL for all of the reasons you read above. The WYSIWYG view is more user-friendly to me. But when I’m writing XSLT or just want to get at the code of an XML document, oXygen is my choice.
Update 6/15/09:
I’m thrilled to report that two deficiencies I noted in oXygen 9 have been addressed in the latest version of oXygen — 10.2.
In Author view, tables are now displayed in WYSIWYG format. Just like in your favorite word processor, you can drag and drop column rulings to resize columns. After you resize columns, the colwidth attribute in the colspec element is updated automatically. This is much easier than manually editing the colwidth.
In Author view, the tags are now displayed on one line when possible. Before, the tags were always on separate lines from the content.
I regret to announce that DocTrain DITA Indianapolis is cancelled. DocTrain/PUBSNET Inc is shutting down.
As a business owner, messages like this strike fear in my heart. If it could happen to them…gulp. (This might be a good time to mention that we are ALWAYS looking for projects, so send them on over, please.) My condolences to the principals at DocTrain.
Meanwhile, I’m also thinking about what we can do in place of the event. I had a couple of presentations scheduled for DocTrain DITA, and Simon Bate was planning a day-long workshop on DITA Open Toolkit configuration.
So, here’s the plan. We are going to offer a couple of webinars based on the sessions we were planning to do at DocTrain DITA:
Each webinar is $20. We may record the webinars and make the recordings available later, but I’m not making any promises. Registration is limited to 50 people.
Here’s the challenge part: If you were scheduled to present at DocTrain DITA (or weren’t but have something useful to say), please set up a webcast of your presentation. It would be ultra-cool if we could replicate the event online (I know that the first week in June was cleared on your schedule!), but let’s get as much of this content as possible available. If you do not have a way to offer a webinar, let me know, and I’ll work with you to host it through Scriptorium.
And here’s my challenge to those of you who like to attend conferences: Please consider supporting these online events. If $20 is truly more than you can afford, contact me.
Last week, I attended the annual DocTrain West event, which was held this year in Palm Springs, California.
Weather in Palm Springs was spectacular as always with highs in the 80s during the day. Some of my more northerly friends seemed a bit shell-shocked by the sudden change from snow and slush to sun and sand. (North Carolina was 40 degrees when I left, so that was a nice change for me as well.)
Scott Abel did his usual fine job of organizing and somehow being omnipresent.
I promised to post my session slides. The closing keynote was mostly images and is probably not that useful without audio, so I’m going to point you to an article that covers similar ground (What do Movable Type and XML Have in Common, PDF link).
I have embedded the slides from my DITA to PDF session below.
I have also posted the InDesign template file and the XSL we built to preprocess the DITA XML into something that InDesign likes on our wiki. Note that running the XSL requires a working configuration of the DITA Open Toolkit. For more information, refer to the DITA to InDesign page on our wiki.
This is the best advice for job seekers I’ve ever seen. India Amos writes about her pile of resumes:
And do you want to know what’s the most striking thing about most of these hopefuls? They are completely wasting their time. And mine, of course, but mostly their own. Because they’re not only not going to get a job with me, they’re not going to get a job with anyone unless that person is as slovenly and illiterate as these applicants.
She proceeds to offer some excellent advice in numerous categories. Here are some excerpts from a lengthy list about formatting:
Learn to use style sheets, so that you can make your heading styles consistent. If you choose to ignore my request for a PDF résumé, try to make sure your Word attachment doesn’t demonstrate to me what a slob you are, formatting everything locally and aligning text using spaces instead of tabs.
Don’t Capitalize Everything. I Cannot Emphasize This Enough. It Makes You Look Like a 419 Scammer.
Violet 9pt Arial is probably not a good choice for anything.
Of course, in today’s economy, lots of people need jobs. So here is some long-promised advice on how to get a job:
Apply for jobs where your skillset is relevant. In this job market, with tons of job seekers, you are unlikely to get the “stretch” position. So, look for positions that are equivalent to your last position, that you are uniquely qualified for, or that you are slightly overqualified for. For instance, let’s say you are a technical writer with five years of experience and “the usual” complement of technical skills. What is your unique qualification? If you speak some Japanese, look for Japanese companies where your language skills might be useful. If your undergraduate degree is in music, look for a company that makes music software or products related to music. In other words, look for a position where your outside interests are also relevant. But, at a minimum, apply only for positions that you are reasonably qualified for. It’s tempting, especially when you really need a job Right Now, to take the firehose approach and spray resumes everywhere. It doesn’t work. Focus your job search and send out a smaller number of really good applications.
Do your homework. Before contacting the company, investigate. Read their web site, read any recent news coverage. Look them up on LinkedIn and see if you know anyone in the organization. (You are on LinkedIn, right?) Use the information you find to make your application more relevant. If you get an interview, do more homework before the interview.
When you apply for the job, follow the #!%$#!%#! instructions. If asked for PDF, provide PDF. If asked for Word, provide Word. Et cetera.
Submit resumes online. Paper and snail mail take too long. By the time your resume arrives by mail, the position could be filled. Also, dropping off your resume in person? Creepy and needy. (One exception: If you know someone at the organization and they are willing to deliver the resume for you. Even then, I would recommend emailing the resume to your contact and asking him or her to forward it.)
Whether it is requested or not, write a cover letter. The cover letter should be the body of your email and not an attachment. Follow Ms. Amos’s excellent advice. You might also use a T letter as your cover letter, but do send the resume. Tom Murrell describes the T letter in detail in his article Get More Interviews with a T-letter. But again, I disagree with his advice to leave out the resume. If you are instructed to send a resume, send a resume.
Show up on time for any in-person interview. If possible, do a dry run the day before to locate the building. Or plan to arrive very early. There are worse things than sitting in a nearby coffee shop for half an hour. (Don’t chug too much coffee.)
I could go on for a long time, but frankly, these six points will lift you above 95 percent of the other applicants, and you can do the rest.
I am a voracious reader. And by voracious, I mean that I often cook with a stirring spoon in one hand and a book in the other. I go through at least a dozen books a month (booksfree is my friend).
So why don’t I like podcasts?
They’re inconvenient. I don’t have a lot of uninterrupted listening time, other than at the gym. And frankly, there’s a bizarre cognitive dissonance in listening to Tom Johnson interview Bogo Vatovec while I’m lifting weights. I tried listening to a crafting podcast, but that was worse — my brain can’t handle auditory input describing crocheting techniques while simultaneously operating an elliptical machine. So I went back to Dr. Phil on the gym TV. It may rot my brain, but at least it doesn’t hurt.
They’re inefficient. I can listen to a 30-minute podcast, or I can skim the equivalent text in 90 seconds.
I’ve been thinking about what would make a podcast more appealing to me, and realized that it’s not really the medium I object to, it’s my inability to control the delivery.
I’ll become a podcasting proponent when I perceive these properties:
Better navigation. Podcasts, like other content, need to be divided into logical chunks. These chunks should be accessible via a table of contents and an index.
Ability to skim. Podcasts need to provide the audio equivalent of flipping pages in a book or scrolling through a document while only reading the headings.
Depending on the software you use to consume podcasts, you may already have some of these features. For instance, a colleague told me that he listened to my recent DITA webinar at five times the normal speed:
I wanted to let you know about something in particular. I listened to it at 5x fast fwd in Windows Media Player while drinking a coke. My heart is still racing. You should try it. :o)
Do you enjoy podcasts? Do you have any special techniques for managing them efficiently?
Matthew Ellison reviews seven screen capture programs: FullShot, HyperSnap, SnagIt, Madcap Capture, RoboScreen Capture, ScreenHunter (free), and TNT. He also points out what to look for in a screen capture tool and compares features in a handy table.
SnagIt lands at the top of the bunch. Matthew describes it as “the most full-featured of the capture tools reviewed in this article.”
I’m a recent SnagIt convert after using Paint Shop Pro for years. SnagIt can’t be beat for a quick, easy screen shot. I also like the torn edge options to indicate a partial shot of the GUI. But the jagged edges might be more of a creative device than a helpful visual cue. What do you think?
Have you ever been really scared? I don’t mean just the Halloween kinda scared, but really scared. That’s how I felt at the Burlington Marriott when the hotel employee delivered the box containing the workbooks for my Introduction to XMetaL and DITA workshop. He stood in the doorway, smiled, and handed me a very beat up, bent, folded, spindled, and mutilated FedEx box.
The box looked like the driver had had a flat on Route 128 and used it to prevent the truck from rolling back while jacking up the front end. It was nice and damp too. With much trepidation, I opened the box and — to my relief — found that the materials were undamaged. Whew.
Following that, Wednesday’s all-day workshop on XMetaL and DITA was smooth sailing. OK, we had a bit of a problem with power strips, but the helpful DocTrain folks got that taken care of. In retrospect, many of the questions I fielded in the workshop weren’t so much about DITA or XMetaL itself. Instead, many of the questions were about generating output. The fact is that unless you’re willing to spend some quality time with CSS and the DITA Open Toolkit, your output from DITA will look very generic. XMetaL has a number of hooks that ease some of the pain in generating XHTML output. But even those hooks won’t save you from FO issues if you want to generate PDF output.
In my presentation on Thursday comparing XMetaL and FrameMaker support in DITA, the questions returned once again to output. Of course, this time the focus was on using FrameMaker 8.0 as a PDF engine. In workflows where content is created and maintained in XML, but then has to be delivered in PDF (or print), FrameMaker 8.0 looks like an attractive possibility. There are a few flaws in this solution (such as translating xref elements for intra-document links into live links in PDF), but users are closer to a solution than they were six months ago.
We’ve posted PDFs of the slides from both sessions on SlideShare.
When you’re done browsing the slides, take a look on our site for information about how we can help you with your FrameMaker, XMetaL, Open Toolkit, and PDF problems.
What do killer Internet applications have in common?
Information businesses (publishers??)
Software as a service
Internet as platform
Harnessing collective intelligence
Web 2.0: harness network effects to get better the more people use them.
Google: every time someone makes a web link, they contribute
eBay: critical mass of buyers and sellers hard for others to enter
Amazon: 10M user reviews
craigslist: self-service classified ads, users do all the work
YouTube: viral distribution, user creation, user curation
Each of these companies is building a database whose value grows in proportion to the number of participants — a network-effect-driven data lock-in. (gulp)
Law of conservation of attractive profits
When attractive profits disappear at one stage, the opportunity will usually emerge at an adjacent stage.
PCs used to be expensive; when they became cheap, software became the expensive part. “Free” at one stage is a precursor to the rediscovery of value in some other form.
And thus, if digital content is becoming cheap, what’s next? What’s adjacent?
For publishers, the question is: where is value migrating to?
Asymmetric competition
craigslist has 18 employees, #7 site on the web (2005 numbers)
All others in top 10 have thousands of employees.
Curating user-generated content
The skill of writing is to create a context in which other people can think. — Edwin Schlossberg
The skill of programming is to create a context in which other people can share.
The cost of things tends to fall to zero over time.
You can build business around giving things away:
Free samples
Skype, YouTube, free unlimited storage on Yahoo
Ad-supported media: the product is free; you make it back on ads
Free ice cream samples
Give away razor, sell blades
Gift economy (Wikipedia, craigslist): people donate expertise and time for nonmonetary returns — attention, reputation, expression — never before “dignified” as an economy. There is an economy; it’s just that money is not the currency.
If the marginal cost of reaching customer N+1 is approaching zero, then treat the product as free and figure out how to sell something else.
The price of a magazine like Wired is arbitrary; it bears no relationship to the actual cost of the magazine. The subscription price is intended to qualify your interest. Setting the price too low “devalues the product.”
Most music is free. “Free as in speech” — DRM is going away. “Free as in beer” — bands are experimenting with giving away music to market the live performances.
Games and movies would be free if not protected. They are locked down to enforce prices. Artificial barriers tend to fall over time. Already seeing ad-supported videogames. (neopets)
The shining exception: Books! They are not asymptotically approaching free. Books make sense. They provide the optimal way to read. The physical product is better than the digital product…excellent battery life, screen resolution, portability, and it even looks good on your shelf. Easy to flip through.
If “free” is “the business model of the 21st century,” how could a book be free?
(This was preceded with a disclaimer that many of these options would be “offensive” to people in the audience.)
For his next book, Anderson wants to do the following:
Audiobook will be free with book (mp3) (free coupon in real book)
Will participate in book search, including Google
Considering an e-book locked to a specific reader for free
Unlocked e-book with advertising inserted
Book online with ads in the margins
As many sample chapters as publisher will accept
How could a physical book be free?
Sponsored book
Consultants give away books
Book with ads
Free after rebate
Free to influentials/reviewers
Libraries have always had free books
Why do it?
Free book is marketing for the non-free thing
Book is marketing vehicle for celebrity
Can’t give away time
If free version is inferior, you give it away to market the better product
Use “free” to maximize reach to new influentials
Why aren’t more people doing free content?
Most people are not represented by a speaker’s bureau and can’t monetize fame
Online sample is not a compelling example of book (maybe for cookbook, probably not for novel)
No natural advertiser
Publisher opposition — publishers not in business of selling celebrity
Annoys the retailers
Fear and timidity/fear of cannibalization
The most critical point: The interests of the author and the publisher are misaligned. Publishers don’t benefit from speaking fees or consulting fees, only from book sales.
Sounds like an argument for self-publishing to me.
As part of a brief history of publishing in the opening keynote, I’ve already seen a few friends:
The Norwegian Monks video — Technical support for books
A reference to Vannevar Bush’s “As We May Think” article from 1945
According to Tim O’Reilly, Microsoft Encarta “fatally wounded” the Encyclopaedia Britannica because of “asymmetric competition.”
A series of short, related keynotes to kick off the conference. I like this approach; in a nontechnical, high-level keynote, it can be difficult to fill a 60- or 90-minute slot.
Brian Murray, HarperCollins: Retooling HarperCollins for the Future. Consumer publishing *was* straightforward: all promotion was designed to drive traffic to a retailer.
In 2005, “the earth moved.” There were search wars, community sites, user-generated content, Web 2.0. Newspapers and magazines responded with premium, branded sites online based on advertising or subscription models.
Book publishers are confused. Search engines treat digitized book content like “free” content. Rights and permissions are unclear. Books are not online — except illegally! Book archives are not digitized.
Before 2004, “book search” took place in a book store.
What is the role of the publisher in a digital world? What is the right digital strategy? What are the right capabilities? “Search” provides new opportunities for publishers. Publishers must transition from paper to digital. How can publishers create value and not destroy it?
Some statistics:
65M in the U.S. read more than 6 books a year.
10M read more than 50 books a year. [ed.: waves]
Younger consumers read less; they spend more time online
Search is used more often than email.
HarperCollins decided to focus on connecting with customers, rather than e-commerce. Amazon and others already do e-commerce. They focused on the idea of a “digital warehouse” that is analogous to the existing physical warehouse. They want to:
promote and market to the digital consumer.
use digitized books to create a new publishing/distribution chain
protect author’s copyright
“replicate in digital world what we do in physical world”
The announcement got publicity and a strong public response, but no single vendor could deliver a turnkey solution.
Improvements from digital production and workflow could fund some or all of the digital warehouse investment. Projects that were low priority “IT and production” projects become high priority. Savings were realized in typesetting/design costs, digital workflow, and digital asset management.
The digital warehouse now has 12,000 titles. (Looks as though they were scanned, which doesn’t meet *my* definition of “digital content.”)
At this point in the presentation, we began to hear a lot about “control.” Control of content, controlling distribution, and so on.
HarperCollins does not want others to replicate their 9-billion page archive in multiple locations. They want others to link into their digital warehouse. But if storage is cheap and getting cheaper, what’s in it for, say, Google?
Strategic issues for book publishers
Should publishers digitize, organize, and own the exclusive digital copy of their book content?
Should publisher control the consumer experience on the web?
If the cost of 1 and 2 is zero, should every publisher do them both? Would they?
How to make money
The focus on controlling content was interesting and perhaps not unexpected. The business case based on savings in digital production was also interesting.
Hyper/Word Services now has a blog, which Neil Perlin (the principal) describes as “low-gibberish overviews of online help technologies and methodologies.” The world could use some of that.
But more importantly, will Neil use his blog as a venue for updates to his Guide to BBQ Restaurants?? The world is waiting.