Technical Writing Metrics - Basing Decisions on Data

A lot of technical writers, including yours truly, do not have particularly technical backgrounds. I was a history major who went on to study theology and biblical languages. In school, I sometimes came across a statistic that looked valuable for my thesis. In these cases, I always handled the stats like a full mug of very hot coffee.

Once, at work, as I was about to act on a decision, a more technically inclined manager asked me, "Are you sure that's the right decision? Do you have any data to back that up?"

Ew.

Happily, the manager rattled off a couple of kinds of data that might make my case for me. In this post, I'm going to walk through some occasions like that and the data that helped me out in them.

Demonstrating a need for copyediting

Is your document as easy to read as you hope it is? Can you prove it?

Readable can generate some stats about individual documents. If you get one of the paid plans, it generates a startling number of metrics about a whole raft of documents at once. You can run your documentation through this app and find out:

  • what your average readability score is
  • where the weak spots are
  • what makes them weak

It's not a substitute for a human brain, but it gives lots of good indicators about where to start looking.
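If you're curious what goes into scores like these, one classic ingredient is the Flesch Reading Ease formula, which is public. Here's a minimal sketch of computing it over a folder of plain-text docs - the "docs" folder is an assumption, and the syllable counter is deliberately crude, so treat this as an illustration rather than a replacement for a real tool:

    # Rough Flesch Reading Ease over a folder of docs (lower = harder).
    # The syllable counter is approximate; real tools are more careful.
    import re
    from pathlib import Path

    def count_syllables(word: str) -> int:
        # Approximate: count runs of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text: str) -> float:
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        n = max(1, len(words))
        return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

    # Score every doc and list the weakest spots first.
    scores = {p.name: flesch_reading_ease(p.read_text())
              for p in Path("docs").glob("*.txt")}
    for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
        print(f"{score:6.1f}  {name}")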

Data to the rescue.

Identifying buried content

Buried content is content that is written but can't be found. Your customers shouldn't need a treasure map to find what they need, and they shouldn't need to know how you've organized it. Making the organization intuitive for people who don't already know it - that's part of technical writing.

Customer Support often answers questions by sending a link to an existing document. Ask them whether they track, or can start tracking, which links they send out to customers most often. If a document keeps having to be sent by hand, it probably isn't easy to find. Find a way to make it more discoverable.
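Even a quick tally can surface the top offenders. Here's a minimal sketch, assuming Support can export tickets to a CSV with a "link_sent" column - both the file and the column name are hypothetical, so match them to whatever your helpdesk actually exports:

    # Count which doc links Support sends out most often.
    import csv
    from collections import Counter

    link_counts = Counter()
    with open("support_tickets.csv", newline="") as f:
        for row in csv.DictReader(f):
            link = row.get("link_sent", "").strip()
            if link:
                link_counts[link] += 1

    # The most-sent links are the strongest candidates for buried content.
    for link, count in link_counts.most_common(10):
        print(f"{count:4d}  {link}")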

Data to the rescue.

Prioritizing improvements in existing content

In most cases, the main value proposition of good docs is that they prevent customer service calls. One good doc that cost $400 to write can save thousands of dollars in customer service calls over time. Our work should make the work and lives of customers - internal or external - easier. So how do we know which documents are causing pain, or failing to alleviate it? Knowing that a document is poorly written isn't enough to know whether it's worth fixing. We must also find out whether it's bothering anyone.

Web analytics can tell you which pages get seen most often. If lots of people are looking at a document, it's probably important to keep that document shipshape.

Other web traffic stats, like duration of view, need more context for interpretation. The duration of a page view may not mean much on its own: a short doc shouldn't need much reading time, but then neither should a very long reference table that readers only scan. A single doc may get a lot of bounces. That might mean your search engine is leading people there by mistake - but it might also mean your well-written headings are letting readers know right away that they're in the wrong place. So that's good, at least. It's well worth learning more about interpreting web traffic analytics; I've heard good things about some of Udemy's inexpensive online courses.
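As a toy illustration of turning those numbers into a priority list, here's a sketch assuming your analytics tool can export a CSV with "page", "views", and "bounce_rate" columns - all hypothetical names, so adjust them to your own export:

    # Rank docs by traffic; flag high-bounce pages for a closer look,
    # since bounces alone don't say whether readers left happy or lost.
    import csv

    with open("pageviews.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    rows.sort(key=lambda r: int(r["views"]), reverse=True)
    for r in rows[:15]:
        # Assumes bounce_rate is exported as a fraction, e.g. 0.72.
        flag = "  <- check bounces" if float(r["bounce_rate"]) > 0.7 else ""
        print(f'{int(r["views"]):6d}  {r["page"]}{flag}')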

Data to the rescue.

Finding gaps in content

Search analytics can be very helpful in identifying missing content. At Zoomdata, we use Swiftype as our search engine because of the upcoming sunset of Google Site Search. It has pros and cons; one of the pros is the useful analytics it provides. We know what our customers search for. We know which searches return no results or get no click-throughs. Those searches might be deliberate or acceptable, or they might point to gaps in our content.

Happily, most of the no-result and no-click-through searches for our docs are typos. A customer searched for secuity instead of security; that sort of problem fixes itself on the second try. But a few customers searched for Kafka and got no results. Because that's a data technology we work with, you can bet that Kafka made it onto our list of topics to get written.
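You can even automate the triage. Here's a sketch that separates likely typos from likely content gaps by fuzzy-matching no-result search terms against your existing topic names - the topic list and search terms below are illustrative stand-ins for whatever your docs index and search analytics actually give you:

    # Split no-result searches into probable typos vs. content gaps.
    from difflib import get_close_matches

    known_topics = ["security", "connectors", "dashboards", "upgrading"]
    no_result_searches = ["secuity", "kafka", "dashbords"]

    for term in no_result_searches:
        close = get_close_matches(term.lower(), known_topics, n=1, cutoff=0.8)
        if close:
            print(f"{term}: probably a typo for '{close[0]}'")
        else:
            print(f"{term}: no close match - possible content gap")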

Data to the rescue.

Managing stakeholder input

When I prioritize projects, I try to take several factors into account. Some of them are not obviously quantifiable, but some really are quantifiable in unexpected ways. Above, I mentioned using readability, web, and search numbers to help make a decision. Additional data can come from querying various stakeholders. My manager has ideas. Customer support has ideas. Product managers have ideas. Salespeople who use my docs have ideas. How do I weigh them against each other?

If I simply ask each group to rank their priorities, they might walk away with the idea that I'm going to follow their ranking. Really, no one group can set my priorities, because no one group knows all my responsibilities. At my company, even my manager would be making a real mistake if he tried; he knows there are other people grabbing at my time and attention. I've also had goofy situations where a team lead ranked six things as the top priority. -_-

Instead, give each stakeholder "money" to invest. Hand them each $100 in imaginary dollars and ask how they would invest it across your projects. Tell them you're asking other stakeholders too. They'll come back with "$20 for this, $10 for that," and so on. Now you have data about how much pain each of your incomplete or unstarted projects is causing them.

For that matter, you can give different stakeholders different amounts. My C-level exec might get $100, along with customer support, since preventing customer support calls is a significant purpose of good docs. Salespeople and marketing, who only refer to the docs sometimes, might get $50. Of course, my C-level exec might also choose to exercise her authority to veto or mandate. :)

Of course, I wouldn't tell them they got different amounts to spend, and I'd better have a good rationale for how I weigh their "investments". And if somebody asks me later why I did A before B when he had clearly prioritized B over A, there is a lot of "data" available about all the other people who invested the other way around.
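As a toy illustration, here's what the weighted tally might look like in code - every stakeholder, project, and dollar figure below is made up:

    # Tally stakeholder "investments", with per-stakeholder budgets
    # reflecting how much weight each voice gets.
    investments = {
        "exec":    {"budget": 100, "votes": {"install guide": 60, "API reference": 40}},
        "support": {"budget": 100, "votes": {"troubleshooting": 70, "install guide": 30}},
        "sales":   {"budget": 50,  "votes": {"feature overview": 50}},
    }

    totals = {}
    for who, data in investments.items():
        # Guard against a stakeholder spending more (or less) than they have.
        assert sum(data["votes"].values()) == data["budget"], f"{who} misspent"
        for project, dollars in data["votes"].items():
            totals[project] = totals.get(project, 0) + dollars

    # Highest-funded projects feel the most stakeholder pain.
    for project, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"${total:3d}  {project}")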

Data to the rescue.

Onward

The scenarios above are just some examples and ideas from my own experience. Like all data, the data you gather will admit of multiple interpretations and varying degrees of validity. That's OK. It's a start. And data - almost any data - is a good counterbalance to whatever we think we know, whatever our experience tells us, or whatever our gut says. Decent data can get conversations going, assure you that you're not insane, or dislodge you from your insanity.

Have you ever had a decision to make or a task to undertake in which data came to the rescue?
