A new metric to show the effectiveness of internal documentation – the looper

Those of us who have worked for years with customer-facing documentation have access to a wealth of information (not always accurate, but plentiful) about how valuable the knowledge we deliver is. Examples include how often a piece of knowledge appears in search results, how frequently it is clicked on and how well it is rated.

Inside organizations, it’s harder to determine the value of different types of knowledge. We all know that the amount of knowledge available inside an organization dwarfs the amount available outside it. People need extensive knowledge from across the organization, and much of it is created by a person or team outside their own. But along with that wealth of knowledge comes plenty of opportunity for tension between the team that creates the knowledge and the team that receives it. How often do you hear comments like these?

“That document (project map/policy) was dropped in our lap without explanation and doesn’t help us.”

“Every time we complain about that document (project map/policy), we hear that it works as designed (WAD).”

It’s hard to design a measure for internal documentation that truly reveals its effectiveness. Sign-offs often devolve from their original purpose of evaluating whether the document (project map/policy) works for the receiver into a cursory read-through and box check (one that rarely rejects anything). Quality measures often focus on adherence to the style guide or template, not on the usefulness of the document (project map/policy).

So, how can we measure the effectiveness of internal documentation?

The biggest challenge for the effective flow of knowledge is the overhead of delivering the knowledge the receiving team actually requires. The team receiving the document (project map/policy) has to process it and determine whether it fulfills what they need it for. They identify a gap (or several) and ask the delivering team to clarify or add knowledge. The delivering team clarifies (or adds) the knowledge, and then the process repeats.

This loop isn’t about the quality of the document. It doesn’t matter whether the document works as designed; it matters whether the document effectively serves the needs of the receiver. These loops are hard to uncover and are often hidden in emails, phone calls or instant messages. But they represent significant challenges for organizations trying to share knowledge effectively (and deliver value for their customers). There are not only hard costs of lost time for both the delivering and receiving teams, but also soft costs of frustration on both sides (leading to less cooperation and lower employee satisfaction).

How do you measure it?

If we start measuring the number of times a document (project map/policy) loops back and forth between the receiver and the deliverer (clarifications, fixes and so on), we can start to determine the efficacy of internal knowledge. The biggest challenge to measuring the loops is that they are often informal and sit outside official knowledge sign-off or hand-off processes (if those processes exist at all). Measuring these informal loops requires four steps:

  1. Ask both receiving and delivering teams how much time a loop takes to complete.
  2. Establish a process to track when knowledge (documents/project maps/policies) is handed off from one team to another. The process doesn’t have to be a formal sign-off or a complex workflow system; a simple list of which knowledge started in one team and went to another is enough (a minimal sketch of such a tracker follows this list).
  3. Establish a similar process to track when there are required clarifications to the document (project map/policy).
  4. Work with the delivering teams to reinforce the importance of recording every clarification or loop in the system. Give them some context on why this will end up saving time (and check that they don’t use the process as a way to get around providing clarifications).
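The record kept in steps 2 and 3 can be very lightweight. As a rough illustration only, here is a minimal Python sketch of such a tracker; the class, field names and sample entries are hypothetical, not a prescribed tool.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Handoff:
    """One piece of knowledge handed from a delivering team to a receiving team."""
    document: str                 # document / project map / policy name
    from_team: str                # delivering team
    to_team: str                  # receiving team
    handed_off: date
    loops: list = field(default_factory=list)  # one entry per clarification loop

    def record_loop(self, requested: date, reason: str) -> None:
        """Log one loop: a question, gap or fix the receiver sent back."""
        self.loops.append({"requested": requested, "reason": reason})

# Example: a hand-off that looped twice before the receiver could use it.
handoff = Handoff("Q3 rollout policy", "Policy team", "Support team", date(2024, 7, 1))
handoff.record_loop(date(2024, 7, 3), "Escalation path not defined")
handoff.record_loop(date(2024, 7, 9), "Unclear which regions are covered")
print(f"{handoff.document}: {len(handoff.loops)} loops so far")
```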

After a few months, patterns will start to emerge. Which documents (project maps/policies) loop frequently? Which ones never loop? Are there teams that create knowledge that is easier to receive than others? If so, what are they doing better? Make a list of those positive attributes and apply them to the documents that loop more frequently (prioritizing the ones that loop the most).
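If the loops are captured consistently, surfacing these patterns is mostly a matter of counting. A hedged sketch, assuming each loop was logged with its document and delivering team (the sample records below are invented):

```python
from collections import Counter

# Invented loop records: (document, delivering team). In practice these come
# from the tracker kept in steps 2 and 3.
loop_log = [
    ("Q3 rollout policy", "Policy team"),
    ("Q3 rollout policy", "Policy team"),
    ("Onboarding project map", "PMO"),
    ("Q3 rollout policy", "Policy team"),
]

loops_per_document = Counter(doc for doc, _team in loop_log)
loops_per_team = Counter(team for _doc, team in loop_log)

# Rank documents and delivering teams by how often their knowledge loops back.
print(loops_per_document.most_common())  # [('Q3 rollout policy', 3), ('Onboarding project map', 1)]
print(loops_per_team.most_common())
```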

After six months, quantify the number of loops required per handed-off piece of knowledge. Multiply that by the per-loop time estimate gathered in step 1 to get the hard time cost of looping; compare it with the first months of tracking and the difference is the hard cost savings you have achieved. Ask the receiving and delivering teams how much easier the knowledge hand-off process is (this might be an informal conversation or a formal survey). That will help uncover the soft costs. Together, they give a picture of the overall savings possible through implementing a simple measure that truly gets at the effectiveness of internal knowledge.
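As a back-of-the-envelope example of that arithmetic (all numbers invented; substitute the step 1 estimate and your own loop counts):

```python
# Hard (time) cost of looping, before and after acting on the patterns.
hours_per_loop = 3.0           # step 1 estimate: one full clarification loop
loops_first_period = 120       # loops logged in the first months of tracking
loops_after_changes = 70       # loops logged after improving the worst documents

hours_before = loops_first_period * hours_per_loop
hours_after = loops_after_changes * hours_per_loop

print(f"Loop time before: {hours_before:.0f} h, after: {hours_after:.0f} h")
print(f"Hard savings: {hours_before - hours_after:.0f} hours of team time")
```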

 
