Tuesday, May 16, 2006

Measure intermediate or final output

I took a little time away from the blog and feedreader and am just now catching up.  Lots of great stuff!
I read this great post by Jay Cross and then this one by Tony Karrer.  Both got me thinking...for different reasons. 

Here is the money line from Jay...

"Any manager worth his salt has disproved the old canard that “You can’t manage what you can’t measure” by managing such inherently immeasurable things as people, uncertainty, and image. The challenge is to share control with the entire organization, for the workers at the “bottom” of the organization are the first to learn from customers. The remote-control manager fails precisely because she only manages what she can measure.

The bottom line is not the bottom line."

and the money line from Tony...
"Yes, I care about the end result, but unless you can tell me the intermediate factors, how you will impact them, and ideally measure your impact on them, then why should I believe that your learning solution is going to work?

Brent said in his post:
The process is continuous and if our “training solution” is organic, dynamic, and flexible, it is very difficult to measure using the current method of measuring learning products. My point is “who cares”. If we have set up environments that help people collaborate, and support their informal learning, we should see output improvements.
"Who cares"? Well I do. And, actually, the business does. If you create an "organic, dynamic, flexible" learning solution but can't explain how it impacts the end numbers, then: (a) you won't get credit, (b) you won't know if you can repeat it successfully, and (c) you won't know if it's really working."
Here is where I will try to make the connection... (for my 6 readers out there).
Business1.0 runs the "business by accounting".  It's a shell game, a numbers game (Jay explains it best, so be sure to read his post if you are feeling lost).  Training/Learning1.0 runs its business within the business as an event.  It too is a shell game, a numbers game of made-up measures to "prove" that learning has occurred and that there will be an impact on the business: butts in seats, multiple-choice tests, on-the-job certification checklists, etc.  All of which are a joke and tell you nothing about whether your training intervention actually had any effect on performance, OR on the bottom line of the business.
If you are still lost then read Tony's post.
Here's the reality that I live with, and that I think most training pros in the trenches live with.  The business identifies a "problem," decides it's a training problem, and comes to me.  I discover, through my analysis, all of the intermediate factors that are causing the problem and propose a solution.  (BTW - more often than not, it's NOT a training problem.)  Let me be very clear on this point.  The "business" (i.e. my internal customer) DOES NOT care about the intermediate factors I discovered.  Therefore I do not waste my limited resources measuring them.  Is the problem solved or not?  If yes, I am rewarded.  If no, I have failed, period!  Simply because we do not measure the intermediate factors does not mean we do not address them.  This is not right or wrong; it is simply the reality of corporate internal training.  The output, or lack thereof, causes a pain, and I relieve the pain.  The business cares about the pain (the output), period.

My next post will be on why I hate all of this, and why I am driving towards supporting the 80-90% of where true learning occurs... AFTER CLASS (i.e. informal learning and the unmeasurable).


Harold Jarche said...

Yes, the real payoffs are in the right analysis to determine what you call the intermediate factors. Unfortunately your analysis may show some weaknesses in the system that could ruffle some feathers. There are lots of informal measures as well, or what performance technologists call "non-instructional learning interventions", like job aids.

However, for many organisations, none of these matter, as people only hear what they want to hear, and there are many in the training business who will gladly take on any "training problem," no matter what the actual cause is.

I look forward to your next post :-)

bschlenker said...

Exactly. Sometimes we do not have the luxury of telling our largest internal customer that what they REALLY have is a management problem, NOT a training problem. That doesn't go over very well.

We often create the non-instructional learning interventions anyway because we know that's really all that they NEED, but we also create the course and make it as instructionally sound as possible to please management.

The only customers that care about the silly data like butts in seats and MC test scores are those who request a course to cover some legal requirement or certification that requires proof. The rest just want the pain to go away, and they either respect us enough to let us solve it, or they mandate a solution. Either way, THEY measure to see if the pain went away.