I took a little time away from the blog and feedreader and am just now catching up. Lots of great stuff!
I read this great post by Jay Cross and then this one by Tony Karrer. Both got me thinking... for different reasons.
Here is the money line from Jay...

"Any manager worth his salt has disproved the old canard that “You can’t manage what you can’t measure” by managing such inherently immeasurable things as people, uncertainty, and image. The challenge is to share control with the entire organization, for the workers at the “bottom” of the organization are the first to learn from customers. The remote-control manager fails precisely because she only manages what she can measure.

The bottom line is not the bottom line."

...and the money line from Tony...

"Yes, I care about the end result, but unless you can tell me the intermediate factors, how you will impact them, and ideally measure your impact on them, then why should I believe that your learning solution is going to work?"

Here is where I will try and make the connection... (for my 6 readers out there).
Brent said in his post:

"The process is continuous and if our “training solution” is organic, dynamic, and flexible, it is very difficult to measure using the current method of measuring learning products. My point is “who cares”. If we have set up environments that help people collaborate, and support their informal learning, we should see output improvements."

"Who cares"? Well, I do. And, actually, the business does. If you create an "organic, dynamic, flexible" learning solution but can't explain how it impacts the end numbers, then: (a) you won't get credit, (b) you won't know if you can repeat it successfully, and (c) you won't know if it's really working.
Business1.0 runs the "business by accounting". It's a shell game, a numbers game (Jay explains it best, so be sure to read his post if you are feeling lost). Training/Learning1.0 runs its business within the business as an event. It too is a shell game, a numbers game of made-up measures to "prove" that learning has occurred and that there will be impact on the business: butts in seats, multiple-choice tests, on-the-job certification checklists, etc. All of these are a joke and tell you nothing about whether your training intervention actually had any effect on performance, OR the bottom line of the business.
If you are still lost then read Tony's post.
Here's the reality that I live with, and that I think most training pros in the trenches live with. The business identifies a "problem", decides it's a training problem, and comes to me. Through my analysis, I discover all of the intermediate factors that are causing the problem and propose a solution. (BTW - more often than not, it's NOT a training problem.) Let me be very clear on this point. The "business" (i.e. my internal customer) DOES NOT care about the intermediate factors I discovered. Therefore I do not waste my limited resources measuring them. Is the problem solved or not? If yes, I am rewarded. If no, I have failed, period! Simply because we do not measure the intermediate factors does not mean we do not address them. This is not right or wrong; it is simply the reality of corporate internal training. The output, or lack thereof, causes a pain, and I relieve the pain. The business cares about the pain (the output), period.
My next post will be on why I hate all of this, and why I am driving towards supporting the 80-90% of where true learning occurs... AFTER CLASS (i.e. informal learning and the unmeasurable).