Thursday, May 18, 2006

Intermediate Factors - Impact many, measure one

This is a fun little exchange via blogs, and I'd first like to say that these types of conversations are good and fun for me.  They help me clear my head and work through my thoughts.  With that, I disagree with Tony that we disagree.  I think we are both on the same page, simply reading through different lenses.  We all solve problems for our customers.  And we all agree that proper analysis of the problem is the single most important thing we can do.  Second to that is communicating effectively with the client what factors, and how many, we will attempt to impact in order to solve the problem.  The rest is semantics.  Tony lays out some great examples of how we all do ISD.  I could write about many instances within internal corporate training that mirror his.  It's the process we all know and love and have come to define our careers by.

While writing this I continue to look back at my original statement to understand how we got to this point.  So let me try to be clearer about the point I was trying to make in the original post:
"In the corporate world we should only really care if the learning is transferred to the job…period!  Is the output increased, or of higher quality because of our learning intervention.  This has always been a problem for training departments because we look
at everything we do as a product, and we “evaluate” if the product had
impact.  The approach is totally wrong."
I still stand by the statement.  Just because we put some kid through Store Layout training and Product Location training and he scores 100% on all of the tests (evaluations) DOES NOT mean he will PERFORM.  The measure MUST be how he or she performs ON THE JOB, period.  Sure, the training product had impact because he passed the test.  But maybe he's a jerk and still not very helpful to customers.  Yet as long as someone can pass our test, the training product is evaluated as a "successful" implementation.

What we tend to do when we look at training as a product is wash our hands of the problem once we have implemented the "Training Solution" (in-class training, eCourse, etc.).  And THAT "approach is totally wrong," because we stop at Level 2 evaluations.  Today's learning systems allow for Level 2 evals at the end of the training to make someone "certified."  The click2death eLearning that is produced today cannot support any more than that.  (Please don't tell me about the Level 3 and 4 solutions you have created.  It's just not necessary.)  If you support your client past the implementation of your training solution as part of your service, then I applaud you.

However, I'll bet what you are finding is that your trainees, while good right after the training, still refer back to their coworkers and cheat sheets to maintain their knowledge.  Nobody will argue with that.

Our transition from an eLearning development community that creates a product (with 0.4% effectiveness) to a community that provides a service supporting informal learning must come sooner rather than later.

