Week 3: Evaluating Programs & Human Performance Technology

All too often, instruction is developed with little thought about how the learning, or the effectiveness of the instruction itself, will be evaluated. When evaluation is considered at the front end of the instructional design process, it is often limited to asking whether the instructional design is more effective than traditional methods.
 
Instructional design evaluation focuses on measuring instructional objectives and the knowledge learners acquired through training or instruction; its goal is to determine a program's merit, worth, and value. Three evaluation models of instructional design are CIPP, the Five-Domain model, and Kirkpatrick's. Two other evaluation models are Brinkerhoff's Success Case Method and Patton's Utilization-Focused Evaluation (U-FE).

Brinkerhoff’s Success Case Method

[Graphic: "The reality of learning interventions," illustrating what worked about a training method]

Robert Brinkerhoff published his book The Success Case Method in 2003. The Success Case Method (SCM) analyzes successful training cases and compares them to unsuccessful ones. The SCM uses five steps to measure human performance by sorting learners into three groups: those who did not use their training, those who used some of the training but did not see any improvement in performance (the largest group, per the graphic above), and those who successfully used the training to improve performance. The evaluation focuses on how successfully the learners and/or organizations use the learning, not on the training itself.

The five steps of the SCM are listed below; a rough sketch of step 3 follows the list:

  1. establish and plan the evaluation
  2. define the program goals and benefits of the training
  3. conduct a survey to identify learner success and/or lack of success from the training
  4. carry out interviews to document success in applying the training
  5. report formally on the findings from the evaluation

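To make step 3 concrete, here is a minimal sketch of how survey responses might be sorted into Brinkerhoff's three groups. The field names, the 1-to-5 scale, and the cutoffs are my own assumptions for illustration, not part of the SCM itself.

```python
# Illustrative sketch: bucketing SCM survey responses into Brinkerhoff's
# three groups. The response fields, 1-5 scale, and cutoffs are assumptions.

def bucket_respondents(responses):
    """Sort survey responses into non-use, partial-use, and success groups."""
    groups = {"no_use": [], "partial_use": [], "success": []}
    for r in responses:
        applied = r["applied_training"]        # 1-5: did they use the training?
        improved = r["performance_improved"]   # 1-5: did performance improve?
        if applied <= 2:
            groups["no_use"].append(r)         # never really used the training
        elif improved <= 3:
            groups["partial_use"].append(r)    # used some, saw little improvement
        else:
            groups["success"].append(r)        # used it and performance improved
    return groups

responses = [
    {"name": "A", "applied_training": 1, "performance_improved": 1},
    {"name": "B", "applied_training": 4, "performance_improved": 2},
    {"name": "C", "applied_training": 5, "performance_improved": 5},
]
for label, members in bucket_respondents(responses).items():
    print(label, [m["name"] for m in members])  # candidates for step-4 interviews
```
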
The objective of the formative evaluation is to show organizations and trainers which environmental factors promoted success, how much value was obtained, the overall return on investment, and how widespread the success from the training was. The SCM survey collects responses from named individuals, which helps evaluators coordinate the follow-up interviews. The interviews with unsuccessful learners help identify the factors that kept the training from being implemented, partially or completely. The end goal of the SCM is to evaluate learner "buy-in" to the prescribed training and how effectively the training is used in day-to-day operations.

Student “Buy-In”
This past year I used a modified version of the Success Case Method to evaluate my Digital Video & Audio Design class. I spoke with students I felt were successful in the class, not only through their grades but in how engaged they were throughout the year. The major differences between my modified version and Brinkerhoff's SCM are the order in which I approached the evaluation and the absence of a formal report on my findings.

The purpose of my evaluation was to gather student feedback on this first-year course and the curriculum I created for it. It would not be too difficult for me to follow the SCM steps to discover how my students are using the instruction/training from my class. The area I am most interested in is the barriers and limitations that prevent students from being successful and engaged in the class. Are the barriers for unsuccessful students environmental factors, my teaching strategies, project-based learning, or something else?

Patton’s Utilization-Focused Evaluation

[Cartoon: "Saving lives, changing minds," from the IFRC monitoring and evaluation guide]

All too often, processes are evaluated without any focus or direction for utilizing the findings. Evaluation may be conducted simply to assess processes and trainings, with the feedback never reaching the intended users. Patton's Utilization-Focused Evaluation (U-FE) has been around since the 1970s and is still being used today. Evaluation can sometimes call to mind the old saying, "we have a meeting just to say we had a meeting." There has to be a purpose for the evaluation, and the results have to be used.

The objective of U-FE is to align the evaluation's goals with how the intended users plan to utilize the evaluation's results. For the evaluation to be successful, the intended users must be engaged in, and part of, the evaluation process. This gives the intended users a sense of "ownership," making them more willing to provide authentic findings to the evaluators during each step of the evaluation process. There are many steps in the U-FE process; the nine listed below are the major ones, and a small tracking sketch follows the list:

  1. Perform a readiness assessment to evaluate the commitment level of the organization
  2. Identify the intended users and build an engaged relationship with them
  3. Conduct a situational analysis of environmental factors that might have an effect on the results
  4. Identify the intended uses and build a process to continue the evaluation, communication, and learning after the U-FE is completed
  5. Focus the evaluation to address questions and issues of the intended users
  6. Design the evaluation to report relevant findings
  7. Analyze and translate the collected data for the intended users
  8. Ensure the evaluation results are actively used, providing guidance where needed to secure that use
  9. Conduct a quality-control review of the findings (metaevaluation) to determine the extent to which the evaluation was used, along with any additional uses and any misuse

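Because a U-FE succeeds or fails on whether the intended users stay engaged through all nine steps, even a simple tracking structure can help. The sketch below is my own illustration, not anything from Patton: it pairs each intended user with their intended uses and records which steps they have taken part in.

```python
# Illustrative sketch (not from Patton): tracking intended users, their
# intended uses, and their engagement across the nine U-FE steps.

from dataclasses import dataclass, field

@dataclass
class IntendedUser:
    name: str
    intended_uses: list                 # e.g., ["revise curriculum"]
    steps_engaged: set = field(default_factory=set)  # U-FE steps joined so far

    def engagement_gap(self, total_steps=9):
        """Return the U-FE steps this user has not yet been engaged in."""
        return sorted(set(range(1, total_steps + 1)) - self.steps_engaged)

principal = IntendedUser("campus principal", ["revise curriculum"])
principal.steps_engaged.update({1, 2, 3, 5})
print(principal.engagement_gap())  # [4, 6, 7, 8, 9]: follow up before reporting
```
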
To create a successful evaluation, the intended users and the evaluator must be engaged and committed throughout the utilization-focused evaluation. The U-FE approach may be used on its own or to supplement other evaluation models.

Students as Intended Users 
One way I might use the Utilization-Focused Evaluation process in my classroom is to let my students be the evaluators. Students can conduct peer evaluations of completed projects and provide feedback to each other. This gives the students a level of ownership in the evaluation process and lets them constructively critique other students' work. Students are aware of factors that might be limiting student success in the classroom. An advantage of this approach is that students learn from each other, and they sometimes have a way of "wording" things differently than the teacher would. The quality control can be conducted by the students and myself.
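
As a minimal sketch of what that quality control might look like, the snippet below averages peer rubric scores per category and flags categories where reviewers disagree widely; the rubric categories and the disagreement threshold are hypothetical.

```python
# Hypothetical peer-review roll-up: average each rubric category across
# reviewers and flag categories with enough disagreement to discuss in class.

from statistics import mean

def summarize(peer_scores, disagreement=2):
    """peer_scores maps a rubric category to the scores from each peer."""
    summary = {}
    for category, scores in peer_scores.items():
        flag = (max(scores) - min(scores)) >= disagreement  # wide spread
        summary[category] = (round(mean(scores), 1), flag)
    return summary

project = {"audio levels": [4, 4, 5], "pacing": [2, 5, 3], "titles": [3, 3, 4]}
for category, (avg, revisit) in summarize(project).items():
    print(f"{category}: avg {avg}" + ("  <- discuss in class" if revisit else ""))
```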

Instructional Design Evaluation

Instructional Design vs. Traditional Methods
Here are a few questions instructional design evaluation should address when comparing the design to traditional methods of instruction. Are students actively participating in the learning process? Are the delivery methods of the instruction engaging and relevant to the desired outcomes? What environmental factors are supporting or preventing learning in the classroom? Are some areas of the instruction working better than others, and why? How much more value can be added to the instructional process? Lastly, are the findings on student performance being used to improve the instruction?

Return on Investment and Instructional Programs
The return on investment (ROI) methodology should always be considered for instructional programs. The technology landscape is constantly changing and being updated. This constant change, together with the need for authentic and valid instructional programs, makes for a challenging path for schools. A school's technology objectives, and their contribution to student success, should be the driving force when investing in new and current technologies. Educators should follow the ROI Process Model to justify budgets with the impact and consequences new technology has on education. The ROI Process Model begins with an evaluation plan, then collects data, analyzes the data, and reports on the findings. Other measures, such as process improvement, time to deploy instructional programs, back-end personnel needed to support programs, and an enhanced image of the school, should also be considered.
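
The arithmetic behind ROI is simple; a minimal sketch is below. The dollar figures are invented for illustration, and only the formula itself (net benefits divided by fully loaded costs, times 100) is the standard one.

```python
# Standard ROI formula: ROI (%) = (net program benefits / program costs) * 100.
# The benefit and cost figures below are invented for illustration.

def roi_percent(total_benefits, total_costs):
    """Return ROI as a percentage of fully loaded program costs."""
    net_benefits = total_benefits - total_costs
    return (net_benefits / total_costs) * 100

# e.g., a $40,000 technology rollout credited with $52,000 in measured benefits
print(f"ROI: {roi_percent(52_000, 40_000):.0f}%")  # ROI: 30%
```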

Performance Improvement

Section IV focuses on human performance, performance support systems, knowledge management systems, and the concept of informal learning. Not all problems in learning and/or performance require an instructional solution; many times a non-instructional approach is more appropriate.

A performance problem in my school is the gap between teacher competency with, and "buy-in" to, new technology intended to increase efficiency in and out of the classroom. One of the reasons for the gap is the set of demands put on teachers beyond the instructional process. The demands to raise test scores, manage increased paperwork, plan lessons, and meet tutorial requirements all put a heightened level of stress on teachers. Managers in the corporate world have administrative assistants to help manage daily operational processes and scheduling requirements; teachers have to manage and organize the daily processes themselves while staying current on their instructional practices.

Several years ago, Garland ISD created master trainer positions on each campus to help facilitate technology implementation and training for teachers. Over the last couple of years, the district has begun to hold master trainers accountable for facilitating technology tools that assist teachers with instruction and operational processes. Garland ISD is aware of the opportunities technology offers to improve performance in and out of the classroom. Before any technology is put in the hands of teachers or students, the district's technology implementation plan provides adequate training so teachers are competent in the new technology, its benefits, and the proper ways to implement it in the classroom. Support and management systems are part of the roll-out process for new technologies. This roll-out process resembles Gilbert's Behavior Engineering Model in that master trainers align district objectives with technology to improve teacher behaviors (and morale) and overall student performance.

The implementation of Garland Education Online (GEO), a Moodle learning management system, is one type of performance support being implemented district-wide. GEO is not only for students; it gives teachers support opportunities such as training on non-critical software (e.g., MS Word and Excel) and grade book tips and new features. Teachers can access GEO at school or at home, any time of day. This is where the master trainers' responsibility becomes apparent: they serve as a support system for teachers on campus and are readily available to answer questions. For more critical tasks, the district's performance support staff schedule training prior to rolling out instructional technology, to build teachers' confidence and ability to correctly utilize the new instructional tools.

Resources:

The reality of learning interventions. n.d. Graphic. GoodPractice for Leaders and Managers. Web. 19 Jun 2013. <http://toolkit.goodpractice.com/mdt/resources/development-cycle/training-cycle-evaluation/robert-o-brinkerhoff-the-success-case-method>.

Smith, Julie. Saving lives, changing minds. Project/programme monitoring and evaluation (M&E) guide. 2011. Cartoon. The International Federation of Red Cross and Red Crescent Societies (IFRC). Web. 19 Jun 2013. <http://www.ifrc.org/Global/Publications/monitoring/IFRC-ME-Guide-8-2011.pdf>.