Stroke


 

The most crucial aspect of implementing a training program is accurately assessing its impact. Meaningful assessment of a program must measure input from both the learners and the educators, the training processes such as the educational programs and assessments, and the output, such as whether the behaviors of the learners have changed (Heydari, Taghva, Amini, & Delavari, 2019). Continual quality improvement plans are essential for all organizations to run effectively and efficiently. They are even more important in healthcare, where the safety of patients may be at risk. Measurable outcomes are needed to improve the quality of programs (Ragsdale et al., 2020).

 

Donald Kirkpatrick first published his four-level model of evaluation in 1959. The main purpose of this model is to evaluate the results of training and learning programs. The model is widely used as an evaluation tool because it accommodates any style of training, both informal and formal. The four levels are reaction, learning, behavior, and results. The reaction level evaluates how individuals react to training by asking questions that elicit the students' thoughts. This level also evaluates the participants' feelings about a program. The learning level is meant to gauge what and how much participants learn during training sessions. Some techniques that can be utilized at this level are pre- or post-learning tests, interviews, and assessments. Level three is the behavior tier, which evaluates the student's behavior following the educational session. This level can also indicate whether the training has created a desire to change within the student. This evaluation often occurs three to six months after the training session and may be carried out through observations, interviews, online evaluations, or examinations. The last level is the results level, which evaluates the overall success of the training program. This level is often not reached within an organization due to financial constraints or staff turnover rates (Dorri, Akbari, & Dorri Sedeh, 2016).

 

The learning objective that I chose to focus on in this discussion is for staff to understand and recognize the signs and symptoms of stroke. The aim of the first level in this model is to ensure that participants are eager and motivated to learn by evaluating their reaction to a training program. Learners could be offered a survey before the start of the training session asking what they would like to learn or be informed about regarding stroke education. The second level of evaluation is learning, which measures the amount of knowledge that a student acquires from the training program. At this level, specific abilities or levels of awareness can be further developed as well. With regard to stroke education, a short quiz can be given after the training program to evaluate the learner's new grasp of the knowledge presented. The behavior level accounts for having the learner utilize the skills and knowledge acquired during the training program. This level looks for behaviors that have changed or developed as a result of training. Stroke education can be offered as a module at each annual training session, with a posttest to be taken after the session. Evaluators can then see what information has been gained and retained by the learners from year to year. This can also be a good way for educators to understand what knowledge may still need to be incorporated into each training session (Gokhan, 2015).

 

References

 

Dorri, S., Akbari, M., & Dorri Sedeh, M. (2016). Kirkpatrick evaluation model for in-service training on cardiopulmonary resuscitation. Iranian Journal of Nursing and Midwifery Research, 21(5), 493–497.

 

Gokhan, O. (2015). Program evaluation through Kirkpatrick’s framework. Pacific Business Review International, 8(1), 106-111.

 

Heydari, M. R., Taghva, F., Amini, M., & Delavari, S. (2019). Using Kirkpatrick’s model to measure the effect of a new teaching and learning methods workshop for healthcare staff. BMC Research Notes, 12(388).

 

Ragsdale, J. W., Berry, A., Gibson, J. W., Herber-Valdez, C. R., Germain, L. J., & Engle, D. L. (2020). Evaluating the effectiveness of undergraduate clinical education programs. Medical Education Online, 25(1).