Evaluation
• Process that can justify that what we do as nurses and as nurse educators makes a value-added difference in the care we provide
• It is a systematic process by which the worth or value of something, in this case teaching and learning, is judged
• It is the final component of the process and serves as the critical bridge at the end of one cycle that guides the direction of the next cycle
• Evaluation conducted as an afterthought is, at best, a poor idea and, at worst, a dangerous one
DETERMINING THE FOCUS OF EVALUATION
Evaluation focuses on five basic components:
• AUDIENCE- comprises the persons or groups for whom the evaluation is being conducted
• PURPOSE- to decide whether to continue a particular education program or to determine the effectiveness of the teaching process
• QUESTIONS- those asked in the evaluation are directly related to the purpose for conducting the evaluation, are specific, and are measurable
• SCOPE- defines what the evaluation will cover and how extensive it will be
• RESOURCES- include time, expertise, personnel, materials, equipment, and facilities
EVALUATION MODELS
o PROCESS (FORMATIVE) EVALUATION
Its purpose is to make adjustments in an educational activity as soon as they are needed, whether those adjustments are in personnel, materials, facilities, learning objectives, or even one's attitude. This ongoing evaluation helps the nurse anticipate and prevent problems as they arise.
o CONTENT EVALUATION
Its purpose is to determine whether learners have acquired the knowledge or skills taught during the learning experience. It takes place immediately after the learning experience.
o OUTCOME (SUMMATIVE) EVALUATION
Its purpose is to determine the effects or outcomes of teaching efforts. It is also referred to as SUMMATIVE EVALUATION because its intent is to sum what happened as a result of education. It measures changes occurring as a result of teaching and learning, focusing on longer-term changes that persist after the learning experience.
o IMPACT EVALUATION
Its purpose is to obtain information that will help decide whether continuing an educational activity is worth its cost. It focuses on a goal rather than on specific course objectives.
o PROGRAM EVALUATION
Its purpose is to determine the extent to which all activities for an entire department or program over a specified period of time meet or exceed the goals originally established. The scope of program evaluation is broad, generally focusing on overall goals rather than on specific objectives.
EVALUATION OF LEARNING (from a teaching strategies text)
Tests- writing them is an art as well as a science and cannot be fully taught in a single chapter
- may consist of a single type of question or a combination of types, as long as the objectives are being measured at the desired levels of learning
TYPES OF TEST QUESTIONS
MULTIPLE-CHOICE QUESTIONS
- nursing examinations are often written with this type of question for reasons such as:
1. although challenging to create, they are easy to score and can be scored by computer (see the sketch after the option-writing rules below)
2. the NLE is multiple choice; therefore, educators want learners to be familiar with this type of question
Two parts of a multiple-choice question
1. STEM- the question itself
- should be as short as possible while still conveying the idea clearly
- negative terms should be avoided because they tend to make questions more confusing
2. OPTIONS/DISTRACTERS- the possible answers; the incorrect options are called distracters
- should be realistic
A few rules govern the writing of options:
1. options should be grammatically consistent with the stem
2. use good style and avoid giving clues
3. options should be fairly short and about the same length
4. if the correct answer must be written longer, increase the length of the distracters as well
5. options should be placed in logical order
6. avoid the use of qualifying terms, such as always, sometimes, usually, and never
7. be sure to vary the position of the correct answer
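To illustrate how a multiple-choice item can be represented and scored by a computer, here is a minimal Python sketch. The question text, options, and field names are invented for this example and are not taken from any actual examination:

```python
import random

# A hypothetical representation of one multiple-choice item:
# a stem, a list of options, and the text of the correct answer.
item = {
    "stem": "Which type of evaluation occurs during an educational activity?",
    "options": [
        "Process (formative) evaluation",   # the correct answer
        "Outcome (summative) evaluation",
        "Impact evaluation",
        "Program evaluation",
    ],
    "answer": "Process (formative) evaluation",
}

# Shuffling the options each time a test form is generated is one simple
# way to follow rule 7: vary the position of the correct answer.
random.shuffle(item["options"])

def score(item, learner_choice):
    """Return 1 if the learner chose the correct option, else 0.
    Scoring reduces to a simple comparison, which is why multiple-choice
    tests are easy to score by computer."""
    return 1 if learner_choice == item["answer"] else 0

# Example: score a learner who picked whatever landed in the first position.
print(score(item, item["options"][0]))
```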
TRUE-FALSE QUESTIONS
- are designed to test the learner's ability to identify the correctness of statements of fact or principle
- the weakness is that the learner has a 50/50 chance of guessing the right answer
Variations of true-false questions
1. ask the learner to give the rationale for why the item is true or false
2. ask the learner to rewrite false statements to make them true
From Whom or What to Collect Data
WHOM?
• directly from the individuals whose behavior or knowledge is being evaluated
• from surrogates or representatives of these individuals
• from documentation or databases already created
WHAT?
Preexisting databases should never be used as the only source of evaluative data unless they were created for the purpose of that evaluation.
Even though these data were collected for a different purpose, they may be helpful in providing additional information to the primary audience for the evaluation.
Data already in existence generally are less expensive to obtain than are original data.
The decision whether to use preexisting data depends on whether they were collected from the people of interest in the current evaluation and whether they are consistent with the operational definitions used in the current evaluation.
How, When, and Where to Collect Data.
HOW?
Methods for how data can be collected include the following:
• Observation
• Interview
• Questionnaire or written examination
• Record review
• Secondary analysis of existing databases
Which method is selected depends, first, on the type of data being collected and, second, on the available resources. Whenever possible, data should be collected using more than one method. Using multiple methods will provide the evaluator, and consequently the primary audience, with more complete information than could be obtained using a single method.
WHEN?
The timing of data collection, or when data collection takes place, has already been addressed both in the discussion of different types of evaluation and in the descriptions of evaluation design structures.
1. Process evaluation------generally occurs during and immediately after an educational activity.
2. Content evaluation------takes place immediately after completion of education.
3. Outcome evaluation-----occurs some time after completion of education, after learners have returned to the setting where they are expected to use new knowledge or perform a new skill.
4. Impact evaluation------generally is conducted from weeks to years after the educational program being evaluated because the purpose of impact evaluation is to determine what change has occurred within the community or institution as a whole as a result of an educational program.
WHERE?
An appropriate setting for conducting a content evaluation may be in the classroom or skill laboratory where learners have just completed class instruction or training.
An outcome evaluation to determine whether training has improved the nurse’s ability to perform skill with patients on the nursing unit, however, requires that data collection – in this case, observation of the nurse’s performance – be conducted on the nursing unit.
Similarly, an outcome evaluation to determine whether discharge teaching in the hospital enabled the patient to provide self-care at home requires that data collection, or observation of the patient's performance, be conducted in the home.
Who Collects Data.
WHO?
Combining the roles of educator and evaluator is one appropriate method for conducting a process evaluation because evaluative data are integral to the teaching-learning process.
Inviting another educator or a patient representative to observe a class can provide additional data from the perspective of someone who does not have to divide his or her attention between teaching and evaluating. This second, and perhaps less biased, input can strengthen the legitimacy and usefulness of evaluation results.
EVALUATION INSTRUMENTS
This chapter is intended to present key points to consider in the selection, modification, or construction of evaluation instruments.
The initial step in instrument selection is to conduct a literature search for evaluations similar to the evaluation being planned. A helpful place to begin is with the same journals listed earlier in this chapter. Instruments that have been used in more than one study should be given preference over an instrument developed for a single use, because instruments used multiple times generally have been more thoroughly tested for reliability and validity.
First, the instrument must measure the performance being evaluated exactly as that performance has been operationally defined for the evaluation.
Second, an appropriate instrument should have documented evidence of its reliability and validity with individuals who are as closely matched as possible with the people from whom you will be collecting data.
The evaluation instrument most likely to require modification from an existing tool or development of an entirely new instrument is a cognitive test.
The primary reason for constructing such a test is to ensure that it is comprehensive and relevant, fairly testing the learner's knowledge of the content covered.
Use of a test blueprint is one of the most useful methods for ensuring the comprehensiveness and relevance of test questions, because the blueprint enables the evaluator to be certain that each area of course content is included in the test and that content areas emphasized during instruction are similarly emphasized during testing.
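As a rough illustration of the blueprint idea, the Python sketch below crosses content areas with levels of learning and tallies the questions planned for each cell. The content areas, level names, and counts are invented for this example:

```python
# A hypothetical test blueprint: each content area is mapped to the
# number of questions planned at each level of learning.
blueprint = {
    "Evaluation models": {"knowledge": 3, "application": 2},
    "Test construction": {"knowledge": 2, "application": 3},
    "Data collection":   {"knowledge": 2, "application": 0},
}

# The total shows how long the planned test will be.
total = sum(n for levels in blueprint.values() for n in levels.values())
print(f"Planned test length: {total} questions")

# The blueprint makes gaps visible: a cell with zero questions signals
# a content area or level of learning the test does not yet cover.
for area, levels in blueprint.items():
    for level, count in levels.items():
        if count == 0:
            print(f"Gap: no {level}-level questions for '{area}'")
```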
BARRIERS TO EVALUATION
Barriers to conducting an evaluation can be classified into three broad categories:
1. Lack of Clarity. Lack of clarity most often results from an unclear, unstated, or ill-defined evaluation focus.
2. Lack of Ability. Lack of ability to conduct an evaluation most often results from insufficient knowledge of how to conduct the evaluation or insufficient or inaccessible resources needed to conduct the evaluation.
3. Fear of Punishment or Loss of Self-Esteem. Evaluation may be perceived as a judgment of personal worth. Individuals being evaluated may fear that anything less than a perfect performance will result in punishment, or that their mistakes will be seen as evidence that they are somehow unworthy or incompetent as human beings.
STEPS TO OVERCOME THE BARRIERS
The first step in overcoming this barrier is to realize that the potential for its existence may be close to 100%. Individuals whose performance or knowledge is being evaluated are not likely to say overtly that evaluation represents a threat to them. Rather, they are far more likely to demonstrate self-protective behaviors or attitudes that can range from failure to attend a class that has a post-test, to providing socially desirable answers on a questionnaire, to responding with hostility to evaluation questions.
The second step in overcoming the barrier of fear or threat is to remember that "the person is more important than the performance or the product" (Narrow, 1979, p. 185). If the purpose of an evaluation is to facilitate better learning, as in process evaluation, focus on the process.
A third step in overcoming the fear of punishment or threatening loss of self-esteem is to point out achievements, if they exist, or to continue to encourage effort if learning has not been achieved. Give praise honestly, focusing on the task at hand.
Finally, and perhaps most importantly, use communication of information to prevent or minimize fear. Lack of clarity exists as a barrier for those who are the subjects of an evaluation as much as for those who will conduct the evaluation. If learners or educators know and understand the focus of an evaluation, they may be less fearful than if such information is left to their imaginations. Remember that failure to provide certain information may be unethical or even illegal.
CONDUCTING THE EVALUATION
To conduct an evaluation means to implement the evaluation design by using the instruments chosen or developed according to the methods selected. How smoothly an evaluation is implemented depends primarily on how carefully and thoroughly the evaluation was planned. Planning is not a complete guarantee of success, however.
Three methods to minimize the effects of unexpected events that occur when carrying out an evaluation are to:
(1) conduct a pilot test first
(2) include “extra” time
(3) keep a sense of humor.
ANALYZING AND INTERPRETING DATA COLLECTED
The purposes for conducting data analysis are
(1) to organize data so that they can provide meaningful information
(2) to provide answers to evaluation questions.
Data and information are not synonymous terms. That is, a mass of numbers or a mass of comments does not become information until it has been organized into coherent tables, graphs, or categories that are relevant to the purpose for conducting the evaluation.
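As a tiny sketch of this distinction, the raw responses below (invented for this example) are only data; tallying them into a frequency table turns them into information an audience can actually use:

```python
from collections import Counter

# Raw data: individual questionnaire responses (hypothetical values).
responses = ["satisfied", "satisfied", "neutral", "dissatisfied",
             "satisfied", "neutral", "satisfied"]

# Organizing the responses into a frequency table produces information
# that is relevant to the purpose of the evaluation.
frequency = Counter(responses)
for category, count in frequency.most_common():
    print(f"{category}: {count} ({count / len(responses):.0%})")
```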
Data can be qualitative or quantitative
Qualitative Data- all qualitative data are at the nominal level of measurement, meaning they are described in terms of categories, such as health focused versus illness focused
Quantitative Data can be:
• Nominal
• Ordinal
• Interval
• Ratio level of measurement
Data also can be described as continuous or discrete.
Examples of Continuous Data:
1. Age
2. Level of Anxiety
Examples of Discrete Data:
1. Gender
2. Diagnosis
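To make these classifications concrete, the short Python sketch below tags the example variables listed above with a plausible level of measurement and with whether they are continuous or discrete. Treating level of anxiety as ordinal is an illustrative assumption, not a claim from the notes:

```python
# Each variable is tagged with a level of measurement and with whether
# it is continuous or discrete (the continuous/discrete tags follow the
# lists above; the ordinal tag for anxiety is an assumption).
variables = {
    "gender":           {"level": "nominal", "kind": "discrete"},
    "diagnosis":        {"level": "nominal", "kind": "discrete"},
    "level_of_anxiety": {"level": "ordinal", "kind": "continuous"},
    "age":              {"level": "ratio",   "kind": "continuous"},
}

# The level of measurement constrains the analysis: nominal data can be
# counted, ordinal data ranked, and interval/ratio data averaged.
for name, tags in variables.items():
    print(f"{name}: {tags['level']} level, {tags['kind']}")
```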
REPORTING EVALUATION RESULTS
Following a few guidelines when planning increases the likelihood that results of the evaluation will be reported to the appropriate individuals or groups, in a timely manner, and in usable form:
• Be audience focused
• Stick to the evaluation purpose
• Stick to the data