During instruction, it is important to evaluate study resources and materials to ensure they are effective (Curmise, 2002).
The publication of instructional materials is an ongoing process in which educators continually assess their effectiveness. When that assessment includes evaluation with students, generating data that can inform improvement, instructional materials become more robust and effective.
Instructional material designers draw on findings from previous studies to improve the quality of their work. To create effective instructional materials, designers must also consider a number of other variables that can influence how effective the materials are (Singh & MacGregor, 1998).
In addition to the aforementioned variables, Curmise (2002) states that a number of terms are used to describe the way people judge instructional resources. These terms include acceptability, preference, usability, and value. Acceptability is “the extent to which users like and approve an item” (p. 822). Preference is “the degree to which a person prefers one item over another” (p. 822). Usability is “the degree to which a person is able to use an item effectively in performing a given task” (p. 822). Value is the “estimated worth of the item to its user” (p. 822).
Instructional materials can be judged according to various variables, but the way these variables are combined differs from study to study. It is common for instructional material designers to choose a specific variable and focus their evaluation on it. For example, researchers have used acceptability as an evaluation variable by asking students about their level of acceptance of different pieces of literature (McSherry, Kieffer, & DeNisi, 2001). McSherry et al. (2001) found that acceptance varied from one piece of literature to another, but that students rated the same books higher in acceptance when asked about the texts multiple times over a short period.
Researchers have also used preference as an evaluation variable. For example, researchers used a forced-choice design, in which students marked a check if they agreed with an item and a cross if they disagreed, to rate particular items as preferred or not preferred (McSherry et al., 2001). This design allowed them to compare the ranked order of items. They found that some items could be considered unnecessary or irrelevant, and that the materials could be improved by removing them.