100 Questions (and Answers) About Tests and Measurement
- Bruce B. Frey - University of Kansas, USA
SAGE 100 Questions and Answers
April 2014 | 216 pages | SAGE Publications, Inc
100 Questions (and Answers) About Tests and Measurement asks (and answers) important questions about the world of social science measurement. It is ideal as an introduction for students new to the concepts, as a review of ideas and procedures for advanced students and professionals, and as a resource for anyone who wants to know more about a test they have to take or how to interpret the score they receive.
I. THE BASICS
1. So, What Is 100 Questions (and Answers) About Tests and Measurement About?
2. What Exactly Is a Test?
3. What Exactly Is Measurement?
4. Most of the Ways I Can Think Of to Measure Something Seem Pretty Straightforward. What Are the Important Differences Between Measuring Something Like Weight and Something More Abstract Like Knowledge?
5. What Are Some of the Different Types of Tests, and How Are They Used?
6. What Are the Different Ways to Interpret a Test Score?
7. What Are the Different Ways to Judge Whether a Test Is Any Good?
II. UNDERSTANDING VALIDITY
8. What Does It Mean to Say That a Test Is Valid?
9. What Is Content Validity, Why Is It Important, and How Is It Established?
10. What Is Criterion Validity, Why Is It Important, and How Is It Established?
11. What Is Construct Validity, Why Is It Important, and How Is It Established?
12. What Is Consequential Validity, Why Is It Important, and How Is It Established?
13. What Does It Mean to Say a Test Is Biased?
14. What Does Evidence of Test Bias Look Like?
15. Some Types of Validity Evidence Must Be More Important for Some Types of Tests, but Not Important for Other Types of Tests. What’s the General Strategy for Making an Argument for the Validity of a Test?
16. I Need to Build a Test. How Can I Make Sure It Is Valid?
III. UNDERSTANDING RELIABILITY
17. What Is Reliability, and How Is It Different From Validity?
18. What Are the Different Types of Reliability?
19. If the Traditional Way of Thinking About Reliability Is Called the Classical Theory, That Suggests There Is a More Contemporary Way of Thinking About Things. What Has Come Since?
20. What Are the Important Differences Between Classical and Item Response Theory?
21. What Does Internal Reliability Look Like?
22. What Does Test-Retest Reliability Look Like?
23. What Does Inter-rater Reliability Look Like?
24. Other Than the Three Main Types of Reliability, Are There Others?
25. How Are Reliability and Validity Related to Each Other?
26. Some Types of Reliability Evidence Must Be More Important for Some Types of Tests, but Not Important for Other Types of Tests. What’s the General Strategy for Making an Argument for the Reliability of a Test?
27. I Need to Build a Test. How Can I Make Sure It Is Reliable?
IV. THE STATISTICS OF MEASUREMENT
28. When Establishing Validity or Reliability, I Can Understand That Examining Whether Two Sets of Scores Are Related Is Important, but How Do I Do That?
29. How Can I Calculate a Number to See Whether a Test Has Content Validity?
30. For Reliability’s Sake, I Can See That Showing a Test Is Consistent Within Itself Is Important, but How Do I Do That?
31. How Can I Improve the Accuracy of the Split-Half Reliability Coefficient?
32. I Never See Split-Half Reliability Reported. Is There a More Common Way to Show That a Test Has Internal Consistency?
33. We’ve Got Two Ways to Establish Internal Reliability. Which Way Is Best?
34. How Can I Use Reliability Coefficients to Improve My Validity Estimates?
35. What Useful Information Can I Get by Analyzing Individual Items on a Test?
36. How Do I Calculate and Interpret the Difficulty of a Test Question?
37. What Is Item Discrimination, and How Do I Calculate It?
38. What Is the Relationship Between Item Difficulty and Item Discrimination?
V. ACHIEVEMENT TESTS
39. What Are Achievement Tests, and How Are They Used?
40. How Is the Validity of an Achievement Test Established?
41. How Is the Reliability of an Achievement Test Established?
42. What Does SAT Stand for, and What’s the Test All About?
43. That Other Big College Admissions Test, the ACT: What Is It, and How Does It Differ From the SAT?
44. For Admission to Graduate School, My College Requires the GRE Test. What Is It, and Why Is It Better Than the SAT or ACT for Graduate School?
45. What Are the Major Tests for Professional Schools, and What Are They Like?
46. How Do I Score High on an Achievement Test?
47. How Do Test Preparation Courses Work?
VI. INTELLIGENCE TESTS
48. What Are Some of the Commonly Used Intelligence Tests?
49. What Is an IQ?
50. How Is Intelligence Usually Defined and Measured?
51. How Is the Validity of an Intelligence Test Established?
52. How Is the Reliability of an Intelligence Test Established?
53. What Is the Wechsler Intelligence Test?
54. What Is the Woodcock-Johnson Intelligence Test?
55. What Is the Kaufman-ABC Intelligence Test?
56. What Is the Stanford-Binet Intelligence Test?
57. What Are Some Alternatives to the Traditional Intelligence Tests?
58. Why Are Intelligence Tests Controversial?
VII. PERSONALITY TESTS AND ATTITUDE SCALES
59. What Do Personality Tests Measure?
60. How Is the Validity of a Personality Test Established?
61. How Is the Reliability of a Personality Test Established?
62. What Are the Different Ways to Measure Attitude?
63. How Do I Construct My Own Likert-Type Attitude Scale?
64. How Do I Construct My Own Thurstone Attitude Scale?
65. How Are Mental Disorders Diagnosed?
66. What Is the MMPI Personality Test?
67. How Is Depression Measured?
68. How Are Alcoholism and Other Addictions Diagnosed?
VIII. CLASSROOM ASSESSMENT
69. How Do I Decide What to Assess in My Classroom?
70. What Are the Different Types of Assessment I Can Use?
71. How Are the Validity and Reliability of Classroom Assessments Established?
72. What Are the Types of Traditional Paper-and-Pencil Test Items?
73. What Are the Characteristics of a Good Multiple-Choice Question?
74. What Are the Characteristics of a Good Matching Question?
75. What Are the Characteristics of a Good Fill-In-the-Blank Question?
76. What Are the Characteristics of a Good True-False Question?
77. Can I Improve the Quality of a Test After I Give It?
78. I Want to Measure My Students’ Skills and Ability. How Do I Design a Good Performance-Based Assessment?
79. I’d Like to Use Portfolio Assessment in My Classroom. What Are the Guidelines for Doing That Well?
80. How Can Classroom Assessments Actually Increase Learning?
81. I Want My Tests to Be Authentic. What Does That Mean, and How Do I Accomplish That Goal?
82. I Want My Tests to Be Valid for All of My Students. What Are the Universal Design Principles I Need to Follow?
IX. UNDERSTANDING TEST REPORTS
83. What Is the Normal Curve?
84. I See Percentile Ranks Reported All the Time. What Are Those?
85. What Does It Mean to Say That a Test Is Standardized?
86. What Is a Standardized Score, and Why Is It Important?
87. What Is a Z-score?
88. What Is a T-score?
89. I Want to Understand How I Did on the SAT. How Can I Interpret My Score?
90. How Can I Interpret My ACT Score?
91. I Took the GRE. What Does My Score Mean?
92. What Is a Standard Error of Measurement?
93. What Is a Confidence Interval?
94. What Is a Survey, and How Does It Work?
95. I Need to Build My Own Survey. What Are the Characteristics of Good Survey Questions?
96. How Do I Put Together a Generalizable Sample?
97. I’ve Got My Survey and a Sample. Now, How Do I Administer the Survey?
98. How Do I Report the Results of My Survey?
99. What Is the “Margin of Error” That Is Often Reported With Survey Results?
100. What Are Common Mistakes Made in Survey Construction?