Index



Author Index



Abrahamsen, A., 209

Ackerman, T. A., 319

Adams, R., 222

Adams, R. J., 225

Afflerbach, P., 153

Allen, N., 229

Almond, R. G., 9, 22, 178, 179, 189, 207, 225, 234, 247

Alonzo, A. C., 176, 177, 186, 195

Anastasi, A., 5

Anderson, J. R., 10, 124, 127, 155, 213, 221, 242, 245, 247, 248

Artelt, C., 213

Baddeley, A. D., 152

Baird, A. B., 22

Bait, V., 211

Bandalos, D. L., 148

Bara, B. G., 160, 161, 162

Baron, J., 160

Bauer, M., 175, 179, 180, 181, 195

Baumert, J., 213

Baxter, G. P., 21, 22, 52, 244, 245, 256, 266

Bearden, W. O., 185

Beaton, A. E., 229

Bechtel, W., 209

Bechtold, H. P., 110

Behrens, J. T., 175, 179, 180, 181, 195

Bejar, I. I., 21

Bennett, E., 151

Bennett, R. E., 20–21, 189, 321

Berger, A. E., 151

Bergin, K., 189

Birenbaum, M., 221, 278

Bisanz, J., 266

Black, P. J., 31, 52, 312

Bollen, K. A., 102

Bolt, D., 223

Borsboom, D., 7, 111, 235

Bothell, D., 242

Bradlow, E. T., 225, 234

Bransford, J. D., 245

Braunstein, M. L., 152

Brennan, R. L., 21, 146, 147, 222

Briggs, D. C., 176, 177, 186, 195

Britton, B. K., 124

Brown, A. L., 245

Brown, J. S., 124, 126, 245

Bruce, F. A., 55

Buck, G., 50

Burton, R. R., 124, 126, 245

Bybee, R., 189

Byrne, M. D., 242

Byrne, R. M., 160

Cahallan, C., 22

Calfee, R. C., 210

Camacho, F., 49, 320, 322, 326, 327

Campbell, D. T., 102

Carlin, B. P., 292, 293

Carlin, J. B., 288, 289, 290, 292, 299

Carlo, M. S., 247

Carpenter, P. A., 131, 132

Carroll, J. B., 219

Carroll, J. S., 152

Champagne, A., 189

Charness, N., 125

Chernick, H., 22

Chi, M. T. H., 125, 147, 151, 153, 155, 156, 157, 158, 165, 166, 167

Chilukuri, R., 321

Chipman, S. F., 21, 146, 147, 212, 222

Chiu, M. H., 156, 167

Chudowsky, N., 312

Cisero, C. A., 247

Clark, A., 209

Clayton, D. B., 245

Cliff, N., 320

Cocking, R. R., 245

Coffey, J., 312

Collins, A. M., 125

Cook, L. K., 210

Cook, T. D., 102

Cooney, T., 275

Corbett, A. T., 10

Corter, J. E., 178, 278

Coulson, R. L., 21

Cowles, M. K., 293

Cronbach, L. J., 3, 4, 7, 13, 65, 105, 107, 110, 121, 122, 275

Crone, C., 50

Cross, D. R., 174, 188

Crosswhite, F., 275

Cui, Y., 262

Das, J. P., 125

Dawson, M. R. W., 169, 247

Dayton, C. M., 137

De Boeck, P., 94, 100, 225

De Groot, A. D., 108

de la Torre, J., 119, 222, 287–288

de Leeuw, N., 156, 167

De Vries, A. L. M., 105

DeMark, S. F., 175

Desimone, L. M., 155

DeVellis, R. F., 185

Devine, O., 292

DiBello, L. V., 21, 22, 50, 119, 127, 223, 276, 278, 279, 280, 281, 284–285, 287, 290, 291, 294, 298, 299, 303, 347

Diehl, K. A., 196

Dionne, J. P., 154–155, 247

Divine, K. P., 55

Doignon, J. P., 119

Dolan, C. V., 95, 101

Donovan, M. S., 245

Dorans, N. J., 297

Dossey, J., 275

Douglas, J. A., 119, 193, 222, 287–288

Douglass, S., 242

Draney, K. L., 119

Drasgow, F., 321

Drum, P. A., 210

Duncan, T., 321

Duschl, R., 189

Eckhout, T. J., 148

Edwards, C., 232

Edwards, M. C., 49, 326

Embretson, S. E., 3, 6, 8, 21, 100, 120, 122, 125, 129, 130, 131, 132–133, 135, 136, 137, 139, 141, 151, 178, 181, 182, 183, 185, 195, 196, 210, 223, 235, 244, 256, 283

Enright, M. K., 212

Ericsson, K. A., 8, 125, 147, 151, 152, 153, 155, 156, 157, 158, 169, 195, 247

Evans, J., 160

Falmagne, J. C., 119

Farr, M., 125

Feigenbaum, M., 327

Feltovich, P. J., 21

Ferrera, S., 321

Fife, J., 278

Fischer, G. H., 20, 217

Fodor, J. A., 247

Folske, J. C., 326

Formann, A. K., 20, 222

Foy, P., 213

Frederiksen, J., 124

Frederiksen, N., 189

Freed, R., 321

Freedle, R., 210

Freidin, R., 211

Frey, A., 229

Fu, J., 223

Galotti, K. M., 160

Garcia, P., 210

Gay, A., 55

Gelman, A., 288, 289, 290, 292, 299

Gertner, A., 190

Gessaroli, M. E., 326

Ghallager, J., 189

Gierl, M. J., 9, 21, 50, 119, 120, 142, 158, 159, 178, 243, 246, 249, 250, 254, 256, 262, 265, 266, 319, 321–322, 325, 326

Ginther, A., 21, 210

Girotto, V., 158

Glaser, R., 3, 21, 22, 52, 125, 224, 244, 245, 256, 266

Glasnapp, D., 245

Gleser, G. C., 107

Gluck, K. A., 10, 213

Gokiert, R., 158, 162, 163, 245

Goldsmith, T. E., 124

Goodman, D. P., 24, 47, 54, 244, 264, 265

Gorin, J., 8, 21, 151

Gorin, J. S., 182, 183, 185, 195, 196, 210

Gorsuch, R. L., 326

Grabe, W., 212

Graesser, A., 50, 265

Graham, G., 209

Greene, J., 148

Greeno, J. G., 209

Gross, A. L., 107

Guerrero, A., 278

Guttman, L., 99, 129, 217

Haack, P., 245

Habermas, J., 72, 73

Haertel, E. H., 222

Hambleton, R. K., 47, 54, 98, 103, 210, 244, 264, 265, 286, 319

Hamel, L., 22

Hamilton, L. S., 149, 151

Handley, S. J., 160

Harper, C. N., 160

Harré, R., 68

Hartig, J., 229

Hartz, S. M., 119, 178, 223, 276, 283, 287

He, X., 224, 305

Healy, A. F., 10

Henson, R. A., 193, 223, 225, 278, 279, 280, 288, 290, 291, 294, 298, 299, 303, 305, 307, 308, 309, 310, 311

Hessen, D. J., 101

Hively, W., 216

Hoogstraten, J., 99, 102

Huff, K. L., 23, 24, 50, 53, 265, 321–322, 326

Hunka, S. M., 21, 119, 120, 142, 159, 178, 243, 249, 250, 256, 262, 321, 325

Hunt, E., 244

Impara, J. C., 55

Inhelder, B., 89, 95

Ippel, M. J., 129

Irvine, S. H., 128, 321

Jackson, D. N., 99

Jacquemin, D., 21

Jaeger, R., 55

Jamieson, J., 210

Jang, E. E., 278, 279, 280, 290, 291, 292, 294, 298, 299, 303, 310

Jansen, B. R., 89, 95

Jebbett, L., 195

Jenkins, F., 189

Jiang, H., 283

Johnson, E. J., 152

Johnson, M. S., 299

Johnson, P. J., 124

Johnson-Laird, P. N., 159, 161, 162, 166

Jöreskog, K. G., 326

Juhasz, B. J., 195

Junker, B. W., 119, 136–137, 222, 224, 226, 285, 287, 289

Just, M. A., 131, 132

Kane, M. T., 4, 7, 13, 129

Kaplan, D., 102

Katz, I. R., 151

Keller, T., 312

Kifer, E., 275

Kindfield, A. C. H., 22

Kintsch, W., 51

Kirby, J. R., 125

Kitcher, P., 66, 67

Klein, M. F., 221

Klieme, E., 213

Koch, G. G., 306

Koda, K., 212

Koedinger, K. R., 10

Kostin, I., 50, 210

Kuhn, D., 247

Kyllonen, P. C., 128, 321

Lajoie, S., 224, 245, 266

Landis, J. R., 306

LaVancher, C., 156, 167

Lebiere, C., 242

LeFloch, K. C., 155

Leighton, J. P., 3–9, 21, 63, 79, 119, 120, 142, 148, 149, 151, 158, 159, 162, 163, 169, 178, 185, 195, 243, 245, 246, 247, 249, 250, 254, 256, 262, 321, 325

Lesgold, A., 224, 245, 266

Levy, R., 175, 179, 232, 234

Li, Y. Y., 266

Lin, Y., 124

Liu, J., 327

Liverman, M. R., 55

Liversedge, S. P., 195

Loevinger, J., 3, 10, 110

Loftus, E. F., 125

Lohman, D. F., 3, 4, 8, 9, 11, 13, 14, 20, 21, 100, 125, 128, 129, 147, 149, 150, 153, 219, 246, 275, 351

Longford, N. T., 335

Lord, F. M., 9, 103, 104, 210

Louwerse, M., 50, 265

Luecht, R. M., 53, 320, 321–322, 323, 326

Lukin, L. E., 148

Lutz, D., 50

MacCorquodale, K., 63

Macready, G. B., 137

Maris, E., 100, 119, 222, 282, 283, 284

Marshall, S. P., 129, 177, 193

Martin, M. O., 213

McErlean, J., 65

McGivern, J., 321

McKeachie, W. J., 124

McKnight, C., 275

McNamara, D. S., 50, 265

Meara, K., 55

Meehl, P. E., 3, 4, 13, 63, 110, 121, 122

Mellenbergh, G. J., 7, 99, 101, 102, 104, 107, 111, 235

Meng, X. L., 299

Meredith, W., 101, 104

Messick, S., 3, 4, 6, 7, 8, 10, 11, 13, 20, 93, 100, 103, 108, 110, 111, 120, 121, 122, 185, 205, 235, 242, 351

Michell, J., 91

Mickelson, K., 148

Millman, J., 148

Millsap, R. E., 104, 106, 111

Mischel, W., 65

Mislevy, R. J., 5, 9, 21, 22, 120, 141, 150, 175, 178, 179, 180, 181, 189, 195, 207, 210, 225, 232, 234, 235, 247, 266, 321

Molenaar, I. W., 103

Moore, J. L., 209

Morley, M. E., 21, 50, 327, 328

Morningstar, M., 217

Mosenthal, P. B., 210, 212

Moulding, B., 312

Mulcahy-Ernt, P., 212

Mulholland, J., 292

Mullis, I. V. S., 213

Muthén, B. O., 224

Nagel, E., 66

Naglieri, J. A., 125

Naveh-Benjamin, M., 124

Neale, M. C., 95

Nelson, L., 49, 320, 322, 326, 327

Netemeyer, R. G., 185

Neubrand, M., 213

Newell, A., 6, 156, 166, 208, 212, 219

Nichols, P. D., 4, 9, 11, 12, 13, 21, 22, 49, 52, 146, 147, 149, 150, 158, 159, 222, 235, 244, 256

Nisbett, R., 152

Nishisato, S., 326

Nitko, A. J., 20

Norris, S. P., 63, 71, 75, 79, 153, 245

North, B., 228

Notar, C. E., 31, 52

Novick, M. R., 104, 210

Nunan, D., 232

Nussbaum, E. M., 149, 151

O’Callaghan, R. K., 50, 327, 328

O’Neil, T., 23

Oort, F. J., 104

Oosterveld, P., 99

Page, S. H., 216

Paris, S. G., 174, 188

Patterson, H. L., 216

Patz, R. J., 224, 287, 289

Payne, J. W., 152

Peak, H., 7

Pek, P., 190

Pellegrino, J. W., 3, 21, 22, 52, 244, 245, 256, 265, 266, 312

Pelletier, R., 10

Perfetti, C. A., 51

Perie, M., 321

Persky, H., 189

Phelps, M., 50

Phillips, L. M., 63, 75, 79, 245

Piaget, J., 89, 95

Pirolli, P., 119

Poggio, A., 245

Poggio, J., 245

Poh, K. L., 190

Popham, W. J., 108, 312

Porch, F., 288

Prenzel, M., 213

Pressley, M., 153

Proctor, C. H., 137

Psotka, J., 127

Qin, Y., 242

Rabe-Hesketh, S., 224

Rasch, G., 282

Ratcliff, R., 94

Raven, J. C., 130, 131

Rayner, K., 195

Reder, L. M., 247

Reeve, B. B., 49, 320, 322, 326, 327

Reichenbach, H., 65

Reise, S. P., 100

Rescher, N., 65

Riconscente, M. M., 22, 321

Roberts, K., 195

Roberts, M. J., 160

Rosa, K., 49, 320, 322, 326, 327

Rost, J., 229

Roussos, L. A., 21, 119, 127, 223, 224, 276, 278, 279, 280, 281, 284–285, 287, 290, 291, 294, 298, 299, 303, 305, 307, 347

Royer, J. M., 247

Rubin, D. R., 288, 289, 290, 292, 299

Rumelhart, D. A., 221

Rumelhart, D. E., 125

Rupp, A. A., 210, 222, 225

Russo, J. E., 152

Sabini, J. P., 160

Salmon, W. C., 65

Samejima, F., 125, 128, 281

Sandifer, P., 312

Schedl, M., 21, 212

Scheiblechner, H., 217

Schiefele, U., 213

Schmittmann, V. D., 95

Schneider, G., 228

Schneider, W., 213

Schraagen, J. M., 212

Schum, D. A., 206

Schunn, C. D., 245, 248

Schwab, C., 176, 177, 186, 195

Schwartz, A., 50, 327, 328

Scriven, M., 66, 243

Senturk, D., 22

Shadish, W. R., 102

Shalin, V. J., 212

Sharma, S., 185

Shavelson, R. J., 312

Sheehan, K. M., 21, 119, 210

Shell, P., 131, 132

Shepard, L. A., 312

Shute, V. J., 127

Siegler, R. S., 166

Sijtsma, K., 103, 119, 136–137, 222

Simon, H. A., 6, 8, 147, 151, 152, 153, 155, 156, 157, 158, 166, 169, 195, 208, 212, 219, 247

Singley, M. K., 21

Sinharay, S., 293, 299, 300

Sireci, S. G., 21, 23, 53, 194

Skrondal, A., 224

Slater, S., 55

Sloane, K., 176

Snow, R. E., 3, 4, 8, 9, 11, 13, 14, 20, 21, 100, 125, 128, 149, 151, 219, 246, 275, 351

Spiro, R. J., 21

Stanat, P., 213

Standiford, S. N., 221

Steffen, M., 21

Steinberg, L. S., 9, 22, 175, 178, 179, 180, 181, 189, 195, 207, 247

Stephens, D. L., 152

Stern, H. S., 288, 289, 290, 292, 299

Sternberg, R. J., 4, 14, 154, 219

Stiggins, R., 31, 52

Stout, W. F., 21, 119, 127, 223, 224, 276, 278, 280, 281, 284–285, 287, 308, 309, 310, 311, 347

Stouthard, M. E. A., 99, 102

Su, W. H., 107

Sugrue, B., 256

Suppes, P., 217

Swafford, J., 275

Swaminathan, H., 98, 103, 286, 319

Swan, M., 232

Swanson, D. B., 326

Swygert, K. A., 49, 320, 322, 326, 327

Tan, X., 321–322, 326

Tatsuoka, C., 278

Tatsuoka, K. K., 21, 49, 50, 119, 151, 159, 178, 221, 225, 254, 278, 281

Taylor, C., 73

Taylor, K. L., 154–155, 247

Teague, K. W., 124

Templin, J. L., 137, 223, 224, 225, 278, 279, 280, 288, 290, 291, 294, 298, 299, 303, 307, 308, 309, 310, 311

Thissen, D., 49, 320, 322, 326, 327

Thomas, J., 245

Tidwell, P., 124

Tillman, K. J., 213

Tolbert, P., 292

Tomko, T. N., 63

Toulmin, S. E., 206

Travers, K., 275

Tuerlinckx, F., 94

Uebersax, J. S., 95

Underwood, G., 195, 211

Van der Linden, W. J., 103, 107, 210

Van der Maas, H. L. J., 89, 95

Van der Veen, A. A., 50, 265, 327, 328

van Fraassen, B. C., 67

van Heerden, J., 7, 111, 235

VanEssen, T., 50

VanLehn, K. A., 166, 190, 191, 213, 221

Vevea, J. L., 49, 320, 322, 326, 327

von Davier, M., 225

Wainer, H., 49, 55, 225, 234, 320, 322, 326, 327

Walker, C., 319

Walker, M. E., 327

Wang, W. C., 222

Wang, X., 225, 234

Warren, T., 195

Waxman, M., 125, 129

Webb, N. L., 257

Weiß, M., 213

Weiss, A., 189

White, B., 124

Wicherts, J. M., 101

Wiliam, D., 31, 52, 312

Williamson, D. M., 175, 179, 180, 181, 195

Willis, G. B., 147, 155

Willis, J., 232

Wilson, J. D., 31, 52

Wilson, M. R., 100, 119, 176, 177, 185, 186, 195, 222, 224, 225

Wilson, T. D., 152

Xia, H., 292

Xin, T., 278

Xu, Z., 278

Yamada, Y., 278

Yan, D., 22, 234

Yang, X., 136

Yunker, B. D., 31, 52

Zenisky, A. L., 21, 53, 194

Zuelke, D. C., 31, 52





Subject Index



ability parameters

   Bayesian framework for, 286–287

   in Fusion Model, 293–294

Abstract Reasoning Test (ART), 130–132

   cognitive psychometric item properties modeling, 132–134

   item structures of, 132, 136

acceptability, judgment of, 74

achievement

   cognitive models of, 15

   educational assessment of, 10

   normative foundations of, 62

achievement tests

   diagnostic, remedial uses of, 63

   inferences about, 74

   understanding and, 61

ACT®, 25, 31

action, understanding and, 77

Advanced Progressive Matrix test, 131

AHM. See Attribute Hierarchy Method

Andes Intelligent Tutoring System, 190

aptitude, achievement and learning, theories of, 8

Arpeggio, 276, 280, 287

ART. See Abstract Reasoning Test

ART data

   latent ability state representation of, 140

   latent class models fitted to, 138–139

   parameter estimates, 138–139

ART item structures, 132

   ability states and, 136

assessment(s). See also classroom-based assessment; cognitive diagnostic assessment; commercial assessments; computer-based assessment systems; diagnostic assessment; educational assessment(s); large-scale assessments

   of achievement, 10

   cognitive model role in, 76

   cognitive theories about, 10

   curriculum, instruction and, 22

   design templates, 22

   educational benefits from, 22

   formative vs. summative, 277

   instructional-relevant results from, 24–47

   teacher produced, 29

   technology and, 52–53

   underlying model of learning and, 51–52

assessment analysis, IRT and, 280

assessment design

   evidence-centered, 22

   psychometric vs. cognitive approaches to, 19, 21–24

assessment developers

   commercial assessment use and, 29

   demand for CDA from, 19–24

   educators’ needs and, 47

assessment systems, standards movement for, 275, 276

assessment tasks, cultural symbolism and, 211

attribute(s)

   as composites, 97–98, 99

   item response and, 89, 97–98

   as latent variables, 89

   measurement process and, 89

   as moderators, 93–94

   ordering of, 249–250

   as parameters, 93–94

   quantitative structure of, 91

   response process and, 96

   structure of, 88

   test scores and, 93, 109

attribute hierarchy, 249–252, 253

   in Johnson-Laird theory, 159–161

   sequencing of, 164

   task performance and, 159

Attribute Hierarchy Method (AHM), 119, 178

   for CDA, 16, 243, 249–265, 266–267

   convergent, 251

   four-step diagnosis process for, 249–250

   linear, 250

   within mathematics, 252–253

   Q matrix in, 255

   rule-space approach to, 252–253

   unstructured, 250

attribute probabilities, calculation of, 268–269

augmented scores, 332–337

   anomalous results for, 333

   reliability coefficients for, 335

balance scale test, 89, 90

BEAR Assessment system, 176

behavioral observations, 194

behavioral perspective, psychometric models with, 216–217

behavioral psychology, 208

beliefs, causal efficacy of, 70–71

Binet, Alfred, 4

Brennan, Robert, 3

British Columbia (BC) Ministry of Education, 54

California Achievement Test, 25

CAT. See computerized adaptive testing systems

causation

   beliefs and, 70–71

   constant conjunctive view of, 67

   explanation and, 62–70

   Harré’s view of, 68–69

   nominalism and, 65

   randomized experiments for, 101

   regularity view of, 65

   understanding and, 61, 70–75

CDA. See cognitive diagnostic assessment

CDM. See cognitive-diagnostic models

CDS. See Cognitive Design System

CEF. See Common European Framework of Reference for Languages

Chipman, Susan, 3

classical test theory, 8

classroom-based assessment

   CDAs and, 16, 31, 147–148, 349–350

   diagnostic information from, 29

   state standards and, 47

   of student strengths and weaknesses, 52

cognition theories, instruction and, 245

cognitive antecedents, of problem-solving behaviors, 6

cognitive assessment

   AHM for, 243, 249–265, 266–267

   balance scale test, 89, 90

cognitive competencies

   convergent hierarchy and, 251

   divergent hierarchy and, 251

   linear hierarchy and, 250

   unstructured hierarchy and, 250

Cognitive Design System (CDS), 181–184

   advantages of, 182

   Model Evaluation, 181–182

   procedural framework of, 181

cognitive development, stages in, 89

cognitive diagnostic assessment (CDA), 146

   AHM model for, 16, 243, 249–265, 266–267

   assessment developer demand for, 19–24

   classroom-based assessment and, 31, 147–148, 349–350

   cognitive processes, components, capacities and, 125

   computer technology for, 350

   construct representation study for, 135–140

   construct validation and, 7, 15, 119–120, 123–140

   development, validation of, 169

   educators' demand for, 19, 24–47

   empirical grounding of, i

   foundations of, 14–15

   future research, 341–343

   goals of, 124–125

   higher-order thinking skills and, 125

   history of, 3

   influential articles on, 3

   item design impact on, 128

   in K-12 education, 19

   large-scale assessments and, 49–51

   learning environment integration of, 244

   literature on, 19, 20–21

   mental processes and, 120

   NAEP vs., 148

   PISA vs., 148

   potential benefits of, 245–246

   principle test design for, 343–345

   problem-solving and, 146

   program validity required by, 12–14

   protocol, verbal analysis and, 147–150

   SAIP vs., 148

   SAT vs., 148

   score reporting for, 265

   skill profiles, knowledge and, 125

   structural fidelity in, 12

   structured procedure, knowledge network and, 125

   term usage, 19–20

   test development, 11–12

   test items in, 149

   thinking patterns and, 167

   traditional large-scale tests vs., 147–148

   for trait measurement, 130–140

   validity of, 141

   value of, 147–150

   verbal reports and, 147

cognitive diagnostic methods, applications of, 278

cognitive functioning

   computer models of, 8

   item difficulty and, 20

cognitive information, psychometric models and, 16

cognitive information processing, trait performance and, 242

cognitive item design, 120, 182, 257–264

cognitive model(s)

   of achievement, 15

   assessment and, 76

   attribute hierarchy in, 250–252

   coefficients for, 134

   diagnostic fit statistics for, 300

   educational measurement and, 243, 246–248

   future research, 341–343

   IRT modeling vs., 281

   item development in, 255–257

   item response and, 247

   item scoring, design in, 257–264

   normative models and, 80, 82

   of performance, 79, 119

   in psychometric literature, 194

   psychometric procedures and, 249–265

   of reasoning steps, strategies, 80

   of task performance, 149, 158, 243, 244, 253

   for test development, 150

   understanding and, 75–81

   weakness of, 248

cognitive model development

   eye-tracking and, 195–198

   verbal protocols and, 195

cognitive model variables, item difficulty regression on, 133–134

cognitive perspective, of educational assessment, 207

cognitive processes, 147

   categorization of, 128–129

   context and, 79

   dependencies of, 247

   hierarchy of, 247

   indicators of, 265–266

   in item solution, 128

   psychometric models for, 100

cognitive psychologists. See also psychologists

   psychometricians and, 20

cognitive psychology

   discipline structure of, 207, 208–213

   educational measurement and, 4, 8, 14

   information, test validity and, 4–5

   methodological characteristics of, 211–213

   psychometrics and, 4

   substantive approach with, 8–9

   test validity and, 4–5

   theoretical premises in, 209–211

   usefulness of, 8

cognitive skills

   explicit targeting of, 23

   inferences about, 248

   reporting, 264–265

   schematic representation of, 191–192

   state test specifications application of, 23

cognitive structures, universality of, 209–210

cognitive task analysis (CTA), 212

cognitive task demands, 229

cognitive theory

   assessment and, 10

   diagnostic item evaluation using, 194–198

   latent trait theory vs., 266–267

   measurement and, 246–247

   in test design, 120

cognitive variable structure, 232–234

cognitive-diagnostic models (CDM), 178–179

   CDS vs., 178–179

   ECD vs., 178–179

Cognitively Diagnostic Assessment (Nichols, Brennan, Chipman), 3

college admission assessments, 25, 31

College Board, 50

commercial assessments

   state-mandated vs., 36–41

   use of, 29, 38

commercial large-scale assessments, 25

Common European Framework of Reference for Languages (CEF), 224, 228

communicative action

   cooperation in, 72

   Habermas’s theory of, 72

   validity claims of, 72

competency models, theoretical, 227–228

competency scaling, 229–234

Competency Space Theory, 281

complex item-responses, task processing, student cognition and, 189

component latent variables, 225–226

computer models, of cognitive functioning, 8

computer networking, technology

   for CDA, 350

   diagnostic items and, 175–176

   test assembly, 191–194

computer-based assessment systems, 53

   complex scoring models of, 53

   in K-12 education, 53

computerized adaptive testing (CAT) systems, 193

concurrent interview, 151–153

construct definition

   item development and, 198

   of multiple-choice reading comprehension test questions, 185

   reporting of, 185–186

construct irrelevant variance, 123

construct map, 185–186

   OMC items and, 185–186

construct representation, 21, 122

   for cognitive diagnosis, 135–140

   completeness of, 126

   construct validity and, 126–127, 134

   granularity of, 126

   simplification in, 127

construct theory, 13

   CDA and, 7

   data and analysis relevant to, 7

   substantive approach in, 6

   validation, 7

construct underrepresentation, 122–123

construct validity, 111, 141, 177, 182, 300. See also validation, validity

   CDA and, 15

   CDA issues, 123–140

   cognitive diagnosis and, 119–120

   construct representation and, 126–127, 134

   diagnosis meaning and, 123–124

   general framework, 121–123

   issues of, 126–130

   Messick’s six aspects of, 121–123

   test design, administration for diagnosis and, 126–127

   verbal reports informing, 151

content analysis, test validation and, 100

content representation studies, for trait measurement, 130–140

content validity, 110, 111, 121. See also validation, validity

   consequential aspect of, 122

   external aspect of, 122

   generalizability aspect of, 122

   structural aspect of, 122

   substantive aspect of, 121–122

context, cognitive process and, 79

contributing skills, inference about, 174

convergent hierarchy, cognitive competencies and, 251

correct-answer-key (CAK), 323

   nSAT and, 332

   subscores and, 332

correlational analysis, 101–102

covering law model. See deductive-nomological model

criterion validity, 110–111

criterion-referenced testing, 106

critical thinking, standards, criteria of, 71

Cronbach, Lee, 4

CTA. See cognitive task analysis

curriculum, instruction and assessment, integrated system of, 22

data augmentation, 321, 323

   diagnostic score computation and, 325–327

   empirical study of, 327–332

   response data, scoring evaluators and, 322–325

decision concepts, 105–107, 110

declarative knowledge, diagnostic assessment and, 124

deductive-nomological model (D-N), 65

   alternatives to, 67–68

   critiques of, 66

   explanation relation asymmetry and, 67

   nominalist relative of, 68

deductivism, critique of, 66–67

deterministic-input, noisy-and-gate model (DINA), 222

diagnosis

   aspects of, 124

   meaning of, 123–124

diagnostic assessment

   additional formats of, 190

   cognitive basis of, 119

   declarative knowledge and, 124

   higher-order thinking skills and, 128

   implementation steps, 279

   item design and, 120

   verbal analysis and, 165–167

diagnostic inference, characteristics of, 243–248

diagnostic information, 29

   from classroom assessment practices, 29

   classroom practice integration of, 47–49, 51–52

   educator demand for, 47

   instructional relevance of, 51–52

   interpretation difficulty with, 41

   item-level results as, 32

   from large-scale assessments, 32, 38, 42–47

   legislation and, 31

   obstacles to use of, 38–41

   presentation of, 41, 54–55

   required, 31

   from SAT®, 50

   teacher use of, 36–38

   teacher views on, 31–32

   timeliness of, 38, 53–54

   utility of, 36

diagnostic items, 174–177

   computer networking and, 175–176

   evaluation of, 194–198

   key components of, 188

   scientific assessment and, 176–177

diagnostic modeling system, 124–125

diagnostic process, reporting system for, 244

diagnostic skill subscores, CAK, PIK, SNC-based, 329–330, 334

diagnostic tests

   educational reform and, 174

   frameworks for, 177–184

   history, 173

   item type selection, 188–190

   measurement models for, 119

   penetration of, 174–175, 177

   test design construct definition, 185–194

differential item functioning (DIF) analysis, 297

DINA. See deterministic-input, noisy-and-gate model

disattenuated correlations, CAK, PIK, SNC-based, 331–332

divergent hierarchy, cognitive competencies and, 251

D-N model. See deductive-nomological model

ECD. See Evidence Centered Design

educational accountability, 5

educational assessment(s)

   achievement measured by, 10

   cognitive perspective, 207

   cognitive psychology, examples, 226–234

   component latent variables, 225–226

   cultural specificity premise for, 211

   developmental perspective on, 224

   grain size, feedback purpose, 213–215

   psychological perspectives, SIRT models for, 215–226

   psychology and, 207

   under trait/differential perspective, 218

   universality premise for, 209

educational measurement

   cognitive models and, 243, 246–248

   cognitive psychology and, 4, 8, 14

   latent trait theories in, 266

Educational Measurement (Linn), 3, 20

educational measurement specialists. See psychometricians

educational psychometric measurement models (EPM)

   item response theory and, 9

   limitations of, 9

   psychological theory and, 9–14

educational reform

   diagnostic testing and, 174

   German context of, 227

educational testing, nominalism impact on, 62

Educational Testing Service (ETS), 276

educational testing theories, epistemology and, 69–70

educators. See also teachers

   assessment developers and, 47

   current efforts addressing, 47–51

ELL. See English Language Learning

EM. See Expectation-Maximization

EMstats. See examinee mastery statistics

English, national standards for, 228

English Language Learning (ELL), 278

epistemology, educational testing theories and, 69–70

EPM. See educational psychometric measurement models

ethical considerations, 82

ETS. See Educational Testing Service

Evidence Centered Design (ECD), 22, 179–181

   CDM vs., 178–179

   evidence model, 179–181

   student model, 179–181

   task model, 179–181

examinee mastery statistics (EMstats), 301–303

Expectation-Maximization (EM), 289

experimental manipulation, 100–101

explanation, causation and, 62–70

extended essay responses, 150

eye fixations, eye tracking, 150, 195–198

Fast Classifier, 306

feedback, granularity, purpose of, 213

formal education, aims of, 221

Four Decades of Scientific Explanation (Salmon), 66

A Framework for Developing Cognitively Diagnostic Assessments (Nichols), 3

French, national standards for, 228

Fusion Model, 178, 223, 280–285

   ability parameters, 293–294

   Arpeggio in, 276, 280, 287

   checking procedures, 289–305

   convergence checking, 289–293

   convergence in, 290–293

   development status of, 314–315

   four components of, 276–277

   internal validity checks, 300–303

   item parameters, 294–296, 298

   MCMC in, 276

   model fit statistics, 298–300

   non-influential parameters in, 296

   parameter estimation method, 280–285

   proficiency scaling, 307–308

   Q matrix, 298, 309

   reliability estimation, 304–305

   reparameterization, 284

   score reporting statistics, 289–305

   skill mastery and, 282, 306

   statistics calculated in, 299

   subscore use, 308–312

   weighted subscoring in, 311–312

future research, 51–55, 267–269, 337–338

   cognitive diagnostic inferences, 345–347

   cognitive models in CDA, 341–343

   granularity, 343–345

   integrated theories, 348–351

   principled test design, 343–345

   reporting CDA results, 347–348

Galton, Sir Francis, 4

general component latent trait model (GLTM), 223

general intelligence, 91–92

   IQ-test for, 92

Germany, educational reform in, 227

GLTM. See general component latent trait model

group comparisons, 101

guessing, 66

   acceptable risk of, 66

   nominalist view of, 63

Habermas, J., 72

Harré, R., 68–69

Hierarchy Consistency Index (HCIi), 262–264

higher-order thinking skills

   CDA and, 125

   measurement of, 21

How People Learn: Brain, Mind, Experience, and School, 24

Human Information Processing (Newell, Simon), 208

ideal response patterns, examinees falling into, 136–137

impact concepts, 107–109, 110

IMstats. See item mastery statistics

inferences, inference model

   about achievement tests, 74

   about cognitive skills, 248

   about contributing skills, 174

   measurement model and, 85

   scientific theory and, 85

   statistical generalization model and, 64

   theoretical difficulties of, 62

information processing

   computational model of, 158

   cultural specificity of, 210–211

   mental operations and, 219

   performance and, 219

   problem solving and, 208–209

   psychonomic research on, 94

   SIRT models and, 219–225

Institute for Educational Progress (IQB), 227

institutional barriers, 74

instruction, cognition theories and, 245

instructional design, search strategy and, 176

instructional intervention, understanding and, 82

integrated theories, future research, 348–351

intelligence

   general, 91–92

   individual differences in, 91

   IQ-test for, 92

   theoretical attribute of, 91

   WAIS, 91

intelligent tutoring systems, 190

internal validity checks, in Fusion Model, 300–303

interviews, concurrent vs. retrospective, 151–155

Iowa Test of Basic Skills (ITBS), 25

IQB. See Institute for Educational Progress

IQ-test, validity of, 92

IRF. See item response function

IRT. See item response theory

IRT modeling, cognitive diagnosis models vs., 281

ITBS. See Iowa Test of Basic Skills

item counts, by cognitive category, 328

item design, scoring

   assembling objects, 182

   CDA impact by, 128

   cognitive, 120, 182, 257–264

   cognitive research methodology, findings and, 128

   construct definition and, 198

   development of, 21

   diagnostic assessment and, 120

   performance and, 120

   systematic, defensible approach to, 120

   theoretical attribute in, 99

item difficulty

   modeling research on, 21

   response slip and, 267–268

item forms, 216

item mastery statistics (IMstats), 301–303

item parameters

   Bayesian framework for, 287

   chain length impact on, 290–292

   in Fusion Model, 294–296, 298

item response, 88

   attribute and, 89, 97–98

   cognitive models and, 247

   systematic variation in, 94

   task performance and, 247

item response function (IRF), 280

   for Unified Model, 282

item response theory (IRT), 8, 210, 278

   assessment analysis through, 280

   EPM models based on, 9

   trait/differential vs. behavioral perspective, 8

item solution, cognitive processes involved in, 128

item-level results, as diagnostic information, 32

Johnson-Laird theory, 158–167

   attribute hierarchy in, 159–161

K-12 education

   CDA in, 19

   computer-based assessments in, 53

KMK. See Standing Conference of the Ministers of Education and Culture

Knowing What Students Know: The Science and Design of Educational Assessment, 24

knowledge space theory, 119

knowledge structure, 147

knowledge-based work environments, preparation for, 5

LanguEdge, 278

large-scale assessments

   CDA and, 49–51

   classroom, instructional planning and, 47

   commercial, 25

   diagnostic information and, 32, 38, 42–47

   instruction and, 26–29

   state vs. commercial, 36–41

   of student strength and weaknesses, 5, 41–42

   suitability of, 41

   teacher’s views of, 41

   test specifications for, 23

   use of, 26–27

large-scale state assessments

   instructional program evaluation and, 5

   stakeholders need from, 15

   teachers and, 26

latent trait theories, cognitive theories vs., 266–267

latent variables, situated vs. generic, 225–226

learning

   connectionist view of, 209

   task-based, 232

Learning and Understanding: Improving Advanced Study of Mathematics and Science, 24

learning environment, CDA integration with, 244

linear hierarchy, cognitive competencies and, 250

linear logistic test model (LLTM), 217–218

Linn, Robert, 3

LLTM. See linear logistic test model

Markov Chain Monte Carlo (MCMC) algorithm, 276, 280

   convergence, 290

   description of, 287–289

mastery and understanding, indicators of, 8, 282, 301–303, 306

mathematics

   AHM application in, 252–253

   national standards for, 227–228

matrix completion item, 131

MCMC. See Markov Chain Monte Carlo algorithm

MCQ. See multiple-choice questions

measurement invariance, 103–104

   prediction invariance vs., 106

measurement, measurement models, 87–88, 103–105, 110. See also educational measurement; educational psychometric measurement models; psychological measurement; trait measurement, performance

   attribute structure and, 89

   bias in, 106

   cognitive principles with, 246–247

   cognitive process modeling approach to, 129

   decisions, impact of testing and, 103–112

   for diagnostic testing, 119

   experimental design and, 101

   of higher-order thinking skills, 21

   inference/inference model, 85

   invariance, 103–104, 106

   multidimensional, 319–320

   precision, 104

   sampling theory and, 129

   structural theory of, 129

   test validity, 104–105

   theoretical attribute structure in, 91

   uncertainty and, 85

   unidimensionality, 103–105

   without structure, 96

mental models, processes. See also student cognition, mental processes

   information-processing perspective, 219

   Johnson-Laird theory of, 158–167

   of problem solving, 129

   during test-taking behaviors, 7

mental processes, CDA and, 120

Messick, S., 121–123

metacognitive processes, 153–154

metrics, psychology vs., 5

Metropolis-Hastings procedure, 288

Ministry of Education, British Columbia (BC), 54

mixed response process

   nested, 95

   not nested, 95–96

MLTM. See multicomponent latent trait model

model parameter estimates, interpretation of, 293–300

MRCMLM. See multidimensional random coefficients multinomial logit model

multicomponent latent trait model (MLTM), 223

multidimensional random coefficients multinomial logit model (MRCMLM), 222

multidimensionality, in measurement information, 319–320

multiple classification models, 222

multiple-choice questions (MCQ), 321, 322

   diagnostic assessment limitations of, 189

   distractor-based scoring evaluators, 324, 337

   PIK and, 324

   SNC and, 324–325

   test design and, 189–190

multiple-choice reading comprehension test questions, construct definitions of, 185

mutual understanding, 73

NAEP. See National Assessment of Educational Progress

NAEP Science Assessment Framework, 189

National Assessment of Educational Progress (NAEP), 189, 313

   CDA vs., 148

national standards, 214, 227

   for English, French, 228

   for mathematics, 227–228

NCLB. See No Child Left Behind Act

Networking Performance Skill System (NetPASS), 175–177

   claim-evidence chain for, 180

new SAT (nSAT), 327–328

   CAK and, 332

Newell, A., 208

Nichols, Paul, 3

No Child Left Behind (NCLB) Act, standardized achievement tests and, 5, 173

nominalism

   causation, explanation and, 65

   educational testing impact of, 62

   guessing and, 63

   psychological constructs and, 63–64

normative models, cognitive models and, 80, 82

nSAT. See new SAT

Ohio Department of Education, 48

OLEA-on-line assessment, 190

OMC. See ordered multiple choice questions

optimality, 107

ordered multiple choice (OMC) questions, 176, 185–186

penetration, 177

   of diagnostic tests, 174–175

performance

   cognitive models of, 79, 119

   information-processing perspective on, 219

   item design and, 120

   understanding and, 82

Piaget, J., 89

PIK. See popular incorrect key

PISA. See Program for International Student Assessment

popular incorrect key (PIK), 329–330, 331–332, 334

   MCQ and, 324

   SNC and, 324–325

prediction invariance, measurement invariance vs., 106

predictive accuracy, 105–106

Preliminary SAT/National Merit Scholarship Qualifying Test (PSAT/NMSQT®), 25, 276

principled assessment design, 22

problem-solving

   CDAs and, 146

   cognitive antecedents of, 6

   information-processing perspective on, 208–209

   mental processes of, 129

   strategies, 128

   weaknesses, 146

processing skills, ordering of, 161

production rule, 220

proficiency scaling, in Fusion Model, 307–308

proficiency tests, unidimensional, 320–321

Program for International Student Assessment (PISA), 148

   CDA vs., 148

propositional network, 155

protocol analysis, 147, 151–155

   CDA and, 147–150

   limitations of, 169–170

   verbal analysis vs., 155–158, 168–170

PSAT/NMSQT®. See Preliminary SAT/National Merit Scholarship Qualifying Test

psycholinguistics, cognitive grammars in, 211

Psychological Bulletin, 20

psychological measurement, aim of, 87

psychological processes, inferences about, 6–7

Psychological Review, 64

psychologists, specializing in psychometrics, 5

psychology

   educational assessment and, 207

   EPM models and, 9–14

   metrics vs., 5

psychology-driven test development

   design revision, 12

   design selection, 11

   response scoring, 12

   substantive theory construction, 11

   test administration, 11

psychometric decision theory, 107

psychometric literature, cognitive model in, 194

psychometric models

   with behavioral perspective, 216–217

   choosing among, 215

   for cognitive processes, 100

psychometric procedures, cognitive models and, 249–265

psychometricians

   cognitive psychologists and, 20

   testing history and, 4–5

psychometrics

   cognitive information about students and, 16

   cognitive psychology and, 4

   models, 16

   procedures and applications, 15–16

   psychology and, 5

   technical advances in, 15

   test validity in, 86

Q matrix

   in AHM, 255

   in Fusion Model, 298, 309

randomized experiment

   in laboratory setting, 211

   for test validation, 101

Rasch-scaling, 229

reasoning steps, strategies, cognitive models of, 80

Reparameterized Unified Model (RUM), 223, 280

reporting

   cognitive skills, 264–265

   construct definitions, 185–186

   diagnostic process, 244

   future research, 347–348

   goals for, 264–265

   test scores, 264–265

response patterns, 150, 257–259, 260–261

response process

   attribute differences and, 96

   validity and, 93–99, 100

response slip, item difficulty and, 267–268

restricted latent class models, 222

retrospective interview, 151, 153–155

Rule Space Methodology (RSM), 119, 178, 221

RUM. See Reparameterized Unified Model

SAIP. See School Achievement Indicators Program

Salmon, W. C., 66

sampling, 129

sampling theory, measurement and, 129

SAT®. See Scholastic Assessment Test

SAT 10. See Stanford Achievement Tests

Scholastic Assessment Test (SAT®), 25, 31

   CDA vs., 148

   diagnostic information from, 50

School Achievement Indicators Program (SAIP), CDA vs., 148

science, scientific theory

   assumptions in, 86

   entities, attributes and, 85

   inference and, 85

   uncertainty in, 86

   validity in, 86

scientific assessment, diagnostic items and, 176–177

scores, scoring. See also subscores

   augmented, 332–337

   diagnostically useful, 319

   dichotomous vs. polytomous, 322

   empirical study of, 327–332

   observed vs. model estimated, 299–300

Scriven, Michael, 243–244

search strategy, instructional design and, 176

Simon, H., 208

simulation-based assessment, 188–189

SIRT. See structured item response theory

SIRT models, 215–226

   behavioral, trait differential perspectives, 215–218

   extensions to, 223

   under information-processing perspective, 219–225

   latent variables in, 213

   mixture, 223

   multivariate, 222

   utilization of, 225

skills diagnosis, 275–276, 277–278, 279

SNC. See strongest negative correlation

Spearman, Charles, 4

standardized achievement tests, NCLB Act and, 173

Standards for Educational and Psychological Testing, 121, 173

Standing Conference of the Ministers of Education and Culture (KMK), 227

Stanford Achievement Tests (SAT 10), 25, 31, 48

Stanford-Binet, 91

state-mandated assessments, 25

   commercial vs., 36–41

   diagnostic information presentation and, 41

   instruction at individual level and, 32–36

   teachers and, 31

   use of, 26–27, 36

statistical models, 66

   inferences and, 64

strongest negative correlation (SNC)

   MCQ and, 324–325

   PIK, 324–325

structured item response theory (SIRT), 16, 207. See also SIRT models

   precursor developments for, 215–218

   universality premise and, 210

student background, 78

   relevance of, 74–75

student cognition, mental processes

   complex item-responses and, 189

   test-based inferences about, 6

student strengths and weaknesses

   classroom-based assessments of, 52

   dimensionality of, 321–322

   explanatory information about, 13

   interpretation of, 23

   large-scale assessments of, 5, 41–42

students

   eye fixations of, 150, 195–198

   knowledge, skill categories of, 16

   response latencies of, 150

   teachers and, 74

subscores

   CAK and, 332

   in Fusion Model, 308–312

syllogistic reasoning, 161, 162–165

task decomposition, 21

task design, cultural symbolism and, 211

task performance

   attribute hierarchy and, 159

   cognitive models of, 149, 158, 243, 244, 253

   item response and, 247

   specifying cognitive model of, 250–255

task processing, complex item-responses and, 189

task-based learning, 232

Tatsuoka rule space model, 16, 254

teachers

   assessment options available to, 29–31

   assessments produced by, 29

   commercial large-scale assessments and, 31

   diagnostic information and, 31–32, 36–38

   large-scale assessments and, 41

   large-scale state assessments and, 26

   state-mandated assessments and, 31

   students and, 74

technology. See also computer-based assessment systems

   assessment practices and, 52–53

test assembly, 191–194

   computerized adaptive testing, 191–194

   discrimination, 193–194

test development, analysis, 15. See also psychology-driven test development

   for CDA, 11–12

   cognitive theory in, 120, 150

   construct definition for, 185–194

   construct validity and, 126–127

   deductive, facet design methods for, 99

   multiple-choice questions, 189–190

   test validation and, 99–100

   transparency in, 23

   verbal reports informing, 151

test items

   in CDAs vs. traditional large-scale tests, 149

   development of, 248

   empirical relationships of, 120

Test of English as a Foreign Language (TOEFL), 278

test scores, performance

   attributes and, 93, 109

   cognitive information processing approach to, 242

   comparisons of, 105–106

   explanations of, 61

   interpretation of, 7

   normative models of, 61

   predictive accuracy of, 110

   reporting, 264–265

test validation, validity, 4, 99–102, 104–105. See also validation, validity

   cognitive psychology and, 4–5

   content analysis and, 100

   correlational analysis and, 101–102

   in correlational studies, 102

   experimental manipulation and, 100–101

   group comparisons and, 101

   in psychometrics, 86

   randomized experimentation for, 101

   response process analysis and, 100

   test construction and, 99–100

testing procedures, 88

   consequences of, 112

   fairness of, 108

   ideological system impact on, 108

   impact concepts, 107–109

   social consequence of, 108

   systematic, 88

   validity of, 90

tests, testing

   criterion-referenced, 64

   goals of, 8

   impact of, 103–112

   psychometricians in history of, 4–5

   standardized vs. diagnostic, 141

   underlying constructs of, 8

   validity of, 93

test-taking behaviors, mental processes during, 7

thought

   CDA and, 167

   features of, 79

TIMSS. See Trends in International Mathematics and Science Study

TOEFL. See Test of English as a Foreign Language

traditional large-scale tests

   CDA vs., 147–148

   test items in, 149

trait measurement, performance

   cognitive information processing theories and, 242

   content representation studies vs. CDA for, 130–140

trait/differential psychology, educational assessment under, 218

Trends in International Mathematics and Science Study (TIMSS), 178

two-parameter (2PL) logistic IRT model, 259

uncertainty, measurement model and, 85

understanding

   achievement tests and, 61

   action and, 77

   causation and, 61, 70–75

   cognitive models and, 75–81

   degrees of, 81

   empirically modeling, 79

   fundamental normative of, 73

   instructional intervention and, 82

   mutual, 73

   normative implications and, 80

   performance and, 82

   validation of, 75

unidimensional tests, 319

unidimensionality, 103–105

Unified Model

   IRF for, 282

   Q matrix in, 281

universality premise, 209

   SIRT and, 210

unstructured hierarchy, cognitive competencies and, 250

validation, validity, 86, 102, 109. See also construct validity; content validity; criterion validity

   of CDA, 141

   composite attributes and, 98

   concept of, 87–93, 103, 110

   consequential basis of, 108

   criterion, 121

   defined, 99

   epistemology of, 99

   evidence for, 97

   internal vs. external, 300–301

   methodological problem of, 86

   response process and, 93–99, 100

   of test, 93

   of testing procedure, 90

   of understanding, 75

verbal analysis, 147, 150–158

   CDA and, 147–150

   diagnostic assessment and, 165–167

   8 steps of, 165–167

   limitations of, 169–170

   protocol analysis vs., 155–158, 168–170

verbal protocols, 195

verbal reports, 247

   CDA development and, 147

   construct validation and, 151

   test construction and, 151

WAIS. See Wechsler Adult Intelligence Scale

Wechsler Adult Intelligence Scale (WAIS), 91

weighted complex Skill k sum-score, 310



