Resources
Description of the A*Star Audit and Methods of Analysis
The A*StarSM Audit brochure
Description of the A*StarSM Audit method with examples
Example A*StarSM Report
Example A*StarSM multi-test Report
Evaluation of the A*Star method of review
2010
Final Management Report of the Office of Inspector General, January 25, 2010
The Department of Education, Office of Inspector General (OIG), conducted a data analytics project, in part to evaluate the oversight of the federal Ability-To-Benefit (ATB) testing program. As part of this project, the OIG evaluated 106 test administrators identified by the A*Star method of review as likely to have misadministered ATB tests. Several of the test administrators were excluded from the OIG review because of either the small number of tests they administered or the statute of limitations. Of the remainder, the OIG identified 83 “who provided potentially compromised or invalid ATB examinations for approximately 5,619 students at 133 Title IV post-secondary institutions.”
A*Star Presentations to Professional Groups
2012
CREATE – Consortium for Research on Educational Accountability and Teacher Evaluation, Annual Conference, Oct. 4-6, 2012, held at the Omni Shoreham Hotel, Washington, D.C.
Oversight of teacher test administration: Respect of educators, respect for standardization, and a program to identify what’s wrong and fix it together.
Historical data on irregularities in test administration are presented in support of a program to oversee educational testing through a process of continuous evaluation, training, monitoring, and, only as a last resort, sanctions. The presentation provides illustrative examples, matching deviations in procedures with irregularities in test results.
Presented by Eliot R. Long, A*Star Audits, LLC
2010
CREATE - Consortium for Research on Educational Accountability and Teacher Evaluation Annual Conference, Oct. 7-9, 2010, held at The College of William & Mary, Williamsburg, VA
Misadministration of Standardized Achievement Tests
There has been a marked increase in the frequency of misadministration of achievement tests, and the misadministration more often involves the school administration itself. Rates of misadministration vary widely across school districts and are highest among small schools.
2009
CREATE - Consortium for Research on Educational Accountability and Teacher Evaluation Annual Conference, Oct. 8-10, 2009, held at The Brown Hotel, Louisville, KY
Masking Variations in Achievement Gain
School districts commonly encourage all students to guess, if necessary, to complete an answer for every test question. Among low-achieving students, this encouraged guessing may account for half or more of all test answers and up to 40% of the total test score. The random variation of guessed correct answers, the improper teacher influence that encouraged guessing invites, and the erratic test-work behavior of students combine to mask important variations in true achievement and achievement gains.
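The arithmetic behind that claim can be sketched with a simple expected-value model. The function below is purely illustrative (uniform random guessing on 4-option items is an assumption, not the parsing method the presentations describe): it computes the expected share of a student's score contributed by chance-correct guesses.

```python
def guessing_share(items_total, items_known, options=4):
    """Expected fraction of the final score contributed by blind guessing,
    assuming the student answers known items correctly and guesses
    uniformly at random on the rest (illustrative model only)."""
    guessed = items_total - items_known
    expected_guess_correct = guessed / options
    expected_score = items_known + expected_guess_correct
    return expected_guess_correct / expected_score

# A low-achieving student on a 50-item, 4-option test who knows 12 items
# and guesses on the remaining 38 expects 9.5 chance-correct answers,
# i.e. 9.5 / 21.5, roughly 44% of the total score.
```

Under this toy model, the lower the student's true knowledge, the larger the guessed share, which is consistent with the claim that guessing most heavily distorts scores for low-achieving students.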
2009
EERS - The Eastern Evaluation Research Society Annual Conference, April 19-21, 2009, held at the Seaview Resort and Spa, Galloway, NJ
The A*Star Audit of Group Test Administration
The A*Star Audit is based on a three-step process: (1) a normative test item response pattern is identified, (2) the response pattern of each test-taker group (e.g., classroom or test center) is measured against the normative pattern, and (3) outliers, groups whose response patterns differ significantly from the normative pattern, are identified. Additional steps are taken to identify those test-takers most likely to have been subject to improper influence and to assess the likely method of influence.
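The three steps above can be sketched in code. This is a hypothetical illustration of the general idea, not the proprietary A*Star method: the distance measure (mean absolute deviation of per-item proportions correct) and the z-score cutoff are assumptions chosen for simplicity.

```python
import numpy as np

def flag_outlier_groups(responses_by_group, z_threshold=3.0):
    """Illustrative three-step outlier screen (not the A*Star method).
    responses_by_group maps a group id (classroom, test center) to a
    2D array of 0/1 item scores shaped (test-takers, items)."""
    # Step 1: normative pattern = proportion correct on each item,
    # pooled across every test-taker in every group.
    pooled = np.vstack(list(responses_by_group.values()))
    normative = pooled.mean(axis=0)

    # Step 2: measure each group's response pattern against the norm
    # (mean absolute deviation of per-item proportions correct).
    distance = {
        gid: np.abs(rows.mean(axis=0) - normative).mean()
        for gid, rows in responses_by_group.items()
    }

    # Step 3: flag groups whose distance from the norm is a
    # significant outlier relative to the other groups.
    vals = np.array(list(distance.values()))
    mu, sd = vals.mean(), vals.std()
    return [gid for gid, d in distance.items()
            if sd > 0 and (d - mu) / sd > z_threshold]
```

A group whose answer sheets were improperly influenced (for example, a classroom with implausibly uniform correct responses) would show a per-item pattern far from the pooled norm and be returned by the final step.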
2008
FCAP - Fordham Council on Applied Psychometrics First Annual Conference, June 26-28, 2008, held at Fordham University, Bronx, NY
Searching for DIFF, Drift, and Gain among the deck chairs: Why we can’t make sense of measures of educational achievement
A method for parsing test scores for guessed correct answers is derived from large test item data sets, including elementary schools, the NAEP, and employer-administered tests. Applied to elementary school data, the parsing method reveals a very large role for guessing in determining student scores. Guessed correct answers are found to act as a test score modulator, minimizing both gains and losses in measured achievement. It is suggested that the practice of teachers encouraging their students to guess, if necessary, to complete answers for all test questions is a substantial impediment to the use of test scores for program evaluation and accountability.
2003
ATP - The Association of Test Publishers Annual Conference, Feb. 24-26, 2003, held at the Amelia Island Plantation, Amelia Island, FL
Identifying Rogue Test Administrators
A detailed presentation of the method and the results from applying the newly developed A*Star Audit to “Ability-To-Benefit” testing and to elementary school assessment. |