UNIVERSITY PARK – There is near-universal agreement among policymakers that schools should be held accountable for meeting high expectations; indeed, every state has adopted some form of school accountability system. However, there are serious questions about what these accountability systems actually measure and whether they accurately identify school effectiveness.
Research by Ed Fuller, executive director of the Center for Evaluation and Educational Policy Analysis (CEEPA) in Penn State’s College of Education, suggests that Pennsylvania’s School Performance Profile (SPP) scores are inaccurate measures of school effectiveness.
“Researchers have consistently argued that accountability measures such as SPP scores must be adjusted for factors outside the control of educators in order to accurately identify school effectiveness,” Fuller said. “The Commonwealth’s SPP scores are strongly associated with student and school characteristics, and therefore may not be accurate in their assessments.”
Instead, Fuller said, “SPP scores are more accurate indicators of the percentage of economically disadvantaged students in a school than of the effectiveness of a school.”
Fuller’s research suggests that the currently available SPP scores should not be used to make judgments about school effectiveness unless a school’s score is compared only to the SPP scores of schools with similar student and school characteristics. Even then, he says, such comparisons should be made cautiously because other, unmeasured factors may explain differences in scores.
“There are a number of options that the Commonwealth could employ to calculate SPP scores that are more accurate measures of school effectiveness. In doing so, the Commonwealth would be assisting educators to improve their practice while providing valid information to the public and policymakers about the effectiveness of their local schools,” Fuller said.
In addition, Fuller cautions that SPP scores should not be used as a component of educator evaluations, because doing so would lead to inaccurate judgments about teacher and principal effectiveness and could exacerbate existing inequities in the distribution of teachers.
“Because the SPP scores are so strongly correlated with student characteristics, teachers and principals in schools serving high percentages of economically disadvantaged students will be identified as less effective than they really are, while those serving in schools with low percentages of economically disadvantaged students will be identified as more effective than they actually are,” Fuller said. This could lead the most qualified and effective teachers to seek jobs in schools with high SPP scores, magnifying the existing inequities in the distribution of educator quality across schools.
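To see the mechanism Fuller describes, consider a small, hypothetical simulation (the numbers below are invented for illustration and are not from the study): when a rating partly reflects the share of economically disadvantaged students a school serves, schools with identical underlying effectiveness still end up ranked largely by their demographics.

    import random

    random.seed(1)

    # Hypothetical illustration: every school below has the SAME true effectiveness,
    # but its raw rating also reflects the share of economically disadvantaged
    # students it serves -- mimicking the correlation Fuller describes.
    schools = []
    for i in range(10):
        pct_disadvantaged = random.uniform(0.05, 0.95)  # share of low-income students
        true_effectiveness = 70.0                       # identical for every school
        raw_score = true_effectiveness - 30 * pct_disadvantaged + random.gauss(0, 2)
        schools.append((f"School {i + 1}", pct_disadvantaged, raw_score))

    # Ranking by raw score orders the schools almost entirely by poverty,
    # not by effectiveness -- which is identical by construction.
    for name, pct, score in sorted(schools, key=lambda s: s[2], reverse=True):
        print(f"{name}: {pct:.0%} disadvantaged, raw score {score:.1f}")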
Fuller’s research includes several recommendations:
— Review the percentage weights assigned to the various SPP components. Specifically, the Commonwealth should carefully assess the weights assigned to the individual indicators and components and consider increasing the weights of those with the weakest relationships to student and school characteristics.
— Create an online tool that identifies comparison schools for each school in the Commonwealth. The identification of comparison schools would be based on rigorous statistical methods that accurately identify schools with similar student and school characteristics. The set of comparison schools would give educators an appropriate group of schools against which to compare their own school’s effectiveness score, and would give local educators and policymakers a far more accurate view of local school effectiveness.
— Construct an alternative rating system outside the system required by the U.S. Department of Education (USDoE). This alternative system would adjust SPP scores for student and school characteristics outside the control of educators, so the adjusted scores would more accurately capture school effectiveness; a brief sketch of one such adjustment follows these recommendations. This would be beneficial in two ways. First, the public and policymakers would have more accurate information about schools and thus could make far more informed judgments and choices about them. Second, educators in lower-performing schools could accurately identify high-performing comparison schools from which they could learn.
— Recognize the flaws in the current system and work collaboratively to build a more accurate system. The Commonwealth should recognize the strengths and weaknesses of the current SPP effort and engage educators, policymakers, and the public in a discussion about how to more accurately capture school effectiveness. Importantly, the Commonwealth should provide data to researchers so that those with experience in evaluating such systems could provide unbiased and useful information about creating more effective systems.
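Fuller’s report does not prescribe a specific statistical method, but one common approach consistent with these recommendations is regression adjustment: regress raw scores on characteristics outside educators’ control, treat the residual as the adjusted score, and then compare each school against its nearest demographic neighbors. The sketch below uses hypothetical data and column names and illustrates the idea only; it does not reproduce the Commonwealth’s or CEEPA’s actual calculations.

    import numpy as np

    # Hypothetical data: raw SPP-style ratings plus characteristics outside
    # educators' control (names and values invented for illustration).
    names      = np.array(["A", "B", "C", "D", "E", "F"])
    scores     = np.array([55.0, 62.0, 78.0, 81.0, 90.0, 70.0])   # raw ratings
    pct_disadv = np.array([0.90, 0.75, 0.40, 0.35, 0.10, 0.55])   # % economically disadvantaged
    pct_ell    = np.array([0.20, 0.15, 0.05, 0.04, 0.01, 0.10])   # % English learners

    # 1) Regression adjustment: fit rating ~ characteristics, keep the residual.
    #    A positive residual means the school scores above expectation given its context.
    X = np.column_stack([np.ones_like(scores), pct_disadv, pct_ell])
    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
    adjusted = scores - X @ coef

    # 2) Comparison schools: nearest neighbors in characteristic space.
    chars = np.column_stack([pct_disadv, pct_ell])
    for i, name in enumerate(names):
        dist = np.linalg.norm(chars - chars[i], axis=1)
        peers = names[np.argsort(dist)[1:3]]   # the two most similar schools (excluding itself)
        print(f"School {name}: adjusted score {adjusted[i]:+.1f}; compare with {', '.join(peers)}")

Residual-based adjustment is only one of several possible designs; value-added models or matched comparisons are alternatives, which is one reason the report urges the Commonwealth to engage researchers in building a more accurate system.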
Fuller said that to assist educators in making more accurate judgments about their own effectiveness and in selecting appropriate comparison schools, CEEPA will create a new index that adjusts the existing scores based on available data related to student characteristics and other school contextual factors.
The mission of CEEPA is to provide unbiased, high-quality evaluation and policy analysis services to education and other organizations in the Commonwealth of Pennsylvania and across the nation.
For more information, visit http://www.ed.psu.edu/ceepa.