We don't need another test
PSSAs do not provide diagnostic value or direct help to individual students
PSSA results are not timely

Why another test?

I am sympathetic to the critique that too much time is spent testing students, and that we are therefore sacrificing time that could be spent teaching them. Why do we need the PSSAs when we have so many other tests? For example, our elementary students already take several tests during the year: GMADE, GRADE, DIBELS, and teacher-designed quizzes and tests. Aren't we testing the same thing over and over? Let's consider what makes PSSAs different from these other exams.
We should only use valid, reliable, and fair test instruments when assessing school performance or setting graduation requirements. PSSAs and Keystones are criterion-referenced exams, and they are extensively tested to ensure validity, reliability, and fairness. While some other classroom assessments meet these criteria (GMADE, GRADE, and DIBELS, for example), most do not.
PSSA results are comparable across all other PA schools. PSSAs are taken in every school in Pennsylvania. Because testing is compulsory, test results represent a ‘census’ rather than a ‘sample’, enhancing the validity of the results. PSSAs are also administered under controlled conditions, which ensures excellent comparability across districts and allows us to benchmark UCF performance against other top districts. This gives us our true position among all schools and highlights where we might be falling short relative to other top-performing districts.
PSSAs are aligned to Pennsylvania's educational standards. If we accept that the standards outline the knowledge our students are supposed to have, then tests to measure student learning and school performance must be aligned to those standards.
PSSAs are uniquely able to assess student growth – how much students are learning each year. Growth scores carry the least bias from socio-economic status and other non-school factors, and do the best job of isolating the contribution of schools to student learning. School-level PVAAS (the Pennsylvania Value-Added Assessment System) is the single best measurement of how well our schools are performing.

In short, PSSAs are uniquely able to provide valid, reliable results measuring student achievement and growth against PA standards, and those results can be benchmarked against peer institutions around the state. Our other in-school assessments fall short:
Classroom exams and end-of-year summative assessments are usually unique to the classroom and the teacher, so the results are not comparable across classrooms, schools, or districts. They are administered in 'uncontrolled' conditions and lack the statistical testing needed to ensure validity, reliability, and fairness.
District-wide summative assessments like GMADE and GRADE are normed and validated. However, these exams are not used consistently across the state, and therefore do not enable us to compare our performance against all other districts. They are also not specifically aligned to Pennsylvania standards.
SATs and ACTs are not aligned to Pennsylvania standards nor to our UCFSD curriculum, and those exams are not designed to assess elementary or middle school students.

Where is the Diagnostic Value?

It is true that PSSAs lack diagnostic value, especially at the student level. But I think this is a misplaced criticism. To understand why, we first need to appreciate that different assessments serve different purposes: summative, formative, and diagnostic. Just as a carpenter cannot complete a project using only a hammer, our teachers and schools must combine different assessments to achieve their educational objectives. Diagnostic tests are administered prior to instruction, in order to identify the unique needs of