Tests show that during the pandemic, students lost a lot of ground, especially those already vulnerable. But only certain tests can tell educators what to do about it.
When people hear “standardized tests,” they usually think of the state-mandated assessments given in the spring. Those scores can have huge ramifications for schools and teachers, but they usually have no direct consequences for individual students.
Most schools, especially at the elementary level, also give other standardized assessments in reading and math throughout the school year to determine where students are and what they need. This practice is called “progress monitoring,” and concern about learning lost during the pandemic has brought it to prominence.
In theory, progress monitoring makes sense. In practice, it can end up holding students back. The effects largely depend on what is tested (or what educators think is being tested) and what actions they take based on the results.
The origins of progress monitoring can be traced to the 1980s and an approach called “curriculum-based measurement.” Teachers would give students brief assessments, such as counting the number of words a student could read aloud from a text in one minute, perhaps weekly or monthly. The results would indicate a student’s overall academic progress, and the collective data could show whether a pedagogical approach was working. Despite its name, curriculum-based measurement is not tied to any particular curriculum.
Progress monitoring became linked to special education and eventually to a system called “Response to Intervention,” or RTI. Teachers use assessments in the early grades to identify students at risk for reading problems and offer them an “intervention.” Students are divided into three tiers: Tier 1 receives the regular program; Tier 2 gets lessons targeted to their particular needs, in small groups, along with additional monitoring; and Tier 3, made up of those who did not respond to Tier 2, gets more intensive intervention.
Both progress monitoring and RTI were widespread by 2015. But that year, an in-depth federal study found that first graders who received Tier 2 interventions actually did worse than similar students who stayed in Tier 1. Second and third graders did not experience a negative impact, but they did not get a boost either.
The study did not examine the quality of the interventions, however. Most teachers receive inadequate training in how to teach reading, so it is possible that RTI, which is still widely used, could work well if teachers were equipped to implement it. And there is no standard RTI program. Some students in Tiers 2 and 3 may simply have gotten more of the same approach that was not working for them in Tier 1.
There are now several widely used standardized progress monitoring assessments that are supposed to be given three times a year, although schools may give them more often. And recent reports have used data from three of those tests to estimate the impact of the pandemic on student achievement.
The reports differ on the details but come to the same basic conclusions: Students are significantly behind where they would be in a normal year, losing the equivalent of four or five months of learning in reading and even more in math. The already huge gaps between racial groups have widened, as have those between high- and low-poverty schools. Students who spent more time in remote learning generally performed worse (a finding reinforced by detailed data on state testing in Ohio). And because the most vulnerable students were less likely to be included in the data, the situation may be even worse than it appears.
It is important to have this information, but if schools are going to act on it, they should know that different reading tests measure different things. One, called DIBELS, focuses primarily on early literacy or “decoding” skills: things like naming letters of the alphabet and phonics. If this test shows that students need more intensive instruction in these kinds of skills, as many children in early grades no doubt will, they certainly should get it – and it should be delivered by a teacher who has received the necessary training.
Other progress monitoring tests, like MAP Growth and i-Ready, focus more on comprehension “skills,” such as being able to identify the main idea of a passage. And, for complicated reasons, it is generally assumed that most reading problems are comprehension problems, especially after second or third grade.
But, as I have argued before, comprehension “skills” are different from decoding skills. Rather than simply improving with practice, they develop only alongside academic knowledge and vocabulary. The best way to address a comprehension problem is to immerse children in a coherent curriculum that builds knowledge of history, geography, science, and the arts in addition to literature. Unfortunately, most students instead spend many hours practicing supposed comprehension skills on a random assortment of texts, with no focus on content.
Even when armed with potentially useful information, schools can head in the wrong direction. In one first grade class I observed, the MAP test showed that students needed work on phonemic awareness (hearing the individual sounds in words) and phonics. But the administration asked the teacher to address the problem by leading the children through a “close reading” of a simple poem before they retook the test.
At another high-poverty elementary school, I observed a “data day,” when teachers pored over progress monitoring results that supposedly measured students’ proficiency in things like “making inferences” or “comparing and contrasting.”
“But they knew that,” one teacher moaned. Her students may have “known” it with one text, but that does not mean they would “know” it with another, because these are not generally applicable skills.
Progress monitoring that claims to measure abstract comprehension skills can have devastating consequences. It can mask decoding problems that, left unresolved, follow students into high school and beyond. It can also distract from the need to build students’ academic knowledge.
State reading tests and the National Assessment of Educational Progress (NAEP), administered to a representative sample of American students every two years, suffer from the same flaws. But at least they don’t directly lead educators to assign children to “tiers” that can just hold them back, or to spend more class time “getting the main idea.” That could change, however.
Florida Governor Ron DeSantis has announced plans to use that state’s test for “progress monitoring,” administering assessments three times a year instead of just once in the spring. And a team of researchers, seeking to counter waning enthusiasm for standardized state testing, has suggested making the results more “actionable.”
But treating state reading tests as diagnostic assessments risks making a bad situation worse. Progress monitoring and RTI, at least as implemented in most schools, have led to decades of stagnant or declining reading scores. State tests, which carry higher stakes, could only increase the counterproductive pressure to spend more time on “reading skills.”
We don’t need to abolish reading comprehension tests (although that might be a good idea). But we need to see them for what they are: general indicators of which groups of students lack the academic knowledge and vocabulary they need to do well, not precise measures of deficits in specific comprehension skills or guides to the remediation those deficits call for.
We should also assess decoding skills separately from reading comprehension: a recent study found that students with lower scores on the fourth-grade NAEP reading test were significantly more likely to have decoding problems than those with higher scores. And reading comprehension tests should ideally be based on the content students learn in school. Louisiana is currently experimenting with such a test, but it will work only if all or most schools in a state use the same curriculum. Louisiana may be the only state that meets that condition.
Educators also need to recognize that “testing” doesn’t just mean standardized assessments in reading or math. At the elementary level, it’s no longer common to quiz children to see whether they have understood the information they’ve been taught. In fact, I have the impression it’s considered illegitimate, because you’re “just” testing children’s knowledge rather than their supposed general comprehension or critical thinking ability. But it’s crucial to know whether children have understood and absorbed the content teachers have taught, assuming content has been taught. And this kind of test, which can be as simple as asking children to write a sentence about what they’ve learned, could actually help children learn instead of hampering their progress.