I have an update on the i-Ready scores of the Appleton Area School District’s Achievement Gap Reduction schools. Back in July, AASD Assistant Superintendent Nan Bunnow provided the percentage of AGR kindergartners through 3rd graders who had met their targeted growth goals for the 2021-22 school year. Across those four grade levels and across both math and reading, only 41%–60% of students reached their targeted growth for the academic year.
During the 06/27/2022 Board of Education meeting at which the Achievement Gap Reduction End-of-Year Report was originally presented to the Board, lead AGR principal Karen Brice equated a student reaching their targeted growth with achieving proficiency.
I ended up spending about a month communicating back and forth with Nan via email and finally speaking with her on the phone about these scores and what they meant. I’ve included PDFs of the emails and a lightly edited transcript of the phone conversation.
Firstly, Principal Brice incorrectly equated achieving proficiency with reaching the targeted growth goal. In my phone conversation with Nan, she said, “[P]roficiency probably wasn’t the best term. I think Karen used it incorrectly.” She did note that she thought the Board of Education members didn’t ask about it because the leadership team had talked to them about targeted growth and stretch growth in previous meetings.
While no definitive reason was provided as to why Principal Brice inaccurately described what targeted growth goals were, Nan did suggest, “I think in Karen’s mind she was relating proficiency to accomplishment, but because we use the term ‘grade level proficiency’ it got muddy.”
Per a website Nan directed me to, the targeted growth goal “demonstrates the annual growth for an average student taking the i-Ready Diagnostic. On its own, it is an insufficient source of information to improve student outcomes because it is a normative measure that is not focused on grade-level proficiency.”
I asked what AASD’s proficiency scores were, as opposed to targeted growth scores, and whether it was possible to separate them out by AGR, non-AGR Title I, and non-AGR/non-Title I schools, as had been done with the targeted growth scores in the Achievement Gap Reduction report. AASD was able to provide district-wide scores but could not break them out by school type, so there doesn’t seem to be a ready way to compare the proficiency levels at AASD AGR schools with those at other schools; however, Nan did say, “We will be looking if it’s possible to [separate the scores out by school type] with the new scores that we have, but I’m not sure it will be. Because with AGR we look at the fact that they’ve been there for the full academic year and we have to set up certain assessment groups.”
The district-wide proficiency scores were fairly close to the AGR targeted growth scores for 2nd grade reading as well as 2nd and 3rd grade math, but were, thankfully, higher for all the other grades in both reading and math.
I asked about that mismatch and why such a low percentage of students were achieving one year of academic growth but a higher percentage of students were achieving proficiency. Nan had no definitive answer but suggested, “You know in speaking with Steve Harrison, and from my understanding of i-Ready, what could be happening is that, you know, students aren’t coming in behind, they’re coming in, in fact, ahead and we’re still expecting a year’s growth. So, they can reach proficiency, but we’re not giving them a full year’s growth. So I think you had an example in here. You said if a third grader starting out with the reading level of a student entering fourth grade, their targeted growth is still a year and it’s not based on proficiency.”
It was not clear to me (and remains unclear) why AASD is using targeted growth scores instead of actual proficiency rates to measure the achievement gap between AGR and non-AGR schools. Curriculum Associates, the creator of i-Ready, clearly states on their website that “on its own, [the targeted growth measure] is an insufficient source of information to improve student outcomes because it is a normative measure that is not focused on grade-level proficiency.”
My understanding, which Nan did not disabuse me of, is that a student could start the school year two years behind, achieve their targeted growth goal, and end the school year still two years behind. Based solely on having achieved their targeted growth goal, however, they would look like they were doing better than a student who started the year half a grade ahead, achieved only three-quarters of a year of growth, and ended the year a quarter of a grade ahead.
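To make that concrete, here is a minimal Python sketch of the two hypothetical students above. The grade-level numbers and the functions are my own illustration of the general idea, not AASD’s or i-Ready’s actual scoring model:

```python
# Illustrative sketch only: grade levels are measured in years, where 3.0
# means "reading at the level of a student entering 3rd grade." The names
# and numbers below are hypothetical, not i-Ready's actual scoring.

def met_targeted_growth(start_level, end_level, target_years=1.0):
    """A growth metric asks only whether the student gained the target
    amount, regardless of where they started or finished."""
    return (end_level - start_level) >= target_years

def is_proficient(end_level, grade):
    """A proficiency metric asks whether the student finished at or above
    grade level: a fixed target independent of starting point. Here a 3rd
    grader is proficient if they end the year at the entering-4th-grade
    level (grade + 1.0)."""
    return end_level >= grade + 1.0

# Student A: a 3rd grader starting two years behind, gaining a full year,
# and ending the year still two years behind.
print(met_targeted_growth(1.0, 2.0))   # True  -- counted as "met growth"
print(is_proficient(2.0, grade=3))     # False -- still far below grade level

# Student B: a 3rd grader starting half a grade ahead, gaining only
# three-quarters of a year, and ending a quarter of a grade ahead.
print(met_targeted_growth(3.5, 4.25))  # False -- counted as "missed growth"
print(is_proficient(4.25, grade=3))    # True  -- above grade level
```

On a growth-only report, Student A looks like the success story and Student B like the failure, even though Student B ends the year above grade level and Student A ends it two years below.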
AASD specifically chose the i-Ready program because it provides data several times a year, but it seems hypothetically possible for AGR students to be well behind other schools in terms of actual student proficiency without that fact ever showing up in the data being reported out, because i-Ready is based on somewhat subjective individual targets rather than on how student performance compares against an unmoving, objective goal.
When I raised that concern in our telephone conversation, Nan responded, “One of the things that I—because of the same concern that you have that I had as well, when I finished my doctoral program that was my dissertation. I looked at our data and I wanted to compare apples to apples. Because not only are you mixing in different socioeconomic statuses, you’re also mixing in kids coming and going out of our schools, some that have received lower class size kindergarten through third and some that received it maybe for half a year. And they’re all mixed in there and that’s where it’s very hard to really understand is it the lower class size that is making a difference? And so my dissertation related to comparing our students that have been in lower class size kindergarten through third at AGR classes versus low income students at our non-AGR Title I that did not experience. And the study showed that we were making a statistically significant difference in ELA but not math, and so my hypothesis is as we have changed how we teach reading and not changed how we teach math as far as differentiation. So that’s one small study, but it feeds into exactly what you’re saying. There’s different measures, and we always have to make sure we really understand what we’re measuring.”
It also seems to me that, because the i-Ready measure of what constitutes a year’s worth of academic growth is based on the average growth of all the students taking the i-Ready test, it would be possible for standards to decline as the years progress: if there is widespread academic decline and students across the country become less proficient, that decrease in performance and achievement would not necessarily be noticeable because of the subjective nature of the i-Ready assessment. If enough students perform slightly worse every year, that drags down what is considered a normal year’s worth of growth, and there doesn’t seem to be a mechanism to ensure that the average year’s worth of growth remains academically rigorous rather than degrading over time.
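Here is a small sketch of the drift I am worried about, with entirely made-up numbers; i-Ready’s real norming is of course far more sophisticated than a five-student average, but the mechanism is the same: when the bar is the norm group’s own average, the bar falls with the group.

```python
# Hypothetical illustration (made-up numbers, not i-Ready's actual norming):
# if "one year of growth" is defined as the average gain of the norm group,
# the bar itself drops when the whole population slips, hiding the decline.

# Gains in tenths of a grade level for a five-student norm group.
cohort_gains = {
    2019: [12, 10, 9, 11, 8],  # average gain: 10 tenths = 1.0 grade levels
    2022: [10, 8, 7, 9, 6],    # average gain: 8 tenths  = 0.8 grade levels
}

FIXED_BAR = 10  # a hypothetical fixed standard: one full grade level

for year, gains in cohort_gains.items():
    norm_bar = sum(gains) / len(gains)            # the moving, norm-referenced bar
    met_norm = sum(g >= norm_bar for g in gains)  # "met targeted growth"
    met_fixed = sum(g >= FIXED_BAR for g in gains)
    print(f"{year}: norm bar {norm_bar / 10:.1f} yrs -> "
          f"{met_norm}/5 met norm bar, {met_fixed}/5 met fixed bar")

# 2019: norm bar 1.0 yrs -> 3/5 met norm bar, 3/5 met fixed bar
# 2022: norm bar 0.8 yrs -> 3/5 met norm bar, 1/5 met fixed bar
# The norm-referenced number looks unchanged even though every student
# gained less than their 2019 counterpart; only the fixed bar shows the slide.
```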
Nan’s response to that concern was, “You know, this is our first–we’re dipping our big toe into the i-Ready pool. But the results that we received were not sufficient to us as a workgroup, and that’s why I shared that we would be expanding the number of data points so that we could get a better picture. But I also–when we speak about assessments, they’re subjective and objective, but I also–you know, we would put this in the category of being norm referenced and more objective in the educational world. But we also understand that if a student came in and didn’t feel well the day they took the test, it may not be a great measure of where they’re scoring or where they’re performing. So those are things that we work with our teachers on to retest if they feel that it’s not coming out the same way as they’re seeing in the classroom. But, again, this is a twice a year report. There’s a lot more going on behind the scenes that the teachers are doing with each student, and then looking at grade levels as well. Each building has a score card and they’re tracking their information as well and making changes for continuous improvement along the way.”
Comparing AASD’s scores against national averages (both for i-Ready scores and actual proficiency rates) also seems to carry the risk of comparing our scores against scores that have been dragged down by badly performing schools. I asked Nan what the standard would look like if districts like Chicago and Los Angeles, and other districts with really atrocious scores, were taken out of the mix and AASD was compared only against towns of a similar size and socioeconomic status.
Nan answered, “We haven’t received that data. This is how it’s given to us from i-Ready. Certainly, with the Wisconsin Forward we do look at that and compare ourselves to other large school districts in the state.”
There are a few takeaways from all this that stand out to me.
(1) While I can understand the benefit of being able to see which schools are fostering more than one year of academic growth in their students, that information seems woefully incomplete if not paired with student performance as measured against an actual, solid, and unmoving academic target. The fact that AASD could not provide the actual proficiency rates in AGR, non-AGR Title I, and non-AGR/non-Title I schools is surprising to say the least. How these schools are objectively performing in terms of proficiency rates as compared to each other seems like an important piece of information.
(2) AASD doesn’t seem to have any way to track whether i-Ready’s targeted growth goals decrease in rigor over the years. Given the subjective nature of how i-Ready determines what one year of academic growth is, such tracking seems very much warranted.
(3) I thought it was odd that AASD staff had no clear answer as to why, in some cases, the percentage of students who reached their targeted growth goals was markedly lower than the percentage of students who attained proficiency. There was speculation as to why that might be the case, but no definitive answer. That seems like an important question to have answered. If, as was suggested, students began the school year with greater than expected knowledge but that head start wasn’t capitalized on, that also seems worth exploring.