How School Districts Should Respond: Measuring Meaningful Educational Benefit
TASA ID: 10771
In light of the U.S. Supreme Court ruling in Endrew F. v. Douglas County School District, this article addresses how school districts should respond to the decision. In March of 2017, the Court ruled unanimously, eight to zero, in favor of the student in Endrew F. v. Douglas County School District. The ruling raised the standard set by the seminal 1982 special education case, Board of Education of the Hendrick Hudson Central School District v. Rowley. The Rowley decision held that a student’s individualized education program (IEP) must be reasonably calculated to enable the child to receive educational benefit. That holding became the “gold standard” school districts used to drive the IEP development process. However, it left behind an ambiguous standard, offering little guidance on how to determine what counts as an educational benefit.
According to the National Center for Education Statistics (2017), approximately 6.6 million children, or 13 percent of total public school enrollment, are served under the Individuals with Disabilities Education Act (IDEA). Under the IDEA, an IEP must be prepared and reviewed by school officials and the child’s parents or guardian at least annually, and students with disabilities must be provided a free appropriate public education (FAPE). The term “appropriate” has ignited many litigious debates over whether an “appropriate” education was in fact provided; the term is vague, at best.
In the Endrew case, the decision implies that districts must provide students with disabilities the opportunity to make appropriately ambitious and measured progress. The parents of a child on the autism spectrum, who attended public school through fourth grade, were concerned that he was not making the progress he should be making. They disenrolled their child from the public school and unilaterally placed him in a private school that specialized in working with children on the spectrum. Endrew made documented progress while in the private school, and his parents argued the district should pay for his tuition. The district said “no.” The parents lost their case before an administrative-law judge, a federal district judge, and the U.S. Court of Appeals for the 10th Circuit, in Denver. The 10th Circuit said the district was only responsible for providing a program that was “merely more than de minimis,” a legalese way of saying not much at all (Lee, 2017). That standard harks back to the old analogy of providing the Chevy but not the Cadillac; it is a floor of opportunity, not the best education available.
However, in the Endrew case, the Supreme Court articulated a more demanding standard. Chief Justice Roberts wrote that a child’s IEP must be "appropriately ambitious," providing the child the chance to "meet challenging objectives." Furthermore, Chief Justice Roberts said that "for children with disabilities, receiving instruction that aims so low would be tantamount to 'sitting idly, awaiting the time when they were old enough to drop out,'" quoting from Rowley (Samuels and Walsh, 2017). In other words, a trivial benefit is not a strong enough standard. The Court remanded the case to the 10th Circuit to be reconsidered in light of the higher standard. In short, thanks to this decision, we now know that IDEA requires meaningful benefit. We just don’t know what “meaningful” means (Dunn, 2017).
That should leave school districts scratching their heads and wondering whether their special education programs meet the rigor of this new Supreme Court ruling. School districts should ask whether their programs could withstand such scrutiny and defend the concept of meaningful benefit. FAPE remains unclear to many within the field of special education, including those who serve as the local educational agency (LEA) representative. LEA representatives are generally administrators who have supervisory authority and need to be able to defend their programs. Districts must carefully design educational programs that result in educational benefit and are validated through data collection that demonstrates progress toward significant learning (Katsiyannis, Counts, Popham, Ryan, & Butzer, 2016). A legally defensible IEP will uniquely support the eligible student and optimize conditions so the student makes meaningful educational progress.
IEP teams must determine and articulate IEP goals in a way that will demonstrate meaningful progress, yet teams often struggle with setting the criterion for those goals. How much growth should be expected is, at times, only a best-guess judgment; numbers are often tossed out to suggest the degree of expected progress based on no more than a “gut feeling.” Hint: the answer to how much growth to specify in an IEP goal is not “85% of the time,” as is often written with no reasonable calculation in mind.
Research has equipped educators with sophisticated and accurate methods for determining reasonable yet ambitious growth targets in many skill areas. This brief provides the reader with a method for determining instructional reading levels, a way to determine the rate of improvement, and a method for monitoring progress.
For example, determining a student’s instructional reading level begins with an individual, “sit-by-the-child” assessment. In other words, listen to the child read. The reading level is generally determined by assessing three variables: reading accuracy, comprehension, and reading fluency rate.
The teacher determines the child’s independent, instructional, and frustration reading levels by assessing how accurately the child reads the words in a passage. Accuracy is calculated as the percentage of words read correctly.
The independent level is the difficulty level at which the student can read orally with accuracy, decode the text, and comprehend at an appropriate level without teacher support. At this level the student reads the words of the passage with 98%-100% accuracy and comprehends at 67%-79%, or scores three or four on the four-point retell rubric discussed later in this brief.
The instructional level is the level at which the child should be taught, and instructional reading materials should match it. In general, it is the level at which the student can read the words in a leveled passage with 93%-97% accuracy and respond correctly to 75% of comprehension questions, or score three or four on the four-point retell rubric.
The frustration level is reached when accuracy falls below the instructional range. Comprehension at this level is 50% or lower, or a score of one or two on the four-point retell rubric.
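For readers who want to see these criteria at work, the short sketch below (in Python) applies the accuracy and retell thresholds just described. It is a minimal illustration only; the function and variable names are for demonstration and are not part of any published assessment tool.

```python
def reading_level(words_attempted, errors, retell_score):
    """Classify a cold-read passage as independent, instructional, or frustration level.

    Thresholds follow the accuracy and retell criteria described above;
    the names here are illustrative, not part of any published tool.
    """
    accuracy = (words_attempted - errors) / words_attempted * 100

    if accuracy >= 98 and retell_score >= 3:
        return "independent"
    if accuracy >= 93 and retell_score >= 3:
        return "instructional"
    return "frustration"


# Example: 104 words attempted with 5 errors and a retell score of 3
print(reading_level(104, 5, 3))  # accuracy is roughly 95.2%, so "instructional"
```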
Assessment is conducted using a cold reading, which means the student has never read the passage before. Reading probes are kept separate from instructional materials. In anticipation of conducting a reading assessment, the teacher prepares reading passages at multiple reading levels. Prepared passages should be 100 to 250 words in length, and a cumulative word count should be tabulated at the end of each line of the passage.
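For teachers who prepare their own scoring copies electronically, a small script can append the cumulative word count to each line. The sketch below shows one illustrative way to do so, assuming the passage is stored as plain text; the names are for demonstration only.

```python
def cumulative_word_counts(passage_text):
    """Return each line of a passage with a running word total appended,
    for the teacher's scoring copy of a reading probe."""
    total = 0
    scored_lines = []
    for line in passage_text.splitlines():
        total += len(line.split())
        scored_lines.append(f"{line}  [{total}]")
    return scored_lines


sample = "Maria packed her bag for the long trip north\nand checked the weather one more time"
for scored in cumulative_word_counts(sample):
    print(scored)
# Maria packed her bag for the long trip north  [9]
# and checked the weather one more time  [16]
```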
There are several online resources where a teacher can create or download leveled reading probes. One resource, the Intervention Central reading probe generator, can be found at http://www.interventioncentral.org/teacher-resources/oral-reading-fluency-passages-generator. Teachers who create their own assessments should use the same readability formula each time; different formulas produce different results, so consistency avoids conflicts in computed grade levels. One widely used formula is Flesch-Kincaid.
It is recommended that the teacher have multiple passages, or probes, at the same difficulty level so a baseline can be accurately established; the baseline is usually established using two to three probes at the same level. Weekly progress monitoring probes are created in the same manner.
There is always some debate about what is and is not a reading error. So that data are comparable, applying consistent error rules is essential. Oral reading errors include mispronunciations, substitutions, omissions, transpositions of word pairs (counted as one error), and words the examiner must supply after three seconds. Self-corrected words, repetitions, dialectal speech, and inserted words are not counted as errors and are ignored.
Reading fluency is the number of words read in one minute minus the counted errors. Having a running word total at the end of each line facilitates scoring the passage. The student reads for one minute, so the teacher should have a timer or a watch with a sweep hand. To ensure fidelity, the teacher should use the same directions each time an evaluation is conducted: “When I say, 'start,' begin reading aloud at the top of this page. Read across the page [point and sweep across the page left to right]. Try to read each word. If you come to a word you don't know, I'll tell it to you. [The teacher will wait for three seconds before providing a word.] Be sure to do your best reading. Are there any questions? Ready, begin.”
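The fluency calculation itself is simple arithmetic. The sketch below shows one illustrative way to tally words correct per minute under the error rules above; the miscue labels are for demonstration and can be adapted to whatever shorthand a teacher marks on the protocol.

```python
# Miscue types that count as oral reading errors versus those that are ignored,
# following the error rules above; the labels themselves are illustrative.
COUNTED_ERRORS = {"mispronunciation", "substitution", "omission",
                  "transposition", "word_supplied_by_examiner"}
IGNORED = {"self_correction", "repetition", "dialect", "insertion"}


def words_correct_per_minute(last_word_reached, miscues):
    """Fluency rate: words read in one minute minus the counted errors."""
    errors = sum(1 for miscue in miscues if miscue in COUNTED_ERRORS)
    return last_word_reached - errors


# Example: the student reached word 95 at the one-minute mark and made two
# substitutions, one omission, and one self-correction (which is not counted).
print(words_correct_per_minute(95, ["substitution", "substitution",
                                    "omission", "self_correction"]))
# 92 words correct per minute
```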
The teacher and the student look at the same reading passage; the teacher’s copy shows the word count per line, and the student’s copy does not. The size of the font should be appropriate for the student’s age or needs. The teacher marks the types of errors the student makes on the scoring protocol. At exactly one minute, the teacher marks the last word read on the protocol but allows the student to continue reading so that enough of the passage is covered. Comprehension is then assessed using a retell method.
The teacher begins assessing at a passage difficulty level assumed to be the student’s instructional level. A student may score at the instructional level on multiple passage levels, so the teacher should continue testing at higher levels to establish the frustration level and be prepared to test at lower levels if an instructional level has not been established. An instructional level is established when the student reads 93%-97% of the words in the passage accurately within one minute.
Once the student has finished reading, the teacher assesses comprehension by asking the student to retell what was read. The teacher removes the passage and says, “Now tell me as much as you can about the passage you have just read.” If the student stops or hesitates, provides a limited response, or gets off track, the teacher says, “Can you tell me anything else about the passage?” Retell is not a timed assessment. The following retell rubric is used to judge the quality of the student’s response.
Retell Rubric
Comprehension is acceptable:
4 - Provides 3 or more details in a meaningful way that captures the main idea.
3 - Provides 3 or more details in a meaningful sequence, although the main idea may not be stated.
Comprehension is considered weak:
2 - Provides 3 or more details that relate to the passage.
1 - Provides 2 or fewer details that may or may not relate to the passage.
Fluency rates are collected for each probe read. Once the highest instructional level is determined, based on accuracy and comprehension, the student’s reading rate is compared against national or local norms. The oral reading fluency norms developed by Jan Hasbrouck and Gerald Tindal (2006), first published in The Reading Teacher, are an excellent source. The Hasbrouck and Tindal oral reading fluency table can be used in several ways; it allows the teacher to:
1. Identify the expected fluency rate by grade level and time of school year, with separate norms for fall, winter, and spring.
2. Match the student’s reading rate according to the number of words read correctly per minute.
3. Determine the percentile that corresponds to the student’s score.
4. Recommend supplemental fluency-building strategies for students scoring 10 or more words below the 50th percentile.
5. Calculate long-term fluency goals for struggling readers using the table.
Extensive research was conducted to determine oral reading rates for students in grades one through eight (Hasbrouck and Tindal, 2006). This research established norms for those grades during specific time bands: fall, winter, and spring. An average weekly improvement rate, known as the rate of improvement (ROI), is reported for the 90th, 75th, 50th, 25th, and 10th percentiles. These data allow the team to predict the expected improvement, week by week, based on researched expectations.
The goal is calculated by multiplying the anticipated number of weeks of the intervention by the average weekly improvement rate and adding the result to the baseline (the number of words read correctly per minute). This calculation becomes the reasonably calculated goal.
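A worked illustration follows. The numbers are hypothetical; in practice, the weekly ROI should be taken from the published Hasbrouck and Tindal (2006) table for the student’s grade level and chosen percentile, and the baseline from the student’s own probes.

```python
def reasonably_calculated_goal(baseline_wcpm, weekly_roi, weeks_of_intervention):
    """Goal = (average weekly ROI x weeks of intervention) + baseline rate."""
    return baseline_wcpm + weekly_roi * weeks_of_intervention


# Hypothetical values for illustration only -- the weekly ROI should come from
# the published Hasbrouck and Tindal (2006) table for the student's grade level
# and percentile, and the baseline from the student's probes.
baseline = 62   # baseline words correct per minute
roi = 1.1       # hypothetical average weekly improvement
weeks = 30      # anticipated weeks of intervention

print(reasonably_calculated_goal(baseline, roi, weeks))  # 95.0 words correct per minute
```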
Progress monitoring occurs during the intervention and requires frequent data collection using leveled reading passages written at the “end goal” level. Data are displayed graphically to ease instructional decisions. The goal line, also called the aim line, is added to the graph to help teams determine whether the student’s progress is on track or whether the intervention needs adjustment.
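The aim line itself is simply a straight line from the baseline to the goal. The sketch below, which continues the hypothetical numbers from the previous example, shows one way the weekly expected values could be generated and compared with observed probe scores; the decision about when an intervention needs adjusting remains a team judgment.

```python
def aim_line(baseline_wcpm, goal_wcpm, weeks_of_intervention):
    """Expected words-correct-per-minute value for each week, drawn as a
    straight line from the baseline to the reasonably calculated goal."""
    weekly_gain = (goal_wcpm - baseline_wcpm) / weeks_of_intervention
    return [round(baseline_wcpm + weekly_gain * week, 1)
            for week in range(weeks_of_intervention + 1)]


def weeks_below_aim_line(observed_scores, expected_scores):
    """List the weeks in which the student's probe score fell below the aim line."""
    return [week for week, (observed, expected)
            in enumerate(zip(observed_scores, expected_scores), start=1)
            if observed < expected]


expected = aim_line(baseline_wcpm=62, goal_wcpm=95, weeks_of_intervention=30)
observed = [64, 65, 64, 65, 68]                      # first five weekly probe scores
print(weeks_below_aim_line(observed, expected[1:]))  # [3, 4] -> weeks lagging the aim line
```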
The Supreme Court case Endrew F. v. Douglas County School District implies a greater emphasis on measured progress; IEPs must reflect more than a de minimis, or minimal, educational benefit. Applying the ROI calculation to establish a goal criterion is a thoughtful practice. The formula, (ROI x weeks of intervention) + baseline reading rate = goal, provides a scientifically derived, reasonably calculated basis for goal decisions rather than a number pulled out of thin air.
The use of the ROI calculation for goal development is a promising practice and is simple to carry out. Once the instructional reading level has been determined and a reasonable, ROI-based goal has been calculated, progress monitoring can begin. Through progress monitoring and frequent review of the data, teams can improve the rate of reading growth by making sound instructional decisions in real time. This measured approach to determining goals and monitoring progress applies educational research to optimize students’ academic growth, and it is one way to help ensure a defensible IEP under the Court’s more demanding standard.
Full disclosure: this article is written from the perspective of a former director of special education who is now a full-time college professor teaching special education courses at the undergraduate and graduate levels. The author is not an attorney. Education and career experiences have given this writer a heightened awareness of the need to equip teachers, preservice and experienced, with methods for designing meaningful IEP goals that include reasonably calculated criteria. By applying these methods of goal determination and progress monitoring, districts are better able to show that students are receiving meaningful benefit through their specially designed educational programs.
References
Dunn, J. (2017, Summer). Special education standards: Supreme Court raises level of benefit. Education Next, 17(3), 7.
Hasbrouck, J., & Tindal, G. (2006). Oral reading fluency norms: A valuable assessment tool for reading teachers. The Reading Teacher, 59(7), 636-644. doi: 10.1598/RT.59.7.3
Intervention Central. (n.d.). Reading. Retrieved from http://www.interventioncentral.org/teacher-resources/oral-reading-fluency-passages-generator
Katsiyannis, A., Counts, J., Popham, M., Ryan, J., & Butzer, M. (2016). Litigation and students with disabilities: An overview of cases from 2015. NASSP Bulletin, 100(1), 26-46.
Lee, A. M. (2017, March 22). Endrew F. case decided: Supreme Court rules on how much benefit IEPs must provide [Blog post]. In the News: Understood. Retrieved from https://www.understood.org/en/community-events/blogs/in-the-news/2017/03/22/endrew-f-case-decided-supreme-court-rules-on-how-much-benefit-ieps-must-provide
National Center for Education Statistics. (2017). Children with disabilities. The Condition of Education. Retrieved from https://nces.ed.gov/programs/coe/indicator_cgg.asp
Samuels, C. A., & Walsh, M. (2017, April 5). High court ruling firms up goal posts on special education rights. Education Week, 36(27), 1.
This article discusses issues of general interest and does not give any specific legal or business advice pertaining to any specific circumstances. Before acting upon any of its information, you should obtain appropriate advice from a lawyer or other qualified professional.
This article may not be duplicated, altered, distributed, saved, incorporated into another document or website, or otherwise modified without the permission of TASA. Contact marketing@tasanet.com for any questions.