
Frequently Asked Questions about Curriculum-Based Measurement

In 2004, the National Center on Student Progress Monitoring held its first Summer Institute. The focus of the Institute was on CBM in Reading. Here, you will find an extensive compilation of the questions asked at the Institute, as well as some others that we thought would be helpful.

Reading Sub-topics

If you have questions about the new Reading FAQ or something you would like to discuss, visit our Discussion Board.

General Curriculum-Based Measurement (CBM) Questions

Curriculum-Based Measurement (CBM) versus Other Tools

Benchmarks

Letter Sound Fluency (LSF)

Word Identification Fluency Measure (WIF)

Passage Reading Fluency (PRF)

Where do I go from here?

General Curriculum-Based Measurement Questions

What is the purpose of the “timing,” that is, of measuring what the child can master within a certain timeframe?

The purpose of timing during CBM is to determine the child’s fluency with literacy-related skills. If a child is able to read 100 of 150 words correctly, but takes five minutes to do so, then the child is reading at a rate that indicates some degree of reading difficulty. Additionally, one criterion to be met by any CBM measure is that it must be efficient. One way to maximize efficiency is to minimize the amount of time a teacher must take to implement the assessment.
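
To make the rate explicit: 100 words read correctly over 5 minutes works out to 100 / 5 = 20 words correct per minute, and it is this rate, rather than the raw accuracy of 100 out of 150, that the one-minute timing is designed to capture.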


These measures are called curriculum-based, but are they based on our curriculum? Does that matter?

Research has shown that it doesn’t matter whether passages are based on a particular school’s curriculum. What’s important is whether the passages used for monitoring are constant (or at a similar level of difficulty) from one measurement period to the next. We would like to be able to detect whether students are growing in their overall reading proficiency.


How is the minimum number of required assessments determined? How do these requirements relate to NCLB? (Are there requirements stated by law?)

In terms of NCLB, each State establishes its assessment program. While CBM can be used within a framework of Adequate Yearly Progress, and can certainly provide information as to the progress of individual students toward meeting state standards, NCLB has no requirements related to the number of assessments for instructional decision making.


Doesn’t the child get distracted by the timer and the teacher marking on a piece of paper while they are trying to recite?

Student distractions can be kept to a minimum if the assessor does not sit right next to the child during the assessment and uses a clipboard that is held in a position where the child cannot see when responses are being recorded on the protocol. In addition, place the timer where the student cannot see the digits. Many assessors clip the timer to the top or side of the clipboard for convenient and easy viewing. Traditional kitchen timers with a dial that “ticks” should not be used because they are not accurate. Only digital timers or stopwatches should be used.


For students who have difficulty reading, how do you deal with anxiety/self-esteem issues connected with reading aloud for one minute?

Self-esteem and anxiety issues can be addressed successfully by the way the teacher sets the stage for CBM in his/her classroom. If the teacher makes CBM seem like a punitive activity linked directly to other things students consider negative (e.g., grades or class ranking), then students will feel insecure and nervous. If the teacher explains CBM as a method for helping a student see the gains he/she is making in reading and a method for individual goal-setting, then students are less likely to be insecure or nervous.


What sorts of adjustments are made for students whose native language is not English?

There are no research-based adjustments for English language learners (ELLs). The only adjustment supported by CBM research is purposeful ignoring of errors attributable to dialectical differences.


How can assessments be valid and reliable if teachers can grade tests differently?

Assessments for the purpose of progress monitoring can be valid and reliable at the teacher level if each teacher grades consistently. However, comparisons between teachers may not be as valid or reliable if tests are graded differently.


Do you have any resources/references for research correlating CBM results with performance on high-stakes testing?

See:

Good, R. H., III, Simmons, D. C., & Kame’enui, E. J. (2001). The importance and decision-making utility of a continuum of fluency-based indicators of foundational reading skills for third-grade high-stakes outcomes. Scientific Studies of Reading, 5, 257-288.

(We are currently seeking permission to post this article on our site.)


CBM versus Other Tools

We already use the MCAs and other standardized achievement tests to assess students. How is this different from CBM?

Standardized tests of achievement, like the MCAs, the Northwest Achievement Levels Tests, and the Iowa Tests of Basic Skills, are typically given once a year and provide an indication of student performance relative to peers at the state or national level. Conversely, curriculum-based measures are an efficient means of monitoring student performance on an ongoing basis. With CBM, we are able to detect whether students are, in fact, making progress toward an end goal and to monitor the effects of instructional modifications aimed at helping the student reach this goal.


How is CBM different from running records? Or informal reading inventories?

Running records and informal reading inventories (IRIs) focus on specific skills, whereas curriculum-based measures are indicators of overall reading proficiency. In addition, a large body of research has shown that one-minute samples of the number of words read correctly from reading passages are sensitive, reliable, and valid measures of reading proficiency; there is little comparable research to support the use of running records and IRIs. If teachers find them useful, running records and IRIs may be used in conjunction with weekly progress monitoring to help inform changes to students’ instructional programs.


What are the similarities and differences between Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and Curriculum-Based Measurement (CBM)?

CBM was developed over 25 years ago at the University of Minnesota and focused on reading, spelling, writing, and math for students in grades 1 through 8. CBM was designed to be efficient, reliable, and valid; to inform instruction; to monitor student growth; and to be tied to the curriculum. DIBELS was developed over 10 years ago at the University of Oregon and extended the procedures used in CBM down to early literacy skills in kindergarten through grade 3. The DIBELS procedures used for Oral Reading Fluency are the same as Passage Reading Fluency in CBM.


Can you use DIBELS materials and apply the decision making framework and goal-setting criteria used with CBM?

The oral reading fluency passages in DIBELS could be used as the materials for CBM passage reading fluency. The same criteria for setting goals and using the data in a decision making framework would apply. The additional DIBELS materials — Initial Sounds Fluency, Phoneme Segmentation Fluency, Nonsense Word Fluency, Word Use Fluency, Retell Fluency — are not identical to the other CBM reading materials (Letter-Sounds Fluency, Word-Identification Fluency, or Mazes) and therefore should not be substituted for CBM measures when setting goals and using the data in a decision making framework.


Benchmarks

What is the rationale for conducting benchmarking three times a year, especially for those students who are performing at or above the benchmark?

The rationale for conducting benchmarking three times a year with ALL students, even those students who have met or exceeded the benchmark, is that there is no guarantee students will continue to learn the skills being taught. Benchmarking ALL students three times a year provides educators with an efficient and accurate indicator of current skill performance. It also allows educators to monitor which students are on track for making AYP without having to wait for end-of-year testing.


How are benchmarks determined?

The benchmarks were derived from a compilation of research studies that have examined “typical” performance levels for students on these tasks. The most frequently cited source is Fuchs, Fuchs, Hamlett, Walz, and Germann (1993).

Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.

(We are currently seeking permission to post this article on our site.)


Does the Tukey method use the middle point or median number?

The Tukey method uses both the “median” and “middle” data points in establishing a trend.

Here, the “median” refers to the middle score with respect to absolute value from an array of scores. The “middle” refers to the middle score with respect to relative position among a group of scores. For example, let’s say that you have the following scores in Phase I (33, 45, 36, 42, 44) and Phase III (52, 60, 58, 64, 68) (recall that collected scores are divided into three fairly equal groups in the first step of the process).

In Phase I, “42” would represent the median score as it is the middle score with respect to absolute value (i.e., 33, 36, 42, 44, 45). Once the median is established, an “x” would be placed in Phase I at the place where 42 on the vertical axis and 36 on the horizontal axis meet. Why 36 on the horizontal axis? Because it represents the middle score with respect to relative position among the five scores. Here, absolute value is of no concern. We simply want to identify the midpoint of the data run with respect to time (recall that the horizontal axis represents time or weeks of instruction). In this example then, an “x” would be placed above the score of 36 (because it represents the midpoint of the five weeks in which data were collected) at a point that corresponds to 42 on the vertical axis.

Once this is done in Phase I, the same is done in Phase III. In this case “60” represents the median score and 58 corresponds with week 3 of the five weeks in Phase III in which data were collected. In this case, the “x” would be placed just above the score of 58 at a point that corresponds to 60 on the vertical axis. Once each “x” is determined, they are connected with a straight line. This line represents the “trend line” for student performance during the entire progress monitoring period.
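
For readers who want to compute the trend line rather than draw it by hand, here is a minimal Python sketch of the steps above. The function and variable names are ours, not part of any published CBM tool, and the example assumes a 15-week run whose middle five (Phase II) scores are not shown in the passage above:

```python
# Minimal sketch of the Tukey trend-line calculation described above.
# Names are illustrative only.

def tukey_trend(scores, first_week=1):
    """Return the two anchor points (week, median score) and the weekly slope."""
    n = len(scores)
    size = n // 3                              # three fairly equal groups
    phase1 = scores[:size]
    phase3 = scores[-size:]

    def median(group):
        # "median" = middle score with respect to absolute value
        ordered = sorted(group)
        return ordered[len(ordered) // 2]

    # "middle" = middle data point with respect to position in time (weeks)
    week1 = first_week + size // 2
    week3 = first_week + (n - size) + size // 2

    point1 = (week1, median(phase1))
    point3 = (week3, median(phase3))
    slope = (point3[1] - point1[1]) / (point3[0] - point1[0])
    return point1, point3, slope


if __name__ == "__main__":
    # The first and last five scores are the Phase I and Phase III values from
    # the example above; the middle five are placeholders, since the method
    # does not use Phase II scores to draw the trend line.
    scores = [33, 45, 36, 42, 44,
              48, 47, 50, 49, 51,
              52, 60, 58, 64, 68]
    print(tukey_trend(scores))     # ((3, 42), (13, 60), 1.8)
```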



Letter Sound Fluency (LSF)

What is the purpose of the Letter Sound Fluency Test?

The purpose of the Letter Sound Fluency Test is to determine the student’s ability to fluently recode letters into sounds at the kindergarten level. Each of the five alternate forms of the test contains the 26 letters of the alphabet in random order. Therefore, in addition to measuring fluency with letter sounds, student responses can be used to make data-based instructional decisions.


How is the Letter Sound Fluency Test administered?

Letter Sound Fluency is administered individually. The examiner presents the student with a single page showing 26 letters in random order. The student has 1 minute to say the sounds that correspond with the 26 letters. The examiner marks student responses on a separate score sheet. The score is the number of correct letter sounds spoken in 1 minute. If the student finishes in less than 1 minute, the score is prorated.


How do you grade the responses of a child with speech or dialect issues?

Students with speech impairments or dialect issues are not penalized. For example, if the letter is c and a student who has a frontal lisp says “th-ee,” that response is correct.


According to the directions in the Letter Sound Fluency Test, students are asked to tell the sounds the letter makes. If a letter has two sounds (e.g., hard and soft c and g), which response is correct?

The most common sound is counted as the correct response. Therefore, the hard sounds of c (as in cat and cute) and g (as in gate and get) are the correct responses. The soft sounds of c (as in city and cent) and g (as in gem and gist) are incorrect. Regarding vowels, only the short sounds are counted as correct responses.


On the Letter Sound Fluency test, why are some letters capitalized and some lower case?

In the updated edition of the Letter Sound Fluency Test, all letters are lower case.


Is it important to monitor the progress of phonemic awareness?

For kindergarten students, an alternative to measuring letter-sound fluency is to measure phoneme-segmentation fluency, but it is much more difficult to administer and score.



Word Identification Fluency Measure (WIF)

What is the Word Identification Fluency measure and what purpose does it serve?

The Word Identification Fluency measure is designed to monitor the reading progress of first-grade students. Alternate forms, each containing 50 high-frequency words, are used to determine whether students gain fluency in recognizing and correctly pronouncing high-utility words across time.


How is the Word Identification Fluency measure administered?

The Word Identification Fluency measure is administered individually for 1 minute. Typically, a list of 50 words, divided into three columns, is provided to the student. The examiner’s copy is identical to the student copy except that it also contains a line next to each word. The examiner records a “1” on the line if the student pronounces the word correctly and a “0” if the student misses the word. If the student hesitates in reading a word for 2 seconds, the examiner says to “Go on.” If the student attempts to sound out a word, the examiner waits 5 seconds before prompting the student to move to the next word. When 1 minute has elapsed, the examiner says, “Stop” and circles the last word read. The total number of words the student read correctly is recorded, along with the date, on the test protocol and then the score is charted on the student’s progress monitoring graph. If a student finishes reading the entire list of 50 words in less than 1 minute, the examiner prorates (i.e., adjusts) the student’s score.


How should I score the student’s response if he/she sounds out each sound in the word but doesn’t say the entire word fluently?

If the student sounds out the word correctly, even if the word is not pronounced quickly, the word should be scored as correct. For some sight words, the student may be able to use decoding skills to sound out the word. The examiner should keep in mind that the student takes up testing time by trying to sound out a word. Later, as a student acquires more skill in fluency or learns to recognize words by sight, he/she will be able to say the word more quickly and be able to read farther on the list, ultimately scoring more words correct. Of course, if any sound is not correct while the student sounds out the word, the word is scored as incorrect (unless the student self-corrects).


How should I score a student’s response if he/she corrects a word that was first mispronounced? How should I score a response if the student repeats the word and the second pronunciation is incorrect, even if the original pronunciation was correct?

For self-corrections: The examiner scores the word as correct if the student self-corrects within the time period allowed. To take an extreme example, if the student mispronounces the first word in the list, the examiner scores it as incorrect and waits for the student to pronounce the second word on the list. After the student hesitates for 2 seconds, the examiner would mark the second word wrong and tell the student to proceed to the next (i.e., third) word. However, if the student says the first word incorrectly but then self-corrects within the 2-second hesitation period that the examiner had been counting toward the second word, the examiner would go back to the first word and mark it as correct. If the student says the first word incorrectly but then starts to sound it out, the examiner may wait 5 seconds for a possible self-correction before prompting the student to go to the next word. If the student does provide a self-correction, the word is scored as correct.

For incorrect repetitions: The student’s last response within the allotted time period is scored. For example, even if the student pronounced the word correctly the first time and then mispronounces the word when repeating it, the word is scored as incorrect. However, if the student just repeats a correct word, the repetition is counted as correct.
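
A small Python sketch of the scoring rules above (the function name is ours): the last response given within the allowed time window is the one that counts, so a self-correction is scored as correct and an incorrect repetition is scored as incorrect.

```python
# Sketch of the scoring rule described above: the last response given within
# the allowed time window is the one that counts. Names are illustrative.

def score_word(attempts_in_window):
    """attempts_in_window: booleans (True = pronounced correctly), in the
    order the student said them before the response window closed."""
    if not attempts_in_window:
        return 0                          # no response: scored as incorrect
    return 1 if attempts_in_window[-1] else 0

print(score_word([False, True]))   # 1: mispronunciation, then self-correction
print(score_word([True, False]))   # 0: correct first, incorrect repetition
print(score_word([True, True]))    # 1: simple repetition of a correct word
```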


If a student hesitates for two seconds on a word, why doesn’t the teacher tell them the word before instructing them to “Go on?”

If a student hesitates for 2 seconds on a word in the Word Identification Fluency measure, the examiner provides only the prompt to “Go on” and does not tell the student the correct word. Of course, the teacher may make note of particular words that were missed and may provide instruction later on those words. Rapid recognition of particular words, that is, high-frequency words, is the critical aspect of the Word Identification Fluency measure. However, knowledge of one high-frequency word does not affect performance on other words. Therefore, the examiner does not want the student to focus any more attention on the word for which he/she hesitates; rather, the student should be focusing on the next isolated word.


A list of 50 words may seem overwhelming to some of my first graders. Can I put the words on individual flash cards instead?

The Word Identification Fluency measure should be administered in the same way each time. Given the same amount of time, a student is more likely to be able to read a greater number of words from a list than from flash cards, because, with a list of words, the student does not have to wait for the examiner to flip a card to the next one or wait as the examiner records a student’s response while still manipulating the flash cards. However, when using the 50-word list, a portion of the words may be covered while the student reads. The examiner may use a blank sheet of paper to cover up two columns of the words while the student reads down the first column. When the student finishes reading the first column of words, the examiner moves the blank sheet to expose an additional column of words. Having the examiner copy attached to a clipboard facilitates examiner recording while, at the same time, frees one hand to move the blank sheet to cover/reveal one or more of the columns of words on the student list.


How is the student’s score prorated (i.e., adjusted) if a student finishes reading a Word Identification Fluency list in less than 1 minute?

If a student finishes reading the Word Identification Fluency list of 50 words in less than 1 minute, the examiner should record the number of seconds it took the child to read the list and must also count the number of words the student read correctly. Then the examiner completes the following formula to determine how many words the student would likely have read correctly in 1 minute if he/she had continued reading at the same rate from a longer list of words.

(number of words read correctly/number of seconds it took to read list) x 60 = estimated number of words read correctly in 1 minute

Example: The student finished reading the list of 50 words in just 54 seconds and got 44 words correct. What score should be plotted on the student’s progress monitoring graph?

(44/54) x 60 = 0.815 x 60 = 48.9. We estimate that the student would have read approximately 49 words correctly in 1 minute had we provided more words and timed the student for 1 minute. We place 49 on the student’s graph to indicate the number of words read correctly in 1 minute.
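
The same computation as a small Python helper, using the formula and the worked example above (the function name is ours):

```python
# The proration formula given above, wrapped in a small helper.

def prorate(words_correct, seconds):
    """Estimated words read correctly in 1 minute when the student finishes
    the list in under a minute."""
    return round((words_correct / seconds) * 60)

print(prorate(44, 54))   # 49, the value plotted in the example above
```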


How are words selected for the Word Identification Fluency measures? Why are these words chosen? Shouldn’t I just use special vocabulary words from the student’s language arts program for the word lists?

Being able to read sight words fluently is a critical reading skill, especially for beginning readers. Although teachers should work to expand a student’s oral language vocabulary and teach specialized reading vocabulary within text passages, young students need to be able to read high-frequency words with ease if they are to become fluent readers. Consequently, the Word Identification Fluency measures consist of 50 words selected randomly from the Dolch word list of 100 frequent words or from an educator’s guide of 500 frequently used words in reading (Zeno, Ivens, Millard, & Duvvuri, 1995). When the pool of 500 words is used to construct the Word Identification Fluency lists, 10 words are selected randomly from each set of 100 words (a sampling sketch follows the ordering information below). The reference for the educator’s guide is provided below, as well as information for purchasing word lists that have already been constructed.

Zeno, S. M., Ivens, S. H., Millard, R. T., & Duvvuri, R. (1995). The educator’s word frequency guide. New York: Touchstone Applied Science Associates.

Available: http://www.tasaliteracy.com/wfg/wfg-main.html

20 tests for Word Identification Fluency for Grade 1 are available from:

Phone: 615-343-4782

Mail: Diana Phillips

Peabody College of Vanderbilt University
Box 328
230 Appleton Place
Nashville, TN 37203-5721

23 tests for reading isolated words are available from:

http://www.edcheckup.com

Edcheckup LLC
7701 York Avenue South - Suite 250
Edina, MN 55435
Telephone: 952-229-1441
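
For illustration, here is a minimal Python sketch of the sampling scheme described above: 10 words drawn at random from each 100-word frequency band of a 500-word pool, giving a 50-word form. The band lists in the demo are placeholders, since the real pool comes from the word-frequency guide cited above, and shuffling the final form is our choice rather than a stated requirement.

```python
# Sketch of the sampling scheme described above. The tiny demo bands are
# placeholders; the actual 500-word pool comes from the cited guide.

import random

def build_wif_form(frequency_bands, words_per_band=10, seed=None):
    """frequency_bands: lists of words, most frequent band first."""
    rng = random.Random(seed)
    form = []
    for band in frequency_bands:
        form.extend(rng.sample(band, words_per_band))
    rng.shuffle(form)    # mixing the bands is our choice, not a requirement
    return form

# Demo with placeholder five-word bands, drawing two words from each.
demo_bands = [["the", "and", "you", "was", "for"],
              ["said", "have", "they", "with", "this"]]
print(build_wif_form(demo_bands, words_per_band=2, seed=0))
```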


Can Word Identification Fluency measures be reused?

Identical forms should not be used on successive testing occasions. However, once an entire set of alternate forms has been used, the examiner may reuse this same set of forms. Lengthening the interval of time between testing with an identical form minimizes the chance the student would remember the same words in the same order.


How should I score a student’s response if the student has an articulation problem or mispronounces a word but always pronounces the word that same way in his/her own speech?

The examiner would not count as errors any mispronunciations that he/she knows are due to difficulties with articulation or are related to dialectical differences (i.e., variations in pronunciation that conform to local language norms). However, in order to make such a decision, the examiner would need to have knowledge of the student’s speech outside of the testing situation or have knowledge of local dialect. Consequently, when the examiner is uncertain, a more conservative approach of marking the item as incorrect is recommended.


Passage Reading Fluency (PRF)

Passage Reading Fluency seems similar to some other measures (e.g., Running Records, Reading Miscue Inventory, and other types of Reading Inventories). How is Passage Reading Fluency different from these other measures?

Passage Reading Fluency (PRF) is a standardized assessment that has a strong research base demonstrating its reliability (i.e., stability of scores) and validity (i.e., it measures what it claims to measure). PRF measures rate as well as accuracy, while most other measures focus only on accuracy. PRF is quick and efficient to administer and score (about 1 minute), while other measures can take up to 30 minutes. PRF has multiple passages at each grade level and is a sensitive index of student skills, making it an excellent tool for progress monitoring.


How is Passage Reading Fluency administered?

Passage Reading Fluency is administered individually. For each PRF reading probe, the student reads from a “student copy” that contains a grade-appropriate reading passage. The examiner scores the student on an “examiner copy.” The examiner copy contains the same reading passage but has a cumulative count of the number of words for each line along the right side of the page. The numbers on the examiner copy allow for quick calculation of the total number of words a student reads in 1 minute.
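
As an illustration of how those cumulative counts are produced, here is a minimal Python sketch (the function name and sample passage are ours) that prints each line of a passage with a running word count along the right, as on an examiner copy:

```python
# Sketch: print each passage line with a cumulative word count at the right,
# as on an examiner copy. Function name and sample text are illustrative.

def examiner_copy(passage_lines, width=60):
    total = 0
    for line in passage_lines:
        total += len(line.split())           # words on this line
        print(f"{line:<{width}}{total:>4}")  # cumulative count at the right

examiner_copy([
    "The little dog ran down the hill to the pond.",
    "He splashed in the water and barked at the ducks.",
])
# The little dog ran down the hill to the pond.                 10
# He splashed in the water and barked at the ducks.             20
```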


If the child stammers and then says the word, does that count as a mistake?

It counts as a mistake only if the child hesitates for longer than 3 seconds, at which point you supply the word and score it as an error. Those who work with students who have speech and language difficulties often ask what to do if a child stutters or stammers, and whether PRF is a good measure to use when monitoring these students’ progress. The answer is that it should work just fine as a measure of reading progress as long as the teacher is consistent. So the teacher should take care to keep track of the time the student hesitates and make sure to supply the word after 3 seconds. Since the student’s progress is compared to his/her own previous performance, as the student improves his/her speech and learns better strategies for reading, the teacher should see increases in reading performance.


In Passage Reading Fluency, if the student hesitates on a word, why do you tell them the word before they move on?

With Passage Reading Fluency, reading in context is critical. Although the examiner doesn’t correct each error the student makes, the examiner does provide the word if the student hesitates for 3 seconds. In this way, the flow of the passage is not disrupted, and the student may be able to use knowledge of that word in context in reading the rest of the passage.


Since we use “looking at the picture” as one of our reading strategies, does the passage reading section for lower grades (i.e., 1st and 2nd) include pictures?

The passages do not include pictures for precisely that reason. We don’t want students using pictures as cues while they read. We want an un-aided measure of how they perform on text that is novel to them. Remember that during the PRF task, teachers are not necessarily teaching reading strategies, but are assessing the effects of the strategies that they teach and use during the rest of the day. So, you would see the effects of the use of the strategy “looking at the picture” and any other strategies you’re using, as an outcome on the students’ PRF scores.


What are PRF benchmarks for 4th grade and up?

4th grade: 25th percentile, 100 wpm; 50th percentile, 125 wpm
5th grade: 25th percentile, 110 wpm; 50th percentile, 140 wpm
6th grade: 25th percentile, 130 wpm; 50th percentile, 160 wpm


How does one address prosody (reading expression) within reading fluency?

PRF is an overall measure of reading proficiency, meaning that it is a good indicator of things such as comprehension, prosody, and fluency. Researchers like Jay Samuels (repeated reading, automaticity theory) would say that as students become more fluent readers, cognitive resources are freed up so that they can understand more of what they read and can focus on other aspects of reading like prosody. Some teachers have students retell what they have read after their one-minute PRF task, and other teachers use a reading expression scale from 1 to 10 to describe the students’ prosody (i.e., 1 = the student read with little or no expression; 10 = the student read with a high degree of expression). However, using scales and teacher judgment to rate prosody is subjective. In addition, adding extra tasks to the CBM procedures reduces the time savings and efficiency of the procedure. You can be confident that PRF is addressing overall reading proficiency without adding all of these extra ratings and scales.

LaBerge, D., & Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293-323.


When scoring a reading passage, is there a difference between proper and common nouns? If a student misses a proper noun one time, do you continue to count it as incorrect throughout the passage?

Most often, teachers choose to have all words counted equally, so that if a student misses a proper noun, it counts as an error, just as a common noun would. If you have a passage with many difficult proper nouns (difficult names of people and countries, for example), then you may want to supply these words and not count them as errors. The most important rule is to be consistent. If you decide to supply difficult proper nouns, make some decisions about which ones you will supply, and then always supply those and count them as correct. In our research, we count a word as incorrect each time the student reads it incorrectly, even if it is a proper name. For example, if the student misses a proper name in the first sentence, and then says the word incorrectly again in the third sentence, this is counted as another error.


All of my students’ oral reading scores go up on one passage and all go down on another passage. Do these passages have different levels of difficulty even though they are supposed to be at the same level?

There is no way to assure that all passages used are at the exact same level of difficulty. Passages (even taken from the same level) are going to vary. In addition to passage difficulty, student performance may vary from week-to-week for a number of reasons – lack of sleep, problems with friends, being hungry, etc. That’s why it is important to look at the overall trend of the data (it’s kind of like the stock market). Every data point that is collected adds stability to the measure of reading performance. This problem can be dealt with by measuring frequently (once a week) and taking the median of 3 passages at each measurement period.
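
A small Python sketch of that median-of-three step, assuming three probe scores are collected at each weekly measurement (the weeks and scores below are illustrative):

```python
# Sketch: reduce three passage scores per week to one charted data point by
# taking the median, as suggested above. Values are illustrative.

from statistics import median

weekly_probes = {          # week -> words correct on each of three passages
    1: [41, 52, 44],
    2: [47, 43, 55],
    3: [50, 58, 49],
}

charted = {week: median(scores) for week, scores in weekly_probes.items()}
print(charted)   # {1: 44, 2: 47, 3: 50}
```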


If the passages have different levels of difficulty, what should we do? Are they still useful for measuring?

Even if the passages are somewhat different in level of difficulty, they can still be useful for measuring progress. It is important to stay consistent, such as monitoring within the same level (e.g., level A, B, or C) or grade level when tracking students’ progress over time. It is likely that passages in a given level will vary in difficulty, but if students are monitored frequently enough, this shouldn’t make much of a difference.


Should I have my students practice reading passages out loud for one minute?

No. Reading out loud is NOT the intervention – it is an indicator of reading proficiency.


Should I count mispronounced words wrong for ELL students? Even if the student mispronounces a word due to an accent? Should I count mispronounced words wrong for students who speak with a different regional dialect?

If a student mispronounces a word, and this mispronunciation is due to an accent, or different regional dialect, the word should be scored correct. A distinction should be made between incorrectly pronounced words and words that are pronounced differently due to accent or dialect.


For the “maze” test, is it always multiple choice? Is it ever just a straight cloze procedure?

In the maze research for CBM, the researchers have always provided the correct choice and two distracters. The purpose of the maze task is for students to move quickly through text, reading to themselves and selecting correct choices. If it were a straight cloze task, students would be prompted to think of the correct word, try to spell that word, and write that word legibly. This would take much longer and would raise other questions, such as, “Do we count a word as correct if it has minor misspellings?”
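
For illustration only, here is a minimal Python sketch of a single maze item as described above: the deleted word is presented alongside two distracters and the student selects one. How target words and distracters are chosen varies across maze materials, so the names and words below are placeholders.

```python
# Sketch of one maze item: the target word is shown with two distracters in
# random order. All names, words, and the target position are placeholders.

import random

def maze_item(sentence_words, target_index, distracters, seed=None):
    rng = random.Random(seed)
    choices = [sentence_words[target_index]] + list(distracters)
    rng.shuffle(choices)                     # present choices in random order
    item = list(sentence_words)
    item[target_index] = "(" + " / ".join(choices) + ")"
    return " ".join(item)

print(maze_item("The dog ran to the pond".split(), 2, ["blue", "sing"]))
# e.g. The dog (blue / ran / sing) to the pond
```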



Where do I go from here?

You say to keep the child at the same “level” all year long. What if they make great progress and get 100% accuracy for several trials? Do you still keep them at that level until the year is over?

Yes, students’ progress is monitored at the instructional level that is determined during the initial assessment. The purpose of progress monitoring is to document the students’ progress throughout the year using a constant criterion: the student’s instructional level at the initial assessment. Even if a student reads with 100% accuracy as the year progresses, their reading rate usually increases. Advancing the student to more difficult text will not be an accurate gauge of progress because the criterion has been changed to a higher standard.


Some of my students’ scores are going down instead of up. Does this mean that they are not learning or that they are actually becoming worse?

There are different factors that might lead to a decrease or lack of progress in a student’s performance. It is important to look at performance over time. If a student’s scores are not increasing, it is important to continue monitoring the student frequently and to modify instruction to accelerate his/her progress.


Some of my students are making progress but they are still not meeting their goal. Should I lower their goal?

No, instead of lowering the goal, we might ask: is there anything I can do differently, or is there a need for an instructional change? And remember, there will be individual differences across students. Students will not always grow at the same rate.


After a student has demonstrated a need for a different intervention, where can I find these interventions? Do these interventions increase in intensity as one is discarded as ineffective and another is tried?

This is a very common question and one that is hard to answer simply, since each case is usually handled differently. However, an online search is probably a good place to start. Searching terms such as “reading interventions,” “teaching reading,” etc., will likely start you off in the right direction.

Some sites we can suggest include:

The Access Center: Improving Access for All Students K-8. www.k8accesscenter.org; and the National Center on Accessing the General Curriculum. www.cast.org/ncac.
