Home > Literacy Centers, Reading > Reading Counts! Claims and Counterclaims

Reading Counts! Claims and Counterclaims

Accelerated Reader or Reading Counts!

AR or RC?

The purpose of this article on Reading Counts! is threefold:

1. To briefly summarize the basics of the Reading Counts! (RC) independent reading management program.
2. To analyze three key claims made by Houghton Mifflin Harcourt (HMH) regarding the efficacy of the RC (formerly Scholastic Reading Counts!) program and to provide counterclaims from reading researchers, librarians, students, teachers, and Yours Truly.
3. To promote my own reading intervention program at the end of the article, with free teaching resources 🙂

Background

I previously ventured into the deep waters of independent reading management programs a number of years ago with my article, The 18 Reasons Not to Use Accelerated Reader. Accelerated Reader™ is the most popular independent reading management program, with 180,000 book titles (as of January 2019) assigned a Reading Practice Quiz. RC is the second-place challenger with 45,000. Teacher comments on my article tend to focus more on abuses of the program than on the program itself. Many teachers are quite defensive about their use of the AR program. Understandably so. We teachers view our instructional choices as reflections of our professionalism. Curriculum is personal. In anticipation of similar comments to this article on Reading Counts!, I would like to preemptively respond by saying, “I’m sure that you are doing your part to adapt the Reading Counts! program to the needs of your kids, and I respect your professional judgment that you know your students best.” Please don’t shoot the messenger! However, as I re-read “The 18 Reasons Not to Use Accelerated Reader” in preparation for this article, I would have to say that most of the problems in the AR program apply to the RC program as well. I won’t cover the same ground in this article. However, I will analyze three of the claims made for the RC program, which I see as being more exclusive to this program. But first, a brief overview of how the RC program works.

How Reading Counts! Works

  • A school or district pays a school start-up fee of $375.00 and is assigned a sales representative. The RC software management program is licensed for an annual fee of $4.00 per student (a lower price for 2019). The reading placement and monitoring assessment, recently re-named the Reading Inventory (RI), costs an additional $4.00 per student. So, if my math is correct, that’s $8.00 per student, or $4,000.00 per year for a 500-student elementary school. Plus, more money…
  • The school and/or district re-allocate portions of their budgets to purchase books included within the RC program. Currently, RC has about 45,000 titles, but unlike the books in the AR program, the company makes money from each sale, because HMH publishes them! These purchases will necessarily become an annual budget item.
  • The HMH sales representative in-services school librarians, teachers, and administrators (lots of online help, as well) on how to implement the RC program. Suggestions as to how to inform and work with parents and corresponding resources are provided. The program resources are relatively easy to use, but time-consuming.
  • The classroom teacher or librarian administers the computer-adaptive Reading Inventory (RI) as a reading placement test to all students participating in the RC program. This test provides a Personal Lexile® score for each student.
  • Teachers use the Student Achievement Manager (SAM) data and management system to generate student and class reports. The reports list the results of the RI as a Personal Lexile® number (level or measure) for each student and a class Lexile average. A higher Lexile number indicates a higher reading level ability.
  • The reports also list the students’ optimal Lexile text readability levels (a numerical range). A text’s Lexile level is determined by its semantic and syntactic degree of difficulty and sentence length. Once students know their reading levels, they can select books from the Search Book Expert Online, within these reading levels. Although RC is a Lexile-based program, it also includes grade-level equivalency and guided reading levels in this search engine. Additional filters include grade-level interest (K–2, 3–5, 6–8, high school, and high interest/intervention), fiction and non-fiction, subject areas, genre, and curriculum-integrated books. Note that the HMH reading intervention programs, READ 180 Next Generation® and System 44®, include some RC titles for their independent reading rotations.
  • Teachers and students set reading goals in terms of a point system. Each book is assigned a specific point value based upon its length and text complexity. Many teachers establish a monthly points requirement.
  • Once students have finished their books, they take a corresponding quiz on the computer, or the teacher may choose to print the quiz. Although the test bank for each quiz includes 30 items, the default number of questions is 10. The RC authors and sales representatives make much ado about the larger quiz bank of questions compared to that of the AR program. They claim that the randomized 10-question draw makes it harder for students sitting side-by-side to cheat. This may be true; however, a quick search turned up plenty of RC quiz “cheat sites,” just as with the AR program. Where there’s a will, there’s a way. Students are allowed to examine their incorrect responses, but there is no pay-off for doing so if the quiz re-takes use different questions.
  • If the students achieve a predetermined score (mastery criteria set by the teacher), they receive a “congratulations screen” and an opportunity to rate the book they read on the “Read-o-Meter.” Students can also check their own RC Student Progress Report. Points are awarded based upon the percentage of quiz questions answered correctly. If the students do not achieve mastery, the teacher may require them to read the book again and retest or re-visit the students’ RI Lexile level range and the level and content of the book. Students are able to take the 10-question quiz 3 times, because there are 30 questions.
  • Teachers generate reports on students’ quiz scores and track the amount of reading and student test scores. They can also receive alerts when a student has not taken a quiz within a given period.
  • Once individual student point goals (usually set monthly) have been mastered, the student receives a certificate of achievement.
  • The Reading Counts! Educator’s Guide provides plenty of reproducibles to supplement the quizzes, such as reading logs, story charts, book reports, parent letters (in several languages), and guides for teachers to write their own quizzes (if the school library does not have the RC book).
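The quiz mechanics described above (a 30-item bank, a randomized 10-question draw, a teacher-set mastery threshold, and points scaled by quiz performance) can be sketched in a few lines of Python. This is a hypothetical illustration only; the function names and the point formula are my own assumptions, not HMH's actual implementation:

```python
import random

def draw_quiz(question_bank, size=10):
    """Draw a randomized quiz from a book's question bank.

    With a 30-item bank and a 10-question draw, two students
    sitting side by side are unlikely to see identical quizzes.
    """
    return random.sample(question_bank, size)

def score_quiz(answers, correct_answers, book_points, mastery=0.70):
    """Return (passed, points_awarded) for one quiz attempt.

    Points scale with the percentage correct; the scaling
    formula here is assumed for illustration only.
    """
    num_correct = sum(a == c for a, c in zip(answers, correct_answers))
    pct = num_correct / len(correct_answers)
    passed = pct >= mastery
    points = round(book_points * pct) if passed else 0
    return passed, points
```

Because each retake draws a fresh 10 questions from the same 30-item bank, three attempts can (at most) exhaust the bank without repeating a question, which is presumably why three retakes is the limit.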

Claims and Counterclaims

Claim 1: Students improve their reading more when the complexity of the text they read matches their reading ability. The best test to measure that optimal match or zone of proximal development (Vygotsky, 1978)? The HMH Reading Inventory. Why? The RI is both a criterion-referenced (compared to a fixed goal, such as a Common Core Standard) and norm-referenced (compared to other students) test. This is important because the test design allows teachers to administer the RI twice more within the school year to monitor progress. The Lexiles, which the RI uses, have improved readability assessments (standard errors of measurement have been minimized, and the amount of comprehension variance that can be explained by text difficulty has been improved). Accelerated Reader’s STAR test doesn’t have those advantages.

Counterclaim: Granted, the RI is state of the art in terms of Lexile levels and matching students to texts, and the ability to administer the test three times per year does provide a valid measure to monitor progress. But the entire design of the RC program begs the question: it assumes what has yet to be proven. As noted reading researcher Dr. Tim Shanahan asserts,

…Lexiles have greatly improved readability assessment … and yet we are in no better shape than before since there are no studies indicating that if you teach students at particular Lexile levels more learning will accrue. http://www.readingrockets.org/blogs/shanahan-on-literacy/teaching-books-students-reading-levels

…we have put way too much confidence in an unproven theory. The model of learning underlying that theory is too simplistic. Learning to read is an interaction between a learner, a text, and a teacher. Instructional level theory posits that the text difficulty level relative to the student reading level is the important factor in learning. But that ignores the guidance, support, and scaffolding provided by the teacher. [In doing so, educators] have striven to get kids to levels where they will likely learn best with minimal teacher support. https://shanahanonliteracy.com/blog/rejecting-instructional-level-theory

Matching the right books to readers is simply more complex than the quantitative Lexile approach RC uses. Content, theme, and sophistication of thought, as well as the age and maturity of the reader, are critically important factors to consider when students select books for independent reading. Most would find the following strictly quantitative Lexile measurements, listed in parentheses, to be inappropriate criteria for these grade levels.

  • 2nd Grade: Night – Wiesel (570)
  • 3rd Grade: The Sun Also Rises – Hemingway (610); Twisted – Anderson (680); Incarceron – Fisher (600)
  • 4th Grade: Grapes of Wrath – Steinbeck (680); The Color Purple – Walker (670)
  • 5th Grade: For Whom the Bell Tolls – Hemingway (840); Kite Runner – Hosseini (840); A Farewell to Arms – Hemingway (730); Cat’s Cradle – Vonnegut (790)
  • 6th Grade: As I Lay Dying – Faulkner (870); The Sound and the Fury – Faulkner (870); To Kill a Mockingbird – Lee (870); Fahrenheit 451 – Bradbury (890)

http://www.unleashingreaders.com/?p=8891

Additionally, the authors of the Common Core State Standards, with their emphases on text complexity, specifically challenge the notion that reading instruction should focus solely on texts at student ability levels. The authors cite research suggesting that with such scaffolds as close reading, even struggling readers can access significantly more complex text than that to which they have been traditionally given access. https://achievethecore.org/content/upload/Implementation%20-%20Issues%20With%20a%20Leveled-Only%20Text%20Approach[1].pdf

“Below are bibliographic citations for the 26 studies referenced in Shanahan (2014) regarding students making gains with more complex text when given appropriate scaffolding. In addition, abstracts and full-text PDFs of all studies are available as well. These references were provided by Shanahan in ‘Building Up To Frustration Level Text’ in Reading Today Online, available here:”

https://www.literacyworldwide.org/blog/literacy-daily/2014/09/02/building-up-to-frustration-level-text

Furthermore, reading research has repeatedly demonstrated the important variable of prior knowledge with respect to reading comprehension. When readers have significant prior knowledge on a topic, familiarity with the genre, or experience with the author’s writing style, even high Lexile level texts can be accessible. Prior knowledge and scaffolding relevant content and context can often trump the quantitative challenges of complex semantic and syntactic text for students.

Motivation is another significant variable in matching readers to text that can override the limitations of the RC Lexile levels. My youngest son was in 4th grade when the last Harry Potter novel, Harry Potter and the Deathly Hallows, came out. Clearly, the quantitative Lexile level of 880 should have prevented his MA reading specialist father (me) from purchasing this “frustration level” book. Instead, I dutifully ignored the quantitative data and waited in line with my fourth grader for the midnight release of this treasured book. My son plowed through the book with a high level of comprehension. By the end of fourth grade, he was reading significantly above grade level, thanks to the motivational influence of J.K. Rowling and the dozens of peers who were concurrently reading and discussing that book during recess.

Others would agree that reader motivation is far more important than instructional reading levels in book selection. From Ricki Ginsberg’s article, “This is my Anti-Lexile, Anti-Reading Level Post” (Ginsberg is Assistant Professor of English Education at Colorado State University):

I’m a 6th grader and when I took a Lexile test for my grade, I got stuck with books I hate so much. We had to search for books in my Lexile. I am so bored of those books. I want to read whatever I want to.

I took my grandson (a few years ago) to his book fair to purchase some books with him. He chose a few, and then we went back to his classroom to get his things, where I met his teacher. She took a look at the books he had chosen, and was excited about, and said, “Oh, I think these are too hard for you. You need to choose ones more at your level.” She didn’t know that I was a teacher, and I didn’t tell her. I almost hit her, but I didn’t do that either. She was the one who pretty much stopped his excitement about reading…

As a librarian, I have fought for years against leveling books. I was supported by my district years ago against AR, but my job as a librarian was shifted to support classroom curriculum instead of supporting reading enjoyment, the reference process, and library skills. Now a new deputy superintendent, whose old district used a Lexile-based reading program, is spending money on a program that is Lexile-leveled. While library books are hardly given any budget money, tens of thousands are being spent… The skills that teachers built by learning how to “fit” a book to a student and teaching students to self-select challenging and interesting reading material are being prostituted to paying publishers for poorly written, formulaic books dressed up with attractive level numbers. It is a disservice to our students that ultimately destroys their confidence in becoming independent readers.

Unshelved

Used with permission www.unshelved.com

Claim 2: RC provides the accountability to ensure that students are reading independently.

At the heart of this powerful program is the practice provided by its quizzes. Unlike other reading assessment programs, no two quizzes in Reading Counts! are the same, struggling readers have the opportunity to retake quizzes, and quiz settings can be customized based on individual students’ needs for extra support or challenge. This quiz quality leads to more accurate and actionable data to keep students on track for success.

[Reading Counts!] automatically generates a quiz that meets each student’s reading needs. Because every quiz provides a true formative, curriculum-based assessment, RC, as a computer-based program, provides immediate feedback and unique opportunities for mastery. Students can review questions that were incorrectly answered. Because each quiz is drawn from a database of up to 30 questions, students not showing an expected level of mastery can retake quizzes with a different set of questions. Research shows that when students are provided with immediate feedback, they are able to self-correct and make academic progress (Bransford, Goldman & Vye, 1991). https://www.hmhco.com/programs/reading-counts

Counterclaim: While the reading research is clear that students who read independently are significantly more likely to outperform peers who do not read on their own (Anderson, Wilson & Fielding, 1988), and those who read more independently score higher on reading tests than those who read less (Juel, 1988; Juel, Griffith, & Gough, 1986; Stanovich, 1986), the research does not support the claim of the RC authors and editorial board that the type of accountability the program uses (quizzes) is necessary to achieve optimal reading gains.

Each of the 45,000 RC quizzes includes a test bank of 30 questions. They are primarily recall questions, with some vocabulary and a minimal number of inferential questions. Few of the questions are relevant to the big ideas or themes of the corresponding books. In essence, the quizzes are designed to hold students accountable for reading their books.

Some researchers, such as Dr. Stephen Krashen, argue that free voluntary reading, without accountability, produces greater reading gains than independent reading programs with accountability, such as the quizzes in the RC program. You may wish to check out my dialogue with Dr. Krashen on in-class independent reading and accountability. I disagree with Dr. Krashen and support independent reading with accountability.

My take is that we teachers have much better methods to hold students accountable for independent reading that also reinforce effective reading practice. For example, as a middle school teacher, I use online peer book clubs and student-parent discussions. I’ve also taught high school ELA and supervised elementary teachers doing the same. Plenty of accountability and practice, using the motivating social nature of reading. And no in-class independent reading; it’s all homework. I’m no guru, but I’m persistent, and I get 80–90% participation (more in the first semester than in the second).

I teach students and their parents how to self-select reading, informed by, but not limited to, word recognition measures. However, challenging books need not be the only books students read. Reading at multiple levels has clear advantages and reflects real-world reading. I also train students how to discuss their reading in their online book clubs with their peers (one daily post and two comments required, using the SCRIP Comprehension Bookmarks… download follows… as prompts), and I pop in to add my 2 cents. At Back-to-School Night (I require at least one family member to attend, and I arrange as many make-up sessions as it takes until I meet with every parent or guardian), I train adults how to hold 3-minute student-led reading discussions, and parents assign points for their kid’s 5-days-per-week independent reading and discussion. I teach in a high-poverty school: 75% free and reduced-price lunch, multi-ethnic, multilingual, etc. If you have tricks up your sleeves to hold students accountable for reading that don’t require additional teacher correction or huge amounts of time, please add them to the comments section of this post. At the end of this article, I link to a nicely organized list of articles and free resources for ELA and reading intervention teachers with quite a few more ideas on independent reading.

In the RC program, the SAM management system tracks individual and class quiz scores and also the number of words students have read in each book. If a student doesn’t pass the quiz after three attempts, she or he loses credit for having read the book. This means that the number of words the student has read is not tallied, and the student doesn’t receive a reward certificate as quickly. If it’s the independent reading that reinforces comprehension, vocabulary acquisition, and fluency, why doesn’t the student receive credit for doing so? The bottom line is that students receive positive reinforcement for mastering quizzes, not for reading. Reading is not rewarded; passing the quizzes is.
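The incentive structure described above is easiest to see when the tracking logic is written out. Here is a minimal sketch in Python, under my own assumption (SAM's internals are proprietary) that words and certificate progress are credited only on a passed quiz:

```python
def record_finished_book(totals, student, words_in_book, quiz_passed):
    """Update a student's running totals after finishing a book.

    Mirrors the counterclaim: the words the student actually read
    count toward the word tally and certificate progress ONLY if
    the quiz was passed. Reading without a passed quiz earns nothing.
    """
    record = totals.setdefault(student, {"words": 0, "books": 0})
    if quiz_passed:
        record["words"] += words_in_book
        record["books"] += 1
    # A failed third attempt leaves the reading itself uncredited.
    return record
```

Two students who each read the same 40,000-word novel end up with word tallies of 40,000 and 0, depending solely on quiz performance: passing is rewarded, and the reading itself is not.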

Claim 3: RC EMPOWERS educators with reports and actionable data at the student, school, and district levels. As a supplementary reading program, RC REINFORCES comprehension, vocabulary, and fluency skills.

Counterclaims: The reports do provide information to the teacher regarding who read what, at what Lexile levels, how many pages were read, what quiz scores were achieved, who hasn’t taken a quiz for a while (alerts), and more. Plenty of information about what your students are and are not doing with respect to their independent reading. All interesting information, but information which takes time to input, analyze, and report (whoever says that technology is a time-saver is crazy), and information which administrators (like your principal) can access and compare to that of your colleagues. Although not advocated by the authors of the RC program, most teachers do use this data in various ways to provide incentives for participation in terms of rewards and/or grades. Of course, the incentives can become problematic; see my article, The 18 Reasons Not to Use Accelerated Reader, for examples. In short, the SAM reports do provide data collection and management functions (ones which could be done by paper and pencil or a simple Excel® spreadsheet in less time at no cost); however, none of this data informs reading instruction.

Next, let’s take a look at the claim about empowering educators with actionable data. Remember, the two assessments of the RC program are the three-times per year, Lexile-based HMH Reading Inventory (used for initial placement and subsequent progress monitoring) and the 45,000 quizzes. To my mind, actionable data should mean teachable data derived from prescriptive assessments that are reliable and valid. Let’s examine whether these two assessments provide information which is teachable.

For example, let’s say the students in your class take the RI during the first week of school. One of your bright students, Amanda, scores an above grade-level Personal Lexile score of 700, while your class average is 550. With the SAM management software, you are able to use that data to match readers to books. However, other than that use (which we’ve already shown to be of questionable value), those initial RI Lexile scores provide no data to inform our reading instruction. On the RI given 3 months later, Amanda improves to a 750, and her average quiz score improves from 80% to 90%, but your class averages the same 550 Lexile level and has not improved its 70% quiz average.

What does that data indicate? Something appears to be helping Amanda improve her reading, but we have no idea what it is. It could be the RC program; it could be the independent reading itself; it could be the reading instruction you are doing in class, though you may not know exactly which instruction is helping; it could be what her parents are doing at home. Regarding your class’s average Lexile and quiz scores, something appears not to be working. But what is that something, so we can do something about it? We don’t know. You could look at subgroups and find out that your girls have improved more than your boys, or one ethnic group over another, etc. But how does the Lexile and quiz data inform our instruction? The short answer? It doesn’t. The RI and quizzes provide no information about which reading skills have and have not been mastered by Amanda or the class as a whole. Neither assessment offers the teacher any specific data regarding what to teach and what not to teach. So why test if it does not provide actionable data?

A good question. Of course, teachers have been creating diagnostic and formative assessments for years that do inform their reading instruction in specific sub-skills. Good teachers are more than willing to test when the data pinpoints what needs to be taught and practiced and what does not require repeated instruction. Like many teachers, I’ve developed my own assessments to inform my instruction. I’ve written and field-tested 13 diagnostic reading assessments with recording matrices and audio files, which provide teachable data. I provide them free of charge to help your students, and because some teachers would prefer not to re-invent the wheel by creating their own teaching resources to correspond to each assessment item. Yes, you can buy those instructional resources from Pennington Publishing. Simply click the link and look in the header to download and print the free assessments. Additionally, skim the Articles and Resources to find over 700 articles of interest to the ELA and reading teacher, including a slew of articles on how to create your own no-cost independent reading program that I think does a better job for students than either the Accelerated Reader™ or Reading Counts! programs.

Both the Accelerated Reader™ and Reading Counts! program authors are careful to label their independent reading management programs as supplementary programs, as they should. However, as every teacher knows, instructional time is reductive: if you add this, you have to take away that. Because both programs are designed for in-class and home practice, AR and RC supplant other instruction, almost always reading instruction. Accepting at face value the RC claim that RC REINFORCES comprehension, vocabulary, and fluency skills, my question to teachers would be… Which would help your students improve their reading more: REINFORCING or TEACHING? Feel free to download my SCRIP Comprehension Strategies TEACHING resource at the end of this article as a reward for slogging through this rather long diatribe. I look forward to your comments.


The Science of Reading Intervention Program

The Science of Reading Intervention Program: Word Recognition includes explicit, scripted instruction and practice with the 5 Daily Google Slide Activities every reading intervention student needs: 1. Phonemic Awareness and Morphology 2. Blending, Segmenting, and Spelling 3. Sounds and Spellings (including handwriting) 4. Heart Words Practice 5. Sam and Friends Phonics Books (decodables). Plus, digital and printable sound wall cards and speech articulation songs. Print versions are available for all activities. First Half of the Year Program (55 minutes-per-day, 18 weeks)

The Science of Reading Intervention Program: Language Comprehension resources are designed for students who have completed the word recognition program or have demonstrated basic mastery of the alphabetic code and can read with some degree of fluency. The program features the 5 Weekly Language Comprehension Activities: 1. Background Knowledge Mentor Texts 2. Academic Language, Greek and Latin Morphology, Figures of Speech, Connotations, Multiple Meaning Words 3. Syntax in Reading 4. Reading Comprehension Strategies 5. Literacy Knowledge (Narrative and Expository). Second Half of the Year Program (30 minutes-per-day, 18 weeks)

The Science of Reading Intervention Program: Assessment-based Instruction provides diagnostically-based “second chance” instructional resources. The program includes 13 comprehensive assessments and matching instructional resources to fill in the yet-to-be-mastered gaps in phonemic awareness, alphabetic awareness, phonics, fluency (with YouTube modeled readings), Heart Words and Phonics Games, spelling patterns, grammar, usage, and mechanics, syllabication and morphology, and executive function skills. Second Half of the Year Program (25 minutes-per-day, 18 weeks)

The Science of Reading Intervention Program BUNDLE includes all 3 program components for the comprehensive, state-of-the-art (and science) grades 4–adult, full-year program. Scripted, easy-to-teach, no prep, no need for time-consuming (albeit valuable) LETRS training or O-G certification… Learn as you teach and get results NOW for your students. Print-to-speech with plenty of speech-to-print instructional components.

Get the SCRIP Comprehension Strategies FREE Resource:

Get the Diagnostic ELA and Reading Assessments FREE Resource:

Literacy Centers, Reading


  1. December 2nd, 2020 at 10:30 | #1

    Yesterday I came across this commentary, as well as your earlier dissection of AR. I was particularly interested in your citations of Dr. Shanahan, which deal with a subject that has nagged at the back of my mind for years. Way back in 2005, Jim Trelease’s blog carried a critique of AR by Stephen Krashen which dealt with four aspects of the program, but omitted one that I felt would be relevant. This led to an email exchange with Mr. Trelease, most of which I reproduce here (with apologies for its length).
    **********************************************
    Dear Mr. Trelease,
    Several years ago I undertook to write a computer program that would duplicate the functions of Accelerated Reader, running as free software on the Gnu/Linux platform. Being a fully-employed school teacher, time was in short supply, and that project wound up gathering dust on the back shelf. But I finally retired last June and am now in the process of dusting off some of these old projects.

    So I was very interested to see your current discussion of Accelerated Reader. Dr. Krashen discusses four elements of the AR program, but he omits one that I think merits attention in its own right, quite apart from any issue of computer management. That is the feature of levelling, or explicitly restricting the range of reading difficulty available to the student.

    The heart of a reading management program is the library of book tests. If a point system is used to reward reading, then all books do not count the same. Some books will get more points because they are longer, or harder, or both. So when I wrote (320!) tests for my demo program, I had to find some objective way to evaluate what a book was worth. Right away, there was a problem.

    The problem perhaps appeared first in legal form. The simplest and fastest way for me would be simply to use the levels already published by AR in their text listings. But would that involve risk of copyright problems? Better I think about that a while…. At some point my wife, who teaches English at Whittier High School, asked an AR representative how they determined levels, and was told that it was largely based on Fry’s readability formula. I don’t know if that was true; I do know that Fry himself explicitly objected to the claims for accuracy that AR’s usage implies: “I have no reason to think that my graph is any more accurate than Spache’s, for example, and he only knows where a book is within .6 of a year and 50 percent of the time.” (Edward Fry, “Readability Graph: Clarifications…,” Journal of Reading, December 1977)

    Without being cynical, I think that whatever method AR uses does not always seem to work so well. I regret that I don’t have the exact titles in hand, but I will take a chance and suggest the following experiment: Get a book from the series Clifford the Big Red Dog, and another from the Magic Schoolbus that carries the same AR level. If your experience matches mine, you will not need any formal difficulty assessment to see at once that The Magic Schoolbus is much more challenging.

    On investigating, I found an extensive literature dealing with assessment of reading difficulty. A surprising number of professions have had an interest in the subject, and many proposals have been put forward. But as you wade through this literature, a nagging cloud grows in the back of your head. Is any of this really useful? My 21 years in the classroom suggest that levelling is not that big a problem. The kids are not always going to get baby books, because they are boring. And they catch on pretty fast to the problems with books that are way over their heads. (Of course, contests in which books or points are counted can skew this behavior, and not necessarily in desirable directions.)

    I guess the bottom line for me is that I would like to know what evidence there is that artificial restriction of difficulty levels has any beneficial effect on reading instruction. One thing for sure: if your library is small to begin with, it reduces the number of books from which a student can choose.

    *******************************************

    Dear John:

    Your points are very valid about AR. I used to be a big fan but as the program grew beyond a certain point and began to be misused by districts and teachers (both with the points turning into grades and the kids being “locked” into guided reading levels), I’ve begun to sour on much of the program. I’m going to pass along your comments to Steve Krashen, as I think he’d enjoy your perspective.

    — Jim Trelease

    *************************************************

    Personally, I see reading management programs (as well as computer programs generally) as potentially useful tools. But they are not teachers. And as a teacher, I respect the old shop proverb: “A poor workman always blames his tools.”

    One of my biggest objections to these programs remains the extent to which the publishers are milking the limited financial resources of our public schools. My own program has pretty much convinced me of the feasibility of a wiki type site in which teachers can write the tests and schools can access the services for no charge. Any one of our state education departments could create such a site, hire competent moderators to run it, and the savings to states like California or New York would be enormous.

  2. January 16th, 2021 at 09:50 | #2

    Thank you for sharing. We have much in common. I wrote my master’s thesis on McGuffey’s readers.
