
Mastery Disconnect

How the District’s Over-Reliance on Mastery Connect Frustrates Teachers and Harms Students

Over the last few years, Greenville County Schools has become increasingly dependent on Mastery Connect (MC) as its primary summative assessment tool, and over the last two years, GCS has been pushing for increased use of MC as its primary formative assessment tool as well. While I am often an early adopter of new technology, and although I am an advocate for technology-assisted assessment, I cannot share GCS’s enthusiastic support of Mastery Connect, and I have serious concerns about its effectiveness as an assessment tool: because of its poor design and ineffectively vetted questions, the program, instead of helping students and teachers, only ends up frustrating and harming them.

UI Design

Using Mastery Connect with my students is, quite honestly and without hyperbole, the worst online experience I have ever had. I am not an advocate of the ever-increasing amount of testing we are required to implement, but having to do it on such a poorly-designed platform as Mastery Connect makes it even more difficult and only adds to my frustration.

I don’t know where MC’s developers learned User Interface (UI) design. It is as if they looked at a compilation of all the best practices of UI design of the last two decades and developed ideas in complete contradiction to them. If there were awards for the worst UI design in the industry, Mastery Connect would win in every possible category.

Basic Design Problems

Note in the above image that there are more students in this particular class (or “Tracker,” as MC so cleverly calls it) than fit on the screen. However, the only scroll bar on the right controls the scrolling of the whole screen. To get to the secondary scroll bar, I have to use the bottom scroll bar to move all the way over to the right before I can even see it. This means that to see data for students at the bottom of my list, I have to first scroll all the way to the right, then scroll down, then scroll back to the left to return to my original position. Not only that, but the secondary vertical scroll bar is almost invisible. My only other alternative is to change the sorting order to see the students at the bottom.

This is such a ridiculous UI design choice that it seems more to be a literal textbook example of how not to create a user interface than a serious effort to create a useful and easily navigable tool for teachers.

Ambiguous Naming

When we get to the assessment creation screen, we see even more poor design. The information Mastery Connect provides me about a given text is a collection of completely useless numbers:

There is no indication of the text for a given question, no indication of what the actual question is. Instead, it’s a series of seemingly arbitrary numbers.

Scroll Bar Design Reliance

Finally, once I try to look at the completed assessment, I have even more challenges.

To navigate to see the students’ work, I have to deal with four scroll bars. Four! If there were an award for the most inept UI design, this would have to be the hands-down winner. For twenty years now, basic UI design best practice has been to limit the number of scroll bars because the more there are on a page, the more inefficient and frustrating the user experience.

Given design this bad, design that makes it difficult for me even to access information, the fact that the district has made Mastery Connect the center of its assessment protocol is so frustrating that it makes me wonder whether it was really the program itself that sold the district on spending this much money on such a horrific product.

Assessment Results and Screen Real Estate

When I want to look at the results of the assessment, there’s another significant problem: the vast majority of the screen is frozen while only a small corner (less than 25% of the screen) scrolls.

The results of the assessment are visible in the non-shaded portion of the screen. The rest of the screen stays stationary, as if there is a small screen within a screen. Not only that, but that portion of the screen does not include any indication that it scrolls at all: a user only happens to discover this if the cursor is in the lower portion of the screen and the user presses the down-arrow key. Otherwise, a user is just going to face increasing frustration at not being able to view past the first seven students in a given tracker.

A further problem arises when trying to view assessments grouped by standard. In the screenshot below, it’s clear that while the scroll bar is at its lowest extreme, there is still material not visible on the page. How one can access that information remains a mystery to me.

Assessment Creation and Previewing

When creating a CFA, or even deciding which standard to assess for the CFA, it would be useful to be able to browse questions without first creating an assessment. This, however, is not possible. Browsing for content is such a basic function of nearly every website that we take it for granted, but it’s not available in MC.

Organizational and Content Concerns

Question Organization

Many of the questions are two-part queries, with the second part usually dealing with standard 5.1, which for both literary and informational texts is the same: “Cite the evidence that most strongly supports an analysis of what the text says explicitly as well as inferences drawn from the text.” The problem is, when one is creating a formative assessment to target that one standard and one uses the filter view to restrict Mastery Connect to that single standard, the “Part B” question shows up with no indication of what “Part A” might be:

One teacher explained to me that she found a way to fiddle with the question number to trick the program into showing more questions about a given text. Once again, we’re working to overcome the deficiencies of the program.

This is especially problematic for a CFA because we are trying to focus on a single standard. Standard 5.1 is arguably the most foundational of all standards, and this is even reflected in the number of questions per standard: RL-5.1 and RI-5.1 have vastly more questions than any other standard, yet they are usually tied to another question, and it is all but impossible to figure out what that question is.

There’s a certain irony in the fact that the most foundational standard is all but impossible to assess in isolation.

Finally, some standards are grouped together into their parent standard, and this can create significant issues when trying to create an assessment that includes one standard but excludes another. For instance, questions about RI-11.1 and RI-11.2 are combined into RI-11.

  • RI-11.1 Analyze the impact of text features and structures on authors’ similar ideas or claims about the same topic.
  • RI-11.2 Analyze and evaluate the argument and specific claims in a text, assessing whether the reasoning is sound and the evidence is relevant and sufficient; recognize when irrelevant evidence is introduced.

RI-11.1 is about text structure. (Strangely, so, too, is RI-8.2, a fact districts have been pointing out to the state department since the standards were released. If MC were the kind of program the district touts it to be, the programmers would have realized this and dealt with it within the program in some efficient manner.) Standard RI-11.2 deals with evaluating an argument. It says a lot about the state department that they grouped these two ideas together under the main standard RI-11, but it says even more about MC that it then takes these two radically different standards and drops them into the same category. This means that when trying to make a CFA that deals with evaluating an argument, one has to deal with the fact that many of the questions MC provides in its search/filter results will have to do with a completely unrelated topic, as the sketch below illustrates.
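A minimal sketch of the tagging problem, with hypothetical entries (teachers cannot see the bank’s real fields), shows why filtering cannot separate the two standards:

    # Hypothetical question bank in which both sub-standards collapse
    # into the parent tag "RI-11" (entries invented for illustration).
    bank = [
        {"id": 101, "tag": "RI-11", "assesses": "text structure"},     # really RI-11.1
        {"id": 102, "tag": "RI-11", "assesses": "evaluating claims"},  # really RI-11.2
        {"id": 103, "tag": "RI-11", "assesses": "text structure"},     # really RI-11.1
    ]

    # The tag is the only field a teacher can filter on, so a search for
    # argument-evaluation questions drags the text-structure ones along.
    results = [q["id"] for q in bank if q["tag"] == "RI-11"]
    print(results)  # [101, 102, 103] -- no way to isolate RI-11.2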

Question Length and Effective Use of Class Time

There are almost no ELA questions in the MC database that are not text-based. Even a standard like RI-8.2 (“Analyze the impact of text features and structures on authors’ similar ideas or claims about the same topic,” which, again, is identical, word for word, to RI-11.1) has entire essay-length questions. If one wants ten questions about standard RI-8.2/RI-11.1, instead of delivering ten single-paragraph questions asking students to identify the text structure in play, MC would likely produce eight multi-paragraph texts for those ten questions. This means that such an assessment would take an entire class period for students to complete.

Indeed, it has taken entire class periods. Below are data regarding the time required to implement CFAs during the third quarter of the 2023/2024 school year. Periods 4 and 5 were English I Honors classes; periods 6 and 7 were English 8 Studies classes.

Period    Total      Students    Man-Minutes/Hours
4         0:26:00    24          10:24:00
5         0:25:00    29          12:05:00
6         1:46:00    29          51:14:00
7         1:43:00    29          49:47:00
Total                            123:30:00

The roughly one hour and forty minutes per English 8 class amounts to two entire class periods spent on Mastery Connect. That is two class periods of instruction lost because of the assessments Mastery Connect creates. Multiplied out for all students, it comes to an astounding 123 man-hours spent on Mastery Connect. This does not take into consideration other district-mandated testing such as benchmarks and TDAs, all done through MC.

It’s difficult to comprehend just what an impact this use of time has on students’ learning. The 123 man-hours spent on MC equate to three weeks of eight-hour-a-day work. That is a ridiculous and unacceptable waste of student time, and that is just for one team’s students. If we take an average of the time spent per class, that comes to 1:05. Multiply that across the school, and we arrive at a jaw-dropping 975 man-hours spent in the whole school just for English MC work. If those numbers carry over to the other classes required to use MC for their assessments, the total time spent on MC assessments borders on ludicrous: 3,900 man-hours. That is just for CFAs. Factor in benchmarks and TDAs administered during the third quarter, and that number likely exceeds 10,000 man-hours spent on Mastery Connect.
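For transparency, here is the arithmetic behind those figures as a minimal sketch; the per-class times come from the table above, while the school-wide student count (roughly 900, implied by the 975-hour figure) and the four MC-tested subjects are assumptions:

    # Per-class CFA time (minutes) and enrollment, from the table above.
    classes = {"P4": (26, 24), "P5": (25, 29), "P6": (106, 29), "P7": (103, 29)}

    team_minutes = sum(mins * students for mins, students in classes.values())
    print(team_minutes / 60)   # 123.5 man-hours for one team

    # Extrapolation (assumed figures): roughly 900 English students
    # school-wide and four subjects required to use MC.
    avg_minutes = sum(mins for mins, _ in classes.values()) / len(classes)  # 65.0
    english_hours = avg_minutes * 900 / 60
    print(english_hours)       # 975.0 man-hours for English alone
    print(english_hours * 4)   # 3900.0 man-hours across four subjects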

A common contention among virtuosos of any given craft is that it takes about 10,000 hours to master a skill, whether it be playing the violin or painting pictures. That’s the amount of time we’re spending in our school in one quarter just to assess students, and Mastery Connect’s inefficiency only compounds the problem.

This excessive time spent on Mastery Connect skews the data as well. Students positively dread using Mastery Connect: on hearing that we are about to complete another CFA, they groan and complain. There is no way data collected under such conditions can possibly be high quality. Add to that the overall lack of quality in the questions themselves, and I am left wondering why our district is spending so much time and money on a program of such dubious quality.

Quality of Questions

Most worrying for me is the baffling fact that many of the district-approved, supposedly vetted questions about a given standard have nothing whatsoever to do with that standard.

As an example, consider the following questions Mastery Connect has classified as having to do with text structure. The standards in question read:

  • RI-8.2 Analyze the impact of text features and structures on authors’ similar ideas or claims about the same topic.
  • RI-11.1 Analyze the impact of text features and structures on authors’ similar ideas or claims about the same topic.

Accurate Questions

Some of the questions are clearly measuring text structures:

Acceptable Questions

Other questions have only a tenuous connection at best:

The above question has, at its heart, an implicit text structure, but it is not a direct question about text structures; instead, it deals with simply reading the text and figuring out which event comes first, second, or third.

The questions above deal with the purpose of the given passage. Text structure is certainly connected to purpose, and that purpose will change as the text structure changes.

Unacceptable Questions

Some questions, however, have absolutely nothing to do with the standard:

The preceding question has nothing to do with text structures, and it is in no way connected to or dependent upon an understanding of text structures and their role in a given text. It is at best a DOK 2 question asking students to infer from a given passage, thus making it an application of RI-5.1; at worst it is a DOK 1 question connected to no standard.

The question above is another example that has nothing at all to do with text structure. Instead, this might be a question about RI-9.1 (Determine the meaning of a word or phrase using the overall meaning of a text or a word’s position or function) or perhaps RI-9.2 (Determine or clarify the meaning of a word or phrase using knowledge of word patterns, origins, bases, and affixes). It cannot be, in any real sense, a question about text structures, and it requires no knowledge of text structures to answer.

The preceding question has more to do with DOK 3-level inferences than text structure.

Multiple Question Banks

Teachers are told to use this bank and not that bank, but why should the question bank matter? If the program is worth the money the district is paying for it, we shouldn’t have to concern ourselves about the question bank. A question about standard X should be a question about standard X. If one question bank is better than another, that speaks more to Mastery Connect’s quality control and vetting process than anything else.

Accessibility Concerns

Due to the poor UI design and the generally poor quality of images used within questions, students with visual impairments are at a severe disadvantage when using Mastery Connect. Images can be blurry and hard to read, and the magnification tool is inadequate for students with profound vision impairment.

Output Concerns

Incompatible Data

I have significant concerns about the efficacy and wisdom of using Mastery Connect as an assessment tool. It’s bad enough that it’s so poorly designed that it appears the developers tried to create a textbook example of bad user interface design: working with MC is, from a practical standpoint, a nightmare. Add to that the fact that the data it produces for one assessment is often incompatible with data from a different assessment, and one has to wonder why any district would choose to use this program, let alone pay for it. Neither of those two issues, though, is the most significant concern I have.

When comparing CFAs to benchmarks, we are comparing apples to hub caps. The CFAs measure mastery of a given standard or group of standards. The resulting data are not presented as a percentage correct but rather as a scale: Mastery, Near-Mastery, Remediation. The benchmarks, however, do operate on a percentage correct, so when comparing benchmarks and CFAs, we are comparing a verbal scale to a percentage. These data are completely incompatible with each other, which renders moot the entire exercise of data analysis. Analytics only makes sense when comparing compatible data. To compare a percentage to a verbal scale makes no sense because it is literally comparing a number to a word. Any “insights” derived from such a comparison would be spurious at best. To suggest that teachers should use this “data” to guide educational choices is absurd. It would be akin to asking a traveler to use a map produced by a candle maker.
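A minimal sketch makes the point concrete; the scores and the two conversion tables below are entirely hypothetical, which is exactly the problem: any mapping from labels to numbers is an invention, and the “insight” flips depending on which invention you choose:

    # A benchmark reports a percentage; a CFA reports an ordinal label.
    benchmark_score = 72                 # percent correct (hypothetical)
    cfa_result = "Near-Mastery"          # no percentage attached

    # Two equally arbitrary label-to-percentage conversions:
    mapping_a = {"Remediation": 40, "Near-Mastery": 65, "Mastery": 90}
    mapping_b = {"Remediation": 55, "Near-Mastery": 75, "Mastery": 95}

    print(benchmark_score - mapping_a[cfa_result])  #  7: "the student improved"
    print(benchmark_score - mapping_b[cfa_result])  # -3: "the student declined"
    # Same data, opposite conclusions: the comparison is meaningless.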

Proprietary Data

While my concerns about proprietary data are tangential to the larger issue of the district’s self-imposed dependency on Mastery Connect, they do constitute a significant concern about Mastery Connect’s data in general. Because a for-profit private company creates the benchmark questions, the questions are inaccessible to teachers. The only information we teachers receive about a given question is the DOK and the standard. Often, the data are relatively useless because of the broad nature of the standards.

I’ve already mentioned the amalgamation of standards RI-11.1 and RI-11.2:

  • RI-11.1 Analyze the impact of text features and structures on authors’ similar ideas or claims about the same topic.
  • RI-11.2 Analyze and evaluate the argument and specific claims in a text, assessing whether the reasoning is sound and the evidence is relevant and sufficient; recognize when irrelevant evidence is introduced.

If I see that a high percentage of students missed a question on RI-11, I have no clear idea of what the students didn’t understand. It could be a text structure question; it could be a question about evaluating a claim; it might be a question about assessing evidence; it could be recognizing irrelevant evidence. This information is useless.

Occasionally, the results of a given question are simply puzzling. In a recent benchmark, there were two questions about standard 8-L.4.1.b, which requires that students “form and use verbs in the active and passive voice.” I had not covered the topic in any class at the time of the benchmark, yet the results were as follows:

            P4 (English I Honors)    P5 (English I Honors)    P6 (English 8)
Question 1  72                       69                       36
Question 2  72                       76                       52

For the second question about active/passive voice, the results among the English 8 students were vastly better, especially in my inclusion class (period 7). Among English I students, however, the results were consistently high even though we had never covered that standard. It would have been useful to see why the students did so much better on one question than the other given that no class had covered the standard.

For the Want of a Sentence

One sentence — one single, simple sentence, the content of which students had already planned in class. It was merely a matter of taking two phrases and generalizing. That was one class’s homework last night. In fulfillment of one of the many instructional standards for the eighth-grade language arts curriculum, students were working to write a single sentence that expressed the main idea of a multi-paragraph nonfiction text. We’d examined the text in class. Student effort hadn’t been stellar, but the majority fulfilled the most basic criteria for the small project. We had in place everything we needed to write that sentence, but we’d run out of time. The homework was something like leftovers: we didn’t have time in class, so finish it at home.

One sentence, probably no longer than ten words. Out of a class of twenty-five, four did the “work” in its entirety, one had begun writing the sentence but was less than half finished, three had the presence of mind to jot the sentence on a piece of paper as I was checking that other students had completed the work, and the rest did nothing.

One sentence, and sixty-six percent of the class was too lazy, too unmotivated to do it. “I had better things to do with my time,” one “student” said. “I forgot,” another said. “I just didn’t want to do it,” a third explained.

In a flash, I saw the possible future, and it was terrifying. Students in the second world — countries like China, Brazil, and India — see what we have, and they want it. Their parents see it, and they want their children to have it. And so they work for it. They work for the education that will give them the job that will allow them to buy that smartphone, that flat-screen television, that car — their little version of the American dream, exported and translated.

“Yet we already have it — we’ve won. We’ve got nothing to worry about,” replies the prevailing (often unacknowledged or even unrealized) consumer “wisdom.” True, we won: in the Cold War, we came out on top. What spurred us? A moment like the one we’re facing now, a moment when we realized our ascendancy was being eclipsed. We’ve grown complacent, though, and most feel our current reality could never truly disappear.

Yet looking at the standings of US students among those from the rest of the world, it certainly does appear that they want education — and all that it brings with it — more than we in the already-ascended West do.

Four Numbers

The setup is simple: two circles of desks, created from triads of one inner-circle desk and two outer-circle desks. Students in the inner circle can talk; students in the outer circle can only listen until they tag in and exchange places with an inner-circle student. However, the desks are usually set up in rows, so students have to rearrange them to get into these circles.

I often race classes to see who can complete the rearrangement the fastest. The results are telling:

My P4 and P5 classes are my honors classes; my P6 and P7 classes are my on-level groups. The result is consistent: the honors classes get the job done faster than the on-level classes.

Why is this?

The on-level classes sometimes refer to the honors classes as “the smart-kids classes,” and I often point out that they’re not smarter. They usually just work harder. They stay focused in class and give their best effort at all times. When I ask them to do something at home, they generally do it. The thing is, they’ve been doing this for years, so they’ve gradually become increasingly better students — better readers, better writers.

I know that for many of the honors kids, there is a socioeconomic element at play as well. They most likely live in homes with more books. Their fathers and mothers are lawyers and teachers, so they see reading and writing modeled frequently. And most of them come from two-parent families, which offer great economic advantages over single-parent households. This is not to say all these factors are true for all honors students or that none of these factors are true for on-level students. There are a lot of factors at play.

Be all that as it may, though, the honors kids get the desks arranged faster. This is connected to executive functioning more than academic achievement. So which came first? Probably neither: both were nurtured at every turn by a number of different adults, and the numbers tell that story.

Monday Thoughts

School Thoughts

We received a new student on our team today: a fifteen-year-old boy from Central America who doesn’t speak a word of English and has not been in school since the first grade.

I have reservations.

I’m not fussing about any extra work entailed by having such a kid in my classroom. I’ve already got two complete non-speakers and a fourth kid who barely speaks English. My reservations are about how effectively I can really help these kids. They are, of course, in my lowest-level classes, which means there are a lot of behavior issues in those classes. I’m supposed to create a new curriculum for these boys because their English is so low that modified materials do nothing for them in my class. In science, yes. In math, certainly. In social studies, a qualified yes. In English class, though? It’s impossible simply to modify the curriculum. This newest student is illiterate in his first language: I can’t modify a curriculum that includes standards like “Determine one or more themes and analyze the development and relationships to character, setting, and plot over the course of a text; provide an objective summary” and “Determine the figurative and connotative meanings of words and phrases as they are used in text; analyze the impact of specific word choices on meaning and tone, including analogies and allusions to other texts.” You can’t do this with pictures. Besides, I struggle teaching the native speakers these things because of their low motivation — teaching a non-English-speaking student with the aid of pictures? Not going to happen. So I’ll have to invent a curriculum for these boys.

Is that type of teaching really in these boys’ best interest? Wouldn’t a part-time immersion with classes like gym and art coupled with a couple of direct English instruction courses be more effective? The people at the district office downtown will say, “No, the data don’t support that.” But I think that’s bullshit. I know from my own experience in Poland that dumping me into an environment where I didn’t speak the language without any direct language instruction would have only frustrated me, and that’s with me being 22 years old at that time. If I were only 14 in such a situation — forget it.

Parenting Thoughts

The Boy’s church league basketball team had their last game this evening, which sadly they lost 22-30. It was a tough season: they went 1-8. But it wasn’t the losing that bothered the Boy so much; it was the unsportsmanlike conduct so many of the players on the other teams exhibited. Tonight, for example, there was one boy who screamed at every shot attempt our team made in an effort to distract our boys.

I had some choice words to say in texts to K about this kid’s behavior.

“Just keep your cool,” she gently reminded me.

“Of course — he’s just a kid,” I replied. But that type of behavior doesn’t come from nowhere. Either his parents never tried to correct him because they saw nothing wrong with it, or they actively encouraged and/or taught him to behave like that.

Were I to coach such a kid, I’d tell him and his parents, “Look, if you do that, I bench you for the quarter. You do it again, it’s for the rest of the game. And every time after that, it’s for the rest of the game.”

The Boy’s inherently empathetic outlook on things means such behavior would never enter his mind. Was that something we had to teach him? I guess we did, but I don’t remember doing so, and I suspect his empathy would keep him from behaving that way even if we hadn’t explicitly taught him.

School Drama

How much of the drama in school comes from adult modeling, both in the popular media and in the home? The norm is to be upset about something, to be stressed about something, to feel wronged by someone. It’s a victimization mentality, a life lived in the passive voice and ordered by second and third conditionals. What we see on the tabloids while waiting to check out at Publix is what the kids try to emulate in their daily interactions with others, both because of what they see in the media and because the media informs the behavior of so many adults around them.

Watch any reality TV show: it is one constant conflict. Granted, it’s a hyped, artificial conflict: this or that individual doesn’t want to get kicked off this or that show, and the backstabbing and conniving of the other participants creates conflict and heightens audience self-identification: “Hey, I’ve been stabbed in the back like that myself!” we say when we see someone’s scheming on reality TV pay off.

That ain’t us

“Every single kid in this class been suspended at least once.”

It was a fair claim, and honestly speaking, I knew the girl who said it might actually be right. At least for half a second, that’s what I thought. A quiet voice beside me reminded me that that probably wasn’t the case.

“I haven’t.”

The shy words came from one of the best students in the class, a hard-working boy who never has any behavior problems. The two girls with whom I was speaking — with whom I’d drifted so far off topic from our classwork that I felt somewhat guilty continuing the conversation and did so only because of a perceived need to explain some basic facts to some confused girls — just looked at him. I jumped in.

“And in fact, I can show you a whole class of students who have never been suspended.” I had in mind my honors group, but times are changing, and being in an honors class no longer necessarily means perfect behavior, so the girls argued, tossing a couple of names at me. Knowing they were likely right, I nonetheless persisted in asserting that none of those students had been suspended.

Finally, the girls turned to the fatalistic refrain of at-risk kids: “Well, that’s them, not us.”

“But it could be you,” I suggested, and one would think I’d suggested that they could fly to the moons of Jupiter under their own power, such were the looks of disbelief.

“That ain’t us!” they insisted.

The Bird

The kids are all taking a benchmark test. We’re spending two hours of each of the two days students will be in school taking a district-mandated benchmark test, which, truth be told, will be of little to no value to me. I know where my students are; I know where we’re going; I know what I haven’t covered. Further, I know the students better than a benchmark could show.

In the midst of all this, a bird flies up to the window and perches on the sill. It cocks its head as it investigates all the humanoid forms on the inside, all hunched over glowing boxes, almost all oblivious to the bird’s presence. Except Anna. She’s sitting next to the window and has watched the bird flutter up. She takes a break from her test and looks over at the bird, smiling and likely grateful for the break the bird’s presence has brought.

Birds come to this window regularly, but their presence injects a bit of tragic chaos into the class atmosphere. Twice this year, birds have flown into the window with a sickening thud, only to lie outside slowly dying of the blunt-force trauma the window and physics delivered. They flap about just outside our window as if trying to distract a predator and lure it away from a nest. These times, though, the birds are not faking.

The Girls

I was on my way out to my car when two little Muslim sisters (I knew this because they both cover their heads with scarves) passed me. I greeted them, and somehow we began talking. A group of their friends, all girls, gathered around us, all talking to me at almost the same time. I asked them where they were from, and one girl said that she was from Afghanistan.

“Do you speak Dari or Pashto at home?” I asked. Her jaw dropped.

“You know those?!”

“No, no, not how to speak them. I just know they exist. I know they’re the primary languages of Afghanistan.”

She smiled ear to ear: “We speak Pashto.”

“I’m from Iran,” another girl said. “I speak Persian at home.”

“Oh — Farsi, right? Isn’t ‘thank you’ in Farsi ‘Mersi’?” I asked.

Another jaw dropped.

“I just always found it strangely beautiful that it’s a loan word from French.”

“Do you speak French?” the lone boy asked.

“Un peu,” I responded, winking, hoping he wouldn’t push me beyond my meager limits in the language.

But before that could happen, one of the young covered girls announced, “I’m Fatima!” They’d been telling me their names, and she finally got hers squeezed in.

“Oh, like the prophet Mohammed’s daughter, right?” I asked.

Her eyes got enormous and she ran back into the classroom, presumably to tell someone.

The fact that I know these little tidbits seems to me simply basic education about other cultures. I know Dari and Pashto are Afghan languages because of our country’s involvement in Afghanistan and because I learned a little about it and its history at that point. I know “mersi” is one way in Farsi to say “thank you” because I sat next to an Iranian woman and her child on a flight from Charlotte to Munich in 2015 when I followed K and the kids to Poland a few weeks after they’d left. I know Mohammed’s daughter was Fatima because I read parts of a book about the supposed apparitions of Mary at Fatima. I know a bit of French because I took two years of it in college. Just a few tidbits of knowledge about these girls’ (and one boy’s) language and culture, but it seemed to make their day.

So little to create so much.

Overheard

“We’re just trying to teach them responsibility,” she said, explaining the reasoning her son’s teacher gave for assigning some work that the mother felt was unnecessary.

The words had hardly left her mouth when her interlocutor jumped in with how he would have responded, perhaps in doing so suggesting how she likely replied or wanted to reply: “That’s my job.”

So many ideas packed into that handful of words.

The overarching notion is that there are some things that a teacher teaches, but there are some things that only a parent teaches. This notion of non-overlapping domains is popular with those who lean right, and it is fast becoming a key right-wing talking point. Whether it’s issues of race or questions of gender, the right is quick to point out that there are things that parents teach and it’s hands-off for everyone else.

I’m certainly not suggesting that there aren’t things that are predominantly in the domain of parents. Religion, for example, at least as far as proselytizing goes, is strictly off-limits for teachers, and rightly so. The problem with religion and issues of science is that the right is constantly redefining what is acceptable. It’s no longer acceptable, some feel, merely to teach students the beliefs and rituals of other religions so that they can be educated about the beliefs and motivations of others. This is growing to include ideas like scientific literacy. Young Earth creationist parents resist the teaching of evolution in schools as an infringement on their religion as much as they resist teaching students the basics of Buddhist belief. If it contradicts or threatens Christian faith, they want it out.

Perhaps none of this applies to the individuals I overheard. Perhaps all of it does. (Living in the South and overhearing this at a Scouting function, I suspect it’s likely that at least some of it does.) What I found most interesting was the realization I had on hearing this: many parents in America have no idea at all what’s going on in schools. Teaching responsibility might very well be something the parent I overheard does regularly and well, but schools are filled with students who are not taught these basic things at home. Teachers have to pick up the slack that negligent parents, overwhelmed parents, single parents, and any other parents leave.

Literal

We’re reading the balcony scene and looking closely at Romeo’s famous monolog (almost a soliloquy) when we get to the second half where he begins comparing Juliet’s eyes to stars:

Two of the fairest stars in all the heaven,
Having some business, do entreat her eyes
To twinkle in their spheres till they return.
What if her eyes were there, they in her head?

“What would happen if that exchange took place? If Juliet’s eyes were replaced by stars and vice versa?”

“Um, she would burn up from the heat of the stars, Mr. Scott,” says Mr. Literalist in the front row.

Sentence Frames

It’s a tough prompt: the analysis required might be too much for my students even at the end of the year; at this point in the year, it’s an impossibility. But I can apply various supports that will help them ease into the whole argument unit.

“What evidence does the author use to support the claim that MLK was the right man born at the right time?”

We’re not evaluating the argument: we’re not even looking to determine the claim. The claim is settled: MLK was the right man born at the right time.

I look over the passage and realize that the key idea is that he was born at the right time. It’s a question of context. He rose to prominence after Executive Order 9981 desegregated the military and Brown v. Board did the same (in theory) for schools. The author also points out that the rise of television helped King and the civil rights movement, as it made it impossible to ignore the brutality directed at the African American community.

I help the students see all this, creating a graphic organizer to put this information into manageable form.

At the end of the lesson, I wrap up by showing how our planning would form an answer:

The author supports the claim that King was the right man at the right time by showing the context of his leadership. For example, the author gives the context of laws and court cases. He explains Executive Order 9981, which banned segregation in the military. He also explains how Brown v Board ended school segregation. In addition, the author gives the context of technology. He points out that television made it impossible to hide how African Americans suffered.

As I say this, I point to each part of the organizer to show where the ideas are coming from.

The next day, I plan for an easy task. We’re simply going to take our graphic organizer and turn it into sentences. “I gave them all the answers yesterday,” I think to myself. “How much of a challenge can this be for them?”

We begin reviewing our work, and I add some more scaffolding: I number the sentences they need to write and add some transitional elements to help them connect things:

Each line, each numbered element becomes a sentence. I remove the parenthetical annotations to make it even easier. I’m hoping students will see “Gives context of laws and cases” and realize the only thing missing is the subject. I don’t expect, or even hope, that they will think in those grammatical terms. All they have to do is read it and think about it:

“‘Gives context of laws and cases.’ Who gives the context of laws and cases?” That’s the first step, but some of them struggle even to realize this.

One young man comes to me for help.

“I don’t know what to do with number two,” he admits.

“Well,” I begin, “read the text for number two.”

“Gives context of laws and cases.”

“What’s missing? What question do you have when you say that?”

He looks at me, a completely blank expression suggesting that there’s so much he doesn’t understand about it that he doesn’t even know where to begin. I decide to simplify.

“Imagine I walk up to you and say, ‘Gives her an apple.’ What question comes to mind when I say that?”

How hard can it be for this kid to see that we have an action here and we have no idea who’s doing it? How difficult can it be to realize that the simplest question in response to this is “Who?”

I finally help him see that we don’t know who is doing the action there and that the question is, “Who gives her the apple?” Then I think we’re ready to return to my original question.

“So, when I say ‘Gives her an apple,’ the obvious question is ‘Who gives her an apple.’ So if I say ‘Gives the context of laws and cases,’ what’s the obvious question?” I don’t even bother looking up at him because he should catch this almost immediately. It’s the same problem. He just stares at me.

Even after I get through to him that we’re trying to figure out who provided the context, he can’t take the next step. I’ve had this problem with other students: they get confused about what we’re really writing about. “Martin Luther King?” they ask sheepishly.

This is a deceptively complex question we’re working on: we’re not asking a question about the contents of the text itself — what it’s about — but the decisions the author made in creating the text. It’s not an analysis of the contents of the text but of the structure of the text, of the process and thinking behind the writing of the text.

But this level of questioning is not even our ultimate goal. We’re ultimately supposed to get students ready to answer questions about evaluating the claim and evidence of an argument. Here, I’m giving them the claim and the paragraph in which to find the evidence. I’m just asking them to figure out what the evidence is. I’m not asking them to find the claim. I’m not asking them to find the evidence among all the paragraphs. And I’m certainly not asking them to make decisions about the quality of the evidence provided. As far as potential counterclaims go — forget it. I just want them to find the evidence.

While I’m working with this boy, a handful of students realize the relatively straightforward nature of what I’m asking them to do, see that it’s all on the board, and write beautiful (although simple and short) paragraphs.

These kids are in the same class as a boy who speaks very little English and a boy who speaks no English at all, and the state expects me to get them all to the same place in nine months: analyzing the argument in an eighth-grade-level text and evaluating its effectiveness.

And they are struggling to do it when I’ve already done it with them. Using a fifth-grade level text.

1.5

End of the Break

The break is over: the kids go back tomorrow, with E starting his second semester in middle school and L beginning her last semester as a junior. Two facts that are hard to comprehend: the Boy is 11; the Girl just turned 17. One more hard-to-believe fact: the school year is half over now.

I went back to school today for a teachers’ workday. Walking down the halls this morning, I had the realization that we have only a matter of months before the end-of-year testing kicks in, and few of my on-level kids are ready for it. Granted, they’ve made progress this first semester, but there’s still so much more to do. One of the frustrations I have with all this testing is that it’s heartlessly uniform in its expectations: growth doesn’t matter; improvement doesn’t register — everyone has to reach the same place at the same time. The kids who go from struggling to write a paragraph with more than three sentences to writing fully formed Schaffer paragraphs that make a claim, provide evidence, and explain that evidence will still get a “Not Met” score at the end of the year even though they’ve grown more than the English Honors kids who will score “Exceeds Expectations.” The kids who had so many emotional issues that sitting in a class and focusing for more than a few moments was impossible, who grow to the point that they can remain focused for ten minutes at a time and work collaboratively with their peers without getting off-topic for a full five minutes — they’ll still “fail” despite all the evidence I could provide to the contrary.

That Time of Year

We always have some kind of decorating competition in school around Christmas — door decorating, hall decorating, tree decorating. And there’s always a group of kids who are so very eager to do the work.

It’s also around this time of year that we often start Romeo and Juliet. We’ve just about completed the whole first act in a single week. We could have pulled it off, too, if it weren’t for today’s quiz…

Basketball 2023

Cheering for my students — few things are better.

December Friday

Today was our annual trip to the district’s vocational school to give our soon-to-be-high-schoolers an overview of what’s available to them there: everything from cosmetology to firefighting, from diesel engine repair to culinary arts, from mechatronics to nail tech. It’s quite an impressive variety.

Once I got back home, I saw that the inevitable has begun: our poor widowed neighbor has moved out of her house, and family and friends have already started on it — they took down the back deck, which looked to be made of nothing but rotten boards.

“Wonder what kind of neighbors we’ll get,” will become a common topic of discussion, I’m sure — not that we have any say in the matter.

For dinner, Babcia made placki ziemniaczane with mushroom sauce — utter heaven.

And after dinner, a walk with the dog while the rest of the family went to church, a walk that included a street I haven’t been on in ages. I’d forgotten about the holiday scene they create.

Imagine

An imaginary email:

Thank you for attending the Q2 Student Progress Monitoring meeting with Bob Smith from the district office. As we prepare to engage in the Q3 Student Progress Monitoring process, please discuss and have one person from your collaborative team respond to this email no later than 4pm on Thursday. Please Cc: Bob Smith when you respond.

  • Question 1: What is the title of your current unit of study and what date do you anticipate finishing this unit?
  • Question 2: What is the title of your next unit of study and what date do you anticipate beginning this unit?

Let me know if you have any questions/concerns.

An imaginary response:

Thank you for your email thanking me for my attendance at the mandatory meeting. I appreciate the chance to sit with my colleagues and hear from someone at the district office how to do my job. Since I’m completely unfamiliar with monitoring student progress, having taught for only 24 years now, I appreciated the refresher on the basic ideas with which only the rawest of new teachers are unfamiliar. However, given the amount of time it took to fill out the forms your method required, I think I will have to politely decline further participation. I trust the district office will understand that my experience should suffice.