In reply to the discussion: Leaked Questions Rekindle Fierce Debate Over Common Core Tests

Igel (36,190 posts)

5. The blogger does something a bit odd.
It's apparently the NY version of the PARCC test that's at issue.
And the blogger points out that the test is at odds with some of the requirements of the Common Core standards.
What's missing is the connection between the two assertions. Why? Because the NYS standards are not verbatim CC.
Take one issue: an essay question deals with "structural elements," but the CC standards require only that those elements be explained, presumably orally, not in writing. The horrors! The test is unfair! Oh, noes!
However, the NYS writing standards say,
Standard 2: Students will read, write, listen, and speak for literary response and expression.
Write interpretive and responsive essays that
- describe literary elements such as plot, setting, and characters
- describe themes of literary texts
- compare and contrast elements of texts
Which is exactly what the PARCC question requires. In other words, the conflict isn't between the test question and the NYS standards actually taught in 4th grade; it's between the test question and the national CC standards for 4th grade, which aren't what NYS students are taught verbatim. Oh.
And so it continues. She complains that the test isn't aligned to standards it was never supposed to be aligned with, and we're left to assume she's picked the right standards for the comparison. She's the expert, after all. And she's led us down the garden path. I personally don't think I look good with a big old iron ring in my nose.
The quibble over the lexile ranking of one reading passage is possibly important, possibly not. The difficulty with that part of the blog is what we don't know. The book rates at 6th-8th grade or 9th grade, depending on how you gauge lexiles. But this is presumably a cold read, and we're given no lexile ranking for the passage itself. The blogger seems to assume that every portion of a book must sit at the same lexile as the whole. That's not true on its face. Is it true of the shark text? Who knows? We certainly don't, but we're asked to condemn based on the ignorance we possess and the certainty that the teacher wouldn't be trying to mislead us. Or herself.
I've studied literature in a few languages. As I've learned languages and done "sample" readings along the way, I've read selections that were easy. Not simplified, just easy. Going from selections to entire works was traumatic. One Turgenev story stands out as especially memorable: I could read 20 pages and look up perhaps 10 unknown words ... after the first couple of pages. Those first couple of pages described in fantastic detail a forest glade at sunrise: types of plants and plant parts, trees and tree shapes, landscape textures, ground shapes, gradations of light and dark, and varieties of green and grey and brown. There were dozens of fairly low-frequency words on each page that I'd never seen. Those first pages drove the lexile; after them, the ranking would have dropped many grade levels. The point: the lexile of "the passage" need not match the lexile of the work as a whole; it can be far lower. It's a serious error to assume that a part is necessarily equivalent to the whole.
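A toy illustration of that part-versus-whole point (mine, not the blogger's). Lexile's actual formula is proprietary, so this sketch uses the public Flesch-Kincaid grade formula as a rough stand-in, and the "opening" and "rest" strings are placeholders, not actual Turgenev:

import re

def syllables(word):
    # Crude syllable estimate: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    # Flesch-Kincaid grade: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syl / len(words) - 15.59

opening = ("The crepuscular luminescence suffused the sylvan glade, "
           "burnishing the heterogeneous foliage with variegated "
           "gradations of viridian and umber.")
rest = "He walked on. The sun rose. The path was dry. He saw the dog."

print(f"opening pages: grade {fk_grade(opening):5.1f}")
print(f"later pages:   grade {fk_grade(rest):5.1f}")
print(f"whole text:    grade {fk_grade(opening + ' ' + rest):5.1f}")

The whole-text number lands well above the number for everything after the opening, which is exactly the possibility the blogger ignores.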
As for the issue of fairness: there were questions on a Texas standardized test that students had difficulty with. We're allowed to read questions out loud and deal with some mechanical aspects (correct page, etc.), but not with answers. In other words, some of the information in the test inevitably leaks out among teachers. And it was obvious that some questions were difficult, some were confusing, and some were just miswritten. "Refer to diagram 3," when there were only two diagrams, labeled "1" and "2," and the question should have read "refer to diagram 1." Or the question says "refer to the dialog between Jake and Ben on lines 10-18 of the text," except there was no Jake and no Ben in the reading passage. You might realize that Jake and Ben were in the next story, or that the question works if you take it to refer to "Mary and Sue" on lines 10-18. Or you might just guess randomly as your stress level spikes.

Either way, if you know the question ahead of time and can figure out what it must be asking, or know it's a nonsense question not worth five minutes of puzzling, you're at an advantage. Some students get that easier time; others don't. But all are measured by the same metric, inequitably. Questions good, bad, or indifferent wash out in the end, because the passing cut-off score is set on the assumption that the test was administered blind for all students. When an individual student gets that kind of advantage, it's called "cheating." When a teacher provides it to some students, it's called "freedom of speech." I guess students who blurt out "Hey, the answer to #10 is (b)!" or "The answer to #3 is 'synecdoche'!" don't get freedom of speech.
I'd say the tests Texas just gave weren't proofed well. On the other hand, I don't know whether any of the goofball questions even counted toward a student's score. Why? Because every test has experimental questions. There may be 50 questions, but only 40 or 45 are scored; the others are there to gauge item difficulty and discrimination, and some resurface on later tests while others are scrapped. Keep this in mind with the odd questions the blogger decries. We assume they counted. We can't know that.
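For the curious, "difficulty" and "discrimination" have standard definitions in classical test theory, and this is roughly the screening experimental items get. A minimal sketch with invented response data, not any real test's:

import numpy as np

# rows = students, cols = items; 1 = correct, 0 = incorrect (fabricated)
rng = np.random.default_rng(0)
responses = (rng.random((200, 10)) < np.linspace(0.9, 0.3, 10)).astype(float)

total = responses.sum(axis=1)  # each student's raw score

for item in range(responses.shape[1]):
    col = responses[:, item]
    difficulty = col.mean()  # p-value: fraction who answered correctly
    # Discrimination: correlation between the item and the rest of the test.
    # Near zero or negative means it doesn't separate strong from weak.
    discrimination = np.corrcoef(col, total - col)[0, 1]
    print(f"item {item + 1:2d}: difficulty {difficulty:.2f}, "
          f"discrimination {discrimination:+.2f}")

An item everyone gets right (difficulty near 1.0), or one that correlates with nothing, gets reworked or scrapped before it ever counts.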
All standardized tests are, moreover, built with a similar mix of items. Some are entry-level: if you're in the bottom quartile for your age group, you should still be able to answer them, so everybody gets a foot in the door and ranks somewhere above zero. The bulk of the questions are pitched at the upper half, where the average student should be and where the test is most sensitive to rankings. But there are also much harder questions above those, a challenge to the 1%ers. (Most young teachers these days, I am given to understand, think the average kid should readily score 100/100 because the test should cover just the essential points. Reach for the mean! Strive for mediocrity!)
Take a released science test question from Texas for a now-defunct test; Texas is vehemently non-CC. A car goes 1/3 of the way around a race track with a diameter of (let's say) 500 m. This takes 1 second. If the coefficient of friction between the rubber of the tires and the track is 0.2, what angle must the track be at to avoid skidding? (It gave the car's mass, as well. And the numbers themselves are lost in the Lethe.)
To solve this you have to find the distance, then the car's speed, then the centripetal acceleration needed to keep the car on the track, and compare that to the maximum a flat track's friction can supply. You'd find the car would skid. So then you'd have to bank the track so that the horizontal component of the normal force, together with friction, supplies the needed centripetal force. The standard formula chart doesn't give the relevant geometry, and over half the students at the grade level the test was written for hadn't had trig. Most teachers of regulars classes for that subject were in the dark about how to find the answer. Yet there it was. Most AP students, and some pre-AP students, would have been able to do it. It's not an entry-level question, and it's not a "this is where the average student should be" question. It's a "let's see what you can do, baby Einsteins" question. Those who think a bell curve should look like a sliding board were outraged. I was amused. At their outrage. And by the fact that that year, at the school I was observing, the science team had decided centripetal force was omissible: it wasn't one of the few required standards that made up 60% of the test, just one of the dozens of "supporting" standards that made up the other 40%. In other words, that school's regulars students wouldn't even have seen the topic. Yet there it was, and it was a fair question.
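Here's the arithmetic with the made-up numbers above, as a sketch; the banked-curve-with-friction formula is one standard textbook treatment, not necessarily the exact route the test expected (and the placeholder numbers give a comically fast car):

import math

g = 9.8           # m/s^2
diameter = 500.0  # m (the "let's say" value)
mu = 0.2          # coefficient of friction
t = 1.0           # s, time for 1/3 of a lap

r = diameter / 2
distance = math.pi * diameter / 3   # 1/3 of the circumference
v = distance / t                    # speed
a_needed = v**2 / r                 # required centripetal acceleration

# Flat track: friction alone supplies at most mu * g, so the car skids.
print(f"needed: {a_needed:7.1f} m/s^2, flat-track max: {mu * g:.2f} m/s^2")

# Minimum banking angle theta, with friction helping:
#   v^2 / (r g) = (tan(theta) + mu) / (1 - mu * tan(theta))
ratio = v**2 / (r * g)
tan_theta = (ratio - mu) / (1 + mu * ratio)
print(f"banking angle: {math.degrees(math.atan(tan_theta)):.1f} degrees")

Note that the mass the test supplied cancels out of everything, which is its own little trap for students who try to use every number given.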
For every question like that there were two like "How long does it take a stone to fall 3 meters from rest with an acceleration of g?" and one like "If a car goes 60 mph for 2 hours, how far does it travel?" I remember none of those; the one that stands out is the one that would have given AP students a run for their money. (Along with one on Faraday's law where all the information and distractors were delivered via a truly gnarly graphic. Again, the mediocre and easy questions are thoroughly Lethe-rinsed.)
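For scale, my own one-line arithmetic on those two (not the test key): the stone takes t = sqrt(2d/g) = sqrt(2 * 3 / 9.8), about 0.78 s, and the car covers 60 mph * 2 h = 120 miles. Entry-level indeed.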
There's a lot of distrust going around, and a huge hankering to be outraged over things we pride ourselves on knowing that just aren't so. That bodes poorly for all kinds of future outcomes.