8.17.2009

Comparing Apples to Oranges = ?


So, as I was sitting in the third hour of our welcome-back pep rally professional development workshop today, many questions came to mind. As we began to tear apart our data from last school year, we [the faculty] began to question the relevance of the data. Here is what happened.

We were comparing two years of data, that of 9th and 10th grade students who had taken the Florida Comprehensive Assessment Test [FCAT]. We were comparing 2008 data with 2009 data to see if students made learning gains. Now, here is where I take issue.

If we are comparing two sets of data, that of 2008 and 2009, we should be comparing the same group of students, right? However, we were comparing data from 9th and 10th grade students in 2008 with the results from 9th and 10th grade students in 2009. This data is not from the same group of students, so how can it tell us anything about learning gains? The 2008 9th graders became 10th graders in 2009, so their data is relevant. But the 10th graders in 2008 became 11th graders in 2009, and their data was not taken into consideration. The 9th graders in 2009 were actually 8th graders in 2008, so their data is not being compared either.
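To see why this matters, here is a minimal sketch with purely hypothetical numbers (made-up student IDs and scores, not anything from our actual FCAT reports). Following the same students from 2008 to 2009 shows a real average gain, while comparing this year's 9th graders to last year's 9th graders can show no change at all, even when every tracked student improved:

    # Toy illustration of the cohort mismatch -- hypothetical scores, not real FCAT data.
    scores_2008_9th = {"A": 300, "B": 280, "C": 310}   # 2008: 9th graders
    scores_2009_10th = {"A": 320, "B": 295, "C": 305}  # 2009: the SAME students, now 10th graders
    scores_2009_9th = {"D": 290, "E": 330, "F": 270}   # 2009: a DIFFERENT group (8th graders in 2008)

    def avg(d):
        return sum(d.values()) / len(d)

    # Matched-cohort learning gain: follow identical student IDs year over year.
    matched_gain = sum(
        scores_2009_10th[s] - scores_2008_9th[s] for s in scores_2008_9th
    ) / len(scores_2008_9th)

    # Cross-sectional "gain": difference of averages between two unrelated
    # groups -- which is what the workshop data actually compared.
    cross_sectional = avg(scores_2009_9th) - avg(scores_2008_9th)

    print(f"Matched-cohort average gain: {matched_gain:+.1f}")    # +10.0: real gains
    print(f"Cross-sectional difference: {cross_sectional:+.1f}")  # +0.0: looks flat

Same two school years, two very different answers, because only the first comparison follows the same kids.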

So, to sum up... we are comparing data from a two-year period to examine learning gains, but from two different groups of students. This is how we are grading schools and assessing student needs. Your thoughts?

More to come...

Mike Meechin
mike@innovateeducation.com

flickr photo by dslrphotos

1 comment:

Ed Kuzma said...

I agree 100% with you on this. The data does hold value in identifying areas of weakness and some trends. However, we know that there are decisions being made that affect our instruction, curriculum, scheduling, and documentation. We all know this and most agree. However, the next question is "How do we effectively and reliably track student learning and educator effectiveness?" The answer is more standardized tests in each subject area. That requires standardized curriculum mapping and pacing, which in turn takes the development of a curriculum away from me as the teacher. As much as I understand why teachers may fight this, I believe I welcome it. Maybe two students who have different Algebra teachers should have instruction and assessments with more in common.

I do know that I welcome the microscope of assessments in my classroom. I am ready to see the data and face the truth. My students will be tested at the beginning of the year and at the end of the year. Have I taught them well? There will be no arguing with that data.

65% of my students made learning gains on the FCAT math. That is above our school average, but there are too many factors to really put any value on that. I had a high number of students fail the exit exam. However, it was given to me late, I had not given lessons on some of the material, and I did not allow students who had not brought calculators to use any, because I had told them all year to bring them. I found out this summer that they are allowed to use graphing calculators. My scores would have increased significantly if I had provided them to my students.

My point is that these course-specific, county-generated assessments will give me a clear indication of my effectiveness as a teacher and of my students as learners. Will it tell me if I am helping develop young men and women who will contribute to society instead of being a burden? Not really. Maybe schools are taking on too much of that which could be shared with the family, church, and community. Bottom line is that I will have a good indication after this year's assessments and look forward to comparing that data to the FCAT data. Anyway... that is a little rant on the topic.
