So, as I was sitting in the third hour of our welcome-back pep rally professional development workshop today, many questions came to mind. As we began to tear apart our data from last school year, we [the faculty] began to question the relevance of the data. Here is what happened.
We were comparing two years of data, that of 9th and 10th grade students who had taken the Florida Comprehensive Assessment Test [FCAT]. We were comparing 2008 data with 2009 data to see if students made learning gains. Now, here is where I take issue.
If we are comparing two sets of data, that of 2008 and 2009, we should be comparing the same group of students, right? However, we were comparing data from 9th and 10th grade students in 2008 with the results from 9th and 10th grade students in 2009. This data is not from the same group of students, so how can it tell us anything about learning gains? The 2008 9th graders became 10th graders, so their data is relevant. But the 10th graders in 2008 became 11th graders in 2009, and their data was not taken into consideration. The 9th graders in 2009 were actually 8th graders in 2008, so again their data is not being compared either.
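To make the distinction concrete, here is a toy sketch of the two ways of comparing scores. All names and numbers below are made up for illustration; they are not real FCAT data. The point is that a cross-sectional comparison (this year's 10th graders vs. last year's 10th graders) and a cohort comparison (the same students followed year over year) can give opposite answers.

```python
# Hypothetical scale scores: each dict maps student -> score for that year.
scores_2008_9th  = {"Ana": 300, "Ben": 310}   # 2008 9th graders
scores_2008_10th = {"Cara": 320, "Dan": 330}  # 2008 10th graders (a different cohort)
scores_2009_10th = {"Ana": 315, "Ben": 320}   # 2009 10th graders = the 2008 9th graders

def mean(scores):
    return sum(scores.values()) / len(scores)

# Cross-sectional comparison (what the workshop did): different students.
cross_sectional_change = mean(scores_2009_10th) - mean(scores_2008_10th)

# Cohort comparison: follow the same students from 9th to 10th grade.
matched = set(scores_2008_9th) & set(scores_2009_10th)
cohort_gain = mean({s: scores_2009_10th[s] - scores_2008_9th[s] for s in matched})

print(cross_sectional_change)  # -7.5: looks like a decline (Ana/Ben vs. Cara/Dan)
print(cohort_gain)             # 12.5: the same students actually gained
```

With these invented numbers, the cross-sectional view shows scores dropping, while the matched-cohort view shows the same students gaining 12.5 points. Only the second one says anything about learning gains.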
So, to sum up... we are comparing data from a two-year period to examine learning gains, but from two different groups of students. This is how we are grading schools and assessing student needs. Your thoughts?
More to come...
flickr photo by dslrphotos