Week 6 Reflection

I am appreciative that the assignments are being broken down into steps, so that for each assignment I am creating a piece of the end product. This week’s Twitter discussion helped me start thinking about how and why I want to assess and collect the data for my project. I have always known that I would use AIMSweb as the final assessment to collect students’ fluency scores. After the Twitter discussion, I started thinking about also collecting data on how the students felt about using iTalk to practice repeated readings. I am wondering whether students will feel that using iTalk helped their reading or not. However, after reading comments on my blog this week, I am now second-guessing whether asking students has relevance or not.

As the week ended and I completed my assignments, I felt I was heading in the right direction. Then I started thinking about all the variables in my project: how will I account for tech use versus teacher instruction? How will I account for reading increasing because students are (or are not) being read to at home? But how will any of us? How will those colleagues looking at student engagement know engagement increased because of the use of the technology versus the topic being taught versus the time of day? I could really use some guidance, because I think I have misunderstood the task at hand.



6 thoughts on “Week 6 Reflection”

  1. That’s kind of what makes qualitative research a little bit easier (in my opinion): you really just have to come up with some good questions to ask the students and then you analyze their feedback. With something quantitative (like improving fluency), you need to find trends in data–which is totally doable in your paper, it can just get tricky depending on variables.
    If you want to change things around, you might want to link reading fluency to a student’s confidence/self-esteem and likelihood that they’ll feel comfortable reading or speaking in front of a class. If you could find research on those topics, you could easily turn this into a qualitative paper–asking students how they feel about the recording app would be a great way to gather data for a question like that. And I do believe that using iTalk would do exactly that for some students who lack confidence in that area, but it might be harder to establish just how valuable that is.
    As you’ve currently described your project, I think you could pull it off, but I would make sure that each group you compare has a comparable population based on your previous AIMSweb data–the same percentage of students in tiers I, II, and III, etc. (unless you have time to monitor each student’s progress for enough data points both before and after app use).
    Now I feel bad because I’m afraid I’ve contributed to a loss in confidence–I’m sure your project will be great, just make sure you can minimize those variables!

    • Sorry, I know you are just trying to help, and I agree with most things you presented. I changed up my question, and I’m going to continue with my research project and do my best to account for the variables.

      • Something else occurred to me about your project: are you only going to be gauging fluency by the number of words read per minute? While that is important when measuring fluency, does it account for expression? In your previous post, you talk about using iTalk to increase fluency scores, but AIMSweb only tracks scores in speed and accuracy. If your question uses the term “fluency scores,” that doesn’t seem to leave much space for expression (and “increase” implies a quantitative measure; you could use “improve” to try to encompass the qualitative side of fluency). Expression is a huge part of reading fluency. Expression itself can’t easily be measured quantitatively, but I think that using recordings can have a tremendous impact on expression. So I suggest that you use wpm as a quantitative measure, but also use student surveys asking them to self-assess changes in expression. It will be especially apparent to them when they are listening to (and comparing) the same passage recorded at different times.
        This may have been your intention all along, but it was hard to tell by the way things were worded–the AIMSweb data accounts for speed (and accuracy), but using “fluency” as an all-encompassing term might cause some confusion in your proposal.
        If fluency is made up of speed, accuracy, and expression, AIMSweb scores can only measure part of that. If you refer to students reflecting on their recordings, you might want to specifically use “expression” for that. Reflecting on recordings does not help measure speed and accuracy (which has already been done), but it’s a great way to gather data for expression. I also recommend that you use your own judgment on gauging improvements in expression, and not just the students’ feedback.
        An amended research question might suggest using iTalk “to increase reading speed and accuracy while improving expression” or something like that.
        I think I just repeated myself a whole bunch of times. I’m sorry about that.

      • How about: Does using iTalk for repeated reading practice help improve reading fluency and expression? Then I was thinking of looking at 2 classes of the same grade level, for example 1st graders. I was going to look at their rate of increase from Sept. until the date of data collection (without iTalk), then look at their rate of increase after using iTalk for 3-4 weeks. The only repeated reading practice would be done with the app in both classes. Then yes, the students would be getting “different” instruction, but it wouldn’t be fluency instruction. Both classes would get the same fluency instruction during the use of the iTalk app. Within that I could look at expression like you suggested in your prior post.

  2. I think that’s a good idea for gathering data…though ideally, you would want to track their rate of increase for the same amount of time with and without iTalk. I’d use the 4 weeks before iTalk, then the 4 weeks after iTalk has begun. Also, that will eliminate a lot of potentially low scores from the very start of the year–often the result of not reading over summer, but once they’ve been in school for a little bit, their fluency increases dramatically. Including that drastic uptick might skew your data and make your pre-iTalk data look better than it actually is. Will the students be tested weekly?
    As for the question, I might say: “Does using iTalk for repeated reading practice help improve reading fluency by increasing rate, increasing accuracy, and improving expression?” I just wouldn’t want to use “fluency and expression” side by side, as expression is just one of the components of fluency (the others being rate and accuracy).
