
Data Analysis

      Overall, the positive results from this study stemmed from purposefully planning guided reading groups based on students' Lexile reading levels, preplanning differentiated questions using text-dependent question (TDQ) and Question-Answer Relationship (QAR) question types, scaffolding the complexity of my questions, giving students autonomy to create meaningful goals for themselves, and continuously utilizing the collected data to monitor student progress and inform my daily instruction.

Unit Five, Week Four Pre/Post-Assessment

Results

         Students regularly completed Wonders reading assessments every other week. On these assessments, students read two passages at a third-grade level, answer several selected-response questions, and complete a constructed-response question that requires them to compare the two texts. Educators analyze this data to inform their instruction by identifying the comprehension, vocabulary, and grammar skills students are struggling with. To measure the impact of this study, I administered a pre-assessment on January 21st and a post-assessment on March 11th.
         On the pre-assessment, 6 students received an overall score of 55% or below, 7 students scored between 56% and 75%, and 2 students scored above 75%. Two additional biweekly Wonders assessments and four selection quizzes were administered throughout the study to monitor student performance and adjust my teaching accordingly. After administering the post-assessment on March 11th, I was able to analyze the overall impact of targeted questioning on my students' reading comprehension. The results made clear that targeted questioning focused on Question-Answer Relationship and text-dependent questions had a positive impact on students' reading comprehension. Of the 15 students who participated in this study, 12 showed growth, 2 maintained the same score, and 1 declined slightly. Critically examining individual performance revealed that although almost all students showed growth, 9 were still scoring below 76% on the post-assessment. This close analysis revealed a continued instructional need for targeted questions to strengthen students' reading comprehension skills.
         In sum, these results indicate that targeted questioning and direct exposure to varying types of questions during reading instruction help students answer more complex comprehension questions, such as those that require locating text evidence. Students also applied the comprehension strategies they had learned with greater confidence and accuracy.
[Figures: pre- and post-assessment score results]

Statistical Significance of Study

A paired-samples t-test was conducted to determine whether targeted questioning strategies increased students' reading comprehension. There was a significant difference between scores before implementing targeted questioning strategies (M = 59.93, SD = 18.60) and after implementing them (M = 71.33, SD = 17.50); t(14) = 4.75, p = .0002. The observed standardized effect size was large (1.23), indicating a substantial difference between pre- and post-assessment means. These results suggest that the use of targeted questioning strategies increased students' reading comprehension.
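For readers who wish to reproduce this type of analysis, the sketch below shows one way to compute a paired-samples t-test and a paired effect size in Python using NumPy and SciPy. The score arrays are hypothetical placeholders, not the study's actual pre/post percentages; only the method mirrors the analysis reported above.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post percentages for 15 students -- placeholders,
# not the study's actual data.
pre = np.array([40, 45, 48, 52, 55, 55, 58, 62, 65, 68, 70, 72, 75, 78, 56.0])
post = np.array([52, 58, 55, 66, 70, 64, 72, 70, 78, 75, 80, 82, 85, 88, 75.0])

# Paired-samples t-test; df = n - 1 = 14 for 15 paired scores.
t_stat, p_value = stats.ttest_rel(post, pre)

# Paired effect size (often written d_z): mean of the differences
# divided by the standard deviation of the differences.
diffs = post - pre
d_z = diffs.mean() / diffs.std(ddof=1)

print(f"t({len(diffs) - 1}) = {t_stat:.2f}, p = {p_value:.4f}, d = {d_z:.2f}")
```

Note that for a paired design the effect size is computed from the distribution of difference scores rather than from the two separate standard deviations, which is why a single within-student gain of consistent size can produce a large effect even when the spread of raw scores is wide.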

Biweekly Wonders Assessment Averages

[Figure: biweekly Wonders assessment averages]
         From pre-test to post-test, my students' average percentages displayed steady growth across the biweekly Wonders assessments. It is important to consider that the Unit Four test is a comprehensive assessment with many more questions, assessing several reading skills taught during the unit. After each assessment, I analyzed the results and identified which concepts students most commonly missed. These data helped me determine which specific skills from the week students did not understand and what needed to be retaught or practiced further during instruction.
         After giving the Unit Four, Week Four biweekly Wonders assessment and selection quiz, I noticed that most students were performing well on vocabulary questions but answering comprehension questions incorrectly. Five students in particular, who make up my lower guided reading groups, were struggling to find text evidence to correctly identify causes and effects. The following week, I focused on asking those students to find specific text evidence for general questions as well as ones involving cause and effect. I saw improvements in their ability to search for evidence and use clue words to locate the effects of causes.

Weekly Wonders Selection Quiz Averages

         Selection quizzes were completed each Thursday. These quizzes consisted of five vocabulary questions and five comprehension questions about the story read on Wednesday during whole-group reading instruction. The data from each quiz showed that students had a strong understanding of the vocabulary words taught during the week but struggled to answer comprehension questions about the weekly story.
         Because of this, I began to plan most questions around the comprehension skill of the week and identifying text evidence rather than vocabulary. Additionally, I spent more time reading and analyzing the Wednesday story than the other texts during the week. My CADRE associate intentionally collected anecdotal data on that particular day to monitor students' understanding during reading so that I could compare the data to their ability to apply the skills when taking the quiz. Taking the time to intentionally close read each text during the week and challenging students to answer higher-level questions using the comprehension skill of the week resulted in stronger comprehension.
[Figure: weekly Wonders selection quiz averages]

Whole Group and Small Group Anecdotal Results

         Anecdotal data was collected during whole-group and small-group reading instruction as students responded to various types of questions. This daily and weekly data collection was supported by my CADRE associate. During whole-group instruction, I would call on several students in a short amount of time while my associate recorded whom I called on and how accurate each response was. Throughout the whole-group anecdotal data, students displayed strong comprehension skills, as many responses were accurate. Occasionally a student would give an incorrect or partially correct answer that I needed to respond to. When this occurred, I provided purposeful feedback that would lead the student to success, or had other students add to their peer's answer to invite more collaborative discussion while we read. After whole-group reading, I would jot down the names of the students who answered incorrectly and which question types they missed; this informed me of the particular skills those students were struggling with and how I could address them moving forward through reteaching or further scaffolds.
      
         Early on in using targeted questioning within my small group guided reading instruction, I would ask each student in the group a different type of question to better scaffold their understanding of the various types of questions. For example, I would ask each student one of the QAR question types or one of the text-dependent varying levels of question types. Then, I would have students sit in a new spot each day so that they would be exposed to each of the different types of questions throughout the week.
         As my students became more comfortable with the various question types, I used the anecdotal data collected in both settings to make purposeful choices and slightly changed my approach to questioning. I began to designate specific questions for particular students within guided reading to ensure I was targeting the comprehension skills they needed to improve. This differentiation helped me determine which question types students needed more scaffolding with, what support a student needed in order to answer correctly, how well students could identify different question types or where to locate answers, and what level of complexity to use in my questions. This anecdotal data allowed me to gather accurate information about my students' progress and adjust my teaching promptly.
         The pictures and graphic below provide evidence of the common themes found in my anecdotal notes throughout the study. As the weeks passed, I noticed connections among all four of my guided reading groups and displayed these trends from the bottom to the top of the tree as the themes became apparent. I consistently provided prompts or instructional support for finding text evidence throughout the study as the complexity of my questions grew deeper. As a result, I adjusted my questioning to incorporate more enrichment or more scaffolding and modeling of finding text evidence for specific students during small-group instruction. This purposeful instruction helped build my students' confidence and supported them in answering questions that challenged them. Anchor charts and bookmarks were widely used by all of my students, as seen throughout the anecdotal data, and proved to be helpful reference tools that led students to more accurate answers. These resources were often reviewed and used to prompt students when they were stuck finding an answer to a question.
         Many students demonstrated high engagement and motivation to answer questions throughout the study. For the few students who were reluctant to answer questions aloud, I provided positive reinforcement and individual support to increase their confidence. As my students' confidence increased, they had more collaborative discussions with their peers, which added to their comprehension of texts. For example, during the weeks of February 1st and March 1st, my approaching-level and beyond-level anecdotal data noted that one student had helped another and that questions sparked discussions to clarify or expand upon an answer. One such exchange:
"The picture shows how they can control the flying car. How can that even be possible? "
(pointed to picture) "If you see here, the screen shows the different controls. They must have to practice a lot before they can fly it." 
[Graphic: Common Themes of Anecdotal Notes (tree)]
- Peer support/collaborative discussions
- Prompting/instructional support with finding text evidence
- Increased engagement and motivation to answer questions
- Anchor charts/bookmarks referred to often

Student Experience Questionnaire Results

Before Implementation

[Figures: student responses to the four questionnaire items before implementation]

After Implementation

[Figures: student responses to the four questionnaire items after implementation]
         An experience questionnaire was administered at the beginning and end of the study. The questionnaire consisted of four questions and was given on the students' Chromebooks through a Google Form. My CADRE associate supported each student when they completed the questionnaire on January 27th. On March 11th, students completed the questionnaire independently after reviewing the directions with me.
 
         The graphs above show the positive impact targeted questioning had on my students' self-confidence with reading, answering questions, and finding text evidence. The areas in which students grew the most were answering questions in front of the class and reading aloud. My students' confidence in finding text evidence improved as well, though it was interesting to see that some still felt only "slightly confident." I plan to give the students who selected "slightly confident" at the end of the study more practice in small groups, which will hopefully move their confidence from "slightly" to "fairly" or "completely confident."

Individualized Graph with Goal Setting

[Photos: students graphing their assessment scores and setting goals]
          Students graphed their scores for each biweekly Wonders assessment and weekly selection quiz. After graphing, I set meaningful goals with students based on what they wanted to improve. The most common goals among my students were improving at finding text evidence, writing RACE responses, and answering particular types of questions. These conversations gave me great insight into how students felt about their own performance and what I needed to do to support them individually.
         Based on the goals we set, I was able to provide more individualized instruction through my questioning during guided reading. I consistently provided positive reinforcement, scaffolding, and corrective feedback. Because many of my students had similar goals, I incorporated more collaborative opportunities to practice writing RACE responses and finding text evidence during whole-group instruction.
         Having students graph their performance and set goals for themselves was a critical piece in giving them the autonomy to take control of their own learning. They were able to see visually how they were doing on assessments and quizzes, and to truly understand which questions they were answering incorrectly and why.
[Photos: examples of students' individualized graphs and goals]

Why were these data methods used? 

[Diagram: Triangulation of Data — pre/post assessment, anecdotal data, and student experience questionnaire]
         These methods of data collection were selected because they provided both qualitative and quantitative data to inform my instruction. Utilizing several methods of data collection is best practice: the different types add value to one another, can confirm or disprove hypotheses generated from one data set, and shed light on unexpected outcomes in another. By triangulating the data collected from the student experience questionnaire, the anecdotal notes from guided reading and whole-group instruction, and the Unit Five, Week Four pre/post Wonders assessment, I was able to produce valid information about my students' growth in reading comprehension and identify the themes that surfaced.
         Through my anecdotal data collection in both settings, I saw an increase in students' engagement during reading and in their ability to find accurate text evidence with differentiated supports. By creating differentiated questions and providing instruction targeted to my students' needs, students demonstrated steady improvements in their reading comprehension and in their self-confidence answering questions about what they had read. Additionally, the student experience questionnaire given at the beginning and end of the study revealed that my students felt more confident reading aloud and answering questions in front of their peers. This was evident during daily whole-group observations, as more students volunteered to read aloud independently or raised their hands, and it correlated with the daily anecdotal data collected during guided reading. Furthermore, student performance on the post-assessment indicated that 80% of my students demonstrated growth. Across the data collected throughout this action research, students' ability to locate text evidence, their use of comprehension skills while reading, and their self-confidence were positively impacted and strengthened by the implementation of targeted questioning.