Data Analysis and Interpretation

Once information is gathered in research, it must be organized. The organizational strategy can be structured by observation setting, by theme or variable, or by research question. The procedures for organizing and reducing data to a form that can be summarized and utilized differ for quantitative and qualitative data. For qualitative information, explicit procedures are used to organize summaries around common themes or categories and then to identify patterns (content analysis). The nature of the qualitative information and the research questions may require a template or rubric that organizes and summarizes the findings, designed either prior to data collection or after examining the data (see Lofland & Lofland, 1995; Miles & Huberman, 1994; Wolcott, 1994). Quantitative information needs to be numerically summarized (e.g., average ratings, frequencies) as well as more extensively analyzed (see Fink, 1995; Fitz-Gibbon & Morris, 1987).
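The basic numerical summaries mentioned above (average ratings and frequencies) can be computed with nothing more than a standard library. The sketch below uses made-up 5-point survey ratings; the data and variable names are illustrative, not from any actual study.

```python
from statistics import mean
from collections import Counter

# Hypothetical 5-point ratings from a participant survey
ratings = [4, 5, 3, 4, 5, 2, 4, 4, 5, 3]

average = mean(ratings)         # average rating (central tendency)
frequencies = Counter(ratings)  # how often each rating value was given

print(f"Average rating: {average:.1f}")
for value in sorted(frequencies):
    print(f"Rated {value}: {frequencies[value]} respondent(s)")
```

Summaries like these are usually the starting point; more extensive analysis (e.g., comparisons between groups or over time) would build on them.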

Because there are many ways to conduct the analysis of information, neither quantitative nor qualitative analyses are simple or straightforward, especially for those who do not have experience with statistics or methods of thematic interpretation. Often, it is necessary to try different strategies of data analysis before the most meaningful approach is identified. In addition, different analyses might be appropriate for different purposes or audiences. Consulting an expert in statistics and data analysis is helpful, and hiring an expert to conduct the analysis may be necessary.

As analyses progress, the results will need to be put into perspective in terms of their relevance to the questions or hypotheses that are the focus of the research. The perspective for interpretation might be the theoretical framework, expected results (e.g., hypotheses), a standard or benchmark, a comparison within the set of data (e.g., first-year vs. upper-division students), comparisons over time (e.g., achievement of learning outcomes, changes in attitudes), results from past research (e.g., in the research literature, at your institution), or implications for future programming (e.g., strengths, weaknesses, recommendations). Interpretation of results should be appropriate for the confidence inherent in the research design and measurement methods that were selected and should acknowledge limitations, as appropriate. When analyzing and interpreting both quantitative and qualitative data, care should be taken to avoid some of the most common pitfalls:

  • Assuming that the intervention is the only cause of positive changes documented. Several factors, some of which may be unrelated to the intervention, may be responsible for changes in participants or in a community. Isolating specific causes is difficult, and the report should at least acknowledge the possibility that other factors may have contributed to change.
  • Forgetting that the same methods may give different results when used by different researchers, in different settings, using different procedures, or when different subjects are studied or sampled. For example, two interviewers may ask the same questions but receive different answers because one was friendlier or more patient than the other. As a result, problems or difficulties may be ignored or hidden because people do not report those outcomes.
  • Choosing the wrong groups to compare or comparing groups that are different in too many ways. For example, gender, age, race, economic status, and many other factors can all have an impact on outcomes. If comparisons between groups are important, try to compare those with similar characteristics except for the variable being studied.
  • Claiming that the results of small-scale research also apply to a wide group or geographic area. For example, it is misleading to claim that participants' responses to a particular intervention in one course apply to the United States as a whole (W. K. Kellogg Foundation, 2006).
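The third pitfall above, comparing groups that differ in too many ways, can be reduced by restricting a comparison to subgroups that share the same background characteristics. The sketch below illustrates the idea with invented scores: intervention and control participants are compared only within the same class year, so the difference is not driven by year in school. All names and numbers are hypothetical.

```python
from statistics import mean

# Hypothetical participant records: (group, class_year, score)
records = [
    ("intervention", "first-year", 78),
    ("intervention", "first-year", 85),
    ("intervention", "upper-division", 92),
    ("control", "first-year", 74),
    ("control", "first-year", 80),
    ("control", "upper-division", 90),
]

def group_mean(group, class_year):
    """Mean score for one group, restricted to a single class year."""
    scores = [s for g, y, s in records if g == group and y == class_year]
    return mean(scores)

# Compare like with like: first-year participants in both groups
diff = group_mean("intervention", "first-year") - group_mean("control", "first-year")
print(f"First-year difference: {diff:.1f} points")
```

Even a matched comparison like this cannot rule out every confounding factor, which is why the first pitfall (attributing all change to the intervention) still applies.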