Reading: Analysing Data

Kara, H. (2015) 'Analysing Data'. In: Kara, H. Creative Research Methods in the Social Sciences: A Practical Guide. Bristol: Policy Press, pp. 99–120.

  • use a recognised analytic technique, such as interpretive phenomenological analysis (Finlay 2011: 140)
  • look at the metaphors people used, to see what they might tell you (Fletcher 2013: 1555–6)
  • analyse interactions between people in the focus group to find out what those add to the analysis of data content (Farnsworth and Boon 2010; Halkier 2010; Belzile and Öberg 2012)
  • consider any silences, pauses or omissions in order to try to uncover what might have been left unsaid and why (Frost and Elichaoff 2010: 56)
  • ask someone else to analyse the data independently to see whether or not they reach the same conclusions as you (Odena 2013: 365)
  • involve your participants in the analysis, for confirmation and reciprocal learning (Nind 2011) (Kara, 2015, pp.99-100)

Meticulous data preparation is essential; there is not much scope for creativity in accurate transcription or data entry. Coding data can also feel quite tedious and may be very time consuming. When it has been prepared and coded, data usually needs to be sorted into categories and sub-categories, a process that can become very complex (Mason 2002: 151). (Kara, 2015, p.100)

should you record non-speech sounds that people make, such as laughter, coughs, sighs and so on? If so, how? Do you record pauses? If so, do you measure their length, or just note each occurrence? … How do you lay out your transcription on the page, and how do you identify the different speakers/actors in the transcript? (Hammersley 2010: 556–7). (Kara, 2015, p.101)

  • content analysis – a semi-quantitative technique for counting the number of instances of each category or code (Robson 2011: 349)
  • thematic analysis – identifying themes from coded data (Robson 2011: 475)
  • narrative analysis – analysing stories from primary or secondary data (Bryman 2012: 582)
  • conversation analysis – detailed analysis of the verbal and non-verbal content of everyday interactions (Bryman 2012: 527)
  • discourse analysis – analysing patterns of speech and interaction in a detailed and sometimes semi-quantitative way, for example by measuring the length of pauses (Bryman 2012: 529)
  • metaphor analysis – analysing metaphors from primary or secondary data (Fletcher 2013: 1555–6)
  • phenomenological analysis – analysing participants’ stories from, and descriptions of, their ‘life-worlds’, or individual experiences and perceptions, with a focus on meaning (Papathomas and Lavallee 2010: 357; Mayoh, Bond and Todres 2012: 28)
  • life course analysis – analysis of the ‘interaction between individual lives and social change’ (Brittain and Green 2012: 253). (Kara, 2015, p.102)
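The first technique in the list, content analysis as a semi-quantitative count of coded instances, is mechanical enough to sketch in code. This is a minimal illustration, not a description of any particular software package; the codes and interview segments below are invented:

```python
from collections import Counter

# Hypothetical coded interview segments: each segment has been
# assigned one or more codes by the researcher during coding.
coded_segments = [
    {"text": "I felt supported by my team", "codes": ["support"]},
    {"text": "Nobody listened at first", "codes": ["isolation"]},
    {"text": "My manager backed me up", "codes": ["support", "hierarchy"]},
]

# Content analysis in the semi-quantitative sense: count how many
# instances of each code occur across the whole dataset.
code_counts = Counter(
    code for segment in coded_segments for code in segment["codes"]
)

print(code_counts.most_common())
# [('support', 2), ('isolation', 1), ('hierarchy', 1)]
```

The counting itself is trivial; as the surrounding text stresses, the researcher's real work lies in deciding what counts as an instance of each code in the first place.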

Because there is so much that can be done with any dataset, and because data gathering can be onerous for participants, researchers’ attention has turned more and more to the opportunities offered by secondary data – that is, data previously gathered for some other purpose (sometimes research, sometimes not) and that can be used again. (Kara, 2015, p.103)

The analysis of documents will benefit from taking this into account. There are three steps to analysing documents: superficial examination, or ‘skimming’; thorough examination by careful reading and re-reading; and interpretation (Bowen 2009: 32). (Kara, 2015, p.105)

There are two central methods of analysing talk: discourse analysis and conversation analysis. (Kara, 2015, p.105)

Conversation analysis (CA) is an evolving analytic method based on the idea that any verbal interaction is worth studying to find out how it was produced by the speakers (Liddicoat 2011: 69). CA requires a detailed form of transcription, capturing not only the words that are spoken but also aspects of talk such as intonation, volume of speech, pauses, non-word utterances such as ‘um’ and ‘er’, overlapping talk, interruptions and non-verbal sounds such as laughter or coughs (Groom, Cushion and Nelson 2012: 445). The aim is to facilitate a thorough analysis of people’s conversation in normal everyday interactions. (Kara, 2015, p.106)

Discourse analysis (DA) is based on the concept that the way we talk about something affects the way we think about that phenomenon. ‘Discourse’ in this context doesn’t refer solely to talk itself; it refers to talk that is constructed within the constraints of a social structure. DA can be applied to other kinds of data, such as written texts (Bryman 2012: 528) and images (Rose 2012: 195). (Kara, 2015, p.106)

Diagrams and maps can be particularly useful in data analysis to help you visualise your data and the ideas and relationships that develop as you work through the analytic process. (Kara, 2015, p.107)

Diagrams can of course be created by hand, or using specialist diagram software such as Gliffy, or research analysis software that supports diagramming such as NVivo. Similarly, maps can be drawn by hand or using specialist mapping software such as Esri or Maptitude. (Kara, 2015, p.108)

Data integration in mixed-methods research can be conducted for a number of reasons, such as to address a research question from a variety of perspectives or to bring together different parts of a phenomenon or process (Mason 2002: 33). Within a research project, data integration has three main purposes: triangulation of data, the development of richer analysis and the illustration of findings (Fielding 2012: 124). The aim is to synthesise equivalent or complementary findings and to investigate contradictory findings further (Fielding 2012: 125). (Kara, 2015, p.112)

  1. How far can each of your datasets contribute to answering your research questions?
  2. To what extent can your findings be brought together to create an explanatory narrative?
  3. How much do the answers to 1 and 2 above benefit your research? (Kara, 2015, p.112)

For qualitative data, this is achieved by offering increased flexibility in coding and ensuring that researchers can rapidly retrieve every item with a given code. Also, it is still the researcher’s job to assign names to codes and codes to data, and to derive meaning from the slices of data served up by the computer in response to the researcher’s queries. (Kara, 2015, p.115)
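The retrieval behaviour described here, returning every item carrying a given code in one step, amounts to an inverted index from codes to data slices. The sketch below is a simplified stand-in for what qualitative analysis software does internally, with invented data:

```python
from collections import defaultdict

# Hypothetical coded data slices: (location in the transcript, codes
# the researcher has assigned to that slice).
coded_data = [
    ("Interview 1, line 12", ["trust"]),
    ("Interview 1, line 40", ["trust", "risk"]),
    ("Interview 2, line 7", ["risk"]),
]

# Build an index from each code to every slice that carries it,
# so any code's items can be retrieved with a single lookup.
index = defaultdict(list)
for location, codes in coded_data:
    for code in codes:
        index[code].append(location)

print(index["trust"])
# ['Interview 1, line 12', 'Interview 1, line 40']
```

As the excerpt notes, the software only serves up the slices; naming the codes, assigning them to data and deriving meaning from what is retrieved remain the researcher's job.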

Once the coding is done, analysis is comparatively straightforward. … You need to choose what to consider, compare or calculate, and those decisions should be based on a credible methodological rationale. (Kara, 2015, p.115)

I-poems are a way of identifying how participants represent themselves in interviews, by paying attention to the first-person statements in the interview transcripts. … The interview transcripts are carefully read to identify the ways in which interviewees speak about themselves, paying particular attention to any statements using the personal pronoun ‘I’. Each instance of ‘I’ is highlighted, together with any relevant accompanying text that might help a reader to understand the interviewee’s sense of self. These highlighted phrases are then copied out of the transcript and placed in a new document, in the same sequence, each instance beginning in a new line, like the lines of a poem. I-poems can be very helpful in identifying participants’ senses of self by foregrounding the voice, or voices, that they use to talk about themselves. (Kara, 2015, pp.118-119)
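The I-poem procedure described above is concrete enough to sketch: pull out every first-person statement, in its original sequence, one per line. The transcript fragment and the crude sentence-splitting below are invented for illustration; in practice the researcher reads carefully and judges how much accompanying text to keep with each ‘I’:

```python
import re

# A hypothetical fragment of an interview transcript.
transcript = (
    "Well, I suppose I was nervous at first. The team was new. "
    "I wanted to prove myself. They were kind, though, and "
    "I started to relax."
)

# Split crudely into clause-like units, then keep those containing a
# first-person 'I' statement, preserving their original sequence.
units = re.split(r"[.?!]\s*", transcript)
i_lines = [u.strip() for u in units if re.search(r"\bI\b", u)]

# One statement per line, like the lines of a poem.
print("\n".join(i_lines))
```

Running this on the fragment yields three ‘I’ lines; reading them in sequence, stripped of everything else, is what foregrounds the interviewee's voice and sense of self.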