How I learnt the value of qualitative research — the hard way

Carole Kenrick
6 min read · Sep 13, 2019

Featuring physics-based analogies!

My first degree was in physics. So when it came to designing a research project for my education MA, my first instinct was to design an intervention study, with pre and post tests and a control group. I had some concerns about accuracy and so, when a lecturer mentioned ‘triangulation’ and suggested presenting analysis of data back to participants as a way of checking your conclusions, I decided to include this as part of my methodology.

I’m so glad I did.

Sitting in a circle with my participants several months later, they tore my analysis to shreds.

“Nah, I didn’t mean it like that miss!”

“Oh wait, was that during Ramadan? Yeah, I was grumpy cos I was hungry, ignore everything I said that day.”

“So I said that then, but then when you asked the same question again after, I realised that the first time round I thought that but I just didn’t know enough about it, and I do now so actually now I know it should have been a 1 before, I just didn’t know it then.”

I had over-interpreted the data; some of my data was influenced by other variables; and I learnt the hard way about the Dunning-Kruger effect (in short, novices tend to self-assess less accurately than experts do).

It seemed obvious in retrospect that these are inherent issues for anyone using pre and post surveys to answer research questions about attitudes and opinions. It was only through conversations with participants (interviews) that I was able to find out why they had responded in certain ways, to check my interpretation of their responses, to learn what else might have been on their minds at the time, and to ask whether they still considered their initial responses accurate. I was aware that this would not eliminate all inaccuracies, but it was certainly a big improvement.

And so when I started my doctoral research project, and my research questions developed into questions about pupils’ science identity (their perceptions of themselves in relation to science), I started to delve more deeply into qualitative methodologies. And I have concluded that these approaches are the best fit for my research, for three key reasons: I am interested in why rather than what; I am interested in the experiences of pupils who are members of a group that is under-represented in science; and science identity is a concept that is still being developed, and so it should be explored in more detail and further theorised (which can then also lead to the development of hypotheses and survey instruments for large-scale quantitative studies).

It has been quite a steep learning curve, and there is a key point that I would like to share with anyone else going on this journey, and with anyone looking to critically read research of this kind (particularly if they come from a more quantitative background, as I did!). This key point is:

There are some questions that quantitative methods cannot answer

I’m a physics teacher, so I tend to use analogies to help explain things. Bear with me — it’s physics analogy time…

The ideal gas law, PV = nRT, is very handy indeed. It allows us to carry out all sorts of calculations, and to understand the relationships between the different properties (pressure, volume and temperature) of a gas.
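To make the "all sorts of calculations" concrete, here is a minimal worked example (the numbers are standard textbook values, not from this post): rearranging PV = nRT to find how much gas fills a room-sized volume.

```python
# Ideal gas law, PV = nRT: how many moles of gas occupy one cubic metre
# at atmospheric pressure and room temperature?
R = 8.314    # J / (mol K), molar gas constant
P = 101_325  # Pa, standard atmospheric pressure
V = 1.0      # m^3
T = 293.15   # K (20 degrees C)

n = P * V / (R * T)  # rearrange PV = nRT for n
print(f"{n:.1f} mol")  # roughly 41.6 mol
```

Note what the calculation gives you: a single aggregate number for the whole volume, with nothing at all about any individual molecule.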

But there are limits to what this law can tell us — it is underpinned by a set of assumptions (more on that later), as well as by averages taken over a large number of molecules, meaning that it only allows us to answer questions on a ‘macro’ scale. And so if you’re interested in what’s going on at the ‘micro’ level it isn’t much help. For instance, what if you want to know how an individual particle is moving, the path it is taking, the interactions it is having? And what if you want to understand why some gases don’t quite behave in the way the ideal gas law predicts, or why the law breaks down in certain conditions (for instance, at very low temperatures)?

To answer these questions, you need to ‘zoom in’, and consider what is going on at the ‘micro’ scale. For instance, it would be helpful to spend some time closely observing one individual particle, and the path it takes, what it bumps into and what happens after each collision — and whether the assumptions that underpin the law always hold true. You could create a rich, detailed description with words and diagrams, annotated with arrows and velocities, showing and explaining that particle’s path.

Now, this will not tell you exactly what every other particle is doing — you can’t ‘generalise’ from this data. But if you do this for a few particles, you start to build a picture of how the gas particles tend to move, and the kinds of ways in which collisions and interactions influence their path. And from these observations you might be able to develop some models to explain the underlying mechanisms behind them, as Einstein did when analysing the random motion of pollen grains in water described by Robert Brown.
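The 'follow one particle closely' idea can be sketched in code. This is a deliberately toy model (a 2D lattice random walk standing in for collision-driven motion — my own illustrative choice, not a claim about real gas dynamics): each simulated particle takes one unit step in a random direction per tick, and we record its full path.

```python
import random

def random_walk_2d(steps, rng):
    """Follow one 'particle': a unit step in a random direction each
    tick, standing in for the kicks it receives from collisions.
    Returns the full path, so an individual journey can be inspected."""
    x = y = 0.0
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x += dx
        y += dy
        path.append((x, y))
    return path

# One particle's rich, individual story...
rng = random.Random(42)
print(random_walk_2d(5, rng))

# ...and the pattern that emerges from watching many of them:
# the mean squared displacement grows in proportion to the number
# of steps, the diffusive signature Einstein derived for Brownian motion.
walks = [random_walk_2d(100, rng) for _ in range(2000)]
msd = sum(x * x + y * y for (x, y) in (w[-1] for w in walks)) / len(walks)
print(f"mean squared displacement after 100 steps: {msd:.1f}")
```

No single path generalises, but the ensemble of paths reveals the mechanism — which is exactly the relationship between individual cases and emerging patterns described above.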

This kind of data can give you an insight into how the particles move, and into the reasons why they move in this way.

Back to education research.

Large-scale quantitative studies can be incredibly valuable for: testing theories with a hypothesis; establishing whether single-variable interventions in highly controlled contexts have an impact; and for using statistical analysis to identify patterns and correlations on a large scale. If designed carefully, they can lead to generalisable findings.

But if you want to understand individual experiences and journeys through that system (particularly those that are atypical); if you want to understand causation, mechanisms and reasons why the system looks the way it does; or if you are seeking to disentangle the influence of a single variable in a complex system, then you need to use a different approach.

Enter qualitative methods.

This umbrella term encompasses a huge range of methodologies, which can also be used in combination with quantitative approaches — ‘mixed methods’.

All too often I see comparisons of quantitative and qualitative methods couched in combative language — as though one were better than the other. As Hammarberg, Kirkman and de Lacey put it:

In quantitative circles, qualitative research is commonly viewed with suspicion and considered lightweight because it involves small samples which may not be representative of the broader population, it is seen as not objective, and the results are assessed as biased by the researchers’ own experiences or opinions. In qualitative circles, quantitative research can be dismissed as over-simplifying individual experience in the cause of generalisation, failing to acknowledge researcher biases and expectations in research design, and requiring guesswork to understand the human meaning of aggregate data.

Crucially, they later point out (as I hope my analogy makes clear) that the approach you take should ultimately depend on what you are trying to find out — your research question.

Of course the researcher’s own experiences and perspectives will inevitably shape the type of questions they choose to ask in the first place. For instance, if you are interested in patterns and trends, or you enjoy simplifying the world into neat models (fellow physicists — I’m looking at you!) from which hypotheses can be developed and tested, then quantitative methods are clearly the way to go. But if you consider it important not to lose all the messiness of the real world, or if you have taught children who just don’t ‘fit the stats’, then you may want to take a different approach. I see tremendous value in both approaches — and what’s more, I see them as complementary, enabling us to develop an understanding of education that is both broad and deep.

Further reading

‘Qualitative research methods: when to use them and how to judge them’ by K. Hammarberg, M. Kirkman and S. de Lacey

‘Validity Issues in Narrative Research’ by D. Polkinghorne

Thank you to Professor Lucy Avraamidou and Dr Alex Sinclair for providing helpful feedback on a draft of this post.
