Handle with caution: empathy as a research tool

The first (properly academic) thing I’ve written since finishing the PhD has been a contribution to a book about violence in research, which allowed me to explore my interest in empathy and methodology. The chapter gives a detailed account of my fieldwork, but here I’ve decided to sketch some general propositions for anyone new to these ideas. In short: I’ve come to think of empathy as an important but dangerous item in the researcher’s toolkit. With it, we can do all kinds of things we couldn’t do otherwise, but wielded improperly, it is likely to cause harm both to us and to those around us.

Defining empathy

Empathy is notoriously difficult to define. It’s common to distinguish between automatic empathy (or emotional contagion – where we ‘feel’ the fear, pain, anger, sorrow or joy of other people) and cognitive empathy, a distinct neurological process where we imagine how we would feel in another person’s situation. The Greater Good Science Centre has an…

Living Well, or Just Surviving?

Sometimes, you just can’t fight it anymore. For the fourth time this week, I’ve tried to write a focused blog post about a topic that concerns me very much, and ended up with a homily on research methodology instead. I suppose it stems from my inability to deal with the following problem: despite having realised quite early on in the PhD that the distinction between desk research and fieldwork is largely artificial, I continue to experience all the cognitive symptoms of stress whenever circumstances conspire to blur the line between the two. Sometimes, I even feel that the integrity of my research is being compromised by my Jekyll-and-Hyde transformation (not that I have a murderous alter ego lurking in me at all times, more that there is a huge contrast between the introverted analyst and the extroverted fieldworker).

Of course, there is also a difference between being at home, working from my nice little desk with a view over Nassau Street and a map of the Caucasus neatly pinned to the drawing board, and being in Azerbaijan or Armenia and gathering deeply sensitive data by taking a walk in the park with a friend. However, that’s a simplification of how research works in the digital age. At home, I can still spend several hours a week urgently trawling through my Facebook feed for updates from the Caucasus, sometimes feeling an umbilical cord-like attachment to the place I’m supposed to have left behind. In the field, I can spend days on end wrapped up in theory, trying to put together robust chapter outlines or plan conference papers. In the past week I’ve felt more ‘at home’ than ‘away’, as I struggled with a large volume of desk-work. But while part of me welcomed the isolation after an equal excess of social interaction (the second half of April was exhausting), part of me felt guilty for deliberately constructing a kind of temporary barrier between myself and the field.

Is this a good thing or a bad thing? Certain methods textbooks have given me the impression that the ‘right’ way to do social research is to (a) go to the field, (b) collect data, (c) come home and analyse it, preferably with the assistance of some complicated statistical software. But what if ‘home’ isn’t the Ivory Tower, what if ‘home’ is my kitchen table in Yerevan or a hotel room in Tbilisi, and the act of recording data is virtually inseparable from the act of analysis? What if, rather than wanting to analyse the complete set of data when it’s finished, I want to analyse as I go, and allow the emerging themes to guide the remainder of the fieldwork, perhaps taking me quite far from my original starting point? What if, when I’m working on the idea for a chapter, I’m as influenced by something that’s trending on Twitter right now as I am by an interview I recorded six months ago?

It’s probably not the end of the world if that’s the approach I’ve taken – in fact, many people would say it’s inevitable and some would even say it’s appropriate – but it does leave me with questions about how to ensure ‘methodological rigour’ and, in particular, how to explain my methodology to an examining committee 18 months from now without using the phrase “I just made it up as I went along”. Sure, I can describe my methods – how I conducted interviews and observation – but how do I describe the methodology, the sinews of analysis holding the muscle of data to the skeleton of theory? As I’ve mentioned before on this blog, I’m really not a trained sociologist – and a lot of the ‘How To Do Social Research’ books are surprisingly lacking in information in this department.

An exception to this, which I discovered just before embarking on the latest round of fieldwork, is a relatively short and very readable book called (surprise, surprise) How To Do Your Case Study. At one point, the author introduces something he calls the “constant comparative method”, which – if I understand rightly – involves cycling back and forth continuously between different sets of data (e.g. transcripts and field notes) and trying to establish connections between the parts in order to make sense of the whole. When I read that section, I had one of those revelatory, light-bursting-through-clouds moments: “but that’s exactly what I’ve been trying to do!” So, if I can set aside the quibbling fear that my methodology just isn’t good enough, and learn instead to articulate what I’ve been doing with confidence and precision, then there is a chance I will walk into the viva with one less knot in my stomach.
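Just to make that cycling concrete for myself, here is a toy sketch of how I picture it – this is purely my own illustration, not something from the book, and every name in it, along with the crude keyword-overlap ‘similarity’ rule, is made up. Imagine coded segments from transcripts and field notes being read one by one against the themes that have emerged so far, with each theme shifting a little every time a new segment joins it:

# A toy sketch of constant comparison: coded segments from different data
# sources are compared with the themes that have emerged so far, and either
# join the closest theme (nudging it in the process) or start a new one.
# Everything here – the names, the keyword-overlap 'similarity' rule – is
# hypothetical and only meant to illustrate the cycling, not the real method.

from dataclasses import dataclass, field

@dataclass
class Segment:
    source: str     # e.g. "interview transcript" or "field notes"
    text: str
    codes: set      # labels attached during a first reading

@dataclass
class Theme:
    label: str
    codes: set = field(default_factory=set)
    segments: list = field(default_factory=list)

def overlap(segment, theme):
    """How many codes does this segment share with the theme so far?"""
    return len(segment.codes & theme.codes)

def constant_comparison(segments):
    themes = []
    for seg in segments:                                 # each new piece of data...
        best = max(themes, key=lambda t: overlap(seg, t), default=None)
        if best is not None and overlap(seg, best) > 0:  # ...is read against what is already there
            best.segments.append(seg)
            best.codes |= seg.codes                      # and the theme itself shifts a little
        else:
            themes.append(Theme(label=", ".join(sorted(seg.codes)) or "uncoded",
                                codes=set(seg.codes),
                                segments=[seg]))
    return themes

# Two sources being made sense of together rather than in isolation
data = [
    Segment("interview", "…", {"surveillance", "trust"}),
    Segment("field notes", "…", {"trust", "friendship"}),
    Segment("interview", "…", {"border", "memory"}),
]
for theme in constant_comparison(data):
    print(theme.label, [s.source for s in theme.segments])

The code itself isn’t the point – what matters is the shape of the loop: every new fragment gets read against everything already sorted, and the categories keep moving until the whole set hangs together, which is more or less how the back-and-forth between transcripts and field notes has felt in practice.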

Of course, that’s easier said than done, but at least it gives me hope. I would even go so far as to venture that my harum-scarum analysis has so far helped to make better sense of what I’m observing in the field – a bit like focusing a microscope. There’s no point in pouring all your energy into a lengthy description of a fuzzy-looking cross-section of a plant cell, only to realise at the end that you could have got a much better view if you’d twiddled the knobs a bit. Ultimately, this is a question of reflexivity – if a reflexive attitude to the data isn’t built into your research design, then how does it help at the end of your fieldwork to consider how your identity and relationship to the participants affected the way you collected and interpreted the data?

This sounds like I’m moving towards an argument in favour of strong objectivity, but I don’t really mean to weigh in on that debate right now. For me, the more immediate challenge is making sense of the research environment – understanding the complex codes of communication in a climate of conflict and surveillance, becoming more aware of my unconscious habits of interaction with others, learning to absorb some complex forms of data while constructing a comprehensive filtering system for that which can be identified as false or misleading, developing the maturity to engage with what feels challenging or uncomfortable rather than setting it aside for ‘later’. I’m not bothered about whether or not my analysis is correct – what matters is whether or not my methodology is still workable. In other words, am I doing things just so I can say I stuck to the research design, or am I doing things in a way that will actually further my own (and eventually other people’s) understanding of the subject?

I didn’t intend for this to become a review of How To Do Your Case Study, but in finishing up I want to add one final thing about the book. Most of it is filled with very concrete advice and tools for mapping your own case study, but one of the chapters turns to epistemology and discusses the concept of phronesis – often translated as practical wisdom – as opposed to more abstract theoretical knowledge. I haven’t nearly enough time to go into what this means (by which I mean, I still hardly understand it myself), so I’ll quote from the entry on Aristotle’s Ethics in the Stanford Encyclopedia of Philosophy:

“What we need, in order to live well, is a proper appreciation of the way in which such goods as friendship, pleasure, virtue, honor and wealth fit together as a whole. In order to apply that general understanding to particular cases, we must acquire, through proper upbringing and habits, the ability to see, on each occasion, which course of action is best supported by reasons. Therefore practical wisdom, as he [Aristotle] conceives it, cannot be acquired solely by learning general rules. We must also acquire, through practice, those deliberative, emotional, and social skills that enable us to put our general understanding of well-being into practice in ways that are suitable to each occasion.”

This is an important reminder that the field of ethics extends beyond basic principles such as ‘informed consent’ or ‘plausible deniability’ (though these are important too) – it takes us into the vague and uncharted territory of ‘living well’, and acting in accordance with the situation rather than the rules. But how far removed is this from the reality of the typical postgraduate student?

Based on conversations with fellow and former PhD candidates, I can vouch for the fact that an awful lot of us get stressed when we feel ourselves deviate from the strictures governing academic life. Most of us seem to have this absurdly simple idea of what research is supposed to look like or how we are supposed to perform. Then, when it turns out that the reality of doing research is in no way like our preconceived notions, we panic. And our views are so deeply internalised (How? Why?) that we rarely ever manage to dig ourselves out of this hole. Instead, we just wait for ‘normality’ to eventually reassert itself. Aristotle’s ethics have the benefit of turning that fake-it-’til-you-make-it logic on its head – if we can only learn to accept that our natural response is sometimes the best one, we’ll be a lot better off than if we’re constantly striving to meet our idealised image of the ‘right’ research performance.